DESIGNING AN OPTIMAL SOFTWARE INTENSIVE SYSTEM ACQUISITION: A GAME THEORETIC APPROACH by Douglas John Buettner A Dissertation Presented to the FACULTY OF THE GRADUATE SCHOOL UNIVERSITY OF SOUTHERN CALIFORNIA In Partial Fulfillment of the Requirements for the Degree DOCTOR OF PHILOSOPHY (ASTRONAUTICAL ENGINEERING) September 2008 Copyright 2008 Douglas John Buettner ii E P I G R A P H “Just the place for a Snark!” the Bellman cried, As he landed his crew with care; Supporting each man on the top of the tide By a finger entwined in his hair. “Just the place for a Snark! I have said it twice: That alone should encourage the crew. Just the place for a Snark! I have said it thrice: What I tell you three times is true.” The Hunting of the Snark by Lewis Carroll iii D E D I C A T I O N To my wife Joanne who lost her husband to his computer, and to my daughter Jennifer who found that studying with dad in his den was the only way to be with him. My enduring gratitude and thanks for your patience. iv A C K N O W L E D G E M E N T S Numerous individuals and various experiences (both rewarding and frustrating) from working on software programs inside national security space and from my experience in industry have played crucial roles in molding my thoughts towards the concepts provided in this work. My numerous friends and colleagues at The Aerospace Corporation, my professors, teachers, friends and their parents, and of course my siblings all deserve gratitude. Thanks also to my dissertation committee members – in particular professors Dr. Dan Erwin and Dr. Barry Boehm, and my Aerospace mentor Dr. Kirstie Bellman for providing guidance. And finally, my parents whose love and support through the years I will never be able to repay. v T A B L E O F C O N T E N T S EPIGRAPH ii DEDICATION iii ACKNOWLEDGEMENTS iv LIST OF TABLES ix ABBREVIATIONS xvi ABSTRACT xxiii CHAPTER 1: INTRODUCTION 1 1. Statement of the Problem 1 1.1 Purpose of the Research 4 1.2 Contents 5 Chapter 1 Endnotes 8 CHAPTER 2: BACKGROUND 10 2. Introduction 10 2.1 Space System Software Overview 10 2.1.1 Spacecraft Bus Software 10 2.1.2 Payload Software 11 2.1.3 Ground Software 11 2.1.4 Launch Vehicle Software 11 2.2 Software Development Models and Methods Overview 12 2.2.1 Waterfall Model 12 2.2.2 Software Process Dynamics Modeling 16 2.2.3 Software Cost/Quality Model 16 2.2.3.1 Constructive Cost Model (COCOMO) 17 2.2.3.2 Constructive Quality Model (COQUALMO) 18 2.2.4 Capability Maturity Model ® Integration (CMMI) 19 2.2.5 Software Defect Prevention and Detection Methods 20 2.2.5.1 Software Defect Taxonomies 20 2.2.5.2 Defect Classification Methods 20 2.2.5.3 Software Defect Detection from Inspections 21 2.2.5.4 Analytic Models, Simulation and Analysis 21 2.2.5.5 Unified Modeling Language (UML) 22 2.2.5.6 Software Prototypes 23 2.2.5.7 Formal methods 24 2.2.5.8 Software Testing Methods 24 2.2.6 Software Risk 26 2.2.6.1 Probabilistic Risk Assessment (PRA) 27 2.3 Software-Intensive Space System Acquisition Overview 32 2.3.1 Government Acquisition of Software Intensive Space Systems 32 2.3.2 Software Acquisition in the U.S. 
Space Environment 34 2.4 Game Theory Literature Review 35 2.4.1 Non-Relevent Uses of Game Theory 36 2.4.1.1 The Recent Sassenburg Dissertation 36 vi 2.4.1.2 The Buisman Thesis 36 2.4.1.3 Network Routing and Telecommunication 37 2.4.1.4 The Prisoner’s Dilemma for Choosing Extreme Programming 38 2.4.1.5 Game Theory use to Secure Reliability of Measurement Data 38 2.4.2 Relevent Uses of Game Theory 38 2.4.2.1 Value-Based Software Engineering and Theory-W 39 2.4.2.2 Human Aspects in Software Engineering 40 2.4.2.3 Software Development as a Non-cooperative Game 41 2.4.2.4 “Corner Cutting” Description for Time Pressured Developers 41 2.5 Summary and Discussion 42 2.6 Conclusions 45 Chapter 2 Endnotes 49 CHAPTER 3: CASE STUDIES 60 3. Introduction 60 3.1 Data Availability and Types of Data 60 3.2 Case Study Selection Approach 61 3.3 Qualitative Data Research and Analysis 63 3.3.1 Open Coding Methodology 64 3.3.2 Quantitative Data from Qualitative Research 66 3.3.3 Hypothesis Testing 66 3.3.4 Public Summary of Qualitative Findings 67 3.3.4.1 Designs Abandoned or Not Used To Full Potential 67 3.3.4.2 Engineers See No Need for Government Required Documentation 68 3.3.4.3 Discussion about the Lack of Test Thoroughness 69 3.3.4.4 Examples of Qualitative Research Data 72 3.4 Quantitative Research and Analysis 74 3.4.1 Project Peer Review Metrics 76 3.4.1.1 Project-A Peer Review Metrics 77 3.4.1.2 Project-C Peer Review Metrics 79 3.4.2 Project Staffing, Defects and their Correlations 79 3.4.2.1 Project-A Staffing, Defects and Correlations 80 3.4.2.2 Project-C Staffing, Defects, and Correlations 86 3.4.2.3 Project-A Defect Distributions 89 3.4.2.4 Project-C Defect Distributions 91 3.4.3 Code Generation and Unit Test Data for Modeling 95 3.5 Discussion 96 3.6 Public Recommendations 98 Chapter 3 Endnotes 101 CHAPTER 4: A SYSTEM DYNAMICS MODEL 102 4. Background 102 4.1 Procedures for Modeling Dynamic Systems 102 4.2 Block-Diagrams and the State-Space from Classical System Dynamics 104 4.3 Commercial Tool Selected for Modeling the Dynamic Software System 106 4.3.1 Powersim Studio™ Modeling Tool Symbols 107 4.4 Description of a Modified Madachy Model (MMM) 107 4.4.1 The Modified Madachy Model (MMM) 109 4.4.2 Discussion of Model Modifications and Differences 125 4.4.3 Implementation and Testing Approach 128 4.4.3.1 Results from Modeling Unit Testing with an Integration Test Feedback Loop 129 vii 4.4.3.2 Counter Intuitive Manpower Rate Results 133 4.4.3.3 Effort Paths and Defects Found in Integration Testing 134 4.4.3.4 Schedule Constraint Effects 136 4.5 Simulation of Space Flight Software Projects 137 4.5.1 Approach for Studying Flight Software Defect Discovery Dynamics 137 4.5.2 Approach for Modifying Staffing Curves 140 4.5.3 Simulation Results and Discussion for Flight Software Defect Discovery Dynamics 141 4.5.4 Latin Hypercube Sampling for Quality, Schedule, and Cost-Driven Projects 143 4.6 Conclusions and Discussion 144 Chapter 4 Endnotes 147 CHAPTER 5: GAME THEORY 149 5. 
Introduction 149 5.1 Background 150 5.1.1 Normal Form Games (Static Games) 150 5.1.2 A Solution Example for a 3x3 Zero-Sum Game 150 5.1.3 3x3 Non-Zero-Sum Game 156 5.1.4 Cooperative Bargaining and The Nash Solution 158 5.2 Austin’s Original Expanded Normal Form Game 159 5.3 N-Player System Development Games 161 5.4 Extensive Form Games (Game Trees) 161 5.5 A Differential Game of Optimal Production with Defects 163 5.6 Other Methods 166 5.6.1 Dynamic Programming and Recursive Decision-Making 166 5.6.2 Probabilistic Risk Analysis and Game Theory 167 5.7 Discussion and Conclusions for Negotiating The Solution 167 5.7.1 Peer Review and Unit Test Counter Strategies and Bargaining Solutions 167 5.7.2 Player Strategies for Software Intensive System 171 5.7.3 Some Threat Strategies (from the Contractor’s Viewpoint) 173 5.8 Conclusion: A Proposed Nash Bargaining Solution 174 Chapter 5 Endnotes 178 CHAPTER 6: CONCLUSIONS 180 6. Introduction 180 6.1 Summary 180 6.2 Case Study Findings and Recommendations 183 6.3 Contributions 186 6.4 Future Research 187 Chapter 6 Endnotes 189 GLOSSARY 190 BIBLIOGRAPHY 203 APPENDIX: A – SOFTWARE MODELS AND METHODS 215 A.1 Incremental Model 215 A.2 Transform Model 215 A.3 Spiral Model 215 A.4 Agile Process Model 216 A.5 Rational Unified Process Model 217 viii A.6 System Evaluation and Estimation of Resources–Software Estimating Model (SEER-SEM) 217 A.7 Earned Value Management (EVM) 218 A.8 Software Risk Management 219 Appendix A Endnotes 222 APPENDIX: B – RAW QUANTITATIVE DATA 224 B.1 Introduction 224 B.2 Peer Review Data 224 B.3 Software Defect Repository (SDR) Data 229 APPENDIX: C – PERL SCRIPT FOR PARSING DR DATA 282 C.1 Introduction 282 C.2 Project-D Perl Script 282 APPENDIX: D – DYNAMICS MODEL EQUATIONS 289 D.1 Introduction 289 D.2 Modified Madachy Model (MMM) Equations 289 APPENDIX: E – DYNAMICS MODELING DATA 306 E.1 Introduction 306 E.2 Model Comparison Tables 306 E.3 Modified Model Test Matrix and Results Tables 311 APPENDIX: F – SIMULATED DYNAMIC DEFECT DATA 347 F.1 Introduction 347 F.2 Plots of Simulated Dynamic Defects from Integration Testing 347 APPENDIX: G – LATIN HYPERCUBE SAMPLING 356 G.1 Introduction 356 G.2 Latin Hypercube Sampling Versus the Use of the Monte Carlo Sampling Method 356 G.3 Sampling Distribution and Results 356 APPENDIX: H – A NOTE ON FREEMAN DYSON’S PROBLEM 358 H.1 Introduction 358 H.2 Software Intensive System Stakeholders 358 H.3 N-Dimensional Dynamic Bargaining Theory 361 Appendix H Endnotes 363 APPENDIX: I – UML ANALOGY FOR NON-SOFTWARE PEOPLE 364 I.1 Introduction 364 I.2 A Project-D Analogy of UML to Blueprints 364 I.3 A Discussion on the use of UML 366 Appendix I Endnotes 368 ix L I S T O F T A B L E S Table 1: Space System Failures Caused By Software 3 Table 2: Waterfall Model Life Cycle Phase Activities 14 Table 3: Types of Software Tests 25 Table 4: Software Probabilistic Risk Assessment Techniques 28 Table 5: Software PRA Steps 30 Table 6: Taxonomy of Software Related System Failures 31 Table 7: Taxonomy of Space Software Failures 31 Table 8: Data Availability for Flight Software Projects in this Study 61 Table 9: Sample Codes 64 Table 10: Quantitative Data Extracted During Qualitative Research 73 Table 11: Project-A Correlation Coefficients between Staff, Noise, and Defects 84 Table 12: Project-C Correlation Coefficients between Staff, Noise, and Defects 88 Table 13: Results from the 1 st Month of Re-Unit Testing 95 Table 14: Results after Six Months of Re-Unit Testing 95 Table 15: Average Data from the 1 st Month of Re-Unit 
Testing 95 Table 16: Average Data from Six Months of Re-Unit Testing 96 Table 17: Symbols Used by the Powersim Studio™ Modeling Tool 107 Table 18: Remaining Contents for Chapter 4 108 Table 19: Test Cases for Flight Software Defect Discovery Dynamics 138 Table 20: Results from Simulating Code and Design Reverse Engineering 142 Table 21: Sampling of Possible Software Developer Strategies 171 Table 22: Sampling of Possible Software Management Strategies 172 Table 23: Strategy Advice for the Development of Software Intensive Systems 183 Table 24: Summary of Findings and Recommendations 184 Table 25: Software Risk Management Steps 219 x Table 26: Project-A Requirements and Design Peer Review Metrics 224 Table 27: Project-A Code Peer Review Metrics 225 Table 28: Project-A Unit Test Peer Review Metrics 225 Table 29: Project-A Qualification Test Peer Review Metrics 226 Table 30: Project- C Accumulated (All) Peer Review Metrics 227 Table 31: Project- A All Defect Data 229 Table 32: Project- C All Defect Data 252 Table 33: MMM Equations 289 Table 34: Baseline Test Matrix 307 Table 35: Comparison Results (iThink minus Powersim) 309 Table 36: Augmented Test Matrix 312 Table 37: Modified Madachy Model Results 317 Table 38: Project-A Results: Using Baseline Effort Fraction and an Unmodified Staffing Profile 324 Table 39: Project-A Results: Using Switched Effort Fraction and an Unmodified Staffing Profile 326 Table 40: Project-A Results: Using Baseline Effort Fraction and a Modified Staffing Profile 328 Table 41: Project-A Results: Using Switched Effort Fraction and a Modified Staffing Profile 330 Table 42: Project-C Results: Using Baseline Effort Fraction and an Unmodified Staffing Profile 332 Table 43: Project-C Results: Using Switched Effort Fraction and an Unmodified Staffing Profile 334 Table 44: Project-C Results: Using Baseline Effort Fraction and a Modified Staffing Profile 335 Table 45: Project-C Results: Low Defect Density Effects with Switched Effort Fraction and an Unmodified Staffing Profile 337 Table 46: Project-C Results: Using Switched Effort Fraction and a Modified Staffing Profile 338 Table 47: Project-C Results: Low Defect Density Effects with Switched Effort Fraction and a Modified Staffing Profile 340 Table 48: Project-A Raw Staffing and Modification Curves 341 Table 49: Project-C Raw Staffing and Modification Curves 344 Table 50: Latin Hypercube Sampling Distributions 356 xi Table 51: Latin Hypercube Sampling Results 357 xii L I S T O F F I G U R E S Figure 1: SMC On-board Flight Software Size Trend 1 Figure 2: The Waterfall Model 13 Figure 3: Test Phases Aligned With Development Phases 15 Figure 4: Software Defect Introduction and Removal Model 19 Figure 5: Example of a Possible UML Use Case for Satellite Software 23 Figure 6: Simple Model of Budget and Acquisition Authority Interaction 32 Figure 7: Simple Model of Acquisition Authority and Offeror Interaction 33 Figure 8: Model of Acquisition Authority and Offeror Interaction 34 Figure 9: Project-D Example of TBD’s and Late Algorithm Changes POST-CDR 73 Figure 10: All Cumulative Defects Discovered for Project A, B, C, and D 75 Figure 11: Co-Plotted Peer Review Findings from Project-A and C 76 Figure 12: Cumulative Project-A Major and Minor Peer Review Findings 77 Figure 13: Cumulative Project-C Major and Minor Peer Review Findings 79 Figure 14: Number of FTE or Staff for Project-A and C 80 Figure 15: EVM, Planning, and Organization Chart Staffing for Project-A 81 Figure 16: Project-A Estimated Total Staffing with Subcontractors 
and Noise 82 Figure 17: Project-A FTE Plotted with the Defect Discoveries per Week 83 Figure 18: Organization Chart Staffing for Project-C 86 Figure 19: Project-C Staff Trend and Uncertainty Noise Levels 87 Figure 20: Project-C Staff Plotted with the Defect Discoveries per Week 88 Figure 21: Project-A Software ONLY Defects by Severity per Week 90 Figure 22: Project-A Severity Distribution (All Defects) 90 Figure 23: Project-A Cumulative Plot of All, All SW, and All SW Sev 1-3 Defects 91 Figure 24: Project-C Software ONLY Defects by Severity per Week 92 Figure 25: Project-C Severity Distribution (All Defects) 93 xiii Figure 26: Project-C Cumulative Plot of All, All SW, and All SW Sev 1-3 Defects 94 Figure 27: Project-C Percent of Rejected-Deferred Defects 94 Figure 28: Labeled Version of Ogata’s Generic Block-Diagram with a Feedback Loop 104 Figure 29: Block-Diagram for a mid-Phase Waterfall Process 105 Figure 30: Modified Effort Model (Top Left Quadrant) 110 Figure 31: Modified Effort Model (Top Right Quadrant) 111 Figure 32: Modified Effort Model (Bottom Right Quadrant) 112 Figure 33: Modified Effort Model (Bottom Left Quadrant) 113 Figure 34: Modified Errors Models (Top Left) 114 Figure 35: Modified Errors Models (Top Right) 115 Figure 36: Modified Errors Models (Bottom Right) 116 Figure 37: Modified Errors Models (Bottom Left) 117 Figure 38: Modified Tasks Models (Top Left) 118 Figure 39: Modified Tasks Models (Top Right) 119 Figure 40: Modified Tasks Models (Bottom Right) 120 Figure 41: Modified Tasks Models (Bottom Left) 121 Figure 42: Un-Modified Test Effort Adjustment Model 122 Figure 43: Un-Modified Cumulative Total Effort Model 123 Figure 44: Powersim Time Calibration with iThink Model 124 Figure 45: Powersim Variables and Calibration Constants 125 Figure 46: Simulated Manpower Rate Without Unit Test or IT 130 Figure 47: Simulated Manpower Rate With Unit Testing but Without the IT 131 Figure 48: Simulated Manpower Rates With the Feedback Induced Late Effort Spike 131 Figure 49: Simulated Manpower Rates with Various Defect Detection Parameter Settings 132 Figure 50: Result with No Inspections Coupled with No Unit Testing or 50% Unit Testing 133 Figure 51: Cumulative Effort Results from Varying Degrees of Inspections and Unit Testing 134 xiv Figure 52: Focused Look at Modeled Near-Term Effort Paths (Weeks 200 to 240) 135 Figure 53: Results on Errors Found in Integration Test 135 Figure 54: Latin Hypercube Sampling for Schedule-Driven Processes 143 Figure 55: Latin Hypercube Sampling for Quality-Driven Processes 144 Figure 56: 2-Person Zero-Sum Game Matrix with 3 Strategies Each 151 Figure 57: Zero-Sum Game Matrix with Player 1 Payoffs Displayed 151 Figure 58: The Game’s Decision Movement Diagram 151 Figure 59: The Game’s minimax and maximin 152 Figure 60: The Modified Game’s Decision Movement Diagram 152 Figure 61: The Modified Game’s Saddle Point 153 Figure 62: Player 2’s Optimal Mixed Strategy to Minimize Player 1’s Payoff 153 Figure 63: No Mixed Strategy Solution for Player 1’s use of A-C Strategy 154 Figure 64: Player 2’s Mixed Strategy for Player 1’s B-C Mixed Strategy 154 Figure 65: 2x2 Sub-game in the Original 3x3 game 155 Figure 66: An Example 3x3 Non-Zero-Sum Game 156 Figure 67: Player 1’s Decision Diagram for the Non-Zero-Sum Game 156 Figure 68: Player 2’s Decision Diagram for the Non-Zero-Sum Game 157 Figure 69: Austin’s Original Expanded Normal Form Game for Developer Quality Decisions 159 Figure 70: Austin’s Diagram for Adding Effort as an Alternative to Shortcut-Taking 
160 Figure 71: Austin’s Original Extensive Form Game for Developer Quality Decisions 162 Figure 72: The Spiral Model 216 Figure 73: Project-A Reference Cases Using Interpolated (Raw) Staff Curves 348 Figure 74: Project-A Reference Cases Using Interpolated (Modified) Staff Curves 348 Figure 75: Project-A Dynamics of Varying Defect Densities with Baseline Effort 349 Figure 76: Project-A Dynamics of Varying Defect Densities with Switched Effort 349 Figure 77: Project-A Unmodified Staff Dynamics with Moderate Quality Practices 350 xv Figure 78: Project-A Modified Staff Dynamics with Moderate Quality Practices 350 Figure 79: Project-C Reference Cases Using Interpolated (Raw) Staff Curves 351 Figure 80: Project-C Reference Cases Using Interpolated (Modified) Staff Curves 351 Figure 81: Project-C Dynamics of Varying Defect Densities with Baseline Effort 352 Figure 82: Project-C Dynamics of Varying Defect Densities with Switched Effort 352 Figure 83: Project-C Unmodified Staff Dynamics with High Quality Practices 353 Figure 84: Project-C Modified Staff Dynamics with High Quality Practices 353 Figure 85: Project-C Dynamics for Ultra-low Design Defect Densities 354 Figure 86: Project-A ‘Starved’ Requirements Task in the RW8.5 baseline 354 Figure 87: Project-A Noisy Behavior in Test Case MD5.5 355 Figure 88: Hypothetical Execution Path that is GOOD for 3-Players 360 xvi A B B R E V I A T I O N S ACC Appendage Command and Control ACS Attitude Control Subsystem ACWP Actual Cost for Work Performed ADC Attitude Determination and Control ADD Algorithm Design Document Adj. Adjustment AIAA American Institute of Aeronautics and Astronautics Alg. Algorithm ANSI American National Standards Institute Arch. Architecture ASCII American Standard Code for Information Interchange BAC Budget At Completion BCWP Budget Cost for Work Performed BCWS Budget Cost for Work Scheduled BOL Beginning-Of-Life C&DH command, and data handling CASRE Computer Aided Software Reliability Engineering CDR Critical Design Review CDRL Contract Data Requirements List CMM Capability Maturity Model CMMI Capability Maturity Model Integration CMMI-AM Capability Maturity Model Integration - Acquisition Model COCOMO Constructive Cost Model Compl. Completion xvii Const. Constant COQUALMO Constructive Quality Model COTS Commercial Off The Shelf CPI Cost Performance Index CPU Central Processing Unit CSCI Computer Software Configuration Item CSOW Contractors Statement of Work CSP Communicating Sequential Processes CSTOL Colorado System and Test Operations Language DART Demonstration of Autonomous Rendezvous Technology Dev Development DITL Day-In-The-Life Docs Documents DoD Department of Defense DR Defect Report EAC Estimate At Completion Effect. Effectiveness Elec Electronic ELOC or ESLOC Effective Line Of Code EOL End-Of-Life EPDS Electrical Power And Distribution Subsystem EPS electrical power sub-system Equip Equipment Err. 
Error ESD Event Sequence Diagram EVM Earned Value Management xviii Extern External FHA Functional Hazard Analysis FMEA or SFMEA Software “Failure Modes and Effects Analysis” FMECA or SFMECA Failure Modes, Effects and Criticality Analysis FQT Formal Qualification Test FTE Full Time Equivalent GN&C Guidance, Navigation and Control GTST Goal Tree-Success Tree H&S Health and Safety H High (quality) HAZOP Hazard and Operability Studies HIV Human Immunodeficiency Virus HW Hardware IBM International Business Machines ICD Interface Control Document ICM Incremental Commitment Model IEEE Institute of Electrical and Electronics Engineers Insp. Inspection Int. and Integ. Integration Interp Interpolated IPT Integrated Product Team IT Integration Test IV&V Independent Verification and Validation JPEG Joint Photographic Experts Group (file format) JPL Jet Propulsion Laboratory K.E. Kinematic Equations xix KPP Key Performance Parameters KSLOC Thousand Source Lines of Code L Low (quality) Lg Large M.E. Main Equation Maint_Prep Maintenance Preparation MC Modified Curve MCDC Modified condition decision coverage MD Modified MLD Master Logic Diagram MMM Modified Madachy Model Mod. Modification MTBF Mean Time Between Failure MTTF Mean Time To Failure MTTR Mean Time To Repair MUBLCOM Multiple Paths, Beyond-Line-of-Sight Communications N/A Not Applicable NASA National Aeronautics and Space Administration NDIA National Defense Industry Association ODC Orthogonal Defect Classification OOD Object Oriented Design PDF Portable Document Format or Probability Distribution Function PDR Preliminary Design Review PM Person Months (effort) PNs Petri nets PRA Probabilistic Risk Assessment xx Prac. Practice Prep Prepare PSSA Preliminary System Safety Analysis Qual Qualification RAM Random Access Memory RAND RAND Corporation RBD Reliability Block Diagrams RCA Root Cause Analysis Rej/Def Reject/Defer Rel. Relative RELY Required Software Reliability Req Requirements RFP Request For Proposal RPE Retrogressive Path Equations RTOS Real-Time-Operating-System RUP Rational Unified Process RW Raw SCED Schedule cost driver (COCOMO parameter) SCS Success Critical Stakeholders SD Standard Deviation SDD Software Design Description SDM Statistical Defect Modeling SDP Software Development Plan SDR Software Defect Repository SEER-SEM System Evaluation and Estimation of Resources–Software Estimating Model SEI Software Engineering Institute xxi Sev. Severity SHARD Software Hazard Analysis and Resolution in Design SIQT Software Item Qualification Test SLOC Source Lines of Code Sm Small SMC Space and Missile Systems Center SOHO Solar Heliospheric Observatory SOO Statement of Objectives SPI Schedule Performance Index SPO System Program Office SQA Software Quality Assurance SRB or RB Software Review Board SRE Software Reliability Engineering SRGM Software Reliability Growth Model SRS Software Requirements Specification STD Software Test Description STP Software Test Plan STR Software Test Report SW Software SW-CMM Software Capability Maturity Model SW-CMMI Software Capability Maturity Model Integration Sys System TBD To Be Determined TC Thermal Control TDEV schedule (time to develop) TLM Telemetry xxii TLYF Test-Like-You-Fly TOR Technical Operating Report Totl. Total TSPR Total System Performance Responsibility U.S. United States UML Unified Modeling Language Unk Unknown US Universal Surfaces USAF United States Air Force Vari. 
Variable VBSE Value-based software engineering VDM Vienna Development Method WBS Work Breakdown Structure XP eXtreme Programming xxiii A B S T R A C T The development of schedule-constrained software-intensive space systems is challenging. Case study data from national security space programs developed at the U.S. Air Force Space and Missile Systems Center (USAF SMC) provide evidence of the strong desire by contractors to skip or severely reduce software development design and early defect detection methods in these schedule-constrained environments. The research findings suggest recommendations to fully address these issues at numerous levels. However, the observations lead us to investigate modeling and theoretical methods to fundamentally understand what motivated this behavior in the first place. As a result, Madachy’s inspection-based system dynamics model is modified to include unit testing and an integration test feedback loop. This Modified Madachy Model (MMM) is used as a tool to investigate the consequences of this behavior on the observed defect dynamics for two remarkably different case study software projects. Latin Hypercube sampling of the MMM with sample distributions for quality, schedule and cost-driven strategies demonstrate that the higher cost and effort quality-driven strategies provide consistently better schedule performance than the schedule-driven up-front effort- reduction strategies. Game theory reasoning for schedule-driven engineers cutting corners on inspections and unit testing is based on the case study evidence and Austin’s agency model to describe the observed phenomena. Game theory concepts are then used to argue that the source of the problem and hence the solution to developers cutting corners on quality for schedule-driven system acquisitions ultimately lies with the government. The game theory arguments also lead to the suggestion that the use of a multi-player dynamic Nash bargaining game provides a solution for our observed lack of quality game between the government (the acquirer) and “large-corporation” software developers. A note is provided that argues this multi-player dynamic Nash bargaining game also provides the solution to Freeman Dyson’s problem, for a way to place a label of good or bad on systems. 1 C H A P T E R 1 : I N T R O D U C T I O N [SMC and The Aerospace Corporation] also conducted detailed analysis on testing in launch vehicle and satellite programs, growing quality problems in components and subsystems, and increasing system complexity, especially in the area of software development. [1] 1. Statement of the Problem There is a growth trend in the size and complexity of on-board flight software in the national security space systems under development at the Space and Missile Systems Center (SMC) [2]. Figure 1: SMC On-board Flight Software Size Trend This trend is not unexpected. As with terrestrial computer systems that follow Moore’s Law, there is a clear improvement trend in radiation hardened central processing units (CPUs) and the storage capacity of radiation hardened memory [3]. Furthermore, software provides advanced on-board digital processing, autonomy and other advanced functionality, in addition to providing a capability to modify these complex systems from afar since they cannot easily be taken back to the shop to fix. The increase in flight software size has been accompanied by an increase in the number of reported on-orbit anomalies attributed to flight software [4]. 
[Figure 1 legend: new vehicles with high complexity (> ~60 KSLOC) versus legacy vehicle types or low-complexity legacy vehicles (<= ~60 KSLOC); P/L = payload, S/C = spacecraft.]

Compounding the problem associated with dramatic code growth (in both spacecraft bus and payload software size), in the mid-1990s the U.S. government concurrently invoked a new system acquisition strategy commonly referred to as "Acquisition Reform Initiatives" or acquisition reform. The goal of the new strategy was to improve the management and quality of our acquisition workforce [5] [6]. The assumptions that led toward a "Total System Performance Responsibility" (TSPR) acquisition reform policy were flawed [7]. Ballhaus [7] lists the assumptions and the results of the flawed policy change; as an example, there was the assumption that the consolidated defense contractors could develop these complex systems with little government oversight, allowing a reduction in the government workforce. Buettner and Arnheim [8] report that one of the results of TSPR for military space systems was a deterioration of testing practices, which coincided with the trend of embedding more software into the space vehicle. Even though the trend can be explained away by the removal of government-mandated test constraints, Eslinger [9] noted that another outcome of acquisition reform was the removal of military specifications and standards from space system contracts; we will suggest here that the removal of the contractual constraints and their result simply adds evidence of even more fundamental strategic thinking (or game playing) by those involved with this policy change. Furthermore, from our position, where we are concerned with engineering quality systems to protect these costly taxpayer investments, these policy changes appear to have been motivated by cost and schedule. We must all understand that space system software development is an activity where we must get it right.

Moreover, we point out that low-impact satellite flight software anomalies (or, as they are routinely referred to in the press, "glitches") for government or even commercial space systems are rarely if ever reported in the press. These "glitches" are, however, reported when a failure results in the unmistakable loss of the asset or the loss of important mission data. Publicly reported examples of space system software-caused failures (most of which are from NASA (National Aeronautics and Space Administration)) are provided in Table 1.

Table 1: Space System Failures Caused By Software
Date | Description | Repairable (Yes/No)
June 4, 1996 | About 40 seconds into the Ariane 5's maiden flight the rocket was automatically destroyed. The guidance computer tried to convert a 64-bit velocity component into a 16-bit format, causing an overflow. The dual-redundant guidance system (using the same software) overflowed in the same manner [10]. | No
Sept. 27, 1997 | Communication was lost with the Mars Sojourner Rover (and associated mission data during those periods). A priority inversion caused a lower-priority task to block a higher-priority task from completing, to the point where the watchdog timer functionality invoked a total system reset, which reinitialized both the hardware and the software [11]. | Yes
June 5, 1998 | Communication was lost with the Solar Heliospheric Observatory (SOHO) spacecraft due to a series of errors introduced when making software changes for the flight operations team, who were trying to modify their ground operations to streamline operations. Communication was reestablished four months later [12]. | Yes
April 30, 1999 | An incorrect software filter constant in the Titan IV's Centaur upper stage zeroed out the roll rate data. This caused the loss of roll axis control and then yaw and pitch control. The loss of attitude control caused excessive firings of the reaction control system, and the subsequent fuel depletion left its satellite payload in an unusable orbit [12]. | No
Sept. 23, 1999 | The Mars Climate Orbiter was lost during the orbit insertion maneuver, burning up in the lower Martian atmosphere. The accident investigation board identified the cause as incorrectly using English units for thruster performance data in ground software [12]. | No
Dec. 4, 1999 | Communication from the Mars Polar Lander was not received as expected. The JPL special review board concluded that the most plausible cause was a spurious-false touchdown sensor signal. It is believed that the software incorrectly interpreted these spurious signals as a touchdown and turned off the descent engines at an altitude of 40 meters, causing the lander to crash into the surface [12] [13]. | No
Mar. 12, 2000 | The Sea Launch rocket flew off course and crashed 2700 miles southeast of its launch site. The ground software neglected to properly close a valve in the pneumatic system of the rocket's second stage, keeping the rocket from gaining the altitude and speed necessary to achieve orbit [14]. | No
Jan. 21, 2004 | Eighteen days into its mission, the Mars "Spirit" Rover failed to execute any task that requested memory from its flight computer [15] [16]. | Yes
Jan. 14, 2005 | The Cassini/Titan Huygens probe successfully landed and beamed back photos from the surface of Saturn's moon Titan. However, the mission had two significant software-related post-launch issues. (1) The first was a significant firmware defect (software that cannot be changed after launch) that, had it not been discovered, would have caused total loss of mission data. (2) The second failure caused loss of half of the probe's mission data because a Cassini receiver was not programmed to switch on [17]. | (1) Yes (2) No
Apr. 15, 2005 | The Demonstration of Autonomous Rendezvous Technology (DART) spacecraft collided with the Multiple Paths, Beyond-Line-of-Sight Communications (MUBLCOM) satellite, with which it was supposed to perform rendezvous and maneuvers. Inspection of the flight software discovered that the software failed to account for a biased GPS-receiver-produced velocity [18]. | No
Nov. 2, 2006 | Contact was lost with the Mars Global Surveyor. The cause was traced to a direct memory command (to update the High Gain Antenna's positioning for contingency operations) that was written to the wrong memory address, corrupting two independent parameters. The faulty command had been uploaded five months earlier [19]. | No

For the recent space software anomalies deemed newsworthy, the seriousness of the issues ranges from cases that were not detrimental and could be fixed, resulting only in loss of mission data and/or availability, to those that led directly or indirectly to the loss of the mission.
From the Table 1 list of software-related spacecraft anomalies, Leveson [20] investigated the Ariane 5, Mars Climate Orbiter, Mars Polar Lander, Titan/Centaur upper stage and SOHO anomalies for NASA, concluding that "Complacency and misunderstanding software and its risks were at the root of all these accidents."

1.1 Purpose of the Research

This research was started in an attempt to fundamentally explain the organizational and behavioral reasons behind the troubling software development practices and test inadequacies that were uncovered on one of our national security space system acquisitions (discussed in detail in the case study data as project-A). The test issues on this particular system were fixable; however, the required re-work cost would have been avoided had proper software acquisition and development practices been used. Beyond simply understanding the reasons for the test inadequacies on that project, it is the goal of this research to identify software-intensive space system acquisition methods, policies or areas of further investigation (such as the feasibility of new analytical methods) that will allow us to avoid or at least minimize the possibility of a recurrence of these development issues. Ultimately then, space system acquisitions (or any software acquisition with an agency relationship) can utilize the data, findings and methods used in this research to provide:

(1) Qualitative and quantitative evidence that other programs can use to compare with their own software acquisitions, in order to ensure that a lack of engineering rigor and attention to detail does not directly contribute to costly and prolonged testing, a complete redesign of the software, the loss of availability of the space vehicle, the loss of mission capability, or even the complete loss of the mission.

(2) Better modeling methods that will allow software acquisitions early insight into long-term quality, cost and schedule risks from near-term (short-term) attempts at curtailing product inspection and test rigor. Thus, we provide a modification of Madachy's inspection-based system dynamics model, incorporating unit test and an integration test feedback loop, for program offices to use during the early phases of the acquisition.

(3) A theoretical foundation with arguments that use game theory to not only provide a justification for the observed case study behaviors; the arguments lead to a Nash bargaining game as the solution for the current situation.

1.2 Contents

The introduction section of this chapter is used to describe the software growth problem and the policy changes from the mid-90's acquisition reform initiative that have had a deleterious effect on our national security space system acquisitions. Preceding the contents in this sub-section was a statement describing the purpose of the research. Chapter 2 provides background information on space system software, software engineering models and methods, a high-level view of the U.S. space system acquisition environment, and a literature review of the existing applications of game theory to the field of software development. Chapter 3 presents the kinds of qualitative and quantitative data available from seven different SMC flight software projects. While the public version of the case studies provides some specific qualitative and quantitative data, its analysis was narrowed to the data needed to support the system dynamics model in chapter 4.
Policy recommendations based on the qualitative and quantitative data are made at the end of this chapter.

The engineering and development of software-intensive systems is a lengthy, dynamic endeavor that cannot easily be put into a laboratory in order to verify the outcomes of different development scenarios, such as the effect of variations in funding and schedule on the quality of the software. Hence, system dynamics is used as a research tool in Chapter 4 to model and then investigate various software development 'what if' scenarios and their effect on quality. Furthermore, because of the harmful impact of the decisions made by the software developers in the case studies, we especially wanted to model the impact of different software inspection and testing policies. To accomplish this, Ray Madachy's [21] inspection-based model was modified to incorporate unit testing and an integration test feedback loop. The resulting tool uses synthetically created staffing curves from Madachy to investigate the dynamics of software development situations with varying degrees of rigor in inspections and in unit testing, for a wide variety of parametric modifications. The quantitative and qualitative case study data from chapter 3 are then used to provide modified staffing curves to investigate the dynamics of the projects' identified defects. The chapter concludes with the results from simulations that use Latin Hypercube sampling to perform a sensitivity analysis for software processes that are quality, schedule or cost-driven, using distributions that were created to mimic the observed phenomenon. (Latin Hypercube sampling (LHS) is a more efficient method of doing Monte Carlo sampling. The selected system dynamics modeling tool includes both methods for sampling distributions in its risk analysis functionality; however, this dissertation only reports the results from LHS.)

Since one of the stated purposes of this dissertation is to fundamentally understand what drove the observed behavior in the case study data, in chapter 5 we provide a game-theoretic explanation of the phenomena. The chapter begins with an introduction to game theory concepts using a 3x3 static matrix game, and then reviews Austin's use of game theory [22] to describe "corner cutting." A differential game of optimal steel production is slightly modified to include a mixture of good and bad steel production and is then used to provide arguments for why the government is ultimately the source of, and the solution for, the problem. Following the suggested areas for future investigations in game theory and the economics of large software-intensive systems in the presence of defects, the chapter concludes by using game theory concepts to provide various strategies in a multi-player dynamic Nash bargaining solution to the low-quality software problem.

Chapter 6 begins with a restatement of the fact that the research was done in a manner intended to identify the forces driving the engineering (i.e., human) behavior observed in the case studies without implicating specific companies, individuals within management, or any of the engineers involved. Following this is a concise summary of the overall conclusions and results from each chapter. A discussion of the contributions made by the author, with additional avenues for further research, rounds out the chapter.

The appendices include: appendix-A, containing additional software development models and information not retained in chapter 2.
Appendix-B providing for the dynamics modeling community the non-attributable raw quantitative data that was used in this dissertation. This is possible due to the number of projects from various contractor products contained in The Aerospace Corporation software reliability research database. Appendix-C provides an example of a Perl script that was used to extract data from contractor ASCII defect databases. Appendix-D provides the equations used in the Modified Madachy Model. Appendix E contains tables of the numerical comparison between Madachy’s original implementation in iThink, and the Powersim implementation (with the feedback and unit testing models disabled) that was used in this dissertation. Appendix-E also contains the model’s results from an expanded test matrix to document the model’s behavior from the addition of unit testing and the integration test feedback loop functionality. Appendix-F provides plots of the simulated dynamic defect data from integration testing with a feedback loop, while appendix-G provides the results of Latin Hypercube sampling of the model (with inspection effectiveness using a constant value of 0.6) with distributions for quality, schedule and cost affected parameters and notes issues from using Monte Carlo sampling of the model. Appendix-H provides a note about the solution to Freeman Dyson’s problem and appendix-I contains an analogy of UML to blueprints for non-software people. 8 C h a p t e r 1 E n d n o t e s [1] Michael A. Hamel, “Military Space Acquisition: Back to the Future,” High Frontier 2, no. 2: 6. [2] Douglas J. Buettner and Bruce L. Arnheim, “The Need for Advanced Space Software Development Technologies,” Proceedings of the 23rd Aerospace Testing Seminar, 10-12 October 2006, by The Aerospace Corporation: 3-4. [3] Ibid., 3-8 - 3-10. [4] Ibid., 3-5. [5] Congress, House, Government Reform Subcommittee on Technology and Procurement Policy, Acquisition Reform Working Group Statement on “Acquisition Reform Initiatives,” 107th Cong., 22 May 2001; available from http://www.csa-dc.org/documents /TestimonybyARWGbeforeTechnologyandProcurementPolicysu.pdf; Internet; accessed 22 April 2007. [6] Darleen A. Druyun, Testimony to Congressional House Armed Services Committee, (April 8 th , 1997), Internet: available from http://armedservices.house.gov/comdocs/testimony/105thcongress/97-4-8Druyun.htm [7] William F. Ballhaus, Jr., “National Security Keynote,” 2004 Space Systems Engineering & Risk Management Symposium, available online at www.aero.org/conferences/riskmgmt/pdfs/Ballhaus.pdf [8] Buettner and Arnheim, 3-11. [9] Suellen Eslinger, “Space System Software Testing: The New Standards,” Proceedings of the 23rd Aerospace Testing Seminar, 10-12 October 2006, by The Aerospace Corporation, 3-27. [10] James Gleick, “A Bug and a Crash: Sometimes a Bug Is More Than a Nuisance,” available from http://www.around.com/ariane.html; Internet; accessed 5 May 2007. [11] Jack Woehr, “A Conversation with Glenn Reeves: Really remote debugging for real-time systems,” Dr. Dobbs Journal, November 1999; available from http://www.ddj.com/184411097; Internet; accessed 5 May 2007. [12] Nancy G. Leveson, “The Role of Software in Spacecraft Accidents”, Massachusetts Institute of Technology; unpublished; available from http://sunnyday.mit.edu/papers/jsr.pdf; Internet; accessed 5 May 2007, 2-3. 
[13] JPL Special Review Board, Report on the Loss of the Mars Polar Lander and Deep Space 2 Missions; Jet Propulsion Laboratory, California Institute of Technology, JPL D-18709, 22 March 2000; available from ftp://ftp.hq.nasa.gov/pub/pao/reports/2000/2000_mpl_report_1.pdf; Internet; accessed on 5 May 2007; 13. 9 [14] Justin Ray, “Sea Launch malfunction blamed on software glitch”, Spaceflight Now, 30 March 2000; available from http://spaceflightnow.com/sealaunch/ico1/000330software.html; Internet; accessed on 5 May 2007. [15] CNN News article, “Scientist: Mars rock photo shows 'Holy Grail‘,” CNN, 27 January 2004; available from http://www.cnn.com/2004/TECH/space/01/26/mars.rovers/; Internet; accessed on 5 May 2007. [16] Leonard David, “'Serious Anomaly' Silences Mars Spirit Rover,” SPACE.com news article contributed by The Associated Press, 22 January 2004; available from http://www.space.com/missionlaunches/spirit_silent_040122.html; Internet; accessed on 5 May 2007. [17] Joe Winchester , “Software Testing Shouldn't Be Rocket Science,” JDJ - Java Developers Journal, 9 February 2005; available from http://java.sys-con.com/read/48176.htm; Internet; accessed on 5 May 2007. [18] NASA, “Overview of the DART Mishap Investigation Results,” available from http://www.nasa.gov/pdf/148072main_DART_mishap_overview.pdf; Internet; accessed on 5 May 2007. [19] NASA, “Mars Global Surveyor (MGS) Spacecraft Loss of Contact,” 13 April 2007, available from http://www.nasa.gov/pdf/174244main_mgs_white_paper_20070413.pdf; Internet; accessed on 5 May 2007. [20] Leveson, “Software in Spacecraft Accidents”, 25. [21] Raymond J. Madachy, A Software Project Dynamics Model For Process Cost, Schedule And Risk Assessment, Ph.D. Dissertation, Department of Industrial and Systems Engineering, USC, (December: 1994). [22] Robert D. Austin, “The effects of time pressure on quality in software development: An agency model,” Information Systems Research (INFORMS), Vol. 12, no. 2, June 2001: 195- 207. 10 C H A P T E R 2 : B A C K G R O U N D A good scientist is a person with original ideas. A good engineer is a person who makes a design that works with as few original ideas as possible. There are no prima donnas in engineering. [1] 2. Introduction This chapter provides background material for readers, where the references in the end notes offer additional sources of information. First, since software (and firmware) is found throughout modern space systems the chapter begins with a short overview section of where it is used and what it is used for [2] [3] [4]. Thus, a functionality overview section provides the reader with a frame of reference for what space vehicle system functionality is touched by software. Next, a section is provided that gives an overview of the engineering methods that engineers must use in order to successfully build complex software intensive space systems. Since this section would be prohibitively long even for an overview section (where there are volumes of books and technical papers on these subjects), supplementary material is provided in appendix-A. In addition, the references can be used to supplement the provided material. The final section of the chapter provides the results of the author’s literature search into game theory use in the field of software engineering. 2.1 Space System Software Overview Provided here is a brief overview of the functionality provided by software and where it is used in our space systems to orient the readers. 
2.1.1 Spacecraft Bus Software The spacecraft bus usually describes all of the systems on the satellite (or the spacecraft that we send to other planets) to successfully operate it. On-board satellite spacecraft bus software (or firmware) functionality includes algorithms for monitoring, controlling, and communicating with every system on the space vehicle (the term used to indicate the bus and all of the payloads) and the operators on the ground. Fundamentally, the entire health and safety of the space vehicle depends in some way on software. 11 The functionality list includes algorithms for handling digital signals from various input sensors and actuators, communication telemetry (TLM), tracking, command, and data handling (C&DH), electrical power monitoring and regulation (EPS), thermal monitoring and control (TC), appendage command and control (ACC), guidance, navigation and control (GN&C), attitude determination and control (ADC), and autonomous actions for health and safety (H&S) [5]. 2.1.2 Payload Software Satellite payload(s) include software for handling the same general types of functionality that is found in the bus software (examples are sensor control, and command handling from the bus software, and bus communication protocols) in addition to routines for processing the specialized mission data from the payload’s sensors [6]. (This general reproduction of functionality in the spacecraft bus and between payloads in different generations of satellites provides a ripe environment for repeated flight software adaptation and reuse. This situation is mentioned later in the case study research.) 2.1.3 Ground Software Ground systems include software functionality for sending and receiving commands to and from the satellite(s), routing sensor data to operator terminals, spacecraft and payload operator display and data processing, satellite tracking, and mission data processing [7]. In addition, depending on the results of early space-ground tradeoff studies during the design of the system, various spacecraft and payload mission processing and even health and safety processes are handled by ground software [8]. 2.1.4 Launch Vehicle Software Launch vehicle software (actually firmware since it cannot be changed during the launch) functionality includes routines for handling digital signals from various input sensors and actuators, communication telemetry, command and data handling, guidance, navigation and control, attitude determination and control, and autonomy for launch events such as jettisoning stages [9] [10]. 12 2.2 Software Development Models and Methods Overview This section provides background information; first on how software is developed, and second about the analytical methods and processes used to ensure it is developed correctly. Software intensive development can follow a number of possible life cycle models from the literature. However, the analytical methods used to make sure the software will work and will be developed correctly remain the same between all of them. For example, beyond the simple ad-hoc software development process (i.e. there is no engineering process beyond chaos) are the waterfall, incremental, evolutionary, transform, spiral, agile and the Rational Unified Process (or RUP) processes [11] [12] [13] [14] [15] [16]. Finally, what is not provided in this section is included for reference in appendix-A, as these methods are mentioned in the research. 
2.2.1 Waterfall Model

We focus here on the waterfall model as it not only provides the fundamental life cycle phases showing the feed-forward of information (typically plans, specifications, and designs in the form of documentation), it also shows the feed-back of the flaws identified in the next dependent life cycle phase. This forward and reverse flow of information will be shown to be an important aspect in the research. Furthermore, to this day it is still ingrained in our software development processes. In the waterfall development model, phases flow from one to the next, and rework to documents from the prior development phase is shown in Figure 2. Rework arrows are on the left, flowing back up into the prior phase, while the development flows down into the subsequent phase on the right.

[Figure 2: The Waterfall Model. Phases: system feasibility studies; system/software plans & requirements; system/software design; detailed design; code; integration; system integration; operations & maintenance.]

Table 2 summarizes the activities that can occur in each of the waterfall model's software/system development phases [17] [18] [19]. The Phase column lists the system-engineering phase moniker with the corresponding system or software development phase occurring at this time [20] [21]. The Activities column identifies the work that can occur during each of the phases, with items that ideally incur rework shown in parentheses [21].

Table 2: Waterfall Model Life Cycle Phase Activities
Phase | Activities
System feasibility studies | Study alternative system concepts, identify potential solutions, identify technology, and estimate software size.
System/software plans and requirements | Develop technologies, define the concept of operations and system requirements with flow down to hardware and software requirements, conduct risk reduction and trade studies (e.g., prototype and determine algorithms), define the baseline and initial high-level design, create system use cases, and write software development/quality plans. (Rework software size estimates.)
Software/system definition | Complete technology development and baseline management/definitization, create the preliminary design (high-level Unified Modeling Language (UML) use cases and high-level object oriented design (OOD) diagrams, and finish algorithm descriptions), and write the interface control description and the initial software/system test plan. (Rework use cases, software plans, and the software requirements.)
Detailed design | Complete the design, create the detailed design (detailed UML diagrams with completely defined algorithms), and develop pseudo-code and the initial software test description. (Rework high-level design, software test plans, and interface control description.)
Code | Implement the software units. (Rework detailed software design and the software test descriptions.)
System integration | System checkout, and authority/consent to ship or launch. (Rework software code, database values, documentation and manuals.)
Integration | Component integration of software units, and integration with hardware-in-the-loop simulators. (Rework software code.)
Operations and maintenance | On-orbit test and operations, software upgrades with defect fixes. (Rework software code, database values, documentation and manuals.)

In this phased life cycle view, testing is any dynamic execution of the software code to find 'bugs'.
Figure 3 is a modified “V-model” alignment of the system and software development phases with their dependent testing phase [22]. The original V-Model was defined by Paul Rooks in the late 1980’s and attempted to give equal weight to testing, instead of treating it as an afterthought [22]. As shown in Figure 3, ideally, rework only flows back to the development phase that immediately precedes the one before it. However, the dependency of the test phase on information contained in the various documents (requirements, design or code) used to write tests for that V-modeled phase can identify rework from phases significantly earlier in the development cycle. As a consequence, Royce’s original waterfall model encountered difficulties with some classes of software, where the primary issue is the 15 need for fully elaborated documents as completion criteria to proceed out of the requirements and design phases without adversely affecting the entire waterfall paradigm [23]. Figure 3: Test Phases Aligned With Development Phases In terms of modeling, software testing is a layered approach where, as it is methodically built, it is methodically tested. Software code is supposed to move from 100% unit testing into software unit-unit integration testing (component testing) where the interfaces between units are tested. After component integration testing, the components are integrated and tested with the ever-growing software system according to the software’s design and build plan. As much testing at the early test levels is done in a developers test environment as possible, and then it is moved onto hardware-in-the-loop simulators or emulators to verify that it works in the actual hardware environment in the same manner that it worked in the developer’s environment. Software Item Qualification Testing ((SIQT) but also called Formal Qualification Testing (FQT)) verifies that the software meets its negotiated and flowed down 16 requirements (found in the Software Requirements Specification (SRS), and various Interface Control Document(s) (ICD)). From the context of software in the development life cycle, artifacts that are required by software test engineers to do their job are created during the requirements and design phase; the quality of which depends on the adequacy and rigor of formal reviews by a requirements engineering and design process to ensure that the requirements are in fact testable. Software testing is then conducted through each product release and maintenance cycle via regression and new tests to ensure the proper implementation of ‘bug fixes’ or that the addition of the bug fix with the addition of new functionality has not broken anything else. 2.2.2 Software Process Dynamics Modeling System dynamics has been used by researchers to model the feedback dynamics of the software development processes where it provides an ability to predict the performance of a system as a function of time [24] [25] [26]. The first application of systems dynamics to software engineering processes was by Abdel-Hamid in his 1984 Ph.D. dissertation [25] [26]. Today, in the field of software engineering, system dynamicists use these simulation tools to find a solution to our dynamic problems. One such example is: What are the effects of interactions between requirements elicitation and areas of software development such as software implementation, testing, process improvement initiatives, hiring practices and training [27]? While another example is Madachy’s Ph.D. 
Another example is Madachy's Ph.D. dissertation investigation of the effect of inspection practice on a project's manpower rate [28]. Madachy provides numerous examples of system dynamics models for decision infrastructure, defect chains, people, overtime, slack time, and the effects of pressure on the time to complete, to name just a few [29].

2.2.3 Software Cost/Quality Model

Madachy's model uses as its foundation the original Constructive Cost Model (COCOMO). Therefore, it is important to provide background on these cost and quality models. The software cost models provide us with an ability to estimate the amount of effort, and thus the cost and schedule, required to develop a product of a certain size, by taking into account various cost and effort drivers via multiplicative parameters derived from the analysis of a large number of software projects [30] [31]. Even though Jones [32] provides a list of nine commercial cost models still on the market as of 2005, this section will only discuss the COCOMO/COCOMO II and COQUALMO models.

2.2.3.1 Constructive Cost Model (COCOMO)

The original COCOMO provided models for a Basic (an early, rough-order-of-magnitude) effort and schedule estimate for familiar in-house software development, an Intermediate form that takes into account many factors known to affect software development, and a Detailed form that takes into account software development life cycle phase-sensitive multipliers and a three-level hierarchy consisting of a module, sub-system, and system level [33]. A new generation of software processes, products, approaches, and maturity initiatives required the creation of the updated COCOMO II model [34]. COCOMO II replaces the Basic and Intermediate COCOMO forms with the Applications Composition, Early Design, and Post-Architecture model forms, with other significant parametric updates [35]. The Early Design and Post-Architecture models' general mathematical form for the nominal schedule (NS) effort, in person-months (PM), is given in Equation 2-1 [36]:

PM_{NS} = A \times Size^{E} \times \prod_{i=1}^{n} EM_i, \quad \text{where } E = B + 0.01 \times \sum_{j=1}^{5} SF_j .    (Equation 2-1)

The calendar time it will take to build the software for this nominal person-month effort is estimated from Equation 2-2 [36]:

TDEV_{NS} = C \times \left(PM_{NS}\right)^{F}, \quad \text{where } F = D + 0.2 \times 0.01 \times \sum_{j=1}^{5} SF_j = D + 0.2 \times (E - B) .    (Equation 2-2)

The reader is referred to Boehm et al. [36] for detailed information on the effort and duration calculation values for the factors A, B, C, and D, the effort multipliers (EM_i), and the scale factors (SF_j). Nevertheless, these mathematical equations tie together the schedule dependency on the effort multipliers and dependent factors such as software size, reuse, requirements volatility, process maturity, and many others [37]. Of note in the model is a dependency on the product's reliability requirements, where Boehm et al. [38] state, "For example, a high rating of Required Software Reliability (RELY) will add 10 percent to the estimated effort, as determined by the COCOMO II.2000 data calibration. A Very High RELY rating will add 26 percent." Out of necessity, space systems require software that has a very high reliability.

The required development schedule cost driver (SCED) accounts for schedule stretch-out or compression [39]. This effort multiplier appears in both the calculation of the person-month effort PM_{NS} and the time to develop TDEV, where it is pulled out of PM_{NS} for the computation of the time to develop [39]:

TDEV = \left[ C \times \left(PM_{NS}\right)^{D + 0.2 \times (E - B)} \right] \times \frac{SCED\%}{100}, \quad \text{where } C = 3.67, \; D = 0.28, \; B = 0.91 .    (Equation 2-3)

Hence, the equations for schedule (TDEV), cost (a dollar value multiplied by the PM effort), and quality (reliability as RELY) are intrinsically related.
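The following Python sketch evaluates Equations 2-1 through 2-3 directly. The constants B = 0.91, C = 3.67, and D = 0.28 are those cited in Equation 2-3; A is assumed here to be 2.94, the commonly published COCOMO II.2000 calibration value, and the effort multipliers and scale factor ratings in the example are placeholder inputs (with RELY = 1.26 reflecting the Very High reliability penalty quoted above), not a calibrated estimate for any real program.

    import math

    def cocomo_ii_nominal(size_ksloc, effort_multipliers, scale_factors,
                          A=2.94, B=0.91, C=3.67, D=0.28, sced_percent=100.0):
        """Nominal-schedule effort (person-months) and duration (months), per Equations 2-1 to 2-3."""
        E = B + 0.01 * sum(scale_factors)                                # exponent from the five scale factors
        pm_ns = A * (size_ksloc ** E) * math.prod(effort_multipliers)    # Equation 2-1
        F = D + 0.2 * (E - B)                                            # schedule exponent
        tdev_ns = C * (pm_ns ** F)                                       # Equation 2-2
        tdev = tdev_ns * (sced_percent / 100.0)                          # Equation 2-3 stretch/compression
        return pm_ns, tdev

    if __name__ == "__main__":
        # Placeholder inputs: 50 KSLOC, all 17 post-architecture effort multipliers nominal (1.0)
        # except RELY = 1.26 (Very High), and illustrative mid-range scale factor ratings.
        ems = [1.26] + [1.0] * 16
        sfs = [3.72, 3.04, 4.24, 3.29, 4.68]
        pm, months = cocomo_ii_nominal(50.0, ems, sfs)
        print(f"Estimated effort: {pm:.0f} person-months, schedule: {months:.1f} months")

The sketch makes the text's point concrete: raising RELY (quality) raises the effort multiplier product, which raises PM_{NS}, which in turn lengthens TDEV, so cost, schedule, and reliability cannot be traded independently.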
2.2.3.2 Constructive Quality Model (COQUALMO)

We describe here the Software Defect Introduction and Removal Model (Figure 4, also referred to as the "tank and pipe" model), as this model will be referenced later in the dissertation [40]. Pictorially, Figure 4 shows the phases that introduce defects: requirements, design, and code. The defects are removed through reviews, software prototyping, other methods (such as dynamic or static code analysis), testing, and finally through their discovery during operational use. In operation, however, the user is committed to a discover-and-fix process in which patches are typically provided to the customer to upload changes to the software to fix the satellite's defects.

[Figure 4: Software Defect Introduction and Removal Model. Requirements, design, and code defects flow into the "tank"; requirements reviews, prototypes, design reviews, code reviews, unit tests, other methods, and operational test/use drain defects out, leaving a residue of undiscovered defects.]

The defect introduction sub-model of the Constructive Quality Model (COQUALMO) uses the same parameters that are found in the COCOMO II life cycle cost estimation model to estimate the number of non-trivial defects in a product [41]. The defect removal sub-model, however, requires defect removal profiles in order to estimate the residual defect density [42]. Data is currently being gathered to calibrate COQUALMO with completed projects using a Bayesian approach [43].
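To make the tank-and-pipe picture of Figure 4 concrete, the short sketch below pushes assumed per-phase defect injection counts through a chain of removal activities, each modeled simply as a fractional removal efficiency applied to the remaining defect pool. The injection counts and efficiencies are illustrative assumptions, not COQUALMO's calibrated defect introduction rates or removal profiles.

    # Illustrative tank-and-pipe calculation for Figure 4 (not calibrated COQUALMO values).
    introduced = {"requirements": 40, "design": 90, "code": 170}   # assumed injected defects

    # Assumed fraction of the *remaining* defects removed by each activity.
    removal_activities = [
        ("requirements reviews", 0.30),
        ("prototypes",           0.10),
        ("design reviews",       0.35),
        ("code reviews",         0.40),
        ("unit tests",           0.45),
        ("operational test/use", 0.50),
    ]

    remaining = float(sum(introduced.values()))
    print(f"Defects introduced: {remaining:.0f}")
    for activity, efficiency in removal_activities:
        removed = remaining * efficiency
        remaining -= removed
        print(f"{activity:22s} removes {removed:5.1f}, leaving {remaining:6.1f}")
    print(f"Undiscovered (residual) defects: {remaining:.1f}")

Even with generous assumed efficiencies, some residual defects survive every pipe, which is precisely why the defect removal profiles of the COQUALMO removal sub-model matter to a residual defect density estimate.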
2.2.4 Capability Maturity Model® Integration (CMMI)

We now provide a brief discussion of the Capability Maturity Model® and the Capability Maturity Model® Integration (CMMI), as they are used to assess a software team's ability to repeatedly follow its own development practices. This information is important because one of the key findings is the recommendation that all flight code be written by teams assessed at the highest maturity levels, where the processes for fighting defects are demonstrated as mature. The CMMI grew out of the Software Engineering Institute (SEI) (operated by Carnegie Mellon University) effort to assist a number of organizations with assessing their software development processes, which was initially based on a maturity questionnaire [44] [45]. The assessment practices have themselves matured since then, to the point where organizations that have adopted the CMMI have reported dramatic improvements in cost, schedule, productivity, quality, customer satisfaction, and return on investment [46]. The CMMI achieves these improvement gains by defining the key best practice areas that development organizations should incorporate into their development processes [47]. The numerical capability maturity level assessment (with integer values between 1 and 5) is a measurement of an organization's progress in defining and improving its development processes, where the highest level (level 5) demonstrates the incorporation of the defect-fighting processes [47]. For reference, the maturity of the software development organizations working on the satellite code in the primary case studies was anywhere between 2 and 4.

2.2.5 Software Defect Prevention and Detection Methods

Here we provide background information on software defects and on methods to identify where in the process these defects were introduced (the importance of this topic to the dissertation should by now be obvious).

2.2.5.1 Software Defect Taxonomies

Taxonomies of software anomalies and defects are available in IEEE standards and the software testing literature [48] [49]. The IEEE standard requires the identification of the project activity during which the anomaly was discovered; the identification of the anomaly's actual cause, source, and type; the corrective action that resolves the anomaly; the impact of the anomaly on the product; and its disposition [50]. Beizer [51] provides a discussion on the categorization of "bugs" and creates a taxonomy keyed by a 4-digit number that includes project statistics accumulated from a number of published sources.

2.2.5.2 Defect Classification Methods

Traditionally, software defects have been analyzed using Root Cause Analysis (RCA) or Statistical Defect Modeling (SDM) techniques [52]. SDM fits a statistical model to the defect distributions by assuming each defect is a random sample from an ensemble, while RCA attempts to find the root cause by considering each defect as a unique occurrence [52]. Orthogonal Defect Classification (ODC) is described as a technique that is mid-way between these two traditional techniques, where RCA is more qualitative but time consuming, and SDM is more quantitative but not easily translated into a corrective action [53]. ODC simplifies the defect taxonomy into a small number of orthogonal classes. These classes are used to provide quick, in-phase turnaround information about the effectiveness of the process and the quality of the product to the development organization via quantitative analysis [54]. Chillarege reports that ODC can reduce the cost of projects by a factor of 10, and in one case an organization was able to reduce defects by a factor of 80 over a 5-year period. None of the projects reviewed in the case studies used ODC, but some of them did use RCA. Hence, they all lacked the in-phase feedback of defects.

2.2.5.3 Software Defect Detection from Inspections

Michael Fagan introduced the use of a formal review process (also called Fagan inspections and formal peer reviews) at IBM on their software work products (e.g., requirements, plans, designs, code, tests, reports, and user manuals) to improve programmer productivity and software quality [55] [56]. In his seminal paper on the subject, Fagan [55] describes the improvement provided by formal inspections over a similarly developed operating system component that had used walk-throughs (characterized by Fagan as an un-repeatable and inconsistent process of authors reading their code to a group for feedback on design alternatives and potential errors). Since Fagan's introduction of the formal inspection process, numerous researchers have validated the quality improvement and have investigated methods for fine-tuning the process [57] [58] [59] [60] [61] [62].
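As a small illustration of the in-phase feedback that classification schemes such as ODC and formal inspections are meant to enable, the sketch below tallies a hypothetical set of defect records by the phase that injected each defect and the phase that found it, and reports phase containment. The records are invented for illustration only.

    from collections import Counter

    # Hypothetical defect records: (phase introduced, phase detected).
    defects = [
        ("requirements", "requirements"), ("requirements", "design"),
        ("requirements", "test"), ("design", "design"), ("design", "code"),
        ("design", "test"), ("code", "code"), ("code", "test"), ("code", "test"),
        ("code", "operations"),
    ]

    introduced = Counter(origin for origin, _ in defects)
    contained = Counter(origin for origin, found in defects if origin == found)

    for phase in ("requirements", "design", "code"):
        total = introduced[phase]
        in_phase = contained[phase]
        pct = 100.0 * in_phase / total if total else 0.0
        print(f"{phase:12s}: {total} injected, {in_phase} caught in phase ({pct:.0f}% containment)")

    escaped = sum(1 for origin, found in defects if found in ("test", "operations"))
    print(f"Defects escaping to test or operations: {escaped}")

Low containment percentages are exactly the kind of quick quantitative signal that ODC and inspection metrics feed back to the development organization while the offending phase is still underway.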
2.2.5.4 Analytic Models, Simulation and Analysis

Analytic models are mathematical models, or models that provide an abstract diagrammatic view or language that can be translated into mathematical form, which can be solved to provide insight into the performance attributes of the software or system. Examples of analytic models are Reliability Block Diagrams, Markov models, Petri nets, queuing networks, and component (or sub-system) based Reliability Growth models [63] [64] [65] [66]. Using analytic models to create computer simulations of the system, sub-systems, and their components, with subsequent analysis of the results, determines whether the system/software design will meet its reliability requirements [67] [68] [69]. Analysis by itself, however, is the careful review of engineering artifacts through various means (e.g., manual, graphical, or automated) and includes items such as requirements, designs, code, safety hazards, fault trees, failure modes, tests (and residual defects/faults), risks, processes, or metrics [70].

2.2.5.5 Unified Modeling Language (UML)

Design modeling languages like the Unified Modeling Language (UML) are not usually considered a defect detection method; however, through their use teams can find requirements and design flaws, and thus produce better-designed products. There are numerous other modeling languages, but UML has become the de facto standard method for documenting software architectures and designs using the object-oriented design methodology. The UML is a pictorial design language that allows engineers to capture the important behavioral as well as structural aspects of a system [71]. Rumbaugh et al. also note that [71]:

[UML] is used to understand, design, browse, configure, maintain, and control information about such systems. It is intended for use with all development methods, lifecycle stages, application domains, and media. The modeling language is intended to unify past experience about modeling techniques and to incorporate current software best practices into a standard approach.

Figure 5 is an example of a possible UML use case to capture the actor (the stick figures) interactions required to initialize (or reinitialize) a satellite's temperature data and the applicable requirements' project unique identifiers for the use case.

[Figure 5: Example of a Possible UML Use Case for Satellite Software]

The UML itself consists of a number of diagrams in what is termed the 4+1 architectural views, which are used to fully capture and document the software's behavioral aspects and its design [72]. Dos Santos et al. [73] documented the use of UML for real-time satellite software, concluding in their paper that, "As emphasis is put on initial conceptual issues leaving implementation details on a second level, chances are the final system may suffer less from design flaws, which demand more expensive corrections. Moreover, this approach is less restrictive concerning design choices and may lead to a richer final product." We will find this description and the findings in the case studies important later in the dissertation.

2.2.5.6 Software Prototypes

The spiral software development model (described in appendix-A) explicitly calls for the use of prototypes in an early spiral as a method for mitigating technology readiness risk [74]. Software prototypes are a very useful tool for refining the software requirements and are particularly useful for burning down algorithm development risk for complex functionality.
For real-time systems, prototypes are used to verify the throughput and key performance aspects of the potential algorithms [75]. Hence, prototyping those aspects of the software that are perceived as risk areas to its eventual development provides a method for the early discovery of potential design issues.

2.2.5.7 Formal Methods

"Formal methods" is a general term applied to a set of analysis techniques that allow for the mathematical modeling of, and the application of formal logical proofs to, items such as software requirements and designs [77]. Examples of formal methods include Larch (a multi-site project investigating the use of methods, languages, and tools for the practical use of formal specifications), Communicating Sequential Processes (CSP), Petri nets, state charts, and specification languages such as the Vienna Development Method (VDM) and Z [78] [79]. Zimmerman et al. [80] describe the hurdles slowing the adoption of formal methods in the U.S., even though NASA cites among the realizable benefits of their application: (1) the discovery of defects that went undetected through extensive testing, (2) the discovery of defects earlier in the software life cycle, thus reducing mistakes due to misinterpretation and incorrect implementation, and (3) the detection of more defects and the elimination altogether of certain types of defects [81]. Why is this important? Here is an entire class of methods that NASA cites as important for the early detection of defects, and yet you will find no mention of their use on the troubled projects in the case studies.

2.2.5.8 Software Testing Methods

Software testing is the dynamic execution of the source code on a computer. It is the single most important method used to find software defects; however, as noted in our V-model description, it cannot be used to find defects until the software has been written. In general, the methods are classified as "black box" or "white box" tests [82] [83] [84]. (Information regarding software-testing techniques was published in [85]; the accepted classic reference for software-testing techniques is Beizer [51].) Black box methods disregard the software's internal structure and implementation and only look at the results of sending inputs into the "black box" and reviewing the resulting outputs. Hence, the test procedures are developed without consideration of the internal structure. Black box tests can be used during any software test phase, but in particular they are applied during validation testing of the software requirements. White box methods, on the other hand, require an understanding of the internal software structure. The most common type of white box test uses a set of inputs designed to exercise each logical branch and sets of branches (called paths). White box methods are generally applied only during the software's unit and unit integration testing. It is useful to provide a general overview (Table 3) of the numerous testing techniques mapped into these two general methods.

Table 3: Types of Software Tests

Scenario tests (also called thread testing) [86] [87] (black box): Tests derived from usage scenarios (or use cases) in order to simulate the mission.
Requirements based tests [88] (black box): Tests to assess the conformance (or non-conformance) of the software with its requirements.

Qualification tests (also called validation testing) [89] (black box): Formally conducted requirements-based tests, usually conducted for the customer on the integrated software product on flight-like hardware, to show that the software fully meets its requirements.

Positive tests (also called clean, nominal, or "happy path" tests) [90] (black and white box): Tests using input values within the expected range and of the correct type to elicit the expected nominal behavior of the software.

Endurance tests (black box): Tests performed over lengthy durations.

Negative tests (also called dirty and off-nominal tests) [90] (black and white box): Tests designed to severely challenge or "break" the software by inducing anomalous behavior from users, equipment, the environment, or by other means.

Stress tests (also called workload tests; a subcategory of negative tests) [91] (black box): Tests of the software in the system using workloads beyond the highest expected levels (usually run concurrently with endurance tests). The goal is to measure capacity and throughput, evaluate system behavior under heavy loads and anomalous conditions, and determine the workload levels at which the system degrades or fails, to ensure graceful failure.

Robustness tests (a subcategory of negative tests) [92] [93] (black and white box): Tests with values, data rates, operator inputs, and workloads outside expected ranges attempting to challenge or "break" the system, with the objective of testing fail-safe and recovery capabilities.

Boundary value tests (a subcategory of negative tests) [87] [94] (black and white box): Tests of the software with data at and immediately outside of expected value ranges.

Extreme value tests (a subcategory of negative tests) [94] (black and white box): Tests with extremely large positive and negative values, small values, and the value zero.

Random tests (also called statistical tests) [95] [96] (black box): Tests with input data values randomly selected from the operational profile probability distribution during scenario and endurance testing.

Fault injection tests [97] [98] [99] (black and white box): Tests on the nominal baseline source code and on randomly altered versions of the source (white box) or object code (black box) in order to assess the software's failure behavior and ensure that the system properly responds to these software component failures.

Branch tests [87] [100] (white box): Careful or automated selection of sub-routine (function or method) inputs designed to force execution of specific software branches (e.g., if and switch statements) at least once, in order to determine the correctness of the branch entrance logic and the code internal to the branch.

Path tests (also called flow graph tests) [101] (white box): Careful or automated selection of inputs designed to force execution of sets of branches (paths), i.e., every feasible set of branches, at least once to determine the correctness of the set of branches.

Modified condition decision coverage (MCDC) [102] (white box): Tests of every point of entry and exit in the program such that every condition in a branch decision in the program has taken all possible outcomes at least once, and each condition in a decision has been shown to independently affect that decision's outcome.
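To ground a few of these test types, the sketch below shows positive, boundary value, and negative/extreme value tests written with Python's built-in unittest module against a hypothetical range-check function; the function, its limits, and the test values are invented for illustration and are not drawn from any program in the case studies.

    import unittest

    def validate_temperature_setpoint(celsius):
        """Hypothetical flight-software-style range check: accept -40 to +85 C."""
        if not isinstance(celsius, (int, float)):
            raise TypeError("setpoint must be numeric")
        return -40.0 <= celsius <= 85.0

    class BoundaryValueTests(unittest.TestCase):
        def test_values_on_and_inside_the_boundaries(self):        # positive tests
            for value in (-40.0, 0.0, 85.0):
                self.assertTrue(validate_temperature_setpoint(value))

        def test_values_immediately_outside_the_boundaries(self):  # boundary value tests
            for value in (-40.1, 85.1):
                self.assertFalse(validate_temperature_setpoint(value))

        def test_extreme_and_invalid_inputs(self):                 # negative / extreme value tests
            self.assertFalse(validate_temperature_setpoint(1e30))
            self.assertFalse(validate_temperature_setpoint(-1e30))
            with self.assertRaises(TypeError):
                validate_temperature_setpoint("hot")

    if __name__ == "__main__":
        unittest.main()

Because the tests exercise both outcomes of the range comparison and the type-check branch, the same small suite also illustrates, in miniature, what the white box branch-coverage methods in Table 3 are after.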
2.2.6 Software Risk

A software risk overview provides background information to familiarize the reader with this subject as it applies to software, since we have found on some of our projects a complete lack of use of software risk management methods. To describe software risk, we first turn to Rowe [103], who suggests that "Risk is the potential for realization of unwanted, negative consequences of an event." For an event to be considered a "risk" there must be: (1) some loss associated with the event, (2) an element of uncertainty or chance, and (3) some amount of choice. Where software is embedded into systems, examples of loss caused by software are cost, schedule, life, mission, availability, and capability. There are three types of uncertainty: (1) descriptive or structural uncertainty, associated with the lack of information about the variables that explicitly describe the system; (2) measurement uncertainty, associated with the lack of information about the value of the variables used to describe the system; and (3) event outcome uncertainty, associated with the fact that the predicted outcomes, and thus their probabilities, cannot be identified. Choice is associated with the decisions made in the face of uncertainty that increase or decrease the probability of an outcome or the magnitude of the loss [103]. The wide variety of software risk management processes is described in appendix-A. Here, however, we provide in the ensuing sub-sections analytical methods for identifying our exposure to software risk and quantifying the probability of failure or success.

2.2.6.1 Probabilistic Risk Assessment (PRA)

Risk exposure is defined as the product of the probability of an unfavorable outcome and the consequence of that unfavorable outcome; Probabilistic Risk Assessment (PRA), on the other hand, is a technique for finding the quantitative probability of failure or success for a system [104]. PRA's quantitative analytical techniques have been used on space systems in the past. Examples include PRA's use as a reactive method on the space shuttle following the Challenger disaster, for the analysis of an un-crewed and tended international space station study following the Columbia tragedy, and proactively in a next-generation launch technology program [105] [106] [107]. Bin Li et al. create a framework for integrating software into PRA, identifying four specific steps in three levels that PRA for software should follow to answer questions concerning "What can go wrong?", "What are the initiators?", "How severe are the consequences?", "How likely is the occurrence of these undesirable consequences?", and, moreover, "How confident are we about our answers to these questions?" [108]. They conclude that their framework can contribute to the understanding of the impacts of software on system failures. A sampling of the suite of techniques is provided in Table 4, while the steps followed to answer the questions concerning what can go wrong are provided in Table 5.

Table 4: Software Probabilistic Risk Assessment Techniques

Checklists: Lists of items to be checked for verification that methods were used, life cycle processes were followed, or that artifacts are present and in the correct state.

PHA: "Preliminary Hazard Analysis" for software occurs during the system design and attempts to identify software-related hazards. The hazard analysis team determines whether it should/can eliminate the hazard by moving the hazard or removing the consequences of its occurrence [109].
FMEA or SFMEA: Software "Failure Modes and Effects Analysis" is a bottom-up assessment of the system design to determine its ability to react in a predictable manner to ensure system safety [110].

FMECA or SFMECA: Software FMEA that emphasizes prioritization is referred to as "Failure Modes, Effects and Criticality Analysis" [111].

HAZOP: "Hazard and Operability Studies" attempt to anticipate hazards via an imaginative method that uses guiding words to prompt the team of engineers to consider the hazard potential of various deviations from the normal, expected behavior of the system [112].

SHARD: "Software Hazard Analysis and Resolution in Design" is a more cost-effective individual approach to applying the principles of HAZOP during system design [112].

MLD: A "Master Logic Diagram" is a hierarchical graphical representation of independent parts of a system that includes system support, human, hardware, and software interdependencies. MLDs provide a hierarchy to model the causal effects of a failure in complex systems [113].

GTST: A "Goal Tree-Success Tree" is a functional decomposition framework for developing models of physical systems [113]. Combining GTST with MLD provides a functional/structural description method [113].

Event Tree Analysis: A mathematical representation for event trees that incorporates concepts from set and probability theories and 'Not logic' for systems [114].

Event Sequence Diagram: An ESD is a visual representation of a set of possible risk scenarios that originate from an initiating event [115].

Petri nets: PNs (there are many types) are a modeling language that provides a system fault-tolerance analysis approach incorporating hardware, software, and human behavior. For example, timed PNs allow the incorporation of timing information, a requirement for real-time embedded systems, allowing the analysis of software actions leading to unsafe behavior [116].

Markov chains: "Markov chains" are discrete (countable) time stochastic processes in which the probability of transitioning to the next state depends only on the current state and not on the sequence of prior states [117] [118]. They are used in the reliability analysis of fault-tolerant (non-repairable) systems using transient analysis of continuous-time Markov chains, and in attempts to model and identify rare events (state transitions with low probabilities) [119] [120].

Fault Tree Analysis: Software FTA is a visual top-down analysis technique that identifies the higher-level events that trace down to elements (e.g., errors, faults, and failures) that can contribute to a system-level hazard [121].

Probabilistic Methods: In software PRA, this is the use of standard statistical distributions (e.g., exponential, beta, normal, etc.) to calculate the component failure probability, together with the convolution of the probability distributions from the beginning state through subsequent transition states to determine the probability of the end state using the probability of each event in the accident sequence. In addition, the probability of a cascading failure in the accident sequence can also be determined using the failure probability of each system component [122].

Common cause analysis: The identification of dependencies between individual events, consisting of all combinations of basic events that cause the top event in the FTA. In software FTA, common cause analysis is used to determine the environmental effects on the top event [123].
Human reliability analysis: Deals with the probability of human errors that subsequently affect the reliability of the system, and with methodologies to reduce their probability of occurrence. Estimates of the human error contribution to accidents rose from 20% in the '60s to about 80% in the '90s [124].

Classical Statistics: The nominal use of classical probability theory to assess an uncertainty distribution in the software PRA methodology [125].

Bayesian Statistics: In the software PRA methodology, if a specific event has already occurred, Bayesian statistics (based on Bayes' theorem for computing the conditional probability of an event given another event that has already occurred) is used to assess the uncertainty distribution of another event [125] [126].

Sensitivity Analysis: Used to assess the sensitivity of the uncertainty distribution to parametric changes where the input parameters are not well defined, thus allowing system design engineers to improve or possibly optimize the design while considering other factors such as cost, power, size, and performance [127].

Table 5: Software PRA Steps (PRA process, applicable techniques, and questions that should be answered)

Initiators - Checklists, PHA, FMEA, HAZOP, Master Logic Diagram - What are the possible software-related failures for the system? What is the classification of the failure (critical, non-critical, etc.)? At what level in the software should we handle/consider the failure?

Consequences - Event Tree Analysis, Event Sequence Diagram, Petri Nets, Markov Chains - What are the identified methods that can be used for recovery if software is involved in a failure?

Probabilities - Fault Tree Analysis, Probabilistic Methods, Common Cause Analysis, Human Reliability Analysis - Which quantification models/methods can be used to quantify the software-related failures? Which kinds of data will be needed for these models, how do we get that data, or how do we work around the lack of the data?

Uncertainty - Classical Statistics, Bayesian Statistics, Sensitivity Analysis - What is the necessary uncertainty analysis for this research? Which kinds of uncertainty analysis should be performed?
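As a concrete, if deliberately simplified, illustration of the "Probabilities" and "Uncertainty" rows above, the sketch below evaluates the top-event probability of a small two-gate fault tree from assumed, independent basic-event probabilities, and then performs a Bayesian (Beta-Binomial) update of a failure probability from hypothetical test evidence. All of the numbers and event names are illustrative assumptions, not data from any program.

    # Independent basic events (assumed probabilities per mission, illustrative only).
    p_sensor_fault   = 1e-3
    p_sw_misreads    = 5e-4
    p_watchdog_fails = 1e-2

    def p_or(*probs):
        """OR gate for independent events: 1 - product of (1 - p_i)."""
        result = 1.0
        for p in probs:
            result *= (1.0 - p)
        return 1.0 - result

    def p_and(*probs):
        """AND gate for independent events: product of p_i."""
        result = 1.0
        for p in probs:
            result *= p
        return result

    # Top event: bad attitude data reaches the control law AND the watchdog fails to catch it.
    p_bad_data = p_or(p_sensor_fault, p_sw_misreads)
    p_top = p_and(p_bad_data, p_watchdog_fails)
    print(f"Top-event probability: {p_top:.2e}")

    # Bayesian update: Beta(1, 99) prior on a failure probability, then 2 failures in 500 tests.
    alpha, beta = 1.0, 99.0
    failures, trials = 2, 500
    alpha_post, beta_post = alpha + failures, beta + (trials - failures)
    print(f"Posterior mean failure probability: {alpha_post / (alpha_post + beta_post):.4f}")

The same structure scales to realistic trees: the gate functions quantify the fault tree identified under "Probabilities," and the conjugate Beta update is one simple way to answer the "Uncertainty" question of how confident we are in the resulting numbers as evidence accumulates.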
Bin Li et al. [128] also create a failure-mode taxonomy (provided in Table 6) and categorized a large number of public software-related system failures. Their larger list also contained some of the space software failures described in Table 1; their categorization of those failures is given in Table 7.

Table 6: Taxonomy of Software Related System Failures

Functional Failure Mode: Internal software function (method or routine) centric failure modes, e.g., omission of a function, incorrect realization of the function, or a function implemented although not specified in the requirements. Attribute and function interaction failure modes are also included in this category.

(Software) Input/Output Interaction Failure Modes: In general, software interaction failure modes are those by which software interacts with components such as hardware, software, and humans. Input/Output failure modes include the exchange of information between humans and software, and between hardware and software. An example is a human error in entering input values into a database.

Multiple Interaction Failure Mode: Failure modes triggered by an event in which multiple processes execute concurrently, leading to an undesirable system state.

Support Failure Mode From Resource Contention: Resource competition modes, which cause deadlock and lockout conditions that lead to failures.

Support Failure Mode From Physical Platform Failure: Physical failures can be decomposed into failures in the CPU, memory, peripheral devices, or other physical support devices like power supplies or communication lines. Software fundamentally depends on these to execute properly, and failure of the underlying hardware can cause software to behave erratically.

Environmental Impact Failure Mode: Failure modes from environmental effects like gravity, temperature, pressure, radiation, Foreign Object Debris (FOD), electrical shorts, bit errors from noise, etc., that lead to the immediate destruction of hardware or the gradual degradation that influences hardware component failure rates.

Table 7: Taxonomy of Space Software Failures (accident and identified failure mode)

Ariane 5 - Functional Failure Mode: omission of an attribute
Mars Climate Observer - Input/Output Interaction Failure Mode: wrong value of input
Mars Lander - Functional Failure Mode: omission of a function
SOHO - Functional Failure Mode: omission of a function
Sea Launch - Functional Failure Mode: incorrect realization of a function
Titan IV A-20 - Functional Failure Mode: incorrect realization of a function

In this limited set of data from publicly visible failures, the distribution clearly indicates that the functional failure mode is the primary mode of space software failure. Why is this significant? Software functionality is identified during the requirements and design process; thus, this is the process that must be used to find and remove these defects. We will find later, in the case study data, that this design process was not done sequentially (before implementation) by the schedule-constrained software engineers.

2.3 Software-Intensive Space System Acquisition Overview

While it is well recognized that the success of a software development project depends on the software engineering processes used, acquisition and up-front planning processes are also highly influential on the success of space systems and their dependent software projects [129]. Further, the SEI has created the CMMI-AM, encompassing acquisition practices that should be performed by government acquisition organizations to ensure the acquisition is conducted effectively [130]. We provide here a brief overview of our software development environment, as our game theoretic approach will allude to the linking of decision processes in this environment.

2.3.1 Government Acquisition of Software Intensive Space Systems

In a general sense, the acquisition of U.S. space systems is characterized by complex interactions between the government employees of the represented organizations, the politically elected members, politically appointed personnel and their support staff, technical support personnel, the "Budget Authority" organization, and the "Acquisition Authority" organization. Figure 6 shows one possible model for the interactions between the "Acquisition Authority" organization and the "Budget Authority" organization.
[Figure 6: Simple Model of Budget and Acquisition Authority Interaction. The Budget Authority (White House, U.S. Senate, U.S. Congress) and the Acquisition Authority (headquarters, controller, technical support) exchange information, fiscal year budget requests, and policy, fiscal year budget, legal, and other constraints and decisions.]

The key concept portrayed in Figure 6 is the flow of information, decisions, and constraints between these two organizations. Information influences the decisions made by decision makers, which are made either individually or by committee. A feed-forward example (information passed from the "Budget Authority" to the "Acquisition Authority") could be the voting position of the political decision makers whose votes are critical to fund a large system acquisition, while a feedback example could be the military benefits or intelligence reports that defend the fiscal year budget request to fund the acquisition of the system. Likewise, Figure 7 shows the same elements flowing between the "Acquisition Authority" and a "System Offeror" organization.

[Figure 7: Simple Model of Acquisition Authority and Offeror Interaction. The Acquisition Authority (headquarters, controller, technical support) and System Offeror N exchange information, decisions, and constraints.]

The more complex model for Figure 7, taking into account a fuller range of information, decisions, and constraints, is shown in Figure 8.

[Figure 8: Model of Acquisition Authority and Offeror Interaction. Internal constraints of the controller, technical support, and offeror, together with external information and external constraints, shape the information, decision, and constraint flows between the Acquisition Authority and System Offeror N.]

The impact of policy and fiscal year budget decisions on the "Acquisition Authority's" internal constraints is not modeled at this point, nor is the decision process internal to the organization; only the external flows that could affect the other organization's decision-making process are modeled. (The end-user community in this model is considered to be embedded within the Acquisition Authority.) The next section provides an overview of the acquisition process.

2.3.2 Software Acquisition in the U.S. Space Environment

The current approach for acquiring reliable software-intensive space systems requires an assessment of an offeror's formal response to a Request For Proposal (RFP) by the acquiring agency, and the selection of an offeror from the pool of offerors responding to the RFP. The RFP assessment is a review of the materials requested by the RFP. An example document that can be requested by the RFP to describe the software process is the Software Development Plan (SDP). In competitive situations, the offerors attempt to respond with an SDP that is an amalgam of the practices defined in the RFP (for example, software development standards that will be placed on contract) and the offeror's own software development practices. Prior to Acquisition Reform, the contracts included very specific software standards with legal language requiring specific steps that "shall" be performed by the offeror contractors; recently, a revitalization effort has occurred to place these software development standards, and their old document-based inputs, back on contracts. In addition, the Acquisition Authority can choose to perform additional assessments of potential offerors.
One example is visiting the proposed offeror's location to perform a capabilities assessment and review artifacts from current or past software development efforts. Artifacts in this sense could include such items as software code, test plans, tests, build plans, peer review results, and metrics.

2.4 Game Theory Literature Review

Game theory's use in the software engineering field can be traced to Boehm's fundamental work in the economics of software engineering, Theory-W, and the WinWin Spiral model [131] [132] [133]. Theory-W's fundamental principle is "Make Everyone a Winner", and it links to game theory through the example of a non-zero-sum game involving three players: a customer and two candidates for the systems analysis lead [132]. Other recent examples include research in the human aspects and teamwork of software engineering [134], in how strategic software release decisions could be improved for competitive software release decision-making situations [135], in the bidding behavior of software engineers in a competitive situation [136], in software development as a non-cooperative game [137], and in a game between time-pressured developers governing "corner cutting" behavior in agency situations [138].

We consider this time (schedule) pressured aspect further, noting that both the Challenger and Columbia space shuttle disasters involved complex human decision processes that judged incorrectly the probabilistic nature of the accident risk in the events leading up to their destruction [139] [140] [141] [142]. Further, Leveson documented some of the decisions made in the chain of events for some of the space accidents identified in Table 1, while others are subtly hinted at in both her and other references (e.g., the decision processes that led to the lack of adequate systems engineering or software engineering) [143] [144]. On the other hand, we have at least one documented case where the heroic efforts of one engineer, and the chain of decisions made to support his efforts, paid off by saving the mission data from the Cassini/Titan Huygens probe [145]. We also surmised in chapter 1 that acquisition decisions following TSPR led to prisoner's dilemma games between competing contractors who no longer had software development standards on contracts. Keep these decision concepts in mind while, in the following sub-sections, we provide a literature review of identified applications of game theory to software engineering, where non-relevant applications are followed by applicable ones.

2.4.1 Non-Relevant Uses of Game Theory

The applications of game theory that we have determined are not relevant to our investigation are provided in this sub-section.

2.4.1.1 The Recent Sassenburg Dissertation

Sassenburg [146] theorized in his Ph.D. dissertation that the software product's release decision is a trade-off between early release, in an attempt to maximize the economic payoff from early market adoption, and late release, in an attempt to avoid the deferral of product functionality or the release of a poor-quality product. In his approach, Sassenburg creates a methodology to structure software release decisions by investigating the existing body of knowledge and the current practices of software development organizations, and then combines the results of his investigation with theories from different disciplines [147].
Sassenburg's dissertation does not apply directly to engineering complex space systems with multiple decision makers, each with competing goals, as it deals with businesses in a competitive situation. For the most part, there is no direct competition governing software product release decisions in an attempt to gain market share between offerors on the space software-intensive systems under development at SMC.

2.4.1.2 The Buisman Thesis

Buisman [148] investigated the potential for using game theory as a tool for understanding software project bids using four software engineering students in his Software Engineering M.S. thesis. The ground rules he provided for the experiment consisted of:

- An experimental bidding game with a number of projects, and just one software project per round.
- There was just one bidder who could win a project during each round.
- The cost of each project is pre-estimated and does not change during the game. Earnings or losses (in non-descriptive units) are calculated as earnings = bid - cost.
- The lowest bidder does not always win a project every round, but could win a project if they had bid the lowest amount.
- Historical data can influence the selection decision to stay with a specific software engineer (despite another bid being lower) and not change (if the customer and software engineer are satisfied, why change?). Bidders, however, are not made aware of the influence of historical information on the selection criteria.

The ultimate goal was to collect as many projects as possible by using a certain strategy over an a priori undetermined number of rounds, so as to gain an average profit of 15% at the end. Some of the specific rules for, and construction of, the game included:

1. Bidders were free to use any kind of strategy.
2. If the bidder did not win a project, the act of bidding did not cost anything.
3. Bidders are not made aware of the number of bidders for a project.
4. Bidders do not know the number of project rounds in the experiment.
5. The maximum bid is infinity if one chooses.

Buisman's master's thesis, though it provides thought-provoking insight into potential bidding strategies, is determined not to be relevant, even though one could speculate about the strategies used by space system contractors in the non-cooperative bidding contests for our expensive systems; such strategies are undoubtedly in use.

2.4.1.3 Network Routing and Telecommunication

Game theory has also been applied to investigate optimal solutions for network routing problems in telecommunication and for the Internet [149]. In these industries, the optimization approach assumes that the strategy for routing is independent of the decision processes of telecommunication firms, Internet Service Providers, and the users of those systems, where in fact the interaction between these "players", as they attempt to optimize their own interaction with the system, has an impact on the availability of the resources for all.

2.4.1.4 The Prisoner's Dilemma for Choosing Extreme Programming

Hazzan and Dubinsky [150] apply the Prisoner's Dilemma as a method to suggest when eXtreme Programming (XP) is the correct software development method choice. They note that cooperation in software engineering is vital, and that the quandary raised by the Prisoner's Dilemma is even stronger in software development environments.
Quoting Agile authorities, the authors conclude with the observation that software development is a cooperative game of invention and communication, and that the Prisoner's Dilemma and other game theory methods should be applied to other software development methods.

2.4.1.5 Game Theory Use to Secure the Reliability of Measurement Data

Ko et al. [151] investigate the use of game theory to secure reliable data from the software developers for the quantitative measurement of the process. They suggest that developers' strategies when faced with measurement reflect the additional effort, uneasiness, and uncertainty it creates. They obtained weight values after educating the developers about the quantitative measurement process, and created a 3x3 game matrix of payoff values, after which they identified dominant and pure-strategy equilibrium points. The 3x3 game that they provide has a Prisoner's Dilemma sub-game. They suggest a manner for improving the reliability of the data, by improving the data input by the developers in the first place, from the analysis of a cooperative bi-matrix game and the use of cooperation evolution theory. A simulation suggests that the reliability of the data is improved by almost 10%.

2.4.2 Relevant Uses of Game Theory

The applications of game theory that we have determined are relevant to our investigation are provided in this sub-section.

2.4.2.1 Value-Based Software Engineering and Theory-W

Value-based software engineering (VBSE) has foundational concepts from Theory-W with the addition of dependency, utility, decision, and control theories [152]. Theory-W seeks to make all success-critical stakeholders (SCS) winners. Boehm and Jain use an informal proof to show that a win-lose situation usually results in lose-lose conditions for all the success-critical stakeholders. A process-oriented expansion of VBSE, which also requires a significant amount of concurrency and backtracking (with a successful track record on more than 50 University of Southern California projects), is described as having the following seven steps [152] [153]:

1. Identify Protagonist Goals (theory-w)
2. Identify SCS (theory-w ↔ dependency theory)
   a. Results Chains (dependency theory)
3. SCS Value Propositions or Win Conditions (theory-w ↔ utility theory)
   a. Solution Exploration (theory-w)
   b. Solution Analysis (theory-w ↔ dependency theory)
      i. Cost/Schedule/Performance tradeoffs (dependency theory)
4. SCS Expectations Management (theory-w ↔ utility theory)
5. SCS Win-Win Negotiation (theory-w ↔ decision theory)
   a. Investment Analysis, Risk Analysis (decision theory)
      i. Prototyping (decision ↔ utility theory)
      ii. Option, Solution, Development & Analysis (decision ↔ dependency theory)
      iii. Cost/Schedule/Performance Tradeoffs (dependency theory)
6. Refine, Execute, Monitor & Control Plans (theory-w ↔ control theory)
   a. State measurement, prediction, correction; Milestone synchronization (control theory)
Thus, our approach of using game theory and control theory (where system dynamics is described as an implementation of control theory in chapter 4), is therefore justified. 2.4.2.2 Human Aspects in Software Engineering Tomayko and Hazzan [154] look at software teamwork from a game theory perspective and note game theory’s concern with how individuals make decisions when they are mutually interdependent and how those individuals interact in an attempt to win when making decisions in those cases. They illustrate using the Prisoner’s Dilemma how individuals tend to compete even in situations where cooperation would allow them to gain more. They point out the following in a discussion about the Prisoner’s Dilemma [154]: As it turns out, such cases happen in many real-life situations in which people tend to compete instead of cooperate, when they might gain more from cooperation. The fact that people tend not to cooperate is explained by people’s worries that their cooperation will not be reciprocated and they will lose more (than if they compete and their partners cooperate). As has been indicated, this happens (in most cases) because of a lack of trust between partners. Hence, we conclude that this is relevant because it provides additional evidence that the use of game theory is a correct application for our acquisition issues. 41 2.4.2.3 Software Development as a Non-cooperative Game Grechanik and Perry [137] started by pointing to the failures of software projects from poor requirements, errors in specifications, and choosing an incorrect architecture after following a wrong design and development model. They follow with the logic that even though these reasons are correct, they are based on the fundamental assumption that everyone in the project is behaving cooperatively – driven to make the project a success while agreeing on the goals and the methods for making it a success. Further, Grechanik and Perry use the framework of game theory to uncover the hidden causes for project failures by looking into the strategies used by software management and developers to maximize their payoff and suggest methods for fixing them. Hence, we consider this as entirely relevant, where we will examine if the assumption that all players are behaving cooperatively in our schedule pressure driven software development environments, or we will reject it in our null hypothesis (included in chapter 3). Moreover, we conclude later that their arguments are foundational to this work. 2.4.2.4 “Corner Cutting” Description for Time Pressured Developers Austin [155] uses game theory to provide an explanation for “corner cutting” in agency situations where one person (in our case the government) relies on an agent (the contractor offeror) to do work. The agent that has been given an unachievable deadline faces a decision of; (1) adding effort to remain on schedule while maintaining quality, (2) reporting that they are “on schedule” while cutting quality corners, or (3) requesting schedule relief to ensure high-quality work. Austin reviews the policy of adding or reducing schedule slack in an attempt to enhance quality with the more appropriate behavioral deadline-setting policies for developers. 
2.4.2.4 "Corner Cutting" Description for Time Pressured Developers

Austin [155] uses game theory to provide an explanation for "corner cutting" in agency situations, where one person (in our case the government) relies on an agent (the contractor offeror) to do work. An agent that has been given an unachievable deadline faces a decision between: (1) adding effort to remain on schedule while maintaining quality, (2) reporting that they are "on schedule" while cutting quality corners, or (3) requesting schedule relief to ensure high-quality work. Austin compares the policy of adding or reducing schedule slack in an attempt to enhance quality with more appropriate behavioral deadline-setting policies for developers. He suggests caution in generalizing the conclusions from his paper to actual software development environments due to known factors such as varying developer productivity, and he advances the concept that setting unachievable deadlines for all developers may make lower-talent developers less likely to take shortcuts. Austin reaches the conclusions that (1) systematically adding slack is not necessarily a cost-minimizing policy, (2) deadlines and planning estimates should be set separately, and (3) deadlines should be set aggressively, as "stretch goals" that few developers regularly meet.

2.5 Summary and Discussion

In general, software prediction research in the industry can be grouped into the following earliest applicable lifecycle areas:

1. Early-life cycle methods
   a. Empirical models
   b. Process models
   c. Organizational capability
   d. Hybrid modeling methods
   e. Requirements maturity modeling
   f. Early system design models
   g. Required use of software development standards
2. Mid-life cycle methods
   a. Architectural design model simulations
   b. Artificial defect injection techniques
3. Late-life cycle methods
   a. System/software reliability growth measurement
   b. Test coverage
   c. Artificial defect injection techniques
   d. Code complexity metrics

None of these methods explicitly takes into account the impact of the various decisions that are routinely made by any number of decision makers throughout the development life cycle. Thus, these methods do not explicitly provide us (the acquirer) the ability to evaluate acquisition strategies that would allow us to optimize such factors as reliability, cost, schedule, and effort while taking into account inevitable perturbations, such as funding changes, on these parameters. Software risk management practices do, however, attempt to take decisions into account through risk analysis, and the spiral model is the only development model that explicitly incorporates risk management [156].

The current practice for cost assessment in the acquisition of new software-intensive space systems is to use third parties to evaluate, and then periodically re-evaluate, the software offeror's proposed cost and schedule using cost modeling tools such as COCOMO II or SEER-SEM (information on SEER-SEM is provided in appendix-A). Early in the life cycle, evaluation efforts are based on SLOC or function-point estimates for the amount of software, with nominal values or best guesses used for other modeling parameters, yielding cost and schedule estimates at completion based on the average software project for the expected type of software. Software defect density models can then be used to feed into a system's reliability estimate. Process models using system dynamics can be used to model the dynamics of the software engineering processes a development organization states it will use. There is, however, the real possibility that the development organization will not use the process reported in the RFP response (for example, spiral development) and instead will fall back onto the more familiar incremental or waterfall development model patterns. Additionally, the provider's RFP response may report software development metrics for the corporation's "A-team" but utilize a bait-and-switch strategy by populating the acquisition with the "C-team" once the contract has been won, or possibly after significant program events such as the preliminary and critical design reviews have been completed.
The Software Engineering Institute's SW-CMM or the newer CMMI rating is typically reported for an organization, that is, at least for a software team in the development organization that was recently assessed at a level of 3, 4, or 5; however, this will not necessarily be the rating of the team working on the software for the target project. The reported CMMI level may also be biased by the fact that the organization assessed itself. Thus, obtaining independent ratings of the teams writing flight code is vitally important.

On the early prediction of a software product's reliability, there are a vast number of development models and any number of possible project decision dynamics, which has likely contributed to the current view by some that early lifecycle software reliability prediction methods lack credibility. Thus, how does one choose from the large number of model types, analytical methods, or processes available in the literature, some of which may or may not be applicable in all potential development circumstances? Any selection, among the countless other decisions that can be made during any phase of the development lifecycle, can easily perturb the development environment, affecting the applicability of a model for the software's reliability.

Boehm et al. used game theory in the larger context of a general theory for software engineering, Theory-W, and Value-Based Software Engineering, in an explicit attempt to make everyone a winner. Tomayko and Hazzan use the Prisoner's Dilemma to highlight the competitive nature of software engineering team members in some software development environments. They leave as a task the review of the agile 'eXtreme Programming' method to determine if there are practices that may increase trust among team members. Further, Turner and Boehm [157] report that comparing the lessons from agile and plan-driven approaches suggests that the most critical success factors are most likely in the realm of people factors, identifying staffing, culture, values, communications, and expectations management as five critical areas that have a significant impact.

Sassenburg's dissertation focused on the software release decision for a business in a competitive release-decision environment. He did not look at strategies for the acquisition of embedded software-intensive space systems. His dissertation did not address optimal acquisition strategies for a government customer base, nor the win-win strategies for the business, user, and customer. In addition, he did not discuss the application of differential game theory and its methodology for control and optimization to provide insight into agency-related quality problems. In Buisman's nine-round software bidding game, he asked the four student bidders to provide a sentence or two on the motivation for their bid, which provided an interesting look into the motivation behind bidder strategies [158]. In the analysis of his results, he cited as shortcomings the lack of a large number of players and sequential rounds, and the fact that the experiment lacked reality. Grechanik and Perry take a non-cooperative game view of the strategies utilized by software management and developers [137]. This reinforces the approach undertaken by this dissertation to address problems in the acquisition of software-intensive space systems.
The question that we will address is how to make everyone a winner in a manner that assures continual cooperation and achieves the goal of a designed software-intensive development project. We believe, fundamentally, that the issues faced by our software-intensive system acquisitions are rooted in the problems Grechanik and Perry outline. Further, Austin's original game theory framework is revisited in detail in chapter 5 of this dissertation and, together with the considerations from Grechanik and Perry and our own observations, forms the theoretical underpinnings for this dissertation. Austin's treatment of penalties and of the games played between developers leaves us unsatisfied that there is not a better method to remove "corner cutting" from software engineering once and for all. Hence, based on our case study observations we fundamentally disagree with Austin's third policy recommendation, and we develop a solution concept in an attempt to create an atmosphere of quality-driven cooperation, arguing that quality is the correct Nash equilibrium decision point for all the players to attempt to achieve. We also show in this dissertation that, contrary to popular belief, driving up quality fundamentally improves both schedule and cost. This Nash bargaining solution is provided in chapter 5. Finally, toward designing an optimal software acquisition for national security space systems, a sampling of existing practices encompassed in process models and methods from the literature was reviewed in this introductory chapter, with further model and method examples provided in appendix-A.

2.6 Conclusions

Defects introduced during the software requirements, design or implementation phases can go undiscovered and pose a risk to the health and safety of the space system; latent defects in the launch system, the ground, and the on-board space vehicle software can all cause the loss of the space vehicle. Uploading a space software patch is not without risk. That is, even assuming the on-board computer system was architected with adequate memory and CPU throughput reserves such that the on-board software can be changed, the computers must still monitor system sensor data to ensure the health and safety of the vehicle while the software modification operations are underway. In addition, software uploads can introduce subtle undetected errors that take a long period of time before they manifest in a mission-ending event. From the perspective of simply depending on thoroughly testing reliability into the software, Beizer points out that complete testing is not only practically impossible, it is also theoretically impossible [159]. Additionally, in real-time systems subtle timing differences can lead to errors that are extremely difficult to reproduce in ground tests and that, when the software is pressed into operation before they are found, can manifest as serious mission problems. Beizer adds that we are then left to rely on a statistical measure of software reliability. Further, he expected each industry that uses software to eventually evolve reliability standards suitable for that particular industry. In fact, the aerospace industry has a standard recommended practice on the use of software reliability, the "American National Standard Recommended Practice for Software Reliability," American Institute of Aeronautics and Astronautics, ANSI/AIAA R-013-1992 (which is currently being revised).
However, this standard is never routinely placed on contracts at SMC, perhaps contributing to the erroneous assumption that the reliability of the software is 1.0? Here, Lakey and Neufelder [160] offer the observation that this is a common misconception for traditional hardware reliability engineers who believe that all software errors are design errors; hence, according to this argument, all software failures must be deterministic, so that software either has a reliability of 1.0 or 0.0. The software size growth curve in Figure 1 provides a clue why this misrepresentation has persisted for embedded on-board software in the satellite industry. The contribution from software code to the system’s reliability has been a very minor component in most earth orbiting non-human rated space systems to date, such that the team sizes and time needed to get the software right could be 47 adequately managed in the time allotted – and any issues from the software’s complexity were well below what was found in the hardware. Thus, any software issues were likely swamped by the number of hardware related issues, and thus software did not appreciably contribute to system cost and schedule overruns. Furthermore, software’s repairable aspect has allowed most software issues to be fixed with patches or simple on-board database parametric modifications, while the repairable aspect applied to mission degrading hardware problems to be worked around has given software a reputation for saving the day from the hardware issues that cannot be easily fixed. Historically, the bulk of a system’s software functionality was allocated to the easier to maintain ground software as a result of the system’s early ground space trade-off study. One could postulate that even if residual software failures were uncovered on-orbit during the early years of using rudimentary software in space systems, that there was a greater likelihood that the vast majority of failures resided in functionality that was not hazardous to the safety of the space vehicle. These issues would then be fixed with a new software image upload, or software “patch” (that is a change to some minor aspect of the software that is not part of the primary uploadable software’s binary image), an on- board database change, or a modification to ground command sequences or parameters. An additional complicating factor for military space on-orbit software failures, keeping them out of the press, is a closed environment where the individuals working on these systems are stove piped and under secrecy orders or legal non-disclosure agreements. If, however, the failure resulted in the quite noticeable loss of a satellite, then it is usually reported in the press. Software reliability techniques have been used in NASA’s human rated software intensive space systems; the most notable example was on the Space Shuttle’s Avionics software [161]. In the conclusions section of Keller, and Schneidewind’s article, they point out that no single SRE method is the “silver bullet” and that the integration of numerous techniques is required to obtain high-reliability software [162]. The “tank and pipe” model in Figure 4 graphically shows that this is the likely situation for software, due to a number of potential defect injection and removal points. Likewise, Beizer states that testing is intended to be our last line of defense against software faults, not the first or only defense. 
48 He continues with the concept that software testers need to first and foremost advocate the use of various defect prevention methods. The prudent approach is to then use testing – as the last line of defense following the extensive use of prevention and early discovery methods. We foresee that by adopting the use of ODC, we should in the future, be able to optimize the use of the early life cycle prevention and early discovery methods. Also, it should be clear that decisions made by one organization (for example the joint decision “Budget Authority”) affected the internal constraints within the other organization. For example, a decision to change the government acquisition strategy (TSPR) not only affected the funding constraint on another branch of the acquisition authority, severely limiting the engrained information exchange processes, it also changed the decision processes that governed the manner in which the acquiring organization wrote system acquisition contracts. The constraints that were removed were the application of military standards, which changed the manner in which offerors responded to the RFP. The value of the contract constraints on the reliability of the space systems was underappreciated. Eslinger provides a good review of the SMC acquisition history with respect to software standards and specifically notes the dramatic increase in on-orbit anomalies and space vehicle failures as a direct result of acquisition reform’s removing military software development standards from these contracts [163]. Hence it would seem, that the only leverage that an acquiring agency has had for assuring adequate processes is placing software development standards on contract with clear and concise legal language, i.e. the shall statements and interacting closely with the offeror through an embedding process for adequate and accurate information flow. A revision of the early 90’s software development standard (Mil-Std-498) has been created by The Aerospace Corporation (Technical Operating Report TOR-2004(3909)-3537B “Software Development Standard for Space Systems”) that includes the legal language for the software test program, which government customers can place on contracts in place of commercial standards. 49 C h a p t e r 2 E n d n o t e s [1] Freeman Dyson, Disturbing the Universe, Harper & Row, (New York: 1979): 114. [2] L. Jane Hansen, Robert W. Hosken, and Craig H. Pollock, “Spacecraft Computer Systems,” Space Mission Analysis and Design, 3rd edition, ed. Wiley J. Larson, and James R. Wertz (El Segundo, CA: Microcosm Press; Dordrecht, The Netherlands: Kluwer Academic Publishers, 1999), 645-683. [3] Gary G. Whitworth, “Ground System Design and Sizing,” Space Mission Analysis and Design, 3rd edition, ed. Wiley J. Larson, and James R. Wertz (El Segundo, CA: Microcosm Press; Dordrecht, The Netherlands: Kluwer Academic Publishers, 1999), 624-631. [4] Herbert Hecht, “Reliability for Space Mission Planning,” Space Mission Analysis and Design, 3rd edition, ed. Wiley J. Larson, and James R. Wertz (El Segundo, CA: Microcosm Press; Dordrecht, The Netherlands: Kluwer Academic Publishers, 1999), 773. [5] Hansen et al., “Spacecraft Computer Systems,” 663-667. [6] Ibid., 675-678. [7] Whitworth, “Ground System Design and Sizing,” 624-631. [8] James R. Wertz, and Richard P. Reinert, “Mission Characterization,” Space Mission Analysis and Design, 3rd edition, ed. Wiley J. Larson, and James R. 
Wertz (El Segundo, CA: Microcosm Press; Dordrecht, The Netherlands: Kluwer Academic Publishers, 1999), 25-31. [9] Raymond Sablynski, and Robert Pordon, “A Report on the Flight of Delta II’s Redundant Inertial Flight Control Assembly (RIFCA),” Proceedings of the 1998 Position Location and Navigation Symposium, 20-23 Apr 1998, by the IEEE, 286-293. [10] Charlie Tarrant, and Jerry Crook, “Modular rocket engine control software (MRECS),” Proceedings of the 1997 Digital Avionics Systems Conference (DASC), 26-30 October 1997, by the AIAA/IEEE, 8.3-24-8.3-30 vol. 2. [11] Barry W. Boehm, Software Engineering Economics, Prentice-Hall, Inc., (Englewood Cliffs, NJ: 1981), 35-36. [12] Ibid., 41-45. [13] Ibid., 656-657. [14] Barry W. Boehm, “A Spiral Model of Software Development and Enhancement”, in IEEE Computer 21, no. 5 (1988): 61-72. [15] Mikio Aoyama, “Agile Software Process Model,” IEEE Proceedings of the 21st International Computer Software and Applications Conference, (1997): 454-459. 50 [16] Ivar Jacobson et al., The Unified Software Development Process, Addison Wesley Longman, Inc, (Reading, MA: 1999): xxv. [17] Boehm, Software Engineering Economics, 35-36. [18] Barry W. Boehm et al., Software Cost Estimation With COCOMO II, (Upper Saddle River: Prentice Hall PTR, 2000), 302-304. [19] Barry W. Boehm, “A View of 20th and 21st Century Software Engineering,” keynote address 28th International Conference on Software Engineering (ICSE 2006), (Shanghai China: 25 May, 2006), available from http://www.isr.uci.edu/icse-06/program/keynotes/Boehm- Keynote.ppt; Internet; accessed on 22 June 2007. [20] NASA SP-6105, NASA Systems Engineering Handbook, (NASA: June 1995), 24-32. [21] Boehm, Software Engineering Economics, 35-37. [22] Dorothy Graham, “The Forgotten Phase”, Dr. Dobb’s Portal: The World of Software Development, (July 1 st , 2002): Internet, available from http://www.ddj.com/architect/184414873. [23] Barry W. Boehm, “A Spiral Model of Software Development”, 63. [24] William J. Palm III, System Dynamics, McGraw Hill (New York, NY: 2005): 4-5. [25] Tarik Abdel-Hamid, “The dynamics of software development project management: An integrative system dynamics perspective,” Ph.D. dissertation, Sloan School of Management, MIT, 1984. [26] Ray Madachy, Software Process Dynamics, Wiley-Interscience, (Hoboken, NJ: 2008): 3-4. [27] Ibid., 18-19, 40-41. [28] Madachy, Ph.D Dissertation. [29] Madachy, Software Process Dynamics, 200, 253-255. [30] Boehm et al., Software Cost Estimation With COCOMO II, 13. [31] Lee Fischman et al., “Inside SEER-SEM,” Crosstalk, (April 2005): 26. [32] Capers Jones, “Software Cost Estimating Methods for Large Projects©,” Crosstalk, (April 2005): 8. [33] Boehm, Software Engineering Economics, 58, 344-345. [34] Boehm et al., Software Cost Estimation With COCOMO II, xxix. [35] Ibid., xxxii-xxxiv. 51 [36] Ibid., 13. [37] Ibid., 14-82. [38] Ibid., 40. [39] Ibid., 50-51, 57-58. [40] Ibid., 254-255. [41] Ibid., 254-261. [42] Ibid., 261-267. [43] Ibid., 267-268. [44] Donna K. Dunaway and Steve Master, “CMM®-Based Appraisal for Internal Process Improvement (CBA IPI) Version 1.2 Method Description,” Carnegie Mellon Software Engineering Institute Technical Report CMU/SEI-2001-TR-033, (November 2001): 1-2. [45] James McHale and Daniel S. Wall, “Mapping TSP to CMMI,” Carnegie Mellon Software Engineering Institute Technical Report CMU/SEI-2004-TR-014, (April 2005): 1. [46] Diane L. 
Gibson et al., “Performance Results of CMMI®-Based Process Improvement,” Carnegie Mellon Software Engineering Institute Technical Report CMU/SEI-2006-TR-004, (August 2006): 5-30. [47] Joseph P. Elm et al., “Understanding and Leveraging a Supplier’s CMMI® Efforts: A Guidebook for Acquirers,” Carnegie Mellon Software Engineering Institute Technical Report CMU/SEI-2007-TR-004, (March 2007): 2-3. [48] IEEE, “IEEE Standard Classification for Software Anomalies,” Software Engineering Standards Committee of the IEEE Computer Society, IEEE Std 1044-1993, (December 1993). [49] Boris Beizer, Software Testing Techniques 2nd Edition, International Thomson Computer Press, (Boston, MA: 1990): 27-58, 460-476. [50] IEEE Std 1044-1993, 7-23. [51] Beizer, Software Testing Techniques, 27-58, 460-476. [52] Norm Bridge and Corinne Miller, “Orthogonal Defect Classification Using Defect Data to Improve Software Development,” Proceedings of the International Conference on Software Quality 7, no. 0, (Montgomery, AL: October 1997): 198. [53] Luigi Buglione and Alain Abran, "Introducing Root-Cause Analysis and Orthogonal Defect Classification at Lower CMMI Maturity Levels," Proceedings of the International Conference on Software Process and Product Measurement, (Cádiz, Spain: November 2006), 6-7; available from http:// www.gelog.etsmtl.ca/publications/pdf/1037.pdf; Internet; accessed 8 July 2007. 52 [54] Ram Chillarege, “Orthogonal Defect Classification,” Handbook of Software Reliability Engineering, ed. Michael R. Lyu (Los Alamitos, CA: IEEE Computer Science Press; New York, NY: McGraw-Hill Publishing Company, 1996), 359. [55] M. E. Fagan, “Design and Code Inspections to Reduce Errors in Program Development”, IBM Systems Journal 15, no. 3, (1976): 182-211. [56] NASA-GB-A302, Software Formal Inspections Guidebook, NASA Office of Safety and Mission Assurance, (August 1993): 5. [57] Capers Jones, “Software defect-removal efficiency,” IEEE Computer 29, no. 4, (April 1996): 94-95. [58] Nancy S. Eickelmann, et al., “An Empirical Study of Modifying the Fagan Inspection Process and the Resulting Main Effects and Interaction Effects Among Defects Found, Effort Required, Rate of Preparation and Inspection, Number of Team Members and Product 1st Pass Quality,” Proceedings of the 27th Annual NASA Goddard/IEEE Software Engineering Workshop, (December 2002): 58-64. [59] David L. Parnas and Mark Lawford, “The Role of Inspection in Software Quality Assurance,” IEEE Transactions on Software Engineering 29, no. 8, (August 2003): 674-676. [60] Yuk Kuen Wong, and David Wilson, “Exploring the Relationship between Experience and Group Performance in Software Review,” Proceedings of the Tenth Asia-Pacific Software Engineering Conference, (2003): 500-509. [61] Don O’Neill, “Issues in Software Inspection,” IEEE Software, (January 1997): 18-19. [62] Adam Porter, and Lawrence Votta, “What Makes Inspections Work?,” IEEE Software, (November/December 1997): 99-102. [63] Kishor S. Trivedi et al., “Recent Advances in Modeling Response-Time Distributions in Real- Time Systems,” Proceedings of Recent Advances in Modeling Response-Time Distributions 91, no. 7, (July 2003): 1023-1037. [64] Myron Hecht et al., “Use of Combined System Dependability and Software Reliability Growth Models”, International Journal of Reliability, Quality and Safety Engineering 9, no. 4, (December 2002): 289-304. [65] Alan Wood, “Predicting Software Reliability,” IEEE Computer 29, no. 11, (November 1996): 69-77. 
[66] Dick Hamlet et al., “Theory of System Reliability Based On Components,” Proceedings of the 23rd International Conference on Software Engineering, (May 2001): 361-370. [67] Graham Clark et al., “The Möbius Modeling Tool,” Proceedings of the 9th International Workshop on Petri Nets and Performance Models, (September 2001): 241-250. 53 [68] Myron Hecht, Aleka McAdams, and Alexander Lam, “Use of Test Data for Integrated Hardware/Software Reliability and Availability Modeling of a Space Vehicle,” Proceedings of the 24 th Aerospace Testing Seminar, 8-10 April 2008, by The Aerospace Corporation. [69] Christof Ebert, “Experiences with Colored Predicate-Transition Nets for Specifying and Prototyping Embedded Systems”, IEEE Transactions On Systems, Man, and Cybernetics—Part B: Cybernetics 28, no. 5, (October, 1998): 641-652. [70] J. Dennis Lawrence, and Warren L. Persons [preparers], “Survey of Industry Methods for Producing Highly Reliable Software,” U.S. Nuclear Regulatory Commission, Fission, and Energy Systems Safety Program, Lawrence Livermore National Laboratory, NUREG CR- 6278UCRL-ID-117524, (Aug. 29, 1994), 12-14. [71] James Rumbaugh, Ivar Jacobson, and Grady Booch, The Unified Modeling Language Reference Manual, Addison Wesley Longman, (Reading, MA: 1999): 3. [72] Philippe B. Kruchten, “The 4+1 View Model of Architecture,” IEEE Software, (November, 1995): 42-43. [73] Walter A. Dos Santos, Osvandre A. Martins, and Adilson M. Da Cunha, “A Real Time UML Modeling for Satellite On Board Software,” Proceedings of the 2 nd International Conference on Recent Advances in Space Technologies, (June 2005): 228-233. [74] Barry Boehm, edited by Wilfred J. Hansen, “Spiral Development: Experience, Principles, and Refinements: Spiral Development Workshop February 9, 2000,” Special Report CMU/SEI-00- SR-08, (July 2000): 3-6. [75] Guoqiang Shu et al., “Validating objected-oriented prototype of real-time systems with timed automata,” IEEE Proceedings of the 13th International Workshop on Rapid System Prototyping, (2002): 99. [76] Douglas J. Buettner et al., “Integrated Technical Computing Environments as a Tool for Testing Algorithmically Complicated Software,” Proceedings of the Seventeenth International Conference on Testing Computer Software, (June 2000). [77] NASA Office of Safety and Mission Assurance, Formal Methods Specification And Verification Guidebook For Software And Computer Systems Volume I: Planning And Technology Insertion, NASA Technical Publication TP-98-208193, (December 1998): 1. [78] MIT, Larch Home Page available from http://www.sds.lcs.mit.edu/spd/larch/; Internet; accessed 1 July 2007. [79] Thomas McGibbon, “An Analysis of Two Formal Methods: VDM & Z”, in DoD Data Analysis Center for Software report DACS-CRTA-97-1, (August 1997): 1-2. [80] Marc Zimmerman et al., “Making Formal Methods Practical,” Proceedings of the 19th Digital Avionics Systems Conference 1, (October 2000): 1B2/1-2. [81] NASA, Technical Publication TP-98-208193, 2-3. 54 [82] Beizer, Software Testing Techniques, 10-11. [83] W.E. Howden, "Functional Programming Testing," Technical Report, Dept. of Mathematics, University of Victoria, Victoria, B.C., Canada, DM 146 IR, (August 1978). [84] R. L. Kruse, and A. Ryba, Data Structures and Program Design In C++, Prentice-Hall, Inc., N.J. 07458, 1999. [85] Myron Hecht and Douglas Buettner, “Software Testing in Space Programs,” Crosslink Vol. 6, No. 3, (2005), Internet; available online at http://www.aero.org/publications/crosslink/fall2005/06.html. [86] C. 
Kaner, “An introduction to Scenario-based Testing”, Florida Tech., June, 2003, available online at http://www.testingeducation.org/articles/scenario_intro_ver4.pdf [87] M.L. Shooman, "Program Testing," Software Engineering, McGraw Hill, Inc., (Singapore: 1983): 223-295. [88] Brian Dobbing, and Alan Burns, “The Ravenscar Profile for Real-Time and High Integrity Systems”, Crosstalk, (2003): 10, available online at http://www.stsc.hill.af.mil/crosstalk/2003/11/0311CrossTalk.pdf. [89] R.J. Adams, et al., “Software Development Standard for Space Systems”, The Aerospace Corporation Technical Operating Report TOR-2004(3909)-3537 Rev. B., (El Segundo, CA: 2005): 9, 24-26. [90] Boris Beizer, Black-Box Testing: Techniques for Functional Testing of Software and Systems, John Wiley & Sons, Inc. (New York: 1995): 7. [91] Beizer, Software Testing Techniques, 51. [92] Ibid., 48. [93] E.F. Miller Jr., et. al., "Application of Structural Quality Standards to Software", Software Engineering Standards Applications Workshop, IEEE Catalog No. 81CH1663-7, July 1981. [94] Beizer, Software Testing Techniques, 173-212. [95] J.H. Poore, and C.J. Trammel, “Engineering Practices for Statistical Testing”, Crosstalk, (1998): available online at http://www.stsc.hill.af.mil/crosstalk/frames.asp?uri=1998/04/statistical.asp. [96] John D. Musa, Software Reliability Engineering: More Reliable Software Faster and Cheaper, AuthorHouse, (Bloomington, IN: 2004): 394. [97] Jeffrey Voas, et al., “Predicting how Badly “Good” Software can Behave”, IEEE Software 14, no. 4, (1997): 73-83. [98] J. H. Barton, et al., “Fault Injection Experiments Using FIAT”, IEEE Transactions on Computers 39, no. 4, (1990). 575-582, 55 [99] Henrique Madeira, et al., “On the Emulation of Software Faults by Software Fault Injection,” Proceedings of the Intl. Conf. on Dependable Systems and Networks, (New York, NY: 2000): 417-426. [100] Beizer, Software Testing Techniques, 75. [101] Ibid., 59-120. [102] K. Hayhurst, et al., “A Practical Tutorial on Modified Condition/Decision Coverage”, NASA TM-2001-210876, NASA Langley Research Center, May, 2001, available online at http://ntrs.nasa.gov/index.cgi?method=advanced searching for the title, last visited September 04, 2006. [103] Robert N. Charette, Software Engineering Risk Analysis and Management, Intertext Publications/Multiscience Press, Inc and McGraw-Hill Book Company (New York, NY: 1989): 52-56. [104] Bin Li et al., “Integrating Software into PRA,” Proceedings of the 14th International Symposium on Software Reliability Engineering (2003): 457. [105] Joseph R. Fragola, “Space Shuttle Program Risk Management,” PROCEEDINGS of the Annual RELIABILITY and MAINTAINABILITY Symposium, (1996): 133-142. [106] Jeevan Perera, and Jerry Holsomback, “Use of Probabilistic Risk Assessments for the International Space Station Program,” Proceedings of the 2004 Aerospace Conference, (2004): 516. [107] Michael H. Packard, and Edward J. Zampino, “Probabilistic Risk Assessment (PRA) Approach for the Next Generation Launch Technology (NGLT) Program Turbine-Based Combined Cycle (TBCC) Architecture 6 Launch Vehicle,” PROCEEDINGS of the Annual RELIABILITY and MAINTAINABILITY Symposium, (2004): 604. [108] Bin Li et al., “Integrating Software into PRA,” 457-467. [109] Lon D. Gowen et al., “Preliminary Hazard Analysis for Safety-Critical Software Systems,” Proceedings of the Eleventh Annual International Phoenix Conference on Computers and Communications (Scottsdale, AZ: 1992): 502. [110] Peter L. 
Goddard, “Software FMEA Techniques,” IEEE PROCEEDINGS Annual RELIABILITY and MAINTAINABILITY Symposium (2000): 118. [111] Martin S. Feather, “Towards a Unified Approach to the Representation of, and Reasoning with, Probabilistic Risk Information about Software and its System Interface,” IEEE Proceedings of the 15th International Symposium on Software Reliability Engineering (ISSRE’04: 2004). [112] J. A. McDennid et al., “Experience with the application of HAZOP to computer-based systems,” Proceedings of the Tenth Annual Conference on Computer Assurance (COMPASS: June 1995): 37. [113] Goddard, “Software FMEA Techniques,”: 418-420. 56 [114] Alice Rueda, and Mirek Pawlak, “Pioneers of the Reliability Theories of the Past 50 Years,” IEEE PROCEEDINGS Annual RELIABILITY and MAINTAINABILITY Symposium (RAMS: 2004): 104-105. [115] Frank J. Groen et al., “QRAS – The Quantitative Risk Assessment System,” in IEEE PROCEEDINGS Annual RELIABILITY and MAINTAINABILITY Symposium (2002): 349. [116] Nancy G. Leveson and Janice L. Stolzy, “Safety Analysis Using Petri Nets,” IEEE Transactions on Software Engineering SE-13, no. 3, (March 1987): 386. [117] Martin L. Puterman, Markov Decision Processes: Discrete Stochastic Dynamic Programming, John Wily & Sons, Inc., (Hoboken, NJ: 1994, 2005): 587. [118] Anonymous, Markov chain, on Wikipedia, (Internet) available on line at http://en.wikipedia.org/wiki/Markov_chain, last visited August 25, 2007. [119] Christoph Lindemann et al., “Numerical Methods for Reliability Evaluation of Markov Closed Fault-Tolerant Systems,” IEEE Transactions on Reliability 44, no. 4, (December 1995): 694. [120] Werner Sandmann, “On Optimal Importance Sampling for Discrete-Time Markov Chains,” Proceedings of the Second International Conference on the Quantitative Evaluation of Systems (QEST’05: 2005): 1. [121] Alan C. Tribble et al., “Software Safety Analysis of a Flight Guidance System,” Proceedings of the 21st Digital Avionics Systems Conference 2, (2002): pg. 13.C.1-5. [122] Bin Li et al., “Integrating Software into PRA,” 458. [123] Mark D. Hansen, “Survey of Available Software-Safety Analysis Techniques,” IEEE PROCEEDINGS Annual RELIABILITY and MAINTAINABILITY Symposium, (1989): 46. [124] R. E. Fields et al., “A Task Centered Approach to Analysing Human Error Tolerance Requirements,” P. Zave, editor, Second IEEE International Symposium on Requirements Engineering (RE'95), (1995): 18. [125] Bin Li et al., “Integrating Software into PRA,” 458. [126] Athanasios Papoulis, Probability, Random Variables and Stochastic Processes 3rd edition, (New York: McGraw-Hill, Inc., 1991), 83-84. [127] Yong Ou, and Joanne Bechta Dugan, “Sensitivity Analysis of Modular Dynamic Fault Trees,” Proceedings of the IEEE International Computer Performance and Dependability Symposium (2000): 35. [128] Bin Li et al., “Integrating Software into PRA,” 457-465. [129] Suellen Eslinger, “Software Acquisition Best Practices: Experiences from the Space Systems Domain”, Aerospace Report TR-2004(8550)-1, Proceedings of the Acquisition of Software- Intensive Systems Conference, (January 2003): 4, Internet; available online http://www.sei.cmu.edu/programs/acquisition-support/conf/2003-presentations/eslinger.pdf. 57 [130] Tom Bernard et al., “CMMI® Acquisition Module (CMMI-AM), Version 1.0,” Carnegie Mellon University Software Engineering Institute Technical Report CMU/SEI-2004-TR-001 (February 2004): iii. [131] Boehm, Software Engineering Economics, 289-340. 
[132] Barry Boehm and Rony Ross, “Theory-W Software Project Management: Principles and Examples”, IEEE Transactions On Software Engineering 15, no. 7, (July 1989): 902-916. [133] Boehm, Software Engineering Economics, 279-286. [134] James E. Tomayko, and Orit Hazzan, Human Aspects of Software Engineering, Laxmi Publications (December 30, 2005): 45-48. [135] Hans Sassenburg, “Design of a Methodology to Support Software Release Decisions: Do the Numbers Really Matter? ,” Ph.D. Thesis, University of Groningen, (2005): 44-45. [136] Jacco Buisman, “Game Theory and Bidding for Software Projects: An Evaluation of the Bidding Behaviour of Software Engineers,” M.S. Thesis Blekinge Institute of Technology, (August, 2002). [137] Mark Grechanik and Dewayne E. Perry, “Analyzing Software Development as a Noncooperative Game,” Sixth International Workshop on Economics-Driven Software Engineering Research (EDSER-6) W9L Workshop - 26th International Conference on Software Engineering, (Edinburgh, Scotland, UK: May 2004): 29-33. [138] Austin, 195-207. [139] Columbia Accident Investigation Board Report Vol. 1, National Aeronautics and Space Administration and the Government Printing Office (Washington D.C.: August 2003): 195- 204. [140] R.P. Feynman, “ Personal Observations on Reliability of Shuttle,” Appendix F. Report of the PRESIDENTIAL COMMISSION on the Space Shuttle Challenger Accident 2, Internet; available at http://history.nasa.gov/rogersrep/v2appf.htm, last visited August 26, 2007. [141] Jeff Forrest, “THE CHALLENGER SHUTTLE DISASTER: A Failure in Decision Support System and Human Factors Management,” Internet; available from http://frontpage.hypermall.com/jforrest/challenger/challenger_sts.htm; last visited August 26, 2007. [142] Richard P. Feynman, “WHAT DO YOU CARE WHAT OTHER PEOPLE THINK?” Further Adventures Of A Curious Character, Bantam Books (New York: 1989): 213-219. [143] Leveson, “Software in Spacecraft Accidents”, 9. [144] Nancy G. Levenson, “A Systems-Theoretic Approach to Safety in Software-Intensive Systems,” IEEE Transactions on Dependable and Secure Computing 1, no. 1, (January-March 2004): 66-85. 58 [145] Winchester , “Software Testing Shouldn't Be Rocket Science.” [146] Sassenburg, “Design of a Methodology to Support Software Release Decisions”, 4. [147] Ibid., 209-218. [148] Buisman, “Game Theory and Bidding for Software Projects,” i, 21-24. [149] E. Altman, T. Boulogne, R. El Azouzi, T. Jiménez, L. Wynter, “A survey on networking games in telecommunications,” Computers and Operations Research 33, no. 2, (February 2006): 286- 287. [150] Orit Hazzan and Yael Dubinsky, “Social Perspective of Software Development Methods: The Case of the Prisoner Dilemma and Extreme Programming,” Proceedings of XP'2005, (2005): 74-81. [151] Sang-Pok Ko, Hak-Kyung Sung, and Kyung-Whan Lee, “Study to Secure Reliability of Measurement Data through Application of Game Theory,” Proceedings of the 30 th EUROMICRO Conference (EUROMICRO’04), Vol. 00, (2004): 380-386. [152] Barry Boehm and Apurva Jain, “An Initial Theory of Value-Based Software Engineering”, USC-CSE Technical Report 2005-505, (March 2005): 2-6. [153] Barry Boehm and Apurva Jain, “A Value-Based Software Process Framework”, Proceedings Of The Software Process Change, International Software Process Workshop and International Workshop on Software Process Simulation and Modeling (SPW/ProSim 2006), (Shanghai, China, May 20-21, 2006): 1-10. [154] Tomayko and Hazzan, Human Aspects of Software Engineering, 45-46. [155] Austin, 195-207. 
[156] Barry Boehm et al., “Using the WinWin Spiral Model: A Case Study”, IEEE Computer 31, no. 7 (1998). [157] Richard Turner and Barry Boehm, “People Factors in Software Management: Lessons From Comparing Agile and Plan-Driven Methods,” Crosstalk, (December, 2003): 4. [158] Buisman, “Game Theory and Bidding for Software Projects,” 24-26. [159] Beizer, Software Testing Techniques, 24-26. [160] Peter B. Lakey, and Ann Marie Neufelder, System and Software Reliability Assurance Notebook, Rome Laboratory, (Rome, New York: 1997); appendix p. 17. [161] Ted Keller, and Norman F. Schneidewind, “Successful Application of Software Reliability Engineering for the NASA Space Shuttle”, Proceedings of the Eighth International Symposium on Software Reliability Engineering, (1997): 112. [162] Beizer, Black-Box Testing, 242. 59 [163] Suellen Eslinger, “Space System Software Testing: The New Standards”, Proceedings of the 23rd Aerospace Testing Seminar, 10-12 October 2006, by The Aerospace Corporation: 3-27 – 3-28. 60 C H A P T E R 3 : C A S E S T U D I E S In the long run, the only limits to the technological growth of a society are internal. A society has always the option of limiting its growth, either by conscious decision or by stagnation or by disinterest. A society in which these internal limits are absent may continue to grow forever. [1] 3. Introduction Case studies in this dissertation rely on documentation associated with flight software. A space system acquisition provides volumes of data including briefing charts, plans, designs, meeting minutes, technical memorandums, databases of action items and defects, and the software code itself. In the days when paper ruled the engineering process, before computers and electronic data files became the prevalent method for documentation, it has been said that the mass of paper needed to build a satellite weighed more than the satellite itself. Today, not only are there the original paper delivered data types but more likely one will find electronic documents in a number of possible formats (and versions of formats). While the electronic format provided numerous advantages over paper for the research presented in this dissertation, ultimately data availability and its format depends on various factors such as: what was on contract to be provided, the technology available at the time, the processes followed by the selected offeror, and others. Hence, in this chapter we provide; (1) an overview of the types of data available and their sources, (2) the strategy employed for selectively analyzing it, and (3) a description of the data analysis method with the results accompanied with a discussion. In the selection of case study projects for analysis, every effort was made to utilize triangulation of data sources (more than one supporting source) to reach the findings. 3.1 Data Availability and Types of Data In a large engineering endeavor such as building a satellite system that must endure the harsh space environment, the sources of information is actually quite diverse. We focus on a very particular aspect, the electronic documents generated during the engineering of the satellite’s flight software that have been accumulated in The Aerospace Corporation’s Software Reliability Research Database [2], 61 interviews where they were needed to corroborate what was actually occurring during the software’s development, and the observations made by The Aerospace Corporation personnel and the researcher’s support to and from within SMC program offices. 
Although more data is available, the data types that were the most consistently available for all of the software projects are included in Table 8.

Table 8: Data Availability for Flight Software Projects in this Study

Project Labels                                                A    B    C    D    E    F    G
Contractor's Statement of Work or Objectives (CSOW or SOO)    Y    Y    Y    Y    Y    Y    Y
Software Development Plan (SDP)                               Y1   Y1   Y1   Y1   Y1   Y1   Y
Preliminary Design Review (PDR)                               Y    Y    Y1   Y    Y    Y    N
Critical Design Review (CDR)                                  Y    Y    Y1   Y    Y    Y    Y
Algorithm Design Document (ADD)                               Y1   Y1   Y1   Y1   N    N    N
Software Design Description (SDD)                             Y1   Y1   Y1   Y1   N    N    Y
Software source code                                          Y1   Y1   Y1   Y1   Y    N    N
Software Defect Repository (SDR)                              Y1   Y1   Y1   Y1   Y*   N    N
Various project briefings, reports and metrics                Y    Y    Y    Y    Y    Y    Y
Researcher observations and interviews                        Y    Y    Y    Y    Y    N    Y

N: No data. (Data was not made available to the researcher.)
Y*: Limited data available to the researcher.
Y1: Multiple versions available from numerous revisions.
Projects C and D: Both went through a re-design after architectural design issues were found.
Bold blue text: The focus of the quantitative data analysis, selected to support dynamic modeling.

3.2 Case Study Selection Approach

After identifying what data was consistently available, the software projects were reviewed for lifecycle commonality. The PDR and CDR are used as the common points of comparison between projects because these milestone events provide similar documents. The spiral model was identified in one of the projects' early software development plan versions; however, the actual model followed by the team for project-A (whose SDP's spiral model prescription is described in appendix-A) was more incremental in nature, spanning multiple builds, and further there was no evidence the project utilized the model's prescribed risk management methodology, nor were all the SCS taken into account. Eventually, this contractor removed the spiral model methodology from the SDP in a later revision and described the incremental approach instead. For comparison, both the incremental and waterfall models use a "do requirement(s)", then "do design(s)", and then "do the implementation(s)" paradigm. Hence, the data analysis focused on the commonality found at these milestone decision points. For example, a comparison of design review artifacts (usually PowerPoint viewgraphs) at the preliminary or critical design level provides similar documentation of the software's design and design approach across the various software projects. Another example of a common document is the contractor's statement of work (CSOW) or its equivalent, the statement of objectives (SOO), which is used to spell out the offeror's contractual obligations. The analysis of these documents provided one very meaningful initial finding: some projects had in common that there were no "shall" statements on the contract for software. This occurred on the projects that used a SOO. Furthermore, these documents significantly emphasized schedule and cost, with little to no emphasis placed on quality. Projects that used a CSOW, however, had various numbers of "shall" statements in the contract for software. Hence, here we find evidence of an initial (and, we argue later, incorrect) emphasis on schedule and cost by the customer. Following the contractual documents, the Software Development Plan (SDP) is the next logical document to review between these projects. We find that there are significant structural and content differences between the projects, programs, and contractors.
Even from the same contractor there are significant differences in SDP layout and content. Next, the design material was reviewed. The primary items here are the Algorithm Design Description (ADD) and evidence of UML diagrams (or other formal design language – not powerpoint descriptions of the code) in the software design documents or in the design review viewgraphs provided to the government. Finally, we review defect reports from the projects where that data was available. 63 3.3 Qualitative Data Research and Analysis Qualitative data analysis provides a method for systematically analyzing data sources. Selected documents for each of the software projects listed in Table 8 were first characterized, categorized, and then ‘coded’ 3 . Open coding is the analytical process of sorting and attempting to build a picture of the data through the identification of phenomena and their properties. Phenomena are the central ideas in the data and will be represented as concepts. Categories are the concepts that represent these phenomena. Properties are the characteristics of a category, and the delineation of these properties gives the categories meaning. Dimensions of a property are the range along which general properties of a category can vary. “Axial coding is the act of relating categories to sub-categories along the lines of their properties and dimensions. [4]” Strauss and Corbin’s use of the word ‘categories’ is really just a synonym for phenomenon, or something that is considered significant. Coding looks at how these categories (or phenomenon) are linked. A sub-category is a category, however, it’s used to answer questions about the phenomenon like who, what where, when, why and with what consequences [4]. Strauss and Corbin [5] also point out that early in this process the analyst may or may not know what they are looking for, but the early process of open coding is similar to sorting pieces of a puzzle into colors, and shapes. After gathering the available data (over 9 Gigabytes for these 7 projects), sets of hypotheses (statements of relationships between the categories and sub-categories) were initially coded using a qualitative analysis tool. While the tool provided the ability to code ASCII text documents (not Microsoft Word® documents), image files (for example in JPEG format), and audio files; the variety of document types requiring analysis in this study (Microsoft’s Word®, PowerPoint®, Excel® spreadsheets, Adobe® PDF files, and numerous multi-megabyte large ASCII data files) required the 3 This is terminology from the field of qualitative research and should not to be confused with ‘the software was coded’, which is used in software development to indicate completion of the task of writing the software source code. 64 researcher to abandon the use of the tool for the traditional score by hand method. In some cases multi- megabyte MS Word files were converted directly into ASCII text files for parsing by Perl scripts written by the researcher to extract out defect data for quantitative analysis. An example Perl script is provided in appendix C. 3.3.1 Open Coding Methodology After starting with methodically opening each of the converted pages from the documents, codes were placed on the pages that provide meaningful information concerning the software development methodology. The name format that was used has a hierarchy of Highlevel_Lowerlevel, for example, Revision_Date and Revision_History. 
Revision is the higher-level code category, while History or Date is an associated lower level sub-category. Table 9 provides a short list of sample codes from the qualitative analysis process. Table 9: Sample Codes Code Name Description Revision_Date Date and revision for the document. This provides a dynamic reference for comparison to other documents. Defect_Prevention Indicates that there is information that the document underwent a defect prevention step. For example wording to suggest an inspection or peer review process. DB_Default Of particular importance to satellites are the database values used. This category provides information about the default value contained herein. As these ‘open codings’ are created, they were annotated with specific meanings. For example, a Defect_Prevention annotation added to a very early page from the Algorithm Design Document (ADD) for project-A read as: “Use of Peer Review indicated. The general quality of the document should then suggest that it went through a peer review process. The level of subsequent quality will indicate the thoroughness applied.” What does this mean? For this particular example, the authors stated right up-front that they had peer reviewed it, and also indicated the represented organizations of participants (and in some cases 65 named names) included in that review. This was not only an indication that the SDP process was followed (i.e. the document was peer reviewed), it also had sections that clearly indicated that the developers were proceeding without completely finishing the design. This was a decision that was agreed to as a group and provides evidence for “group-think”. If during the analysis, there was an indication of something that triggers a possible correlation with another event, textual tags in brackets were added. An example of this is the following sub-category tag that was added from the analysis of page 16 in this document [[[[Lack of up front design]]]]. The number of brackets provide a visual aid for later review of our scores to call out those items we consider significant. Recall that our comparison is the utilization of “do requirements”, “do design”, then “do implementation”; defects found in the later phases are fed back to the prior phase to fix. In this particular case, there is a clear indication that the engineers “jumped the gun” and called a version that predated the version we reviewed as the CDR version, with some specific comments about items that would be coming in a later revision. Hence, this was one indication that the temporal causality of the design modeling process had been clearly violated; the engineers purposely put items into a later increment, and thus, decided to proceed at risk. However, it does not appear that the document was ever updated to include the missing information (the researcher found no other subsequent versions for this particular document in the software development files (SDF) archived in our database, and a further comparison to the contractor’s SDF could not find trace of subsequent versions). The number of brackets used in coding the documents provides re-review visual triggers to indicate an initial perceived importance of the finding. When searching through the software defects that were found during testing for this particular project there were clear indications of schedule and quality relationships between the decisions made during the design phase that directly impacted the implementation phase. 
These were usually in the form of comments made by the engineers during the execution of these projects. These relationships were uncovered when looking for connections in defect flow between items such as the project’s algorithm design description and software defects that were traced to the item. Remembering the Software Defect Introduction and Removal Model in Figure 4 of 66 chapter 2, this relationship makes perfect sense. Thus, these clear relationships raise questions of whether or not the decision to press forward was due to schedule and cost pressures, or from a complete lack of understanding of what the downstream implications would be. 3.3.2 Quantitative Data from Qualitative Research During the qualitative research phase one can translate into quantitative data some of the relationships or items that were discovered via the key word search method. This data can be used for distinguishing between successful software projects, and unsuccessful ones. For example, one finding using this method was the number of “shall” statements used in software sections of the contracts. With these numerical differences, classes of software acquisition projects can start to be distinguished. Methods for creating decision criteria to differentiate between the information for software development artifacts and an appropriate level of software product maturity can be drawn from the vast literature on pattern classification methods that are available. One example is the Linear Discriminant Analysis method. Maturity of the documents can be identified using searches for terms like TBD (To Be Determined), the number of errors found by independent analysis, or from an analysis of the changes made to each subsequent version of the document (showing up as change bars with deletions in MS Word documents with change tracking enabled in the case of this research). These methods can be used to provide criteria for allowing a project to transition into the next phase of software development. 3.3.3 Hypothesis Testing The relationships found during the coding analysis of the documents provide the “hypotheses” that can be tested. For example, one hypothesis that might be made is a relationship between cost and schedule constrained software projects and the engineers’ likelihood of “cutting corners” in the initial software lifecycle phases of development (proceeding to the next phase without trying to complete the current phase). Coupling these relationships allows one to provide a theoretical explanation of the phenomenon, which can be tested further using quantitative data, or can allow one to suggest avenues for further research. The research null hypothesis for this dissertation is simply, “There is no evidence of 67 ‘corner cutting’ by contractor staff in schedule and cost constrained development environments and thus no adverse effect on the quality of the satellite flight software.” 3.3.4 Public Summary of Qualitative Findings There was sufficient data available from seven flight software projects to make comparisons between the projects and to a rigorous design model at certain analogous locations. The most illuminating documents from the qualitative analysis phase were the algorithm description documents, the software design documents, and the software defect databases. The summary of results in this section are used in chapter 4 to support our modeling. 
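Before turning to the individual findings, the keyword-search step described in section 3.3.2 can be illustrated with a minimal sketch. It is written in the same spirit as, but is far simpler than, the scripts actually used on this research; the directory name and file layout are hypothetical placeholders, and the script simply tallies occurrences of terms such as "shall" and "TBD" in documents that have already been converted to ASCII text.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Count maturity-related keywords in ASCII-converted project documents.
    # The directory and file layout here are hypothetical placeholders.
    my $dir      = 'converted_docs';
    my @keywords = ('shall', 'TBD');

    opendir(my $dh, $dir) or die "Cannot open $dir: $!";
    my @files = grep { /\.txt$/ } readdir($dh);
    closedir($dh);

    for my $file (@files) {
        my %count = map { $_ => 0 } @keywords;
        open(my $fh, '<', "$dir/$file") or die "Cannot open $file: $!";
        while (my $line = <$fh>) {
            for my $kw (@keywords) {
                # 'shall' is matched case-insensitively; 'TBD' as written.
                my @hits = ($kw eq 'TBD') ? ($line =~ /\bTBD\b/g)
                                          : ($line =~ /\b$kw\b/gi);
                $count{$kw} += scalar @hits;
            }
        }
        close($fh);
        printf "%-40s %s\n", $file,
            join('  ', map { "$_=$count{$_}" } @keywords);
    }

Counts of this kind are what populate Table 10 later in this chapter, and what allow classes of acquisition projects (for example, SOO-based contracts with no software "shall" statements versus CSOW-based contracts with many) to begin to be distinguished.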
3.3.4.1 Designs Abandoned or Not Used To Full Potential In these requirements based processes, the engineers spent an inordinate amount of time and effort revising the requirements and refining the documentation. The engineers likely considered a number of these documents as ‘required by the customer’. In particular, it was found that the software’s design seems to not be valued, and in a number of schedule and cost constrained projects, the designs were prematurely abandoned. By abandoned we mean that the design was not kept up to date as the project progressed following the CDR. Modern UML (or comparable methods of representing a software design) were not being used to their full potential. There was evidence that effort for designs and their associated documents were abandoned in schedule and/or cost constrained development efforts. This was confirmed by one of the project’s engineers. In these cases, it appears that the engineers focus their efforts on development of the source code, and simply reverse engineer the design for the government. There is also evidence suggesting that there is an abandonment of sound software engineering practices in organizations with lots of code that is available for re-use, or from their own internal development efforts. Moreover, there is a clear lack of knowledge or trust not only by the engineers, but in the entire management chain about the value of the object oriented design methodology and the UML for some of these organizations. Some possible alternative explanations or contributing factors for employee design abandonment include; direction by management, agreement from improperly educated government 68 representatives, a ‘lack of teeth’ in our costly contracts, our university system is not training our contractor’s management and engineers properly, and possibly the contractors are simply hiring low-end engineers for our military space programs to save on cost. All of which are likely contributors or alternative hypotheses; none of which, however, is a very appealing situation. Projects-B, C, and G all exhibited solid to very rigorous UML designs that included use cases and sequence diagrams. Project-C’s design followed after serious design flaws were uncovered, while project-G’s UML design was contractually required. These projects provided higher quality software qualification efforts. (Project-G did however split the qualification testing effort into two pieces when some of the functionality was not ready to meet the initial qualification milestone.) Project-B had learned lessons from project-A and instigated a solid UML design with use cases, sequence diagrams (both of these were lacking in the initial designs of Projects A, C, and D) coupled with peer reviews, third party unit testing, and rigorous integration testing, which led to a significantly reduced number of product defects found during qualification and system testing. We will see later in this chapter what the consequences are for not doing a design prior to starting the code. 3.3.4.2 Engineers See No Need for Government Required Documentation There is evidence that suggests the schedule-constrained engineers tasked with developing the software do not welcome the design documentation. In one meeting, a young engineer asked why they needed to do all these design documents for the government. 
Hence, there appears to be a clear lack of understanding and/or training about what design description documents are supposed to be used for, why they need to be created, and how they are intended to be used within the organizations they are intended to support. Younger engineers also do not have the experience of working on a lengthy development project, and tend to only have experience with the short-term development tasks provided in a semester's school assignment. Further, the lack of use of items like test plans, and the lack of understanding within the development organization (even at multiple levels of the software's management chain) of what their program's software development plan actually contains, suggest that the need for these documents is not understood; the issues with their subsequent up-keep point to fundamental training, knowledge and experience problems.

3.3.4.3 Discussion about the Lack of Test Thoroughness

Government technical representatives repeatedly identify a lack of unit testing and negative testing. The government required that project-A repeat the unit test effort, not just once but twice. The first re-do was supposed to just retest those algorithmic areas identified as changed but not re-unit tested. On that attempt, the engineers spent a great deal of time just trying to get the unit tests to work again. Following that effort, an independent review team identified significant issues that called into question the adequacy of the unit tests in the first place; the government then directed that the testing be completely redone. An independent team member indicated that when a software developer was asked why they did not test the mathematical portions of an algorithm, the response was that these equations were complicated and too hard. The tip-off to program managers that something was amiss should have been the test coverage data that the developers provided (as Beizer [6] states, this data should have indicated 100% statement coverage for unit testing). This was the metric that was used by the government's technical team to identify that there were unit test issues. The result of the third attempt was the identification of 141 more defects, 18 of which were acknowledged as medium severity (severity level 3), and a final unit test code coverage that was very close to 100%. For those cases where the unit's code could not be fully tested, each case was carefully inspected with knowledgeable customer representatives to assess the risk as low.

A note on software defect severity levels: a severity 1 defect is one that would lead to a mission-ending failure and has no work-around; a severity 2 defect is a mission-degrading defect with no work-around; a severity 3 defect is one that has an operational work-around (this includes defects that could lead to the loss of the mission or be a significant availability issue, but for which, in ground testing of the software, a work-around is available); a severity 4 defect is an inconvenience or nuisance and does not require a work-around; and a severity 5 defect is a cosmetic issue whose fix is merely nice to have. All uncovered severity 1 and 2 defects are fixed, while fixing severity 3 and below depends on the defect and on the contractor. It is also important to note that in a number of cases the contractor and the government's technical representatives disagreed on the severity level of the defects. Further, these databases did not include a priority numbering scheme, which the author has used in his experience to better discern between defects within a severity category. For example, a severity 4, priority 1 defect (1 indicating high priority, 5 indicating low priority) would need to be fixed, while a severity 3, priority 5 defect may not need to be fixed.

A couple of examples of the defects found during the third round of unit testing include one that states, "In a negative test case, which is designed so that [(x + y)] exceeds 256. The compiler doesn't detect the problem and goes inside the "if" statement, even though it should have gone to the "else" statement. This eventually results in an exception condition caused by constraint error." Another defect identified in this third round is titled simply, "Heaters turned off during Safehold 1." (Safehold is the autonomous or ground-commanded action that places the satellite into an autonomous, and usually simple, safe state until it receives ground commands to do something else.) Further, the defect database from project-E included a comment indicating that it was obvious to the system tester who found a problem that the sub-system's software developer did not even do unit testing. This is a clear-cut example of "corner cutting" by at least one engineer that was not identified until system integration testing. Even in the project-G low-schedule-pressure environment, which exhibited pristine (few to no errors could be found) UML design description documentation, there was concern from technical representatives in the areas of negative testing of the software and the actual use of the contractually required UML design to create the flight software. Here the technical representative expressed a concern that the contractually obligated use of UML was done simply to check the contractual box. In more than one case, the projects did not utilize UML use cases to refine the requirements and trace them to the design. Project-A incurred a significant cost at the end of the project (in preparing for its second attempt at qualification testing – the first attempt was flatly rejected by the government technical representatives) to revise the requirements to make them testable, and to create new requirements for functionality that had none. On this project, management dictated at some point that the software requirements needed to be frozen; the result was just that – a requirements specification document that did not change. However, when new functionality needed to be added in order to meet mission or interface requirements arising from late-lifecycle test discoveries, the engineers were not allowed to modify the requirements in the requirements specification. Further, project-A (in an attempt to save time and money) elected to move a significant number of requirements into unit testing (using the unit tests from the third round of unit testing) for verification. Qualification testing of requirements traditionally uses engineers who did not write the code, to provide some independence in the verification of those requirements. Hence, the unit tests needed to be modified by an independent team member to meet the stringent needs of qualification testing rigor. The contractor then also needed to prove that the unit would behave in a similar manner in the integrated system (which in many situations is not an easy requirement to demonstrate), as there is no guarantee that the standalone unit would behave in a similar manner on the actual flight hardware, which executes in real time.
The contractors' apparent lack of an up-front design likely contributes to the number of software projects that did unit testing and then went directly into integrating those units on the hardware-in-the-loop test resources. On a number of projects there has been little to no use of the software's design to define unit-to-unit and component integration tests in a build-up approach, as prescribed by the V-testing model (or V-model) and also discussed by Beizer. Software programmers appear to lack the knowledge that the design should also be used to assist with integration testing. The design, combined with a build plan (indicating the order in which the software should be implemented), is ideally used by software testers for integration test planning. A recurring theme from a number of government technical representatives at The Aerospace Corporation has been a clear lack of negative testing in all phases of software testing. Project-A kept track of the negative-to-positive test ratio for all units during the third round of testing and found a final ratio of approximately 2 positive tests for every negative test, although the ratio varied by algorithm. Project-D (after also re-doing unit testing) carefully documented the ratio for unit tests generated over a six-month period as 1.1 positive tests for every negative test across 72 units. This recurring theme (lack of negative testing) suggests that the engineers may not be executing as many as one-third to one-half of the tests necessary to fully test the software at the unit level, and are thus only testing that the unit works under nominal conditions. Peer reviews should catch this issue, but in schedule-pressure environments the contractor may abandon formal reviews in favor of less formal (and less effective) colleague reviews, or may drop reviews altogether (as was discovered on project-A). Even more alarming is when the project does not have any engineers who know how to properly test software. In these cases, peer reviews will not generate the needed checks and balances on rogue or "corner cutting" engineers in the phases prior to integration, qualification, or even system testing of the software. A simple and effective way to ensure that the engineers hired onto a program know how to do testing is to retrain them; however, in cost-constrained environments the training budget is typically cut. Software Quality Assurance (SQA), one might think, should find these issues and raise the red flag. However, SQA organizations that simply rely on audits, or that do not hire knowledgeable engineers to fill the SQA role, appear ill equipped to identify software test shortcomings. Another question that naturally arises is: where was management? Why didn't the software managers catch these issues? In at least some of the cases, management is not allowed into peer reviews because of "company policy." So they are either conveniently oblivious to the issue, not properly trained themselves, or participants in the "corner cutting" approach in an attempt to meet the schedule.
3.3.4.4 Examples of Qualitative Research Data Figure 9 shows an example of TBDs in the area of fault detection requirements and late algorithm changes in the Post-CDR GN&C (Guidance Navigation & Control) Algorithm document from project-D: 73 Figure 9: Project-D Example of TBD’s and Late Algorithm Changes POST-CDR Project D’s Software Design Document also has a very clear case of pushing off into the future the design for fault detection; further this document had not been updated in over five years to include the missing algorithms into the design. When eventual design and architectural issues were identified later than necessary in the integrated system testing phase, the principal engineer stated in so many different words (paraphrased here) “that the code implementation was so far removed from the design as to make the design of no use” to aid in the re-design effort to fix the issues. Table 10 provides frequency counts of TBD in available algorithm documents from projects A, B, C, and D. Table 10: Quantitative Data Extracted During Qualitative Research Project A Project B Key Word Frequency Alg. Impact Key Word Frequency Alg. Impact TBD 28 High TBD 1 Low Changes None shown Changes None Revision # D Revision # Original Pre/Post CDR POST Pre/Post CDR PRE Project C Project D Key Word Frequency Alg. Impact Key Word Frequency Alg. Impact TBD 13 Moderate TBD 31 High Changes None shown Changes Major Revision # 4 Revision # G Pre/Post CDR PRE Pre/Post CDR POST 74 Both project-A and project-D have significant numbers of TBDs in the Post-CDR versions of the documents. The large number of un-resolved algorithms in these Post-CDR documents is significant. The project-D example indicates that the fault detection algorithmic area is the primary component. Project-B and project-C available algorithm documents were pre-CDR versions; hence the number of un-resolved algorithms is not alarming here. As a final qualitative example the following statement can be found from one of the systems engineers on project-A in their defect database when the contractor was going through their third round of unit testing, “During the process of unit test against the [project-A Version] software baseline, inaccuracies in the ADD have been discovered. Thus, in certain areas, the ADD is not a "trusted" specification for supporting the unit test effort and verifying correct expected results.” In order to provide an additional data point on the schedule pressure that the project-A engineers faced; one test engineer told the government technical representative during their first failed attempt at qualifying the software, “We’ve been about three months from SIQT for two years now.” 3.4 Quantitative Research and Analysis Quantitative research and analysis provides the consequences from the qualitative findings. Figure 10 is a plot of the cumulative defects for the code and all products related to the software project in the software defect repositories from project’s A, B, C, and D. In these cases, the researcher had access to documents provided by each of the offerors that included information about the various defects filed. Appendix-C contains an example Perl script that was written to parse the project-D offeror’s provided defect data format to obtain the data used in Figure 10 (a similar extraction process was used for obtaining project-A, B, and C data). 
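The defect-repository extraction just described was performed with a Perl script (see appendix-C) tailored to the offeror's delivered format. As a schematic illustration of that step only, the same kind of cumulative-defect series used in Figure 10 could be produced as follows; the column names, date format, and CSV layout here are hypothetical, since each offeror's actual format differed:

# Schematic analogue of the appendix-C extraction step, not the actual Perl script.
# The field names (found_date) and CSV layout are assumptions for illustration.
import csv
from collections import Counter
from datetime import datetime

def weekly_cumulative_defects(path: str, start: str) -> list[tuple[int, int]]:
    """Return (week_number, cumulative_defect_count) pairs suitable for plotting."""
    t0 = datetime.strptime(start, "%Y-%m-%d")
    per_week = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            found = datetime.strptime(row["found_date"], "%Y-%m-%d")
            per_week[(found - t0).days // 7] += 1
    series, running = [], 0
    for week in range(max(per_week) + 1):
        running += per_week.get(week, 0)
        series.append((week, running))
    return series

# Example use (hypothetical file name and project start date):
# curve = weekly_cumulative_defects("project_d_defects.csv", start="2001-01-01")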
Projects-A and D have in common that the engineers went directly onto coding before the design was complete, in fact in both of these cases much of the design provided to the government was reverse engineered from the code. 75 Figure 10: All Cumulative Defects Discovered for Project A, B, C, and D One of the key items of note from this view of the cumulative defect data is the significantly shorter time in which project-C (solid blue line) reaches maturity (low incoming defect rates) before all of the other projects, even though they have more defects than projects-A and B. High numbers of defects can mean one of two things, either the engineers are really good at finding the software defects, or the team put more defects into the product in the first place. The low rate plateau between weeks 130 and 200 in project-B is the result of a significant number of staff moving off of this project to go help the redesign effort of project-C. The significant rate increase in project-A between weeks 230 and 250 was from the third attempt at unit testing and the significant second effort at qualification testing the software. Project-D has a significant slow ramp of defects prior to week 250, where code and design were occurring concurrently. The increase in project- D’s defect rate at week 250 occurred shortly after their Critical Design Review (CDR). Project-D’s interesting fact is that this software made it all the way to space environment testing on the flight hardware before four significant architectural/design issues were uncovered that forced a fundamental 76 re-design of the software (the appendix I analogy to UML and blueprints was motivated by project-D). The re-design effort’s defect history is not included in the cumulative defect count. The remainder of the sub-sections under this quantitative analysis section will focus on comparing troubled project-A (as evidenced from the qualitative research results) with project-C. Again, project-C had some initial design issues; following which this contractor went through a significant transformation after a corporate commitment to revitalize the team with new leadership and the government used the tactic of providing significant government technical oversight to achieve a design for the software, before coding was allowed. 3.4.1 Project Peer Review Metrics Peer review (or Fagan-style inspections) metrics are provided in this section for projects-A and C. Figure 11 shows the co-plotted number of peer review findings between project-A and project-C. Figure 11: Co-Plotted Peer Review Findings from Project-A and C The next two sub-sections will provide additional detail on the peer review metrics for these projects. A table with the raw data is provided in appendix-B for reference. 77 3.4.1.1 Project-A Peer Review Metrics Figure 12 plots (1) all the project-A peer review findings, (2) just the peer review findings identified as having only a minor quality impact, and (3) those identified as having a major impact. Figure 12: Cumulative Project-A Major and Minor Peer Review Findings A “minor finding” in a peer review is defined as, an error that violates conventions that could result in maintenance difficulties, is an annoyance or an inconvenience, misspelled words, or bad sentence structure. “Major findings” in peer reviews are defined as something that if allowed to persist would likely cause the customer to find a problem with the delivered product and would result in a defect report from the customer. 
This category can include coding, and design errors. Looking at the raw data, the engineers on this project documented 9.22 minor findings per review and 0.36 major findings per review. Moreover, from the project metrics (which are from the period prior to the second qualification and third unit test effort) the project had 61 qualification test product reviews with no findings. It is not known whether or not this was because the software manager 78 on the project did not update their metrics or if they were in fact that poor. Based on independent quality findings in these qualification test scripts, the contractor had used colleague reviews amounting to nothing more than a buddy review of the test scripts 6 resulting in poor quality tests. The scripting language used was CSTOL (Colorado System and Test Operations Language). Examples of issues that we found in test scripts included; (1) some that had the test’s pass/fail variable defaulting to pass, and even though the logic in the scripts did in fact support setting a variable to fail, an accidental logic error anywhere in the script could have incorrectly reported that the test had “passed”, when indeed it should have failed; and (2) we found tests that reported “Passed” or “Succeeded”, when in fact the output data from these tests required offline analysis to correctly determine their pass/fail status. The concern with allowing these situations to persist into maintenance is there is a significantly increased likelihood of having different staff that was not involved in writing the tests as the staff that are then tasked with re- running those tests. Hence, those staff in a hurry to execute maintenance tests to get the patch uploaded to a space system that needs a fix can very easily miss this requirement, and thus can easily miss a significant issue. To help the contractor through this effort, the government directed a ‘peer review’ like ‘colleague review’ with significant involvement from government technical representatives and government employees in a focused tiger-team like atmosphere. During this effort, the first few weeks found a significant number of low quality test scripts and test description documents, but after about a month the effort was identifying far fewer defects as the test staff “got the message” about the level of quality the government expected to see in these products. Hence, we had to significantly help the contractor do these reviews. Metrics from these later government assisted reviews were not included in the project-A data. 6 Test teams on satellite programs write test scripts to test software in hardware in the loop simulators. The test script is software that is written to test software. Errors in these scripts can cause software defects to be missed. 79 3.4.1.2 Project-C Peer Review Metrics Figure 13 (below) is a cumulative plot of the major and minor peer review findings from project-C. Figure 13: Cumulative Project-C Major and Minor Peer Review Findings This project did a thorough job of reviewing the products leading to a significant number of defects found on average finding 12.78 minor and 2.89 major issues per review. We find here a project with embedded government technical oversight and a significant emphasis on quality leading to rigorous peer reviews. This is a significant contrast between project-A’s findings and those from project-C. 
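Returning to the project-A test-script finding described above, the underlying failure mode is independent of the scripting language: if a script's pass/fail variable defaults to "pass," then any logic path that silently skips a check still reports success. The sketch below uses Python rather than CSTOL (whose syntax is not reproduced here), and the check_telemetry function is a hypothetical stand-in for a hardware-in-the-loop telemetry comparison:

# Sketch of the verdict-default issue found in the project-A test scripts.
# check_telemetry() and the mnemonic "HEATER_STATE" are illustrative inventions.

def run_test_unsafe(check_telemetry) -> str:
    verdict = "PASS"                    # unsafe: success is assumed up front
    try:
        if not check_telemetry("HEATER_STATE", expected="ON"):
            verdict = "FAIL"
    except Exception:
        pass                            # a scripting error leaves the default "PASS" in place
    return verdict

def run_test_safe(check_telemetry) -> str:
    verdict = "FAIL"                    # safer: the script must prove success explicitly
    try:
        if check_telemetry("HEATER_STATE", expected="ON"):
            verdict = "PASS"
    except Exception:
        verdict = "ERROR"               # scripting faults are surfaced, not masked
    return verdict

The second class of finding noted above, tests that report "Passed" even though the output data require offline analysis, is the same defect one level up: the reported verdict is not derived from the evidence the test actually produced.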
3.4.2 Project Staffing, Defects and their Correlations We now transition into looking at project staffing, and the project’s defects since the modeling approach used in chapter 4 requires this information. The correlations are done to determine if there is any correlation between the number of staff on a project and the project’s defect rates. Figure 14 shows a comparison of the staffing curves on project-A, and C. 80 Figure 14: Number of FTE or Staff for Project-A and C The data available to obtain these curves varied between the two projects. Project-A provided Full Time Equivalent (FTE) staffing information as a routine metric and an occasional organizational chart could be found in the data archive. Data for project-C was primarily from organization charts, but also included a phone interview of the government’s technical representative, detailed metrics (for a short time span) on earned value, effort and FTE. 3.4.2.1 Project-A Staffing, Defects and Correlations Figure 15, below, shows the information that was available that indicates what the staff were actually doing. The data indicates that a significant percentage of staff identified as devoted to testing were in fact building the project’s hardware in the loop simulator. 81 Figure 15: EVM, Planning, and Organization Chart Staffing for Project-A The organization charts aligned with the FTE in Figure 14 from the previous page and a note on one of the project’s metric charts indicates that the contractor does not include the work of sub- contractors in their FTE data. Hence, the FTE information previously provided in Figure 14 is biased in regards to the number of actual staff working on the project. Indications are that early on the contractor hired 6 sub-contractors but at some point an organizational chart indicates they had as many as 16 sub- contractors working on the project. Hence, estimates for the actual number of total staff working on the project had to be estimated from available information to adjust the number of total staff on the project. Figure 16 shown on the following page is an interpolated staffing profile from the FTE data that includes estimated total staff with sub-contractors including scatter plots for low and high noise levels on the estimates. 82 Figure 16: Project-A Estimated Total Staffing with Subcontractors and Noise These staff estimates are used to determine if there is a correlation between the staffing levels on project-A and the defects found in the project’s database. Figure 17 on the next page is a co-plot of available staff information (using an estimate of the number of contract staff) and defects discovered per week showing at least a visual correlation between the two. 83 Figure 17: Project-A FTE Plotted with the Defect Discoveries per Week Project-A’s week 380 is pointed out in the figure as it is the point when the staff level was increased to properly re-do unit testing (for the third time) and re-do qualification testing. Table 11 on the next two pages shows the numerical correlation coefficients between the staff data and the number of defects identified per week. Further, correlated for comparison is the staff data with the additional noise levels of large and small as shown pictorially in Figure 16. Finally, the staff levels are also correlated against random normal and uniform noise created using Microsoft Excel’s data analysis add-on package to provide a null correlation control group. 
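The correlation procedure just described, whose results appear in Table 11 below, can be summarized in a short sketch. The original analysis was performed with Microsoft Excel's data analysis add-on, so the NumPy version here is an illustrative re-implementation rather than the author's spreadsheet; the noise parameters mirror the notes accompanying Table 11, and the input arrays are placeholders for the interpolated staffing and weekly defect series:

# Illustrative re-implementation of the staffing-versus-defect correlation check.
# The normal-noise mean/SD and the uniform range follow the notes in Table 11;
# the staffing and defect arrays themselves are placeholders supplied by the caller.
import numpy as np

def staffing_defect_correlations(staff_fte: np.ndarray,
                                 defects_per_week: np.ndarray,
                                 seed: int = 0) -> dict[str, float]:
    rng = np.random.default_rng(seed)
    n = len(defects_per_week)

    # Null-control series, rounded and floored at zero as described for Table 11.
    normal_noise = np.clip(rng.normal(4.65, 6.16, n), 0, None).astype(int)
    uniform_noise = np.clip(rng.uniform(-11, 11, n), 0, None).astype(int)

    def r(a, b):
        return float(np.corrcoef(a, b)[0, 1])

    return {
        "staff_vs_defects": r(staff_fte, defects_per_week),
        "defects_vs_normal_noise": r(defects_per_week, normal_noise),
        "defects_vs_uniform_noise": r(defects_per_week, uniform_noise),
    }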
84 Table 11: Project-A Correlation Coefficients between Staff, Noise, and Defects Estimated Total FTE with Lg Noise Estimated Total FTE with Sm Noise Estimated Total FTE with Large (Lg) Noise 1 Estimated Total FTE with Small (Sm) Noise 0.981281755 1 Estimated Total FTE with Lg Noise Total Interpolated Staff Estimated Total FTE with Large (Lg) Noise 1 Total Interpolated FTE 0.885764619 1 Estimated Total FTE with Sm Noise PDF All DRs Estimated Total FTE with Small (Sm) Noise 1 PDF All DRs 0.382764924 1 FTE with Sm Noise in DR Range ONLY PDF All DRs Estimated Total FTE with Small (Sm) Noise 1 PDF All DRs 0.196060811 1 Total Interpolated FTE Estimated Total FTE with Lg Noise Total Interpolated FTE 1 Estimated Total FTE with Large (Lg) Noise 0.885764619 1 Estimated Total FTE with Lg Noise PDF All DRs Estimated Total FTE with Large (Lg) Noise 1 PDF All DRs 0.354241998 1 Interpolated Total FTE PDF All DRs Interp # of FTE 1 PDF All DRs 0.363300956 1 PDF All DRs Normal Noise PDF All DRs 1 Normal Noise 0.167151399 1 Mean 4.65171504 SD 6.164419665 Values < 0 are rounded to 0, others are truncated to integers 85 Table 11: Continued PDF All DRs Uniform Noise PDF All DRs 1 Uniform Noise 0.133174919 1 Uniform distribution range -11 to 11 Values < 0 are rounded to 0, others are truncated to integers Total Interpolated FTE Sev 3 DRs Total Interpolated FTE 1 Sev 3 DRs 0.377742237 1 Total Interpolated FTE Sev 3 < 380 Total Interpolated FTE 1 Sev 3 < 380 0.237019072 1 Total Interpolated FTE Sev 3 >= 380 Total Interpolated FTE 1 Sev 3 >= 380 0.295749904 1 Total Interpolated FTE Sev 4&5 DRs Total Interpolated FTE 1 Sev 4&5 DRs 0.435258515 1 Total Interpolated FTE Sev 4&5 < Week 380 Total Interpolated FTE 1 Sev 4&5 < Week 380 0.217666703 1 Total Interpolated FTE Sev 4&5 >= Week 380 Total Interpolated FTE 1 Sev 4&5 >= Week 380 0.37307671 1 The correlation coefficients indicate a weak but real correlation between staffing and the numbers of defects found. Further, splitting the data at week 380 indicates a stronger correlation with the number of severity 4 and 5 defects that were found by the second group of testers with significant government technical oversight than by the first group of testers without any government technical oversight. 86 3.4.2.2 Project-C Staffing, Defects, and Correlations Figure 18 shown below provides an assessment of the staff assigned work areas for project-C. Figure 18: Organization Chart Staffing for Project-C During the re-design phase on project-C, the code staff was supporting the UML re-design effort. This was confirmed from an interview with the government technical representative overseeing that project where he stated, “Early on they tried to do coding when they were supposed to be doing design, however, I was asked to go spy on them whenever we believed they were doing this and then they would stop. Finally, they got the message that we did not want them doing code until the design was finished.” Thus, the assigned work area for project-C is biased incorrectly (code staff were doing design); this knowledge will factor heavily into the system dynamics modeling for this project in chapter 4. 87 Figure 19 shows the staff with the interpolated trend line and noise levels (small and large) for staff uncertainties. The expected primary source of staff uncertainty is vacations and identifying exactly what the staff was working on. Figure 19: Project-C Staff Trend and Uncertainty Noise Levels Figure 20 on the next page shows the staff and defect discoveries per week on the same plot. 
88 Figure 20: Project-C Staff Plotted with the Defect Discoveries per Week The defect discovery profile for this project appears to be a Rayleigh (Weibull distribution with a shape parameter of 2). Table 12 (below and continued on the next page) provides the correlation coefficients for project-C’s interpolated staffing curve with the number of defects discovered over time and correlations (similar to what was done for project-A) with small and large uncertainty noise levels and random noise from normal and uniform distributions. Table 12: Project-C Correlation Coefficients between Staff, Noise, and Defects Estimated Total Staff with Sm Noise Estimated Total Staff with Lg Noise Estimated Total Staff with Small (Sm) Noise 1 Estimated Total Staff with Large (Lg) Noise 0.936394005 1 Estimated Total Staff with Lg Noise Total Interpolated Staff Estimated Total Staff with Lg Noise 1 Total Interpolated Staff 0.947620214 1 PDF All DRs Estimated Total Staff with Lg Noise Probability Distribution Function (PDF) All DRs 1 Estimated Total Staff with Large (Lg) Noise 0.613519409 1 89 Table 12: Continued Estimated Total Staff with Sm Noise PDF All DRs Estimated Total Staff with Small (Sm) Noise 1 PDF All DRs 0.635331384 1 Interpolated Total Staff PDF All DRs Interp # of Staff 1 Probability Distribution Function (PDF) All DRs 0.648296483 1 PDF All DRs Normal Noise Probability Distribution Function (PDF) All DRs 1 Normal Noise 0.43549774 1 Mean 10.78139535 SD 9.112158249 Values < 0 are rounded to 0, others are truncated to integers PDF All DRs Uniform Noise Probability Distribution Function (PDF) All DRs 1 Uniform Noise 0.360558268 1 Range -49 to 49 Values < 0 are rounded to 0, others are truncated to integers Total Interpolated Staff Sev 3 DRs Total Interpolated Staff 1 Sev 3 DRs 0.606475877 1 Total Interpolated Staff Sev 3 DRs Total Interpolated Staff 1 Sev 4 & 5 DRs 0.575409407 1 Project-C also shows a weak correlation of defects to staff level. The correlation numbers for this effort are higher than for project-A. 3.4.2.3 Project-A Defect Distributions Figure 21 on the following page shows the distribution of only the defects found in the software code by severity per week. 90 Figure 21: Project-A Software ONLY Defects by Severity per Week Figure 22 (below) is the distribution of all the defect severities for project A. Figure 22: Project-A Severity Distribution (All Defects) 91 Project-A has a significant number of defects identified as severity 3 bin, with fewer defects in the other severity levels. This fact will be contrasted later with project-C’s severity distribution. Figure 23 (below) plots the cumulative distribution of the entire list of project-A defects, all software defects, and all severity 1 through 3 software defects. A table with the raw data is provided in appendix-B for reference. Figure 23: Project-A Cumulative Plot of All, All SW, and All SW Sev 1-3 Defects 3.4.2.4 Project-C Defect Distributions Figure 24 on the next page shows the distribution of only the defects found in the software code by severity per week. Figure 24 shows only those defects that were included in the software and verified as fixed. 92 Figure 24: Project-C Software ONLY Defects by Severity per Week However, the distribution of all defect severities for project-C is plotted in Figure 25 on the next page. The plot is done to the same y-axis scale as project-A for comparison. 
We see here the significant number of severity 4 and 5 defects found by the project-C team, and we further note the far fewer defects in the important severity 3 category.

Figure 25: Project-C Severity Distribution (All Defects)

Figure 26 on the next page plots the cumulative distribution of all project-C defects, all software defects, and all severity 1 through 3 software defects. The software defect plots include those defects that were attributed to other software products integrated with this one, which were either software developed by other teams or commercial off-the-shelf (COTS) software. Figure 27 plots the per-week percentage of the project-C defects in the database that were ultimately rejected or deferred. This reject-or-defer information is used in the chapter 4 models. A table with the raw data is provided in appendix-B for reference.

Figure 26: Project-C Cumulative Plot of All, All SW, and All SW Sev 1-3 Defects

Figure 27: Project-C Percent of Rejected-Deferred Defects

3.4.3 Code Generation and Unit Test Data for Modeling

Included in the next four tables (Tables 13 through 16) are the average unit test metrics from project-D (which took careful accounting of positive and negative tests when re-doing its unit tests).

Table 13: Results from the 1st Month of Re-Unit Testing

Unit's Subsystem | SLOC | Who | Coverage | Total # of Tests | Negative Tests | Positive Tests | # Failed | Total Hours | # of Findings
ACC | 220 | Dev. A | 100% | 24 | 15 | 9 | 12 | 35 | 4
GN&C | 227 | Dev. A | 100% | 28 | 16 | 12 | 2 | 30 | 3
System | 186 | Dev. B | 100% | 37 | 21 | 16 | 0 | 70 | 5
EPS | 222 | Dev. C | 100% | 57 | 20 | 37 | 0 | 60 | 4
Totals | 855 | -- | -- | 146 | 72 | 74 | 14 | 195 | 16

Table 14: Results after Six Months of Re-Unit Testing

# Units | SLOC | Coverage | Total # of Tests | Negative Tests | Positive Tests | # of Failed Tests | Total Hours | # of Findings
4 | 855 | 100% | 146 | 72 | 74 | 14 | 195 | 16
18 | 4199 | 100% | 248 | 112 | 136 | 1 | 672 | 6
18 | 11003 | 100% | 684 | 270 | 414 | 2 | 1046 | 71
32 | 13200 | 100% | 1093 | 575 | 518 | 16 | 1135 | 77
72 (totals) | 29257 | -- | 2171 | 1029 | 1142 | 33 | 3048 | 170

Table 15: Average Data from the 1st Month of Re-Unit Testing

Number of Units Tested: 4
Number of Tests Created and Run: 146
Percent of Code Tested within Units: 100%
Ratio of Positive to Negative Tests: 1 : 1
Average # of SLOC per Unit: 213.8
Average # Tests per Unit: 37
Average # SLOC per Test: 5.8
Average Findings per Unit: 4
Percent of Tests that Failed: 10%
Average Effort (hrs) per Unit Test: 49
Average Effort (hrs) per KSLOC: 228

Table 16: Average Data from Six Months of Re-Unit Testing

Number of Units Tested: 72
Number of Tests Created and Run: 2171
Percent of Code Tested within Units: 100%
Ratio of Positive to Negative Tests: 1.1 : 1
Average # of SLOC per Unit: 406.3
Average # Tests per Unit: 30.2
Average # SLOC per Test: 13.5
Average Findings per Unit: 2.4
Percent of Tests that Failed: 2%
Average Effort (hrs) per Unit Test: 42.3
Average Effort (hrs) per KSLOC: 104.2

Over a few years this project's code generation rate averaged about 2.2 SLOC per day, exhibited as rapid code growth followed by periods of code stabilization lasting a few months; the peak rate within this pattern was about 5.1 SLOC per day. Presumably, during the code stabilization periods new code is also being written for the next increment. These numbers are in line with NASA-documented industry averages for new embedded flight software [7]. This information is provided, first, to allow us to discuss later the importance of negative testing at the unit level, and second, to provide a benchmark on the up-front effort required to do thorough unit testing.
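The roll-up from the per-batch rows of Table 14 to the program-level averages of Table 16 is a simple weighted aggregation, sketched below with the Table 14 values hard-coded for illustration; it reproduces, to rounding, the Table 16 figures:

# Roll-up of the Table 14 unit-test batches into Table 16-style averages.
# Rows: (units, sloc, total_tests, negative, positive, failed, hours, findings), taken from Table 14.
ROWS = [
    (4,   855,   146,  72,  74, 14,  195, 16),
    (18,  4199,  248, 112, 136,  1,  672,  6),
    (18, 11003,  684, 270, 414,  2, 1046, 71),
    (32, 13200, 1093, 575, 518, 16, 1135, 77),
]

units, sloc, tests, neg, pos, failed, hours, findings = (sum(col) for col in zip(*ROWS))

summary = {
    "ratio_positive_to_negative": round(pos / neg, 1),            # ~1.1 : 1
    "avg_sloc_per_unit":          round(sloc / units, 1),         # ~406.3
    "avg_tests_per_unit":         round(tests / units, 1),        # ~30.2
    "avg_sloc_per_test":          round(sloc / tests, 1),         # ~13.5
    "avg_findings_per_unit":      round(findings / units, 1),     # ~2.4
    "pct_tests_failed":           round(100 * failed / tests),    # ~2%
    "avg_hours_per_unit":         round(hours / units, 1),        # ~42.3
    "avg_hours_per_ksloc":        round(1000 * hours / sloc, 1),  # ~104.2
}
print(summary)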
3.5 Discussion There were two selection criteria for what we extracted from the large amount of available data and provided in this section: (1) Data that support the system dynamics modeling effort found in chapter 4. (2) Data and other anecdotal evidence that clearly supports that project-A was schedule-driven to the point that the offeror’s engineers were attempting to go faster by “cutting quality corners.” The premise for project-A as the stereotypical example of engineers under extreme schedule pressure is (we hope) well established at this point. The project suffered from a lack of rigor in peer 97 reviews, quality issues in all levels of test, and latent defects that were discovered much later in the test process than what should normally occur for a quality-driven development effort. The typical response from the offeror’s standpoint will be that the employees did nothing wrong, where management and the employees were doing everything prudently possible to meet the customer’s tight schedule. The subject advanced in this dissertation is the “cutting corners” hypothesis, where in some regards what is considered “cutting corners” in the field of software is usually left to the realm of “engineering judgment.” However, the data provided in this section for project-A will serve as our example for “cutting corners” in government software contracts. Project-C, on the other hand, with its embedded government technical representative oversight (following the initial design issues) was clearly more aggressive in peer reviews and efficient in working through the defects. The number of defects found by the project-C team in category 4 and 5 severity levels was significantly higher than project-A’s, and further they worked fixes for these defects into the product. The number of severity 1 and 2 defects was consistent between the two efforts. This fact may suggest that this class of defects could not be uncovered by the peer review method, nor from unit testing (where project-C used a commercially available unit testing tool) nor from the analytical methods used during the design phase of either project. Each defect has not however, been thoroughly analyzed to determine if these defects could have been removed earlier in the development phase. Project-C then, is the example used in this dissertation of not “cutting corners”. Even though that is how the team began, it is clearly not what happened in the long run due to intervention by the government. The effectivity data for high severity defects from project-A (found in appendix-B) suggests these defects were primarily found in system testing. As was suggested in the preceding paragraph, this data is however, inconclusive in suggesting that this is the only way these defects could have been discovered, as each defect has not been reviewed to determine if they could have been identified from better-executed upstream defect prevention and discovery methods. Alas, upstream methods cost money and time to execute. Cost and schedule constrained software development environments are unlikely to utilize these to their maximum potential and thus make the early decision to “go at risk.” 98 3.6 Public Recommendations The recommendations made in here are derived from qualitative data and observations by some of The Aerospace Corporation’s technical representatives and the researcher. 
The aerospace industry and government should strive for engineering methodologies that provide software engineers an ability to do their jobs without additional documentation requirements that they perceive as simply slowing them down. Software programmers inherently hate to do documentation. The industry should as much as possible require the use of tools from which documents can be derived and delivered in standardized formats that are acceptable to the government. The desire by software programmers to “simply pound out the code” instead of working to mature the requirements and the design through proven rigorous engineering and analysis leads us to the conclusion that perhaps the programmers in this industry should not be the creators of the software design. This is a controversial proposal in most software development fields. However, in our schedule constrained aerospace industry, clearly something must be done to avoid the architectural/design snafus that repeatedly seem to surface. One suggestion is merging the software engineering (design) tasks with the up-front systems engineering as the more prudent path, thus providing code developers a complete design, or providing them the design in increments from an engineering organization that follows an incremental or spiral development approach for larger systems as the approach that needs to be adopted. Clearly, customers should modify their acquisition processes and contracts in an effort to help resolve the design abandonment issue for schedule-driven projects. This lack of willingness to do a methodical high-level design also appears to drive the desire to ‘punt’ on the critically important fault management and fault detection requirements, until it is too late. The architecture for this important mission functionality needs to be in place prior to coding as this functionality ripples throughout the entire system. The scarcity of a designed fault management architecture and negative testing is attributed to a success oriented mentality believing nothing can go wrong. Therefore, we believe that these software intensive systems are the result of either schedule 99 and/or cost-driven behavior or ignorance that leads engineers down a path where they do not have the ability to do critical reasoning about how to properly handle hardware or software faults. A number of software projects have had clear issues with unit testing. Perhaps the government and contractors should adopt the strategy of retraining engineers how to properly do unit testing at the beginning of every project, until they know how to do it? And then train all the new engineers they hire throughout the project’s life cycle. Alternatively, a recommendation, is to provide the unit test data to the developers from engineers knowledgeable in testing methodologies, thus creating a teaming arrangement at the unit test level for software developers and software testers. We now put forward as a significant contributing factor that the need for corporations to retrain engineers is due to improperly trained engineers from our university system. However, if the engineers are trying to do the proper engineering on these systems, but if their management or the customer are not allowing this to happen; then we conjecture that the management establishment running these programs is where the retraining needs to occur. Therefore, we hypothesize that in actuality both of these situations are contributing factors. 
We also observed that the SDP required peer review of test artifacts was not sufficient to keep the engineers from cutting corners on unit-testing on these projects. It was the opinion of the government’s technical representatives that the project-A staff had, under schedule pressure, abandoned the SDP prescribed unit-testing methodology, despite the fact that the SDP was a contractual compliance document. We ultimately found that this contractual compliance was not sufficient to keep the contractor from cutting corners. Therefore, from all of these factors and logical arguments we strongly recommend the creation of uniform standards for the various software titles; institutionalizing minimal certification programs taught by universities that includes fundamentals such as mathematics, software architecting, software engineering, and required training in software testing methodologies. We need software savy engineers not computer science majors who know how to write software compilers. Furthermore, space system engineers do not seem to realize the fact that software intensive systems developed by schedule-driven environments will have more latent software defects. These defects cannot be totally uncovered in 100 software testing. Our next generation of software intensive systems must be engineered with this fact in mind as a central design philosophy. 101 C h a p t e r 3 E n d n o t e s [1] Freeman Dyson, Disturbing the Universe, 212. [2] Myron Hecht, and Douglas J. Buettner, “A Software Anomaly Repository To Support Software Reliability Prediction,” Proceedings of the 17th Annual Systems and Software Technology Conference, April 2005. [3] Anselm Strauss and Juliet Corbin, Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory, SAGE Publications, (Thousand Oaks: 1998): 101-103. [4] Ibid., 124-125. [5] Ibid., 223. [6] Beizer, Software Testing Techniques, 14. [7] NASA, Software Estimation, Internet: http://www.ceh.nasa.gov/webhelpfiles/Software_Estimation.htm [8] Barry Boehm and Jo Ann Lane, “Using the Incremental Commitment Model to Integrate System Acquisition, Systems Engineering, and Software Engineering,” CrossTalk, October Issue, (2007). 102 C H A P T E R 4 : A S Y S T E M D Y N A M I C S M O D E L Engineers participate in the activities which make the resources of nature available in a form beneficial to man and provide systems which will perform optimally and economically. [1] 4. Background System dynamics originated in the fields of mathematics, engineering and physics from the theories of nonlinear dynamics and feedback control [2]. System dynamics modeling, however, is an inherently cross-disciplinary methodology (examples range from physical systems to cognitive social science) due to its application to real-world problems involving people and groups [2]. System dynamicists are using this tool to investigate the dynamics of diabetes, the cold war arms race, the combat between HIV and the human immune system, and to affect government policy by studying the interactions required to transition to a hydrogen based fuel economy [3] [4]. Studying software engineering project management, Abdel-Hamid’s [5] use of system dynamics is the first to address software engineering, inspiring a flood of papers using this approach to describe aspects of software development dynamics. Examples include the effects of interactions between requirements elicitation and areas of software development such as software implementation, and testing [6]. 
System dynamics models are used in this chapter as a research tool to probe a few of the dynamic relations found in the analysis results from chapter 3 for two of the software projects. The chapter begins with an overview of system dynamics modeling from the classical control theory sense as well as its use in modeling business dynamics. To model the dynamic relations found in chapter 3, one of the commercially available tools is used to create a model for the software development and integration test process. Results from a suite of validation tests are discussed to close out the chapter. 4.1 Procedures for Modeling Dynamic Systems According to Ogata [7], 103 From the point of view of analysis, a successful engineer must be able to obtain a mathematical model of a given system and predict its performance. (The validity of a prediction depends to a great extent on the validity of the mathematical model used in making the prediction.) From the design standpoint, the engineer must be able to carry out a thorough performance analysis of the system before a prototype is constructed. Hence, we will first provide a comparison between the roots of system dynamics for modeling physical systems with those used by business dynamicists. In the classical fields of nonlinear dynamics and control theory, system dynamicists use traditional mathematical modeling to describe the system and more or less contain a problem definition phase, a scope determination phase, a data identification phase, a model formulation phase, a model testing phase, and an understanding phase [8] [9]. Noticeably absent, however, in the process for modeling business dynamics (using Sterman’s [10] description of the process) is the lack of the mathematical model identification step. Madachy [11], however, adapts from Richardson and Forrester-Senge in his Ph.D. dissertation for system dynamics model validation tests to: (1) consider the suitability of the model for the purpose, (2) check consistency with reality, and (3) check the utility and effectiveness of structural and behavioral aspects of the model. This approach combined with examining whether or not the model provides an ability to generate correct reference behavior necessitates a large test matrix that is used to investigate the model’s properties [11]. Hence, as we build more complex models, the size of the corresponding test matrix will out of necessity also increase. (This fact can be discerned from the greatly expanded size of the test matrix created for this dissertation.) The physical system’s literature suggests that the lack of a mathematical model identification step removes from the business modeling toolkit the use of analysis methods such as the Laplace transform and the state-space (or state-variable) approach [12] [13]. The Laplace transform is useful for analyzing linear systems, while the state-space representation is used for handling complex systems that are either linear or non-linear, time invariant or varying all with multiple inputs and outputs [13]. Further, it removes the discovery of interesting mathematical relationships that describe nuances of the problem. 104 4.2 Block-Diagrams and the State-Space from Classical System Dynamics Using the nonlinear dynamics and control theory perspective from Ogata and Palm provided in the prior section, conceptually, we consider software development as a coupled multi-phased system represented by state variables included in linear algebraic equations governing the flow of products through a lifecycle phase. 
Ogata (see for example Ogata chapter 5 [13]) uses block diagrams to model continuous-time dynamic systems. The block diagrams illustrate the interactions of vector and matrix variables. Thus, provided below for reference in Figure 28 is a labeled version of Ogata's [13] generic block diagram with variable names using his terminology.

Figure 28: Labeled Version of Ogata's Generic Block-Diagram with a Feedback Loop

The state-space approach uses vector-matrix notation to represent the dynamic system as an equation [13]. The state-space equations for the generic block diagram in Figure 28 are

$\dot{\mathbf{x}}(t) = \mathbf{A}\mathbf{x} + \mathbf{B}\mathbf{u}$    (Equation 4-1)

$\mathbf{y}(t) = \mathbf{C}\mathbf{x} + \mathbf{D}\mathbf{u}$    (Equation 4-2)

Definitions for these variables are

$\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}, \quad \mathbf{u} = \begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{bmatrix}, \quad \mathbf{y} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix}$    (Equation 4-3)

$\mathbf{A} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}, \quad \mathbf{B} = \begin{bmatrix} b_{11} & b_{12} & \cdots & b_{1n} \\ b_{21} & b_{22} & \cdots & b_{2n} \\ \vdots & & & \vdots \\ b_{m1} & b_{m2} & \cdots & b_{mn} \end{bmatrix}, \text{ etc.}$    (Equation 4-4)

Hence, knowledge of the system's matrices and input vectors provides the possibility of a closed-form mathematical (or alternatively numerical) solution of the system's dynamic equations. A Laplace transform of the state-space equations yields a reduced-form or transfer-function algebraic model; the use of either the transfer-function or state-space form depends on many factors, including personal preference [14]. A generic mid-phase 'Waterfall' development block-diagram appears below in Figure 29. However, this diagram can represent any mid-phase development model that has feed-forward of products and feedback of defects.

Figure 29: Block-Diagram for a mid-Phase Waterfall Process

Initially, the product input for a project comes from the customer's RFP and/or the systems engineering communication with the customer to elicit the requirements for the software-intensive project. Later phases pass requirements, plans, designs, prototypes, and other work products in document or other form to the next phase of development, and so on until the software is tested in the target environment, from which the space environment testing feeds back information about the discovery of behaviors that may indicate latent defects that escaped the quality control processes used throughout the development lifecycle. Thus, the state-space equations for our mid-phase block diagram are

$\dot{\mathbf{x}}_p(t) = \mathbf{A}_p\mathbf{x}_p + \mathbf{B}_p\mathbf{x}_{p-1} + \mathbf{E}_p\mathbf{y}_{p+1}$    (Equation 4-5)

$\mathbf{z}_p(t) = \mathbf{C}_p\mathbf{x}_p + \mathbf{D}_p\mathbf{y}_p$    (Equation 4-6)

where the subscript p denotes the lifecycle phase, the B_p term carries the work products fed forward from the preceding phase, and the E_p term carries the defect information fed back from the succeeding phase. The figure and this conceptualization will be referred to later in this chapter to aid in our modeling of a software integration test feedback loop. Further, Equations 4-5 and 4-6 provide the mathematical foundation, currently lacking in business dynamics modeling of software engineering, for future work. This mathematical view is needed in order to build large feedback models of the software-intensive system engineering process, because at some point we anticipate that the current tool-based system dynamics modeling approach will be consigned to simply testing certain aspects of the system. (A minimal numerical sketch of this state-space form is provided below, following the tool selection discussion.)

4.3 Commercial Tool Selected for Modeling the Dynamic Software System

For this dissertation, Powersim Studio™ Academic 2005 (6.00.3423.6) Service Release 6 was selected [15]. The tool has an easy-to-use integrated development environment that allows the flexibility of building up sections of the model, with the ability to execute the model as the changes are being introduced.
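As a concrete illustration of the "alternatively numerical" solution route noted above for Equations 4-1 and 4-2, the following minimal sketch integrates the state-space form with a forward-Euler step. The two-state matrices and the interpretation of the states are invented for illustration only; they are not calibrated to any of the case-study projects, and this is not how Powersim evaluates the model internally.

# Minimal numerical integration of Equations 4-1 and 4-2 (forward Euler).
# A, B, C, D below are illustrative placeholders, not calibrated project matrices.
import numpy as np

def simulate_state_space(A, B, C, D, u_of_t, x0, dt=0.1, steps=1000):
    """Integrate x' = Ax + Bu, y = Cx + Du and return the state and output histories."""
    x = np.array(x0, dtype=float)
    xs, ys = [], []
    for k in range(steps):
        u = np.atleast_1d(u_of_t(k * dt))
        y = C @ x + D @ u
        xs.append(x.copy())
        ys.append(y.copy())
        x = x + dt * (A @ x + B @ u)    # forward-Euler step of Equation 4-1
    return np.array(xs), np.array(ys)

# Two-state toy example (work remaining, latent defects) with a constant staffing input.
A = np.array([[-0.05, 0.0], [0.02, -0.03]])
B = np.array([[1.0], [0.1]])
C = np.eye(2)
D = np.zeros((2, 1))
states, outputs = simulate_state_space(A, B, C, D, u_of_t=lambda t: [1.0], x0=[0.0, 0.0])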
In addition, Powersim includes functionality for performing Latin Hypercube or Monte Carlo risk sampling and a genetic algorithm for finding parameter optimization settings. Latin Hypercube sampling (similar to Monte Carlo sampling but uses a more efficient approach) results for 107 quality, schedule or cost-driven processes are included at the end of this chapter. Appendix-G provides additional reasons Monte Carlo sampling was not used. 4.3.1 Powersim Studio™ Modeling Tool Symbols Symbols used by the Powersim Studio modeling tool are in Table 17 [15]. Table 17: Symbols Used by the Powersim Studio™ Modeling Tool Symbol Name Description Level A variable that accumulates changes, which are influenced by in and/or out flows. Reservoir A special type of level that cannot be depleted below zero. Auxiliary A variable that contains calculations based on other variables. Constant A variable that contains a fixed (initial) value. Continuous flow with attached auxiliary A continuous flow connector with an attached auxiliary variable. Information link A connector that provides information to auxiliaries about the value of other variables. Delayed link A connector that provides delayed information to auxiliaries about the value of other variables at an earlier stage in the simulation. Initialization link A connector that provides start-up (initial) information to variables (both auxiliaries and levels) about the value of other variables. Cloud A symbol illustrating an undefined source or outlet for a flow to or from a level. The cloud symbol, also referred to as the source or sink of a flow, indicates the model's outer limits. 4.4 Description of a Modified Madachy Model (MMM) This section discusses the modified version of Madachy’s [16] [17] original inspection-based model for the software design, code and test phase. Madachy’s original inspection-based model provides a model that already includes a task and error chain and is based on the “effort” expended during the software development process. Recall that our goal is to use our system dynamics model as a research 108 tool to investigate the effects of inspections and unit testing in a framework that simulates our software development situation. In addition, the form of the inspection model in Madachy’s work allows for the straightforward addition of a unit-testing model. Moreover, the integration test feedback loop implementation is a representation of the methodology described in the software development plans for the projects and aligns with the feedback loop provided earlier in Figure 29. Table 18 on the next page provides a road map to the figures and tables contained in this chapter and the chapter’s primary appendix as a helpful guide for the reader. Table 18: Remaining Contents for Chapter 4 Section Page(s) Item(s) 4.4.1 110 through 125 Modified Madachy Model diagrams in Figure 30 through Figure 45. 4.4.3.1 130 Results for unit testing with no IT feedback in Figure 46 and Figure 47. 4.4.3.1 131 Results for unit testing with an IT feedback loop in Figure 48 and various parametric combinations with inspections in Figure 49. 4.4.3.2 133 Counter intuitive manpower rate results in Figure 50. 4.4.3.3 134 and 135 Effort path diagrams in Figure 51, Figure 52, and Figure 53. 4.5.1 138 and 139 Test goals for the dynamic defect discovery test cases in Table 19. 4.5.3 142 Results from combining design and code staff to simulate a code and design reverse engineering process using Madachy’s staffing curves in Table 20. 
4.5.4 143 Sensitivity analysis diagrams using Latin Hypercube with distributions simulating quality, schedule and cost-driven processes in Figure 54 and Figure 55. Appendix E 307 through 323 Baseline test matrix is in Table 34 and with the comparison results in Table 35. The augmented MMM test matrix is in Table 36 and results are in Table 37. Appendix E 324 through 340 Dynamic defect discovery test matrix with numbers of errors found and escaping for each case in Table 38 through Table 47. Appendix E 341 through 344 Interpolated staffing and modification curves for projects A and C used for the dynamic simulations in Table 48 and Table 49. Appendix F 348 through 354 Plots of dynamic behavior compared with project-A and project-C data in Figure 73 through Figure 85. Appendix F 354 Diagram of a ‘starved’ requirements reservoir in Figure 86 and a plot of the ‘Errors Found in IT’ noise test case MD5.5 in Figure 87. Appendix G 356 and 357 Distributions for quality, schedule, and cost-driven processes using Madachy’s original staffing curves with numeric results for numbers of errors are in Table 50 and Table 51. 109 4.4.1 The Modified Madachy Model (MMM) The final MMM configuration used here is found in Figure 30 through Figure 45. The model incorporates unit testing and an integration test (IT) feedback loop, both of which can be enabled or disabled. The model includes functionality for using a constant or accelerated review board rejection and deferral rate for integration test identified errors. The model also incorporates a constant or decaying integration test failure rate. Equations and variable definitions for the model are found in appendix-D. 110 Figure 30: Modified Effort Model (Top Left Quadrant) 111 Figure 31: Modified Effort Model (Top Right Quadrant) 112 Figure 32: Modified Effort Model (Bottom Right Quadrant) 113 Figure 33: Modified Effort Model (Bottom Left Quadrant) 114 Figure 34: Modified Errors Models (Top Left) 115 Figure 35: Modified Errors Models (Top Right) 116 Figure 36: Modified Errors Models (Bottom Right) 117 Figure 37: Modified Errors Models (Bottom Left) 118 Figure 38: Modified Tasks Models (Top Left) 119 Figure 39: Modified Tasks Models (Top Right) 120 Figure 40: Modified Tasks Models (Bottom Right) 121 Figure 41: Modified Tasks Models (Bottom Left) 122 Figure 42: Un-Modified Test Effort Adjustment Model 123 Figure 43: Un-Modified Cumulative Total Effort Model 124 Figure 44: Powersim Time Calibration with iThink Model 125 Figure 45: Powersim Variables and Calibration Constants 4.4.2 Discussion of Model Modifications and Differences Madachy’s original inspection-based model provides a tested framework that uses the original COCOMO equations as a foundation for its predictions on the effects on effort for varying levels of inspection in a software process, and hence provides a useful framework for interpreting our empirical results. Our goal here is to simulate using ‘what if’ scenarios the numbers of errors that can be expected in a software process with varying levels of unit testing and inspections that have a feedback loop. 126 Comparison of the dynamic errors from the model with those from the case study data provides validation that the model is to some extent correctly simulating the project’s dynamics. 
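The model-versus-data comparisons referenced here (and plotted in appendix-F) are visual. Should a numerical summary be desired, one simple option, offered as an illustrative convenience rather than as part of the original validation procedure, is a root-mean-square error on the weekly defect discoveries together with a relative error on the cumulative totals:

# Illustrative way to quantify the model-versus-data comparison; the dissertation's
# appendix-F comparisons are visual plots, so this metric is a suggested convenience.
import numpy as np

def compare_defect_curves(observed_per_week: np.ndarray,
                          modeled_per_week: np.ndarray) -> dict[str, float]:
    """Weekly RMSE and relative cumulative-count error between observed and simulated defects."""
    n = min(len(observed_per_week), len(modeled_per_week))
    obs, mod = observed_per_week[:n], modeled_per_week[:n]
    rmse = float(np.sqrt(np.mean((obs - mod) ** 2)))
    cum_error = float(abs(obs.sum() - mod.sum()) / max(obs.sum(), 1))
    return {"weekly_rmse": rmse, "relative_cumulative_error": cum_error}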
Referring to Figure 30 through Figure 45 (from the previous pages), the symbols in black text and lines are those variables, levels, connections and flows which are in Madachy’s original inspection- based model or were added to get numerical agreement between Madachy’s iThink implementation and the Powersim implementation used here. Red text and lines in the model indicate those variables, levels, connections and flows that were added to augment the model with unit testing. Blue text and lines in the model indicate those variables, levels, connections and flows that were added to implement integration testing and its feedback loop. Grey text and lines in the model indicate those variables, and connections that were added to augment the model with an ability to easily modify staff curves in an attempt to match modeled data to real data by including staff curves from projects-A and C. The addition of unit testing is straightforward in Madachy’s model as it uses the same model form as he used for inspections in the coding phase. The functionality was added with the ability to completely enable or disable it through the selection of parameter settings in the constants and variable constructor diagram 7 . Within Powersim (Figure 45), the diagram’s constants (the variables for testing) were accumulated into a single constructor diagram with the name’s backgrounds color-coded green in order to provide a manageable central location for changing these constants between simulations to facilitate testing the model. The inspection and unit test variables that changed frequently for each test have colored backgrounds behind their symbols to assist with ensuring the correct parameter is modified. The integration test feedback loop implementation splits the errors based on the Beizer-noted value where we can cover a percentage of the code in integration testing, thus a fraction of the errors are 7 Constructor Diagrams are Powersim Studio’s method for splitting a model into manageable worksheets or diagrams. Powersim does not allow the same symbol names in different diagrams. Hence, to share variables between diagrams, the implementation shown in the Figures and the Appendix adds a suffix moniker (such as _TM or _ErrM) for the local symbol names and the equation for the symbol simply references the symbol with the global constant or the symbol name that is in the constructor diagram where it is originally declared. 127 found and a fraction of errors are missed, and are passed onto later phases. Errors found in integration testing can be accepted or rejected by the review board process, but based on observations made in this dissertation there is a strong preference by our schedule-pressured contractors to simply just try and fix design errors in the code instead of fixing them in the design. Hence, the parameter ‘Fraction of Design Errors Requiring Full Redesign’ was included to allow future research an ability to tune this behavior to the contractor’s development environment, but here is simply set to a value that represents the case study observed preference. 
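The integration test feedback split just described can be restated in scalar form for intuition. The sketch below is not the Powersim stock-and-flow implementation; the parameter names paraphrase, rather than reproduce, the constructor-diagram constants, and the default values are illustrative placeholders (the detection fraction, in particular, stands in for but is not the Beizer-noted coverage value).

# Scalar restatement of the MMM integration-test error split, for intuition only.
# All default parameter values are placeholders chosen for illustration.
def split_integration_test_errors(errors_entering_it: float,
                                  it_detection_fraction: float = 0.7,
                                  reject_or_defer_fraction: float = 0.3,
                                  design_error_density_ratio: float = 0.1,
                                  frac_design_needing_full_redesign: float = 0.2) -> dict:
    found = errors_entering_it * it_detection_fraction
    escaped = errors_entering_it - found                 # passed on to later phases or operations

    worked = found * (1.0 - reject_or_defer_fraction)    # the review board accepts these for rework
    rejected_or_deferred = found - worked

    design_related = worked * design_error_density_ratio
    full_redesign = design_related * frac_design_needing_full_redesign
    code_fix_only = worked - full_redesign               # the observed 'fix it in the code' preference

    return {
        "found_in_it": found,
        "escaped_it": escaped,
        "rejected_or_deferred": rejected_or_deferred,
        "reworked_via_full_redesign": full_redesign,
        "reworked_in_code_only": code_fix_only,
    }

# Example: 100 errors reaching integration test.
print(split_integration_test_errors(100.0))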
Information in a small percentage of the project-C’s defect data indicates that they actually went back and fixed some of their design flaws, but due to what we determined was a low density of design defects (confirmed on project-C from the SDD review, observation of the dearth of design identified defects in the database, and discussions with contractor personnel and the government’s technical representative) the parameter does not have a significant effect on this project’s defect dynamics. In addition, the model calculates the design/code error density ratio in order to weight the number of defects that should naturally be selected by a review board to go through either full requirements and redesign or the lesser effort just ‘fix it in the code’ route. A test failure rate model (an exponential decay rate model) was included in the MMM for flexibility instead of simply relying on a constant test failure percentage. The augmented test matrix for the model does not explore the use of this capability and instead used a constant rate of 11%, which is what was found on project-C. It was however used in the test matrix for the dynamic data comparisons for project-A and project-C as seen in the “Use Const. Task Fail Rate” column of Table 38 through Table 47 in order to observe the effect and bound the defect dynamics in those cases. The defer task acceleration model was originally included to probe what has been seen and described on some projects as “launch chicken”. This is a term we use to indicate that late in the project there is an acceleration of defect rejections and deferrals and there is also an increase in deferring or not testing software functionality that had been included in the software but does not work. The game of “launch chicken” is a tense risk trade-off between the government and the contractor on what defects are 128 fixed, what functionality actually makes it into the software, and the amount of ground testing performed prior to launch. Project-C data however shows the opposite phenomenon; an early increased percentage of the defects were rejected or deferred which eventually stabilized to 30%. The augmented test matrix in the appendix for testing the differences between Madachy’s base model and the MMM simply used a reject/defer percentage of 5%, while the later defect dynamics validation section uses 30% for both project-A and project-C. Since Powersim does not have functionality similar to that used by Madachy to calculate the completion time of the project, we simply use the point in the data that the test effort stops changing (i.e. the test effort stops increasing and has stabilized at a constant value). This implementation results in just two days of difference to Madachy’s results in just one case in appendix-E, where the difference is typically either no days or one day. A future enhancement to the model can include an algorithmic abstraction of the manual method to automatically calculate the completion day for the simulated projects. To reach numerical agreement between iThink and Powersim, time calibration constants (‘Madachy DesignCode Dx_Dataspread’ and ‘Madachy Test Dx_Dataspread’) were added and empirically derived using trial and error during the implementation and testing of the base model. 4.4.3 Implementation and Testing Approach The first implementation in Powersim was for the sole purpose of recreating the original Madachy inspection-based model [17] [18] [21]. To verify that the implementation was correct, the test matrix in Madachy’s Ph.D. 
dissertation [20] was reused to test the model until sufficient 8 numerical 8 “Sufficient” here means that small numerical differences were observed, however these numerical differences could be attributed to differences between iThink and Powersim implementations. Further, no claim for the numerical accuracy of the tool’s model is required for this dissertation, where the modeled trend is the focus. A table with the numerical differences between the two implementations is provided in appendix-E Table 35. 129 agreement was reached between the Powersim and iThink implementations. 9 This approach allows us to consider the changes in this work as compared to those from Madachy. The results of this comparative testing phase (a simple subtraction of the results) are included in Table 35 of appendix-E, showing acceptable numerical differences between the two implementations. Madachy’s [20] test cases contain model parameter variations to investigate (Table 5.1.2-1 in his dissertation) the use of inspections, job size and productivity, error generation rates, design error multiplication, staffing profiles, schedule compression, ROI of inspections, and use of inspections per phase and testing effort. From his original suite of cases, the cases that were not re-executed here are 10.1, 10.2 and 14.1 through 16.12, although informal tests to verify the correct implementation of this functionality were performed. Madachy’s matrix was then augmented to test the implementation of the unit test and integration test feedback loop functionality under a widened variation of input parameters – in addition to those selected by Madachy to test his original model. These augmented tests greatly expand the original tests from Madachy and allow comparisons between the parameters introduced from the additional functionality to the baseline data cases in Madachy’s test matrix. In this manner, the changes to the model can then be quantified and thought about to determine if the results they produced make sense. 4.4.3.1 Results from Modeling Unit Testing with an Integration Test Feedback Loop A key finding from Madachy’s dissertation was the change in the total manpower rate from varying the inspection practice parameter. This parameter simulates a project’s use of inspections, with a value of 1 indicating a full use of inspections and a value of 0 used for no use of inspections. We provide (below) the results of adding unit testing in a similar manner as inspections and the integration test feedback loop functionality (abbreviated as IT in the titles). 9 Madachy provides his original iThink implementation at http://www.madachy.com/softwareprocessdynamicsorg/models/inspections.itm 130 Across a wide range of embedded mode software sizes, comparison between Figure 46 (below) and Figure 47 (next page) the simulation shows that simply adding unit testing to the model (with the integration test feedback loop disabled) increases only slightly the overall effort, and does not significantly affect the delivery time. 
Figure 46: Simulated Manpower Rate Without Unit Test or IT 131 Figure 47: Simulated Manpower Rate With Unit Testing but Without the IT Figure 48: Simulated Manpower Rates With the Feedback Induced Late Effort Spike 132 Figure 48, uses as an example a simulated 64 KSLOC embedded mode project - the modified model predicts the existence of a manpower rate increase as a direct consequence of using parameter settings for inspections and unit testing that has the effect of either not doing these, or not doing them effectively. This result represents a key finding from this research. The addition of the integration testing feedback loop predicts a manpower increase when not performing adequate defect removal activities in a software development activity, where no feedback loop misses this effort increase. Further, this finding is supported by the existence of a significant manpower increase in project-A that was needed to retest that software from the reduced up-front defect removal effort. Figure 49 shows the modeled effect from various combinations of unit testing and inspections, and will be discussed later. Figure 49: Simulated Manpower Rates with Various Defect Detection Parameter Settings 133 4.4.3.2 Counter Intuitive Manpower Rate Results A counter intuitive result (shown below in Figure 50) is the model’s prediction that by performing no unit testing and no inspections one can have a completion time that is slightly better than doing no inspections and partial (50%) unit testing. We would think that doing more unit testing we would get done sooner since we do not need to find and fix those defects in integration testing. This counter intuitive result is attributed to the values used in the model for the unit test time delay and the effort to rework the errors late in the lifecycle. The figure also shows the result of decreasing the unit test delay time from 10 days to 1 day. Hence, these results make perfect sense, and since there are a number of methods for implementing unit testing, either case can align with practice. Some example variations on unit testing include, having the developers do it themselves, using an independent team to provide the unit tests to the developers in parallel with code development effort, use of automated test creation tools and customer unit test documentation requirements which all work to increase or decrease the delay time. Factors affecting error rework delays are also numerous. Figure 50: Result with No Inspections Coupled with No Unit Testing or 50% Unit Testing 134 4.4.3.3 Effort Paths and Defects Found in Integration Testing Madachy’s original model also clearly demonstrates different effort paths when doing inspections (increases near-term up-front effort) and not doing inspections (decreases near-term up-front effort) on extending the overall effort. Our results shown in Figure 51 and Figure 52 also indicate multiple near-term and long-term effort paths on a simulated 32 KSLOC embedded mode software project and the dependency on the varying degrees of inspection and unit testing. Figure 51: Cumulative Effort Results from Varying Degrees of Inspections and Unit Testing Figure 52 on the next page is a magnified view of Figure 51 showing the multiple near-term paths in the period between weeks 200 and 240. This result may also seem to be counter intuitive. We find that doing no inspections and no unit testing in this period requires more near-term effort than doing no inspections and 50% unit testing. 
These effort-path differences are from the higher-effort incurred from a significant use of the integration test feedback loop after errors are found during integration testing which should have been found and removed during initial defect removal process. 135 Figure 52: Focused Look at Modeled Near-Term Effort Paths (Weeks 200 to 240) Figure 53: Results on Errors Found in Integration Test The modeled impact on the numbers of defects found in integration test from the various parameter settings for unit testing and inspections is shown above in Figure 53. The total number of 136 integration test errors discovered is also dependent on the error injection density and on integration testing effectiveness. The distance (in errors) between no inspections and no unit test, with that of no inspection and some unit test is attributed to the effectiveness value assigned to unit testing. The parameter for inspection effectiveness reused Madachy’s value of 0.6 throughout, where he notes that the literature reports values between 0.5 and 0.9 [21]. 4.4.3.4 Schedule Constraint Effects The qualitative and quantitative research results from chapter 3 indicate that a schedule- constrained project’s staff may cut unit tests, unit test rigor, inspection tasks or inspection rigor, reverse engineer the design from the code, and not perform design documentation upkeep and or creation of the documents associated with those tasks. The effect appears to contradict the modeled output for the number of errors escaping integration test if we simply view the results in Table 37 of appendix-E. The modeled output of IT errors indicates that there is actually an improvement in the number of defects as schedule pressure increases for cases 11.8-11.10 under the same parametric inputs for unit testing and inspections. The inputs for cases 11.2-11.4, which are indicative of attentiveness to high-quality show little impact on escaping errors, but an increase in errors found in IT. A possible interpretation for this apparent contradiction is found in the assumption that staff would not reduce testing or inspection rigor under schedule pressure, but would instead reduce non- quality impacting tasks. The simulation’s use of the COCOMO SCED parameter appears to have the opposite effect on pulling the schedule in and instead pushes the schedule out where the effect on the simulations is the reduction of tasks. (Simulations show tasks like requirements and design that do not complete in these test cases.) In addition, the simulation used here does not include a feedback loop on the error densities or on the inspection or unit testing practice from schedule pressure. A future investigation can include a schedule pressure feedback loop on these parameters in a manner as was done by Abdel-Hamid [22], where he quotes a number of authors concerning the impacts of schedule pressure on developers including “People under time pressure don’t work better, they just work faster…” Further, he specifically quotes the following from Thibodeau and Dodson [23], 137 When coding has begun before the completion of design, the designers are required to communicate their results to the programmers in a raw, unqualified state, hence significantly increasing the chance of design errors… . This is not to suggest that systems cannot be developed with overlapping activities. Many systems have distinct parts that can be coded before the entire design is completed. 
… We are concerned here with the situation where the press of the development schedule or the slippage of preceding activities results in overlapping activities that would have been accomplished better sequentially. Hence, it is noted here, the need for accounting for the likelihood a schedule-pressured team will cut partially or completely the design-phase, and defect detection and removal tasks based on perceived schedule pressure. 4.5 Simulation of Space Flight Software Projects This section provides the approach and results from using the model as a tool to investigate the defect dynamics for two of the available space flight software projects. Here we attempt to align the dynamics of the total numbers of errors found in two of the software projects with the modeled dynamic ‘Errors Found in IT’ data. 4.5.1 Approach for Studying Flight Software Defect Discovery Dynamics Using the actual staffing curves (shown in Figure 15 through Figure 19 of chapter 3) we begin by creating a large test matrix that attempts to vary the model’s parameters we wish to investigate and recording as results the daily values for the ‘Errors Found in IT’, and the ‘Errors Escaping Integration Test’ for the dynamic comparisons. This test matrix and the final calculated value that resulted from the varying defect densities and the inspection or test rigor for these two parameters are provided in Table 38 through Table 47 of appendix-E. From the simulation results of the primary test cases (test cases 0.1 through 8.5 below in Table 19) extra test cases were added for project-C to investigate dynamic properties of the defects at ‘ultra low’ design defect densities or code density, which appeared to have a significant effect on the dynamics of defect discovery from the results of tests in 1.x. In addition, test cases were added to investigate switching Madachy’s values [24] (obtained from Boehm’s [25] Table 6-8 in Software Engineering Economics) for the fraction of effort spent in design and coding of 0.454 and 0.2657 respectfully, due to 138 the observation that the engineers were not adhering to a design first then code strategy, which is what Boehm’s set of 63 projects consisted of [26]. Due to the significant amount of data generated by the simulations, the investigation for project-C simply used a small investigation by adding test case 9.x to test the dynamic effects on an unmodified staffing curve using the effort fractions switched on the 0.x set of tests. Table 19: Test Cases for Flight Software Defect Discovery Dynamics Case ID Project (A/C) Staff Curve Type (Unmodified/Modified) Effort Fraction Type (Base/Switch) Dynamics Investigated RW (Unmodified) Base/Switch A MD (Modified) Base/Switch RW (Unmodified) Base/Switch 0.x C MD (Modified) Base/Switch Reference case – Equal design and code density levels that obtain the ‘same’ 10 number of defects as the real flight software defects with varying but equal values for inspection and unit test practice and unit test effectiveness. RW (Unmodified) Base/Switch A MD (Modified) Base/Switch RW (Unmodified) Base 1.x C MD (Modified) Base/Switch Effect of un-equal design and code density levels on the dynamic shape. RW (Unmodified) Base/Switch A MD (Modified) Base/Switch RW (Unmodified) Base 2.x C MD (Modified) Base/Switch Effect of low integration test effectiveness on the overall dynamic shape and the change in the resulting numbers of defects passed through to later test phases. 
RW (Unmodified) Base/Switch A MD (Modified) Base/Switch RW (Unmodified) Base 3.x C MD (Modified) Base/Switch Effect of decreasing the COCOMO SCED parameter on the dynamic shape. RW (Unmodified) Base/Switch A MD (Modified) Base/Switch RW (Unmodified) Base 4.x C MD (Modified) Base/Switch Effect of disabling the model’s test effort adjustment algorithm on the dynamic shape. 10 The search for the ‘same’ number of quantitative errors found in the chapter 3 data used a manual method of varying the defect densities according to the test case to obtain a final value for the simulated ‘Errors Found in IT’ parameter that rounded to the same value as the total number of defects in the project’s defect database for all the defects. The manual method used an initial ‘guess’ value and if the final value of the simulated parameter was greater than the observered actual total, the guess parameter was reduced until the final value either rounded off to the total number in the defect database, or was as close as was possible for the particular test case. Future automation of the method can use any of the root-searching algorithms in the literature. 139 Table 19: Continued Case ID Project (A/C) Staff Curve Type (Unmodified/Modified) Effort Fraction Type (Base/Switch) Dynamics Investigated RW (Unmodified) Base/Switch A MD (Modified) Base/Switch RW (Unmodified) Base 5.x C MD (Modified) Base/Switch Effect of the combined parameter changes from test cases 2.x and 4.x on the dynamic shape. RW (Unmodified) Base/Switch A MD (Modified) Base/Switch RW (Unmodified) Base 6.x C MD (Modified) Base/Switch Effect from using an alternate task failure rate model (exponential decay model) on the dynamic shape. RW (Unmodified) Base/Switch A MD (Modified) Base/Switch RW (Unmodified) Base 7.x C MD (Modified) Base/Switch Effect of reducing the time delay from unit testing. RW (Unmodified) Base/Switch A MD (Modified) Base/Switch RW (Unmodified) Base 8.x C MD (Modified) Base/Switch Effect of reducing the time delay from defect review boards. 9.x C RW (unmodified) Switch Effect of switching the effort fraction for just project-C on the 0.x reference test case on the dynamic shape. RW (unmodified) Switch 10.x C MD (modified) Switch Effect of ultra-low defect densities on the dynamic shape. Following the execution of the test cases for the selectively changed parameters, the approach is to vary the error injection densities in an attempt to obtain the same total number of errors found in each of the projects. Then by observing the dynamics of plots of the resulting ‘Errors Found in IT’ dynamic data and the actual errors from the project we can determine how well the model aligns with the project’s actual dynamics and can use the observations in concluding discussions. During the execution of these test matrices a reservoir ‘starving’ behavior and a numeric instability issue was noticed in a small number of the extreme cases in appendix-E Table 38 through Table 47. In Powersim a ‘starved’ reservoir is one that is drawn below zero, where the data value is 140 replaced with the “?” symbol, and time plots will show a spike at that point. These cases are found on reservoirs without first order control. The starving behavior was also observed when putting in an initial guess value for the design or code densities, which were quite a bit different than the values needed to obtain the observed number of defects. 
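The manual density-matching procedure described in the footnote above could be automated with any standard root-searching method. The sketch below uses simple bisection and assumes a hypothetical run_simulation(density) callable that executes the model with a candidate error-injection density and returns the final 'Errors Found in IT' value, assumed to increase monotonically with the injected density; the bracket and tolerance values are illustrative.

def match_defect_total(run_simulation, target_errors,
                       low_density=0.0, high_density=100.0,
                       tolerance=0.5, max_iterations=60):
    """Bisection search for the error-injection density whose simulated
    final 'Errors Found in IT' value matches the observed defect count."""
    for _ in range(max_iterations):
        mid_density = 0.5 * (low_density + high_density)
        simulated = run_simulation(mid_density)
        if abs(simulated - target_errors) <= tolerance:
            return mid_density
        if simulated > target_errors:
            high_density = mid_density   # guess too high, reduce it
        else:
            low_density = mid_density    # guess too low, increase it
    return 0.5 * (low_density + high_density)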
The numeric instability issue is best described as noise in the ‘Errors Found in IT’ at the target value, and typically on the order of less than +/- 5 errors (requiring higher fidelity defect densities to obtain the observed numbers of errors) but was the most extreme on the project-A MD5.5 case. Examples of both of these are provided at the end of appendix-F. 4.5.2 Approach for Modifying Staffing Curves Prior to using the approach above, the interpolated staffing curves from chapter 3 were split into the work break down areas of requirements and design, code and test. Initially, this is based on the available staff organizational positions of systems engineers, developers, and testers with management and support roles distributed evenly between these functional areas. Modification curves are then used to subjectively adjust each of the interpolated staff curves in an attempt to account for personal observations and other information concerning the actual tasks the team members were working on obtained from interviews and discussions with project personnel or what EVM data was available in the project’s software development folders. The primary issue is aligning organizational charts with tasks in the modeled work break down areas. For example, are programmers coding or are they working on requirements, and are testers testing the code or are they building the test hardware? The preference, of course, is to extract staff task information for requirements and design, code and test directly from a project management tool such as Microsoft’s Project or other Earned Value Management (EVM) task/staff management tool. 11 The approach used here to compensate for this issue is to compare the flight software defect dynamics using the ‘raw’ importation (filling the modification curves with values of 1) of the 11 Data accumulation with this informational source for modeling is underway. 141 interpolated staff information and filling the modification curves with values that either increase or decrease the staff curves for the work areas. 4.5.3 Simulation Results and Discussion for Flight Software Defect Discovery Dynamics Simulated flight software defect dynamics using the ‘raw’ importation staff curves and the subjective modification are provided in appendix-F in Figure 73 through Figure 85 for projects-A and C with their cumulative defect distributions. The staffing curves that were derived from the project data are included in appendix-E Table 48 and Table 49. These results are reasonable provided the accuracy of the staff information obtained from the projects; of particular interest is how close the model approximates project-C with low design defect densities when the effort fraction is switched 12 in the model. A low design defect density was an observation made in chapter 3, after the initial design issues led to the revitalized design effort. Hence, closing the loop on this observation requires the addition of test cases that use Madachy’s original staffing profile. We try the case where requirements and design occur in parallel by the same staff who reverse engineer the requirements and design from the code by summing the two curves, and renormalize the resulting summed curve using 25% of the resulting summed staff curve for requirements and design, and 75% of the staff curve for code to simulate the concurrent engineering seen in these projects. 
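The curve manipulation just described, summing the interpolated requirements and design staff curves and re-allocating the total as 25% requirements-and-design and 75% code, is a simple element-wise operation. The sketch below is a minimal illustration, with short hypothetical lists standing in for the interpolated staff curves.

def renormalize_for_concurrent_engineering(requirements_curve, design_curve,
                                           design_fraction=0.25,
                                           code_fraction=0.75):
    """Sum two staff curves and re-split the total to simulate staff who
    reverse engineer requirements and design while writing code.

    Each curve is a list of headcount values sampled at the same times.
    Returns (requirements_and_design_curve, code_curve).
    """
    summed = [r + d for r, d in zip(requirements_curve, design_curve)]
    req_design = [design_fraction * s for s in summed]
    code = [code_fraction * s for s in summed]
    return req_design, code

# Example with three sample points from two hypothetical interpolated curves.
rd_curve, code_curve = renormalize_for_concurrent_engineering(
    [2.0, 4.0, 3.0], [1.0, 2.0, 2.0])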
For comparison we simply re-execute the U2.x test cases from Table 36 with the effort fraction as it was originally set, and with the values switched (numeric results of this experiment are provided on the next page in Table 20). 12 The Madachy values for fraction of effort spent in design or code were “switched” or “swapped”, to simulate the observations we made concerning the contractor’s concurrent engineering practices. 142 Table 20: Results from Simulating Code and Design Reverse Engineering TC # Design Insp. Prac. Code Insp. Prac. Unit Test Prac. Unit Test Effic. Switched Effort Errors Found in IT Errors Escaping Integration Test MC2.1 1 1 1 1 No 275 51 MC2.2 0.5 0.5 1 1 No 474 70 MC2.3 0 0 1 1 No 568 78 MC2.4 1 1 0.5 1 No 389 64 MC2.5 0.5 0.5 0.5 1 No 484 71 MC2.6 0 0 0.5 1 No 602 80 MC2.7 1 1 0.5 0.5 No 392 64 MC2.8 0.5 0.5 0.5 0.5 No 496 72 MC2.9 0 0 0.5 0.5 No 870 107 MC2.10 1 1 0 1 No 393 64 MC2.11 0.5 0.5 0 1 No 687 92 MC2.12 0 0 0 1 No 1188 146 MC2.13 1 1 1 1 Yes 473 74 MC2.14 0.5 0.5 1 1 Yes 821 104 MC2.15 0 0 1 1 Yes 985 117 MC2.16 1 1 0.5 1 Yes 672 91 MC2.17 0.5 0.5 0.5 1 Yes 836 107 MC2.18 0 0 0.5 1 Yes 1041 122 MC2.19 1 1 0.5 0.5 Yes 677 92 MC2.20 0.5 0.5 0.5 0.5 Yes 859 107 MC2.21 0 0 0.5 0.5 Yes 1503 165 MC2.22 1 1 0 1 Yes 678 92 MC2.23 0.5 0.5 0 1 Yes 1191 137 MC2.24 0 0 0 1 Yes 2053 223 The results above show an increase in the numbers of ‘Errors Found in IT’ and a corresponding increase in the number of errors escaping integration test. This makes sense as these activities are best done sequentially to obtain the optimal benefit from design and code reviews. Looking at the tasks model for these cases we also discover that the requirements phase does not complete all of its tasks, and the design phase became ‘starved’. Analogous to this ‘starved’ numerical behavior is the qualitative observation that the requirements and design documents on these projects either go through numerous revisions from the addition of manpower resources and thus effort or they simply do not get completed. We take this as a sign that the design is not done. The numerical number of defects for both of these cases is less than those found in U2.x, suggesting that either the modeled parameter is not set to the correct level for these cases, or the feedback loop is not modeled correctly. Further, for these cases we did not modify the design error 143 amplification factor, which undoubtedly needs to be increased for attempting concurrent engineering. Likewise, the model does not correctly align the tasks to go from code into design – as the design tasking is modeled to precede the code in Madachy’s original model. We know from experience that incomplete and unverified designs can lead to design errors resulting in significant design rework effort. We see this in the case study data for project’s C (initially) and also project-D. Furthermore, we have found the quality of our design first projects to be superior. And yet, even though we try and explain this fact to our customer and the contractors we continue to see evermore complex contractor strategies to reverse engineering designs from code. 4.5.4 Latin Hypercube Sampling for Quality, Schedule, and Cost-Driven Projects Efficient Latin Hypercube sampling is used to investigate early effort quality strategies that emphasize defect detection methods of inspection and unit testing and those strategies that emphasize early effort minimization for short-term schedule or cost benefits. 
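The sampling scheme itself is straightforward to sketch. The minimal example below is illustrative only; the parameter names, ranges, and sample count are stand-ins, not the distributions actually used (those are tabulated in appendix-G).

import random

def latin_hypercube_sample(parameter_ranges, n_samples, seed=0):
    """Draw a Latin Hypercube sample over the given parameter ranges.

    parameter_ranges maps a parameter name to a (low, high) tuple.  Each
    parameter's range is cut into n_samples equal strata; one value is
    drawn per stratum and the strata are shuffled independently for each
    parameter, so every stratum of every parameter is covered exactly once.
    """
    rng = random.Random(seed)
    samples = [{} for _ in range(n_samples)]
    for name, (low, high) in parameter_ranges.items():
        width = (high - low) / n_samples
        strata = list(range(n_samples))
        rng.shuffle(strata)
        for i, stratum in enumerate(strata):
            samples[i][name] = low + (stratum + rng.random()) * width
    return samples

# Illustrative quality-driven design: inspection and unit test practice
# sampled near their high ends.
quality_driven = latin_hypercube_sample(
    {"inspection_practice": (0.7, 1.0),
     "unit_test_practice": (0.7, 1.0),
     "error_injection_density": (10.0, 40.0)},
    n_samples=20)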
The distributions are included in appendix-G with tabular results while quality and schedule Total Manpower Rates are shown to the same scale below in Figure 54 and on the next page in Figure 55. Figure 54: Latin Hypercube Sampling for Schedule-Driven Processes 144 Figure 55: Latin Hypercube Sampling for Quality-Driven Processes The simulated results indicate that the quality-driven strategy increases up-front effort, however the process consistently provides significantly better schedule performance than those processes that attempt to minimize quality processes in an up-front effort minimization attempt. This result represents a second key finding from this research. Further, the fact that defects will bleed through late life cycle test processes (in the same manner that they can bleed through the up-front processes) instinctively suggests that any up-front quality effort-reduction strategy will have higher defect rates. 4.6 Conclusions and Discussion Ogata [27] explains what is involved in the analysis and design of dynamic systems as: (1) system analysis which is the investigation, under specified conditions, of the performance of a system whose mathematical model is known, thus requiring as the first step the derivation of the mathematical model, (2) system design which is the process of a trial and error approach that finds a system that accomplishes the task, and (3) synthesis which is the explicit procedure for finding a system that will perform in a specified manner. He also notes that the process of prototyping is opposite to that of mathematical modeling, where a prototype is a physical system that represents the mathematical model within reason. Necessarily, the process requires that the engineer builds a prototype and tests it, to 145 determine whether or not it is satisfactory, and then does this process again until the prototype satisfies the requirements of the system. This final iterative prototyping process is not achievable with software intensive systems development because it is highly dependent on humans in the loop and thus cost prohibitive. If we only had an infinite budget, we could then run the “grand experiment” consisting of multiple non-interacting teams of thousands of people setting out to build the same software intensive system. System dynamics models from Abdel-Hamid, Madachy and others can, however, be used in lieu of this “grand experiment” where the model’s representation of the effort for humans in the loop and the software development dynamic processes with examples from real software intensive systems must suffice. The system dynamics model then becomes a research tool as used here where numerical experiments reasonably approximate reality. We can then explore many types of relationships among parameters and assess their impact to determine an optimal set of policies. In the two software intensive satellite projects investigated in detail here, the one that used the rigorous design and quality-driven effort – even after an initial design flaw led to a concentrated re-design effort, out performed in every respect the schedule-cost driven project. The quality-driven project’s dynamic data compared reasonably well to the modeled ‘Errors found in IT’ parameter when we switched the modeled fraction of effort value used for requirements and design, and code, which we felt was reasonable based on interviews and the qualitative results. In both cases it was observed that consistent data on the staff’s actual tasking should provide better results. 
Further, the data that was modeled included all of the defects found in each of the project’s defect database, as the staff filing software defects from discoveries made during testing of the actual flight software is presumed to be some fraction of the total test staff. Project-A’s simulation includes a significant staff spike modification curve value at the beginning of the test staff ramp up. This was used to simulate the presumed effect of having a defect database coming on line with a backlog of defects primarily against the requirements documents that needed to be logged. 146 The Latin Hypercube as well as the test matrix results indicate that increased effort in quality processes will consistently provide better schedule performance with less variability than will almost every attempt to minimize these quality processes during the initial requirements, design, and coding phases. 147 C h a p t e r 4 E n d n o t e s [1] L. M. K. Boelter, (1957), from Classic Quotes, Internet; available from http://www.quotationspage.com/quote/27234.html. [2] John D. Sterman, Business Dynamics: Systems Thinking and Modeling for a Complex World, McGraw-Hill, (2000): 5. [3] Ibid., 41. [4] C. Welch, Lessons Learned from Alternative Transportation Fuels: Modeling Transition Dynamics, National Research Energy Laboratory Technical Report NREL/TP-540-39446, (February 2006). [5] Madachy, Software Process Dynamics, 3-4. [6] Ibid., 211-268. [7] Katsuhiko Ogata, System Dynamics 4th ed., Pearson Prentice Hall, (Upper Saddle River, NJ: 2004): 6. [8] Ibid., 4. [9] William J. Palm III, System Dynamics, McGraw Hill Higher Education, (New York, NY: 2005): 4-5. [10] Sterman, Business Dynamics, 86. [11] Madachy, Ph.D. Dissertation, 53-58. [12] Palm III, System Dynamics, 114, 254. [13] Ogata, System Dynamics 4th ed., 8, 169-171. [14] Palm III, System Dynamics, 263. [15] Powersim Studio™ Academic 2005 (6.00.3423.6) Service Release 6, Copyright© 1993-2006 Powersim Software AS (Product Code: PSSA-N030306-DRI##): Internet reference, http://www.powersim.com/main/resources/technical_resources/technical_support/, (last visited September 21, 2007). [16] Madachy, Software Process Dynamics, 277-281. [17] Raymond J. Madachy, “System dynamics modeling of an inspection-based process,” Proceedings of the 18th international conference on Software engineering, (1996): 376-386. [18] Madachy, Software Process Dynamics, 275-288. [19] Madachy, Ph.D. Dissertation. 148 [20] Ibid., 108-110. [21] Ibid., 41. [22] Tarek Abdel-Hamid and Stuart E. Madnick, Software Project Dynamics: An Integrated Approach, Prentice Hall Software Series, (Englewood Cliffs, New Jersey: 1991): 100-103. [23] R. Thibodeau, and E.N. Dodson, “Life Cycle Phase Interrelationships,” Journal of Systems and Software, Vol. 1, 1980, 203-211. [24] Madachy, private communication. [25] Boehm, Software Engineering Economics, 90. [26] Boehm, private communication. [27] Ogata, System Dynamics 4th ed., 5-6. 149 C H A P T E R 5 : G A M E T H E O R Y Science is the search for truth - it is not a game in which one tries to beat his opponent, to do harm to others. We need to have the spirit of science in international affairs, to make the conduct of international affairs the effort to find the right solution, the just solution of international problems, not the effort by each nation to get the better of other nations, to do harm to them when it is possible.[1] 5. 
Introduction Modern game theory evolved from the initial works of Zermelo (1913), Borel (1921), von Neumann (1928), von Neumann and Morgenstern (1944), and a series of papers from Nash (1950, 1951, 1953), while the initial concepts for the theory can be traced to the Babylonians in the Talmud [2] [3] [4]. Isaacs is regarded as the first to address differential game theory and dynamic games in work that began after he joined RAND in 1948, where work on game theory was underway [5]. In addition to Isaacs, the list of renowned researchers working on game theory at RAND included Richard E. Bellman, Leonard D. Berkovitz, David H. Blackwell, John M. Danskin, Melvin Dresher, Wendell H. Fleming, Irving L. Glicksberg, Oliver A. Gross, Samuel Karlin, John F. Nash, and Lloyd S. Shapley [5]. In general, game theory is used to develop optimal strategies for action in competitive situations with two or more ‘players’ of the game [6]. Conventional ‘static’ game theory identifies the single discrete decision that a player must make in a non-temporally dependent ‘game’ for any specific strategy. Assuming the other player also selects a strategy governing the decisions of ‘his’ play, the outcome of the game is then completely determined [7]. Players that attempt to analyze the other player’s payoff (or ‘win’ conditions) can determine an optimal strategy for ‘his’ play based on the situation of the game (for example cooperative or non-cooperative). Dynamic games are those games in which the order of a player’s discrete decisions is important, while differential game theory uses (as the analogy to discrete decisions) control variables in differential equations that players set to control the continuous time-dependent state variables [7]. In both of these cases the outcome is completely determined depending on the player’s strategy for making the discrete decisions or setting the control variables. 150 Even though most games in our real-world space system acquisition situations are inordinately complex with numerous players, strategies, and dynamic situations; this chapter will show that it is possible to extract specific situations that can be simplified into strategic categories, thus allowing the application of game theory analytical methods. Our goal then is to demonstrate that game theory methods can be used to guide the selection of policies that support optimal software intensive system acquisition strategies. This chapter first includes an introduction to game theory static methods through the use of simple example games and then includes applicable theory from the literature and associates with the theory arguments based on the case study data. 5.1 Background The sub-sections in this section provide background information about the different types of games. 5.1.1 Normal Form Games (Static Games) Common static games are the two-person zero-sum game, the two-person non-zero-sum game, and the prisoner’s dilemma. In the two-person zero-sum game, one player’s gain is the other’s loss [8] [9]. The non-zero-sum game covers situations where the gain of either player does not in general lead to a loss to the other player by the same amount [9]. Dresher and Flood [9] devised the prisoner’s dilemma while at RAND in 1950 to describe a non-zero-sum game that has an outcome that is Pareto inefficient (i.e. there is another outcome that would give both players higher payoffs). 
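The Pareto inefficiency of the prisoner's dilemma outcome can be checked mechanically. The sketch below uses a generic payoff matrix whose values are illustrative and are not drawn from any game analyzed later in this chapter.

# Strategies: "C" = cooperate, "D" = defect.  payoffs[(s1, s2)] gives
# (Player 1 payoff, Player 2 payoff) for that strategy pair.
payoffs = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}

def is_nash(s1, s2):
    """True if neither player can gain by unilaterally switching."""
    p1, p2 = payoffs[(s1, s2)]
    best1 = max(payoffs[(a, s2)][0] for a in "CD")
    best2 = max(payoffs[(s1, b)][1] for b in "CD")
    return p1 == best1 and p2 == best2

def is_pareto_dominated(s1, s2):
    """True if some other outcome gives both players strictly higher payoffs."""
    p1, p2 = payoffs[(s1, s2)]
    return any(q1 > p1 and q2 > p2 for q1, q2 in payoffs.values())

# (D, D) is the unique Nash equilibrium, yet it is Pareto dominated by (C, C).
assert is_nash("D", "D") and is_pareto_dominated("D", "D")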
5.1.2 A Solution Example for a 3x3 Zero-Sum Game 13
The game matrix is used to show the 'payoff' to either player depending on the player's strategy. Figure 56 on the next page shows the game matrix for a two-person zero-sum game where both players have the same strategies, while Figure 57 re-displays the same game with player 1's payoffs (this works in zero-sum games because player 2's payoffs are just the negative of player 1's).
13 The games in this section follow the general solutions provided by Straffin in his discussion of methods for the solution of games in order to demonstrate how different decisions are driven by a player's strategy selection.
Figure 56: 2-Person Zero-Sum Game Matrix with 3 Strategies Each
Figure 57: Zero-Sum Game Matrix with Player 1 Payoffs Displayed
The decision movement diagram for determining the best strategy (based on the other player's strategy) is constructed by drawing row arrows from the largest to the smallest entry in each row, and column arrows from the smallest to the largest entry in each column, as shown below in Figure 58 [9].
Figure 58: The Game's Decision Movement Diagram
If the maximin and the minimax are the same entry in a matrix game, then the game is said to have a 'saddle point' [9]. A method for finding the saddle point (if the game has one) is to find the maximum entry in each column and the minimum entry in each row, as shown in Figure 59.
Figure 59: The Game's minimax and maximin
The game in Figure 59 does not have a saddle point. For games with a saddle point, the entry amount is the 'value' of the game. In any matrix game, the 'value' of the game is the amount that Player 1 can guarantee as a minimum payoff with some strategy, while Player 2 has a strategy that guarantees Player 1 will not win more than this amount [9]. Finding those strategies and the game's 'value' is called the 'solution' of the game. In games (such as this one) without a saddle point, an alternative method must be used to solve the game. So far, the strategies have been 'pure' strategies, meaning that each player selects one of the options and plays only that option. This type of play is considered rational if the selection of strategies leads to an equilibrium outcome of the game. This situation would exist in the original example game if two of the amounts were changed in Player 1's B strategy such that the direction of the arrow in that row reversed. This modified game's decision movement diagram is shown in Figure 60, while Figure 61 shows the saddle point for the new situation.
Figure 60: The Modified Game's Decision Movement Diagram
Figure 61: The Modified Game's Saddle Point
In the modified game, Player 2 has every incentive to simply play the dominant strategy C. Player 1 in this case would prefer to play strategy B, thus guaranteeing no loss. The solution for this game is a 'value' of 0: Player 1 has the pure strategy of B, and Player 2 has the pure strategy of C.
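The maximin/minimax saddle-point test can be automated directly. The sketch below is illustrative; the first payoff matrix is reconstructed here from the expected-payoff expressions worked in the mixed-strategy calculation that follows (Player 1's payoffs only), and the second matrix is a made-up example that does possess a saddle point.

def find_saddle_point(payoffs):
    """payoffs[i][j] is Player 1's payoff when Player 1 plays row i and
    Player 2 plays column j.  Returns (row, col, value) when the maximin
    equals the minimax, otherwise None (a mixed strategy is required)."""
    row_minima = [min(row) for row in payoffs]
    col_maxima = [max(col) for col in zip(*payoffs)]
    maximin = max(row_minima)
    minimax = min(col_maxima)
    if maximin != minimax:
        return None
    return row_minima.index(maximin), col_maxima.index(minimax), maximin

# Player 1's payoffs, rows A, B, C versus Player 2's columns A, B, C.
original_game = [[20, 10, -10],
                 [-10, 0, 10],
                 [10, -10, -20]]
print(find_saddle_point(original_game))      # None: a mixed strategy is needed

# A purely illustrative game that does have a saddle point.
example_with_saddle = [[4, 2, 3],
                       [1, 0, -2]]
print(find_saddle_point(example_with_saddle))  # (0, 1, 2)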
The solution to the original zero-sum game in Figure 57 requires a mixed strategy. One method for determining a mixed strategy is to solve the three 2x3 sub-games to find a mixed strategy for Player 2 that will minimize Player 1's payoff.
Figure 62: Player 2's Optimal Mixed Strategy to Minimize Player 1's Payoff
Figure 63: No Mixed Strategy Solution for Player 1's use of A-C Strategy
Figure 64: Player 2's Mixed Strategy for Player 1's B-C Mixed Strategy
We proceed by trying the method of equalizing expectations [9]. We do this by assuming that Player 2 plays the game using probabilities of x, y and 1-x-y. The amount that Player 1 would then expect to win (or lose) for each of the three possible 2x3 mixed strategies is
Player 1's A: x(20) + y(10) + (1-x-y)(-10) = -10 + 30x + 20y
Player 1's B: x(-10) + y(0) + (1-x-y)(10) = 10 - 20x - 10y
Player 1's C: x(10) + y(-10) + (1-x-y)(-20) = -20 + 30x + 10y
Setting these equal to each other yields the following equations,
Player 1's A-B: -10 + 30x + 20y = 10 - 20x - 10y, giving 5x + 3y = 2
Player 1's A-C: -10 + 30x + 20y = -20 + 30x + 10y, giving y = -1
Player 1's B-C: 10 - 20x - 10y = -20 + 30x + 10y, giving 5x + 2y = 3
Solving these equations leads to x = 1 and y = -1. This solution leads us to conclude that the optimal mixed strategy is for the 2x2 sub-game, which corresponded to the minimax, maximin cells.
Figure 65: 2x2 Sub-game in the Original 3x3 Game
Solution for this 2x2 sub-game is accomplished in the same manner as the 2x3 sub-games.
Player 1's A: x(10) + (1-x)(-10) = -10 + 20x
Player 1's B: x(0) + (1-x)(10) = -10x + 10
Player 1's A-B: -10 + 20x = -10x + 10, giving x = 2/3
With the payoff,
Player 1's A: (2/3)(10) + (1/3)(-10) = 10/3
Player 1's B: (2/3)(0) + (1/3)(10) = 10/3
Thus, if Player 2 plays a mixed strategy consisting of 2/3 B and 1/3 C (perhaps by generating a uniformly distributed random number between 0 and 1 – selecting B if the number was > 1/3 ≈ 0.333) he can guarantee that Player 1 will not win more than the game's value of 10/3 (to within the precision of his random number generator). Of course, Player 1 could always play sub-optimally and choose the C strategy, which would always be to Player 2's benefit if he were playing his optimal strategy. From Player 1's perspective, he can surmise that Player 2's strategy is to select 2/3 B and 1/3 C, guaranteeing that he will not obtain more than a payoff of 10/3. We can easily show using the same method that Player 1's optimal strategy is to counter with 2/3 B and 1/3 A, guaranteeing a minimal payoff of 10/3. This is the solution for this game. 5.1.3 3x3 Non-Zero-Sum Game A possible 3x3 non-zero-sum game (assuming no communication is allowed) is displayed below in Figure 66, with the Player 1 decision diagram in Figure 67, and the Player 2 decision diagram in Figure 68 on the following page.
Figure 66: An Example 3x3 Non-Zero-Sum Game
Figure 67: Player 1's Decision Diagram for the Non-Zero-Sum Game
Figure 68: Player 2's Decision Diagram for the Non-Zero-Sum Game
If both players go for their individual maximum payoff (the pure B strategy) with a complete disregard for the other player's strategy, the result is bad for both players. They both end up with the BB result and lose. The A strategy gives both players the greatest likelihood of winning something. Looking for strategy dominance, it is easy to see that each player's A strategy dominates their C strategy. Hence, each player is likely to realize that the probability of winning something is better than the guarantee of winning nothing. However, either player could always select their C strategy and guarantee that they would not win or lose anything. If both players play this conservative strategy the result is that neither player wins or loses. However, should either player surmise that the other player was going to play it safe and choose their cooperative A strategy, then that player could non-cooperatively take advantage of the other player and choose B. AB and BA are pure-strategy equilibria in this non-zero-sum game; these are the so-called Nash equilibria [9]. However, as discussed, if each player plays toward his own preferred pure-strategy Nash equilibrium, both players are placed into an unfortunate losing situation; this is a difference from what was found for zero-sum games. The same method used for zero-sum games is used here in an attempt to identify a mixed strategy solution. This is the prisoner's dilemma game: should each player select the strategy that is in his individual best interest, the result is an outcome that is bad for both players [10]. 5.1.4 Cooperative Bargaining and The Nash Solution Suppose the players in the games from this sub-section, instead of continuing to play non-cooperatively, agree to sit down and arbitrate a fair cooperative solution. Von Neumann and Morgenstern argued that any such solution must be: (1) Pareto optimal – that is, there should be no other outcome that is better for both players, and (2) neither player should be forced to accept less than he could guarantee himself from non-cooperative play [11]. The negotiation set for the players is the set of pure and mixed solutions of the game [11]. The question becomes, can we identify the single outcome that is the fairest to all players? John Nash proposed an idea of how to identify this solution for two players using optimal threat strategies from both players, in what has come to be called the Nash arbitration scheme [11]. Nash began his solution with the following four axioms: (1) The Rationality Axiom. The solution point should be in the solution set. (2) The Linear Invariance Axiom. If either player's utilities are transformed by a positive linear function, the solution point should be transformed by the same linear function. (3) The Symmetry Axiom. If the polygon happens to be symmetric about the line of slope plus one through the status quo point (this is an agreed default point that is used if the arbitration fails), then the solution point should be on this line. (4) The Independence of Irrelevant Alternatives Axiom.
Suppose there is a solution in the negotiation set for a polygon (which is a graphical representation of the player’s payoffs) that contains within its boundaries the point we call the status quo. Suppose there is a second polygon, which also contains the status quo point and the solution, but this second polygon is completely contained with in the first polygon. Then the solution should be common for both polygons. Not everyone agrees to all four of Nash’s axioms, where Staffin [12] points to a paper from Kalai and Smorodinsky that shows Nash’s solution is unfair if new outcomes become available that improves a player’s position. Nash, however, proved that there is one and only one arbitration scheme, 159 which satisfy all four of his axioms. We have provided this view to later propose a method using Nash’s bargaining solution to the observed corner cutting behavior on software intensive systems. 5.2 Austin’s Original Expanded Normal Form Game Austin [13] provides a 3x3 game between two software developers (agents), which is shown below in Figure 69. Figure 69: Austin’s Original Expanded Normal Form Game for Developer Quality Decisions 14 Each developer is faced with a choice between high quality and low quality or one of adding effort that depends on a nature assigned probability p that the agent is assigned a task with a deadline that is unachievable without taking quality cutting measures, and a perceived career penalty C by the agent for not completing the task on time. Q 1 and Q 2 represents the penalty from concern for quality that is accrued against all agents where the subscript 1 indicates that one agent has cut quality corners while the 2 subscript is the penalty for both agents cutting corners, such that Q 2 > Q 1 . This penalty represents 14 Reprinted by permission, Robert D. Austin, The effects of time pressure on quality in software development: An agency model, Information Systems Research, volume 12, number 2, (June, 2001). Copyright 2001, the Institute for Operations Research and the Management Sciences, 7240 Parkway Drive, Suite 300, Hanover, Maryland 21076 USA. 160 damage to the organization’s reputation or future profitability and affects all agents, while the penalty C affects just the single agent [16]. Austin specifically did not include a penalty for ‘getting caught’ in his treatise and further noted that the independence of p was a strong assumption. EP denotes the penalty to the developer for adding effort. Organizations can (and most do) incentivize developers with overtime pay to compensate for the addition of effort (this of course increases the cost to the customer). However, the likelihood for getting personally caught and punished for cutting corners and thus incurring the penalty C is absent from our contracts. Especially, when management and developers cooperate to remove peer reviews, or the team pushes through the peer reviews at many times the normal rate by not adequately reviewing the documentation or the code, etc. Hence, from this argument we see that the slopes of the lines in Figure 70 can change based on the penalties or incentive structure. Figure 70: Austin’s Diagram for Adding Effort as an Alternative to Shortcut-Taking 15 15 Reprinted by permission, Robert D. Austin, The effects of time pressure on quality in software development: An agency model, Information Systems Research, volume 12, number 2, (June, 2001). 
Copyright 2001, the Institute for Operations Research and the Management Sciences, 7240 Parkway Drive, Suite 300, Hanover, Maryland 21076 USA. 161 5.3 N-Player System Development Games In N-player games the decision strategies result in a multi-dimensional form, so for simplicity, consider a 3-player game. The resulting multi-dimensional form is a cube in decision space, where there is now the possibility of forming coalitions reducing it back into a two dimensional game. Coalitions can form in any game of three or more players. For Player’s A, B, and C, there are the possibilities of having coalitions between Players A and B which then play against C, Players B and C against A, and Players A and C against B. Depending on the payoffs between the coalitions, the coalitions can form dynamically depending on the situation, or reform based on changing alliances. Further, if there is cooperative play amongst all the players, then all the players win. Consider now a system development game consisting of Quality, Cost, and Schedule state- variables where a single player is made responsible for only one of these three state-variables (players 1, 2 and 3 for each of these state-variables to avoid confusion). Further, consider that the end goal for all the players benefit is one of sufficiently high quality, within a certain schedule and within a certain cost. This simple game creates a view of the world that ignores the “Effort” that is required by the individuals tasked to actually build the system (we will call this coalition player 4). If the Effort required was either never taken into account or improperly accounted for in the first place, the result can become a Lose situation for player 4. Hence, we consider the addition of a new Effort axis in appendix-H. 5.4 Extensive Form Games (Game Trees) Straffin states, “… in real conflict situations, decisions are often made sequentially, with information about previous choices becoming available to the players as the situation develops.” [14] Game trees are a method for modeling this sequential decision situation. Austin [15] also provides a game tree in his game between the two developers. The extensive form game from figure 1 of Austin’s paper is reproduced below in Figure 71 for reference. 162 Figure 71: Austin’s Original Extensive Form Game for Developer Quality Decisions 16 H and L indicate a High quality decision or a Low quality corner cutting decision branch. Oval enclosed decision nodes provide insufficient information to the agent such that the agent is unable to distinguish between the nodes in that set. Austin also notes that the independence assumption on the assignment probability is less reasonable for developers working on interdependent tasks. One would also think that this assumption is also unreasonable for situations where a manager who understands the individual capabilities of each developer and assigns work accordingly. Based on the qualitative and quantitative research provided in chapters 3 and 4, in situations without sufficient penalty considerations by the agents, the quality cutting decision is taken in order to meet an aggressive schedule. The resulting counter strategy employed by the customer was to embed knowledgeable technical representatives into the development situation, which now removes a technical observational difficulty that was also noted by Austin [16]. While we have seen a further counter strategy to our embedding government observers from another team’s management. 
This counter strategy was to then hide the team doing the work to avoid our seeing them working on the code, before 16 Reprinted by permission, Robert D. Austin, The effects of time pressure on quality in software development: An agency model, Information Systems Research, volume 12, number 2, (June, 2001). Copyright 2001, the Institute for Operations Research and the Management Sciences, 7240 Parkway Drive, Suite 300, Hanover, Maryland 21076 USA. 163 they did a design. All the while, they verbally promised to the government that they were doing the design. Further, we’ve witnessed situations where the team’s management has demonstrated that they’ve either taken part in the quality corner cutting strategy or were unknowledgeable in software development and thus could claim ignorance in that they did not know that their suggested ‘streamlined processes’, which the customer approved in name of meeting the all important launch schedule, would remove the penalty perception from the developers and lead directly to quality corner cutting. 5.5 A Differential Game of Optimal Production with Defects Isaacs [17] was the first to solve a differential game similar to that of software development; to find the optimal production of steel for a government undertaking a program of steel production. Isaacs dealt with maximizing steel production when a certain amount of extant steel is required as an ingredient for the manufacture of additional steel. Furthermore, the current supply of steel is either used to create more steel mills (to then be employed in the production of steel), stockpiled, or used to create steel. This is similar in concept to how software is created. Creating the design, writing pseudo-code or creating rapid software prototypes are prudent risk mitigation techniques during the up-front design phase to burn down risk. In the steel production problem, Isaacs first identifies Kinematic Equations (K.E.s) of the state variables (M for mills, S for steel and T for time) that are functions of maximizing or minimizing control variables. Isaacs’, however, assumes that the production of steel is flawless. Hence there was no need for a control variable to account for defect discovery and removal of bad steel. Hence, we consider the situation where workers building the new plant cannot assume the steel is flawless, since the steel arrives from numerous sources (with varying numbers and types of flaws). If the workers simply use the steel as is, there is an unknown risk that their new steel mill will fail. A detailed look into this line of thought would lead us into the fascinating game theory literature covering situations with complete and incomplete information, or for us, the presence and location of these pesky “Snarks” [18] [19] [20]. 164 Isaacs identifies two players of the game, (the pursuer) responsible for controlling φ and trying to minimize various state variables, and (the evader) controlling ψ and trying to maximize various state variables. The K.E.’s are used to derive a Main Equation (M.E.) that consists of the state variables, the control variables and the partial derivatives of the Value (or Payoff) of the game. The M.E. is used to derive Retrogressive Path Equations (RPE) that depends on the control variables (in his case just two - ψ S , and ψ M ) and various constant constraints. He provides a generalized method for deriving “Universal Surfaces” (US) from K.E.s consisting of more than 3 state variables [21]. 
Isaacs considers these US as the union of especially advantageous paths, for at least one of the players. Since these are ‘zero-sum games’, what is advantageous for one player is deleterious to the other. Hence, he coined the term φ-US to denote where a discontinuity in ϕ occurs and ψ remains continuous on the surface. On the other hand, a ψ-US denotes where a discontinuity in ψ occurs and ϕ remains continuous on the surface. His K.E.’s are of the form, 3 ,..., 1 , ≥ = + = • n i x i i i β ϕ α . Equation 5-1 We proceed by including in Isaac’s equations the state variables S g and S b to redefine S as S = S g + S b , Equation 5-2 where S g is good unflawed steel, and S b is bad flawed steel, and define the ratio, g b v C S S ≡ , Equation 5-3 which is the critical limit where a steel mill built from steel that exceeds this value will fail. We also add two-control variables ψ 1 and ψ 2 . The first is the quality processes invoked by steel mills in their steel production phase to maximize the removal of bad steel, and the second is the quality processes invoked in the production of steel mills to find and remove the bad steel. Hence, the K.E. now take the following form b M c S S M 2 ψ ψ − = • , Equation 5-4 165 ( ) [ ] b M S a S 1 S S 1 ψ ψ ψ − − − = • , Equation 5-5 1 T − = • , or splitting out the state-variable for steel Equation 5-6 ( ) 2 S S M ψ ψ ψ − + = • M b g M c c , and Equation 5-7 ( ) [ ] ( ) [ ] ( ) 1 1 S 1 S S ψ ψ ψ ψ ψ − − − + − − = • M S b M S g a a . Equation 5-8 We leave this problem in this unsolved form (we will revisit in the conclusions and will provide the solution in later work) to simply illustrate here the following points for the conclusion section of this chapter: (1) The rate at which steel can be created, and mills can be built now have a temporal dependence on the quality processes required to remove bad steel. (2) This temporal penalty for high quality is a cost increase for the additional quality processes the mill builders must use to ensure that the steel they use does not exceed the critical value – due in part to an unscrupulous minority of steel mill owners that attempt to minimize the removal of bad steel (replace ψ 1 with φ 1 ) (and also in part from the unpredictable combination of defects that occurs when using steel from two or more different mills) to an acceptable level instead of maximizing the removal of bad steel to increase their profit margins. (3) If all of the steel mills were maximizing their bad steel removal the mill builders would not have to undertake a strict quality assurance program and thus could save money. However, we could take this concept a step further and introduce the premise that the mill builders themselves incur a cost penalty from their quality assurance program. This penalty reduces the number of mills they can build, and thus their profit. Hence, there is the pressure to make the decision to replace ψ 2 with φ 2 in order to maximize their profit as well. 166 5.6 Other Methods 5.6.1 Dynamic Programming and Recursive Decision-Making Bellman’s [22] dynamic programming and modern general equilibrium theory is used in dynamic economics to yield tractable models of dynamic economic systems. Stokey et al. [25] show that this is “…easiest to see when the system as a whole itself solves a maximum problem…” The application to the microeconomics of software intensive systems with latent defects, while unsolved, would be an interesting application and ripe area for further research. 
5.6 Other Methods

5.6.1 Dynamic Programming and Recursive Decision-Making

Bellman's [22] dynamic programming and modern general equilibrium theory are used in dynamic economics to yield tractable models of dynamic economic systems. Stokey et al. [25] show that this is "…easiest to see when the system as a whole itself solves a maximum problem…" The application to the microeconomics of software intensive systems with latent defects, while unsolved, would be an interesting and ripe area for further research.

Adda and Cooper [26] provide a simple example of applying dynamic programming to the problem of eating a cake. They start with the size state variable of the cake (W) being given at the start of any period [26]. The control variable is the variable being chosen, in their example the consumption of the cake in the current period (c) [26]. (They also note that c lies in a compact set.) The state of the cake tomorrow depends on the state today and the control today [26]; the relationship is W′ = W − c, and is called the transition equation [26]. Adda and Cooper [26] then rewrite the problem in terms of tomorrow's state W′ rather than today's consumption, yielding

$V(W) = \max_{W' \in [0,\,W]} \left\{ u(W - W') + \beta V(W') \right\}$,   Equation 5-9

which is called the Bellman equation. They proceed to show the first-order condition for the optimization problem (the derivative u′ of the utility function is set equal to the derivative of the second, discounted continuation-value term in the Bellman equation), and the policy functions for the problem. Noting that [26]…

The policy functions […] are important in applied research, for they provide the mapping from the state to actions. When elements of the state as well as action are observable, these policy functions will provide the means for estimating the underlying parameters.

The analogy to the situation of developing large software intensive systems out of units should be apparent. This optimization problem can be solved numerically, but is not done in this treatise. The application to the state-matrix equations for the model from chapter 4 should be a solvable first step.

5.6.2 Probabilistic Risk Analysis and Game Theory

Hausken [27] has merged the behavioral conflict analysis capabilities of game theory with probabilistic risk analysis. Hausken used probabilities in RBD series and parallel notation (eqns. 4-13 and 4-14 respectively [27]) combined with von Neumann and Morgenstern's expected utility (eqn. 4-15) [27]:

$p(x, s) = \prod_{i=1}^{n} p_i(x_i, s_i), \qquad x = (x_1, x_2, \ldots, x_n),\ \ s = (s_1, s_2, \ldots, s_n)$,   Equation 5-10

$p(x, s) = 1 - \prod_{i=1}^{n} \left[1 - p_i(x_i, s_i)\right], \qquad x = (x_1, x_2, \ldots, x_n),\ \ s = (s_1, s_2, \ldots, s_n)$,   Equation 5-11

$u_i(x, s, r_i, m_i) = r_i\, b\, p(x, s) - m_i s_i, \qquad b > 0,\ \ \dfrac{\partial u_i}{\partial r_i} > 0,\ \ \dfrac{\partial u_i}{\partial p(x, s)} \ge 0$,   Equation 5-12

$\dfrac{\partial u_i}{\partial p_i(x_i, s_i)} \ge 0, \qquad S_i = S_i(r_i, m_i), \qquad i = 1, \ldots, n$,   Equation 5-13

where x = (x_1, x_2, …, x_n) comprises the technical characteristics (the state) of the units, s = (s_1, s_2, …, s_n) is the strategy combination, an ordered set consisting of one strategy for each of the n players in the game, p(x, s) is the reliability of the system, and p_i(x_i, s_i) is the unit reliability from player i, with compensation (i.e. revenue) r_i and unit costs m_i, while b is a scaling parameter for player i [27]. Hausken's [27] utility equation yields

$u_i(x, s, r_i, m_i) = u_i\!\left(p_i(x_i, s_i),\, p(x, s),\, x,\, s,\, r_i,\, m_i\right)$.

5.7 Discussion and Conclusions for Negotiating The Solution

In this section we review some possible strategies that acquirers can use to deal with a corner cutting contractor strategy, and finish with some conclusions.

5.7.1 Peer Review and Unit Test Counter Strategies and Bargaining Solutions

Software developers tasked with testing their own software in a schedule-pressured situation are faced with a decision: Do I unit test the software completely? Do I develop it completely?
Do I develop just enough to keep the dependent software production flow moving? Do I test only enough to show that the dependent software works under normal conditions? A developer will not always make the high quality decision desired by the customer; in schedule-pressured situations that decision is likely not the Nash equilibrium, and the developer is more likely stuck in a "Prisoner's Dilemma" choice between high quality, cutting corners, or adding effort – unless, that is, the software acquisition was set up in a manner that makes the developer's choice of high quality the strong one. Software peer review is the mechanism commonly used for ensuring that the software has all the functionality required by the integrated product, and is also the method used to proactively uncover software defects. The rigor of the reviews allows more or fewer defects to escape. Peer reviewing the developer's tests is a method to ensure that the software was thoroughly tested by the developer. However, there are issues with this standard test and review method when the entire reviewing team of developers is under intense schedule pressure, which causes 'group think' towards routinely cutting corners. Although we have no doubt that they all would have preferred to have the time needed to fully test and rigorously review everything, schedule pressure essentially forced this corner cutting behavior – they simply cannot get it all done in the allotted time. Austin's diagram for adding effort (Figure 70) suggests that there is a transition from adding effort, to corner cutting, to admission of lateness. On long lead development projects that have gone through multiple re-plans, we discover that the developers have been dynamically transitioning between these regimes throughout the life cycle. Thus, acquiring organizations must make the quality decision a Nash equilibrium in the acquirer's favor, or must negotiate the precise quality of the final product at the beginning of the contract. If the contractor is incentivized towards quality behavior, required to provide evidence for audit purposes, and severely penalized for non-selection of the high quality choice, the acquirer stands a better chance of getting a quality product. Further, ignorance of the developers' corner cutting by the contractor's management is not an excuse; they must be held personally accountable. On some of our projects, we have effectively employed a strategy of embedding knowledgeable customer technical representatives in the reviews as peers to ensure the correct level of quality is obtained. We have, however, had difficulty fully covering all the reviews with our available staff, and are thus left in the situation of randomly sampling the reviews. In these cases, we have statistics showing that our technical presence increases the number of peer review findings identified in the review. We believe this is the case because our technical representatives are not under the same schedule pressures that the contractor's employees are, and thus strive for high quality. Companies usually employ an independent software quality assurance department, some of which are chartered to simply 'audit' the development process and verify that fixes for faults uncovered in later test phases were incorporated into the baseline software product.
The success of a random periodic audit process, including the depth and rigor to which the auditor looks, is an interesting problem, especially when there are not enough quality assurance staff or the project has elected to use a single quality assurance representative who is embedded into the schedule-driven development organization. In this situation, the power of the majority under schedule-driven software development conditions becomes the phenomenon to watch for [22]. We have seen in chapter 3 that testing is particularly vulnerable to corner cutting strategies. To further this point, Beizer [23] considers the issues with independent testing, programming, and large versus small software projects, and points out for independent testing that,

The more you know about the design, the likelier you are to eliminate useless tests, which, despite functional differences, are actually handled by the same routines over the same paths; but the more you know about the design, the likelier you are to have the same misconceptions as the designer. Ignorance of structure is the independent tester's best friend and worst enemy. The naive tester has no preconceptions about what is or is not possible and will, therefore, design tests that the program's designer would never think of—and many tests that never should be thought of. Knowledge, which is the designer's strength, brings efficiency to testing but also blindness to missing functions and strange cases. Tests designed and executed by the software's designers are by nature biased toward structural considerations and therefore suffer the limitations of structural testing. Tests designed and executed by an independent tester are bias-free and can't be finished. Part of the artistry of testing is to balance knowledge and its biases against ignorance and its inefficiencies.

Furthermore, Beizer [24] notes how size impacts these issues, and provides the following passage,

[Programming in the large] means constructing programs that consist of many components written by many different persons. Programming in the small is what we do for ourselves in the privacy of our own offices or as homework exercises in an undergraduate programming course. Size brings with it nonlinear scale effects, which are imperfectly understood today. Qualitative changes occur with size and so must testing methods and quality criteria. A primary example is the notion of coverage—a measure of test completeness. Without worrying about exactly what these terms mean, 100% coverage is essential for unit testing, but we back off this requirement as we deal with ever larger software aggregates, accept 75%-85% for most systems, and possibly as low as 50% for huge systems of 10 million lines of code or so.

Thus, Beizer alludes to a testing dilemma for integration testing and design level testing as well. One suggested remedy is the required use of an initial design and review process that includes formal methods for mathematical design validation, the goal of which is to find the design errors prior to implementation. The unfortunate decision faced by the cost-conscious customer at design reviews (when large development teams are already in place) becomes one of holding up development and keeping an army of developers idle, or allowing what are believed to be the more mature sections of the design to proceed into code. Hence, the agreed-to solution is almost always to proceed at "risk"; but we argue that this is not a "risk", it is a "certainty" that leads to significant redesign efforts.
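The design-review dilemma just described can be laid out as a simple normal form game. In the sketch below the 2×2 payoff numbers are purely illustrative assumptions (they are not measured program data); they are chosen so that each side's near-term incentives reward protecting schedule, which reproduces the proceed-at-risk outcome argued above.

```python
# Illustrative 2x2 game between the acquirer's program office and the
# contractor's management; higher payoffs are better.  The numbers are
# assumptions chosen to encode near-term schedule incentives only.

ACQUIRER = ["protect quality", "protect schedule"]     # hold for the design vs. proceed at risk
CONTRACTOR = ["protect quality", "protect schedule"]   # finish the design vs. start coding

payoffs = {  # (acquirer payoff, contractor payoff)
    ("protect quality",  "protect quality"):  (3, 3),
    ("protect quality",  "protect schedule"): (1, 4),
    ("protect schedule", "protect quality"):  (4, 1),
    ("protect schedule", "protect schedule"): (2, 2),
}

def pure_nash_equilibria(payoffs):
    """Return the cells where neither player gains by deviating unilaterally."""
    cells = []
    for a in ACQUIRER:
        for c in CONTRACTOR:
            ua, uc = payoffs[(a, c)]
            a_best = all(ua >= payoffs[(a2, c)][0] for a2 in ACQUIRER)
            c_best = all(uc >= payoffs[(a, c2)][1] for c2 in CONTRACTOR)
            if a_best and c_best:
                cells.append((a, c))
    return cells

print(pure_nash_equilibria(payoffs))   # [('protect schedule', 'protect schedule')]
```

With these assumed payoffs the only pure-strategy Nash equilibrium is the mutual schedule-protecting cell, even though both parties prefer the mutual quality cell; changing the payoffs, for example through contractual penalties and incentives, is what moves the equilibrium.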
Hence, we further argue that you can create your own normal form game, and decide for yourself what the logical outcome when anyone is faced with this situation. Therefore, both the government and our contractors can benefit from a Rayleigh staffing profile that isn’t ramped up until after design languages like UML combined with exhaustive analysis (as an example) of the design have reduced the design defects to an acceptable level. The design for the large number of programmers is then used as a coordination and communication tool for the large software development staff. The concept is exactly analogous to how blueprints direct teams of individuals tasked with building large structures. Areas of the design, that are planned for later incremental development phases usually have their interfaces (the software architecture) and the critical foundation portions with baseline functionality understood and thus in a higher state of maturity. The qualitative research shows that often, the contractors are moving into implementation before critical functionality has finished design. In addition, in some cases they were using antiquated design practices that did not include the analysis of the design to ensure that it would work. Even though UML is used throughout the dissertation as the modeling approach, satisfactory alternative common modeling approaches may also 171 suffice. However, when it comes to communicating to the customer that the design will in fact work, our advice is to adopt either a standard methodology that the customer is familiar and comfortable with, or spend the time and money to educate the customer that this modeling approach will yield a verifiable design. A state variable to investigate for control is one that quantifies the quality and maturity of any architectural representation to support the later incremental development phases. A critical part of this is what is required for maintenance of the architecture in a format that fosters communication between typically non-interacting engineers working in parallel. Small highly motivated teams with chief architects that fully understand the right software architecture to build (and that directs the development team) can likely succeed to a point without fully vetted designs for “small” projects; however, this small team process will break down at some point for large development projects requiring lots of communication, and may even break down on smaller projects. The result that we demonstrated in this dissertation is – more defects. Hence, we suggest as a government strategy, the mandated use of a design language like UML and appropriate analytical methods to support their resulting designs on all software intensive government projects. 5.7.2 Player Strategies for Software Intensive System Some possible strategies of software developers are provided in Table 21 under various cost and schedule pressure situations – the probability of which increases or decreases depending on the degree of the pressure. The following examples are provided to assist with the development of contractual language. Table 21: Sampling of Possible Software Developer Strategies Software developer strategies: Example Pressures (Cost, Schedule, Quality) Work overtime if needed (the add effort strategy) (Low, Moderate, Moderate) Strive for perfect code (take the time needed) (Low, Low, High) Follow documented software development steps (Low, Low, Moderate) Refuse to work unpaid overtime (High, High, Low) (1) Reduce unit testing (e.g. 
scope of coverage - no negative tests) (Low, Moderate, Low) (2) 172 Table 21: Continued Software developer strategies: Example Pressures (Cost, Schedule, Quality) Reduce functionality (e.g. scope - no negative tests) (Low, Moderate/High, Low) (3) Reduce peer review rigor (e.g. reuse code, or time reviewing) (Low, Moderate, Low) (4) Quit (find other work) (Low, High, Low) (1) At least partially dependent on individual preferences as well as likely undocumented personal conflicts. (2) Likely also a function of training. (3) Likely also a function of the maturity of functional specifications. (4) Likely also a function of corporate processes, team factors, quality assurance personnel and others. Table 22 is a sampling of possible management strategies that have been used depending on the project’s quality, cost and schedule pressures. Note that the quality strategies appear to be orthogonal to the cost and schedule strategies, where as some of these could be used in mixed strategies for quality and schedule, or quality and cost, etc. Table 22: Sampling of Possible Software Management Strategies Quality Driven Schedule Driven Cost Driven Use paid overtime - - Require unpaid overtime - Hire best available staff - Hire cheapest staff - - Layoff experienced staff - - Add people to a late project - - Strict process driven development - - Use Industry Best Practices - - Staff training – emphasis on quality processes - - Allow skipping process steps - Try ‘Big Bang’ integration - Allow reduced quality - Feed staff working overtime (food incentive) - Create a software safety culture - - Shortcut processes - 173 Table 22: Continued Quality Driven Schedule Driven Cost Driven Slow roll the project (minimize hired staff) - - Fire staff that shortcut processes - - Cheaper facilities - - Reduce staff - - Utilize innovative quality activities - - Layoff all the employees - - Quality incentives (e.g. bug bounties) - - Rapid development incentives to workers - - 5.7.3 Some Threat Strategies (from the Contractor’s Viewpoint) While most of this dissertation has provided strategies and situations as they appear from the government acquirer’s viewpoint, we would be remiss in not also providing appropriate threat strategies for the contractor’s viewpoint. The following italicized text provides a “maintain the status quo” argument. First the clear situation to us (small-corporation all the way to large-corporation) is that the customer does not appear to want to properly fund a quality software development process, and are not patient enough for the time it takes to plan and develop one. Furthermore, they demand un-attainable schedules; they do not seem to know what they want (they keep changing the requirements and their minds), and do not have the funds to properly staff us to maintain large development organizations where we could spend the time to continuously re-train those staff as technology progresses. They tell us what they have to spend, then turn around and want more than they can afford. So for us (large-corporation), our optimal threat strategy is to simply keep the status quo, where we actually make tons of money for our stockholders on the maintenance of our poorly developed products. We (small-corporation) are seriously concerned about follow-on business and your threat of no follow-on business is a good enough threat, so we will fix this issue ourselves and would prefer that you not legally regulate our perfectly acceptable software development processes. 
We (large-corporation), however, will 'promise' to improve – as you the customer clearly believe this is our problem – and if the issue (low quality) comes up again, we can continue to 'promise' to improve (we may even undertake a quality initiative or two). However, we know you must have the systems we build, so it won't be long before you are asking us to waive your own mandated quality initiatives. Besides, we can always simply lobby our way out of legally binding, stiffer quality laws, and can continue to 'convince' the appropriate short-term government official not to place those tough new quality standards on your new contracts. For those cases where those strategies do not work, we can engage our organization that has a reputation for high quality to build the system, and will negotiate quality incentives to make our stockholders happy.

In this thesis, we now pose the question: whose problem is this? Is this a problem with the contractor, or is this the government's problem? This is the motivation for our specific selection of case study observations made throughout the dissertation, which are based on our customer's viewpoint; the 'threat' strategies provided in this sub-section are intended to be used to propose a bottoms-up Nash bargaining solution in the next sub-section to deal with quality issues in any large software corporation.

5.8 Conclusion: A Proposed Nash Bargaining Solution

Nobody can reasonably dispute that individual decisions affect cost, schedule, quality and effort. This viewpoint is justified not only by theoretical considerations, but by experience from the real world. Game theory provides a theoretical method, in a non-retribution manner, for approaching an understanding of why the decisions are made. Much of the existing work applying game theory to software engineering and development was reviewed in chapter 2. This chapter provided further background material and then used it in discussions throughout to consider the methods for applying game theory to fight software defects. The points from the earlier differential game theory sub-section are now modified for our software viewpoint as:

(1) The rate at which software is created has a temporal dependency on the introduction and removal of defects.

(2) This temporal dependence creates a penalty for high quality, which is incurred as an increase in up-front cost and effort, but is recouped later as a significant savings during integration of the software. The fact that there are developers who will choose the low quality option to meet schedule leads to the desire for rigorous quality checks at every point in the development of the software. With software, certain types of defects are found and removed more effectively by different tests or peer reviews, but the later in the development life-cycle a defect is allowed to exist, the higher the cost of removing that defect. For complex, large development efforts this cost can be significant – especially for design flaws.

(3) If the designers and developers were working towards high quality, we could move faster through software's later quality checks of integration, qualification and system testing, by not having to find and fix those defects that could have been found early in the process.
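A back-of-the-envelope calculation illustrates points (2) and (3). The defect count, the per-phase detection rates and the phase cost multipliers below are all illustrative assumptions (the only pattern relied on is that removal cost escalates by phase); the sketch simply totals the rework for a rigorous versus a corner-cutting process.

```python
# Sketch of the cost escalation argument: the same injected defects cost far
# more to remove when the early filters are weakened.  All numbers are assumptions.

injected_defects = 100.0
cost_per_defect = {"peer review": 1, "unit test": 2, "integration": 10, "system test": 30}
phases = ["peer review", "unit test", "integration", "system test"]

def total_rework(detection_rates):
    """Total removal cost and the defects that escape into operations."""
    remaining, cost = injected_defects, 0.0
    for phase in phases:
        found = remaining * detection_rates[phase]
        cost += found * cost_per_defect[phase]
        remaining -= found
    return round(cost, 1), round(remaining, 1)

rigorous   = {"peer review": 0.6, "unit test": 0.6, "integration": 0.7, "system test": 0.7}
corner_cut = {"peer review": 0.2, "unit test": 0.2, "integration": 0.7, "system test": 0.7}

print("rigorous early processes:", total_rework(rigorous))     # lower cost, fewer escapes
print("corner-cutting processes:", total_rework(corner_cut))   # roughly 3x the rework here
```

Even with identical late-phase detection rates, the corner-cutting profile spends roughly three times the rework effort and leaks several times as many defects into operations under these assumed numbers.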
The software defect removal process is layered out of the necessity to find defects as early in the software development process as possible, and the methods along the path are better at removing certain types of defects that later defect removal methods will miss. Removing or cutting corners on these early processes just pushes defects downstream, where those that can still be found will take more time and money to fix. Due to the decreased ability to fully test these complex systems after they become integrated, we are leaving these bugs in the software where they can eventually lead to the loss of the space vehicle, or we get lucky that no mission-ending defects remain and just spend time fixing the others on orbit. Hence, we argue here that allowing this situation to persist is ultimately the government's problem, and the government needs to fix it, preferably using a multi-level carrot-and-stick approach. The findings and recommendations for doing so were provided at the end of chapter 3. A collaboration between industry, government, and academia can start by working to identify bodies of knowledge for the various roles required to build these systems, and then identify a method for proper certification, with levels of certification independently administered for those individuals engaged in the engineering of space system software. The mandated adoption and funding of quality processes should save the government money in the long run, but is likely to meet with significant resistance. Thus, the proposed Nash bargaining solution uses the formation of 'coalitions' consisting of quality-minded staff. We base this solution on the game theory literature, Boehm's work and the need to identify a more fundamental solution to our situation. Hence, these quality-minded staff are directed to identify their negotiation sets based on their role in the development process. For example, the integration test engineers will identify all of the issues they encounter from poorly designed, peer reviewed, and tested software, having to increase their effort late in the program to identify what bugs they can in the impossibly short schedule they are forced to abide by. In addition to these, they address their own internal issues introduced by their own non-quality-minded staff, and create quality processes for their people. This negotiation set could include significant additional pay for having to work late evenings or swing shifts. The developers of the software (who do not appear to want to do design, but just write code) can negotiate for input products that are complete, languages they prefer to use, and the need for the design and engineering staff to support their reviews while also providing the test inputs and expected outputs needed to test the correct implementation – and, using the same quality-minded personnel identification step, they create processes that will ensure high quality code. This same process is duplicated for the SQA staff, qualification testing staff, etc. The set of negotiation points is then used to negotiate between the interacting staff on the projects until an arbitrated solution is achieved. Those with arbitrated solutions then choose representatives to negotiate with other recently formed coalitions. In the event of disagreements, the coalition re-approaches the individual parties in the chain to identify an agreeable solution, etc.
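For the bargaining step itself, the standard Nash product gives the coalitions a concrete quantity to arbitrate over. The sketch below is a minimal two-party version in which the developers and the integration test engineers bargain over the fraction of the schedule reserved for up-front quality work; the utility functions and disagreement payoffs are illustrative assumptions, stand-ins for the negotiation sets the coalitions would actually identify.

```python
# Minimal two-party Nash bargaining sketch over q, the fraction of the schedule
# reserved for design, inspection and unit testing.  Utilities and disagreement
# points are illustrative assumptions only.

def nash_bargain(u1, u2, d1, d2, grid):
    """Return the q maximizing the Nash product (u1(q)-d1)*(u2(q)-d2)."""
    best_q, best_product = None, 0.0
    for q in grid:
        gain1, gain2 = u1(q) - d1, u2(q) - d2
        if gain1 > 0 and gain2 > 0 and gain1 * gain2 > best_product:
            best_q, best_product = q, gain1 * gain2
    return best_q, best_product

def developers(q):            # near-term cost of rigor to the coding staff
    return 1.0 - 0.8 * q

def integration_testers(q):   # fewer escaped defects to chase late in the program
    return 0.2 + 1.5 * q

grid = [i / 100.0 for i in range(101)]
q_star, product = nash_bargain(developers, integration_testers, d1=0.3, d2=0.4, grid=grid)
print(q_star, round(product, 3))   # about half the schedule under these assumptions
```

The particular number matters less than the mechanism: each coalition's negotiation set changes its utility curve and disagreement point, and re-running the bargain is what the periodic re-negotiation described next would formalize.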
Within this Nash bargaining solution we see the need to repeatedly arbitrate between the N- players of the game until satisfactory solutions are obtained. To address the issue, where new technology or improved processes are adopted or become available that improve a player’s quality, schedule or cost 177 bargaining position, the coalition re-negotiates its position with the identified chain of stakeholders but on an agreed to re-negotiation time scale. Appendix-H has been added to further discuss these concepts as a solution to Dyson’s problem, within the context of Boehm’s work. Also, at the time this dissertation is being completed, the author is aware of a government panel [29] reviewing the current management structure for all of national security space and is a member of a the NDIA (National Defense Industry Association), which are both looking at this problem. 178 C h a p t e r 5 E n d n o t e s [1] Linus Pauling, in No More War!, from Classic Quotes; Internet: http://www.quotationspage.com/quote/5174.html: last accessed 16 July 2007. [2] Roger B. Myerson, GAME THEORY: Analysis of Conflict, Harvard University Press (Cambridge, MA, 1997): 1. [3] Engelbert Dockner, Steffen Jørgensen, Ngo Van Long, Gerhard Sorger, Differential games in economics and management science, Cambridge University Press, (Cambridge, UK 2000): 3. [4] Paul Walker, History of Game Theory: A Chronology of Game Theory, Internet (October 2005): last visited December 8 th , 2007, available at http://www.econ.canterbury.ac.nz/personal_pages/paul_walker/gt/hist.htm [5] M. H. Breitner, “The Genesis of Differential Games in Light of Isaacs Contributions,” Journal of Optimization Theory and Applications, vol. 124, no. 3, Springer Science+Business Media B.V., (March 2005): 523-559. [6] James O. Berger, Statistical Decision Theory and Bayesian Analysis 2 nd Edition, Springer- Verlag New York Inc., (New York, NY, 1980): 310. [7] Rufus Isaacs, Differential Games: A Mathematical Theory With Applications To Warfare And Pursuit, Control And Optimization, Dover Publications, (Mineola, NY: 1999, originally published by John Wiley and Sons, Inc. New York, 1965): 14. [8] Myerson, GAME THEORY, 122-126. [9] Philip D. Straffin, GAME THEORY and STRATEGY, The Mathematical Association of America, (Washington D.C.: 1993): 4-5. [10] Myerson, 97-98. [11] Straffin, 102-110. [12] Straffin, 111. [13] Austin, 202. [14] Straffin, 37. [15] Austin, 198. [16] Ibid., 197. [17] Isaacs, Differential Games, 14. [18] James C. Cox, Jason Shachat, and Mark Walker, “An Experiment to Evaluate Bayesian Learning of Nash Equilibrium Play,” Games and Economic Behavior 34, (2001): 11-33. 179 [19] Satinder Singh, Vishal Soni, and Michael P. Wellman, “Computing Approximate Bayes-Nash Equilibria in Tree Games of Incomplete Information,” Proceedings of the 5 th ACM Conference on Electronic Commerce, (2004): 81-90. [20] Breitner, 540. [21] Isaacs, 156-199. [22] Rupert Brown, Group Processes 2 nd Edition, Blackwell Publishing, (Malden, MA: 2000): 125- 143. [23] Beizer, Software Testing Techniques, 12. [24] Ibid., 14 [25] Nancy L. Stokey and Robert E. Lucas Jr., with Edward C. Prescott, Recursive Methods in Economic Dynamics, Harvard University Press, (Cambridge, MA: 1989); 7. [26] Jérôme Adda and Russell Cooper, Dynamic Economics, The MIT Press, (Cambridge, MA: 2003); 16-18. [27] Kjell Hausken, “Probabilistic Risk Analysis and Game Theory,” Risk Analysis 22, no. 1 (2002): 17-27. [28] T. Capers Jones, Estimating Software Costs, 544. 
[29] Amy Butler, “Panel Wants Massive Milspace Reshuffling,” Aviation Week, (August 14 th , 2008); Internet available online at http://www.aviationweek.com/aw/generic/story_channel.jsp?channel=defense&id=news/SHA KE08148.xml. 180 C H A P T E R 6 : C O N C L U S I O N S If we had a reliable way to label our toys good and bad, it would be easy to regulate technology wisely. But we can rarely see far enough ahead to know which road leads to damnation. Whoever concerns himself with big technology, either to push it forward or to stop it, is gambling in human lives. [1] 6. Introduction This chapter consolidates the research results. In this dissertation, we have provided an understanding of the forces driving engineering (i.e. human) behavior for our schedule-driven software development environment in a manner that does not implicate specific companies, individuals within management, or any of the engineers involved. This is the cornerstone of this research. 6.1 Summary Game theory provided the theoretical underpinnings to explain how severe near-term “corner cutting” occurred on one of our software intensive system acquisitions and thus gave us insight into how we can remove this behavior from all of our high-cost mission-critical software acquisitions. The resulting data analysis combined with the game theoretic foundation allowed the creation of a set of policy recommendations for not only the acquisition of high-cost software-intensive systems, but also it suggests the requirement for mandated periodic retraining of certified software professionals, and fundamental changes to how our software engineers are trained by universities in the first place. We believe this is necessary to ensure that all the workers from the lowest levels all the way to the top of our management chains understand the importance of our engineering methods. To independently summarize each chapter, the first chapter provided an introduction to a quality problem for schedule-driven space systems that led to the line of inquiry described in this research. We also included a roadmap for the dissertation in this section. The second chapter provides background material intended for a broad audience, and concludes with a review of the various applications of game theory to the field of software development and engineering. 181 The third chapter has qualitative and quantitative research results from software-intensive national security space systems under development at the Space and Missile Systems Center. The focus of the research was on two of the projects from The Aerospace Corporation software reliability research database, while rate and unit test data was also provided from another. The research showed that the multiple projects have repeated issues with design maturity and the use of de-facto industry standard design tools, rigor of unit testing, and without significant government oversight, the desire to want to ‘skip’ or curtail quality practices. Even with government oversight, we have witnessed ‘counter’ strategies to reduce or eliminate required design and quality processes. The full scope of issues reviewed led to a set of public recommendations for government, contractors, academic institutions, and information technology standards organizations. The forth chapter uses a modified version of Madachy’s inspection-based system dynamics model as a tool to investigate the defect discovery dynamics for two of the projects researched in chapter 3 that had significantly different defect distributions. 
The results from using a modified version of Madachy’s model including an integration test feedback loop and unit testing; showed that less rigor in these processes leads to a significant manpower increase in effort to find and fix the defects that should have been caught earlier. In addition, the result was longer schedules that allowed more downstream defects into operation as a direct consequence. Using modification curves to represent available information on what staff was actually doing (using a modification curve to change the interpolated staffing curves for design, code, and test) obtained reasonable reproductions of the project’s defect dynamics. We also noted that improved staff task information from project management tools should increase the fidelity and thus the confidence in our results. Of note, is how well project-C’s simulated results match the real defect distribution when Madachy’s fraction of effort values were swapped so that more effort was on code than on design. Our code-design efforts are reversed to what the situation was for the 63 projects in the COCOMO database, thus this “swapping” of values is justified by the development approach taken on those projects. However, this should be further investigated using a larger number of projects that provide verifiable staff task and effort information. Finally, the chapter 182 includes a Latin Hypercube sampling for distributions simulating decisions made on schedule, cost, and quality-driven projects. The results show a dramatically improved schedule consistency for those projects that follow rigorous design and code inspection, and unit test practices. Chapter 5 provided background material on game theory and discussed a paper from Austin that appears (for the most part) to theoretically explain the research results. We, however, provided arguments that management is directly involved in the corner cutting strategy selection of the workers through their own selection of strategies, and included examples of strategies and counter strategies for developers and managers. Hence, the chapter proposes a Nash coalition-forming bargaining solution for fixing the issues that lead to corner cutting in software intensive system developments. And as with any industry, we likely will still find new and innovative corner cutting strategies, but we expect that the quality, schedule, and cost results from our suggested Nash bargaining game should be dramatic. This bargaining solution will take time to implement and will likely be controversial to some, but it assures us that high-level corporate and government negotiated system acquisitions properly account for the staff effort to do the actual work, which is where the problem exists in the first place. This chapter summarizes the findings of the dissertation and provides a consolidated list of contributions with suggestions for follow on research. Appendices include further reference information, tables of test matrices, comparison data for the Modified Madachy Model, and results from using and testing the model. Also included is a note on a solution for Freeman Dyson’s problem, which is a negotiated “fuzzy-ball” in N-dimensional player space. The final appendix is for public awareness, and gives an analogy to a strategy used on one of our projects to avoid the contractual requirement to design the software using UML. Table 23 provides a summary of the good versus bad advice observations that we made in this dissertation. 
183 Table 23: Strategy Advice for the Development of Software Intensive Systems Advice Type Use schedule and cost-driven effort-reduction strategies Bad Send “mixed signals” on quality to software developers through management use of schedule and cost strategies that directly impact quality (e.g. allowing coding without a design) Bad Utilize concurrent code and design Bad Keep the status quo - ignoring the foundational acquisition issues which allow software developers and their management to continue to cut corners on quality and let us just keep trying to develop our mission critical software intensive systems in this same manner Bad Use management strategies at any level indicating that schedule is more important than quality – leading to mixed signals to the developers Bad Negotiate the use of higher effort and up-front cost, but quality-driven processes (e.g. formal methods, cleanroom development and advanced design methodologies) that will reduce the overall schedule and cost risk Good Log low severity defects (Caveat - this doesn’t mean log all the punctuation errors found in all the products, but if they impact the understandability, then they should be recorded) Good Update the design during code implementation, and use the design to generate the software’s architectural framework Good Use bottoms-up development team bargaining as part of the SCS negotiations to ensure the contractor adopts quality-driven processes that avoid corner cutting behavior; no matter how severe the schedule pressure, and negotiate complete independent access for a 3 rd party, including embedding of personnel Good A top to bottom government review of the training and certification requirements for all of our software professionals and their management Good Include government technical representatives as part of SCS negotiations, to ensure they concur that the team identified processes will be effective Good 6.2 Case Study Findings and Recommendations Table 24 on the following two pages contains a list of the findings and recommendations from the case study research. 184 Table 24: Summary of Findings and Recommendations Finding Applicability Recommendation Incomplete Design Specifications Government, Contractor Move away from document-focused design processes towards tool-based modeling language processes with clearly defined completion criteria. Significant out-of-phase requirements churn (especially for projects that did not include UML use cases and sequence diagrams) Government, Contractor (1) Move away from ambiguous flowed down requirements specifications to up-front combined systems and software design processes that utilize design languages (e.g. UML with use cases and sequence diagrams) to iterate functional and performance requirements. (2) Plan for and schedule rework through the lifecycle phases based on historical values. (3) Mandate independently verified CMMI level 5 for all mission critical flight software and systems engineering. Assumption of ‘goodness’ on reused source code, even when it is modified in critical systems Government, Contractor (1) Contractors should place restrictions on transferring code between projects. (2) Code reuse considerations should be made during an up- front design tradeoff process that uses designs of the reuse functionality. (3) Completed designs that clearly define all required code adaptations for any new system. 
Significant out-of-phase Algorithm Description Document churn Government, Contractor (1) Move away from mathematically intensive algorithm documents as the communication method between system engineers and programmers, and… (2) Move towards specifications built during the systems engineering phase that use tools like Matlab or Mathematica with self-documenting analytical formats with embedded symbolic language manipulation. (3) Translate the symbolic equations into pseudocode for the developers. (Create a pseudocode standard.) (4) Define completion criteria that requires demonstration and test of the prototyped algorithms. SQA procedures that simply audit process appear insufficient to identify test process and review issues Government, Contractor (1) SQA organizations should be mandated to proactively integrate into the process (e.g. owners of peer reviews with concise quality criteria before a peer review can be scheduled) to insure documentation is complete. (2) SQA should be mandated to attend all reviews, with training for the staff to assume roles of moderator and/or recorder. 185 Table 24: Continued Finding Applicability Recommendation Cost/Schedule constrained space development environments significantly increase the likelihood that software developers will proceed into the development phase pre- maturely Government (1) Government must provide executable schedules and sufficient funds with risk based management reserves for each incremental lifecycle phase. (2) Each phase should be planned for early with realistic risk mitigation, before the risk becomes a downstream reality. (3) Cost/effort models with periodic in-lifecycle updates for software development should be uniformly applied across space programs. (4) Use the Incremental Commitment Model [8] for the design of software intensive systems in a competitive environment with completion criteria. (5) Embed government technical oversight for expensive flagship programs. (6) Include contractual language for severe financial and personnel penalties for corner cutting on industry standard quality processes. (7) Include contractual language to address technology refresh issues on each program. Titles for “software engineers”, “software architects”, “software quality engineer”, “software test engineer”, and “software manager” are ill- defined and requirements for their knowledge base and experience is inconsistent Contractor, Government, Universities, IEEE, and Information Technology standards organizations (1) Define and use standardized definitions for software titles with specific skill levels and training requirements indicating a certain knowledge and experience base. (2) Use externally administered certification levels for each titled area with the real threat for loss of certification if these individuals use corner cutting strategies. Consider for example: http://www2.computer.org/portal/web/certification. (3) Consider laws that mandate software professional certification levels and periodic retraining requirements to keep personnel current with evolving technology and critical skill areas fresh (examples include software testing techniques, and architectural design language technologies). (4) Teach software engineering as a cross-disciplinary degree tract using a mandated knowledge base including fundamentals such as math. (5) Retrain our entire management establishment. 
(6) Mandate a standardized set of software metrics from all government programs, provide funding to an organization to accumulate and report on improvement progress, and provide training at the university level for what the metrics are used for. 186 6.3 Contributions The dissertation has provided a significant amount of non-attributable qualitative and quantitative research data to the modeling community, using as a foundation, the issues faced during the development of software intensive space systems. Discussion about these issues led to public recommendations of possible solutions that can be implemented to help avoid them in the future. The research data was then used in a modified version of Madachy’s inspection-based model as a research tool to show the increase in effort and significantly degraded schedule consistency from processes that are schedule-driven over those that are quality-driven. Quantitative data from two of the projects were used to reasonably reproduce the observed defect dynamics despite the lack of consistent and accurate staff task information. This result suggests that system dynamics is not only a promising research tool, it could improve our predictive capabilities on software intensive system acquisitions over our current prediction methodologies. This information can then be used to educate our customers about the impact of their schedule and cost-driven dynamic decisions. The method can likely also be used to predict the impact from perturbations and shocks (such as funding, staff, and others) to the dynamics of software development. Finally, this dissertation has provided both a practical and a set of theoretical reasons from the literature on game theory for the test and peer review inadequacies that occurred on one of the SMC software projects. In addition, it was stated early in chapter 1, that the goal of this research was to identify software-intensive space system acquisition methods, policies or areas of further investigation that will allow us to avoid or minimize the possibility for a reoccurrence of these development issues. The advanced solution requires the formation of a coalition that represents those individuals who are directly involved with taking development short cuts on our space system acquisitions to identify quality-driven development processes and quality-driven employment requirements that will minimize those occurrences. The formation of ever larger coalitions should then be tasked to address the larger “should we build it” negotiation problem. In reality, this suggested theoretical solution is in fact occurring to some degree within the current environment, and from the author’s viewpoint – does not 187 fully appreciate the level of change that the author has argued needs to occur. Ultimately, this dissertation argues that the fundamental reason for the existing situation rests with the current set of government policies that lead not only to a lack of required certifications for software professionals and depth of software quality audits on projects in agency relationships, but also initial cost restrictions and unacceptable schedules for the development of new software intensive systems. The mandated use of fully vetted architectural designs using design languages (such as UML) for these systems, similar to how large-complex buildings are architected, should also significantly improve the quality situation. 
The information concerning the software growth and defect problem in chapter 1 has already been published by the author at the Aerospace Testing Seminar, while the qualitative research results from this dissertation were provided as an invited presentation at the NASA Planetary Spacecraft Fault Management Workshop [2]. Papers for peer reviewed journals covering the system dynamics model with modeling results, the changes to Isaacs’ game of optimal steel production to incorporate bad steel and its solution, and the extended game theoretic arguments that lead to the use of an N-player dynamic bargaining game as the solution for Freeman Dyson’s problem are in progress. 6.4 Future Research The most evident avenue for future research requires the identification of software projects that have verifiable team tasking information, in addition to dynamic defect information from the execution of those tasks. Of particular interest would be defect pass through rates from varying levels of rigor in design and code inspection processes and unit testing obtained from projects using ODC and dynamic COQUALMO to classify the defects [3]. Further, the development of a higher fidelity model using COCOMO II that accounts for individual products and adds feedback loops from qualification and system testing could undoubtedly identify the importance of defects in these early lifecycle documents on the quality, and the consequent cost and schedule for the development of the software. The specific dynamics and decisions on project’s C and D are case studies that should certainly be investigated in the future (where reportedly there is tasking information from project-C that can be mined). In addition, 188 although not included here, the modeled defects can be easily transferred into reliability modeling tools such as CASRE or the model can be modified to include reliability calculations. In this manner, the reliability growth of the software can be dynamically predicted. The current model can also be used to investigate additional dynamic data from software development; for example, the dynamics of peer review findings and incomplete documentation could be investigated in the future using data provided in the appendix. Further, throughout the dissertation interesting problems for further investigation were noted as appropriate. Another area for further research is the linking of decisions at the contractor’s upper level management in response to strategies their government level counterparts utilized, and the rippled-down effect on the strategies used by the workers (from a game theoretic standpoint) would be revealing. 189 C h a p t e r 6 E n d n o t e s [1] Freeman Dyson, Disturbing the Universe, Harper & Row, (New York, NY: 1979): 7. [2] Douglas J. Buettner, “Case Study Results and Findings from SMC Flight Software Projects,” presentation at the NASA Planetary Spacecraft Fault Management Workshop; http://discoverynewfrontiers.nasa.gov/fmw_info.html [3] Ray Madachy and Barry Boehm, “Assessing Quality Processes with ODC COQUALMO,” proceedings of the International Conference on Software Process, (May 10: 2008); Internet presentation available on line at http://www.icsp- conferences.org/icsp2008/Presentations/May%2010/Session%20A1/Assessing%20Quality%20 Processes%20with%20ODC%20COQUALMO%205.pdf. 190 G L O S S A R Y This section contains a glossary of terms. 
17 Acceptance Test An Acceptance Test is used synonymously with qualification test to indicate the suite of tests executed in a formal manner to verify the software meets its specified requirements. (see also qualification test) Ada (programming language) Ada is a high-level programming language created by the Department of Defense (DoD) for embedded systems during the mid-70’s to get around the problem of non-standardized language use by military systems; it is the result of an extensive language design effort. The effort focused on design a language to address the following primary concerns: reliability, maintenance, programming as a human activity, and language efficiency. In April of 1997 the DoD removed its requirements for military systems to use Ada, although, a number of systems in development at SMC still use this language. Architecture (software) The architecture of a software program is the structure, which is comprised of the software elements, the externally visible properties of those elements, and the relationships among them. Algorithm (software) An algorithm is a set of software instructions written in a computer language to perform some task. Anomaly (software) An Anomaly is any condition that deviates from expectations based on requirements specifications, design documents, user documents, standards, etc. or from someone’s perceptions or experiences. Anomalies may be found during, but not limited to, review, test, analysis, compilation, and use of software products or applicable documentation. Attitude Control Subsystem (ACS) The Attitude Control Subsystem (ACS) (also Attitude Determination and Control Subsystem (ADCS)) is the space vehicle system tasked with stabilization of the space vehicle and its three-dimensional rotational orientation in space. Automated Test An Automated Test is a test case or test suite that can be run at any time to verify that the changes to the software has not broken other functionality. The tests are generally run using a scripting language that allows them to be run manually at any time, or when the automation script is scheduled for execution following an event (examples are after a new software build or at mid-night on Friday). Automated Test Generation Automated Test Generation is the automated creation of test cases by an automated test generation tool (an example is VectorCAST TM ). Automated Test Generation can speed up the creation of test cases, and is done to provide thorough testing of the code’s branches and paths. 17 The contents of this glossary also includes terms from a tutorial on software testing taught by the author for the Aerospace Testing Seminar. 191 Bad Engineering Behavior “Bad Engineering Behavior” is our term for large software development teams that are driven by schedule and cost pressures to the point of exhibiting some or all of the following quality corner cutting behaviors; going directly to building the software code without having a design, concurrently or after the fact reverse engineering a design from the code, discontinuing or not doing rigorously quality processes such as design or code inspections and unit testing, not building enough hardware simulation systems, not staffing the development effort at the correct level, not following a rigorously planned design driven development using project management tools, and other such examples. 
Beginning-Of-Life (BOL) The Beginning-Of-Life (BOL) is the period of time starting when a space vehicle is on orbit life and the performance of the space vehicle’s systems has not been degraded from exposure to the space environment or from extended use. Black Box Test A Black Box Test is one that disregards the internal structure of the software and is purely focused on the behavioral or functional aspect of the software code under test. Boundary Value Test Boundary Value Test (also called a Domain Test) is the test or suite of tests that assumes that all inputs to software code can be viewed as numbers, and that these numbers must be classified by the internal code and then subsequently processed appropriately. The boundary value test cases are designed to test the inside, on and outside the boundaries those variables can assume. Branch Test A Branch Test is a test case with inputs that is designed to get inside a specific software branch (example are if, elseif and switch or case statements). Branch testing is structural testing or white box testing. Bug (software) A software “bug” is the flaw in the software that causes anomalous behavior of the system. C/C++ (programming languages) The C programming language was developed at Bell Laboratories in the early ‘70s as a system implementation language for the Unix operating system that now can run on virtually every computer hardware platform. C is widely considered one of the most successful high-level languages. C++ is a language invented by Bjarne Stroustrup based on C with the edition of class structures and other language features (such as inheritance) that enable C++ to implement Object Oriented Designs (OOD). Capability Maturity Model ® Integration (CMMI ® ) The Software Engineering Institute (SEI) at Carnegie Mellon University developed the Capability Maturity Model (CMM) in the late 80’s and early 90’s in order to capture software development best practices across numerous organizations. The Capability Maturity Model ® Integration (CMMI) provides a process improvement approach to organizations, and contains a 5 level system of maturity ranks (1 being the lowest and 5 the highest) that is frequently used for assessing the maturity of an organization’s processes. Central Processing Unit (CPU) The Central Processing Unit (CPU) is the integrated circuit that does most of the work in a computer. The CPU loads software instructions and input data and executes the software instructions based on the input data and the logic in the code. 192 Cleanroom (software) The cleanroom software development process is a statistical development methodology that results in very low defect software. Code (software) The software code is the set of software instructions (usually written in a high-level programming language like Ada or C/C++) that implement the functionality specified by the software requirements. Computer Aided Software Reliability Estimation (CASRE) Computer Aided Software Reliability Estimation (CASRE) is a tool from Dr. Allen Nikora of the Jet Propulsion Laboratory that can be used to predict the software reliability growth from the software’s observed defect data. Computer Software Configuration Item (CSCI) A Computer Software Configuration Item (CSCI) is a software item specified for configuration control at the system architectural design, by contractual specification, or as development products and specifically includes SCM assignment of project-unique identifiers. 
CSCI entities include software products to be developed or used under contract, and certain elements required from the software development environment. Coverage (test) Coverage is the percent of all branches or paths in the software unit that were covered by all of the unit’s tests. CPU hours CPU hours is the number of hours the CPU has been active during testing, excluding any time that the flight software was not in use. CPU hours is just one convenient measure that can be used to determine the reliability of the software. Day-In-The-Life (DITL) Test A Day-In-The-Life (DITL) Test is an endurance (soak) test that operates the software, as it would be used for at least 24 (wall clock) hours. Defect (software) (Defect Report (DR) or Software Defect Repository (SDR)) A Defect is a flaw in a system or software product that is discovered through an inspection process. (Note: Residual defects not discovered by inspection can lead to faults, failures and anomalies.) The specific occurrence of a product defect in the software defect database is termed as a DR in the case of a software code defect while an SDR is the defect database. Developer, Programmer, or Coder (software) The software developer, computer programmer or coder is the individual or set of individuals that write the software code. Effective Line Of Code (ELOC or ESLOC) Effective Line of Code is a non-commented source line of code. (see source line of code) Electrical Power And Distribution Subsystem (EPDS) The Electrical Power And Distribution Subsystem (EPDS) (also Electrical Power Subsystem (EPS) or power subsystem) generates power, conditions it, regulates it, stores it for periods of peak demand or when the space vehicle is in eclipse and distributes it throughout the space vehicle to the other subsystems that use it. 193 Emulator/Emulation An Emulator is a piece of hardware (for example a computer) that is used to mimic the operation of a piece of flight hardware. End-Of-Life (EOL) The End-Of-Life (EOL) is the period of time near the end of the space vehicle’s on orbit life after the performance of the space vehicle’s systems have been significantly degraded from exposure to the space environment and extended use. End-To-End Test An End-To-End Test is a test that involves all CSCI software components to test the full communication path from the ground station to the satellite and back. Endurance (Soak) Test An Endurance (Soak) Test is a test that operates the flight software for an extended period of time to ensure that the flight software will not encounter any issues when it is run on orbit for an even longer period time. Environmental Test (for software) An Environmental Test of the flight hardware should include testing the flight software at the temperature extremes to verify that the environmental changes (which can affect timing) does not adversely affect the operation of the RTOS and other flight software. Error Checking/Handling Error checking and handling is code that is included specifically to check computations for errors (for example Not A Number (NAN) or some possible anticipated result that could occur from sensor data input) and then handle/report these errors in a specific manner. Expected Outputs Expected Outputs are a set of outputs from an oracle for a given set of inputs that are compared against the actual outputs from the software under test to determine if the software was coded correctly. 
Extreme Value Test An Extreme Value Test is a boundary value test with input values assigned to extremely large (positive and negative), extremely small (approaching zero) and the value zero. Failure (software) A Failure is the inability of a system or component to perform its required functions within specified performance requirements. Failure Reporting And Corrective Action System (FRACAS) (for software) Failure Reporting And Corrective Action System (FRACAS) is the database and processes for documenting, reporting, correcting and verifying fixes for software failures. Failure Mode and Effects Analyses (FMEA) Failure Mode and Effects Analyses (FMEA) is a detailed analysis method used to identify the failure modes and protection mechanisms for safety/mission critical embedded control system software designs. 194 Failure Mode Effects and Criticality Analyses (FMECA) Failure Mode Effects and Criticality Analyses (FMECA) uses the FMEA technique to calculate a criticality value based on the probability that the identified failure mode will cause a system failure. Fault (software) A Fault is a flaw in a system or software product that is discovered through a test process. Fault Injection Test A Fault Injection Test is a test that inserts faults in an attempt to predict how the system will behave when a software component fails, hardware fails, when bad user input is encountered or when the software is forced to operate in unlikely operational modes. Fault Tolerant (software) Fault Tolerant software provides software code to detect, analyze and handle anticipated software/hardware faults in a specific manner based on the type of fault that was detected. Potential fault handling mechanisms include non-drastic measures like sending a signal to ground operators to more drastic measures like placing the space vehicle into a safe state to wait for ground command or rebooting a computer. Flight Software Flight Software is a general term used to mean any software or firmware that is embedded into or flown on the space vehicle either in the computers used for space vehicle control (space vehicle bus) or on any of the attached payloads. Formal Methods Formal Methods are a set of methods from the fields of logic and discrete mathematics that rely on the explicit enumeration of all assumptions and steps in the specification, design and construction of computer systems and software. Formal Qualification Test (FQT) A Formal Qualification Test (not to be confused with Formal Methods) is a methodical way for performing black-box requirements qualification testing for customer witnesses with specific roles for a test conductor, a test reader, and a test director with active Software Quality Assurance participation. The FQT is performed in a formal setting and used to demonstrate to the customer that the software is mature and meets all of its requirements. Functional Hazard Analysis (FHA) Functional Hazard Analysis (FHA) is a technique that attempts to predict the effects of functional failures on the system. Functional Requirements Functional Requirements are those software requirements that specify software functionality that shall be implemented. 
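As a minimal, hypothetical illustration of the Error Checking/Handling and Fault Tolerant entries, the C++ sketch below validates a sensor reading for Not A Number (NAN) and out-of-range values and maps each anticipated fault to a handling action; the function name, the range limits, and the fault actions are invented for illustration only.

    #include <cmath>
    #include <iostream>

    // Hypothetical fault-handling policy identifiers (illustrative only).
    enum class FaultAction { None, ReportToGround, EnterSafeMode };

    // Error checking: reject NaN or physically impossible sensor readings and
    // report the anticipated fault instead of propagating bad data.
    FaultAction processSunSensor(double reading, double& filteredOut) {
        if (std::isnan(reading)) {
            filteredOut = 0.0;                   // safe default value
            return FaultAction::ReportToGround;  // non-drastic handling
        }
        if (reading < -1.0 || reading > 1.0) {   // impossible cosine value
            filteredOut = 0.0;
            return FaultAction::EnterSafeMode;   // more drastic handling
        }
        filteredOut = reading;                   // nominal path
        return FaultAction::None;
    }

    int main() {
        double out = 0.0;
        // Fault injection at the unit level: feed a NaN as a test input.
        const FaultAction action = processSunSensor(std::nan(""), out);
        std::cout << "Fault action code: " << static_cast<int>(action)
                  << ", filtered output: " << out << '\n';
        return 0;
    }

Feeding the NaN input in main() is also a unit-level instance of the Fault Injection Test idea defined above.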
Good Engineering Behavior "Good Engineering Behavior" is our term for large software development teams that are driven by quality despite schedule and cost pressures and that exhibit the following behaviors: using prototypes of software algorithms (for example, using tools like Mathematica, Matlab or languages other than the target language) to support an object oriented design; rigorously reviewing all documentation using peer reviews and inspections; documenting 100% code path and branch coverage and reviewing test cases for negative testing; using in-phase feedback of quality from ODC to determine if additional reviews need 195 to be held; utilizing proactive software quality assurance personnel who are an integral part of the development process; and following any number of other such quality-minded, low-defect processes (many of which can be found in the literature). High-Level (Low-Level) Programming Language A High-Level Programming Language (examples are C/C++ and Ada) is a software language that is parsed and converted into directly executable machine language by the language's compiler. A software language contains a specific syntax (such as the assignment operator =, or the not-equal operator /=) with logical operations (such as AND && or OR ||) and other similar machine-code-interpretable functionality that is used by the compiler to create machine code. Machine code is the set of instructions that can be directly interpreted and acted upon by the CPU. Low-Level Programming Languages provide less abstraction from the 'low-level' machine code. Assembly Language (a human-readable notation for machine code) and machine code are examples of what are typically termed Low-Level Programming Languages. Incremental Development Incremental Development is a software development method that plans for developing the software's functionality in increments. The result is a build a little, test a little paradigm, until all of the functionality has been implemented. Independent Verification and Validation (IV&V) Independent Verification and Validation (IV&V) is a costly testing methodology where an independent agent is tasked with black box testing the software. All mission critical software should have IV&V on contract. Inputs (for test) Test Inputs are any set of values used by a white or black box test to verify the correct behavior of the software. Integrated Product Team (IPT) An Integrated Product Team (IPT) is an organizational structure responsible for managing and building significant aspects of a system. The IPT structure for space systems consists of the space segment (responsible for space assets), the ground segment (responsible for all ground telemetry receipt, processing and space asset commanding), and system engineering, integration and test (SEIT) (responsible for the total program integration effort); recently an IPT was added explicitly for managing the entire software effort. Integration Test (IT) An Integration Test is the testing that occurs during the gradual aggregation of units (after they have passed their localized unit tests) in order to build up the CSCI. Testing at this level is oriented towards ensuring that the unit behaves properly when interfaced with the larger aggregate. In real-time embedded systems, this testing verifies that the aggregated CSCI still meets its real-time performance budgets.
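The Integration Test entry notes that, for real-time embedded systems, testing must verify that the aggregated CSCI still meets its real-time performance budgets. The sketch below shows one hedged way such a timing check might look in a host-based integration test; the controlCycle workload and the 5-millisecond budget are invented for illustration and imply nothing about any real flight software.

    #include <chrono>
    #include <cmath>
    #include <iostream>

    // Stand-in for an aggregated set of units under integration test.
    double controlCycle(int iterations) {
        double acc = 0.0;
        for (int i = 0; i < iterations; ++i) {
            acc += std::sin(i * 0.001);   // placeholder workload only
        }
        return acc;
    }

    int main() {
        using clock = std::chrono::steady_clock;
        const auto budget = std::chrono::milliseconds(5);  // illustrative budget

        const auto start = clock::now();
        const double result = controlCycle(100000);
        const auto elapsed = clock::now() - start;

        const bool withinBudget = elapsed <= budget;
        std::cout << "Cycle result " << result << " took "
                  << std::chrono::duration_cast<std::chrono::microseconds>(elapsed).count()
                  << " us; budget met: " << std::boolalpha << withinBudget << '\n';
        return withinBudget ? 0 : 1;
    }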
Interface Control Document (ICD) An Interface Control Document contains the details of the interfaces between hardware and software, and between software and other software, and the definitions of the TLM formats between ground and space. Interface Test An Interface Test is the test case (consisting of both positive and negative cases) designed to show that the unit behaves properly when integrated into the CSCI. Interface tests can be designed to test unit-to-unit interfaces, external CSCI interfaces and hardware interfaces. 196 Interpreted Language An Interpreted Language is one that is read and executed directly on the CPU from the source code by an interpreter (an executable form of the code is generally not an intermediate output from interpreters). Key Performance Parameters (KPPs) Key Performance Parameters (KPPs) are a critical subset of performance parameters representing those capabilities and characteristics so significant that failure to meet an acceptable threshold value for performance can be cause for the concept or the system to be reevaluated, or for the project to be reassessed or cancelled. Mean Time Between Failures (MTBF) Mean Time Between Failures (MTBF) is the mean expected time between successive failures of the system. Mean Time To Failure (MTTF) Mean Time To Failure (MTTF) is the mean expected time until the next failure occurs. Mean Time To Repair (MTTR) Mean Time To Repair (MTTR) is the mean expected time for the system to be repaired and returned to service. Mission Critical Functionality Mission Critical Functionality is any software functionality whose failure has a significant probability of leading to the loss of the space vehicle. Negative Test A Negative Test is any test originally designed to try to demonstrate that the software or system does NOT meet its requirements or behaves badly (this includes any exploratory attempts at finding bad behavior due to missing requirements or a poor design). Negative tests include off-nominal tests, stress tests, extreme value tests, and boundary-value tests outside the bounds of the parameter's planned operational range. Mature software development organizations have aggressive negative testing strategies incorporated with a design-for-testability philosophy. Nominal Test A Nominal Test is any positive test originally designed to demonstrate that the software or system, when fed parameters from within its normal operational range, behaves well (does not behave badly). Object Oriented Design (OOD) Object Oriented (OO) Design (OOD) is a software design methodology that creates an object oriented software architectural model. OOD provides benefits to the system in maintainability and reusability of the software. An OOD will group like functionality into classes and sub-classes. The OO methodology provides constructs such as inheritance, which allows classes to use the functionality and see/use the data of the classes they inherit from, and data encapsulation, which only allows functions in a class to see and use data and other functions that are also members of the same class (among other OO constructs such as polymorphism, which allows a class to take on different fundamental characteristics based on its use). Path Test A Path Test is a test that executes a specific thread (or pathway) through the software. 197 Performance Requirements Performance Requirements are those software requirements that specify software performance (such as timing requirements, frequency requirements or CPU utilization requirements) that shall be met.
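The Object Oriented Design entry above describes inheritance, encapsulation and polymorphism in prose; the following generic, textbook-style C++ sketch shows the same constructs in code. The Payload, Imager and Radiometer classes are hypothetical and are not drawn from any system described in this dissertation.

    #include <iostream>
    #include <memory>
    #include <vector>

    // Base class: encapsulates its data (private member) and exposes behavior.
    class Payload {
    public:
        explicit Payload(double powerWatts) : powerWatts_(powerWatts) {}
        virtual ~Payload() = default;

        double powerDraw() const { return powerWatts_; }   // controlled access
        virtual void collect() const = 0;                  // polymorphic behavior

    private:
        double powerWatts_;   // encapsulated: only Payload members can touch it
    };

    // Derived classes inherit the power-draw behavior and specialize collect().
    class Imager : public Payload {
    public:
        Imager() : Payload(120.0) {}
        void collect() const override { std::cout << "Imager: capture frame\n"; }
    };

    class Radiometer : public Payload {
    public:
        Radiometer() : Payload(45.0) {}
        void collect() const override { std::cout << "Radiometer: sample band\n"; }
    };

    int main() {
        std::vector<std::unique_ptr<Payload>> payloads;
        payloads.push_back(std::make_unique<Imager>());
        payloads.push_back(std::make_unique<Radiometer>());

        double total = 0.0;
        for (const auto& p : payloads) {
            p->collect();               // polymorphism: the correct override runs
            total += p->powerDraw();    // inherited, encapsulated accessor
        }
        std::cout << "Total payload power: " << total << " W\n";
        return 0;
    }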
Preliminary System Safety (or Hazard) Analysis Preliminary System Safety (or Hazard) Analysis (PSSA or PHA) is a preliminary analysis performed during the initial concept phase of the system to identify all safety issues or hazards, assess their risk of occurrence, and categorize the probable consequences (for example, loss of life or loss of mission). Positive Test A Positive Test (also called a 'happy path' or 'clean' test) is any test originally designed to simply demonstrate that the software or system behaves well when provided with nominal input parameters. Off-nominal Test An Off-nominal Test is a negative test originally designed to demonstrate that feeding the software or system parameters outside their normal operational range does not cause the system to behave badly, and that recovery to normal operation is possible. Operational Profile An Operational Profile is the entire set of operations that the software can execute, together with each operation's probability of occurrence. Operational Test An Operational Test is a test that is executed when the system is first placed in operation to check out the system and determine whether it is working properly. Oracle An Oracle is any software that is used to provide inputs and expected outputs for testing the flight software. Qualification Test (also Software Item Qualification Test (SIQT) and Formal Qualification Test (FQT)) A Qualification Test is a test designed to show that the software meets a requirement or a set of requirements. (see also requirement test) Random Test A Random Test is a test that selects random inputs according to the software system's operational profile. Random tests are used to demonstrate the reliability of the software. Rapid Prototype (software) A Rapid Prototype is a software algorithm (or set of algorithms) that is rapidly developed by systems engineers and analysts in order to implement significant functionality to mitigate technical and/or schedule risk. Rapid Prototypes are usually coded in programs such as Matlab or Mathematica, which provide an integrated graphical analysis environment with interpreted language constructs, but they may use C/C++ or another high-level programming language. Real-Time-Operating-System (RTOS) The Real-Time-Operating-System (RTOS) is the software code that provides the interface between the application (flight software) and the embedded hardware (CPU and various support electronics like Random Access Memory (RAM)). The RTOS provides the system services requested by the flight software, usually within very strict timing requirements. The RTOS 'kernel' is an abstraction 198 layer of software that hides the details of the hardware interface from the application software. The difference between Real-Time and Non-Real-Time operating systems is the real-time system's need for deterministic timing behavior, which allows it to always meet real-time deadlines. Red-lines A red-line is a flaw in a document that is purely cosmetic in nature, such as incorrect grammar, spelling or punctuation. The inspection processes the author enjoyed as a young engineer were extremely rigorous, and at the end red-lines were simply provided to the developer whose product was being reviewed. (Undoubtedly the astute reader can find red-lines remaining in this document as an example. Some of these latent defects the author is fully aware of.
Included among these defects are some that will cause the reader to cringe, not unlike the computers that simply carry out the instructions that were coded by the developers. There regrettably is no UML for designing a dissertation.) Regression Test A Regression Test is any test that is re-executed after it originally passed in an attempt to demonstrate that the test still passes after the software code has changed. Regression Tests should be planned and designed for automation. Reliability (software) Software reliability is the probability that a software program or system will perform its intended function over a specified interval under stated conditions. Reliability Demonstration Chart A Reliability Demonstration Chart is a graphical method for demonstrating that the software meets its reliability requirements. Request For Proposal (RFP) A Request For Proposal (RFP) is a customer-provided document that describes at a high level the system the customer wants built. Requirements (software) Software requirements are the set of functional and performance capabilities that the software shall comply with (meet) so the system can meet its system level requirements. Requirement Test A Requirement Test is a test designed to show that the CSCI meets a specific requirement. (see qualification test) Risk (software) Software Risks are concerns/issues (usually technical 'risks' or schedule 'risks') that may increase a program's budget or delivery schedule, or may lead to an inability to meet a functional or performance requirement. Robustness Test A Robustness Test is a negative test that is designed to show that the software behaves well under off-nominal conditions. Run-For-Record A Run-For-Record is the Formal Qualification or Software Item Qualification Test event for the CSCI that is run for the customer to demonstrate that the software meets its requirements. After successful completion of this test event and the Test Exit Review milestone (which presents the results to the customer in the form of the Software Test Report), the software is fully qualified provided that there 199 are no significant liens from failed requirements. A Run-For-Record should not be attempted without first having the tests executed (called a dry-run) on the target build. Safety Critical Safety Critical functionality is the set of software functions that are deemed to pose a threat to human life should a failure occur. Scenario Test A Scenario Test is a test that executes a significant thread (which could be a single Use Case or a large group of Use Cases) through the software that demonstrates the software's behavior. Script-based Test A Script-based Test is a set of instructions that are interpreted by a software test environment to command a simulator, emulator or both to execute in a specific manner to test the flight software. Simulator (high-fidelity/low-fidelity) Software Simulators are software programs that 'simulate' the space vehicle's embedded hardware environment. The simulators are usually termed 'high-fidelity' if they accurately implement physics-based or empirical models of the environment that the flight software will run in, or 'low-fidelity' if they only approximately model that environment. Smoke Test A Smoke Test is a scenario test used by the developers during software integration testing as a regression test to verify that new units or bug fixes do not break the software's existing functionality.
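The Operational Profile and Random Test entries (defined earlier in this glossary) describe drawing test inputs in proportion to the probability of each operation occurring. The sketch below shows one minimal way to do that with the C++ standard library; the three operations and their probabilities are invented for illustration.

    #include <iostream>
    #include <map>
    #include <random>
    #include <string>
    #include <vector>

    int main() {
        // Hypothetical operational profile: operations and their probabilities.
        const std::vector<std::string> operations = {
            "collect_telemetry", "slew_to_target", "downlink_pass"};
        std::discrete_distribution<int> profile({0.70, 0.20, 0.10});

        std::mt19937 rng(12345);   // fixed seed so the run repeats
        std::map<std::string, int> counts;

        // Random testing: draw operations in proportion to the profile.
        for (int i = 0; i < 1000; ++i) {
            counts[operations[profile(rng)]]++;
            // A real test would invoke the software here and check its outputs.
        }

        for (const auto& entry : counts) {
            std::cout << entry.first << ": " << entry.second << " invocations\n";
        }
        return 0;
    }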
Software Development Lifecycle Software Development Lifecycle is the collection of software phases (Requirements Elicitation, Requirements Analysis, Design, Implementation, Test, Deployment and Maintenance) that span the system’s concept inception to retirement of that system. Software Development Plan (SDP) The Software Development Plan provides the documented software process, and is typically a contractually binding document. Software Development Process A Software Development Process is usually classified as the model that the software development follows, while the specific development processes that an organization may implement may include everything from documentation peer reviews and software code inspections to process improvement activities. Specific examples of software development models are the waterfall, incremental, evolutionary, and spiral development models. Software Engineer A Software Engineer is usually an individual that participates in the design, development and unit testing of the software code. Some organizations may also have separate roles for a software architect (an individual or set of individuals responsible for the overall software architectural OOD) and for the software programmer. There is, however, no strict definition for a required body of knowledge to define a software engineer’s skill set. 200 Software Metrics Software Metrics are those metrics gathered during the development or maintenance of the software. They are used by management to assess schedule, cost, and all the –ilities (for example reliability or quality) and adherence to the development process. Software developers hate to maintain and fill out spreadsheets and progress information to support metrics gathering. Software Quality Assurance (SQA) Software Quality Assurance (SQA) is the full set of activities that assures that the software meets organizational and military quality standards. SQA personnel usually assume a wide range of tasks from participation in peer reviews to insure that they are occurring within organizational standards, to signing off and closing software defect reports as a final process step to indicate that the defect has been corrected in the delivered product. Software Review Board (SRB) The Software Review Board (SRB) is the meeting that is convened to review and disposition all software defects. Software Reliability Engineering (SRE) Software Reliability Engineering (SRE) is the entire suite of software engineering methods used to measure, model and predict the software’s reliability while providing feedback to the software development process to proactively improve the software’s reliability during development. Software Reliability Growth Model (SRGM) A Software Reliability Growth Model (SRGM) is any of the numerous mathematical models that use cumulative defect history of the software to predict the software’s reliability. Software Requirements Specification (SRS) Software Requirements Specification (SRS) is the document that contains the functional and performance requirements that the software is designed to meet. Software Test Description (STD) The Software Test Description (STD) consisting of software test procedures is a CDRL document that provides detailed step-by-step test execution instructions with pass/fail criteria for the test. Software Test Plan (STP) The Software Test Plan (STP) is a CDRL document that provides the overall software test strategy. 
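As a hedged illustration of the Software Reliability Growth Model entry, the sketch below evaluates the mean value function of one widely cited SRGM, the Goel-Okumoto exponential model, m(t) = a(1 - e^{-bt}). The parameter values are invented; in practice a and b would be fit to the program's observed cumulative defect data, for example with a tool such as CASRE.

    #include <cmath>
    #include <iostream>

    // Goel-Okumoto mean value function: expected cumulative defects by time t.
    //   a = expected total number of defects, b = per-defect detection rate.
    double expectedDefects(double a, double b, double t) {
        return a * (1.0 - std::exp(-b * t));
    }

    int main() {
        const double a = 120.0;   // illustrative parameter values only;
        const double b = 0.02;    // these would normally be fit to defect data

        for (double t = 0.0; t <= 200.0; t += 50.0) {   // t in CPU test hours
            const double defectsFound = expectedDefects(a, b, t);
            const double failureIntensity = a * b * std::exp(-b * t);
            std::cout << "t = " << t << " h: expected defects = " << defectsFound
                      << ", failure intensity = " << failureIntensity
                      << " per hour\n";
        }
        return 0;
    }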
Software Test Report (STR) The Software Test Report (STR) is a CDRL document that provides the results of the run-for-record SIQT. System Program Office (SPO) The System Program Office (SPO) is the government organization responsible for the acquisition of the space system. Source Lines Of Code (SLOC) Source Lines Of Code (SLOC) is a software metric that is frequently used early in the system concept phase (and even during development) to estimate the amount of effort (and thus budget) it will take to create the software. SLOC can be measured as physical or logical SLOC. Physical SLOC is the number of physical lines for the code, where language and implementation differences can dramatically affect the actual SLOC count. Logical SLOC identifies language specifics for inclusion 201 by the SLOC counter such as pre-compiler directives (for example #if or #def in C/C++) expression statements, etc. Spiral Development (Model) The Spiral Development Model is a software development process invented by Barry Boehm (University of Southern California) that repeatedly iterates a set of more rudimentary development processes while prescribing the continuous management of risk. Recently, Boehm has introduced a variation to the risk-based spiral model to cover software intensive systems with many suppliers, long development cycles and numerous systems of systems. Stress Test A Stress Test is a negative test that subjects the system (of which the software is integrated into) to unrealistically harsh inputs or load with inadequate system resources to perform the operation. System Design A System Design consists of those characteristics of the system or CSCI that are selected by the developing organization in response to the mission requirements. Some will match the requirements; others will be elaborations of requirements, such as definitions of all error messages in response to a requirement to display error messages; others will be implementation related, such as decisions about what software units and logic to use to satisfy the requirements. Test Case (software) A Test Case is a specific test, which can be a negative or positive, or a white or black box test. Test Engineer (software) A Test Engineer is the individual tasked with software testing, usually following (but not including) unit testing. Test Procedure (software) The Test Procedure is the document that contains the steps needed to execute the test case. Test Point (software) A Test Point is a parameter that is checked after the execution of a test case. Test Suite A Test Suite is a group of tests. A test suite can be at the level of testing some specific functionality or can be a suite of suites that test the entire CSCI. Test-Like-You-Fly (TLYF) (for software) Test-Like-You-Fly (TLYF) is the test philosophy that you test the system under the full range of conditions that the system is intended to operate under. Thread (software design) A software design “thread” is a specific path through the integrated software system that follows a design Use Case or a specific software use scenario. Thread (software implementation) A software implementation “thread” is a concurrent task or sets of functionality that can execute either at any time or on a specific time schedule. 
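The Source Lines Of Code entry distinguishes physical from logical SLOC. The sketch below is a deliberately simplified physical-SLOC counter for C/C++ source files: it skips blank lines and lines that begin with //, and it ignores block comments, continuation lines and the other language specifics that a real counter would have to handle.

    #include <fstream>
    #include <iostream>
    #include <string>

    // Count physical, non-blank, non-"//" source lines in one file.
    // Simplification: block comments (/* ... */) and preprocessor
    // subtleties are deliberately not handled.
    int countPhysicalSloc(const std::string& path) {
        std::ifstream in(path);
        if (!in) return -1;

        int sloc = 0;
        std::string line;
        while (std::getline(in, line)) {
            const std::size_t first = line.find_first_not_of(" \t");
            if (first == std::string::npos) continue;          // blank line
            if (line.compare(first, 2, "//") == 0) continue;   // comment line
            ++sloc;
        }
        return sloc;
    }

    int main(int argc, char** argv) {
        if (argc < 2) {
            std::cout << "usage: sloc <file.cpp>\n";
            return 1;
        }
        std::cout << argv[1] << ": " << countPhysicalSloc(argv[1])
                  << " physical SLOC\n";
        return 0;
    }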
202 Unified Modeling Language (UML) The Unified Modeling Language (UML) is the result of a unification effort by Rational Software Corporation in the mid-90’s after James Rumbaugh (the author of the popular Object-Modeling- Technique (OMT) methodology) joined forces at Rational with Grady Booch (the author of another popular object modeling technique), and later Ivar Jacobsen (author of the Use Case modeling techniques). The unification of the object modeling methodologies by these ‘three amigos’ has effectively ended the ‘modeling wars’. UML is now standardized and maintained by the Object Management Group™ (OMG™), which is an open membership not-for-profit consortium that produces and maintains computer industry specifications for interoperable enterprise applications. Unit Test A Unit Test is a test of the smallest piece of software that can be independently tested, usually the work of a single developer consisting of a few hundred lines of code or less. Use Case A Use Case is the specification of sequences of actions, including variant sequences and error sequences, that a system, subsystem, or class can perform by interacting with external systems or users to define the system or sub-systems behaviors under the specified circumstances. Validation Validation is the process of evaluation (usually at the end of software development) to ensure that the software is in compliance with its requirements. (Stated another way, validation is the process through which one tests to make sure the right thing was built.) Verification Verification is the process of determining if the prior software development phase (for example the design) met the requirements of that phase (as in the example of design verification that the design will meet its requirements). (Stated another way, verification is the process through which one attempts to make sure the thing (in the case of the example the design) was done correctly.) Waterfall (Development) Model Waterfall Development is the classic software development model (consisting of the standard Requirements elicitation/analysis, Preliminary Design, Detailed Design, Implementation, System Integration and Test, Operation) where the next development phase does not begin until the current phase is completed. White-Box Test A White-Box Test (or Glass-Box Test) is a test of the software code’s internal structure (structural test). Work Breakdown Structure (WBS) The Work Breakdown Structure (WBS) is the hierarchical structure used in project management for organizing deliverables and tasks. 203 B I B L I O G R A P H Y Abdel-Hamid, Tarek, and Madnick, Stuart E., Software Project Dynamics: An Integrated Approach, Prentice Hall Software Series, (Englewood Cliffs, New Jersey: 1991). Abdel-Hamid, Tarik, “The dynamics of software development project management: An integrative system dynamics perspective,” Ph.D. dissertation, Sloan School of Management, MIT, (1984). Adams, R.J., Eslinger, S., Hantos, P., Owens, K.L., Stephenson, L.T., Tagami, J.M., Weiskopf, R., Newberry, LtCol G.A., and Zambrana, M.A., “Software Development Standard for Space Systems,” The Aerospace Corporation Technical Operating Report TOR-2004(3909)-3537 Rev. B., (El Segundo, CA: 2005). Adda, Jérôme, and Cooper, Russell, Dynamic Economics, The MIT Press, (Cambridge, MA: 2003). Altman, E., Boulogne, T., El Azouzi, R., Jiménez, T., Wynter, L., “A survey on networking games in telecommunications,” Computers and Operations Research 33, no. 2, (February 2006). 
Anonymous, Markov chain, on Wikipedia; Internet; available from http://en.wikipedia.org/wiki/Markov_chain. Aoyama, Mikio, “Agile Software Process and Its Experience,” IEEE Proceedings of the 20th International Conference on Software Engineering, (1998). Aoyama, Mikio, “Agile Software Process Model,” IEEE Proceedings of the 21st International Computer Software and Applications Conference, (1997). Austin, Robert D., “The effects of time pressure on quality in software development: An agency model,” Information Systems Research (INFORMS), Vol. 12, no. 2, (June 2001). Ballhaus, William F., Jr., “National Security Keynote,” 2004 Space Systems Engineering & Risk Management Symposium, available online at www.aero.org/conferences/riskmgmt/pdfs/Ballhaus.pdf. Barton, J. H., Czeck, E.W., Segall, Z.Z., Siewiorek, D.P., “Fault Injection Experiments Using FIAT,” IEEE Transactions on Computers 39, no. 4, (1990). Beizer, Boris, Black-Box Testing: Techniques for Functional Testing of Software and Systems, John Wiley & Sons, (1995). Beizer, Boris, Software Testing Techniques 2 nd Edition, International Thomson Computer Press, (Boston, MA: 1990). Berger, James O., Statistical Decision Theory and Bayesian Analysis 2 nd Edition, Springer-Verlag New York Inc., (New York, NY: 1980). Bernard, Tom, et al., “CMMI® Acquisition Module (CMMI-AM), Version 1.0,” Carnegie Mellon University Software Engineering Institute Technical Report CMU/SEI-2004-TR-001, (2004). 204 Boehm, Barry W., “A Spiral Model of Software Development and Enhancement”, IEEE Computer 21, no. 5 (1988). Boehm, Barry W., “A View of 20 th and 21 st Century Software Engineering,” keynote address 28 th International Conference on Software Engineering (ICSE 2006), (Shanghai, China: 2006); Internet; available from http://www.isr.uci.edu/icse-06/program/keynotes/Boehm-Keynote.ppt. Boehm, Barry W., “Software Risk Management: Principles and Practices,” IEEE Software, (1991). Boehm, Barry W., et al., Software Cost Estimation With COCOMO II, Prentice Hall PTR, (Upper Saddle River, NJ: 2000). Boehm, Barry W., Software Engineering Economics, Prentice-Hall, Inc., (Englewood Cliffs, NJ: 1981). Boehm, Barry W., Software Risk Management, IEEE Computer Society Press, (Washington, D.C.: 1989). Boehm, Barry, “Get Ready for Agile Methods, with Care,” IEEE Computer 35, no. 1, (2002). Boehm, Barry, and Jain, Apurva, “A Value-Based Software Process Framework”, Proceedings Of The Software Process Change, International Software Process Workshop and International Workshop on Software Process Simulation and Modeling, SPW/ProSim 2006, (Shanghai, China: 2006). Boehm, Barry and Jain, Apurva, “An Initial Theory of Value-Based Software Engineering”, USC-CSE Technical Report 2005-505, (2005). Boehm, Barry, and Lane, Jo Ann, “21 st Century Processes for Acquiring 21 st Century Software-Intensive Systems of Systems”, Crosstalk, (2006). Boehm, Barry and Lane, Jo Ann, “Using the Incremental Commitment Model to Integrate System Acquisition, Systems Engineering, and Software Engineering,” CrossTalk, October Issue, (2007). Boehm, Barry and Ross, Rony, “Theory-W Software Project Management: Principles and Examples”, IEEE Transactions On Software Engineering 15, no. 7, (1989). Boehm, Barry, edited by Wilfred J. Hansen, Spiral Development: Experience, Principles, and Refinements: Spiral Development Workshop February 9, 2000, Special Report CMU/SEI-00- SR-08, (2000). Boehm, Barry, et al., “Using the WinWin Spiral Model: A Case Study”, IEEE Computer 31, no. 7, (1998). 
Booch, Grady, “The Fever is Real,” Internet, available from http://www.acmqueue.org/modules.php?name=Content&pa=showpage&pid=131. Breitner, M. H., “The Genesis of Differential Games in Light of Isaacs Contributions,” Journal of Optimization Theory and Applications, vol. 124, no. 3, Springer Science+Business Media B.V., (2005). 205 Bridge, Norm, and Miller, Corinne, “Orthogonal Defect Classification Using Defect Data to Improve Software Development,” Proceedings of the International Conference on Software Quality 7, no. 0, (Montgomery, AL: 1997). Brown, Rupert, Group Processes 2 nd Edition, Blackwell Publishing, (Malden, MA: 2000): 125-143. Buettner, Douglas J., “Case Study Results and Findings from SMC Flight Software Projects,” presentation at the NASA Planetary Spacecraft Fault Management Workshop; http://discoverynewfrontiers.nasa.gov/fmw_info.html. Buettner, D. J., and Hecht, M., “Use of a Software Anomaly Repository for Software Reliability Analysis”, presentation at the International Symposium on Software Reliability Engineering (ISSRE), (2004). Buettner, Douglas J. and Arnheim, Bruce L., “The Need for Advanced Space Software Development Technologies,” Proceedings of the 23rd Aerospace Testing Seminar, 10-12 October 2006, The Aerospace Corporation, (2006). Buettner, Douglas J., Hayes, Catherine K., Trotter, Jason D., and Miller, Andrew, “Integrated Technical Computing Environments as a Tool for Testing Algorithmically Complicated Software,” Proceedings of the Seventeenth International Conference on Testing Computer Software, (June 2000). Buglione, Luigi, and Abran, Alain, "Introducing Root-Cause Analysis and Orthogonal Defect Classification at Lower CMMI Maturity Levels," Proceedings of the International Conference on Software Process and Product Measurement, (Cádiz, Spain: November 2006), 6-7; Internet; available from http:// www.gelog.etsmtl.ca/publications/pdf/1037.pdf. Buisman, Jacco, “Game Theory and Bidding for Software Projects: An Evaluation of the Bidding Behaviour of Software Engineers,” M.S. Thesis Blekinge Institute of Technology, (August, 2002). Bunting, Russ, et al., “Interdisciplinary Influences in Software Engineering Practices,” Proceedings of the 10th International Workshop on Software Technology and Engineering Practice, (2002). Charette, Robert N., Software Engineering Risk Analysis and Management, Intertext Publications/Multiscience Press, Inc and McGraw-Hill Book Company (New York, NY: 1989). Chillarege, Ram, “Orthogonal Defect Classification,” in Handbook of Software Reliability Engineering , ed. Michael R. Lyu (Los Alamitos, CA: IEEE Computer Science Press; New York, NY: McGraw-Hill Publishing Company, 1996). Clark, Graham, et al., “The Möbius Modeling Tool,” Proceedings of the 9 th International Workshop on Petri Nets and Performance Models, (September 2001). CNN News article, “Scientist: Mars rock photo shows 'Holy Grail‘,” CNN, 27 January 2004; Internet available from http://www.cnn.com/2004/TECH/space/01/26 /mars.rovers/; Internet; accessed on 5 May 2007. 206 Columbia Accident Investigation Board Report Vol. 1, National Aeronautics and Space Administration and the Government Printing Office (Washington D.C.: August 2003): 195-204. Congress, House, Government Reform Subcommittee on Technology and Procurement Policy, Acquisition Reform Working Group Statement on “Acquisition Reform Initiatives,” 107 th Cong., 22 May 2001; Internet; available from http://www.csa- dc.org/documents/TestimonybyARWGbeforeTechnologyandProcurementPolicysu.pdf. 
Cox, James C., Shachat, Jason, and Walker, Mark, “An Experiment to Evaluate Bayesian Learning of Nash Equilibrium Play,” Games and Economic Behavior 34, (2001): 11-33. David, Leonard, “'Serious Anomaly' Silences Mars Spirit Rover,” SPACE.com news article contributed by The Associated Press, 22 January 2004; Internet; available from http://www.space.com/missionlaunches/spirit_silent_040122.html. Defense Acquisition University, Earned Value Management (EVM) Community Gold Card quick link; Internet https://acc.dau.mil/evm Dobbing, Brian and Burns, Alan, “The Ravenscar Profile for Real-Time and High Integrity Systems”, Crosstalk, (2003); Internet; available from http://www.stsc.hill.af.mil/crosstalk/2003/11/0311CrossTalk.pdf. Dockner, Engelbert, Jørgensen, Steffen, Van Long, Ngo, Gerhard Sorger, Differential games in economics and management science, Cambridge University Press, (Cambridge, UK 2000). Dos Santos, Walter A., Martins, Osvandre A., and Da Cunha, Adilson M., “A Real Time UML Modeling for Satellite On Board Software,” Proceedings of the 2nd International Conference on Recent Advances in Space Technologies, (June 2005). Druyun, Darleen A., Testimony to Congressional House Armed Services Committee, (April 8th, 1997); Internet: available from http://armedservices.house.gov/comdocs/testimony/105thcongress/97- 4-8Druyun.htm. Dunaway, Donna K., and Master, Steve, “CMM®-Based Appraisal for Internal Process Improvement (CBA IPI) Version 1.2 Method Description,” Carnegie Mellon Software Engineering Institute Technical Report CMU/SEI-2001-TR-033, (November 2001). Dyson, Freeman, Disturbing the Universe, Harper & Row, (New York: 1979). Ebert, Christof, “Experiences with Colored Predicate-Transition Nets for Specifying and Prototyping Embedded Systems”, IEEE Transactions On Systems, Man, and Cybernetics—Part B: Cybernetics 28, no. 5, (October, 1998): 641-652. Eickelmann, Nancy S., et al., “An Empirical Study of Modifying the Fagan Inspection Process and the Resulting Main Effects and Interaction Effects Among Defects Found, Effort Required, Rate of Preparation and Inspection, Number of Team Members and Product 1 st Pass Quality,” Proceedings of the 27 th Annual NASA Goddard/IEEE Software Engineering Workshop, (December 2002). 207 Elm, Joseph P., “Understanding and Leveraging a Supplier’s CMMI® Efforts: A Guidebook for Acquirers,” Carnegie Mellon Software Engineering Institute Technical Report CMU/SEI-2007- TR-004, (2007). Eslinger, Suellen, “Software Acquisition Best Practices: Experiences from the Space Systems Domain”, Aerospace Report No. TR-2004(8550)-1, Proceedings of the Acquisition of Software-Intensive Systems Conference, (January 2003): 4; Internet; available from http://www.sei.cmu.edu/programs/acquisition-support/conf/2003-presentations/eslinger.pdf. Eslinger, Suellen, “Space System Software Testing: The New Standards,” Proceedings of the 23rd Aerospace Testing Seminar, 10-12 October 2006, The Aerospace Corporation, (2006). Fagan, M. E., “Design and Code Inspections to Reduce Errors in Program Development”, IBM Systems Journal 15, no. 3, (1976). Also reprinted in IBM Systems Journal 38, no’s 2 and 3, (1999). Feather, Martin S., “Towards a Unified Approach to the Representation of, and Reasoning with, Probabilistic Risk Information about Software and its System Interface,” IEEE Proceedings of the 15 th International Symposium on Software Reliability Engineering (ISSRE), (2004). Feynman, R. P., “ Personal Observations on Reliability of Shuttle,” Appendix F. 
in the Report of the PRESIDENTIAL COMMISSION on the Space Shuttle Challenger Accident 2; Internet; available from http://history.nasa.gov/rogersrep/v2appf.htm, last visited August 26, 2007. Feynman, Richard P., “WHAT DO YOU CARE WHAT OTHER PEOPLE THINK?” Further Adventures Of A Curious Character, Bantam Books (New York: 1989). Fields, R. E., et al., “A Task Centered Approach to Analysing Human Error Tolerance Requirements,” P. Zave, editor, Second IEEE International Symposium on Requirements Engineering (RE'95), (1995). Fischman, Lee, et al., “Inside SEER-SEM,” Crosstalk, (2005). Forrest, Jeff, “THE CHALLENGER SHUTTLE DISASTER: A Failure in Decision Support System and Human Factors Management,” Internet; available from http://frontpage.hypermall.com/jforrest/challenger/challenger_sts.htm. Fragola, Joseph R., “Space Shuttle Program Risk Management,” IEEE PROCEEDINGS of the Annual RELIABILITY and MAINTAINABILITY Symposium, (1996). Gibson, Diane L., et al., “Performance Results of CMMI®-Based Process Improvement,” Carnegie Mellon Software Engineering Institute Technical Report CMU/SEI-2006-TR-004, (August 2006). Gleick, James, “A Bug and a Crash: Sometimes a Bug Is More Than a Nuisance,” Internet; available from http://www.around.com/ariane.html. Goddard, Peter L., “Software FMEA Techniques,” IEEE PROCEEDINGS Annual RELIABILITY and MAINTAINABILITY Symposium (2000): 118. 208 Gowen, Lon D., et al., “Preliminary Hazard Analysis for Safety-Critical Software Systems,” in Proceedings of the Eleventh Annual International Phoenix Conference on Computers and Communications, (Scottsdale, AZ: 1992). Graham, Dorothy, “The Forgotten Phase”, Dr. Dobb’s Portal: The World of Software Development, (July 1 st , 2002): Internet, available from http://www.ddj.com/architect/184414873. Grechanik, Mark, and Perry, Dewayne E., “Analyzing Software Development as a Noncooperative Game,” in Sixth International Workshop on Economics-Driven Software Engineering Research (EDSER-6) W9L Workshop - 26th International Conference on Software Engineering, (Edinburgh, Scotland, UK: 2004). Groen, Frank J., et al., “QRAS – The Quantitative Risk Assessment System,” IEEE PROCEEDINGS Annual RELIABILITY and MAINTAINABILITY Symposium (2002). Hamel, Michael A., “Military Space Acquisition: Back to the Future,” High Frontier 2, no. 2. Hamlet, Dick, “Theory of System Reliability Based On Components,” Proceedings of the 23rd International Conference on Software Engineering, (2001). Hansen, L. Jane. Hosken, Robert W., and Pollock, Craig H., “Spacecraft Computer Systems,” in Space Mission Analysis and Design, 3 rd edition, ed. Wiley J. Larson, and James R. Wertz (El Segundo, CA: Microcosm Press; Dordrecht, The Netherlands: Kluwer Academic Publishers, 1999). Hansen, Mark D., “Survey of Available Software-Safety Analysis Techniques,” IEEE PROCEEDINGS of the Annual RELIABILITY AND MAINTAINABILITY Symposium, (1989). Hausken, Kjell, “Probabilistic Risk Analysis and Game Theory,” Risk Analysis 22, no. 1 (2002). Hayhurst, K.J. Veerhusen, D.S., Chilenski, J.J. Rierson, L.K., “A Practical Tutorial on Modified Condition/Decision Coverage,” NASA TM-2001-210876, NASA Langley Research Center, (2001). Hazzan, Orit, and Dubinsky, Yael, “Social Perspective of Software Development Methods: The Case of the Prisoner Dilemma and Extreme Programming,” Proceedings of XP'2005, (2005). Hecht, Herbert, “Reliability for Space Mission Planning,” in Space Mission Analysis and Design, 3 rd edition, ed. Wiley J. Larson, and James R. 
Wertz, Microcosm Press; Dordrecht, The Netherlands: Kluwer Academic Publishers, (El Segundo, CA: 1999). Hecht, Myron, Aleka McAdams, and Alexander Lam, “Use of Test Data for Integrated Hardware/Software Reliability and Availability Modeling of a Space Vehicle,” Proceedings of the 24th Aerospace Testing Seminar, 8-10 April 2008, by The Aerospace Corporation. Hecht, M., and Buettner, D. J., “A Software Anomaly Repository To Support Software Reliability Prediction”, Proceedings of the Systems and Software Technology Conference, (April 2005) ; Internet; available from http://www.sstc-online.org/Proceedings/2005/PDFFiles/MH814.pdf. 209 Hecht, Myron, and Buettner, Douglas, “Software Testing in Space Programs,” Crosslink Vol. 6, No. 3, (2005), Internet; available online at http://www.aero.org/publications/crosslink/fall2005/06.html. Hecht, Myron, Hecht, Herb, and An, Xuegao, “Use of Combined System Dependability and Software Reliability Growth Models”, International Journal of Reliability, Quality and Safety Engineering 9, no. 4, (December 2002). Hellman, Ziv, “Bargaining Set Solution Concepts in Dynamic Cooperative Games,” Munich Personal RePEc Archive (MPRA) (April 2008); Internet available on line at http://mpra.ub.uni- muenchen.de/8798/1/MPRA_paper_8798.pdf. Howden, W.E., "Functional Programming Testing," Technical Report, Dept. of Mathematics, University of Victoria, Victoria, B.C., Canada, DM 146 IR, (August 1978). IEEE, “IEEE Standard Classification for Software Anomalies,” Software Engineering Standards Committee of the IEEE Computer Society, IEEE Std 1044-1993, (1993). Isaacs, Rufus, Differential Games: A Mathematical Theory With Applications To Warfare And Pursuit, Control And Optimization, Dover Publications, (Mineola, NY: 1999, originally published by John Wiley and Sons, Inc. New York, 1965). Jacobson, Ivar, Booch, Grady, Rumbaugh, James, The Unified Software Development Process, Addison Wesley Longman, Inc, (1999). Jones, Capers, “Software Cost Estimating Methods for Large Projects©,” Crosstalk, (2005). Jones, Capers, “Software defect-removal efficiency,” in IEEE Computer 29, no. 4, (1996). JPL Special Review Board, Report on the Loss of the Mars Polar Lander and Deep Space 2 Missions; Jet Propulsion Laboratory, California Institute of Technology, JPL D-18709, 22 March 2000; Internet; available from ftp://ftp.hq.nasa.gov/pub/pao/reports/2000/2000_mpl_report_1.pdf. Kaner, C., “An introduction to Scenario-based Testing,” Florida Tech., June, 2003; Internet; available from http://www.testingeducation.org/articles/scenario_intro_ver4.pdf Keller, Ted, and Schneidewind, Norman F., “Successful Application of Software Reliability Engineering for the NASA Space Shuttle”, Proceedings of the Eighth International Symposium on Software Reliability Engineering, (1997). Ko, Sang-Pok, Sung, Hak-Kyung, and Lee, Kyung-Whan, “Study to Secure Reliability of Measurement Data through Application of Game Theory,” Proceedings of the 30 th EUROMICRO Conference (EUROMICRO’04), Vol. 00, (2004). Kruchten, Philippe B., “The 4+1 View Model of Architecture,” IEEE Software, (November, 1995). Kruse, R. L., and Ryba, A., Data Structures and Program Design In C++, Prentice-Hall, Inc., N.J. 07458, 1999. 210 Lakey, Peter B., and Neufelder, Ann Marie, System and Software Reliability Assurance Notebook, Rome Laboratory, (Rome, NY: 1997). Lawrence, J. Dennis, and Persons, Warren L. [preparers], “Survey of Industry Methods for Producing Highly Reliable Software,” U.S. 
Nuclear Regulatory Commission, Fission, and Energy Systems Safety Program, Lawrence Livermore National Laboratory, NUREG CR-6278UCRL-ID- 117524, (1994). Levenson, Nancy G., “A Systems-Theoretic Approach to Safety in Software-Intensive Systems,” in IEEE Transactions on Dependable and Secure Computing 1, no. 1, (2004). Leveson, Nancy G., “The Role of Software in Spacecraft Accidents”, Massachusetts Institute of Technology; unpublished; available from http://sunnyday.mit.edu/papers/jsr.pdf; Internet; accessed 5 May 2007, 3. Leveson, Nancy G., and Stolzy, Janice L., “Safety Analysis Using Petri Nets,” in IEEE Transactions on Software Engineering SE-13, no. 3, (1987). Li, Bin, et al., “Integrating Software into PRA,” Proceedings of the 14th International Symposium on Software Reliability Engineering (2003). Lindemann, Christoph, et al., “Numerical Methods for Reliability Evaluation of Markov Closed Fault- Tolerant Systems,” IEEE Transactions on Reliability 44, no. 4, (1995). Madachy, Ray, and Boehm, Barry, “Assessing Quality Processes with ODC COQUALMO,” proceedings of the International Conference on Software Process, (May 10: 2008); Internet presentation available on line at http://www.icsp-conferences.org/icsp2008/Presentations/ May%2010/Session%20A1/Assessing%20Quality%20Processes%20with%20ODC%20COQU ALMO%205.pdf. Madachy, Ray, Software Process Dynamics, Wiley-Interscience, (Hoboken, NJ: 2008). Madachy, Raymond J., “System dynamics modeling of an inspection-based process,” Proceedings of the 18th international conference on Software engineering, (1996). Madachy, Raymond J., A Software Project Dynamics Model For Process Cost, Schedule And Risk Assessment, Ph.D. Dissertation, Department of Industrial and Systems Engineering, USC, (December: 1994). Madeira, Henrique, Costa, D., and Vieira, M., “On the Emulation of Software Faults by Software Fault Injection,” Proceedings of the Intl. Conf. on Dependable Systems and Networks, (New York, NY: 2000). McDennid, J. A., et al., “Experience with the application of HAZOP to computer-based systems,” Proceedings of the Tenth Annual Conference on Computer Assurance (COMPASS), (1995). McGibbon, Thomas, “An Analysis of Two Formal Methods: VDM & Z”, DoD Data Analysis Center for Software report DACS-CRTA-97-1, (1997). 211 McHale, James, and Wall, Daniel S., “Mapping TSP to CMMI,” Carnegie Mellon Software Engineering Institute Technical Report CMU/SEI-2004-TR-014, (2005). MIT, Larch Home Page; Internet; available from http://www.sds.lcs.mit.edu/spd/larch/. Musa, John D., Software Reliability Engineering: More Reliable Software Faster and Cheaper 2 nd Edition, AuthorHouse (Bloomington, IN: 2004). Myerson, Roger B., GAME THEORY: Analysis of Conflict, Harvard University Press (Cambridge, MA, 1997). NASA Office of Safety and Mission Assurance, Formal Methods Specification And Verification Guidebook For Software And Computer Systems Volume I: Planning And Technology Insertion, NASA Technical Publication TP-98-208193, (1998). NASA SP-6105, "NASA Systems Engineering Handbook," NASA, (1995). NASA, “Mars Global Surveyor (MGS) Spacecraft Loss of Contact,” 13 April 2007; Internet; available from http://www.nasa.gov/pdf/174244main_mgs_white_paper_20070413.pdf. NASA, “Overview of the DART Mishap Investigation Results,” Internet; available from http://www.nasa.gov/pdf/148072main_DART_mishap_overview.pdf. NASA, Software Estimation, Internet: http://www.ceh.nasa.gov/webhelpfiles/Software_Estimation.htm. 
NASA, Formal Methods Specification and Verification Guidebook for Software and Computer Systems, Vol. I: Planning and Technology Insertion, [NASA/TP-98-208193], Release 2.0, National Aeronautics and Space Administration, Washington, DC, 1998. NASA-GB-A302, Software Formal Inspections Guidebook, NASA Office of Safety and Mission Assurance, (August 1993). O’Neill, Don, “Issues in Software Inspection,” in IEEE Software, (1997). Ogata, Katsuhiko, System Dynamics 4 th ed., Pearson Prentice Hall, (Upper Saddle River, NJ: 2004). Ou, Yong, and Dugan, Joanne Bechta, “Sensitivity Analysis of Modular Dynamic Fault Trees,” in Proceedings of the IEEE International Computer Performance and Dependability Symposium (2000). Packard, Michael H., and Zampino, Edward J., “Probabilistic Risk Assessment (PRA) Approach for the Next Generation Launch Technology (NGLT) Program Turbine-Based Combined Cycle (TBCC) Architecture 6 Launch Vehicle,” in PROCEEDINGS of the Annual RELIABILITY and MAINTAINABILITY Symposium, (2004): 604. Palm III, William J., System Dynamics, McGraw Hill Higher Education, (New York, NY: 2005). Papoulis, Athanasios, Probability, Random Variables and Stochastic Processes 3 rd edition, (New York: McGraw-Hill, Inc., 1991). 212 Parnas, David L., and Lawford, Mark, “The Role of Inspection in Software Quality Assurance,” in IEEE Transactions on Software Engineering 29, no. 8, (August 2003). Perera, Jeevan, and Holsomback, Jerry, “Use of Probabilistic Risk Assessments for the International Space Station Program,” Proceedings of the 2004 Aerospace Conference, (2004). Poore, J.H., and Trammel, C.J., “Engineering Practices for Statistical Testing,” Crosstalk, (1998); Internet; available from http://www.stsc.hill.af.mil/crosstalk/frames.asp?uri=1998/04/statistical.asp. Porter, Adam, and Votta, Lawrence, “What Makes Inspections Work?,” IEEE Software, (November/December 1997). Powersim Software AS, Powersim Studio™ Academic 2005 (6.00.3423.6) Service Release 6, Copyright© 1993-2006 Powersim Software AS (Product Code: PSSA-N030306-DRI##); Internet; available from http://www.powersim.com/main/resources/technical_resources/technical_support/. Puterman, Martin L., Markov Decision Processes: Discrete Stochastic Dynamic Programming, John Wily & Sons, Inc., (Hoboken, NJ: 1994, 2005). Ray, Justin, “Sea Launch malfunction blamed on software glitch”, Spaceflight Now, 30 March 2000; Internet; available from http://spaceflightnow.com/sealaunch/ico1/000330software.html. Rueda, Alice, and Pawlak, Mirek, “Pioneers of the Reliability Theories of the Past 50 Years,” Proceedings of the Reliability, Availability and Maintainability Symposium (RAMS), (2004). Rumbaugh, James, Jacobson, Ivar, and Booch, Grady, The Unified Modeling Language Reference Manual, Addison Wesley Longman, (Reading, MA: 1999). Sablynski, Raymond, and Pordon, Robert, “A Report on the Flight of Delta II’s Redundant Inertial Flight Control Assembly (RIFCA),” Proceedings of the 1998 Position Location and Navigation Symposium, 20-23 Apr 1998, by the IEEE, 286-293. Sandmann, Werner, “On Optimal Importance Sampling for Discrete-Time Markov Chains,” Proceedings of the Second International Conference on the Quantitative Evaluation of Systems (QEST’05), (2005). Sassenburg, Hans, “Design of a Methodology to Support Software Release Decisions: Do the Numbers Really Matter?,” Ph.D. Thesis, University of Groningen, (2005): 44-45. Shooman, M.L., "Program Testing," Software Engineering, McGraw Hill, Inc., (Singapore: 1983): 223- 295. 
Shu, Guoqiang, et al., “Validating objected-oriented prototype of real-time systems with timed automata,” IEEE Proceedings of the 13th International Workshop on Rapid System Prototyping, (2002): 99. 213 Singh, Satinder, Soni, Vishal, and Wellman, Michael P., “Computing Approximate Bayes-Nash Equilibria in Tree Games of Incomplete Information,” Proceedings of the 5th ACM conference on Electronic commerce, (2004). Sterman, John D., Business Dynamics: Systems Thinking and Modeling for a Complex World, McGraw- Hill, (2000): 5. Stokey, Nancy L., and Lucas, Robert E. Jr., with Prescott, Edward C., Recursive Methods in Economic Dynamics, Harvard University Press, (Cambridge, MA: 1989); 7. Straffin, Philip D., GAME THEORY and STRATEGY, The Mathematical Association of America, (Washington D.C.: 1993): 4-5. Strauss, Anselm, and Corbin, Juliet, Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory, SAGE Publications, (Thousand Oaks, CA: 1998). Tarrant, Charlie, and Crook, Jerry, “Modular rocket engine control software (MRECS),” AIAA/IEEE Proceedings of the 1997 Digital Avionics Systems Conference (DASC), vol. 2, (1997). Thibodeau, R., and Dodson, E.N., “Life Cycle Phase Interrelationships,” Journal of Systems and Software, Vol. 1, (1980). Tomayko, James E., and Hazzan, Orit, Human Aspects of Software Engineering, Laxmi Publications, (2005). Tribble, Alan C., et al., “Software Safety Analysis of a Flight Guidance System,” Proceedings of the 21 st Digital Avionics Systems Conference 2, (2002). Trivedi, Kishor S., et al., “Recent Advances in Modeling Response-Time Distributions in Real-Time Systems,” Proceedings of Recent Advances in Modeling Response-Time Distributions 91, no. 7, (2003). Turner, Richard, and Boehm, Barry, “People Factors in Software Management: Lessons From Comparing Agile and Plan-Driven Methods,” CrossTalk, (2003). Voas, J., Charron, F., McGraw, G., Miller, K., and Friedman, M., “Predicting how Badly “Good” Software can Behave,” IEEE Software 14, no. 4, (1997): 73-83. Walker, Paul, History of Game Theory: A Chronology of Game Theory; Internet; available from http://www.econ.canterbury.ac.nz/personal_pages/paul_walker/gt/hist.htm, last visited December 8 th , 2007, (October 2005). Welch, C., “Lessons Learned from Alternative Transportation Fuels: Modeling Transition Dynamics,” National Research Energy Laboratory Technical Report NREL/TP-540-39446, (February 2006). Wertz, James R., and Reinert, Richard P., “Mission Characterization,” in Space Mission Analysis and Design, 3 rd edition, ed. Wiley J. Larson, and James R. Wertz, Microcosm Press; Dordrecht, The Netherlands: Kluwer Academic Publishers, (El Segundo, CA: 1999). 214 Whitworth, Gary G., “Ground System Design and Sizing,” in Space Mission Analysis and Design, 3 rd edition, ed. Wiley J. Larson, and James R. Wertz Microcosm Press; Dordrecht, The Netherlands: Kluwer Academic Publishers, (El Segundo, CA: 1999). Winchester, Joe, “Software Testing Shouldn't Be Rocket Science,” JDJ - Java Developers Journal; Internet; available from http://java.sys-con.com/read/48176.htm, (2005). Woehr, Jack, “A Conversation with Glenn Reeves: Really remote debugging for real-time systems,” Dr. Dobbs Journal, (November 1999); Internet; available from http://www.ddj.com/184411097. Wong, Yuk Kuen, and Wilson, David, “Exploring the Relationship between Experience and Group Performance in Software Review,” Proceedings of the Tenth Asia-Pacific Software Engineering Conference, (2003). 
Wood, Alan, “Predicting Software Reliability,” IEEE Computer 29, no. 11, (1996). Zimmerman, Marc, et al., “Making Formal Methods Practical,” Proceedings of the 19 th Digital Avionics Systems Conference 1, (2000). 215 A P P E N D I X : A – S O F T W A R E M O D E L S A N D M E T H O D S A.1 Incremental Model The incremental model uses the same phases as the waterfall model, but stages the addition of new functional capability and ‘bug fixes’ over the development life cycle [1]. Hence, the various software increments can be in any specific development phase, running in parallel, all the while adding new functionality and iterating defect fixes in a timeline meeting the system’s development schedule [2]. Software development thus undergoes continuous iteration, and incremental development continues until the product meets all of its requirements, and some specific quality gate criteria in the software intensive system. A.2 Transform Model The transform model’s approach uses a formal specification and assumes a capability to automatically convert this specification into software code [5]. However, the method has encountered difficulties since the specification transformation capability only exists for limited application domains and still shares difficulties encountered by the evolutionary model [6]. A.3 Spiral Model Based on a risk-driven approach rather than a document/code driven approach, the original spiral model (Figure 72) is a process that encompasses previous models, providing guidance on which model best fits a specific software development situation [7]. 216 Figure 72: The Spiral Model The WinWin spiral model, an extension to the original spiral model, was introduced to handle some difficulties with determining where the elaborated objectives, constraints, and alternatives come from [8]. Based on Theory-W, the WinWin spiral model resolved this by adding three activities to the front-end of each spiral: the first is to identify key stakeholders, the second is to identify their win conditions, and the final is a win-win negotiation reconciliation step [8]. The Spiral 2005 model is a recent evolution of the WinWin spiral model that not only incorporates the life cycle anchor points for concurrent system/software engineering to replace the traditional Department of Defense (DoD) milestone reviews such as the PDR or the CDR, but also provides a scalable process model that incorporates agile methods for very large Systems of Systems concepts with > 10 million lines of code [9]. A.4 Agile Process Model The agile process is based on rapid incremental-delivery, and quick product evolutions that advocate a more modular lightweight process than traditional models [10]. Agile processes are best applied to small development teams with premium people (with the occasional successful example with larger teams and teams that have junior people) using process frameworks. These projects derive much of their agility from the implicit knowledge of the team and a close interaction with the customer [11] 217 [12]. The process framework is a collaboration layer that provides functionality that guides the individual developer software process, supports planning and execution of the software process, metrics gathering and organization at various levels, metrics visualization and controls security [12]. 
A.7 Earned Value Management (EVM)

Earned Value Management (EVM) is used to manage cost and schedule on large defense programs. EVM computes efficiencies as the Cost Performance Index (CPI) and the Schedule Performance Index (SPI), which are based on the Budgeted Cost of Work Scheduled (BCWS), the Budgeted Cost of Work Performed (BCWP), and the Actual Cost of Work Performed (ACWP). Variances are also computed from these values, as well as from the Budget At Completion (BAC) and the Estimate At Completion (EAC) [21]. For the performance indices, favorable values are > 1.0, while unfavorable values are < 1.0. These are calculated as [21]

Cost Performance Index (CPI) = BCWP / ACWP    Equation A-4
Schedule Performance Index (SPI) = BCWP / BCWS    Equation A-5

For the variances, favorable values are positive, while unfavorable values are negative; they are calculated from [21]

Cost Variance (CV) = BCWP - ACWP    Equation A-6
Schedule Variance (SV) = BCWP - BCWS    Equation A-7
Variance at Completion (VAC) = BAC - EAC    Equation A-8
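The indices and variances above reduce to simple arithmetic. The following minimal Python sketch computes them for a hypothetical set of BCWS, BCWP, ACWP, BAC, and EAC values; the figures are illustrative only and are not drawn from any program discussed in this work.

```python
# Minimal sketch of the EVM indices and variances in Equations A-4 through A-8.
# The BCWS/BCWP/ACWP/BAC/EAC figures below are hypothetical.

def evm_summary(bcws, bcwp, acwp, bac, eac):
    """Return the EVM performance indices and variances."""
    return {
        "CPI": bcwp / acwp,   # > 1.0 is favorable
        "SPI": bcwp / bcws,   # > 1.0 is favorable
        "CV":  bcwp - acwp,   # positive is favorable
        "SV":  bcwp - bcws,   # positive is favorable
        "VAC": bac - eac,     # positive is favorable
    }

if __name__ == "__main__":
    results = evm_summary(bcws=1_000, bcwp=900, acwp=1_100,
                          bac=10_000, eac=11_500)
    for name, value in results.items():
        print(f"{name}: {value:,.2f}")
```

For these hypothetical numbers, CPI is approximately 0.82 and SPI is 0.90, with negative CV, SV, and VAC, all indicating unfavorable cost and schedule performance.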
A.8 Software Risk Management

Software risk management is the process by which schedule, cost, or technical risks to software development are assessed and controlled [22]. Table 25 contains a brief description of the various steps used in software risk management (table contents are drawn from [22] and [23]).

Table 25: Software Risk Management Steps
Risk Management | Mitigation Activity | Steps | Description
Risk Assessment | Risk Identification | Checklists | The utilization of lists to identify risk 'items', such as the top-10 risks (below), which can include additional items from domain experts.
Risk Assessment | Risk Identification | Decision Driver Analysis | The analysis of those items that provide decision makers with the information required to give assurance that a risk is being realized.
Risk Assessment | Risk Identification | Assumption Analysis | The comparison of assumptions against experience.
Risk Assessment | Risk Identification | Decomposition | The careful breakdown of the system to identify risk.
Risk Assessment | Risk Analysis | Performance Models | The use of models to analyze risk from potential performance shortfalls.
Risk Assessment | Risk Analysis | Cost Models | The use of cost models to analyze risk from potential funding and schedule shortfalls.
Risk Assessment | Risk Analysis | Network Analysis | The analysis of risk from potential network shortfalls.
Risk Assessment | Risk Analysis | Decision Analysis | The analysis of risk item effects using techniques like statistical decision analysis.
Risk Assessment | Risk Analysis | Quality Factor Analysis | The analysis of quality factors like dependability, reliability, availability, maintainability (DRAM) and security.
Risk Assessment | Risk Prioritization | Risk Exposure | The ranking of risk items using their risk exposure, which equals the probability of an unfavorable outcome times the consequence of that outcome (typically using a grid graph) to prioritize each item; a numerical sketch follows this table.
Risk Assessment | Risk Prioritization | Risk Leverage | The relative cost-benefit ratio of the various risk reduction activities (calculated as the risk exposure before minus the risk exposure after, divided by the cost of the activity).
Risk Assessment | Risk Prioritization | Compound Risk Reduction | The prioritized reduction of risk items due to their compound interactions.
Risk Control | Management Planning | Risk Reduction | The use of techniques that reduce risk, such as software peer reviews.
Risk Control | Management Planning | Buying Information | The use of financial reserves to fund the acquisition of information about the risk item (e.g., use of quality assurance personnel to attend all software peer reviews to ensure thoroughness and adherence to process guidelines).
Risk Control | Management Planning | Risk Avoidance | The proactive management technique of avoiding a particular risk item altogether (such as avoiding perceived staffing risks for a particular component through the reuse of legacy software items).
Risk Control | Management Planning | Risk Transfer | The proactive management technique of risk shifting (such as shifting a code development task to a more capable software engineer).
Risk Control | Management Planning | Risk Element Planning | The use of risk-management planning for each software element.
Risk Control | Management Planning | Risk Plan Integration | The coordination of individual risk-element plans into an integrated comprehensive plan.
Risk Control | Risk Resolution | Prototypes | The elimination of a risk item through the use of a comprehensive prototype.
Risk Control | Risk Resolution | Simulations | The elimination of a risk item through the use of comprehensive simulations.
Risk Control | Risk Resolution | Benchmarks | The elimination of a risk item through the use of benchmark timing and throughput measurements.
Risk Control | Risk Resolution | Analyses | The elimination of risk items through the use of detailed mission analyses.
Risk Control | Risk Resolution | Staffing | The elimination of risk items through the use of critical-staff agreements and various staff retention techniques for key personnel.
Risk Control | Risk Monitoring | Milestone Tracking | The tracking of progress via milestones or inchstones at periodic management reviews.
Risk Control | Risk Monitoring | Top-10 Tracking #1 [24]: Personnel shortfalls | The controlled staffing of projects with the best talent, matching the job to the available talent, fostering a top-notch software development culture (e.g., using team-building techniques), reaching agreements with key personnel, and cross-training on various tasks.
Risk Control | Risk Monitoring | Top-10 Tracking #2: Unrealistic schedules and budget | The use of detailed multi-source cost and schedule estimation (e.g., bottoms-up and top-down cost models), design to cost, incremental development, software reuse, and periodic requirements scrubs.
Risk Control | Risk Monitoring | Top-10 Tracking #3: Developing the wrong software functions | The use of organization analysis (having the right people for the job), mission analysis, operations-concept formulation, user surveys and user participation, prototyping, early users' manuals, off-nominal performance analysis, and quality-factor analysis.
Risk Control | Risk Monitoring | Top-10 Tracking #4: Developing the wrong user interface | The use of prototypes, use cases and user scenarios, task analysis, and end-user participation.
Risk Control | Risk Monitoring | Top-10 Tracking #5: 'Gold plating' | The control of functionality to ensure that no more is put in than is needed to meet the requirements, through the use of periodic requirements scrubbing, prototypes, cost-benefit analysis, and designing to cost.
Risk Control | Risk Monitoring | Top-10 Tracking #6: Continuing stream of requirement changes | The use of high change thresholds, encapsulation, and incremental development (deferring changes to later increments).
Risk Control | Risk Monitoring | Top-10 Tracking #7: Shortfalls in externally furnished components | The use of benchmarking, inspections, reference checking, and compatibility analysis.
Risk Control | Risk Monitoring | Top-10 Tracking #8: Shortfalls in externally performed tasks | The use of reference checking, pre-award audits, award-fee contracts, competitive design or prototyping, and team building.
Risk Control | Risk Monitoring | Top-10 Tracking #9: Real-time performance shortfalls | The use of simulations, benchmarking, modeling and prototyping, instrumentation of code, and tuning.
Risk Control | Risk Monitoring | Top-10 Tracking #10: Straining computer-science capabilities | The use of technical analysis, cost-benefit analysis, prototyping, and reference checking.
Risk Control | Risk Monitoring | Risk Reassessment | The process of continuous risk-item reassessment.
Risk Control | Risk Monitoring | Corrective Action | The application of appropriate risk-controlling techniques to eliminate or reduce risk items.
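The risk exposure and risk leverage entries in Table 25 reduce to simple arithmetic. The following minimal Python sketch evaluates both for a hypothetical risk item; the probabilities, consequence values, and mitigation cost are illustrative assumptions only.

```python
# Minimal sketch of the risk exposure and risk leverage calculations described in
# Table 25. The probabilities, consequences, and costs below are hypothetical.

def risk_exposure(probability, consequence):
    """Risk exposure = probability of an unfavorable outcome times its consequence."""
    return probability * consequence

def risk_leverage(re_before, re_after, mitigation_cost):
    """Relative cost-benefit of a risk-reduction activity."""
    return (re_before - re_after) / mitigation_cost

if __name__ == "__main__":
    re_before = risk_exposure(0.30, 1_000_000)  # e.g., 30% chance of a $1M impact
    re_after = risk_exposure(0.05, 1_000_000)   # residual exposure after mitigation
    print(f"Exposure before: {re_before:,.0f}")
    print(f"Exposure after:  {re_after:,.0f}")
    print(f"Leverage:        {risk_leverage(re_before, re_after, 50_000):.1f}")
```

In this hypothetical case the leverage is 5.0; a leverage greater than one indicates that the mitigation activity reduces exposure by more than it costs.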
A p p e n d i x A E n d n o t e s

[1] Barry Boehm, Software Engineering Economics, 41-45.
[2] Ibid.
[3] Ibid., 656.
[4] Ibid., 657.
[5] Barry W. Boehm, “A Spiral Model of Software Development”, 63.
[6] Ibid., 64.
[7] Ibid., 64-65.
[8] Barry Boehm et al., “Using the WinWin Spiral Model: A Case Study,” 33-34. [9] Barry Boehm, and Jo Ann Lane, “21 st Century Processes for Acquiring 21 st Century Software- Intensive Systems of Systems”, Crosstalk, (May 2006): 4-9. [10] Mikio Aoyama, “Agile Software Process and Its Experience,” IEEE Proceedings of the 20th International Conference on Software Engineering, (1998): 4. [11] Barry Boehm, “Get Ready for Agile Methods, with Care,” IEEE Computer 35, no. 1 (January 2002): 65-66. [12] Mikio Aoyama, “Agile Software Process and Its Experience,” 4, 7-8. [13] Ivar Jacobson et al., The Unified Software Development Process, xx-xxvi. [14] Ibid., 1. [15] Ibid., 7. [16] Ibid. [17] Ibid., 15-16. [18] Russ Bunting et al., “Interdisciplinary Influences in Software Engineering Practices,” in Proceedings of the 10th International Workshop on Software Technology and Engineering Practice, (2002): 62. [19] Capers Jones, Estimating Software Costs, 28. [20] Lee Fischman et al., “Inside SEER-SEM,” 26-27. [21] Defense Acquisition University, Earned Value Management (EVM) Community Gold Card quick link; Internet https://acc.dau.mil/evm 223 [22] Barry W. Boehm, Software Risk Management, IEEE Computer Society Press, (Washington, D.C.: 1989): 1-16. [23] Barry W. Boehm, “Software Risk Management: Principles and Practices,” IEEE Software, (January 1991): 35-36. [24] Barry W. Boehm, “A Spiral Model of Software Development,” 70. 224 A P P E N D I X : B – R A W Q U A N T I T A T I V E D A T A B.1 Introduction This appendix contains raw data in releasable form from projects-A and C, which were the focus of the dissertation. B.2 Peer Review Data Table 26: Project-A Requirements and Design Peer Review Metrics Staff Week Minor Major # of Reviews 0 0 0 0 171 215 10 8 175 17 0 2 180 20 0 3 184 6 0 1 188 2 0 2 193 43 0 4 197 12 0 1 201 0 0 0 206 6 0 2 210 28 0 3 215 0 0 0 219 0 0 0 223 0 0 0 227 0 0 0 232 0 0 0 236 0 0 0 240 7 0 1 245 19 0 1 249 0 0 0 254 0 0 0 258 30 0 1 262 56 0 3 267 0 0 0 271 0 0 0 275 82 0 4 279 99 0 5 225 Table 27: Project-A Code Peer Review Metrics Staff Week Minor Major KSLOC # of Reviews 0 0 0 0 171 37 2 4.2 6 175 35 3 20.9 4 180 31 2 3.5 4 184 0 0 0 0 188 19 5 1.9 2 193 0 0 0 0 197 296 18 8.9 12 201 126 4 5.1 5 206 85 0 2.1 3 210 83 0 2.3 2 215 94 2 3.1 3 219 0 0 0 0 223 335 6 10.7 7 227 117 5 7.8 5 232 0 0 0 0 236 10 0 1.4 1 240 23 1 4.8 1 245 0 0 0 0 249 0 0 0 0 254 0 0 0 0 258 0 0 0 0 262 0 0 0 0 267 0 0 0 0 271 0 0 0 0 275 0 0 0 0 279 0 0 0 0 Table 28: Project-A Unit Test Peer Review Metrics Staff Week Minor Major # of Reviews 0 0 0 0 171 0 0 0 175 0 0 0 180 0 0 0 184 0 0 0 188 0 0 0 193 0 0 0 197 0 8 3 226 Table 28: Continued Staff Week Minor Major # of Reviews 201 0 0 0 206 0 0 0 210 55 4 12 215 8 0 4 219 16 3 12 223 8 4 10 227 4 1 9 232 0 0 5 236 0 0 1 240 4 1 3 245 0 0 0 249 0 0 0 254 0 0 4 258 0 0 0 262 0 0 0 267 0 0 0 271 0 0 0 275 0 0 0 279 0 0 0 Table 29: Project-A Qualification Test Peer Review Metrics Staff Week Minor Major # of Reviews 0 0 0 0 171 0 0 0 175 0 0 0 180 0 0 0 184 0 0 0 188 0 0 0 193 0 0 0 197 0 0 3 201 0 0 4 206 0 0 31 210 0 0 8 215 0 0 15 219 0 0 0 223 0 0 0 227 0 0 0 232 0 0 0 227 Table 29: Continued Staff Week Minor Major # of Reviews 236 0 0 0 240 0 0 0 245 0 0 0 249 0 0 0 254 0 0 0 258 0 0 0 262 0 0 0 267 0 0 0 271 0 0 0 275 0 0 0 279 0 0 0 Table 30: Project- C Accumulated (All) Peer Review Metrics Staff Day Weeks Accum Minors Accum Majors Total 0 0 0 0 0 15 2 0 0 0 44 6 1 4 5 74 10 10 6 8 104 14 10 6 8 135 19 37 41 20 165 23 37 41 65 196 28 73 50 112 227 
32 95 50 117 257 36 105 50 121 288 41 148 50 128 318 45 200 50 134 349 49 230 50 157 380 54 230 50 157 409 58 419 50 175 439 62 532 50 180 469 67 617 50 181 500 71 617 50 181 530 75 664 123 186 561 80 677 133 189 592 84 775 233 196 622 88 790 244 197 653 93 878 264 198 683 97 878 264 198 228 Table 30: Continued Staff Day Weeks Accum Minors Accum Majors Total 714 102 1048 375 204 745 106 1198 415 211 774 110 1238 426 215 804 114 1297 439 221 834 119 1467 489 234 865 123 1562 507 242 895 127 1666 520 247 926 132 1909 665 250 957 136 1909 665 250 987 141 1909 665 250 1018 145 2002 675 253 1048 149 2025 750 254 1079 154 2030 751 255 1110 158 2030 751 255 1140 162 2030 751 255 1170 167 2030 751 255 1200 171 2030 751 255 1231 175 2070 871 264 1261 180 2435 926 280 1292 184 2525 966 300 1323 189 3215 1066 353 1353 193 4125 1276 370 1384 197 4150 1311 372 1414 202 4225 1316 381 1445 206 4550 1321 405 1476 210 5020 1411 450 1505 215 5285 1536 480 1535 219 5795 1634 529 1565 223 6025 1764 563 1596 228 6413 1871 600 1626 232 6451 1881 602 1657 236 6487 1883 606 1688 241 6682 1900 610 1718 245 6904 1913 613 1749 249 7124 1915 619 1779 254 7359 1916 629 1810 258 7479 1916 635 1841 263 7744 1951 654 1870 267 8304 1952 679 229 Table 30: Continued Staff Day Weeks Accum Minors Accum Majors Total 1900 271 9434 1952 719 1930 275 9814 1952 735 1961 280 9894 2252 747 1991 284 9954 2252 755 2022 288 9964 2252 757 2053 293 10082 2262 772 2083 297 10122 2307 784 2114 302 10152 2317 788 2144 306 10234 2327 794 2175 310 10326 2332 808 B.3 Software Defect Repository (SDR) Data Table 31: Project- A All Defect Data # Sev Product Effectivity Day # Sev Product Effectivity Day 1 3 SRS SW Req 1 23 3 SRS SW Req 2 2 3 SRS SW Req 1 24 3 SRS SW Req 2 3 3 SRS SW Req 2 25 3 SRS SW Req 2 4 3 SDD SW Req 2 26 3 SRS SW Req 2 5 3 SRS SW Req 2 27 3 Test Plan SW Req 2 6 3 SRS Sys Req 2 28 3 SRS SW Req 3 7 3 SRS SW Req 2 29 3 SRS SW Req 3 8 3 SRS SW Req 2 30 3 SRS SW Req 3 9 3 SRS SW Req 2 31 3 SRS SW Req 3 10 3 SRS SW Req 2 32 3 SRS SW Req 3 11 3 SRS SW Req 2 33 3 SRS SW Req 3 12 3 SRS SW Req 2 34 3 SRS SW Req 3 13 3 SRS SW Req 2 35 3 SRS SW Req 3 14 3 SRS SW Req 2 36 3 SRS SW Req 3 15 3 SRS SW Req 2 37 3 SRS SW Req 3 16 3 SRS SW Req 2 38 3 SRS SW Req 3 17 3 SRS SW Req 2 39 3 SRS SW Req 3 18 3 SRS SW Req 2 40 3 SRS SW Req 3 19 3 SRS SW Req 2 41 3 SRS SW Req 3 20 3 SRS SW Req 2 42 3 SRS SW Req 3 21 3 SRS SW Req 2 43 3 SRS SW Req 3 22 3 SRS SW Req 2 45 3 SRS SW Req 3 230 Table 31: Continued # Sev Product Effectivity Day # Sev Product Effectivity Day 46 3 SRS SW Req 3 84 3 SRS SW Req 5 47 3 SRS SW Req 3 85 3 SRS SW Req 5 48 3 SRS SW Req 3 86 3 SRS SW Req 5 49 3 SRS SW Req 3 87 3 SRS SW Req 14 50 3 SRS SW Req 3 88 3 CODE Unit Int/Test 16 51 3 SRS SW Req 3 89 3 SRS SW Req 20 52 3 SRS SW Req 3 90 3 SRS SW Req 20 53 3 SRS SW Req 3 91 3 SRS Code/Unit Test 29 54 3 SRS SW Req 3 92 2 CODE Code/Unit Test 30 55 3 SRS SW Req 3 93 3 SDD SW Req 5 56 3 SRS SW Req 3 94 3 Test Plan SW Req 5 57 3 SRS SW Req 3 95 1 other Code/Unit Test 51 58 3 SRS SW Req 3 96 3 SRS SW Req 61 59 3 SRS SW Req 3 97 3 SRS Code/Unit Test 61 60 3 SRS SW Req 3 98 3 SRS Code/Unit Test 63 61 3 SRS SW Req 3 99 3 ADD SW Req 77 62 3 SRS SW Req 3 100 3 SRS SW Req 79 63 3 SRS SW Req 3 101 3 SRS SW Req 89 64 3 SRS SW Req 3 102 3 SRS SW Req 90 65 3 SRS SW Req 3 103 3 SRS SW Req 90 66 3 SRS SW Req 3 104 3 SRS SW Req 90 67 3 SRS SW Req 3 105 3 SRS SW Req 90 68 3 SRS SW Req 3 106 1 SRS SW Req 90 70 3 SRS SW Req 3 107 3 SRS SW Req 90 71 3 SRS SW Req 3 108 3 SRS SW Req 90 72 3 SRS SW Req 4 
109 3 SRS SW Req 90 73 3 SRS SW Req 4 110 3 SRS SW Req 90 74 3 SRS SW Req 4 111 3 SRS SW Req 90 75 3 SRS SW Req 5 112 3 SRS SW Req 90 76 3 SRS SW Req 5 113 3 SRS SW Req 90 77 3 SRS SW Req 5 114 3 SRS SW Req 90 78 3 SRS SW Req 5 115 3 SRS SW Req 90 79 3 SRS SW Req 5 116 3 SRS SW Req 96 80 3 SRS SW Req 5 117 3 SRS SW Qual Test 124 81 3 SRS SW Req 5 118 3 SRS SW Req 147 82 3 SRS SW Req 5 119 4 SRS SW Arch. 147 83 3 SRS SW Req 5 120 4 SRS SW Req 147 231 Table 31: Continued # Sev Product Effectivity Day # Sev Product Effectivity Day 121 3 SRS SW Req 147 158 4 CODE Unit Int/Test 225 122 3 SRS SW Req 147 159 4 CODE Unit Int/Test 225 123 3 SRS SW Req 147 160 3 SRS Code/Unit Test 226 124 3 SRS SW Req 147 161 3 CODE Code/Unit Test 232 125 3 SRS SW Req 147 162 4 CODE Unit Int/Test 236 126 5 SRS SW Req 147 163 3 SRS SW Qual Test 237 127 4 SRS SW Req 147 164 3 SDD SW Qual Test 237 128 3 SRS SW Req 147 165 3 SRS Sys Req 237 129 4 SRS SW Req 147 166 3 SDD Sys Req 237 130 3 SRS SW Req 147 167 3 SDD Sys Req 237 131 2 SRS SW Req 147 168 3 SRS SW Req 245 132 3 SRS SW Req 147 169 3 CODE HW/SW Integ 268 133 2 SRS SW Req 147 170 3 SRS SW Req 271 134 3 SRS SW Req 147 171 3 CODE HW/SW Integ 280 135 3 SRS SW Req 147 172 3 CODE SW Qual Test 281 136 3 CODE Unit Int/Test 150 173 3 SRS SW Qual Test 286 137 3 SRS SW Qual Test 162 174 2 CODE Unit Int/Test 289 138 2 SRS SW Req 176 175 3 SRS SW Req 289 139 3 other Code/Unit Test 181 176 3 CODE SW Qual Test 299 140 3 SDD Sys Design 184 177 3 SRS Unit Int/Test 300 141 3 SRS SW Qual Test 188 178 4 SRS HW/SW Integ 300 142 3 SRS SW Qual Test 188 179 4 CODE HW/SW Integ 300 143 3 SRS SW Qual Test 188 180 4 CODE System Test 306 144 3 SRS SW Qual Test 188 181 3 CODE HW/SW Integ 308 145 3 SRS SW Qual Test 188 182 4 CODE SW Qual Test 310 146 2 CODE Unit Int/Test 189 183 3 CODE SW Qual Test 310 147 3 other SW Qual Test 191 184 3 CODE SW Qual Test 310 148 2 CODE Unit Int/Test 197 185 3 CODE SW Qual Test 310 149 3 CODE SW Qual Test 199 186 3 CODE Unk 313 150 3 SDD Code/Unit Test 202 187 4 CODE HW/SW Integ 313 151 3 SDD Code/Unit Test 203 188 3 CODE HW/SW Integ 313 152 3 SRS Unit Int/Test 209 189 4 SDD HW/SW Integ 313 153 3 SRS SW Qual Test 215 190 4 CODE Unit Int/Test 316 154 3 SRS HW/SW Integ 217 191 3 CODE Code/Unit Test 316 155 3 other SW Qual Test 219 192 3 CODE System Test 320 156 4 CODE Unit Int/Test 225 193 3 CODE SW Qual Test 321 157 4 CODE Unit Int/Test 225 194 5 CODE HW/SW Integ 322 232 Table 31: Continued # Sev Product Effectivity Day # Sev Product Effectivity Day 195 4 CODE HW/SW Integ 325 232 4 CODE HW/SW Integ 344 196 4 CODE Code/Unit Test 327 233 3 kernel HW/SW Integ 348 197 3 CODE HW/SW Integ 334 234 3 CODE Sys Req 349 198 3 CODE Code/Unit Test 335 235 3 CODE Sys Req 349 199 4 CODE Unit Int/Test 335 236 3 CODE SW Qual Test 349 200 3 CODE Unit Int/Test 335 237 3 CODE SW Qual Test 350 201 3 CODE Unit Int/Test 335 238 5 CODE HW/SW Integ 351 202 4 CODE Unit Int/Test 335 239 3 CODE HW/SW Integ 352 203 3 CODE Unit Int/Test 335 240 3 CODE Unit Int/Test 357 204 3 CODE Unit Int/Test 335 241 3 CODE Unit Int/Test 357 205 3 CODE Unit Int/Test 335 242 3 CODE HW/SW Integ 357 206 3 CODE Unit Int/Test 335 243 1 CODE HW/SW Integ 357 207 3 CODE Unit Int/Test 335 244 5 CODE Unit Int/Test 362 208 4 CODE Unit Int/Test 335 245 3 CODE Unit Int/Test 362 209 4 CODE Unit Int/Test 335 246 3 SRS Sys Req 362 210 4 CODE Unit Int/Test 335 247 5 CODE HW/SW Integ 362 211 3 CODE Unit Int/Test 335 248 3 CODE HW/SW Integ 362 212 4 CODE Unit Int/Test 335 249 3 CODE SW Qual Test 364 213 4 CODE Unit Int/Test 
335 250 3 CODE SW Qual Test 365 214 3 CODE Unit Int/Test 335 251 3 CODE SW Qual Test 365 215 3 other Unit Int/Test 335 252 3 CODE Unit Int/Test 370 216 3 other Unit Int/Test 335 253 4 CODE Code/Unit Test 373 217 5 CODE Unit Int/Test 335 254 5 CODE Code/Unit Test 373 218 3 CODE HW/SW Integ 335 255 3 CODE Code/Unit Test 373 219 4 CODE HW/SW Integ 335 256 3 CODE Code/Unit Test 375 220 3 CODE HW/SW Integ 335 257 3 CODE Code/Unit Test 375 221 3 CODE HW/SW Integ 335 258 3 CODE Code/Unit Test 375 222 5 CODE Code/Unit Test 336 259 3 CODE Code/Unit Test 375 223 3 CODE HW/SW Integ 338 260 3 CODE Code/Unit Test 375 224 5 CODE HW/SW Integ 338 261 3 CODE Code/Unit Test 375 225 3 CODE Unit Int/Test 341 262 3 CODE Code/Unit Test 375 226 3 CODE SW Qual Test 341 263 3 CODE Code/Unit Test 375 227 3 CODE SW Design 342 264 3 CODE Code/Unit Test 375 228 2 CODE SW Design 342 265 3 CODE Code/Unit Test 375 229 4 CODE SW Design 342 266 3 CODE Code/Unit Test 375 230 3 SRS SW Qual Test 342 267 3 SRS SW Qual Test 376 231 3 SRS Unit Int/Test 343 268 3 CODE HW/SW Integ 378 233 Table 31: Continued # Sev Product Effectivity Day # Sev Product Effectivity Day 269 2 SRS SW Qual Test 379 306 3 CODE System Test 411 270 3 SRS SW Qual Test 380 307 3 other SW Qual Test 411 271 3 CODE Unit Int/Test 383 308 3 CODE Code/Unit Test 415 272 3 CODE SW Design 383 309 4 CODE Prep for Use 415 273 3 CODE HW/SW Integ 383 310 3 CODE Code/Unit Test 418 274 2 CODE Code/Unit Test 384 311 3 CODE HW/SW Integ 418 275 3 CODE HW/SW Integ 384 312 2 CODE HW/SW Integ 420 276 4 other Unit Int/Test 385 313 3 CODE HW/SW Integ 420 277 4 CODE HW/SW Integ 385 314 3 CODE HW/SW Integ 421 278 2 CODE HW/SW Integ 386 315 3 CODE Sys Req 421 279 3 CODE SW Qual Test 386 316 4 CODE HW/SW Integ 421 280 5 CODE Unit Int/Test 387 317 3 CODE HW/SW Integ 421 281 3 CODE SW Qual Test 387 318 3 CODE SW Qual Test 422 282 3 CODE SW Design 390 319 3 SRS SW Qual Test 422 283 3 CODE Sys Req 390 320 3 Test Plan SW Qual Test 422 284 3 CODE SW Qual Test 390 321 3 CODE HW/SW Integ 425 285 4 CODE HW/SW Integ 391 322 3 CODE HW/SW Integ 425 286 2 CODE Code/Unit Test 393 323 3 CODE SW Qual Test 427 287 1 CODE Code/Unit Test 393 324 3 CODE SW Qual Test 427 288 3 CODE SW Qual Test 397 325 3 CODE Unit Int/Test 428 289 3 CODE HW/SW Integ 397 326 3 CODE System Test 428 290 3 CODE HW/SW Integ 397 327 5 SRS Unit Int/Test 429 291 5 CODE HW/SW Integ 398 328 3 SRS SW Qual Test 432 292 3 CODE SW Design 400 329 3 CODE HW/SW Integ 432 293 4 CODE HW/SW Integ 400 330 3 CODE HW/SW Integ 435 294 3 SRS HW/SW Integ 400 331 5 CODE HW/SW Integ 436 295 3 SDD HW/SW Integ 400 332 3 CODE Unit Int/Test 440 296 3 SDD HW/SW Integ 400 333 3 CODE Unit Int/Test 440 297 4 CODE HW/SW Integ 400 334 3 CODE Unit Int/Test 440 298 3 CODE System Test 404 335 3 SRS HW/SW Integ 440 299 3 CODE SW Qual Test 404 336 3 CODE Unit Int/Test 441 300 5 CODE HW/SW Integ 404 337 2 CODE HW/SW Integ 442 301 2 CODE HW/SW Integ 406 338 3 CODE HW/SW Integ 446 302 3 CODE HW/SW Integ 406 339 3 CODE HW/SW Integ 446 303 3 SRS SW Design 408 340 4 CODE HW/SW Integ 446 304 2 CODE Code/Unit Test 411 341 3 CODE Unit Int/Test 447 305 3 CODE HW/SW Integ 411 342 5 CODE HW/SW Integ 447 234 Table 31: Continued # Sev Product Effectivity Day # Sev Product Effectivity Day 343 4 CODE SW Qual Test 449 380 3 CODE Code/Unit Test 513 344 5 CODE Unit Int/Test 453 381 3 CODE Code/Unit Test 516 345 3 CODE Unit Int/Test 454 382 3 CODE HW/SW Integ 516 346 4 CODE HW/SW Integ 454 383 3 CODE HW/SW Integ 518 347 3 CODE HW/SW Integ 455 384 3 CODE Unit Int/Test 518 348 2 CODE 
Code/Unit Test 455 385 3 CODE HW/SW Integ 523 349 3 CODE SW Req 455 386 3 SDD HW/SW Integ 523 350 3 CODE HW/SW Integ 456 387 2 CODE HW/SW Integ 524 351 2 CODE HW/SW Integ 458 388 3 CODE HW/SW Integ 524 352 3 CODE Code/Unit Test 460 389 3 CODE HW/SW Integ 524 353 3 CODE Unk 460 390 3 CODE SW Qual Test 530 354 3 CODE Code/Unit Test 461 391 4 CODE Code/Unit Test 530 355 3 CODE HW/SW Integ 462 392 4 CODE HW/SW Integ 531 356 3 CODE HW/SW Integ 463 393 3 CODE HW/SW Integ 532 357 5 CODE Code/Unit Test 474 394 3 CODE HW/SW Integ 533 358 3 other HW/SW Integ 477 395 2 CODE HW/SW Integ 533 359 4 CODE Code/Unit Test 477 396 1 CODE HW/SW Integ 533 360 3 CODE HW/SW Integ 483 397 3 CODE SW Qual Test 537 361 5 CODE Code/Unit Test 485 398 4 CODE SW Qual Test 538 362 3 CODE SW Qual Test 485 399 3 CODE SW Qual Test 538 363 4 CODE Unit Int/Test 485 400 3 CODE SW Qual Test 542 364 2 CODE HW/SW Integ 490 401 2 CODE HW/SW Integ 544 365 4 CODE Code/Unit Test 494 402 3 CODE HW/SW Integ 545 366 4 CODE HW/SW Integ 497 403 3 CODE Prep for Use 546 367 2 CODE HW/SW Integ 497 404 3 CODE SW Qual Test 546 368 3 CODE Code/Unit Test 499 405 3 CODE SW Qual Test 546 369 3 CODE SW Design 503 406 2 CODE System Test 547 370 3 CODE SW Design 503 407 3 CODE SW Qual Test 548 371 3 CODE HW/SW Integ 503 408 2 CODE Code/Unit Test 553 372 3 CODE Code/Unit Test 503 409 3 CODE Code/Unit Test 554 373 3 CODE HW/SW Integ 505 410 2 CODE Code/Unit Test 554 374 3 CODE HW/SW Integ 505 411 2 CODE HW/SW Integ 555 375 3 CODE HW/SW Integ 505 412 3 CODE System Test 555 376 3 CODE HW/SW Integ 510 413 3 CODE SW Qual Test 555 377 3 CODE Unit Int/Test 512 414 2 CODE Code/Unit Test 558 378 3 CODE Unit Int/Test 512 415 3 CODE System Test 559 379 3 CODE SW Design 513 416 3 CODE HW/SW Integ 561 235 Table 31: Continued # Sev Product Effectivity Day # Sev Product Effectivity Day 417 3 CODE SW Qual Test 562 454 4 CODE Code/Unit Test 611 418 3 CODE Code/Unit Test 568 455 3 CODE SW Qual Test 614 419 3 CODE Code/Unit Test 568 456 3 CODE System Test 614 420 3 CODE HW/SW Integ 569 457 3 CODE HW/SW Integ 614 421 5 CODE HW/SW Integ 572 458 3 CODE HW/SW Integ 614 422 4 CODE HW/SW Integ 572 459 4 CODE HW/SW Integ 614 423 3 CODE HW/SW Integ 573 460 4 CODE HW/SW Integ 614 424 3 CODE HW/SW Integ 581 461 3 CODE Unk 615 425 1 CODE SW Qual Test 584 462 3 CODE SW Qual Test 623 426 3 SRS HW/SW Integ 587 463 4 CODE Code/Unit Test 624 427 3 CODE System Test 588 464 2 CODE SW Qual Test 625 428 2 CODE Code/Unit Test 589 465 3 CODE System Test 628 429 3 CODE Unk 589 466 3 CODE System Test 628 430 3 CODE Code/Unit Test 590 467 3 CODE HW/SW Integ 629 431 3 CODE Unk 593 468 2 CODE SW Qual Test 630 432 3 CODE HW/SW Integ 594 469 4 CODE SW Qual Test 635 433 2 CODE Code/Unit Test 595 470 3 CODE SW Qual Test 636 434 3 CODE Sys Design 595 471 3 SRS System Test 636 435 3 CODE HW/SW Integ 595 472 4 CODE SW Qual Test 637 436 3 CODE SW Qual Test 596 473 1 CODE Code/Unit Test 639 437 3 CODE SW Qual Test 596 474 3 CODE SW Qual Test 639 438 3 CODE SW Qual Test 597 475 3 CODE SW Qual Test 639 439 3 CODE HW/SW Integ 597 476 2 CODE Unit Int/Test 639 440 4 CODE Unit Int/Test 600 477 3 CODE HW/SW Integ 644 441 3 CODE Unit Int/Test 600 478 2 CODE SW Qual Test 644 442 3 CODE Unit Int/Test 600 479 2 CODE System Test 649 443 4 CODE Code/Unit Test 601 480 3 SRS SW Qual Test 649 444 4 CODE Code/Unit Test 601 481 3 SRS SW Qual Test 649 445 2 CODE Code/Unit Test 601 482 3 CODE SW Qual Test 649 446 3 CODE SW Arch 601 483 3 CODE SW Qual Test 650 447 3 CODE Unit Int/Test 604 484 3 CODE SW Qual Test 650 448 3 
CODE System Test 607 485 2 CODE SW Qual Test 651 449 3 CODE System Test 608 486 3 CODE SW Qual Test 652 450 3 CODE SW Qual Test 608 487 3 CODE SW Qual Test 656 451 2 SRS SW Req 609 488 3 CODE SW Qual Test 656 452 3 CODE HW/SW Integ 610 489 4 CODE SW Qual Test 657 453 3 CODE HW/SW Integ 611 490 3 CODE Unit Int/Test 657 236 Table 31: Continued # Sev Product Effectivity Day # Sev Product Effectivity Day 491 3 CODE System Test 657 528 4 CODE SW Qual Test 699 492 4 SRS SW Qual Test 663 529 3 CODE SW Qual Test 699 493 3 other SW Qual Test 664 530 1 CODE SW Req 701 494 3 CODE SW Qual Test 666 531 2 CODE System Test 702 495 4 SRS SW Qual Test 666 532 3 CODE Unit Int/Test 705 496 3 CODE SW Qual Test 667 533 2 CODE Code/Unit Test 705 497 3 CODE HW/SW Integ 667 534 3 CODE System Test 706 498 2 other Unit Int/Test 668 535 3 CODE System Test 706 499 3 CODE Code/Unit Test 670 536 3 CODE System Test 706 500 3 CODE Code/Unit Test 670 537 3 SRS SW Qual Test 707 501 3 SRS SW Qual Test 670 538 3 SRS SW Qual Test 707 502 3 CODE SW Qual Test 674 539 3 CODE SW Qual Test 712 503 2 CODE System Test 675 540 3 CODE SW Qual Test 713 504 3 SRS SW Qual Test 677 541 3 CODE Code/Unit Test 714 505 3 SRS SW Qual Test 677 542 5 CODE System Test 714 506 3 SRS SW Qual Test 677 543 3 CODE SW Qual Test 716 507 3 SRS SW Qual Test 677 544 3 CODE Code/Unit Test 726 508 3 CODE Code/Unit Test 679 545 2 CODE Sys Req 727 509 3 CODE HW/SW Integ 679 546 2 CODE Sys Req 727 510 3 CODE System Test 685 547 2 CODE Sys Req 727 511 4 CODE HW/SW Integ 685 548 2 CODE Sys Req 727 512 2 CODE System Test 685 549 2 CODE Sys Req 727 513 3 CODE System Test 685 550 2 CODE Sys Req 727 514 3 CODE System Test 686 551 3 CODE Code/Unit Test 727 515 3 CODE Code/Unit Test 686 552 2 CODE Sys Req 727 516 3 CODE Prep for Use 686 553 3 CODE System Test 727 517 2 CODE System Test 686 554 2 CODE Sys Req 727 518 3 CODE SW Qual Test 686 555 2 CODE Sys Req 727 519 4 SRS SW Req 687 556 3 CODE HW/SW Integ 727 520 4 ADD SW Req 687 557 2 CODE Sys Req 727 521 4 SRS SW Qual Test 687 558 3 CODE System Test 727 522 4 SRS SW Qual Test 687 559 3 documentation Sys Req 727 523 4 SRS SW Qual Test 687 560 3 SRS SW Qual Test 728 524 3 CODE SW Qual Test 688 561 2 CODE Sys Design 728 525 3 other SW Qual Test 699 562 3 other SW Qual Test 729 526 4 CODE Prep for Use 699 563 3 CODE System Test 729 527 3 CODE Prep for Use 699 564 3 SRS SW Qual Test 729 237 Table 31: Continued # Sev Product Effectivity Day # Sev Product Effectivity Day 565 3 CODE System Test 730 602 3 CODE SW Qual Test 762 566 2 CODE System Test 730 603 1 CODE Code/Unit Test 763 567 3 CODE System Test 730 604 3 CODE HW/SW Integ 765 568 2 CODE HW/SW Integ 730 605 3 CODE SW Qual Test 765 569 2 CODE SW Qual Test 733 606 3 CODE SW Qual Test 769 570 3 CODE System Test 736 607 3 SRS SW Qual Test 770 571 3 CODE Sys Req 737 608 3 CODE SW Qual Test 771 572 3 CODE System Test 737 609 3 CODE SW Qual Test 771 573 3 CODE Code/Unit Test 739 610 3 CODE SW Qual Test 771 574 3 CODE Unit Int/Test 741 611 2 CODE System Test 774 575 3 CODE System Test 741 612 2 CODE System Test 775 576 3 CODE SW Qual Test 741 613 3 CODE HW/SW Integ 776 577 2 CODE System Test 742 614 2 CODE System Test 778 578 3 other SW Qual Test 742 615 3 CODE Code/Unit Test 778 579 2 CODE SW Qual Test 742 616 3 other HW/SW Integ 783 580 3 CODE Code/Unit Test 743 617 3 CODE Prep for Use 784 581 3 CODE System Test 743 618 3 CODE Prep for Use 784 582 3 CODE Sys Req 743 619 2 CODE HW/SW Integ 784 583 3 CODE SW Qual Test 743 620 3 CODE System Test 785 584 3 CODE HW/SW Integ 743 
621 2 CODE System Test 785 585 3 CODE Sys Req 744 622 4 CODE SW Qual Test 789 586 3 CODE Code/Unit Test 747 623 2 CODE System Test 789 587 3 CODE SW Qual Test 747 624 3 other SW Qual Test 790 588 2 CODE SW Qual Test 748 625 2 CODE SW Qual Test 790 589 3 CODE Prep for Use 748 626 2 CODE SW Qual Test 790 590 3 CODE SW Qual Test 749 627 1 CODE Unit Int/Test 790 591 3 CODE SW Qual Test 751 628 2 CODE SW Qual Test 791 592 3 CODE HW/SW Integ 751 629 3 CODE System Test 791 593 3 CODE HW/SW Integ 751 630 3 CODE SW Qual Test 793 594 3 CODE SW Qual Test 755 631 3 CODE SW Qual Test 794 595 4 CODE SW Qual Test 756 632 3 CODE System Test 794 596 3 CODE SW Qual Test 759 633 3 CODE System Test 794 597 3 CODE System Test 761 634 2 CODE HW/SW Integ 796 598 5 CODE SW Qual Test 761 635 3 CODE System Test 796 599 3 CODE SW Qual Test 761 636 3 SRS SW Qual Test 796 600 3 CODE SW Qual Test 761 637 3 SRS SW Qual Test 796 601 3 CODE SW Qual Test 761 638 3 CODE SW Qual Test 796 238 Table 31: Continued # Sev Product Effectivity Day # Sev Product Effectivity Day 639 3 CODE Sys Design 797 676 3 SRS SW Qual Test 841 640 3 CODE Sys Design 797 677 2 CODE System Test 842 641 3 CODE HW/SW Integ 798 678 1 CODE Code/Unit Test 842 642 3 CODE Sys Design 798 679 3 CODE System Test 842 643 3 CODE Sys Design 798 680 3 CODE SW Qual Test 845 644 3 CODE Sys Req 799 681 3 CODE SW Qual Test 845 645 3 SRS SW Req 799 682 2 CODE SW Qual Test 845 646 3 CODE Sys Design 799 683 2 CODE System Test 845 647 3 CODE SW Qual Test 801 684 3 CODE Sys Req 847 648 3 CODE SW Qual Test 804 685 3 SRS SW Qual Test 847 649 3 CODE Sys Design 804 686 4 CODE SW Qual Test 847 650 3 CODE Sys Design 804 687 2 CODE System Test 847 651 3 CODE Sys Design 804 688 3 CODE SW Qual Test 848 652 3 CODE Code/Unit Test 805 689 4 CODE HW/SW Integ 849 653 3 CODE HW/SW Integ 805 690 3 CODE SW Qual Test 850 654 3 CODE Code/Unit Test 805 691 3 SRS SW Qual Test 852 655 2 CODE System Test 807 692 3 CODE Sys Req 852 656 3 SRS SW Qual Test 810 693 3 SRS SW Req 853 657 3 CODE Prep for Use 811 694 3 SRS SW Req 854 658 3 CODE Prep for Use 811 695 2 CODE SW Qual Test 845 659 2 CODE SW Qual Test 811 696 2 CODE System Test 854 660 2 SRS SW Req 811 697 3 CODE System Test 854 661 3 CODE Sys Req 814 698 3 CODE System Test 854 662 5 CODE SW Qual Test 814 699 3 CODE System Test 854 663 2 CODE SW Qual Test 817 700 3 CODE SW Qual Test 855 664 3 CODE Sys Req 818 701 3 CODE System Test 855 665 3 SRS SW Qual Test 818 702 3 CODE System Test 855 666 3 CODE System Test 818 703 2 CODE System Test 856 667 3 CODE HW/SW Integ 821 704 3 CODE Sys Req 856 668 3 CODE Sys Design 824 705 2 CODE System Test 856 669 3 SRS SW Req 824 706 5 CODE System Test 861 670 3 other SW Qual Test 824 707 2 CODE System Test 861 671 3 SRS SW Req 825 708 1 CODE System Test 862 672 3 CODE Code/Unit Test 825 709 4 CODE System Test 862 673 2 CODE SW Qual Test 825 710 4 CODE SW Qual Test 863 674 3 CODE SW Qual Test 828 711 2 CODE SW Qual Test 863 675 2 CODE System Test 838 712 3 CODE SW Qual Test 863 239 Table 31: Continued # Sev Product Effectivity Day # Sev Product Effectivity Day 713 2 CODE SW Qual Test 866 750 2 CODE System Test 892 714 3 SRS Sys Req 866 751 2 CODE System Test 895 715 1 CODE System Test 866 752 3 CODE System Test 895 716 2 CODE SW Qual Test 867 753 3 CODE Sys Design 895 717 3 CODE SW Qual Test 867 754 1 CODE System Test 895 718 3 CODE SW Qual Test 867 755 2 other Unk 897 719 3 CODE System Test 867 756 2 CODE Unit Int/Test 897 720 3 CODE System Test 867 757 2 CODE System Test 899 721 3 CODE HW/SW Integ 867 758 
3 CODE System Test 899 722 2 CODE Unit Int/Test 869 759 3 CODE System Test 901 723 3 SRS SW Req 875 760 2 CODE System Test 904 724 3 CODE Sys Req 875 761 1 CODE System Test 904 725 2 CODE System Test 875 762 2 CODE System Test 904 726 2 CODE System Test 877 763 4 CODE SW Qual Test 904 727 4 CODE SW Qual Test 878 764 3 CODE SW Qual Test 904 728 3 CODE SW Qual Test 878 765 2 CODE System Test 909 729 3 CODE SW Qual Test 878 766 2 CODE System Test 909 730 2 CODE HW/SW Integ 880 767 3 CODE Code/Unit Test 909 731 3 CODE System Test 881 768 3 CODE SW Design 909 732 3 CODE SW Qual Test 882 769 3 CODE SW Design 909 733 2 CODE System Test 882 770 3 ADD SW Qual Test 910 734 3 SRS SW Qual Test 884 771 3 CODE SW Qual Test 910 735 4 CODE SW Qual Test 885 772 3 SRS SW Qual Test 911 736 3 SRS SW Qual Test 885 773 3 CODE SW Qual Test 911 737 3 CODE System Test 885 774 3 CODE Code/Unit Test 911 738 4 CODE System Test 887 775 1 CODE System Test 915 739 4 CODE System Test 887 776 2 CODE SW Qual Test 915 740 3 CODE System Test 887 777 2 CODE System Test 917 741 3 CODE System Test 887 778 4 CODE System Test 917 742 3 CODE System Test 887 779 4 CODE System Test 918 743 5 CODE System Test 887 780 4 CODE System Test 918 744 3 SRS SW Req 888 781 4 CODE SW Qual Test 918 745 5 CODE System Test 890 782 2 kernel Code/Unit Test 919 746 3 CODE SW Qual Test 891 783 3 CODE SW Qual Test 919 747 2 CODE System Test 891 784 3 other Prep for Use 919 748 3 CODE System Test 891 785 2 CODE System Test 922 749 3 other System Test 891 786 3 CODE SW Qual Test 924 240 Table 31: Continued # Sev Product Effectivity Day # Sev Product Effectivity Day 787 2 CODE SW Qual Test 924 824 1 CODE System Test 957 788 3 SRS Sys Req 924 825 3 CODE System Test 957 789 3 CODE SW Req 925 826 2 CODE System Test 959 790 3 SRS SW Qual Test 925 827 3 CODE SW Qual Test 960 791 3 CODE Sys Req 926 828 3 CODE SW Qual Test 961 792 3 CODE SW Qual Test 929 829 3 CODE SW Qual Test 961 793 3 SRS SW Qual Test 929 830 2 CODE System Test 961 794 2 CODE System Test 930 831 3 SRS Sys Req 966 795 3 SRS SW Req 930 832 3 SRS System Test 966 796 3 CODE SW Qual Test 931 833 3 other System Test 968 797 2 CODE SW Qual Test 932 834 3 CODE SW Qual Test 976 798 3 SRS Sys Req 932 835 3 CODE SW Qual Test 977 799 3 SRS Sys Req 933 836 3 CODE SW Qual Test 978 800 3 SRS SW Req 933 837 3 CODE Prep for Use 979 801 3 CODE SW Qual Test 936 838 3 CODE SW Qual Test 980 802 3 CODE SW Qual Test 936 839 4 CODE SW Design 982 803 3 SRS SW Qual Test 938 840 4 CODE SW Design 982 804 3 CODE System Test 938 841 2 CODE System Test 982 805 3 CODE System Test 938 842 3 SRS SW Req 986 806 3 CODE System Test 940 843 3 CODE SW Qual Test 986 807 3 CODE System Test 941 844 3 CODE System Test 986 808 2 CODE System Test 941 845 3 CODE SW Qual Test 988 809 2 CODE System Test 941 846 3 SRS SW Req 992 810 2 CODE System Test 941 847 3 SRS SW Qual Test 995 811 3 CODE SW Qual Test 943 848 3 SRS SW Qual Test 995 812 3 CODE SW Qual Test 943 849 3 SRS SW Qual Test 995 813 2 CODE System Test 946 850 2 CODE Sys Req 999 814 1 CODE System Test 949 851 3 CODE System Test 999 815 2 CODE System Test 949 852 1 CODE System Test 999 816 3 CODE System Test 949 853 3 SRS SW Qual Test 1001 817 3 CODE System Test 949 854 3 CODE System Test 1001 818 2 CODE System Test 950 855 4 CODE Unk 1002 819 2 CODE System Test 952 856 3 CODE System Test 1002 820 3 CODE Unit Int/Test 952 857 2 CODE System Test 1002 821 2 CODE System Test 954 858 3 CODE Sys Req 1002 822 2 CODE System Test 957 859 3 CODE SW Qual Test 1003 823 2 CODE System Test 957 
860 3 CODE SW Qual Test 1006 241 Table 31: Continued # Sev Product Effectivity Day # Sev Product Effectivity Day 861 3 other Prep for Use 1007 898 3 CODE SW Qual Test 1042 862 4 CODE Sys Design 1007 899 3 CODE System Test 1042 863 3 CODE Code/Unit Test 1008 900 2 CODE System Test 1044 864 3 SRS Sys Req 1009 901 1 CODE System Test 1049 865 3 SRS SW Qual Test 1012 902 3 CODE System Test 1050 866 2 CODE System Test 1013 903 2 CODE System Test 1052 867 2 CODE System Test 1013 904 2 CODE HW/SW Integ 1056 868 2 CODE System Test 1013 905 3 CODE HW/SW Integ 1056 869 2 CODE System Test 1013 906 2 CODE Code/Unit Test 1057 870 2 CODE System Test 1013 907 3 CODE Qual Test Prep 1057 871 2 CODE Unit Int/Test 1014 908 3 CODE SW Qual Test 1058 872 3 CODE Prep for Use 1015 909 3 SRS Maint_Prep 1064 873 3 other SW Qual Test 1015 910 3 CODE Prep for Use 1064 874 3 CODE SW Qual Test 1016 911 3 SDD Maint_Prep 1065 875 2 CODE HW/SW Integ 1016 912 3 SDD Maint_Prep 1065 876 3 CODE Prep for Use 1017 913 1 CODE System Test 1070 877 2 CODE Qual Test Prep 1017 914 2 SRS System Test 1071 878 2 CODE System Test 1017 915 3 SRS Maint_Prep 1072 879 2 CODE System Test 1017 916 3 SRS Maint_Prep 1072 880 3 CODE Code/Unit Test 1020 917 2 CODE System Test 1072 881 3 other System Test 1020 918 3 CODE System Test 1072 882 3 other Prep for Use 1020 919 3 CODE Prep for Use 1076 883 3 CODE SW Qual Test 1021 920 3 CODE Prep for Use 1076 884 3 CODE SW Qual Test 1021 921 3 CODE SW Qual Test 1080 885 3 SRS SW Qual Test 1024 922 3 CODE Maint_Prep 1081 886 3 CODE SW Qual Test 1027 923 1 CODE HW/SW Integ 1083 887 3 other Unk 1027 924 2 CODE System Test 1084 888 3 CODE SW Qual Test 1028 925 3 SRS Maint_Prep 1085 889 3 other Unk 1028 926 2 CODE System Test 1090 890 3 other Unk 1029 927 3 CODE System Test 1090 891 3 CODE Maint_Prep 1031 928 3 CODE System Test 1090 892 3 SRS SW Req 1031 929 3 SRS Maint_Prep 1091 893 3 CODE System Test 1031 930 3 CODE System Test 1091 894 3 SRS SW Req 1034 931 3 CODE System Test 1092 895 3 CODE Qual Test Prep 1035 932 3 CODE SW Qual Test 1092 896 2 CODE System Test 1035 933 3 SRS System Test 1092 897 2 CODE Qual Test Prep 1036 934 3 CODE HW/SW Integ 1092 242 Table 31: Continued # Sev Product Effectivity Day # Sev Product Effectivity Day 935 3 CODE Maint_Prep 1093 972 3 other SW Qual Test 1139 936 3 CODE System Test 1094 973 3 other Prep for Use 1139 937 3 CODE Sys Design 1094 974 3 CODE System Test 1141 938 3 CODE System Test 1097 975 4 CODE Maint_Prep 1146 939 3 CODE Maint_Prep 1098 976 3 CODE Prep for Use 1148 940 3 CODE Unk 1099 977 3 CODE Prep for Use 1148 941 3 CODE Unk 1099 978 3 other Prep for Use 1148 942 3 CODE Prep for Use 1099 979 3 CODE System Test 1148 943 3 other System Test 1102 980 3 CODE HW/SW Integ 1149 944 3 other System Test 1104 981 3 SRS Maint_Prep 1151 945 5 CODE Prep for Use 1105 982 2 CODE System Test 1154 946 4 SRS Maint_Prep 1108 983 3 CODE System Test 1154 947 4 CODE SW Qual Test 1112 984 2 CODE System Test 1154 948 3 other System Test 1112 985 2 CODE System Test 1156 949 2 CODE HW/SW Integ 1114 986 3 CODE System Test 1160 950 2 CODE HW/SW Integ 1114 987 2 CODE Unit Int/Test 1160 951 2 CODE System Test 1119 988 3 SRS Maint_Prep 1160 952 3 CODE System Test 1119 989 3 SRS Maint_Prep 1161 953 2 CODE Maint_Prep 1119 990 3 SRS Maint_Prep 1161 954 3 CODE Maint_Prep 1119 991 3 SRS Prep for Use 1161 955 2 CODE System Test 1122 992 3 SRS Maint_Prep 1161 956 3 CODE System Test 1125 993 3 SRS Maint_Prep 1178 957 2 CODE System Test 1126 994 3 SRS System Test 1178 958 3 CODE HW/SW Integ 1127 995 
3 SRS System Test 1178 959 3 CODE Maint_Prep 1128 996 3 CODE Maint_Prep 1178 960 3 SRS Maint_Prep 1132 997 3 CODE System Test 1183 961 2 other Prep for Use 1133 998 3 CODE Maint_Prep 1184 962 3 SRS Maint_Prep 1133 999 3 CODE System Test 1184 963 3 SRS Maint_Prep 1133 1000 3 SRS Maint_Prep 1191 964 3 CODE HW/SW Integ 1133 1001 3 CODE Sys Req 1192 965 3 CODE HW/SW Integ 1133 1002 3 CODE Maint_Prep 1211 966 4 CODE Maint_Prep 1134 1003 3 CODE Unit Int/Test 1212 967 3 CODE Code/Unit Test 1134 1004 3 CODE System Test 1212 968 3 CODE System Test 1135 1005 3 CODE System Test 1213 969 3 other Prep for Use 1135 1006 3 CODE Unit Int/Test 1225 970 3 CODE System Test 1136 1007 3 CODE Unit Int/Test 1225 971 2 CODE System Test 1139 1008 3 CODE Unit Int/Test 1225 243 Table 31: Continued # Sev Product Effectivity Day # Sev Product Effectivity Day 1009 3 CODE System Test 1225 1046 1 CODE Unit Int/Test 1286 1010 3 CODE Sys Req 1230 1047 3 CODE System Test 1288 1011 3 CODE System Test 1232 1048 3 CODE System Test 1290 1012 3 CODE System Test 1233 1049 1 CODE System Test 1290 1013 1 CODE System Test 1233 1050 3 CODE System Test 1293 1014 3 CODE HW/SW Integ 1234 1051 2 CODE Maint_Prep 1296 1015 3 CODE Maint_Prep 1237 1052 3 CODE Prep for Use 1297 1016 3 CODE Prep for Use 1237 1053 4 CODE Prep for Use 1297 1017 3 SRS Maint_Prep 1239 1054 3 CODE Maint_Prep 1300 1018 3 CODE System Test 1240 1055 3 CODE System Test 1300 1019 2 CODE System Test 1244 1056 4 CODE Sys Design 1303 1020 3 CODE Maint_Prep 1246 1057 3 other Prep for Use 1308 1021 3 CODE Sys Req 1246 1058 3 CODE Maint_Prep 1308 1022 3 CODE Maint_Prep 1248 1059 4 CODE Maint_Prep 1308 1023 3 CODE System Test 1255 1060 3 CODE Maint_Prep 1310 1024 3 CODE Prep for Use 1259 1061 3 CODE Sys Design 1310 1025 3 CODE Prep for Use 1259 1062 3 CODE Maint_Prep 1311 1026 3 CODE System Test 1260 1063 2 CODE HW/SW Integ 1313 1027 3 CODE System Test 1262 1064 3 CODE HW/SW Integ 1313 1028 2 SRS System Test 1262 1065 2 CODE HW/SW Integ 1313 1029 3 CODE Sys Req 1265 1066 3 CODE Prep for Use 1314 1030 2 CODE System Test 1265 1067 3 CODE Maint_Prep 1314 1031 3 CODE Prep for Use 1266 1068 3 CODE Maint_Prep 1314 1032 3 CODE System Test 1266 1069 3 CODE Maint_Prep 1314 1033 3 CODE System Test 1266 1070 3 CODE Maint_Prep 1314 1034 3 SRS Maint_Prep 1269 1071 3 CODE System Test 1314 1035 3 CODE Maint_Prep 1269 1072 3 CODE Prep for Use 1314 1036 3 CODE System Test 1269 1073 3 SRS Maint_Prep 1315 1037 2 CODE System Test 1272 1074 3 CODE Prep for Use 1315 1038 3 CODE Maint_Prep 1274 1075 3 CODE Sys Design 1315 1039 3 CODE Maint_Prep 1274 1076 3 CODE Maint_Prep 1315 1040 5 CODE Maint_Prep 1276 1077 4 CODE Maint_Prep 1315 1041 1 CODE System Test 1276 1078 3 CODE Maint_Prep 1317 1042 3 CODE Prep for Use 1280 1079 3 SRS Maint_Prep 1317 1043 3 CODE Maint_Prep 1281 1080 3 CODE Unit Int/Test 1317 1044 3 CODE Maint_Prep 1282 1081 4 CODE Prep for Use 1322 1045 2 CODE System Test 1283 1082 3 CODE System Test 1324 244 Table 31: Continued # Sev Product Effectivity Day # Sev Product Effectivity Day 1083 2 CODE Prep for Use 1324 1120 3 CODE HW/SW Integ 1384 1084 5 CODE Maint_Prep 1325 1121 3 CODE Maint_Prep 1385 1085 3 CODE Maint_Prep 1329 1122 3 CODE System Test 1385 1086 2 CODE System Test 1330 1123 2 kernel System Test 1386 1087 3 other Prep for Use 1330 1124 2 kernel System Test 1386 1088 3 CODE Maint_Prep 1331 1125 2 kernel System Test 1386 1089 4 CODE Prep for Use 1335 1126 2 kernel System Test 1386 1090 3 CODE Maint_Prep 1336 1127 2 CODE System Test 1386 1091 3 CODE Maint_Prep 1336 1128 4 CODE 
System Test 1386 1092 2 CODE System Test 1338 1129 2 CODE Code/Unit Test 1386 1093 3 CODE System Test 1338 1130 2 CODE Code/Unit Test 1386 1094 3 CODE System Test 1339 1131 3 CODE Maint_Prep 1393 1095 2 CODE Maint_Prep 1342 1132 4 CODE System Test 1394 1096 3 CODE Maint_Prep 1342 1133 3 other Maint_Prep 1402 1097 3 CODE Maint_Prep 1346 1134 3 CODE System Test 1402 1098 4 CODE System Test 1350 1135 3 CODE System Test 1402 1099 3 CODE HW/SW Integ 1351 1136 3 CODE System Test 1402 1100 2 CODE Prep for Use 1352 1137 3 CODE System Test 1406 1101 3 CODE Code/Unit Test 1359 1138 3 CODE System Test 1406 1102 2 CODE System Test 1359 1139 3 CODE System Test 1406 1103 3 CODE System Test 1363 1140 4 CODE Maint_Prep 1407 1104 2 CODE System Test 1364 1141 2 CODE System Test 1408 1105 3 other Prep for Use 1365 1142 3 CODE Maint_Prep 1409 1106 2 CODE System Test 1366 1143 4 CODE Maint_Prep 1409 1107 2 CODE System Test 1366 1144 3 CODE System Test 1412 1108 3 CODE Code/Unit Test 1368 1145 2 CODE System Test 1412 1109 3 CODE System Test 1372 1146 2 CODE Unit Int/Test 1414 1110 2 CODE System Test 1374 1147 3 CODE Code/Unit Test 1427 1111 2 CODE Maint_Prep 1380 1148 3 CODE System Test 1429 1112 3 CODE Maint_Prep 1380 1149 4 SRS Maint_Prep 1430 1113 3 CODE Maint_Prep 1380 1150 2 CODE HW/SW Integ 1431 1114 3 CODE Maint_Prep 1380 1151 2 CODE Code/Unit Test 1433 1115 3 CODE System Test 1380 1152 2 other SW Qual Test 1433 1116 3 CODE System Test 1380 1153 4 other Maint_Prep 1442 1117 2 CODE System Test 1381 1154 3 CODE Maint_Prep 1443 1118 3 kernel HW/SW Integ 1381 1155 3 kernel HW/SW Integ 1443 1119 3 CODE System Test 1381 1156 4 CODE Maint_Prep 1443 245 Table 31: Continued # Sev Product Effectivity Day # Sev Product Effectivity Day 1157 3 kernel HW/SW Integ 1443 1194 3 CODE Maint_Prep 1497 1158 4 CODE Qual Test Prep 1447 1195 3 CODE Maint_Prep 1497 1159 4 CODE Qual Test Prep 1447 1196 4 CODE Qual Test Prep 1499 1160 5 CODE Unit Int/Test 1448 1197 4 CODE Maint_Prep 1499 1161 3 CODE Prep for Use 1448 1198 4 CODE Maint_Prep 1499 1162 5 other Code/Unit Test 1450 1199 5 CODE Prep for Use 1499 1163 5 CODE Unit Int/Test 1455 1200 5 CODE Code/Unit Test 1503 1164 5 CODE Unit Int/Test 1456 1201 4 CODE Code/Unit Test 1503 1165 5 CODE Code/Unit Test 1461 1202 4 SRS Qual Test Prep 1504 1166 2 CODE Code/Unit Test 1463 1203 4 SRS Qual Test Prep 1504 1167 3 other SW Qual Test 1464 1204 5 CODE Code/Unit Test 1504 1168 3 CODE Code/Unit Test 1469 1205 5 CODE Code/Unit Test 1504 1169 4 CODE SW Design 1469 1206 3 CODE SW Qual Test 1504 1170 5 CODE Code/Unit Test 1470 1207 1 CODE System Test 1504 1171 5 CODE Code/Unit Test 1472 1208 3 CODE SW Qual Test 1504 1172 4 CODE Qual Test Prep 1476 1209 4 SRS Qual Test Prep 1505 1173 3 CODE Maint_Prep 1477 1210 3 CODE System Test 1506 1174 2 CODE System Test 1478 1211 3 CODE Maint_Prep 1507 1175 2 other System Test 1482 1212 3 CODE System Test 1511 1176 3 CODE Code/Unit Test 1483 1213 3 CODE Maint_Prep 1511 1177 5 CODE Code/Unit Test 1483 1214 4 SRS Code/Unit Test 1511 1178 5 CODE Unit Int/Test 1484 1215 4 CODE Code/Unit Test 1513 1179 3 CODE Code/Unit Test 1484 1216 4 CODE Code/Unit Test 1513 1180 3 ADD Code/Unit Test 1485 1217 4 CODE Maint_Prep 1514 1181 3 CODE Maint_Prep 1485 1218 4 CODE Maint_Prep 1518 1182 3 CODE System Test 1485 1219 5 CODE Code/Unit Test 1520 1183 3 CODE Maint_Prep 1486 1220 1 CODE System Test 1520 1184 5 CODE Code/Unit Test 1486 1221 3 CODE SW Design 1520 1185 4 CODE Maint_Prep 1486 1222 3 CODE System Test 1521 1186 4 CODE Maint_Prep 1486 1223 5 CODE Code/Unit Test 
1521 1187 3 other SW Qual Test 1490 1224 2 CODE Maint_Prep 1524 1188 2 CODE System Test 1490 1225 2 CODE Code/Unit Test 1524 1189 4 CODE Maint_Prep 1492 1226 1 CODE Maint_Prep 1525 1190 3 CODE Maint_Prep 1492 1227 3 CODE System Test 1526 1191 5 CODE Code/Unit Test 1493 1228 5 CODE Code/Unit Test 1526 1192 3 CODE Maint_Prep 1494 1229 3 CODE HW/SW Integ 1527 1193 3 CODE Maint_Prep 1494 1230 4 CODE SW Qual Test 1531 246 Table 31: Continued # Sev Product Effectivity Day # Sev Product Effectivity Day 1231 3 CODE HW/SW Integ 1532 1268 3 SRS Maint_Prep 1576 1232 4 CODE Maint_Prep 1533 1269 5 CODE Code/Unit Test 1576 1233 3 CODE Qual Test Prep 1533 1270 5 CODE Code/Unit Test 1576 1234 3 CODE Code/Unit Test 1533 1271 3 SRS Maint_Prep 1577 1235 3 CODE Code/Unit Test 1533 1272 3 SRS Maint_Prep 1577 1236 3 SRS Maint_Prep 1535 1273 3 SRS Maint_Prep 1577 1237 3 SRS Maint_Prep 1536 1274 3 SRS Maint_Prep 1577 1238 3 CODE System Test 1539 1275 3 SRS Maint_Prep 1580 1239 3 CODE HW/SW Integ 1539 1276 3 SRS Maint_Prep 1580 1240 3 CODE HW/SW Integ 1539 1277 5 CODE Code/Unit Test 1580 1241 3 CODE System Test 1539 1278 5 CODE Unit Int/Test 1581 1242 3 CODE HW/SW Integ 1539 1279 4 CODE Code/Unit Test 1583 1243 3 CODE HW/SW Integ 1539 1280 5 CODE Unit Int/Test 1583 1244 3 CODE HW/SW Integ 1539 1281 5 CODE Unit Int/Test 1583 1245 3 CODE HW/SW Integ 1539 1282 5 CODE Code/Unit Test 1587 1246 3 CODE HW/SW Integ 1539 1283 3 CODE Unit Int/Test 1587 1247 4 CODE Code/Unit Test 1545 1284 5 CODE Unit Int/Test 1587 1248 5 CODE Code/Unit Test 1545 1285 5 CODE Code/Unit Test 1587 1249 5 CODE Code/Unit Test 1546 1286 5 CODE Code/Unit Test 1587 1250 5 CODE Code/Unit Test 1547 1287 5 CODE Code/Unit Test 1588 1251 4 CODE Code/Unit Test 1547 1288 3 CODE Maint_Prep 1588 1252 5 CODE Prep for Use 1547 1289 4 CODE Maint_Prep 1589 1253 5 CODE Code/Unit Test 1552 1290 4 CODE System Test 1590 1254 5 CODE Code/Unit Test 1554 1291 3 CODE System Test 1590 1255 5 CODE Code/Unit Test 1554 1292 5 CODE Code/Unit Test 1591 1256 4 SRS Code/Unit Test 1555 1293 4 CODE Unit Int/Test 1594 1257 4 CODE Code/Unit Test 1555 1294 5 CODE Code/Unit Test 1594 1258 5 CODE Code/Unit Test 1556 1295 5 CODE Code/Unit Test 1595 1259 5 CODE Code/Unit Test 1556 1296 5 CODE Code/Unit Test 1597 1260 3 CODE System Test 1559 1297 3 CODE Code/Unit Test 1598 1261 3 CODE Unit Int/Test 1560 1298 3 CODE Code/Unit Test 1598 1262 4 SRS Qual Test Prep 1561 1299 5 CODE Code/Unit Test 1598 1263 4 SRS SW Qual Test 1561 1300 5 CODE Unit Int/Test 1598 1264 5 CODE Unit Int/Test 1573 1301 4 CODE Unit Int/Test 1598 1265 4 CODE Code/Unit Test 1574 1302 3 CODE Code/Unit Test 1598 1266 4 CODE Code/Unit Test 1574 1303 3 CODE Code/Unit Test 1598 1267 3 SRS Maint_Prep 1576 1304 3 other System Test 1601 247 Table 31: Continued # Sev Product Effectivity Day # Sev Product Effectivity Day 1305 3 CODE Unit Int/Test 1602 1342 4 CODE Maint_Prep 1609 1306 3 other System Test 1603 1343 3 CODE Unit Int/Test 1609 1307 5 CODE Unit Int/Test 1603 1344 5 CODE Maint_Prep 1610 1308 5 CODE Unit Int/Test 1603 1345 5 CODE Code/Unit Test 1610 1309 3 CODE HW/SW Integ 1603 1346 5 CODE Code/Unit Test 1611 1310 3 CODE HW/SW Integ 1603 1347 4 CODE Maint_Prep 1611 1311 5 CODE SW Qual Test 1604 1348 4 CODE Maint_Prep 1611 1312 5 CODE Code/Unit Test 1604 1349 4 CODE Maint_Prep 1611 1313 4 CODE Unit Int/Test 1604 1350 4 CODE Maint_Prep 1611 1314 4 CODE Code/Unit Test 1604 1351 5 CODE Code/Unit Test 1611 1315 5 CODE Code/Unit Test 1604 1352 3 CODE Maint_Prep 1611 1316 4 CODE Code/Unit Test 1604 1353 4 CODE Maint_Prep 1612 
1317 5 CODE Code/Unit Test 1604 1354 3 CODE Code/Unit Test 1612 1318 4 CODE Code/Unit Test 1604 1355 5 CODE Unit Int/Test 1617 1319 3 CODE Code/Unit Test 1604 1356 4 CODE Unit Int/Test 1617 1320 4 CODE Code/Unit Test 1604 1357 3 CODE Unit Int/Test 1617 1321 5 CODE Code/Unit Test 1604 1358 3 CODE System Test 1617 1322 4 CODE Code/Unit Test 1604 1359 4 CODE Maint_Prep 1618 1323 5 CODE Code/Unit Test 1605 1360 5 CODE Code/Unit Test 1618 1324 5 CODE Code/Unit Test 1605 1361 3 SRS Maint_Prep 1623 1325 5 CODE Code/Unit Test 1605 1362 2 CODE Maint_Prep 1623 1326 4 SRS System Test 1605 1363 3 CODE Qual Test Prep 1624 1327 5 CODE Code/Unit Test 1605 1364 3 CODE System Test 1624 1328 4 CODE Code/Unit Test 1605 1365 3 CODE SW Qual Test 1625 1329 5 CODE Code/Unit Test 1605 1366 5 CODE Unk 1625 1330 5 CODE Code/Unit Test 1605 1367 4 CODE Maint_Prep 1626 1331 4 CODE Code/Unit Test 1605 1368 3 CODE Maint_Prep 1626 1332 4 CODE Code/Unit Test 1605 1369 4 CODE Maint_Prep 1626 1333 4 CODE Code/Unit Test 1605 1370 2 CODE System Test 1629 1334 5 CODE Code/Unit Test 1605 1371 3 CODE System Test 1629 1335 5 CODE Code/Unit Test 1605 1372 5 CODE Code/Unit Test 1630 1336 5 CODE Code/Unit Test 1605 1373 5 CODE Code/Unit Test 1631 1337 5 CODE Code/Unit Test 1608 1374 5 CODE Code/Unit Test 1631 1338 5 CODE Unk 1608 1375 4 SRS Qual Test Prep 1632 1339 4 CODE Code/Unit Test 1608 1376 3 CODE Qual Test Prep 1640 1340 5 CODE Unk 1608 1377 4 CODE Code/Unit Test 1641 1341 4 SRS Code/Unit Test 1609 1378 4 CODE Code/Unit Test 1645 248 Table 31: Continued # Sev Product Effectivity Day # Sev Product Effectivity Day 1379 5 CODE Unit Int/Test 1645 1416 5 CODE Code/Unit Test 1666 1380 5 CODE Code/Unit Test 1646 1417 3 SRS SW Qual Test 1666 1381 5 CODE Code/Unit Test 1646 1418 5 CODE Code/Unit Test 1666 1382 4 CODE Qual Test Prep 1646 1419 5 CODE Code/Unit Test 1666 1383 3 SRS Qual Test Prep 1647 1420 4 CODE Unit Int/Test 1666 1384 3 SRS Qual Test Prep 1647 1421 3 CODE SW Qual Test 1666 1385 3 SRS Qual Test Prep 1647 1422 3 CODE Unit Int/Test 1667 1386 3 SRS Qual Test Prep 1647 1423 3 CODE Qual Test Prep 1667 1387 4 CODE Qual Test Prep 1648 1424 3 CODE Qual Test Prep 1667 1388 3 CODE Prep for Use 1648 1425 4 CODE Unit Int/Test 1668 1389 3 CODE Prep for Use 1648 1426 5 CODE Qual Test Prep 1670 1390 3 CODE Unit Int/Test 1650 1427 3 CODE Qual Test Prep 1670 1391 3 SRS Qual Test Prep 1651 1428 3 CODE Qual Test Prep 1671 1392 3 CODE System Test 1651 1429 3 CODE Qual Test Prep 1672 1393 3 SRS Qual Test Prep 1651 1430 4 CODE Code/Unit Test 1672 1394 3 CODE SW Qual Test 1652 1431 4 CODE Code/Unit Test 1672 1395 2 CODE System Test 1654 1432 5 CODE Code/Unit Test 1672 1396 3 CODE SW Qual Test 1654 1433 4 CODE Code/Unit Test 1672 1397 2 CODE Qual Test Prep 1657 1434 4 CODE Code/Unit Test 1673 1398 4 CODE Unit Int/Test 1657 1435 5 CODE Unit Int/Test 1673 1399 3 CODE Qual Test Prep 1657 1436 5 CODE Code/Unit Test 1674 1400 3 CODE System Test 1660 1437 4 CODE Code/Unit Test 1675 1401 3 CODE Qual Test Prep 1661 1438 5 CODE Code/Unit Test 1675 1402 5 CODE Unit Int/Test 1661 1439 4 CODE Code/Unit Test 1675 1403 5 CODE Unit Int/Test 1661 1440 4 CODE Code/Unit Test 1675 1404 5 CODE Unit Int/Test 1661 1441 5 CODE Code/Unit Test 1675 1405 3 CODE Qual Test Prep 1661 1442 4 CODE Code/Unit Test 1675 1406 5 CODE Qual Test Prep 1662 1416 5 CODE Code/Unit Test 1666 1407 5 CODE Unit Int/Test 1664 1417 3 SRS SW Qual Test 1666 1408 5 CODE Unit Int/Test 1664 1418 5 CODE Code/Unit Test 1666 1409 3 CODE Maint_Prep 1664 1419 5 CODE Code/Unit Test 1666 1410 5 CODE 
Code/Unit Test 1664 1420 4 CODE Unit Int/Test 1666 1411 5 CODE Code/Unit Test 1664 1421 3 CODE SW Qual Test 1666 1412 4 CODE Code/Unit Test 1665 1422 3 CODE Unit Int/Test 1667 1413 5 CODE SW Req 1665 1423 3 CODE Qual Test Prep 1667 1414 5 CODE Unit Int/Test 1665 1424 3 CODE Qual Test Prep 1667 1415 3 CODE Maint_Prep 1666 1425 4 CODE Unit Int/Test 1668 249 Table 31: Continued # Sev Product Effectivity Day # Sev Product Effectivity Day 1443 5 CODE Code/Unit Test 1675 1480 3 SRS Qual Test Prep 1710 1444 5 CODE Code/Unit Test 1675 1481 3 SRS Qual Test Prep 1710 1445 4 CODE Code/Unit Test 1678 1482 3 SRS Qual Test Prep 1710 1446 4 CODE Code/Unit Test 1678 1483 4 CODE SW Qual Test 1713 1447 5 CODE Unit Int/Test 1678 1484 3 SRS Qual Test Prep 1713 1448 5 CODE Unit Int/Test 1678 1485 3 CODE SW Req 1714 1449 3 CODE System Test 1679 1486 3 SRS Qual Test Prep 1714 1450 3 CODE System Test 1681 1487 3 CODE SW Req 1714 1451 3 CODE Qual Test Prep 1682 1488 4 CODE SW Req 1714 1452 5 CODE Prep for Use 1682 1489 3 CODE SW Req 1714 1453 3 CODE SW Req 1682 1490 4 CODE SW Req 1714 1454 4 CODE SW Req 1682 1491 3 CODE SW Req 1714 1455 4 CODE Unit Int/Test 1682 1492 3 CODE Maint_Prep 1718 1456 1 CODE System Test 1687 1493 3 CODE Maint_Prep 1718 1457 3 SRS Qual Test Prep 1687 1494 3 CODE Maint_Prep 1719 1458 3 SRS Maint_Prep 1688 1495 4 CODE Prep for Use 1722 1459 3 CODE Unit Int/Test 1688 1496 4 CODE SW Req 1722 1460 2 other Code/Unit Test 1688 1497 5 CODE Unit Int/Test 1722 1461 2 kernel Code/Unit Test 1688 1498 5 CODE Unit Int/Test 1722 1462 3 SRS Qual Test Prep 1689 1499 3 CODE HW/SW Integ 1723 1463 3 SRS Qual Test Prep 1690 1500 3 SRS Qual Test Prep 1723 1464 3 SRS Qual Test Prep 1690 1501 3 SRS Qual Test Prep 1723 1465 3 SRS Qual Test Prep 1690 1502 3 SRS Qual Test Prep 1724 1466 4 SRS Qual Test Prep 1692 1503 3 SRS Qual Test Prep 1724 1467 3 SRS Qual Test Prep 1695 1504 3 SRS Qual Test Prep 1724 1468 3 SRS Qual Test Prep 1695 1505 3 SRS SW Design 1724 1469 3 SRS Qual Test Prep 1695 1506 3 CODE Qual Test Prep 1733 1470 3 CODE Code/Unit Test 1695 1507 3 CODE Qual Test Prep 1734 1471 3 CODE SW Req 1695 1508 3 CODE SW Qual Test 1736 1472 4 CODE SW Qual Test 1696 1509 4 CODE SW Qual Test 1738 1473 3 CODE SW Qual Test 1696 1510 3 CODE Maint_Prep 1738 1474 3 CODE SW Qual Test 1700 1511 3 CODE Qual Test Prep 1748 1475 3 SRS Qual Test Prep 1701 1512 3 CODE Qual Test Prep 1748 1476 3 CODE SW Req 1701 1513 3 CODE Qual Test Prep 1749 1477 3 CODE SW Req 1701 1514 3 CODE Qual Test Prep 1752 1478 2 CODE System Test 1701 1515 3 CODE Qual Test Prep 1766 1479 3 SRS Qual Test Prep 1710 1516 2 CODE Unit Int/Test 1773 250 Table 31: Continued # Sev Product Effectivity Day # Sev Product Effectivity Day 1517 3 CODE SW Qual Test 1776 1554 3 CODE SW Qual Test 1814 1518 4 CODE SW Qual Test 1776 1555 3 CODE SW Qual Test 1814 1519 3 SRS Maint_Prep 1777 1556 4 CODE Qual Test Prep 1821 1520 3 SRS Qual Test Prep 1777 1557 4 CODE Qual Test Prep 1823 1521 4 CODE Qual Test Prep 1778 1558 4 CODE SW Qual Test 1825 1522 2 other Qual Test Prep 1778 1559 3 CODE SW Qual Test 1829 1523 4 other Qual Test Prep 1778 1560 5 CODE Qual Test Prep 1834 1524 4 other Qual Test Prep 1778 1561 3 CODE Maint_Prep 1843 1525 4 other Qual Test Prep 1778 1562 3 CODE Qual Test Prep 1854 1526 3 other Unit Int/Test 1778 1563 3 CODE System Test 1854 1527 4 other Qual Test Prep 1778 1564 3 CODE Maint_Prep 1870 1528 4 other Qual Test Prep 1778 1565 3 SRS SW Qual Test 1874 1529 4 CODE System Test 1780 1566 3 CODE SW Qual Test 1889 1530 3 CODE Unit Int/Test 1784 1567 3 SRS 
Qual Test Prep 1890 1531 3 SRS SW Qual Test 1784 1568 3 SRS Qual Test Prep 1890 1532 3 CODE Maint_Prep 1785 1569 3 SRS SW Qual Test 1890 1533 3 CODE SW Qual Test 1786 1570 3 SRS SW Req 1890 1534 3 CODE Qual Test Prep 1792 1571 4 SRS SW Qual Test 1901 1535 3 CODE Unk 1792 1572 3 SRS SW Qual Test 1902 1536 3 CODE Qual Test Prep 1792 1573 3 SRS SW Req 1910 1537 4 CODE Unk 1793 1574 3 SRS SW Req 1910 1538 3 CODE Unk 1793 1575 3 SRS SW Req 1910 1539 3 CODE System Test 1794 1576 3 SRS SW Req 1912 1540 5 CODE Unit Int/Test 1797 1577 4 SRS SW Req 1912 1541 3 CODE Qual Test Prep 1800 1578 3 SRS Qual Test Prep 1912 1542 4 CODE System Test 1800 1579 3 SRS SW Req 1920 1543 4 CODE Unk 1804 1580 3 other Unk 1924 1544 3 CODE Qual Test Prep 1806 1581 3 SRS SW Req 1938 1545 3 CODE SW Qual Test 1807 1582 3 CODE System Test 1938 1546 4 CODE Unit Int/Test 1810 1583 4 CODE Qual Test Prep 1951 1547 3 CODE Qual Test Prep 1811 1584 3 CODE System Test 1953 1548 3 CODE Qual Test Prep 1811 1585 3 CODE SW Qual Test 1955 1549 3 SRS SW Req 1813 1586 5 SRS SW Qual Test 1969 1550 4 SRS SW Req 1813 1587 5 CODE Sys Req 1969 1551 4 CODE Qual Test Prep 1814 1588 3 SRS SW Req 1973 1552 4 CODE Qual Test Prep 1814 1589 5 CODE SW Qual Test 1974 1553 3 CODE SW Qual Test 1814 1590 5 CODE SW Qual Test 1974 251 Table 31: Continued # Sev Product Effectivity Day # Sev Product Effectivity Day 1591 2 CODE SW Qual Test 1976 1628 3 SRS System Test 2140 1592 3 CODE SW Qual Test 1976 1629 4 CODE SW Qual Test 2143 1593 4 CODE System Test 1979 1630 3 SRS System Test 2148 1594 5 CODE SW Qual Test 1987 1631 3 SRS System Test 2148 1595 3 CODE SW Design 1988 1632 4 CODE System Test 2149 1596 3 CODE System Test 1988 1633 3 CODE Unit Int/Test 2170 1597 5 CODE System Test 1988 1634 3 SRS Qual Test Prep 2177 1598 4 SRS SW Qual Test 1990 1635 3 SRS Qual Test Prep 2177 1599 4 CODE Qual Test Prep 1993 1636 3 SRS Qual Test Prep 2177 1600 3 CODE Qual Test Prep 1994 1637 3 SRS Qual Test Prep 2178 1601 3 CODE Prep for Use 2001 1638 3 SRS Qual Test Prep 2179 1602 4 CODE SW Qual Test 2002 1639 3 CODE SW Qual Test 2179 1603 3 CODE Unk 2022 1640 3 SRS Qual Test Prep 2184 1604 3 SRS Prep for Use 2022 1641 3 CODE System Test 2184 1605 3 CODE System Test 2036 1642 3 CODE Qual Test Prep 2185 1606 3 SRS SW Req 2037 1643 3 CODE Qual Test Prep 2189 1607 3 SRS SW Req 2049 1644 3 SRS SW Req 2190 1608 5 CODE System Test 2059 1645 3 SRS Qual Test Prep 2192 1609 3 CODE System Test 2063 1646 5 CODE SW Qual Test 2192 1610 3 CODE Sys Design 2072 1647 3 other Qual Test Prep 2193 1611 3 CODE Code/Unit Test 2072 1648 3 CODE Prep for Use 2193 1612 3 SRS SW Req 2077 1649 3 other System Test 2205 1613 3 CODE Unit Int/Test 2092 1650 3 CODE SW Qual Test 2206 1614 4 CODE Prep for Use 2098 1651 4 CODE SW Qual Test 2206 1615 3 CODE Qual Test Prep 2099 1652 3 other Qual Test Prep 2206 1616 3 CODE System Test 2100 1653 3 CODE SW Qual Test 2206 1617 3 SRS SW Qual Test 2100 1654 5 CODE Qual Test Prep 2208 1618 3 CODE Qual Test Prep 2100 1655 3 SRS Qual Test Prep 2208 1619 3 CODE System Test 2106 1656 3 CODE Qual Test Prep 2209 1620 4 CODE System Test 2113 1657 4 SRS System Test 2217 1621 3 SRS System Test 2122 1658 3 SRS Qual Test Prep 2218 1622 3 CODE SW Qual Test 2122 1659 3 CODE SW Qual Test 2218 1623 3 SRS SW Req 2123 1660 3 CODE System Test 2219 1624 3 SRS System Test 2126 1661 3 CODE System Test 2219 1625 3 SRS System Test 2126 1662 3 CODE System Test 2219 1626 3 CODE System Test 2135 1663 3 CODE Qual Test Prep 2221 1627 3 CODE System Test 2135 1664 3 SRS Code/Unit Test 2234 252 Table 
31: Continued # Sev Product Effectivity Day 1665 3 CODE System Test 2234 1666 3 CODE SW Qual Test 2235 1667 4 CODE SW Qual Test 2238 1668 3 CODE SW Qual Test 2238 1669 3 SRS Qual Test Prep 2239 1670 4 SRS Qual Test Prep 2239 1671 3 SRS Code/Unit Test 2240 1672 3 CODE Code/Unit Test 2240 1673 3 CODE SW Qual Test 2241 1674 3 CODE SW Qual Test 2242 1675 3 SRS Qual Test Prep 2245 1676 3 SRS Qual Test Prep 2252 1677 3 CODE System Test 2256 1678 3 SRS Sys Design 2256 1679 2 CODE Qual Test Prep 2260 1680 4 SRS System Test 2266 1681 3 CODE Qual Test Prep 2266 1682 5 CODE Qual Test Prep 2268 Table 32: Project- C All Defect Data # Staff Day Sev Product Verify Day # Staff Day Sev Product Verify Day 1 1307 3 Docs Rej/Def 16 1354 3 CODE 1499 2 1307 3 Docs 1592 17 1355 3 Docs 1389 3 1307 4 CODE Rej/Def 18 1356 3 CODE Rej/Def 4 1307 5 Docs Rej/Def 19 1359 3 CODE 1512 5 1307 5 Docs Rej/Def 20 1359 3 CODE Rej/Def 6 1307 4 Docs 1282 21 1360 5 Unk 1366 7 1338 4 CODE Rej/Def 22 1361 4 Docs 1446 9 1339 5 CODE Rej/Def 23 1361 3 Docs Rej/Def 10 1345 3 Docs Rej/Def 24 1361 3 Docs 1422 11 1351 5 CODE 1362 25 1362 3 Unk 1373 12 1352 3 CODE 1369 26 1365 3 Unk 1376 13 1352 3 Docs 1593 27 1365 3 Unk 1376 14 1352 3 CODE Rej/Def 28 1365 4 Unk 1430 15 1353 3 Docs 1359 29 1366 3 Unk 1369 253 Table 32: Continued # Staff Day Sev Product Verify Day # Staff Day Sev Product Verify Day 30 1368 5 Docs 1451 70 1393 5 N/A Rej/Def 31 1369 4 Unk 1432 71 1393 5 Test Equip Rej/Def 32 1369 3 Docs 1422 72 1396 4 Docs 1418 33 1369 4 CODE Rej/Def 73 1396 4 Docs 1418 34 1369 4 CODE Rej/Def 74 1397 4 Docs 1526 35 1373 4 CODE 1430 75 1397 5 N/A Rej/Def 36 1373 3 Docs 1376 76 1400 4 Docs 1415 37 1373 3 Docs Rej/Def 77 1402 2 Unk 1512 38 1373 3 N/A Rej/Def 78 1403 4 Unk 1432 39 1374 5 N/A Rej/Def 79 1403 4 CODE 1512 40 1374 4 Docs Rej/Def 80 1404 5 CODE Rej/Def 41 1374 3 Docs 1568 81 1409 3 N/A Rej/Def 42 1374 4 Docs 1495 82 1409 5 Docs 1417 43 1377 3 N/A Rej/Def 83 1409 4 Docs 1417 44 1377 3 N/A Rej/Def 84 1410 3 Docs 1499 45 1379 3 Unk 1411 85 1414 3 N/A Rej/Def 46 1381 4 Docs 1519 86 1416 3 Docs Rej/Def 47 1381 4 Docs Rej/Def 87 1417 5 Unk 1422 48 1381 4 Docs 1499 88 1417 3 Unk 1431 49 1382 4 Docs 1584 89 1418 5 N/A Rej/Def 50 1382 4 Docs Rej/Def 90 1418 4 Docs 1527 51 1382 4 Docs 1528 91 1421 3 Hardware Rej/Def 52 1382 4 N/A Rej/Def 92 1421 3 CODE Rej/Def 53 1382 4 Docs 1530 93 1422 5 Unk 1430 54 1387 3 Docs 1415 94 1422 4 Unk 1466 55 1387 3 Docs 1505 95 1422 4 Unk 1466 56 1387 5 Docs Rej/Def 96 1423 3 Docs 1432 57 1387 3 Docs 1418 97 1428 5 CODE Rej/Def 58 1387 3 Test Equip Rej/Def 98 1429 3 CODE Rej/Def 59 1387 5 N/A Rej/Def 99 1429 5 N/A Rej/Def 60 1387 3 CODE 1418 100 1431 5 N/A Rej/Def 61 1388 3 Docs 1418 101 1432 5 Unk 1436 62 1389 4 N/A 1507 102 1435 5 Unk 1589 63 1389 3 CODE 1418 103 1437 4 Unk 1446 64 1389 3 CODE 1422 104 1442 3 CODE Rej/Def 65 1393 5 Test Equip Rej/Def 105 1443 5 Docs 1607 66 1393 3 CODE Rej/Def 106 1443 4 Docs 1614 67 1393 5 CODE 1500 107 1443 2 Unk 1446 68 1393 5 N/A Rej/Def 108 1444 4 Other 1561 69 1393 5 N/A Rej/Def 109 1445 3 Unk 1607 254 Table 32: Continued # Staff Day Sev Product Verify Day # Staff Day Sev Product Verify Day 110 1446 3 CODE 1694 150 1491 4 Unk 1621 111 1446 2 COTS SW Rej/Def 151 1491 4 Unk 1495 112 1449 4 Unk 1466 152 1491 4 Unk 1634 113 1451 4 Tools 1495 153 1492 4 CODE 1515 114 1451 3 Unk 1605 154 1492 4 Unk 1495 115 1456 4 Docs 1527 155 1492 4 Unk 1495 116 1456 5 COTS SW Rej/Def 156 1492 5 Docs Rej/Def 117 1458 3 N/A Rej/Def 157 1492 3 Tools 1646 118 1464 5 Docs 1509 158 1493 4 CODE 
1527 119 1467 5 Docs 1543 159 1493 3 Unk 1554 120 1467 4 N/A Rej/Def 160 1494 4 Tools 1526 121 1470 4 Docs Rej/Def 161 1494 4 Docs Rej/Def 122 1471 4 CODE 1625 162 1494 4 Tools 1526 123 1471 3 Tools 1495 163 1494 4 Tools 1526 124 1473 3 Docs 1549 164 1494 4 Tools 1526 125 1473 3 Docs 1549 165 1494 3 Tools 1526 126 1473 4 Unk 1652 166 1494 3 Tools 1526 127 1476 5 Docs 1535 167 1494 3 Tools 1526 128 1478 5 CODE 1542 168 1494 3 Tools 1535 129 1478 4 Unk 1495 169 1494 3 Tools 1535 130 1478 4 Unk 1634 170 1495 4 Unk 1638 131 1478 4 Docs 1549 171 1495 3 Unk 1652 132 1480 3 CODE 1572 172 1495 3 Unk 1653 133 1480 3 CODE 1554 173 1495 2 CODE 1575 134 1481 3 CODE 1521 174 1495 3 N/A Rej/Def 135 1481 3 Docs 1570 175 1498 3 CODE 1509 136 1484 4 Docs Rej/Def 176 1498 3 CODE 1521 137 1485 3 Unk 1612 177 1498 4 Docs 1516 138 1485 3 Unk 1635 178 1500 5 CODE 1625 139 1486 2 CODE 1521 179 1500 4 Docs 1515 140 1486 1 Unk 1495 180 1500 3 CODE 1509 141 1486 4 CODE 1509 181 1501 5 Docs 1635 142 1486 5 Unk 1536 182 1501 4 Other Rej/Def 143 1486 5 Unk 1535 183 1501 3 CODE 1521 144 1488 3 Unk 1495 184 1501 3 CODE 1513 145 1488 3 CODE 1652 185 1501 3 CODE 1519 146 1488 2 Unk 1519 186 1501 3 CODE 1519 147 1488 3 Unk 1495 187 1501 3 CODE 1540 148 1488 3 Unk 1499 188 1501 3 CODE 1519 149 1488 3 Unk 1495 189 1505 3 CODE 1542 255 Table 32: Continued # Staff Day Sev Product Verify Day # Staff Day Sev Product Verify Day 190 1507 1 Unk 1514 230 1530 2 CODE 1549 191 1508 3 CODE Rej/Def 231 1531 3 CODE 1565 192 1508 3 CODE 1520 232 1533 4 CODE 1575 193 1509 5 CODE Rej/Def 233 1533 2 CODE 1543 194 1509 5 CODE 1540 234 1534 4 Docs 1558 195 1510 2 CODE 1516 235 1535 3 CODE 1543 196 1510 3 CODE 1520 236 1535 2 Docs 1584 197 1510 3 CODE 1522 237 1535 3 CODE 1562 198 1510 3 CODE 1514 238 1535 3 COTS SW 1596 199 1510 3 CODE 1515 239 1535 5 N/A Rej/Def 200 1511 3 CODE 1514 240 1536 4 CODE Rej/Def 201 1511 3 CODE 1515 241 1537 4 Docs 1578 202 1512 5 CODE 1547 242 1537 4 CODE 1669 203 1513 4 N/A 1659 243 1537 3 CODE 1549 204 1513 3 CODE 1653 244 1538 3 CODE 1660 205 1513 4 CODE 1548 245 1538 3 CODE 1660 206 1514 5 CODE 1652 246 1538 4 CODE Rej/Def 207 1514 3 CODE 1584 247 1540 5 N/A Rej/Def 208 1515 5 Docs Rej/Def 248 1541 2 CODE 1558 209 1515 4 N/A 1549 249 1541 4 CODE 1663 210 1516 3 COTS SW Rej/Def 250 1543 3 Docs 1547 211 1517 2 CODE 1522 251 1544 3 Docs Rej/Def 212 1519 3 CODE 1521 252 1545 1 CODE Rej/Def 213 1519 4 CODE 1529 253 1546 3 CODE 1592 214 1520 3 CODE 1527 254 1546 3 CODE 1555 215 1520 3 CODE 1527 255 1547 5 Docs 1556 216 1520 3 Elec Files 1551 256 1547 2 CODE 1556 217 1521 3 CODE 1543 257 1547 3 CODE 1562 218 1521 4 Docs Rej/Def 258 1547 2 CODE 1557 219 1521 4 CODE Rej/Def 259 1548 4 CODE 1591 220 1522 3 CODE Rej/Def 260 1549 3 CODE 1694 221 1522 3 CODE Rej/Def 261 1549 3 CODE 1694 222 1522 3 Tools 1530 262 1549 3 CODE 1694 223 1522 4 CODE Rej/Def 263 1550 4 CODE 1592 224 1525 3 CODE Rej/Def 264 1550 4 CODE 1605 225 1527 2 CODE 1540 265 1550 3 CODE 1578 226 1527 5 N/A Rej/Def 266 1550 2 CODE 1555 227 1529 3 Docs Rej/Def 267 1550 3 CODE 1592 228 1529 3 Docs Rej/Def 268 1550 1 CODE 1571 229 1530 2 CODE 1554 269 1550 2 CODE 1556 256 Table 32: Continued # Staff Day Sev Product Verify Day # Staff Day Sev Product Verify Day 270 1551 3 CODE Rej/Def 310 1564 2 CODE 1603 271 1551 3 CODE 1557 311 1564 3 CODE Rej/Def 272 1551 3 COTS SW 1555 312 1564 5 Docs 1619 273 1551 3 CODE 1582 313 1565 3 CODE 1579 274 1551 3 CODE Rej/Def 314 1565 3 CODE 1577 275 1551 3 CODE Rej/Def 315 1565 4 CODE 1600 276 1551 3 CODE 1572 316 1565 4 CODE 
1586 277 1554 3 Docs 1660 317 1565 4 CODE 1577 278 1554 5 CODE 1684 318 1568 2 Other Rej/Def 279 1554 3 CODE Rej/Def 319 1568 3 CODE 1624 280 1554 2 CODE 1617 320 1569 2 COTS SW Rej/Def 281 1554 3 CODE 1556 321 1569 3 CODE 1578 282 1554 3 CODE 1557 322 1569 3 CODE 1611 283 1555 5 CODE 1694 323 1569 3 CODE 1669 284 1555 5 CODE 1726 324 1569 3 CODE 1578 285 1556 3 CODE 1600 325 1569 3 CODE 1578 286 1556 3 CODE 1583 326 1569 3 CODE 1704 287 1556 3 CODE 1669 327 1569 3 CODE Rej/Def 288 1556 4 CODE Rej/Def 328 1569 5 CODE 1590 289 1557 3 CODE 1565 329 1570 3 CODE 1579 290 1557 3 COTS SW Rej/Def 330 1570 3 Docs Rej/Def 291 1558 3 CODE 1590 331 1570 3 CODE 1715 292 1558 4 CODE 1613 332 1570 3 CODE 1583 293 1558 4 CODE Rej/Def 333 1571 3 CODE 1583 294 1558 5 Docs 1565 334 1571 4 CODE 1592 295 1558 3 CODE 1590 335 1571 5 CODE Rej/Def 296 1561 3 CODE Rej/Def 336 1571 5 CODE Rej/Def 297 1561 2 CODE 1569 337 1572 5 Docs 1703 298 1562 3 CODE 1590 338 1572 5 Docs 1590 299 1562 3 CODE 1570 339 1572 3 CODE 1592 300 1562 3 CODE Rej/Def 340 1572 3 CODE 1592 301 1564 3 CODE 1572 341 1575 1 CODE Rej/Def 302 1564 4 CODE 1669 342 1576 3 CODE 1579 303 1564 4 CODE 1638 343 1576 3 CODE 1603 304 1564 2 CODE 1590 344 1577 3 CODE 1592 305 1564 3 CODE 1591 345 1577 3 CODE 1590 306 1564 2 CODE 1617 346 1577 5 N/A Rej/Def 307 1564 3 CODE 1652 347 1577 3 COTS SW 1578 308 1564 4 CODE 1572 348 1578 3 COTS SW Rej/Def 309 1564 5 CODE 1600 349 1578 3 COTS SW Rej/Def 257 Table 32: Continued # Staff Day Sev Product Verify Day # Staff Day Sev Product Verify Day 350 1578 3 CODE 1591 390 1597 4 CODE 1605 351 1578 5 Docs Rej/Def 391 1597 2 CODE 1690 352 1578 5 N/A Rej/Def 392 1598 4 CODE 1600 353 1578 4 N/A Rej/Def 393 1598 4 CODE 1653 354 1579 4 CODE 1584 394 1598 4 CODE 1635 355 1580 5 N/A Rej/Def 395 1598 3 CODE 1600 356 1582 3 CODE 1590 396 1598 4 Docs 1633 357 1583 4 CODE 1603 397 1598 3 CODE Rej/Def 358 1583 3 CODE 1591 398 1598 5 Docs 1604 359 1583 3 Other 1592 399 1598 5 Docs 1618 360 1584 4 CODE 1589 400 1598 4 Other 1717 361 1584 3 CODE Rej/Def 401 1598 3 CODE 1605 362 1584 3 CODE 1621 402 1598 2 CODE 1600 363 1584 4 CODE 1738 403 1598 4 CODE 1768 364 1584 5 CODE 1604 404 1599 4 CODE 1611 365 1585 3 CODE 1592 405 1599 4 CODE 1632 366 1585 3 CODE 1590 406 1599 4 CODE 1631 367 1585 3 CODE 1596 407 1599 5 CODE 1618 368 1585 3 Docs 1593 408 1599 4 CODE 1613 369 1586 3 CODE 1590 409 1599 5 CODE Rej/Def 370 1586 5 Test Equip Rej/Def 410 1600 5 CODE 1684 371 1586 5 Docs Rej/Def 411 1600 3 CODE 1604 372 1589 3 CODE 1591 412 1600 4 CODE 1604 373 1589 2 CODE 1593 413 1600 3 CODE 1612 374 1589 4 CODE 1669 414 1600 4 CODE 1606 375 1590 3 CODE 1669 415 1600 3 CODE Rej/Def 376 1590 3 CODE 1684 416 1600 4 CODE 1639 377 1590 5 Docs Rej/Def 417 1600 5 Docs 1731 378 1591 2 CODE 1593 418 1603 5 Hardware 1617 379 1591 3 Elec Files 1592 419 1603 2 CODE 1635 380 1593 1 CODE 1598 420 1603 4 CODE 1785 381 1593 3 CODE 1600 421 1603 5 Docs 1605 382 1593 3 CODE 1612 422 1603 5 CODE 1613 383 1593 3 CODE 1604 423 1603 3 CODE 1640 384 1593 3 CODE 1600 424 1603 5 CODE 1621 385 1596 3 CODE 1598 425 1604 4 CODE Rej/Def 386 1596 2 CODE Rej/Def 426 1604 3 CODE 1613 387 1597 2 CODE 1618 427 1604 3 CODE Rej/Def 388 1597 3 Elec Files 1600 428 1604 3 CODE Rej/Def 389 1597 4 COTS SW Rej/Def 429 1604 3 CODE 1633 258 Table 32: Continued # Staff Day Sev Product Verify Day # Staff Day Sev Product Verify Day 430 1604 3 CODE 1632 470 1621 3 CODE 1635 431 1605 3 CODE Rej/Def 471 1621 3 CODE 1631 432 1606 3 CODE 1614 472 1621 3 Dev Tools 1640 433 1606 3 CODE Rej/Def 
473 1621 5 N/A Rej/Def 434 1607 5 N/A Rej/Def 474 1621 2 N/A 1624 435 1607 2 CODE 1613 475 1621 5 Docs Rej/Def 436 1607 5 Docs 1673 476 1622 3 CODE 1639 437 1607 3 CODE 1690 477 1624 3 CODE 1628 438 1607 5 CODE 1675 478 1625 5 N/A Rej/Def 439 1607 4 CODE 1624 479 1625 1 CODE Rej/Def 440 1607 4 CODE 1625 480 1625 2 CODE 1628 441 1607 2 CODE 1613 481 1625 4 CODE Rej/Def 442 1607 5 CODE 1648 482 1625 3 CODE Rej/Def 443 1607 3 CODE 1613 483 1625 1 CODE Rej/Def 444 1607 5 CODE 1646 484 1626 5 Docs 1753 445 1611 3 CODE 1641 485 1626 1 CODE 1628 446 1611 4 CODE 1626 486 1627 3 CODE 1669 447 1611 2 CODE 1618 487 1627 4 CODE 1642 448 1611 3 CODE Rej/Def 488 1627 2 CODE 1633 449 1612 4 CODE 1638 489 1627 4 CODE 1675 450 1612 4 CODE 1635 490 1627 5 CODE 1649 451 1612 4 CODE 1614 491 1627 2 CODE 1663 452 1612 4 CODE 1745 492 1628 4 CODE Rej/Def 453 1613 4 N/A 1631 493 1628 4 CODE 1663 454 1613 4 Docs 1621 494 1628 3 CODE Rej/Def 455 1613 3 CODE 1624 495 1628 3 CODE 1669 456 1613 2 Elec Files 1618 496 1628 3 CODE 1640 457 1613 2 N/A 1613 497 1628 5 CODE 1638 458 1613 3 CODE 1669 498 1628 3 CODE 1684 459 1617 3 CODE 1663 499 1631 2 CODE 1632 460 1617 5 CODE 1641 500 1631 3 CODE 1675 461 1618 2 CODE 1624 501 1631 2 CODE 1632 462 1618 4 CODE Rej/Def 502 1631 2 CODE 1634 463 1618 3 Docs 1619 503 1631 2 CODE 1663 464 1618 3 CODE 1626 504 1632 5 Docs 1654 465 1618 5 Docs 1648 505 1632 2 CODE 1635 466 1618 3 CODE 1621 506 1633 4 CODE Rej/Def 467 1619 5 CODE 1646 507 1633 5 Docs 1638 468 1620 3 CODE 1632 508 1633 3 CODE 1684 469 1621 3 CODE 1675 509 1634 2 CODE Rej/Def 259 Table 32: Continued # Staff Day Sev Product Verify Day # Staff Day Sev Product Verify Day 510 1634 3 Test Scripts 1698 550 1641 1 CODE Rej/Def 511 1634 4 CODE 1666 551 1641 3 CODE 1647 512 1634 4 CODE 1684 552 1642 3 CODE 1799 513 1634 3 CODE 1675 553 1642 3 CODE 1695 514 1634 3 CODE Rej/Def 554 1642 5 Docs 1668 515 1634 2 N/A Rej/Def 555 1646 2 CODE 1704 516 1634 2 CODE 1638 556 1646 3 CODE 1684 517 1635 2 CODE 1638 557 1646 3 CODE 1663 518 1635 3 CODE Rej/Def 558 1646 3 CODE Rej/Def 519 1635 3 CODE 1663 559 1646 4 CODE 1690 520 1635 3 CODE 1675 560 1646 5 Docs 1715 521 1635 3 CODE 1663 561 1646 4 Docs 1733 522 1635 4 N/A Rej/Def 562 1647 3 CODE 1715 523 1635 3 CODE 1639 563 1648 5 Docs 1716 524 1635 2 N/A Rej/Def 564 1648 2 CODE 1654 525 1635 2 N/A Rej/Def 565 1648 3 Test Equip Rej/Def 526 1635 4 CODE 1653 566 1648 3 CODE 1675 527 1637 3 N/A 1785 567 1648 3 CODE 1669 528 1637 2 CODE 1646 568 1649 3 CODE 1653 529 1638 4 CODE 1639 569 1649 3 CODE 1669 530 1638 3 CODE 1778 570 1649 3 CODE Rej/Def 531 1639 3 CODE 1646 571 1649 1 CODE 1663 532 1639 3 CODE 1669 572 1649 4 Test Equip Rej/Def 533 1639 3 Test SW Rej/Def 573 1649 2 CODE 1653 534 1639 3 CODE 1641 574 1652 4 CODE Rej/Def 535 1639 3 CODE 1641 575 1652 2 CODE 1684 536 1639 3 CODE 1641 576 1652 3 Unk 1654 537 1639 2 CODE 1640 577 1653 5 Docs Rej/Def 538 1639 2 CODE 1640 578 1653 2 CODE 1656 539 1640 2 CODE 1640 579 1654 2 CODE 1656 540 1640 4 CODE 1669 580 1655 1 CODE 1663 541 1640 5 CODE 1648 581 1655 4 Docs 1676 542 1640 5 CODE 1642 582 1655 3 N/A Rej/Def 543 1640 3 CODE 1682 583 1656 4 CODE 1669 544 1640 3 CODE 1677 584 1656 3 CODE 1663 545 1641 4 CODE 1646 585 1656 2 CODE 1660 546 1641 4 CODE 1655 586 1656 5 Docs 1771 547 1641 4 CODE 1652 587 1656 5 Dev Tools Rej/Def 548 1641 4 CODE 1647 588 1656 4 COTS SW Rej/Def 549 1641 2 COTS SW Rej/Def 589 1656 4 CODE 1663 260 Table 32: Continued # Staff Day Sev Product Verify Day # Staff Day Sev Product Verify Day 590 1659 3 CODE 1669 630 1668 
5 CODE 1730 591 1659 2 CODE 1669 631 1668 3 CODE 1684 592 1659 3 CODE 1675 632 1668 5 CODE 1785 593 1660 2 CODE 1663 633 1669 5 CODE Rej/Def 594 1660 2 CODE 1669 634 1669 4 CODE 1697 595 1660 5 Docs 1682 635 1669 3 CODE 1680 596 1660 5 Docs 1690 636 1669 4 CODE 1675 597 1660 5 COTS SW Rej/Def 637 1669 5 CODE 1690 598 1660 5 N/A Rej/Def 638 1669 3 CODE 1675 599 1660 3 CODE 1669 639 1669 5 N/A Rej/Def 600 1660 3 CODE 1675 640 1669 5 Test Scripts Rej/Def 601 1660 5 Docs 1736 641 1669 2 CODE 1675 602 1660 4 CODE 1675 642 1670 3 CODE 1809 603 1660 5 Docs Rej/Def 643 1670 4 CODE 1690 604 1661 4 Other Rej/Def 644 1670 3 CODE 1684 605 1661 4 CODE 1669 645 1670 3 CODE Rej/Def 606 1661 3 CODE Rej/Def 646 1670 5 CODE 1684 607 1661 4 CODE 1669 647 1673 5 CODE Rej/Def 608 1661 3 CODE 1669 648 1673 4 CODE Rej/Def 609 1661 2 CODE 1663 649 1673 2 CODE 1675 610 1662 3 CODE 1675 650 1673 2 CODE 1675 611 1662 3 CODE Rej/Def 651 1673 4 N/A Rej/Def 612 1662 3 CODE 1704 652 1673 4 CODE Rej/Def 613 1662 2 CODE 1684 653 1673 4 N/A 1684 614 1662 3 CODE 1669 654 1673 4 CODE 1684 615 1663 3 Docs Rej/Def 655 1673 3 CODE 1690 616 1663 5 Docs 1764 656 1674 3 Test SW Rej/Def 617 1663 3 CODE 1669 657 1674 3 CODE Rej/Def 618 1663 3 CODE 1669 658 1674 2 CODE Rej/Def 619 1664 4 CODE 1669 659 1674 3 CODE 1690 620 1666 3 CODE 1684 660 1674 2 CODE 1690 621 1666 3 CODE 1684 661 1674 3 CODE 1690 622 1667 3 Test SW Rej/Def 662 1674 3 CODE 1690 623 1667 5 CODE 1800 663 1674 3 CODE Rej/Def 624 1667 4 CODE Rej/Def 664 1675 3 CODE 1684 625 1668 5 Docs 1673 665 1675 2 CODE 1684 626 1668 4 CODE 1684 666 1675 3 CODE Rej/Def 627 1668 4 CODE 1684 667 1675 3 CODE Rej/Def 628 1668 3 CODE 1684 668 1676 3 CODE 1680 629 1668 3 Docs Rej/Def 669 1676 2 CODE 1684 261 Table 32: Continued # Staff Day Sev Product Verify Day # Staff Day Sev Product Verify Day 670 1676 5 CODE 1684 700 1683 3 CODE 1723 671 1676 3 CODE 1690 701 1684 4 CODE 1697 672 1676 2 CODE 1704 702 1684 5 Docs 1760 673 1676 3 CODE 1715 703 1684 2 N/A 1704 674 1676 3 CODE Rej/Def 704 1684 3 CODE 1715 675 1677 2 CODE 1687 705 1685 3 COTS SW Rej/Def 676 1677 4 CODE 1684 706 1687 3 CODE 1694 677 1677 4 CODE Rej/Def 707 1687 4 CODE 1723 678 1679 2 CODE 1684 708 1687 3 CODE 1697 679 1679 3 CODE Rej/Def 709 1688 3 CODE 1697 680 1680 3 CODE 1704 710 1688 4 CODE 1715 681 1680 2 CODE 1715 711 1688 3 CODE Rej/Def 682 1680 3 Elec Files 1704 712 1688 2 CODE 1690 683 1680 4 CODE 1704 713 1688 5 Docs Rej/Def 684 1680 3 CODE 1760 714 1689 3 N/A 1711 685 1680 5 CODE 1704 715 1689 3 CODE 1697 686 1680 4 CODE 1730 716 1689 4 CODE 1704 687 1681 4 CODE 1690 717 1689 4 CODE 1737 688 1681 2 CODE 1690 718 1689 5 CODE 1697 689 1681 3 CODE Rej/Def 719 1690 3 CODE 1697 690 1681 3 CODE Rej/Def 720 1690 4 N/A Rej/Def 691 1681 5 CODE Rej/Def 721 1691 3 CODE Rej/Def 692 1681 5 Docs Rej/Def 722 1691 4 CODE 1717 693 1682 4 CODE 1723 723 1691 3 CODE 1704 694 1682 3 CODE 1717 724 1691 3 CODE 1740 695 1683 4 CODE Rej/Def 725 1694 3 CODE 1772 696 1683 3 CODE Rej/Def 726 1694 2 CODE 1694 697 1683 5 Docs 1690 727 1694 4 Docs 1715 698 1683 5 Docs 1701 728 1694 5 CODE 1813 699 1683 3 CODE 1718 729 1694 3 COTS SW Rej/Def 670 1676 5 CODE 1684 730 1695 3 CODE 1806 671 1676 3 CODE 1690 731 1695 2 CODE 1704 672 1676 2 CODE 1704 732 1695 3 CODE 1797 673 1676 3 CODE 1715 733 1695 3 CODE 1717 674 1676 3 CODE Rej/Def 734 1695 3 CODE 1746 675 1677 2 CODE 1687 735 1695 3 CODE 1704 676 1677 4 CODE 1684 736 1695 3 CODE 1704 677 1677 4 CODE Rej/Def 737 1695 3 CODE 1715 678 1679 2 CODE 1684 738 1696 3 CODE 1704 679 1679 3 CODE Rej/Def 
739 1696 4 CODE 1858 262 Table 32: Continued # Staff Day Sev Product Verify Day # Staff Day Sev Product Verify Day 740 1696 4 CODE Rej/Def 780 1704 3 CODE 1723 741 1696 3 N/A 1799 781 1704 3 CODE 1715 742 1697 4 CODE 1717 782 1704 5 Other 1806 743 1697 3 CODE Rej/Def 783 1705 2 N/A 1716 744 1697 4 Docs 1710 784 1705 3 CODE 1715 745 1697 2 CODE 1715 785 1705 3 CODE 1717 746 1697 2 CODE Rej/Def 786 1708 5 CODE 1719 747 1698 3 CODE 1715 787 1709 3 Test Equip 1732 748 1698 4 CODE 1715 788 1709 3 CODE 1717 749 1701 3 CODE Rej/Def 789 1709 3 CODE 1715 750 1701 3 CODE 1723 790 1709 3 CODE 1717 751 1701 3 CODE 1723 791 1710 4 CODE 1717 752 1701 3 CODE 1704 792 1710 3 CODE 1717 753 1701 3 CODE 1717 793 1711 2 COTS SW Rej/Def 754 1701 3 CODE 1723 794 1711 3 CODE 1737 755 1701 5 CODE 1772 795 1711 3 N/A 1737 756 1701 3 CODE 1737 796 1712 3 CODE 1737 757 1701 N/A Rej/Def 797 1712 3 CODE 1723 758 1702 3 CODE Rej/Def 798 1712 5 CODE 1760 759 1702 3 CODE Rej/Def 799 1712 4 CODE 1785 760 1702 3 CODE 1717 800 1712 4 CODE 1737 761 1702 3 Test Equip Rej/Def 801 1712 3 CODE 1723 762 1702 3 COTS SW Rej/Def 802 1712 4 CODE 1723 763 1702 3 CODE 1715 803 1712 3 CODE 1737 764 1702 5 Docs 1772 804 1712 4 CODE Rej/Def 765 1702 3 CODE 1760 805 1712 3 CODE 1729 766 1702 2 CODE 1715 806 1712 4 CODE Rej/Def 767 1702 5 Docs Rej/Def 807 1715 3 CODE 1737 768 1702 3 CODE Rej/Def 808 1715 2 Test Equip 1732 769 1702 3 N/A Rej/Def 809 1715 4 Test Equip 1732 770 1702 3 N/A 1729 810 1715 2 Test Equip 1732 771 1702 2 CODE 1715 811 1716 4 CODE Rej/Def 772 1703 4 CODE 1704 812 1716 3 CODE 1737 773 1703 2 CODE 1715 813 1716 3 CODE 1723 774 1703 4 CODE 1715 814 1717 4 CODE 1806 775 1703 4 N/A Rej/Def 815 1717 5 CODE 1767 776 1703 5 Dev Tools Rej/Def 816 1717 4 CODE 1722 777 1703 3 N/A Rej/Def 817 1717 3 CODE 1723 778 1704 3 CODE 1715 818 1718 3 CODE 1737 779 1704 4 N/A Rej/Def 819 1718 5 N/A Rej/Def 263 Table 32: Continued # Staff Day Sev Product Verify Day # Staff Day Sev Product Verify Day 820 1718 2 CODE 1723 860 1732 4 CODE Rej/Def 821 1718 2 Test Equip Rej/Def 861 1733 3 N/A Rej/Def 822 1719 3 Test SW Rej/Def 862 1733 5 Docs 1760 823 1719 5 Docs 1726 863 1733 2 CODE 1747 824 1719 5 Docs 1737 864 1733 2 Test Equip 1738 825 1719 5 Docs 1746 865 1735 5 Dev Tools 1878 826 1719 5 CODE 1723 866 1736 3 CODE 1760 827 1722 5 Docs 1737 867 1737 4 CODE 1801 828 1722 2 CODE 1729 868 1737 3 CODE 1740 829 1722 5 N/A Rej/Def 869 1737 3 CODE 1767 830 1722 3 CODE 1730 870 1737 5 COTS SW Rej/Def 831 1722 4 CODE 1737 871 1737 4 CODE 1797 832 1722 4 CODE 1730 872 1737 4 Elec Files 1802 833 1722 4 N/A 1797 873 1737 4 CODE 1767 834 1722 4 CODE 1778 874 1737 3 CODE Rej/Def 835 1723 3 CODE Rej/Def 875 1737 5 Docs 1745 836 1723 3 CODE 1729 876 1738 5 Test Equip Rej/Def 837 1724 3 CODE Rej/Def 877 1738 2 CODE Rej/Def 838 1724 5 CODE 1733 878 1739 3 CODE 1767 839 1724 2 CODE 1729 879 1739 5 N/A 1760 840 1724 2 Test Equip 1738 880 1739 2 CODE 1747 841 1724 2 Test Equip 1746 881 1739 3 N/A Rej/Def 842 1724 3 CODE Rej/Def 882 1739 4 Test Equip Rej/Def 843 1724 3 CODE 1767 883 1739 5 Docs 1759 844 1724 2 Test Equip 1732 884 1739 2 CODE 1747 845 1724 2 Test Equip 1733 885 1740 5 Docs 1760 846 1725 4 CODE 1737 886 1740 4 COTS SW Rej/Def 847 1725 5 CODE 1778 887 1741 3 N/A 1760 848 1725 2 CODE 1747 888 1743 3 CODE 1760 849 1727 4 CODE 1737 889 1743 5 Docs 1795 850 1730 1 CODE 1737 890 1743 3 CODE 1806 851 1730 3 CODE 1737 891 1743 3 CODE 1760 852 1730 5 CODE 1732 892 1743 3 CODE 1760 853 1731 2 CODE 1737 893 1744 4 COTS SW Rej/Def 854 1731 3 CODE 1737 894 1744 5 
COTS SW Rej/Def 855 1731 2 Test SW Rej/Def 895 1744 2 CODE 1750 856 1731 3 CODE 1737 896 1744 3 CODE 1767 857 1731 5 Docs 1773 897 1744 3 CODE 1760 858 1731 4 CODE 1753 898 1745 4 CODE 1778 859 1732 3 CODE 1803 899 1745 2 CODE 1753 264 Table 32: Continued # Staff Day Sev Product Verify Day # Staff Day Sev Product Verify Day 900 1746 3 CODE 1767 940 1760 4 CODE 1778 901 1746 4 N/A Rej/Def 941 1761 4 COTS SW Rej/Def 902 1746 4 N/A Rej/Def 942 1761 3 CODE Rej/Def 903 1747 3 N/A Rej/Def 943 1761 2 CODE 1778 904 1747 5 Docs 1828 944 1761 3 CODE 1873 905 1747 3 CODE 1858 945 1761 5 Test SW Rej/Def 906 1750 5 CODE 1778 946 1763 3 CODE 1797 907 1750 3 CODE Rej/Def 947 1764 3 CODE 1778 908 1750 3 CODE 1797 948 1764 3 CODE 1778 909 1751 3 CODE 1767 949 1764 3 COTS SW Rej/Def 910 1752 5 Docs 1759 950 1764 4 CODE 1778 911 1752 5 CODE 1760 951 1765 4 CODE 1778 912 1753 3 CODE 1760 952 1765 4 CODE 1785 913 1753 5 CODE 1778 953 1765 3 CODE 1778 914 1754 5 COTS SW 1801 954 1765 4 CODE 1778 915 1754 4 COTS SW Rej/Def 955 1765 2 CODE 1778 916 1754 3 CODE 1785 956 1766 2 Docs 1785 917 1754 4 Other Rej/Def 957 1766 3 Tools 1779 918 1754 4 Elec Files 1767 958 1766 4 CODE 1797 919 1754 4 CODE 1767 959 1766 4 CODE 1797 920 1754 4 CODE 1767 960 1766 3 CODE 1785 921 1754 5 CODE 1785 961 1767 5 CODE 1778 922 1757 3 CODE Rej/Def 962 1767 4 CODE 1778 923 1757 5 CODE Rej/Def 963 1767 2 CODE 1785 924 1757 5 CODE 1778 964 1767 2 CODE 1806 925 1757 5 COTS SW Rej/Def 965 1767 5 Docs 1794 926 1757 4 CODE 1797 966 1767 5 CODE 1785 927 1757 4 CODE 1767 967 1768 5 CODE Rej/Def 928 1757 5 N/A Rej/Def 968 1768 4 CODE 1778 929 1757 5 CODE 1767 969 1768 5 N/A 1800 930 1758 3 CODE Rej/Def 970 1768 5 Docs 1780 931 1758 5 Test HW Rej/Def 971 1768 4 Test SW Rej/Def 932 1758 5 Test HW Rej/Def 972 1768 5 N/A Rej/Def 933 1758 3 CODE 1772 973 1768 2 N/A Rej/Def 934 1758 3 COTS SW 1785 974 1769 2 CODE 1785 935 1758 3 N/A Rej/Def 975 1771 5 Hardware 1797 936 1760 2 CODE 1778 976 1771 3 CODE 1778 937 1760 5 Docs 1772 977 1771 3 CODE 1797 938 1760 2 CODE 1767 978 1772 5 CODE 1793 939 1760 3 CODE 1778 979 1772 5 Docs 1794 265 Table 32: Continued # Staff Day Sev Product Verify Day # Staff Day Sev Product Verify Day 980 1772 4 CODE Rej/Def 1020 1785 4 Docs Rej/Def 981 1772 3 CODE Rej/Def 1021 1785 3 N/A Rej/Def 982 1773 3 COTS SW Rej/Def 1022 1786 5 N/A Rej/Def 983 1773 3 CODE 1873 1023 1786 3 CODE 1809 984 1773 4 CODE 1785 1024 1786 5 Dev Tools 1797 985 1773 3 CODE Rej/Def 1025 1786 3 CODE 1797 986 1773 3 CODE 1797 1026 1787 3 CODE 1879 987 1773 3 CODE 1797 1027 1787 3 Test Equip 1807 988 1773 4 N/A Rej/Def 1028 1792 3 CODE 1797 989 1774 2 CODE 1785 1029 1792 5 Docs 1800 990 1774 3 CODE Rej/Def 1030 1792 3 COTS SW Rej/Def 991 1774 4 CODE 1785 1031 1792 5 Docs 1800 992 1774 5 Tools 1778 1032 1792 5 COTS SW Rej/Def 993 1774 4 CODE 1797 1033 1792 5 Docs 1813 994 1774 2 Test SW Rej/Def 1034 1793 3 CODE 1858 995 1774 3 CODE 1779 1035 1793 3 CODE 1806 996 1775 3 CODE Rej/Def 1036 1793 3 CODE 1797 997 1775 3 CODE 1797 1037 1793 3 CODE 1858 998 1776 3 CODE 1806 1038 1794 2 CODE 1806 999 1778 4 CODE 1873 1039 1794 5 Test SW Rej/Def 1000 1778 3 CODE 1797 1040 1794 5 Test SW Rej/Def 1001 1778 3 Test SW 1797 1041 1794 5 Test SW Rej/Def 1002 1778 4 Test SW 1797 1042 1794 5 Test Equip Rej/Def 1003 1779 5 Test SW Rej/Def 1043 1794 3 Hardware Rej/Def 1004 1779 4 CODE 1797 1044 1794 3 CODE 1797 1005 1779 3 CODE 1806 1045 1795 3 CODE 1801 1006 1780 5 SIQT Scripts Rej/Def 1046 1795 3 CODE 1806 1007 1780 5 CODE 1782 1047 1796 5 CODE 1809 1008 1780 2 CODE 1797 
1048 1799 3 CODE 1858 1009 1780 5 Docs 1843 1049 1799 2 CODE 1806 1010 1781 4 CODE 1858 1050 1799 2 CODE 1806 1011 1781 5 Docs Rej/Def 1051 1799 3 Other Rej/Def 1012 1781 3 CODE 1802 1052 1800 3 CODE Rej/Def 1013 1781 4 CODE 1797 1053 1800 4 Test Scripts 1842 1014 1781 3 COTS SW Rej/Def 1054 1800 3 CODE 1806 1015 1781 4 CODE 1810 1055 1800 5 Test SW Rej/Def 1016 1781 3 CODE 1801 1056 1800 5 Test SW Rej/Def 1017 1781 4 CODE Rej/Def 1057 1800 4 CODE 1858 1018 1782 4 Test Scripts Rej/Def 1058 1800 3 Test SW Rej/Def 1019 1782 2 COTS SW Rej/Def 1059 1800 5 Test SW Rej/Def 266 Table 32: Continued # Staff Day Sev Product Verify Day # Staff Day Sev Product Verify Day 1060 1800 5 CODE 1858 1100 1806 3 Test Equip Rej/Def 1061 1800 3 CODE 1806 1101 1807 3 CODE 1810 1062 1801 5 Docs 1843 1102 1807 4 CODE 1901 1063 1801 5 Test Equip Rej/Def 1103 1807 5 Test Equip Rej/Def 1064 1801 3 CODE 1806 1104 1807 4 Test SW Rej/Def 1065 1801 3 CODE 1806 1105 1808 3 N/A Rej/Def 1066 1801 3 CODE 1809 1106 1808 4 Test SW Rej/Def 1067 1801 5 Docs 1803 1107 1808 4 CODE 1858 1068 1801 4 CODE 1939 1108 1808 5 Test Equip Rej/Def 1069 1801 5 CODE Rej/Def 1109 1809 3 Test Scripts Rej/Def 1070 1801 5 CODE 1803 1110 1809 2 CODE Rej/Def 1071 1801 4 CODE 1982 1111 1809 3 Other 1913 1072 1801 2 CODE Rej/Def 1112 1809 3 CODE 1858 1073 1801 3 CODE Rej/Def 1113 1809 4 CODE 1858 1074 1801 5 Docs 1843 1114 1809 5 Test Equip Rej/Def 1075 1801 5 CODE 1813 1115 1809 2 CODE 1810 1076 1801 2 CODE 1806 1116 1809 2 CODE 1810 1077 1802 2 Test SW Rej/Def 1117 1810 4 COTS SW Rej/Def 1078 1802 3 CODE 1858 1118 1811 3 Hardware Rej/Def 1079 1802 3 Other Rej/Def 1119 1812 3 N/A Rej/Def 1080 1802 5 CODE 1858 1120 1813 3 CODE 1858 1081 1802 3 CODE 1806 1121 1813 5 Docs Rej/Def 1082 1802 4 CODE 1901 1122 1813 5 Test Equip Rej/Def 1083 1803 4 COTS SW Rej/Def 1123 1813 5 Other Rej/Def 1084 1803 4 CODE 1858 1124 1813 3 CODE 1858 1085 1803 4 CODE 1806 1125 1814 4 CODE 1858 1086 1803 5 COTS SW Rej/Def 1126 1814 5 Docs 1843 1087 1803 4 N/A Rej/Def 1127 1814 4 CODE 1901 1088 1803 4 Test SW Rej/Def 1128 1814 4 CODE 1901 1089 1803 3 CODE 1806 1129 1814 3 CODE 1858 1090 1803 3 CODE Rej/Def 1130 1814 3 CODE 1901 1091 1804 3 CODE Rej/Def 1131 1814 4 Other Rej/Def 1092 1805 3 CODE 1809 1132 1815 4 Docs Rej/Def 1093 1806 3 CODE 1913 1133 1815 5 CODE Rej/Def 1094 1806 5 Test HW Rej/Def 1134 1815 3 CODE 1939 1095 1806 4 CODE Rej/Def 1135 1816 4 CODE 1858 1096 1806 2 CODE 1809 1136 1816 5 COTS SW 1858 1097 1806 4 CODE 2356 1137 1816 5 COTS SW 1858 1098 1806 2 CODE 1809 1138 1816 5 CODE 1858 1099 1806 1 CODE Rej/Def 1139 1816 5 COTS SW Rej/Def 267 Table 32: Continued # Staff Day Sev Product Verify Day # Staff Day Sev Product Verify Day 1140 1816 5 Docs 1843 1180 1845 5 Dev Tools Rej/Def 1141 1816 5 Test Equip Rej/Def 1181 1846 4 CODE 1858 1142 1816 4 COTS SW Rej/Def 1182 1848 5 Docs Rej/Def 1143 1816 3 CODE 1913 1183 1848 5 Docs 1890 1144 1816 5 CODE 1858 1184 1848 5 Docs 1865 1145 1816 3 CODE 1858 1185 1848 3 CODE 1873 1146 1823 5 CODE 1858 1186 1848 5 Docs 1851 1147 1828 3 CODE 1835 1187 1848 3 CODE 1873 1148 1830 5 CODE 1858 1188 1848 3 CODE 1873 1149 1830 5 CODE 1858 1189 1848 3 CODE 1873 1150 1830 4 CODE 1873 1190 1848 3 Test SW Rej/Def 1151 1831 5 Test SW Rej/Def 1191 1848 5 Docs 1857 1152 1831 2 CODE 1858 1192 1848 4 CODE 1858 1153 1831 5 Test SW Rej/Def 1193 1848 3 CODE 1865 1154 1834 5 COTS SW Rej/Def 1194 1849 5 Docs 2068 1155 1834 3 CODE Rej/Def 1195 1849 3 CODE 1858 1156 1835 3 CODE 1858 1196 1850 4 Elec Files 1873 1157 1835 3 CODE 1873 1197 1851 5 Docs 
Rej/Def 1158 1835 5 CODE 1939 1198 1851 2 CODE 1873 1159 1836 3 CODE 1873 1199 1851 3 CODE 1873 1160 1836 4 CODE 1858 1200 1851 4 Elec Files 1873 1161 1836 5 Docs 1843 1201 1851 3 Other Rej/Def 1162 1836 5 CODE 1954 1202 1852 5 Test SW 1858 1163 1837 4 CODE Rej/Def 1203 1852 5 Test SW Rej/Def 1164 1837 3 Docs 1901 1204 1852 2 CODE 1873 1165 1838 5 Docs Rej/Def 1205 1855 3 CODE Rej/Def 1166 1841 4 CODE Rej/Def 1206 1855 5 CODE 1858 1167 1841 4 CODE Rej/Def 1207 1855 5 CODE 1858 1168 1841 3 COTS SW Rej/Def 1208 1855 5 CODE 1873 1169 1841 4 N/A Rej/Def 1209 1855 2 Test Equip Rej/Def 1170 1843 3 CODE 1856 1210 1855 3 CODE 1883 1171 1843 5 Docs 1857 1211 1856 3 Docs Rej/Def 1172 1843 3 CODE 1858 1212 1856 3 CODE 1873 1173 1843 5 Test Scripts Rej/Def 1213 1856 3 CODE Rej/Def 1174 1844 5 Docs Rej/Def 1214 1857 5 CODE 1957 1175 1844 5 Docs 1873 1215 1857 4 CODE 1898 1176 1844 4 CODE 1901 1216 1857 4 CODE 1901 1177 1845 3 CODE 1858 1217 1858 4 CODE 1901 1178 1845 3 CODE 1873 1218 1858 5 CODE Rej/Def 1179 1845 3 CODE 1873 1219 1858 2 CODE 1873 268 Table 32: Continued # Staff Day Sev Product Verify Day # Staff Day Sev Product Verify Day 1220 1859 5 Other Rej/Def 1260 1870 3 Test Equip Rej/Def 1221 1859 5 Docs Rej/Def 1261 1871 5 Docs 1927 1222 1859 3 CODE 1873 1262 1871 5 Test Equip Rej/Def 1223 1859 4 CODE 1873 1263 1872 4 CODE 1901 1224 1859 2 CODE 1873 1264 1873 1 CODE 1873 1225 1860 5 N/A Rej/Def 1265 1873 4 CODE 1954 1226 1860 5 N/A Rej/Def 1266 1873 4 Test Equip Rej/Def 1227 1860 5 Docs 1978 1267 1873 3 CODE 1957 1228 1860 5 N/A Rej/Def 1268 1873 4 CODE 1901 1229 1860 5 Docs Rej/Def 1269 1873 3 CODE 1901 1230 1860 5 Docs Rej/Def 1270 1873 3 CODE 1939 1231 1862 5 Docs 1984 1271 1873 3 CODE 1901 1232 1862 3 CODE 1873 1272 1876 3 CODE 1939 1233 1862 4 CODE Rej/Def 1273 1876 5 CODE Rej/Def 1234 1862 5 CODE Rej/Def 1274 1876 5 CODE 2110 1235 1862 3 CODE 1873 1275 1876 5 Dev Tools 2003 1236 1862 4 COTS SW 1901 1276 1877 5 CODE Rej/Def 1237 1863 5 Tools 1878 1277 1877 5 CODE Rej/Def 1238 1863 3 CODE 1873 1278 1877 4 CODE 1939 1239 1863 5 Docs Rej/Def 1279 1877 4 CODE Rej/Def 1240 1864 4 CODE 1873 1280 1877 4 CODE 2124 1241 1864 3 CODE 1901 1281 1877 4 CODE 1901 1242 1865 3 CODE 1873 1282 1877 4 CODE 1901 1243 1865 4 CODE 1949 1283 1877 4 CODE 1939 1244 1865 3 CODE 1892 1284 1877 4 CODE 1901 1245 1865 4 Docs 1873 1285 1877 5 Docs Rej/Def 1246 1866 5 Docs Rej/Def 1286 1877 5 N/A 1879 1247 1866 4 Elec Files Rej/Def 1287 1877 4 CODE 1890 1248 1866 4 N/A Rej/Def 1288 1877 5 CODE 1939 1249 1866 4 COTS SW Rej/Def 1289 1877 4 CODE 1901 1250 1866 5 N/A Rej/Def 1290 1877 4 CODE Rej/Def 1251 1868 3 CODE 1901 1291 1877 4 CODE 1954 1252 1869 4 Test Scripts 1913 1292 1877 4 Docs Rej/Def 1253 1869 2 CODE 1873 1293 1877 3 CODE 1939 1254 1869 5 Docs 1978 1294 1877 4 CODE 1901 1255 1869 3 N/A Rej/Def 1295 1879 3 CODE 1901 1256 1869 5 N/A Rej/Def 1296 1879 3 CODE 1902 1257 1869 4 CODE Rej/Def 1297 1879 3 CODE 1901 1258 1869 5 N/A 1880 1298 1880 3 Test Equip Rej/Def 1259 1870 4 COTS SW Rej/Def 1299 1880 3 CODE Rej/Def 269 Table 32: Continued # Staff Day Sev Product Verify Day # Staff Day Sev Product Verify Day 1300 1881 4 N/A Rej/Def 1340 1892 3 CODE 1913 1301 1882 5 N/A Rej/Def 1341 1893 3 CODE 1939 1302 1883 5 Docs 1932 1342 1893 4 CODE 1901 1303 1883 4 CODE Rej/Def 1343 1893 4 CODE 1898 1304 1883 4 CODE 1939 1344 1893 2 CODE 1913 1305 1883 3 Other Rej/Def 1345 1893 3 CODE 1939 1306 1884 2 Test Equip Rej/Def 1346 1893 3 CODE Rej/Def 1307 1884 4 CODE Rej/Def 1347 1894 5 Docs 1898 1308 1885 3 CODE 1939 1348 1894 4 CODE 
1913 1309 1885 3 CODE 1929 1349 1894 3 CODE 1982 1310 1885 4 CODE Rej/Def 1350 1894 3 CODE Rej/Def 1311 1885 4 CODE Rej/Def 1351 1894 4 CODE 1913 1312 1885 3 CODE 1939 1352 1895 4 CODE Rej/Def 1313 1885 3 CODE Rej/Def 1353 1897 4 CODE 1901 1314 1885 5 Test Scripts 1897 1354 1897 3 Test Scripts Rej/Def 1315 1886 5 Docs 1940 1355 1897 5 COTS SW Rej/Def 1316 1886 4 N/A Rej/Def 1356 1897 4 CODE 1913 1317 1886 3 Test Equip Rej/Def 1357 1897 4 CODE 1913 1318 1886 5 SIQT Scripts Rej/Def 1358 1897 5 CODE 1901 1319 1887 4 CODE Rej/Def 1359 1898 2 Test HW Rej/Def 1320 1887 3 CODE 1901 1360 1898 3 CODE 1913 1321 1887 3 CODE 1901 1361 1899 5 COTS SW Rej/Def 1322 1887 4 CODE 1901 1362 1899 4 COTS SW Rej/Def 1323 1887 4 CODE 1939 1363 1899 5 CODE Rej/Def 1324 1887 5 CODE Rej/Def 1364 1899 4 CODE Rej/Def 1325 1888 5 Docs 2003 1365 1899 4 CODE 1940 1326 1890 3 CODE 1918 1366 1899 5 Dev Tools Rej/Def 1327 1890 4 CODE 1939 1367 1900 5 Docs Rej/Def 1328 1891 5 COTS SW Rej/Def 1368 1900 5 Docs 2003 1329 1891 3 CODE 1954 1369 1900 2 CODE 1939 1330 1891 4 CODE Rej/Def 1370 1900 5 Docs 2040 1331 1891 3 CODE 1901 1371 1900 5 Test Equip Rej/Def 1332 1891 5 COTS SW Rej/Def 1372 1904 4 CODE 1913 1333 1892 4 CODE 1939 1373 1904 4 COTS SW 2011 1334 1892 2 CODE Rej/Def 1374 1906 3 CODE 1913 1335 1892 4 CODE 1901 1375 1906 3 CODE Rej/Def 1336 1892 3 CODE 1901 1376 1906 3 CODE 1913 1337 1892 3 Other Rej/Def 1377 1906 5 Test Equip Rej/Def 1338 1892 5 N/A Rej/Def 1378 1906 4 CODE 1913 1339 1892 4 Test Scripts 1982 1379 1907 4 CODE 1939 270 Table 32: Continued # Staff Day Sev Product Verify Day # Staff Day Sev Product Verify Day 1380 1907 5 Tools 2059 1420 1928 5 CODE Rej/Def 1381 1907 4 CODE 1939 1421 1928 5 Test Equip Rej/Def 1382 1907 3 CODE Rej/Def 1422 1929 2 CODE 1954 1383 1907 3 CODE 1939 1423 1929 3 Test HW Rej/Def 1384 1907 3 CODE 1954 1424 1929 5 Test Equip Rej/Def 1385 1908 5 CODE Rej/Def 1425 1929 3 CODE 1982 1386 1908 3 CODE 1939 1426 1932 3 CODE 1939 1387 1911 4 Test Scripts Rej/Def 1427 1932 4 CODE 1941 1388 1911 2 CODE 1939 1428 1932 5 Test Equip Rej/Def 1389 1911 3 CODE Rej/Def 1429 1932 3 CODE 2027 1390 1911 4 CODE 1939 1430 1932 3 CODE 1982 1391 1911 5 Test Scripts 1918 1431 1932 3 CODE Rej/Def 1392 1912 2 CODE 1939 1432 1932 3 CODE 1939 1393 1913 4 CODE 1939 1433 1933 4 CODE Rej/Def 1394 1914 3 CODE 1939 1434 1933 2 CODE 1939 1395 1915 3 CODE 1939 1435 1933 4 CODE 1939 1396 1915 2 Test Equip 1978 1436 1933 4 CODE 1939 1397 1915 5 CODE Rej/Def 1437 1933 3 CODE 1954 1398 1916 5 Docs 1933 1438 1933 3 CODE 1954 1399 1916 5 CODE 1939 1439 1934 3 CODE 1939 1400 1918 5 CODE 1939 1440 1934 4 CODE 1957 1401 1919 4 CODE 1939 1441 1935 4 CODE 1954 1402 1919 3 CODE 1939 1442 1935 3 CODE 1957 1403 1919 5 Dev Tools Rej/Def 1443 1935 3 CODE 1954 1404 1920 5 CODE 1960 1444 1936 4 CODE 1949 1405 1920 2 CODE 1939 1445 1936 4 Test HW Rej/Def 1406 1921 2 CODE 1925 1446 1936 4 CODE 2025 1407 1922 4 N/A Rej/Def 1447 1936 3 CODE 1982 1408 1922 5 CODE 2073 1448 1936 4 CODE Rej/Def 1409 1923 3 CODE 1954 1449 1937 4 CODE 1957 1410 1925 3 CODE 2025 1450 1939 4 CODE 1977 1411 1925 4 CODE 1939 1451 1939 3 CODE 1957 1412 1925 3 CODE 1939 1452 1939 3 CODE 1964 1413 1925 5 Test Equip Rej/Def 1453 1939 3 CODE 1954 1414 1925 4 Test Equip Rej/Def 1454 1940 4 CODE 1954 1415 1925 3 CODE 1982 1455 1940 2 CODE 1954 1416 1925 3 CODE 2103 1456 1940 2 CODE 1954 1417 1925 4 N/A Rej/Def 1457 1940 4 N/A Rej/Def 1418 1927 4 CODE 1939 1458 1941 3 Test Scripts 1953 1419 1927 4 CODE Rej/Def 1459 1941 3 Test Scripts 1941 271 Table 32: Continued # Staff 
Day Sev Product Verify Day # Staff Day Sev Product Verify Day 1460 1941 4 CODE 1964 1500 1953 4 CODE 1964 1461 1941 5 Test Equip Rej/Def 1501 1953 5 CODE Rej/Def 1462 1942 2 CODE 1954 1502 1953 4 CODE Rej/Def 1463 1942 3 CODE 1954 1503 1954 5 Test Equip Rej/Def 1464 1942 3 CODE 1954 1504 1954 4 COTS SW Rej/Def 1465 1943 3 CODE 1964 1505 1955 3 CODE 1982 1466 1943 3 CODE 2025 1506 1955 4 CODE 2046 1467 1943 3 CODE 1954 1507 1955 4 CODE Rej/Def 1468 1943 3 CODE 1954 1508 1955 2 CODE 1964 1469 1943 3 Dev Tools 1949 1509 1955 3 Test Scripts Rej/Def 1470 1944 4 CODE 1954 1510 1955 4 COTS SW Rej/Def 1471 1944 5 CODE 2111 1511 1955 3 CODE 2025 1472 1944 4 SIQT Scripts Rej/Def 1512 1956 3 CODE 1977 1473 1946 3 Other 1982 1513 1956 3 CODE 1977 1474 1946 4 CODE Rej/Def 1514 1956 3 Test Equip Rej/Def 1475 1946 4 CODE 1964 1515 1956 3 CODE 1977 1476 1947 2 Test HW Rej/Def 1516 1956 3 CODE 1964 1477 1947 3 CODE 1964 1517 1957 3 N/A Rej/Def 1478 1947 3 CODE 1964 1518 1957 4 CODE 1982 1479 1947 3 CODE 2150 1519 1957 5 CODE Rej/Def 1480 1947 3 Other 1982 1520 1957 3 CODE Rej/Def 1481 1948 4 Test Scripts Rej/Def 1521 1957 5 CODE 2097 1482 1948 4 CODE 1964 1522 1957 4 CODE 1982 1483 1948 3 CODE 1964 1523 1957 5 Test Scripts 1988 1484 1948 5 Docs 1968 1524 1957 1 CODE 1964 1485 1949 4 CODE 1964 1525 1958 3 N/A Rej/Def 1486 1949 3 CODE 1982 1526 1960 4 CODE 2046 1487 1949 5 Test Equip Rej/Def 1527 1960 3 CODE 1982 1488 1949 4 CODE 1954 1528 1960 5 Test Scripts Rej/Def 1489 1949 4 CODE 1954 1529 1961 4 CODE 1977 1490 1949 2 CODE 1954 1530 1961 4 CODE 1982 1491 1949 4 CODE 1964 1531 1961 5 CODE Rej/Def 1492 1949 4 CODE 1964 1532 1962 5 CODE 2100 1493 1949 3 CODE 1982 1533 1963 4 CODE 2025 1494 1950 4 CODE 1964 1534 1963 4 CODE 1982 1495 1950 2 Test Scripts 1982 1535 1963 5 Docs 2004 1496 1953 5 CODE 1983 1536 1963 3 CODE 1977 1497 1953 3 CODE 1964 1537 1963 4 CODE 1982 1498 1953 3 CODE 1964 1538 1963 3 CODE 1977 1499 1953 4 CODE 1964 1539 1964 4 CODE 1977 272 Table 32: Continued # Staff Day Sev Product Verify Day # Staff Day Sev Product Verify Day 1540 1964 2 CODE 1982 1580 1978 5 Test Scripts 1990 1541 1967 3 CODE 1982 1581 1978 5 CODE 1982 1542 1967 2 CODE 1977 1582 1978 4 CODE 2025 1543 1967 5 CODE 2103 1583 1978 4 CODE 2062 1544 1967 4 CODE 2025 1584 1979 4 CODE Rej/Def 1545 1967 4 CODE 2025 1585 1981 5 CODE Rej/Def 1546 1967 4 CODE 2318 1586 1981 2 CODE 1982 1547 1967 5 CODE 1977 1587 1981 4 CODE Rej/Def 1548 1968 5 SIQT Scripts 1983 1588 1981 4 Test Scripts Rej/Def 1549 1968 5 CODE Rej/Def 1589 1981 4 CODE 2082 1550 1968 4 CODE Rej/Def 1590 1981 5 Test Equip Rej/Def 1551 1968 4 CODE 2046 1591 1981 4 CODE 2117 1552 1969 4 CODE 2025 1592 1982 4 CODE 2025 1553 1970 4 CODE 2025 1593 1983 5 Dev Tools 1991 1554 1970 3 Docs Rej/Def 1594 1984 4 Test Scripts 2037 1555 1970 3 N/A Rej/Def 1595 1984 4 N/A Rej/Def 1556 1970 4 CODE 1982 1596 1984 4 CODE 2046 1557 1970 2 CODE 1977 1597 1986 3 Test Equip 2011 1558 1971 4 CODE 1982 1598 1986 4 Test Scripts 1991 1559 1971 4 CODE 1982 1599 1989 5 Docs 2220 1560 1971 4 CODE 1982 1600 1989 5 Dev Tools 1995 1561 1971 4 CODE Rej/Def 1601 1989 3 CODE 2025 1562 1971 5 CODE 1988 1602 1989 3 CODE 2249 1563 1971 4 CODE 1982 1603 1990 3 CODE 2046 1564 1971 3 CODE 1982 1604 1990 3 CODE Rej/Def 1565 1971 4 SIQT Scripts Rej/Def 1605 1990 3 N/A Rej/Def 1566 1971 5 CODE 2025 1606 1990 4 Test Scripts 2024 1567 1971 4 Test Equip Rej/Def 1607 1991 2 CODE 2025 1568 1972 4 CODE 1982 1608 1991 3 CODE 2146 1569 1975 5 Test Scripts 1978 1609 1991 4 CODE 2150 1570 1975 3 N/A 1982 1610 1991 2 CODE 
2058 1571 1975 2 N/A Rej/Def 1611 1991 4 COTS SW 2046 1572 1976 4 CODE 2046 1612 1991 4 N/A Rej/Def 1573 1976 2 CODE 1986 1613 1991 4 CODE 2046 1574 1976 5 Test Equip Rej/Def 1614 1991 4 CODE 2025 1575 1976 4 N/A Rej/Def 1615 1992 3 CODE 2025 1576 1976 4 Test Scripts Rej/Def 1616 1992 4 CODE 2025 1577 1977 2 CODE 1982 1617 1992 4 COTS SW Rej/Def 1578 1977 3 CODE Rej/Def 1618 1992 4 Test Scripts Rej/Def 1579 1977 4 CODE 1982 1619 1995 4 CODE 2046 273 Table 32: Continued # Staff Day Sev Product Verify Day # Staff Day Sev Product Verify Day 1620 1995 4 N/A Rej/Def 1660 2012 2 CODE 2062 1621 1997 5 Docs 2003 1661 2012 3 CODE 2082 1622 1997 4 CODE 2025 1662 2012 3 CODE 2082 1623 1997 4 CODE 2025 1663 2012 3 CODE Rej/Def 1624 1997 4 Docs Rej/Def 1664 2012 4 CODE 2079 1625 1997 5 COTS SW 2096 1665 2013 5 CODE 2034 1626 1997 4 CODE 2046 1666 2013 4 Dev Tools Rej/Def 1627 1997 4 CODE Rej/Def 1667 2013 4 Test SW Rej/Def 1628 1999 3 CODE 2046 1668 2013 5 CODE 2058 1629 1999 1 CODE 2025 1669 2013 4 CODE Rej/Def 1630 1999 4 CODE 2103 1670 2013 4 CODE 2059 1631 1999 3 Test Scripts Rej/Def 1671 2013 4 CODE 2179 1632 1999 4 N/A Rej/Def 1672 2017 4 CODE 2046 1633 1999 4 N/A Rej/Def 1673 2017 5 Docs 2026 1634 1999 4 CODE 2046 1674 2017 5 Test Equip Rej/Def 1635 1999 4 N/A Rej/Def 1675 2017 5 N/A Rej/Def 1636 2001 4 CODE Rej/Def 1676 2018 4 COTS SW Rej/Def 1637 2002 4 CODE Rej/Def 1677 2020 4 CODE 2046 1638 2002 4 N/A Rej/Def 1678 2023 4 N/A Rej/Def 1639 2002 4 CODE 2103 1679 2024 4 CODE 2046 1640 2003 5 CODE 2131 1680 2024 5 Docs 2040 1641 2003 3 CODE 2046 1681 2024 5 Docs Rej/Def 1642 2004 5 Unk Rej/Def 1682 2024 5 Docs Rej/Def 1643 2004 4 CODE 2025 1683 2024 3 CODE Rej/Def 1644 2004 3 Test Scripts 2025 1684 2024 5 Docs 2033 1645 2004 5 Test SW Rej/Def 1685 2025 4 CODE 2046 1646 2004 4 CODE 2137 1686 2025 3 Hardware 2082 1647 2004 4 N/A Rej/Def 1687 2025 4 CODE Rej/Def 1648 2004 4 CODE 2150 1688 2026 4 N/A Rej/Def 1649 2004 4 N/A Rej/Def 1689 2026 4 CODE 2046 1650 2005 5 N/A Rej/Def 1690 2026 4 CODE 2150 1651 2005 5 COTS SW 2025 1691 2027 5 Test Scripts 2062 1652 2006 3 CODE 2062 1692 2031 2 CODE 2062 1653 2011 5 Docs 2011 1693 2031 5 Docs 2040 1654 2011 3 Test Scripts 2062 1694 2031 5 N/A Rej/Def 1655 2011 3 CODE 2082 1695 2032 4 CODE 2046 1656 2012 2 Test Equip 2304 1696 2032 4 Other Rej/Def 1657 2012 2 CODE 2062 1697 2032 2 CODE 2117 1658 2012 2 CODE 2086 1698 2032 3 SIQT Scripts 2065 1659 2012 2 CODE 2062 1699 2032 4 CODE Rej/Def 274 Table 32: Continued # Staff Day Sev Product Verify Day # Staff Day Sev Product Verify Day 1700 2033 5 Dev Tools 2095 1740 2061 2 CODE 2082 1701 2033 5 N/A Rej/Def 1741 2061 5 Test Scripts 2131 1702 2034 4 CODE 2046 1742 2063 4 CODE 2124 1703 2035 4 CODE 2062 1743 2065 4 CODE 2103 1704 2037 4 CODE 2046 1744 2067 3 CODE 2103 1705 2037 4 CODE 2046 1745 2067 5 Docs 2158 1706 2037 5 Dev Tools Rej/Def 1746 2067 3 CODE 2107 1707 2037 4 CODE Rej/Def 1747 2067 4 CODE 2107 1708 2039 5 Docs Rej/Def 1748 2067 5 Test Scripts Rej/Def 1709 2039 5 Docs 2167 1749 2067 5 Test Scripts Rej/Def 1710 2039 4 CODE 2062 1750 2067 2 CODE 2124 1711 2039 4 CODE 2117 1751 2067 5 N/A Rej/Def 1712 2039 3 Test Equip Rej/Def 1752 2068 4 CODE 2107 1713 2040 4 N/A Rej/Def 1753 2068 5 Test Scripts Rej/Def 1714 2041 5 Docs Rej/Def 1754 2068 5 Test Scripts Rej/Def 1715 2041 4 CODE 2107 1755 2068 5 CODE 2348 1716 2044 3 CODE Rej/Def 1756 2068 4 CODE 2107 1717 2044 4 N/A Rej/Def 1757 2068 3 Test Scripts 2093 1718 2045 5 CODE 2046 1758 2069 3 N/A Rej/Def 1719 2045 4 CODE 2062 1759 2069 3 CODE 2082 1720 2045 4 
CODE 2062 1760 2069 4 N/A Rej/Def 1721 2046 4 CODE 2082 1761 2069 4 CODE 2150 1722 2046 5 Other 2055 1762 2069 4 CODE 2103 1723 2047 4 CODE 2062 1763 2073 4 CODE Rej/Def 1724 2047 4 CODE 2089 1764 2073 4 Other 2117 1725 2048 4 CODE Rej/Def 1765 2074 4 CODE Rej/Def 1726 2051 3 CODE Rej/Def 1766 2074 5 Dev Tools 2124 1727 2051 2 CODE 2062 1767 2075 3 N/A Rej/Def 1728 2053 3 N/A Rej/Def 1768 2076 3 CODE Rej/Def 1729 2054 5 Other 2058 1769 2076 5 COTS SW Rej/Def 1730 2054 5 N/A 2059 1770 2076 3 CODE 2081 1731 2054 3 CODE 2107 1771 2076 5 Test Scripts Rej/Def 1732 2054 5 COTS SW Rej/Def 1772 2080 4 CODE 2124 1733 2059 5 Docs 2059 1773 2080 5 N/A Rej/Def 1734 2060 5 Docs 2167 1774 2081 5 Docs 2116 1735 2060 3 CODE 2107 1775 2081 5 Docs 2116 1736 2061 5 Docs Rej/Def 1776 2082 5 CODE 2086 1737 2061 4 CODE 2107 1777 2083 3 N/A Rej/Def 1738 2061 4 CODE 2192 1778 2084 2 CODE Rej/Def 1739 2061 4 Docs Rej/Def 1779 2085 4 CODE Rej/Def 275 Table 32: Continued # Staff Day Sev Product Verify Day # Staff Day Sev Product Verify Day 1780 2085 3 Test Scripts Rej/Def 1820 2107 2 CODE 2117 1781 2086 5 N/A 2291 1821 2108 5 N/A Rej/Def 1782 2086 4 CODE 2107 1822 2108 2 CODE 2117 1783 2086 4 CODE 2249 1823 2108 3 CODE 2236 1784 2087 5 Qual Scripts Rej/Def 1824 2108 4 CODE 2117 1785 2087 4 CODE 2107 1825 2110 4 CODE 2150 1786 2088 5 N/A Rej/Def 1826 2110 3 N/A Rej/Def 1787 2088 5 N/A Rej/Def 1827 2111 3 N/A Rej/Def 1788 2088 5 Test Equip Rej/Def 1828 2114 4 CODE 2124 1789 2088 5 N/A Rej/Def 1829 2115 4 N/A Rej/Def 1790 2088 5 Test Scripts 2128 1830 2115 4 CODE 2150 1791 2088 5 CODE 2142 1831 2116 5 N/A Rej/Def 1792 2088 5 Test Scripts 2124 1832 2117 5 N/A Rej/Def 1793 2088 5 CODE Rej/Def 1833 2117 5 N/A Rej/Def 1794 2088 5 Test Scripts 2144 1834 2117 5 N/A Rej/Def 1795 2088 5 Other 2102 1835 2118 4 CODE 2150 1796 2089 5 N/A 2095 1836 2118 5 Test Scripts Rej/Def 1797 2091 5 CODE Rej/Def 1837 2119 4 CODE 2124 1798 2091 5 N/A Rej/Def 1838 2121 4 CODE 2124 1799 2091 3 CODE 2102 1839 2121 2 CODE 2124 1800 2091 5 N/A Rej/Def 1840 2121 5 Qual Scripts Rej/Def 1801 2093 5 Hardware Rej/Def 1841 2121 5 Test Scripts 2130 1802 2093 5 Dev Tools Rej/Def 1842 2121 4 CODE 2195 1803 2095 2 CODE 2107 1843 2121 5 Docs Rej/Def 1804 2097 5 N/A 2339 1844 2122 4 Test Equip 2192 1805 2100 4 CODE 2124 1845 2122 3 CODE 2199 1806 2100 4 CODE 2314 1846 2122 4 CODE 2150 1807 2101 3 CODE 2228 1847 2122 5 Dev Tools 2142 1808 2101 4 CODE 2117 1848 2123 5 N/A Rej/Def 1809 2101 3 CODE 2150 1849 2123 3 Hardware 2348 1810 2102 5 N/A Rej/Def 1850 2124 4 CODE 2150 1811 2102 4 CODE Rej/Def 1851 2125 5 Qual Scripts Rej/Def 1812 2102 5 CODE 2331 1852 2125 5 N/A Rej/Def 1813 2103 2 CODE 2125 1853 2128 5 N/A Rej/Def 1814 2104 5 Test Scripts 2109 1854 2128 5 N/A Rej/Def 1815 2104 3 CODE 2293 1855 2128 5 CODE Rej/Def 1816 2104 5 CODE 2150 1856 2128 5 N/A Rej/Def 1817 2104 3 N/A Rej/Def 1857 2128 2 CODE 2199 1818 2107 5 COTS SW Rej/Def 1858 2128 3 Test HW Rej/Def 1819 2107 2 CODE 2117 1859 2128 3 Docs 2192 276 Table 32: Continued # Staff Day Sev Product Verify Day # Staff Day Sev Product Verify Day 1860 2129 4 N/A Rej/Def 1900 2147 5 N/A Rej/Def 1861 2130 2 N/A Rej/Def 1901 2150 3 CODE 2192 1862 2130 5 CODE 2304 1902 2150 3 CODE Rej/Def 1863 2130 4 CODE 2150 1903 2157 2 CODE 2164 1864 2130 4 CODE 2150 1904 2157 1 CODE 2177 1865 2131 5 Test Scripts 2178 1905 2159 4 CODE 2347 1866 2131 2 CODE 2332 1906 2159 4 N/A Rej/Def 1867 2131 3 CODE 2150 1907 2159 3 CODE 2192 1868 2131 5 Test Scripts Rej/Def 1908 2163 4 Test Scripts 2249 1869 2132 3 CODE 2283 1909 2163 4 CODE 
2332 1870 2132 3 N/A Rej/Def 1910 2163 2 CODE 2270 1871 2134 5 Dev Tools 2138 1911 2164 5 CODE Rej/Def 1872 2134 4 Dev Tools 2142 1912 2165 2 CODE 2195 1873 2134 4 N/A 2431 1913 2165 5 Tools 2262 1874 2135 5 Qual Scripts Rej/Def 1914 2166 4 CODE 2192 1875 2136 4 CODE 2192 1915 2166 4 CODE 2195 1876 2136 3 CODE 2192 1916 2166 4 Docs 2292 1877 2136 3 CODE 2192 1917 2171 2 CODE 2178 1878 2136 3 CODE 2192 1918 2171 5 N/A Rej/Def 1879 2137 4 CODE 2150 1919 2171 3 CODE 2177 1880 2137 4 N/A Rej/Def 1920 2172 3 CODE 2195 1881 2137 4 N/A Rej/Def 1921 2172 4 CODE 2236 1882 2137 4 N/A Rej/Def 1922 2172 4 CODE 2195 1883 2139 5 N/A Rej/Def 1923 2172 3 CODE 2304 1884 2139 5 N/A Rej/Def 1924 2172 3 CODE 2236 1885 2139 4 CODE 2150 1925 2172 4 N/A Rej/Def 1886 2139 4 N/A Rej/Def 1926 2172 4 CODE 2236 1887 2142 4 CODE 2163 1927 2174 4 CODE 2195 1888 2143 3 CODE 2195 1928 2174 2 CODE 2195 1889 2143 3 CODE Rej/Def 1929 2176 2 CODE 2192 1890 2143 5 Docs Rej/Def 1930 2177 5 Docs 2229 1891 2144 4 CODE 2236 1931 2177 3 CODE 2272 1892 2145 2 CODE 2236 1932 2178 5 CODE 2241 1893 2145 4 CODE 2192 1933 2178 4 CODE 2236 1894 2146 3 CODE 2304 1934 2178 4 CODE 2236 1895 2146 5 Test Scripts Rej/Def 1935 2179 4 CODE 2236 1896 2146 4 CODE 2192 1936 2179 4 CODE 2236 1897 2146 3 CODE 2272 1937 2179 5 Docs 2212 1898 2146 4 N/A Rej/Def 1938 2179 5 Docs 2278 1899 2147 5 N/A Rej/Def 1939 2180 2 Qual Scripts Rej/Def 277 Table 32: Continued # Staff Day Sev Product Verify Day # Staff Day Sev Product Verify Day 1940 2180 2 CODE 2195 1980 2208 4 CODE 2272 1941 2193 5 N/A Rej/Def 1981 2208 4 CODE 2272 1942 2193 4 N/A Rej/Def 1982 2208 4 CODE 2272 1943 2194 4 N/A Rej/Def 1983 2212 5 CODE 2304 1944 2195 5 Docs 2283 1984 2212 5 CODE 2272 1945 2195 4 N/A Rej/Def 1985 2213 4 N/A Rej/Def 1946 2196 5 Test Scripts 2299 1986 2213 3 Test Scripts 2291 1947 2198 4 CODE 2236 1987 2213 5 Test Equip Rej/Def 1948 2198 4 CODE Rej/Def 1988 2214 4 CODE 2250 1949 2198 5 Docs 2229 1989 2214 4 CODE 2236 1950 2200 4 CODE 2236 1990 2215 5 Docs 2220 1951 2200 4 N/A Rej/Def 1991 2219 5 Docs 2276 1952 2200 4 CODE 2249 1992 2220 3 CODE 2250 1953 2201 4 CODE 2272 1993 2220 4 N/A Rej/Def 1954 2201 4 CODE 2236 1994 2220 4 Docs Rej/Def 1955 2201 4 CODE Rej/Def 1995 2222 5 Docs 2268 1956 2201 4 CODE 2325 1996 2222 4 CODE 2236 1957 2201 3 CODE 2236 1997 2223 4 CODE 2236 1958 2205 3 CODE 2292 1998 2226 4 CODE 2272 1959 2205 4 CODE Rej/Def 1999 2227 3 CODE 2272 1960 2205 4 CODE 2249 2000 2227 5 Extern SW Rej/Def 1961 2205 4 Docs Rej/Def 2001 2228 5 N/A Rej/Def 1962 2205 4 CODE 2249 2002 2230 3 CODE 2250 1963 2205 4 CODE 2250 2003 2230 4 CODE 2250 1964 2205 4 CODE Rej/Def 2004 2230 4 CODE 2250 1965 2205 4 CODE 2236 2005 2233 4 CODE 2250 1966 2206 4 CODE 2236 2006 2233 5 Test Scripts 2247 1967 2206 4 Docs Rej/Def 2007 2234 4 CODE 2272 1968 2206 4 Test Scripts 2347 2008 2234 4 CODE 2272 1969 2206 4 CODE 2304 2009 2235 4 CODE 2250 1970 2206 4 Test Scripts Rej/Def 2010 2235 5 Docs 2293 1971 2206 4 CODE Rej/Def 2011 2236 4 CODE 2272 1972 2206 4 CODE 2272 2012 2237 5 Test Scripts 2348 1973 2206 4 Test Scripts Rej/Def 2013 2237 3 CODE Rej/Def 1974 2206 4 COTS SW Rej/Def 2014 2240 2 CODE 2250 1975 2206 4 COTS SW Rej/Def 2015 2240 4 CODE 2250 1976 2208 4 CODE Rej/Def 2016 2241 4 CODE 2304 1977 2208 4 CODE 2272 2017 2242 4 CODE 2250 1978 2208 4 CODE 2256 2018 2242 4 CODE 2250 1979 2208 4 CODE 2272 2019 2242 4 CODE 2283 278 Table 32: Continued # Staff Day Sev Product Verify Day # Staff Day Sev Product Verify Day 2020 2242 4 CODE Rej/Def 2060 2283 4 CODE 2304 2021 2243 4 N/A Rej/Def 
2061 2283 4 CODE 2304 2022 2243 2 CODE 2250 2062 2283 4 CODE Rej/Def 2023 2243 4 CODE 2272 2063 2283 2 CODE 2304 2024 2243 4 CODE 2272 2064 2283 2 CODE 2318 2025 2244 4 N/A Rej/Def 2065 2285 5 Test Scripts 2293 2026 2247 2 CODE 2269 2066 2286 4 CODE 2318 2027 2248 2 CODE 2272 2067 2286 5 Test HW Rej/Def 2028 2248 4 CODE 2272 2068 2287 5 Test Scripts Rej/Def 2029 2248 2 CODE 2269 2069 2289 5 Test Scripts 2296 2030 2248 4 CODE Rej/Def 2070 2290 5 Docs 2298 2031 2249 3 N/A Rej/Def 2071 2290 2 CODE 2304 2032 2249 4 CODE 2272 2072 2291 2 CODE 2304 2033 2250 4 CODE 2272 2073 2292 2 CODE Rej/Def 2034 2250 5 Test Scripts Rej/Def 2074 2293 5 Test Scripts 2317 2035 2251 4 CODE Rej/Def 2075 2293 2 CODE 2332 2036 2251 4 CODE 2325 2076 2296 4 CODE 2304 2037 2252 4 CODE 2272 2077 2296 4 Test SW Rej/Def 2038 2255 2 CODE 2269 2078 2297 4 CODE 2304 2039 2255 3 CODE 2272 2079 2298 5 CODE 2349 2040 2255 4 CODE 2272 2080 2299 2 CODE 2325 2041 2256 3 N/A Rej/Def 2081 2299 1 CODE 2325 2042 2256 4 Dev Tools 2332 2082 2300 3 CODE 2304 2043 2256 4 Test Scripts 2318 2083 2300 4 CODE 2304 2044 2256 5 Test Scripts 2299 2084 2300 5 Docs Rej/Def 2045 2257 5 N/A Rej/Def 2085 2300 4 CODE 2332 2046 2257 3 Test Scripts 2272 2086 2303 4 CODE 2318 2047 2259 2 CODE 2269 2087 2305 3 CODE 2318 2048 2265 5 Docs 2293 2088 2306 2 CODE Rej/Def 2049 2268 3 CODE 2363 2089 2307 5 Other 2320 2050 2270 4 CODE 2325 2090 2309 5 Test Scripts 2318 2051 2275 4 CODE 2283 2091 2309 4 CODE 2325 2052 2275 4 CODE 2304 2092 2310 4 CODE 2318 2053 2277 4 N/A Rej/Def 2093 2310 4 CODE Rej/Def 2054 2278 1 Hardware Rej/Def 2094 2312 5 COTS SW Rej/Def 2055 2278 5 CODE Rej/Def 2095 2312 4 CODE 2318 2056 2282 4 CODE 2318 2096 2313 3 CODE 2318 2057 2283 3 Hardware 2319 2097 2314 2 CODE 2332 2058 2283 3 CODE 2304 2098 2317 3 CODE 2318 2059 2283 4 CODE 2318 2099 2318 2 CODE 2332 279 Table 32: Continued # Staff Day Sev Product Verify Day # Staff Day Sev Product Verify Day 2100 2319 4 Test HW Rej/Def 2140 2363 4 CODE Rej/Def 2101 2320 4 CODE Rej/Def 2141 2366 3 CODE Rej/Def 2102 2321 4 CODE Rej/Def 2142 2366 Unk Unk Unk 2103 2324 2 CODE 2332 2143 2366 5 Test Scripts Rej/Def 2104 2325 2 CODE 2332 2144 2367 Unk Unk Unk 2105 2325 4 Elec Files 2347 2145 2367 Unk Unk Unk 2106 2325 4 CODE 2347 2146 2368 Unk Unk Unk 2107 2326 3 CODE Rej/Def 2147 2368 Unk Unk Unk 2108 2327 4 N/A Rej/Def 2148 2368 Unk Unk Unk 2109 2327 5 Docs 2347 2149 2369 3 CODE Rej/Def 2110 2327 2 CODE 2332 2150 2374 5 Unk Unk 2111 2327 3 CODE 2363 2151 2377 Unk Unk Unk 2112 2327 3 CODE Rej/Def 2152 2380 4 CODE Rej/Def 2113 2331 5 Docs 2348 2153 2380 Unk Unk Unk 2114 2332 3 CODE 2333 2154 2381 4 CODE Rej/Def 2115 2333 5 Test SW Rej/Def 2155 2384 Unk Unk Unk 2116 2333 4 CODE 2347 2156 2385 Unk Unk Unk 2117 2333 5 CODE 2334 2157 2388 4 Elec Files 2404 2118 2334 5 Test Scripts Rej/Def 2158 2388 3 Unk Unk 2119 2335 4 CODE 2363 2159 2388 4 CODE Rej/Def 2120 2335 5 Test Equip Rej/Def 2160 2390 4 CODE 2404 2121 2335 4 CODE Rej/Def 2161 2390 Unk Unk Unk 2122 2335 3 CODE 2363 2162 2391 Unk Unk Unk 2123 2340 5 Docs Rej/Def 2163 2391 4 CODE Rej/Def 2124 2340 4 CODE 2363 2164 2391 4 CODE Rej/Def 2125 2341 4 CODE Rej/Def 2165 2392 Unk Unk Unk 2126 2342 4 CODE 2363 2166 2393 Unk Unk Unk 2127 2345 4 CODE 2363 2167 2394 Unk Unk Unk 2128 2346 4 CODE 2363 2168 2395 Unk Unk Unk 2129 2346 3 CODE 2363 2169 2396 5 Dev Tools 2416 2130 2347 4 CODE Rej/Def 2170 2396 Unk Unk Unk 2131 2347 3 CODE 2363 2171 2396 Unk Unk Unk 2132 2353 4 CODE Rej/Def 2172 2396 5 Unk Unk 2133 2354 4 Test Scripts Rej/Def 2173 2397 4 CODE Rej/Def 
2134 2355 5 CODE Rej/Def 2174 2397 Unk Unk Unk 2135 2356 4 CODE 2363 2175 2398 Unk Unk Unk 2136 2360 4 CODE Unk 2176 2398 4 CODE Rej/Def 2137 2362 Unk Unk Unk 2177 2398 Unk Unk Unk 2138 2362 Unk Unk Unk 2178 2398 4 CODE Rej/Def 2139 2362 Unk Unk Unk 2179 2402 1 CODE 2404 280 Table 32: Continued # Staff Day Sev Product Verify Day # Staff Day Sev Product Verify Day 2180 2402 2 CODE 2404 2220 2452 Unk Unk Unk 2181 2403 5 COTS SW 2429 2221 2453 Unk Unk Unk 2182 2406 Unk Unk Unk 2222 2454 Unk Unk Unk 2183 2410 1 COTS SW Rej/Def 2223 2455 Unk Unk Unk 2184 2410 5 Docs Rej/Def 2224 2456 Unk Unk Unk 2185 2410 3 CODE Rej/Def 2225 2457 Unk Unk Unk 2186 2412 3 CODE Rej/Def 2226 2460 3 COTS SW Rej/Def 2187 2412 1 Unk Unk 2227 2462 Unk Unk Unk 2188 2412 Unk Unk Unk 2228 2464 Unk Unk Unk 2189 2412 Unk Unk Unk 2229 2466 Unk Unk Unk 2190 2412 4 CODE Rej/Def 2230 2468 Unk Unk Unk 2191 2414 Unk Unk Unk 2231 2468 4 Unk Unk 2192 2416 4 Dev Tools 2429 2232 2470 Unk Unk Unk 2193 2418 Unk Unk Unk 2233 2473 5 Test Scripts 2598 2194 2420 Unk Unk Unk 2234 2476 Unk Unk Unk 2195 2422 3 CODE Rej/Def 2235 2479 Unk Unk Unk 2196 2423 4 CODE Rej/Def 2236 2482 Unk Unk Unk 2197 2424 5 Unk Unk 2237 2485 Unk Unk Unk 2198 2424 4 CODE Rej/Def 2238 2488 Unk Unk Unk 2199 2425 2 CODE Rej/Def 2239 2491 Unk Unk Unk 2200 2425 4 CODE Rej/Def 2240 2494 Unk Unk Unk 2201 2426 4 CODE Rej/Def 2241 2494 4 CODE 2535 2202 2426 4 CODE Rej/Def 2242 2495 Unk Unk Unk 2203 2428 4 CODE Rej/Def 2243 2496 Unk Unk Unk 2204 2429 4 CODE Rej/Def 2244 2497 Unk Unk Unk 2205 2430 Unk Unk Unk 2245 2499 4 CODE 2535 2206 2430 Unk Unk Unk 2246 2501 Unk Unk Unk 2207 2431 4 CODE Rej/Def 2247 2503 Unk Unk Unk 2208 2432 2 CODE Rej/Def 2248 2505 Unk Unk Unk 2209 2434 Unk Unk Unk 2249 2507 Unk Unk Unk 2210 2436 Unk Unk Unk 2250 2509 Unk Unk Unk 2211 2438 Unk Unk Unk 2251 2511 4 CODE Rej/Def 2212 2440 Unk Unk Unk 2252 2515 2 CODE 2535 2213 2442 Unk Unk Unk 2253 2515 4 CODE 2535 2214 2444 Unk Unk Unk 2254 2518 Unk Unk Unk 2215 2446 Unk Unk Unk 2255 2520 5 Docs Rej/Def 2216 2448 Unk Unk Unk 2256 2522 Unk Unk Unk 2217 2450 Unk Unk Unk 2257 2523 Unk Unk Unk 2218 2450 2 COTS SW Rej/Def 2258 2524 Unk Unk Unk 2219 2451 Unk Unk Unk 2259 2525 4 CODE Rej/Def 281 Table 32: Continued # Staff Day Sev Product Verify Day # Staff Day Sev Product Verify Day 2260 2528 Unk Unk Unk 2300 2629 4 CODE Rej/Def 2261 2531 Unk Unk Unk 2301 2632 3 CODE Rej/Def 2262 2534 5 Docs Rej/Def 2302 2632 4 CODE Rej/Def 2263 2537 5 Docs Rej/Def 2303 2633 4 CODE Rej/Def 2264 2537 Unk Unk Unk 2304 2634 4 CODE Rej/Def 2265 2537 5 Docs Rej/Def 2305 2636 4 Unk Unk 2266 2539 Unk Unk Unk 2306 2639 4 CODE Rej/Def 2267 2540 Unk Unk Unk 2307 2641 3 CODE Rej/Def 2268 2542 4 CODE Rej/Def 2308 2643 4 CODE Rej/Def 2269 2543 4 CODE Unk 2309 2646 3 CODE Rej/Def 2270 2544 5 Docs Rej/Def 2310 2647 4 CODE Rej/Def 2271 2551 Unk Unk Unk 2311 2648 3 CODE Rej/Def 2272 2557 4 CODE 2586 2312 2685 Unk Unk Unk 2273 2566 5 Test Scripts 2598 2313 2694 Unk Unk Unk 2274 2566 4 CODE Rej/Def 2314 2710 4 CODE Rej/Def 2275 2566 4 CODE Rej/Def 2315 2719 3 Macros Rej/Def 2276 2572 4 Unk Unk 2316 2726 4 CODE Rej/Def 2277 2572 Unk Unk Unk 2317 2728 Unk Unk Unk 2278 2572 Unk Unk Unk 2318 2730 4 Unk Unk 2279 2572 4 CODE 2586 2319 2732 3 CODE Unk 2280 2574 Unk Unk Unk 2281 2576 4 CODE 2586 2282 2577 3 CODE Unk 2283 2577 4 CODE 2586 2284 2580 Unk Unk Unk 2285 2583 Unk Unk Unk 2286 2587 3 CODE Rej/Def 2287 2589 Unk Unk Unk 2288 2590 Unk Unk Unk 2289 2593 4 CODE Rej/Def 2290 2598 4 CODE Rej/Def 2291 2600 4 Unk Unk 2292 2601 4 CODE Rej/Def 2293 2603 
Unk Unk Unk 2294 2604 Unk Unk Unk 2295 2606 4 CODE Rej/Def 2296 2606 Unk Unk Unk 2297 2607 Unk Unk Unk 2298 2607 4 CODE 2695 2299 2616 Unk Unk Unk 282 A P P E N D I X : C – P E R L S C R I P T F O R P A R S I N G D R D A T A C.1 Introduction This appendix includes an example of a Perl script that was written by the author to extract data from Project-D’s defect data. The data was provided in the form of a multi-megabyte Microsoft Word file. The file was converted into ANSI text, and then parsed using this script to extract out the specific defect information included in this dissertation. Similar scripts were written for the other projects as well. C.2 Project-D Perl Script #!/usr/bin/perl -w # ######################################################################################################### # # DRparse.pl # ######################################################################################################### # # Written by: Douglas J. Buettner # # Purpose: # # Parses and reformats Contractor Project-D MS Word data dumps (converted into ANSI text files) that # contain software defect data info. The output is in comma delimited format readable by MS Excel. # # REVISION HISTORY: # # January 2004: Create base scripts using examples found online from Russel Quong dated 2/19/98 # # May 5, 2007: Output was redesigned to handle problem with linefeed characters that appeared after # updating to new version of cygwin and Perl. This fix requires downloading and # installing the CPAN freeware, "Text-Chomp-0.02" from Steve Peters. # # Mar 13, 2008: Output was reconfigured to reformat the date and remove the severity information # # Jun 18, 2008: Reconfigured to parse Project-D DR data files originals files are converted from # MSWord using the native save as a text file option. # # Jul 17, 2008: Reformatted and removed contractor's name and Project-D's acronym for inclusion in the # dissertation. # ######################################################################################################## # Define Include Files use English; # Use english vars use FileHandle; # Use filehandles use Date::Manip; # Use date manipulation routines for SRE output use Text::Chomp; # Output on PC was not deleting /r/n linefeeds with normal chomp # causing each print to append a linefeed ... this Text chomp removes the problem. # Define Global Variables my($lineno) = 0; # variable, current line number my($outlineno) = 0; # variable, line number in the output file my($OUTptr) = \*STDOUT; # default output pointer to the file stream, stdout my($PARSEOUTptr) = \*STDOUT; # parsed output pointer to the data file stream, my($IN) = \*STDIN; # default input file stream, stdin my($OUTBUFFER) = 0; # parsed output data buffer, # print out a non-crucial for-your-information messages. # By making fyi() a function, we enable/disable debugging messages easily. sub fyi ($) { my($str) = @_; print "$str\n"; } # The program 283 sub main() { if (@ARGV == 0) { # number of args fyi("Useage: perl DRparse.pl filename"); fyi("filename is the ASCII text file dumped from the DR database"); fyi("output.csv is the filename created which contains the parsed data."); } else { set_filehandles(); handle_file(@ARGV); } } # End of main # Set file handles from the arguments, in the @ARGV array. # we assume flags begin with a '-' (dash or minus sign). # sub set_filehandles () { my($a, $outname) = (undef, undef); $PARSEOUTptr = new FileHandle "> output.csv"; if (! defined($PARSEOUTptr) ) { print "Unable to open output file: output.csv. 
Bah-bye."; exit(1); } foreach $a (@ARGV) { if ($a =~ /^-o/) { shift @ARGV; # discard ARGV[0] = the -o flag $outname = $ARGV[0]; # get arg after -o shift @ARGV; # discard ARGV[0] = output file name $OUTptr = new FileHandle "> $oname"; if (! defined($OUTptr) ) { print "Unable to open output file: $oname. Bah-bye."; exit(1); } else { print $outname; # Output File Name } } else { last; # break out of this loop } } } # End of set_filehandles () # handle_file (FILENAME); # open a file handle or input stream for the file named FILENAME. # if FILENAME == '-' use stdin instead. sub handle_file ($) { my($infile) = @_; fyi("Parsing DR Dump input file: $infile"); if ($infile eq "-") { read_file(\*STDIN, "[stdin]"); # \*STDIN=input stream for STDIN. } else { my($IN) = new FileHandle "$infile"; if (! defined($IN)) { fyi("Can't open spec file $infile: $!\n"); return; } read_file($IN, "$infile"); # $IN = file handle for $infile $IN->close(); # done, close the file. $PARSEOUTptr->close(); # done, close the file. } } # End of handle_file ($) # # read_file (INPUT_STREAM, filename); # # Parses the input filename and prints out the sought lines into the output dat file # sub read_file ($$) { my($IN, $filename) = @_; my($line, $from) = ("", ""); # Parsed strings that are being looked for ... my($dr, $start, $submit, $srbDate) = ("DR/CR", "DR/CR ABCsv", "Submitted", "Opened"); my($assigned, $resolved, $verified, $closed) = ("Assigned", "Resolved", "Verified", "Closed"); my($detectedby, $disposition, $phase) = ("Detection method:", "Disposition:", "Detected in phase:"); my($severity, $needby, $cause, $resolution, $title) = ("Problem severity:", "Sched need date:", "Root Cause:", "Resolution:", "TITLE"); my ($analSection, $resolveSection) = ("ANALYSIS INFORMATION", "RESOLUTION INFORMATION"); my($commentLine) = (">"); my($asteriskLine) = ("*******************************************************************************"); my($tmpstring,$persistentID,$holdDisposition); # Print initial line for the output.csv file containing the items retrieved print $PARSEOUTptr "#\tID\tProduct\tSubmit Date\tSRB Date\tAssigned\tResolved\tVerified\tClosed\tDetected by\tDisposition\tDetection phase\tSeverity\tRoot Cause\tResolution\n\r"; 284 $lineno = 0; # Initialize the line number to zero for this file $outlineno = 0; $verifiedFound = 0; $OUTBUFFER = "\n"; # Preset the output buffer while ( defined($line = <$IN>) ) { chomp($line); # strip off trailing '\n' (newline char) if ($line =~ /$start/ && $line =~ /$submit/) { ## Found the DR/CR Start if($lineno != 3 || $line =~ $commentLine) { # Tripped on a comment line of a DR so do nothing } else { print $PARSEOUTptr $OUTBUFFER; # Print the accumulated buffer from prior DR parse $lineno = 0; # Reset to zero to start count for the following lines $srbFound = 0; #fyi("Line text -> $line\n"); $line =~ /$dr\s(\w{10})\s+?(\w+?\W\w+?)\s+?$submit\s(\d{6})/; if($1) { # First group (\w{10}) is the DR Identifier $tmpstring = tchomp $1; # Remove linefeed character from the string $persistentID = $tmpstring; #fyi("Parsed DR text -> $tmpstring\n"); $outlineno++; # Increment the output line number $OUTBUFFER = "\n$outlineno\t$tmpstring\t"; # Set the output buffer for this DR } else { fyi("\nLine text -> $line"); fyi("There is a serious problem-oh in $persistentID for string#1 -- exiting"); exit(1); ## There is a serious problem-oh } if($2) { $tmpstring = tchomp $2; # Remove linefeed character from the string #fyi("Parsed product text -> $tmpstring\n"); $OUTBUFFER .= "$tmpstring\t"; # Append the product 
to the output buffer for this DR } else { fyi("\nLine text -> $line"); fyi("There is a serious problem-oh in $persistentID for string#2 -- exiting"); exit(1); ## There is a serious problem-oh } if($3) { $tmpstring = tchomp $3; # Remove linefeed character from the string if(substr($tmpstring, 0, 1) == "9") { $tmpstring = "19".$tmpstring; # Append a 19 to the year if first character is a 9 } else { $tmpstring = "20".$tmpstring; # Append a 20 to the year if first character is not a 9 } my $year = substr($tmpstring, 0, 4); # Extract the year, month and day from the string my $month = substr($tmpstring, 4, 2); my $day = substr($tmpstring, 6, 2); $tmpstring = $month."-".$day."-".$year; # Put the date into the temporary string } else { $tmpstring = "00-00-00"; # Use 00-00-00 for missing data fyi("No Submit Date found for $persistentID filling with $tmpstring"); fyi("Line text -> $line\n"); } $OUTBUFFER .= $tmpstring."\t"; # Append the date to the output buffer for this DR } # END of else this is not a commented line #fyi("Parsed submit date -> $tmpstring\n"); #fyi("Formatted date -> $month-$day-$year\n"); } elsif ($lineno == 1 && $line =~ /$srbDate/) { ## Found the SRB Reviewed Date #fyi("Line text -> $line\n"); $line =~ /$srbDate\s(\d{6})/; 285 $srbFound = 1; if($1) { $tmpstring = tchomp $1; # Remove linefeed character from the string if(substr($tmpstring, 0, 1) == "9") { $tmpstring = "19".$tmpstring; # Append a 19 to the year if first character is a 9 } else { $tmpstring = "20".$tmpstring; # Append a 20 to the year if first character is not a 9 } $year = substr($tmpstring, 0, 4); # Extract the year, month and day from the string $month = substr($tmpstring, 4, 2); $day = substr($tmpstring, 6, 2); $tmpstring = $month."-".$day."-".$year; # Append the date to the output buffer for this DR } else { $tmpstring = "00-00-00"; # Use 00-00-00 for missing data fyi("No SRB Review Date found for $persistentID filling with $tmpstring"); fyi("Line text -> $line\n"); } $OUTBUFFER .= "$tmpstring\t"; # Append the date to the output buffer for this DR #fyi("Formatted SRB date -> $tmpstring\n"); } elsif ($lineno == 2 && $line =~ /$assigned/) { ## Found the date the DR was 'assigned' #fyi("Line text -> $line\n"); $line =~ /$assigned\s(\d{6})/; if($1) { $tmpstring = tchomp $1; # Remove linefeed character from the string if(substr($tmpstring, 0, 1) == "9") { $tmpstring = "19".$tmpstring; # Append a 19 to the year if first character is a 9 } else { $tmpstring = "20".$tmpstring; # Append a 20 to the year if first character is not a 9 } $year = substr($tmpstring, 0, 4); # Extract the year, month and day from the string $month = substr($tmpstring, 4, 2); $day = substr($tmpstring, 6, 2); $tmpstring = $month."-".$day."-".$year; # Append the date to the output buffer for this DR } else { $tmpstring = "00-00-00"; # Use 00-00-00 for missing data fyi("No Assigned Date found for $persistentID filling with $tmpstring"); fyi("Line text -> $line\n"); } $OUTBUFFER .= "$tmpstring\t"; # Append the date to the output buffer for this DR #fyi("Formatted assigned date -> $tmpstring\n"); } elsif ($lineno == 3 && $line =~ /$resolved/) { ## Found the date the DR was 'resolved' #fyi("Line text -> $line\n"); $line =~ /$resolved\s(\d{6})/; if($1) { $tmpstring = tchomp $1; # Remove linefeed character from the string if(substr($tmpstring, 0, 1) == "9") { $tmpstring = "19".$tmpstring; # Append a 19 to the year if first character is a 9 } else { $tmpstring = "20".$tmpstring; # Append a 20 to the year if first character is not a 9 } $year = 
substr($tmpstring, 0, 4); # Extract the year, month and day from the string $month = substr($tmpstring, 4, 2); $day = substr($tmpstring, 6, 2); $tmpstring = $month."-".$day."-".$year; # Append the date to the output buffer for this DR } else { $tmpstring = "00-00-00"; # Use 00-00-00 for missing data fyi("No Resolved Date found for $persistentID filling with $tmpstring"); fyi("Line text -> $line\n"); } 286 $OUTBUFFER .= "$tmpstring\t"; # Append the date to the output buffer for this DR #fyi("Formatted resolved date -> $tmpstring\n"); } elsif ($lineno == 4 && $line =~ /$verified/) { ## Found the date the DR was 'verified' #fyi("Line text -> $line\n"); $line =~ /$verified\s(\d{6})/; $verifiedFound = 1; if($1) { $tmpstring = tchomp $1; # Remove linefeed character from the string if(substr($tmpstring, 0, 1) == "9") { $tmpstring = "19".$tmpstring; # Append a 19 to the year if first character is a 9 } else { $tmpstring = "20".$tmpstring; # Append a 20 to the year if first character is not a 9 } $year = substr($tmpstring, 0, 4); # Extract the year, month and day from the string $month = substr($tmpstring, 4, 2); $day = substr($tmpstring, 6, 2); $tmpstring = $month."-".$day."-".$year; # Append the date to the output buffer for this DR } else { $tmpstring = "00-00-00"; # Use 00-00-00 for missing data fyi("No Verified Date found for $persistentID filling with $tmpstring"); fyi("Line text -> $line\n"); } $OUTBUFFER .= "$tmpstring\t"; # Append the date to the output buffer for this DR #fyi("Formatted verified date -> $tmpstring\n"); } elsif ($lineno == 5 && $line =~ /$closed/) { ## Found the date the DR was 'closed' #fyi("Line text -> $line\n"); $line =~ /$closed\s(\d{6})/; if($verifiedFound == 0) { # Found startup transient case with no verification $OUTBUFFER .= "$tmpstring\t"; # Append the resolved date again in place of the missing verification } else { $verifiedFound = 0; } if($1) { $tmpstring = tchomp $1; # Remove linefeed character from the string if(substr($tmpstring, 0, 1) == "9") { $tmpstring = "19".$tmpstring; # Append a 19 to the year if first character is a 9 } else { $tmpstring = "20".$tmpstring; # Append a 20 to the year if first character is not a 9 } $year = substr($tmpstring, 0, 4); # Extract the year, month and day from the string $month = substr($tmpstring, 4, 2); $day = substr($tmpstring, 6, 2); $tmpstring = $month."-".$day."-".$year; # Append the date to the output buffer for this DR } else { $tmpstring = "00-00-00"; # Use 00-00-00 for missing data fyi("No Date Closed found for $persistentID filling with $tmpstring"); fyi("Line text -> $line\n"); } $OUTBUFFER .= "$tmpstring\t"; # Append the date to the output buffer for this DR #fyi("Formatted closed date -> $tmpstring\n"); } elsif ($line =~ /$detectedby/ && $line =~ /$disposition/) { ## Found the detection method and the disposition status #fyi("Line text -> $line\n"); $line =~ /$detectedby\s+?(\w+?)\s+?(\w+?\W)\s+?(.*)/; if($1) { $tmpstring = tchomp $1; # Remove linefeed character from the string } 287 else { $line =~ /$detectedby\s+?(\w+?\s\w+?)\s+?(\w+?\W)\s+?(.*)/; # Try 2 words for detection method if($1) { $tmpstring = tchomp $1; # Remove linefeed character from the string } else { $line =~ /$detectedby\s+?(\w{1,2}\W\w+?\s\w+?)\s+?(\w+?\W)\s+?(.*)/; # Try 2 words for detection method if($1) { $tmpstring = tchomp $1; # Remove linefeed character from the string #fyi("Detection Method found for $persistentID is $tmpstring"); #fyi("Disposition for $persistentID is $3"); $holdDisposition = $3; } else { #$line =~ 
/$detectedby\s+?(\w+?\W\w+?\s\w+?)\s+?(\w+?\W)\s+?(.*)/; # Try 2 words for detection method $tmpstring = "EMPTY"; # Use EMPTY for missing data #fyi("I GIVE UP:: Detection Method found for $persistentID filling with $tmpstring"); #fyi("Line text -> $line\n"); } } } $OUTBUFFER .= "$tmpstring\t"; # Append the detection phase to the output buffer for this DR #fyi("Detection method -> $tmpstring\n"); if($3) { $tmpstring = tchomp $3; # Remove linefeed character from the string } else { if($holdDisposition) { $tmpstring = tchomp $holdDisposition; $holdDisposition = 0; } else { $tmpstring = "EMPTY"; # Use EMPTY for missing data #fyi("Case 1: No Dispostion status found for $persistentID filling with $tmpstring"); #fyi("Line text -> $line\n"); #fyi("Temp text -> $tmpstring\n"); } } if($tmpstring =~ /$disposition/) { ## Didn't get removed... $tmpstring =~ /(\w+?\W)\s+?(.*)/; if($2) { $tmpstring = tchomp $2; # Remove linefeed character from the string } else { $tmpstring = "EMPTY"; # Use EMPTY for missing data #fyi("Case 2: No Dispostion status found for $persistentID filling with $tmpstring"); #fyi("Line text -> $line\n"); # fyi("Temp text -> $tmpstring\n"); } } $OUTBUFFER .= "$tmpstring\t"; # Append the disposition to the output buffer for this DR #fyi("Disposition -> $tmpstring\n"); } elsif ($lineno == 8 && $line =~ /$phase/) { ## Found the detection phase #fyi("Line text -> $line\n"); $line =~ /$phase\s+?(\w+?)\s+?/; if($1) { $tmpstring = tchomp $1; # Remove linefeed character from the string } else { $tmpstring = "EMPTY"; # Use EMPTY for missing data fyi("No Detection Phase found for $persistentID filling with $tmpstring"); fyi("Line text -> $line\n"); } $OUTBUFFER .= "$tmpstring\t"; # Append the detection phase to the output buffer for this DR #fyi("Detection phase -> $tmpstring\n"); } elsif ($line =~ /$severity/ && $line =~ /$needby/) { ## Found the severity #fyi("Line text -> $line\n"); $line =~ /$severity\s+?(\d)\s+?/; if($1) { $tmpstring = tchomp $1; # Remove linefeed character from the string 288 } else { $tmpstring = "EMPTY"; # Use EMPTY for missing data fyi("No Severity found for $persistentID filling with $tmpstring"); fyi("Line text -> $line\n"); } $OUTBUFFER .= "$tmpstring\t"; # Append the severity to the output buffer for this DR #fyi("Severity -> $tmpstring\n"); } elsif ($line =~ /$analSection/) { ## Found the analysis information section #fyi("Line text -> $line\n"); $lineno = 0; # Reset the line number for this section } elsif ($lineno == 1 && $line =~ /$cause/) { ## Found the root cause #fyi("Line text -> $line\n"); $line =~ /$cause\s+?(\w+?)\s+?/; if($1) { $tmpstring = tchomp $1; # Remove linefeed character from the string } else { $tmpstring = "EMPTY"; # Use EMPTY for missing data fyi("No Root Cause found for $persistentID filling with $tmpstring"); fyi("Line text -> $line\n"); } $OUTBUFFER .= "$tmpstring\t"; # Append the root cause to the output buffer for this DR #fyi("Root cause -> $tmpstring\n"); } elsif ($line =~ /$resolveSection/) { ## Found the resolution information section #fyi("Line text -> $line\n"); $lineno = 0; # Reset the line number for this section } elsif ($lineno == 1 && $line =~ /$resolution/) { ## Found the how resolved #fyi("Line text -> $line\n"); $line =~ /$resolution\s+?(.*)/; if($1) { $tmpstring = tchomp $1; # Remove linefeed character from the string } else { $tmpstring = "EMPTY"; # Use EMPTY for missing data fyi("No Resolution found for $persistentID filling with $tmpstring"); fyi("Line text -> $line\n"); } $OUTBUFFER .= "$tmpstring\t"; # Append the 
root cause to the output buffer for this DR #fyi("Resolution -> $tmpstring\n"); } elsif ($line =~ /\Q$asteriskLine\E/) { ## Found the start of the next DR/CR section #fyi("Line text -> $line\n"); $lineno = 0; # Reset the line number for this section } $lineno++; # Keep track of where we are to help identify the correct lines } # END of while loop } # END of read_file() # start execution at main() # main(); 0; # return 0 (no error from this script) 289 A P P E N D I X : D – D Y N A M I C S M O D E L E Q U A T I O N S D.1 Introduction This appendix includes the equations, symbol types and their names, which are used in the Modified Madachy Model, which is described in chapter 4. Native Powersim functions in Table 33 such as MIN, TIMESTEP, IF and others use capital letters. Also, note that during testing, a minor model modification (the ‘Accept Tasks’ reservoir) was included into the integration test feedback loop, but due to mathematical equivalence does not affect the results. In addition, this table points out those variables/constants that had different values for the various test matrices, but which are also discussed in chapter 4. D.2 Modified Madachy Model (MMM) Equations Table 33: MMM Equations Type Name Equation, Default or Used Run Execution Values Auxiliary AccelerationTime (TimeStep_TM-('Est Dev Schedule_TM'+'Estimated Test Schedule_TM'))/('Estimated Test Schedule_TM') Reservoir Accepted Tasks 0 Out Flow Review Board Delay and Recode Rate.out 'Review Board Delay and Recode Rate' Out Flow Review Board Delay and Redesign Rate.out 'Review Board Delay and Redesign Rate' In Flow Review Board Task Accept Rate.in 'Review Board Task Accept Rate' Auxiliary Attrition Rate IF('Resource Leveling_EfM'>0,PULSE('Manpower Pool'*(1-'Resource Leveling_EfM'),STARTTIME+999*TIMESTEP,999*TIM ESTEP),0*PULSE('Manpower Pool'*(1-'Resource Leveling_EfM'),STARTTIME+999*TIMESTEP,999*TIM ESTEP)) Constant Average Design Error Amplification 1 Auxiliary Average Design Error Amplification_ErrM 'Average Design Error Amplification' Constant Average Reject and Defer Percent 0.05 (NOTE: Used this value for Madachy comparisons and 0.3 for the tests with project A and C staffing curves.) 
Auxiliary Average Reject and Defer Percent _TM DELAYPPL('Average Reject and Defer Percent','Review Board Delay Time_TM') Constant Calibrated COCOMO Constant 3.6 Auxiliary Calibrated COCOMO Constant_EfM 'Calibrated COCOMO Constant' Auxiliary Calibrated COCOMO Constant_TEAM 'Calibrated COCOMO Constant' Auxiliary Code and ReCode Inspection Rate_ErrM 'Code Inspection Rate'+'Recode Inspection Rate' Auxiliary Code and ReCode Non-Inspection Rate_ErrM 'Code Non-Inspection Rate'+'Recode Non-Inspection Rate' Auxiliary Code Defect Density Ratio 'Code Error Density_TM'/'Total Injected Error Density' Constant Code Error Density 1.388 290 Table 33: Continued Type Name Equation, Default or Used Run Execution Values Auxiliary Code Error Density_ErrM 'Code Error Density' Auxiliary Code Error Density_TM 'Code Error Density' Auxiliary Code Error Detection Rate MIN(('Code Error Density_ErrM'+'Design Error Density in Code')*('Code and ReCode Inspection Rate_ErrM'*'Inspection Effectiveness_ErrM'+'Unit and Re- Unit Testing Rate_ErrM'*'Unit Test Effectiveness_ErrM'),'Code Errors'/TIMESTEP) Auxiliary Code Error Detection Rate_cumlative 'Code Error Detection Rate' Auxiliary Code Error Escape Rate MIN(('Code Error Density_ErrM'+'Design Error Density in Code')*('Code and ReCode Inspection Rate_ErrM'*(1- 'Inspection Effectiveness_ErrM')+'Unit and Re-Unit Testing Rate_ErrM'*(1-'Unit Test Effectiveness_ErrM')+'Code and ReCode Non-Inspection Rate_ErrM'+'Enable Non-Unit Test Error Feedthrough_ErrM'*('Non-Unit Testing Rate_ErrM'+'Re- Non-Unit Testing Rate_ErrM')),'Code Errors'/TIMESTEP) Auxiliary Code Error Generation Rate 'Code Error Density_ErrM'*'Coding Rate_ErrM' Auxiliary Code Error Rework Rate MIN(DELAYPPL('Detected Code Errors'/10,7<<da>>)/TIMESTEP,'Detected Code Errors'/TIMESTEP) Auxiliary Code Error Rework Rate_EfM 'Code Error Rework Rate' Reservoir Code Errors 0 Out Flow Code Error Detection Rate.out 'Code Error Detection Rate' Out Flow Code Error Escape Rate.out 'Code Error Escape Rate' In Flow Code Error Generation Rate.in 'Code Error Generation Rate' In Flow Design Error Pass and Amplification Rate.in 'Design Error Pass and Amplification Rate' In Flow Re-worked Code Error Generation Rate.in 'Re-worked Code Error Generation Rate' Constant Code Inspection Delay Time 5<<da>> Auxiliary Code Inspection Delay Time_TM 'Code Inspection Delay Time' Auxiliary Code Inspection Manpower Rate 'Code Inspection Rate_EfM'*'Inspection Effort per Task_EfM' Auxiliary Code Inspection Manpower Rate_TE 'Code Inspection Manpower Rate' Constant Code Inspection Practice 1 Auxiliary Code Inspection Practice_TM 'Code Inspection Practice' Auxiliary Code Inspection Rate DELAYPPL('Code Inspection Practice_TM'*('Unit Testing Rate'+(1-'Unit Test Practice_TM')*'Coding Rate'),'Code Inspection Delay Time_TM') Auxiliary Code Inspection Rate_cumulative 'Code Inspection Rate' Auxiliary Code Inspection Rate_EfM 'Code Inspection Rate' Constant Code Rework Effort per Error 0.11 Auxiliary Code Rework Effort per Error_EfM 'Code Rework Effort per Error' Auxiliary Code Rework Manpower Rate 'Code Error Rework Rate_EfM'*'Code Rework Effort per Error_EfM' Auxiliary Code Rework Manpower Rate_TE 'Code Rework Manpower Rate' Auxiliary Coding Manpower Rate 'Manpower Pool'*'Coding Staff Curve'/TIMESTEP Auxiliary Coding Manpower Rate_TE 'Coding Manpower Rate' 291 Table 33: Continued Type Name Equation, Default or Used Run Execution Values Auxiliary Coding Manpower Rate_TM 'Coding Manpower Rate' Auxiliary Code Non-Inspection Rate IF('Unit Test Practice_TM'=0 AND 
'Code Inspection Practice_TM'=0,'Coding Rate', IF('Code Inspection Practice_TM'<1 AND 'Unit Test Practice_TM'=0, (1-'Code Inspection Practice_TM')*'Coding Rate', (1-'Code Inspection Practice_TM')*('Unit Testing Rate'+(1-'Unit Test Practice_TM')*'Coding Rate'))) Auxiliary Coding Rate MIN('Coding Manpower Rate_TM'*'Current Productivity_TM'/'Fraction of Effort for Coding_TM','Tasks for Coding'/TIMESTEP) Auxiliary Coding Rate_cumulative 'Coding Rate' Auxiliary Coding Rate_ErrM 'Coding Rate' Auxiliary Coding Staff Curve 'Coding Staff Modification Curve'*'Raw Coding Staff Curve' Auxiliary Coding Staff Modification Curve GRAPH(CodingTime,0,120*DeltaTime_EfM*'Madachy DesignCode Calibration_EfM',{1,1, 1,1,1,1, 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1}) Auxiliary CodingTime TimeStep_EfM/(0.85*'Estimated Development Schedule') Level Cum Test Code Error Fix Effort 0 In Flow Integration Test Code Error Fix Manpower Rate.in 'Integration Test Code Error Fix Manpower Rate' Level Cum Test Design Error Fix Effort 0 In Flow Integration Test Design Error Fix Manpower Rate.in 'Integration Test Design Error Fix Manpower Rate' Level Cumulative Code Inspection Effort 0 In Flow Code Inspection Manpower Rate.in 'Code Inspection Manpower Rate' Level Cumulative Code Rework Effort 0 In Flow Code Rework Manpower Rate.in 'Code Rework Manpower Rate' Level Cumulative Code Tasks Inspected 0 In Flow Code Inspection Rate_cumulative.in 'Code Inspection Rate_cumulative' Level Cumulative Code Tasks Re-Unit-Tested 0 In Flow Re-Unit Testing Rate_cumulative.in 'Re-Unit Testing Rate_cumulative' Level Cumulative Code Tasks Unit-Tested 0 In Flow Unit Testing Rate_cumulative.in 'Unit Testing Rate_cumulative' Level Cumulative Coding Effort 0 In Flow Coding Manpower Rate.in 'Coding Manpower Rate' Level Cumulative Design Effort 0 In Flow Design Manpower Rate.in 'Design Manpower Rate' Level Cumulative Design Inspection Effort 0 In Flow Design Inspection Manpower Rate.in 'Design Inspection Manpower Rate' Level Cumulative Design Rework Effort 0 292 Table 33: Continued Type Name Equation, Default or Used Run Execution Values In Flow Design Rework Manpower Rate.in 'Design Rework Manpower Rate' Level Cumulative Design Tasks Inspected 0 In Flow Design Inspection Rate_cumulative.in 'Design Inspection Rate_cumulative' Level Cumulative Detected Code Errors 0 In Flow Code Error Detection Rate_cumulative.in 'Code Error Detection Rate_cumlative' Level Cumulative Detected Design Errors 0 In Flow Design Error Detection Rate_cumulative.in 'Design Error Detection Rate_cumulative' Level Cumulative Integration Test Failing Rate 0 In Flow Integration Test Failing Rate_cumulative.in 'Integration Test Failing Rate_cumulative' Level Cumulative Re-Code Tasks Inspected 0 In Flow Recode Inspection Rate_cumulative.in 'Recode Inspection Rate_cumulative' Level Cumulative Re-Design Tasks Inspected 0 In Flow Re-Design Inspection Rate_cumulative.in 'Re-Design Inspection Rate_cumulative' Level Cumulative Redesigned Tasks 0 In Flow Redesign Rate_cumulative.in 'Redesign Rate_cumulative' Level Cumulative Tasks Coded 0 In Flow Coding Rate_cumulative.in Coding Rate_cumulative' Level Cumulative Tasks Designed 0.001 (NOTE: This is intentionally set to a small value to avoid a divide by zero error.) 
In Flow Design Rate_cumulative.in Design Rate_cumulative' Auxiliary Cumulative Tasks Designed and ReDesigned_ErrM 'Cumulative Tasks Designed'+'Cumulative Redesigned Tasks' Level Cumulative Tasks Re-Coded 0 In Flow Re-Coding Rate_cumulative.in 'Re-Coding Rate_cumulative' Auxiliary Cumulative Tested Tasks 'Tasks Tested'+'Tasks RE-Tested'+'Rejected and Defered Tasks' Level Cumulative Testing Effort 0 In Flow Testing Manpower Rate.in Testing Manpower Rate' Level Cumulative Total Effort 0 In Flow Total Manpower Rate.in Total Manpower Rate' Auxiliary Current Productivity 'Max Productivity'*'Learning Curve'/100/'SCED Schedule Constraint_EfM' Auxiliary Current Productivity_TM 'Current Productivity' Auxiliary Defect Density 'Sampled Defect Density' Auxiliary Defect Density_ErrM 'Defect Density' Auxiliary Defect Density_TM 'Sampled Defect Density' Auxiliary Defer Task Acceleration Model GRAPH(AccelerationTime,0,6*DeltaTime_TM*'Madachy Test Calibration_TM', {0.1,0.105,0.12,0.15,0.18,0.195,0.2}) Auxiliary Delta_Time TIMESTEP/((STOPTIME-STARTTIME)) Auxiliary DeltaTime_EfM 'Delta_Time' Auxiliary DeltaTime_TM 'Delta_Time' 293 Table 33: Continued Type Name Equation, Default or Used Run Execution Values Auxiliary Design Defect Density Ratio 'Design Error Density in Code_TM'/'Total Injected Error Density' Constant Design Error Density 1.385 Auxiliary Design Error Density_ErrM 'Design Error Density' Auxiliary Design Error Density in Code 'Undetected Design Errors'*'Average Design Error Amplification_ErrM'/'Cumulative Tasks Designed and ReDesigned_ErrM' Auxiliary Design Error Density in Code_TM 'Design Error Density in Code' Auxiliary Design Error Detection Rate 'Design_ReDesign Inspection Rate_ErrM'*'Design Error Density_ErrM'*'Inspection Effectiveness_ErrM' Auxiliary Design Error Detection Rate_cumulative 'Design Error Detection Rate' Auxiliary Design Error Escape Rate Design Error Density_ErrM'*('Design_ReDesign Inspection Rate_ErrM'*(1-'Inspection Effectiveness_ErrM') +'Design_ReDesign Non-Inspection Rate_ErrM') Auxiliary Design Error Generation Rate 'Design Rate_ErrM'*'Design Error Density_ErrM' Auxiliary Design Error Pass and Amplification Rate 'Design Error Escape Rate'*'Average Design Error Amplification_ErrM' Level Design Errors 0 Out Flow Design Error Detection Rate.out Design Error Detection Rate' Out Flow Design Error Escape Rate.out Design Error Escape Rate' In Flow Design Error Generation Rate.in Design Error Generation Rate' In Flow Redesign Error Generation Rate.in 'Redesign Error Generation Rate' Constant Design Inspection Delay Time 10<<da>> Auxiliary Design Inspection Delay Time_TM 'Design Inspection Delay Time' Auxiliary Design Inspection Manpower Rate 'Design Inspection Rate_EfM'*'Inspection Effort per Task_EfM' Auxiliary Design Inspection Manpower Rate_TE 'Design Inspection Manpower Rate' Constant Design Inspection Practice 1 Auxiliary Design Inspection Practice_TM 'Design Inspection Practice' Auxiliary Design Inspection Rate DELAYPPL('Design Inspection Practice_TM'*'Design Rate','Design Inspection Delay Time_TM') Auxiliary Design Inspection Rate_cumulative 'Design Inspection Rate' Auxiliary Design Inspection Rate_EfM 'Design Inspection Rate' Auxiliary Design Manpower Rate 'Manpower Pool'*'Design Staffing Curve'/TIMESTEP Auxiliary Design Manpower Rate_TE 'Design Manpower Rate' Auxiliary Design Manpower Rate_TM 'Design Manpower Rate' Auxiliary Design Non-Inspection Rate (1-'Design Inspection Practice_TM')*'Design Rate' Auxiliary Design Rate Design Manpower Rate_TM'*'Current 
Productivity_TM'/'Fraction of Effort for Design_TM' Auxiliary Design Rate_cumulative 'Design Rate' Auxiliary Design Rate_ErrM 'Design Rate' Constant Design Rework Effort per Error 0.055 Auxiliary Design Rework Manpower Rate 'Design Rework Rate_EfM'*'Design Rework Effort per Error' 294 Table 33: Continued Type Name Equation, Default or Used Run Execution Values Auxiliary Design Rework Manpower Rate_TE 'Design Rework Manpower Rate' Auxiliary Design Rework Rate DELAYPPL('Detected Design Errors'/10,7*TIMESTEP)/TIMESTEP Auxiliary Design Rework Rate_EfM 'Design Rework Rate' Auxiliary Design Staff Modification Curve GRAPH(DesignTime,0,120*DeltaTime_EfM*'Madachy DesignCode Calibration_EfM',{1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1}) Auxiliary Design Staffing Curve 'Design Staff Modification Curve'*'Raw Design Staffing Curve' Auxiliary Design_ReDesign Inspection Rate_ErrM 'Design Inspection Rate'+'Redesign Inspection Rate' Auxiliary Design_ReDesign Non-Inspection Rate_ErrM 'Design Non-Inspection Rate'+'Redesign Non-Inspection Rate' Auxiliary DesignTime (TimeStep_EfM)/(0.85*'Estimated Development Schedule') Level Detected Code Errors 0 In Flow Code Error Detection Rate.in Code Error Detection Rate' Out Flow Code Error Rework Rate.out Code Error Rework Rate' Level Detected Design Errors 0 In Flow Design Error Detection Rate.in 'Design Error Detection Rate' Out Flow Design Rework Rate.out 'Design Rework Rate' Level Detected IT Errors 0 In Flow Errors Found in IT Rate_cumulative.in 'Errors Found in IT Rate_cumulative' Out Flow Integration Test Code Error Rework Rate.out 'Integration Test Code Error Rework Rate' Out Flow Integration Test Design Error Rework Rate.out 'Integration Test Design Error Rework Rate' Constant Disable Test Effort Adjustment FALSE Auxiliary Disable Test Effort Adjustment_TEAM 'Disable Test Effort Adjustment' Constant Enable IT Feedback 1 Auxiliary Enable IT Feedback_ErrM 'Enable IT Feedback' Auxiliary Enable IT Feedback_TM 'Enable IT Feedback' Constant Enable Non-Unit Test Error Feedthrough 1 Auxiliary Enable Non-Unit Test Error Feedthrough_ErrM 'Enable Non-Unit Test Error Feedthrough' Level Errors Escaping Integration Test 0 In Flow Integration Test Error Escape Rate.in 'Integration Test Error Escape Rate' Auxiliary Errors Fixed in Test_TEAM 'Errors Found in IT' Level Errors Found in IT 0 In Flow Integration RE-Test Errors Found Rate.in 'Integration RE-Test Errors Found Rate' In Flow Integration Test Errors Found Rate.in 'Integration Test Errors Found Rate' Auxiliary Errors Found in IT Rate_cumulative 'Integration RE-Test Errors Found Rate'+'Integration Test Errors Found Rate' 295 Table 33: Continued Type Name Equation, Default or Used Run Execution Values Reservoir Escaped Errors 0 In Flow Code Error Escape Rate.in 'Code Error Escape Rate' Out Flow Integration RE-Test Errors Found Rate.out 'Integration RE-Test Errors Found Rate' Out Flow Integration Test Error Escape Rate.out 'Integration Test Error Escape Rate' Out Flow Integration Test Errors Found Rate.out 'Integration Test Errors Found Rate' Auxiliary Escaped Errors_TEAM 'Escaped Errors' Auxiliary Est Dev Schedule_TM 'Estimated Development Schedule' Auxiliary Estimated Development Schedule 20*2.5/'SCED Schedule Constraint_EfM'*('Calibrated COCOMO Constant_EfM'*(0.06*'Job Size_EfM') ^1.2)^0.32 Auxiliary Estimated 
Development Schedule_TEAM 'Estimated Development Schedule' Auxiliary Estimated Development Schedule_TM INTEGER('Estimated Development Schedule') Auxiliary Estimated Test Schedule 0.35*20*2.5/'SCED Schedule Constraint_EfM'*('Test Effort Adjustment_EfM'*'Calibrated COCOMO Constant_EfM'*(0.06*'Job Size_EfM')^1.2)^0.32 Auxiliary Estimated Test Schedule_TEAM 'Estimated Test Schedule' Auxiliary Estimated Test Schedule_TM 'Estimated Test Schedule' Auxiliary Failure Intensity Decay Parameter_TM 'Task Failure Intensity Decay Parameter' Auxiliary Final Defect Density IF('Tasks Tested_TEAM'>1,'Errors Fixed in Test_TEAM'/'Tasks Tested_TEAM',0) Auxiliary Fraction Done 'Tasks Tested_EfM'/'Job Size_EfM' Constant Fraction of Design Errors Requiring Full Redesign 0.01 Auxiliary Fraction of Design Errors Requiring Full Redesign_TM 'Fraction of Design Errors Requiring Full Redesign' Constant Fraction of Effort for Coding 0.2657 (Note: Switched value is 0.454.) Auxiliary Fraction of Effort for Coding_TM 'Fraction of Effort for Coding' Constant Fraction of Effort for Design 0.454 (Note: Switched value is 0.2657.) Auxiliary Fraction of Effort for Design_TM 'Fraction of Effort for Design' Constant Fraction of Effort for Testing 0.255 Auxiliary Fraction of Effort for Testing_TM 'Fraction of Effort for Testing' Constant Fraction of Tests that Fail 0.11 Auxiliary Hiring and Manpower Allocation Personnel Modification Factor_EfM'*PULSE((('SCED Schedule Constraint_EfM'^2) *1.46)* (20*'Calibrated COCOMO Constant_EfM'*(0.06*'Job Size_EfM')^1.2)/(20*2.5*('Calibrated COCOMO Constant_EfM'*(0.06*'Job Size_EfM')^1.2)^0.32),STARTTIME,99999*TIMESTEP) Constant Inspection Effectiveness 0.6 Auxiliary Inspection Effectiveness_ErrM 'Inspection Effectiveness' Constant Inspection Effort per Task 0.19 Auxiliary Inspection Effort per Task_EfM 'Inspection Effort per Task' Auxiliary Integration RE-Test Errors Found Rate MIN('IT RE-Test Rate','Escaped Errors'/TIMESTEP) Auxiliary Integration RE-Test Failing Rate MIN('Task Failure Rate Model'*'Integration RE-Testing Rate','Tasks Ready for RE-Test'/TIMESTEP) 296 Table 33: Continued Type Name Equation, Default or Used Run Execution Values Auxiliary Integration RE-Test Passing Rate MIN((1-'Task Failure Rate Model')*'Integration RE- Testing Rate','Tasks Ready for RE-Test'/TIMESTEP) Auxiliary Integration RE-Testing Rate (('Integration Test Code Error Fix Manpower Rate_TM'+'Integration Test Design Error Fix Manpower Rate_TM')*'Current Productivity_TM')/'Fraction of Effort for Testing_TM'/'Testing Effort Adjustment_TM' Auxiliary Integration RE-Testing Rate_ErrM 'Integration RE-Testing Rate' Auxiliary Integration Test Code Error Fix Manpower Rate 'Integration Test Code Error Rework Rate_EfM'*'Testing Effort per Error_EfM' Auxiliary Integration Test Code Error Fix Manpower Rate _TM 'Integration Test Code Error Fix Manpower Rate' Auxiliary Integration Test Code Error Fix Rate Integration Test Code Error Fix Manpower Rate_TM'*'Current Productivity_TM'/'Fraction of Effort for Testing_TM' Auxiliary Integration Test Code Error Rework Rate MIN(DELAYPPL('Only Recode Fraction_ErrM'*'Detected IT Errors'/'IT Rework Rate Reduction Factor_ErrM',7<<da>>)/TIMESTEP,'Only Recode Fraction_ErrM'*'Detected IT Errors'/TIMESTEP) Auxiliary Integration Test Code Error Rework Rate_EfM 'Integration Test Code Error Rework Rate' Auxiliary Integration Test Design Error Fix Manpower Rate 'Integration Test Design Error Rework Rate_EfM'*'Testing Effort per Error_EfM' Auxiliary Integration Test Design Error Fix Manpower Rate_TM 
'Integration Test Design Error Fix Manpower Rate' Auxiliary Integration Test Design Error Fix Rate 'Integration Test Design Error Fix Manpower Rate_TM'*'Current Productivity_TM'/'Fraction of Effort for Testing_TM' Auxiliary Integration Test Design Error Rework Rate MIN(DELAYPPL('Redesign Error Fraction_ErrM'*'Detected IT Errors'/'IT Rework Rate Reduction Factor_ErrM',7<<da>>)/TIMESTEP,'Redesign Error Fraction_ErrM'*'Detected IT Errors'/TIMESTEP) Auxiliary Integration Test Design Error Rework Rate_EfM 'Integration Test Design Error Rework Rate' Constant Integration Test Effectiveness 0.85 Auxiliary Integration Test Effectiveness_ErrM 'Integration Test Effectiveness' Auxiliary Integration Test Error Escape Rate MIN((1-'Integration Test Effectiveness_ErrM')*'IT Rate','Escaped Errors'/TIMESTEP) Auxiliary Integration Test Errors Found Rate MIN('Integration Test Effectiveness_ErrM'*'IT Rate','Escaped Errors'/TIMESTEP) Auxiliary Integration Test Failing Rate MIN('Enable IT Feedback_TM'*'Task Failure Rate Model'*'Integration Testing Rate','Tasks Ready for Test'/TIMESTEP) Auxiliary Integration Test Failing Rate_cumulative 'Integration Test Failing Rate'+'Integration RE-Test Failing Rate' Auxiliary Integration Test Passing Rate MIN((1-'Task Failure Rate Model')*'Integration Testing Rate','Tasks Ready for Test'/TIMESTEP) Auxiliary Integration Testing Rate ('Testing Manpower Rate_TM'*'Current Productivity_TM')/'Fraction of Effort for Testing_TM'/'Testing Effort Adjustment_TM' Auxiliary Integration Testing Rate_ErrM 'Integration Testing Rate' 297 Table 33: Continued Type Name Equation, Default or Used Run Execution Values Auxiliary IT Rate ('Defect Density_ErrM'*'Integration Testing Rate_ErrM'/'Test Effort Adjustment_ErrM') Auxiliary IT RE-Test Rate ('Defect Density_ErrM'*'Integration RE-Testing Rate_ErrM'/'Test Effort Adjustment_ErrM') Constant IT Rework Rate Reduction Factor 10 Auxiliary IT Rework Rate Reduction Factor_ErrM 'IT Rework Rate Reduction Factor' Constant Job Size 1133.33 Auxiliary Job Size_EfM 'Job Size' Auxiliary Job Size_TEAM 'Job Size' Auxiliary Job Size_TM 'Job Size' Auxiliary Learning Curve GRAPHCURVE('Fraction Done',0,1.0, {100.0,100.0,100.0,100.0,100.0,100.0,100.0,100.0,100.0,10 0.0,100.0,100.0,100.0}) Auxiliary Madachy DesignCode Calibration 'Madachy DesignCode Dx_Dataspread'/'Powersims Dx DataSpread for DesignCode Staff Curve' Auxiliary Madachy DesignCode Calibration_EfM 'Madachy DesignCode Calibration' Constant Madachy DesignCode Dx_Dataspread 0.0588210 (NOTE: This is the default value required to match Madachy’s staffing curve data. This value was divided by (4) for the test cases using staffing curves based on project’s A and C.) Auxiliary Madachy Test Calibration 'Madachy Test Dx_Dataspread'/'Powersims Dx DataSpread for Test Staff Curve' Auxiliary Madachy Test Calibration_EfM 'Madachy Test Calibration' Auxiliary Madachy Test Calibration_TM 'Madachy Test Calibration' Constant Madachy Test Dx_Dataspread 0.14286450 (NOTE: This is the default value required to match Madachy’s staffing curve data. This value was divided by (4) for the test cases using staffing curves based on project’s A and C.) 
Level Manpower Pool 0 Out Flow Attrition Rate.out 'Attrition Rate' In Flow Hiring and Manpower Allocation.in 'Hiring and Manpower Allocation' Auxiliary Max Productivity 'Job Size_EfM'/(20*'Calibrated COCOMO Constant_EfM'*(0.06*'Job Size_EfM')^1.2) Auxiliary Nominal Test Schedule 0.35*20*2.5/'SCED Schedule Constraint_EfM'*('Calibrated COCOMO Constant_EfM'*(0.06*'Job Size_EfM')^1.2)^0.32 Auxiliary Non-Unit Testing Rate (1-'Unit Test Practice_TM')*'Coding Rate' Auxiliary Non-Unit Testing Rate_ErrM 'Non-Unit Testing Rate' Auxiliary Only Recode Fraction (1-'Fraction of Design Errors Requiring Full Redesign_TM')*'Design Defect Density Ratio'+'Code Defect Density Ratio' Auxiliary Only Recode Fraction_ErrM 'Only Recode Fraction' Constant Personnel Modification Factor 70/43.65 Auxiliary Personnel Modification Factor_EfM 'Personnel Modification Factor' 298 Table 33: Continued Type Name Equation, Default or Used Run Execution Values Auxiliary Raw Coding Staff Curve GRAPH(CodingTime,0,120*DeltaTime_EfM*'Madachy DesignCode Calibration_EfM',{0,0.003136275, 0.00627255,0.009408825,0.0125451,0.015681375,0.01847 2272,0.02061079,0.023777407,0.026999579,0.030002514, 0.033032035,0.036103419,0.039230735,0.042431525,0.04 5727697,0.049145976,0.052720137,0.056493134,0.06051 9174,0.06486941,0.069639031,0.074957753,0.081005126, 0.088041879,0.096457562,0.106862986,0.120392643,0.13 8357763,0.139165407,0.126103412,0.11498791,0.102341 137,0.10641176,0.125747072,0.16133611,0.204404846,0. 260071303,0.31741566,0.339996169,0.320112509,0.2748 04649,0.245351001,0.202929977,0.177181916,0.1597054 58,0.147041321,0.13746554,0.129989005,0.124004706,0. 119118751,0.115065051,0.11165705,0.108759744,0.1062 73514,0.104123036,0.102250329,0.098768996,0.0955369 28,0.092516003,0.089675235,0.08698919,0.084436784,0. 082000382,0.079665133,0.077418434,0.075249529,0.073 149175,0.071109388,0.069123238,0.067184677,0.065288 405,0.063429755,0.061604606,0.059809302,0.058040589, 0.056295562,0.054571616,0.052866413,0.051177845,0.04 9504009,0.047843183,0.046193807,0.044862893,0.06250 1381,0.079465466,0.096431939,0.113669402,0.12827080 7,0.129486799,0.130667551,0.131848302,0.133029054,0. 134209806,0.135399991,0.136710715,0.137254902,0.137 254902,0.137254902,0.128602589,0.118932358,0.109262 126,0.099591894,0.089921662,0.080251431,0.070581199, 0.060910967,0.055821372,0.055821372,0.055821372,0.05 5821372,0.055821372,0.055821372,0.055821372,0.05582 1372,0.055821372,0.055821372,0.055821372,0.05582137 2,0.055821372,0.022985271}) 299 Table 33: Continued Type Name Equation, Default or Used Run Execution Values Auxiliary Raw Design Staffing Curve GRAPH(DesignTime,0,120*DeltaTime_EfM*'Madachy DesignCode Calibration_EfM',{0,0.001568137, 0.003136275,0.004704412,0.00627255,0.007840687,0.009 258593,0.010392729,0.011226755,0.01189812,0.0137347 09,0.017168544,0.020606496,0.02435731,0.028420986,0. 032797525,0.037486926,0.04248919,0.047804317,0.0534 32305,0.059373156,0.06562687,0.072193446,0.07907288 5,0.086265186,0.093770349,0.101588375,0.110286121,0. 
107263537,0.115755662,0.115661856,0.105764281,0.094 368562,0.104563187,0.127890529,0.166221994,0.207858 711,0.254427027,0.277911947,0.262695404,0.274306722, 0.254698959,0.225815788,0.185634262,0.159248207,0.14 0592863,0.126664422,0.115894127,0.107337395,0.10039 272,0.09465804,0.089855183,0.085785183,0.082301561,0 .079294679,0.076680588,0.074393957,0.071058658,0.068 022916,0.065242243,0.062680223,0.06030678,0.0580968 38,0.056029315,0.054086371,0.052252803,0.050515577,0 .048863458,0.047286703,0.045776827,0.044326405,0.042 92891,0.041578582,0.04027032,0.038999587,0.03776233 8,0.036554952,0.03537418,0.034217095,0.033081057,0.0 31963676,0.030862788,0.02977643,0.028901505,0.04024 6982,0.051155763,0.062067576,0.078268072,0.11126361 1,0.136010375,0.16004268,0.184074986,0.208107291,0.2 32139596,0.256363916,0.283041592,0.294117647,0.2941 17647,0.294117647,0.275576977,0.254855052,0.2341331 27,0.213411202,0.192689277,0.171967352,0.151245426,0 .130523501,0.119617225,0.119617225,0.119617225,0.119 617225,0.119617225,0.119617225,0.119617225,0.119617 225,0.119617225,0.119617225,0.119617225,0.119617225, 0.119617225,0.01313444}) Auxiliary Powersims Dx DataSpread for DesignCode Staff Curve 4.75*7*Delta_Time Auxiliary Powersims Dx DataSpread for Test Staff Curve 4.75*7*Delta_Time 300 Table 33: Continued Type Name Equation, Default or Used Run Execution Values Auxiliary Raw Test Staff Curve GRAPH(TestingTime,0,120*DeltaTime_EfM*'Madachy Test Calibration_EfM',{0,0.005401362,0.010802725, 0.016204087,0.021605449,0.027006812,0.032903783,0.03 9736904,0.045842035,0.052054274,0.0577727,0.0629640 95,0.067507489,0.071374755,0.074531351,0.076944979,0 .07859132,0.079448994,0.079499838,0.078731463,0.0771 31999,0.074690604,0.071398261,0.067248406,0.0622337 79,0.05634856,0.049587486,0.041320747,0.039101644,0. 
038488411,0.059710178,0.092526826,0.126801834,0.124 684235,0.130732869,0.140596399,0.139815612,0.127211 32,0.101329293,0.092872536,0.095124806,0.164980019,0 .222601684,0.305269644,0.357506031,0.393701905,0.420 326738,0.440684423,0.456716746,0.469637401,0.480245 565,0.489087491,0.496549935,0.502915133,0.508392806, 0.513142499,0.51728768,0.511393102,0.50495099,0.4980 43911,0.490739201,0.483092304,0.475149302,0.4669488 38,0.458523532,0.449901117,0.441105319,0.432156556,0 .423072494,0.413868495,0.404557985,0.395152744,0.385 663154,0.376098401,0.36646664,0.356775137,0.3470303 89,0.337238219,0.327403865,0.317532052,0.307627051,0 .297692725,0.287732579,0.279672612,0.389867024,0.495 887211,0.601902011,0.702357315,0.760465579,0.734502 826,0.709289769,0.684076712,0.658863655,0.633650598, 0.608236092,0.580247693,0.568627451,0.568627451,0.56 8627451,0.532782156,0.492719767,0.452657379,0.41259 499,0.372532602,0.332470213,0.292407824,0.252345436, 0.231259968,0.231259968,0.231259968,0.231259968,0.23 1259968,0.231259968,0.231259968,0.231259968,0.23125 9968,0.231259968,0.231259968,0.231259968,0.23125996 8,0.141195234}) Auxiliary Re-Coding Rate_cumulative 'Redesign Recoding Rate'+'Review Board Delay and Recode Rate' Auxiliary Re-Design Inspection Rate_cumulative 'Redesign Inspection Rate' Auxiliary Re-Non-Unit Testing Rate (1-'Unit Test Practice_TM')*('Redesign Recoding Rate'+'Review Board Delay and Recode Rate') Auxiliary Re-Non-Unit Testing Rate_ErrM 'Re-Non-Unit Testing Rate' Auxiliary Re-Unit Testing Rate MIN(DELAYPPL('Unit Test Practice_TM'*('Redesign Recoding Rate'+'Review Board Delay and Recode Rate'),'Unit Test Delay Time_TM'),'Tasks Recoded'/TIMESTEP) Auxiliary Re-Unit Testing Rate_cumulative 'Re-Unit Testing Rate' Auxiliary Re-worked Code Error Generation Rate 'Enable IT Feedback_ErrM'*'Code Error Density_ErrM'*'Review Board Delay and Recode Rate_ErrM' Auxiliary Recode Inspection Rate_cumulative 'Recode Inspection Rate' Auxiliary Recode Inspection Rate MIN(DELAYPPL('Code Inspection Practice_TM'*('Re- Unit Testing Rate'+(1-'Unit Test Practice_TM')*('Redesign Recoding Rate'+'Review Board Delay and Recode Rate')),'Code Inspection Delay Time_TM'),'Tasks Re-Unit Tested'/TIMESTEP) 301 Table 33: Continued Type Name Equation, Default or Used Run Execution Values Auxiliary Recode Non-Inspection Rate IF('Unit Test Practice_TM'=0 AND 'Code Inspection Practice_TM'=0,('Redesign Recoding Rate'+'Review Board Delay and Recode Rate'), IF('Code Inspection Practice_TM'<1 AND 'Unit Test Practice_TM'=0, (1-'Code Inspection Practice_TM')*('Redesign Recoding Rate'+'Review Board Delay and Recode Rate'),(1-'Code Inspection Practice_TM')*('Re-Unit Testing Rate'+(1-'Unit Test Practice_TM')*('Redesign Recoding Rate'+'Review Board Delay and Recode Rate')))) Auxiliary Redesign Error Fraction 'Fraction of Design Errors Requiring Full Redesign_TM'*'Design Defect Density Ratio' Auxiliary Redesign Error Fraction_ErrM 'Redesign Error Fraction' Auxiliary Redesign Error Generation Rate IF('Errors Found in IT'>0,'Enable IT Feedback_ErrM'*'Design Error Density_ErrM'*'Redesign Rate_ErrM',0/TIMESTEP) Auxiliary Redesign Inspection Rate MIN(DELAYPPL('Design Inspection Practice_TM'*'Review Board Delay and Redesign Rate','Design Inspection Delay Time_TM'),'Tasks Redesigned'/TIMESTEP) Auxiliary Redesign Non-Inspection Rate (1-'Design Inspection Practice_TM')*'Review Board Delay and Redesign Rate' Auxiliary Redesign Rate_cumulative 'Review Board Delay and Redesign Rate' Auxiliary Redesign Rate_ErrM 'Review Board Delay and Redesign Rate' 
Auxiliary Redesign Recoding Rate MIN('Integration Test Design Error Fix Rate','Tasks for Recoding'/TIMESTEP) Level Rejected and Defered Tasks 0 In Flow Review Board Task Reject and Defer Rate.in 'Review Board Task Reject and Defer Rate' Reservoir Requirements 0 Out Flow Design Rate.out 'Design Rate' In Flow Requirements Generation Rate.in 'Requirements Generation Rate' Auxiliary Requirements Generation Rate PULSE('Job Size_TM',STARTTIME,9999*TIMESTEP) Constant Resource Leveling 0 Auxiliary Resource Leveling_EfM 'Resource Leveling' Constant Review Board Action Delay Time 7<<da>> Auxiliary Review Board Decision Percent IF('Use Constant Reject & Defer Rate Model_TM','Average Reject and Defer Percent _TM'/TIMESTEP,'Review Board Reject and Percent Rate Model'/TIMESTEP) Auxiliary Review Board Delay and Recode Rate MIN(DELAYPPL('Integration Test Code Error Fix Rate','Review Board Delay Time_TM'),'Only Recode Fraction'*'Accepted Tasks'/TIMESTEP) Auxiliary Review Board Delay and Recode Rate_ErrM 'Review Board Delay and Recode Rate' Auxiliary Review Board Delay and Redesign Rate MIN(DELAYPPL('Integration Test Design Error Fix Rate','Review Board Delay Time_TM'),'Redesign Error Fraction'*'Accepted Tasks'/TIMESTEP) Constant Review Board Delay Time 7<<da>> 302 Table 33: Continued Type Name Equation, Default or Used Run Execution Values Auxiliary Review Board Delay Time_TM 'Review Board Delay Time' Auxiliary Review Board Reject and Percent Rate Model DELAYPPL('Defer Task Acceleration Model','Review Board Delay Time_TM') Auxiliary Review Board Task Accept Rate MIN('Tasks Failing'*(1/TIMESTEP-'Review Board Decision Percent'),'Tasks Failing'/TIMESTEP) Auxiliary Review Board Task Reject and Defer Rate MIN('Tasks Failing'*'Review Board Decision Percent','Tasks Failing'/TIMESTEP) Level Reworked Code Errors 0 In Flow Code Error Rework Rate.in 'Code Error Rework Rate' Level Reworked Design Errors 0 In Flow Design Rework Rate.in 'Design Rework Rate' Auxiliary Runmax Defect Density RUNMAX('Running Defect Density') Auxiliary Running Defect Density IF('Tasks Ready For Test_TEAM'>1,'Escaped Errors_TEAM'/'Tasks Ready For Test_TEAM',0) Level Sampled Defect Density 0 In Flow Sampler.in 'Sampler' Auxiliary Sampler PULSE('Running Defect Density',STARTTIME+TIMESTEP*(0.65*'Estimated Development Schedule_TEAM'-1),9999*TIMESTEP) Constant SCED Schedule Constraint 1 Auxiliary SCED Schedule Constraint_EfM 'SCED Schedule Constraint' Constant Task Failure Intensity Decay Parameter 30<<da>> Auxiliary Task Failure Rate Model IF('Use Constant Task Failure Rate Model_TM', ('Defect Density_TM'/3.3)*'Fraction of Tests that Fail', IF('Time for Decay'<0,('Defect Density_TM'/3.3)* 'Fraction of Tests that Fail',('Defect Density_TM'/3.3)* 'Fraction of Tests that Fail'*EXP(-('Time for Decay'* 1<<da>>)/'Failure Intensity Decay Parameter_TM'))) Reservoir Tasks Coded 0 In Flow Coding Rate.in 'Coding Rate' Out Flow Non-Unit Testing Rate.out 'Non-Unit Testing Rate' Out Flow Unit Testing Rate.out 'Unit Testing Rate' Reservoir Tasks Designed 0 Out Flow Design Inspection Rate.out 'Design Inspection Rate' Out Flow Design Non-Inspection Rate.out 'Design Non-Inspection Rate' In Flow Design Rate.in 'Design Rate' Reservoir Tasks Failing 0 In Flow Integration RE-Test Failing Rate.in 'Integration RE-Test Failing Rate' In Flow Integration Test Failing Rate.in 'Integration Test Failing Rate' Out Flow Review Board Task Accept Rate.out 'Review Board Task Accept Rate' Out Flow Review Board Task Reject and Defer Rate.out 'Review Board Task Reject and Defer Rate' 
303 Table 33: Continued Type Name Equation, Default or Used Run Execution Values Reservoir Tasks for Coding 0 Out Flow Coding Rate.out 'Coding Rate' In Flow Design Inspection Rate.in 'Design Inspection Rate' In Flow Design Non-Inspection Rate.in 'Design Non-Inspection Rate' Reservoir Tasks for Recoding 0 In Flow Redesign Inspection Rate.in 'Redesign Inspection Rate' In Flow Redesign Non-Inspection Rate.in 'Redesign Non-Inspection Rate' Out Flow Redesign Recoding Rate.out 'Redesign Recoding Rate' Reservoir Tasks RE-Tested 0 In Flow Integration RE-Test Passing Rate.in 'Integration RE-Test Passing Rate' Reservoir Tasks Re-Unit Tested 0 In Flow Re-Non-Unit Testing Rate.in 'Re-Non-Unit Testing Rate' In Flow Re-Unit Testing Rate.in 'Re-Unit Testing Rate' Out Flow Recode Inspection Rate.out 'Recode Inspection Rate' Out Flow Recode Non-Inspection Rate.out 'Recode Non-Inspection Rate' Reservoir Tasks Ready for RE-Test 0 Out Flow Integration RE-Test Failing Rate.out 'Integration RE-Test Failing Rate' Out Flow Integration RE-Test Passing Rate.out 'Integration RE-Test Passing Rate' In Flow Recode Inspection Rate.in 'Recode Inspection Rate' In Flow Recode Non-Inspection Rate.in 'Recode Non-Inspection Rate' Reservoir Tasks Ready for Test 0 In Flow Code Inspection Rate.in 'Code Inspection Rate' In Flow Code Non-Inspection Rate.in 'Code Non-Inspection Rate' Out Flow Integration Test Failing Rate.out 'Integration Test Failing Rate' Out Flow Integration Test Passingn Rate.out 'Integration Test Passing Rate' Auxiliary Tasks Ready For Test_TEAM 'Tasks Ready for Test' Reservoir Tasks Recoded 0 Out Flow Re-Non-Unit Testing Rate.out 'Re-Non-Unit Testing Rate' Out Flow Re-Unit Testing Rate.out 'Re-Unit Testing Rate' In Flow Redesign Recoding Rate.in 'Redesign Recoding Rate' In Flow Review Board Delay and Recode Rate.in 'Review Board Delay and Recode Rate' Reservoir Tasks Redesigned 0 Out Flow Redesign Inspection Rate.out 'Redesign Inspection Rate' Out Flow Redesign Non-Inspection Rate.out 'Redesign Non-Inspection Rate' In Flow Review Board Delay and Redesign Rate.in 'Review Board Delay and Redesign Rate' Reservoir Tasks Tested 0 In Flow Integration Test Passing Rate.in 'Integration Test Passing Rate' Auxiliary Tasks Tested_EfM 'Tasks Tested' Auxiliary Tasks Tested_TEAM 'Tasks Tested' 304 Table 33: Continued Type Name Equation, Default or Used Run Execution Values Level Tasks Unit Tested 0 Out Flow Code Inspection Rate.out 'Code Inspection Rate' Out Flow Code Non-Inspection Rate'.out 'Code Non-Inspection Rate' In Flow Non-Unit Testing Rate.in 'Non-Unit Testing Rate' In Flow Unit Testing Rate.in 'Unit Testing Rate' Auxiliary Test Effort Adjustment IF('Disable Test Effort Adjustment_TEAM',1,(0.0803*(20*'Calibrated COCOMO Constant_TEAM'*(0.06*'Job Size_TEAM')^1.2)+'Job Size_TEAM'*'Sampled Defect Density'*'Testing Effort per Error_TEAM')/(0.0803*(20*'Calibrated COCOMO Constant_TEAM'*(0.06*'Job Size_TEAM')^1.2)+'Job Size_TEAM'*3*'Testing Effort per Error_TEAM')) Auxiliary Test Effort Adjustment_EfM 'Test Effort Adjustment' Auxiliary Test Effort Adjustment_ErrM 'Test Effort Adjustment' Auxiliary Testing Effort Adjustment_TM 'Test Effort Adjustment' Auxiliary Test Schedule Adjustment IF('Disable Test Effort Adjustment_TEAM',1,'Estimated Test Schedule_TEAM'/(0.35*'Estimated Development Schedule_TEAM')) Auxiliary Test Staff Curve 'Raw Test Staff Curve'*'Test Staff Modification Curve' Auxiliary Test Staff Modification Curve GRAPH(TestingTime,0,120*DeltaTime_EfM*'Madachy Test Calibration_EfM',{1,1,1,1,1,1,1,1,1,1,1,1,1,1, 
1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1}) Auxiliary Testing Effort per Error 0.16*'Calibrated COCOMO Constant' Auxiliary Testing Effort per Error_EfM 'Testing Effort per Error' Auxiliary Testing Effort per Error_TEAM 'Testing Effort per Error' Auxiliary Testing Manpower Level Adjustment IF('Disable Test Effort Adjustment_TEAM',1,'Test Effort Adjustment'/'Test Schedule Adjustment') Auxiliary Testing Manpower Level Adjustment_EfM 'Testing Manpower Level Adjustment' Auxiliary Testing Manpower Rate 'Manpower Pool'*'Test Staff Curve'*'Testing Manpower Level Adjustment_EfM'/TIMESTEP Auxiliary Testing Manpower Rate_TE 'Testing Manpower Rate' Auxiliary Testing Manpower Rate_TM 'Testing Manpower Rate' Auxiliary TestingTime (TimeStep_EfM-0.65*'Estimated Development Schedule')/'Estimated Test Schedule' Auxiliary Time for Decay ('TimeStep#_TM'-0.85*'Estimated Development Schedule_TM') Auxiliary TimeStep#_TM TimeStepNumber Auxiliary TimeStep_EfM TimeStepNumber Auxiliary TimeStep_TM TimeStepNumber Auxiliary TimeStep_value TIMESTEP Auxiliary TimeStepFraction (TIME-STARTTIME)/(STOPTIME-STARTTIME) Auxiliary TimeStepNumber (TIME-STARTTIME)/(TIMESTEP) 305 Table 33: Continued Type Name Equation, Default or Used Run Execution Values Auxiliary Total Injected Error Density 'Design Error Density in Code_TM'+'Code Error Density_TM' Auxiliary Total Manpower Rate 'Code Inspection Manpower Rate_TE'+'Code Rework Manpower Rate_TE'+'Coding Manpower Rate_TE'+'Design Inspection Manpower Rate_TE'+'Design Manpower Rate_TE'+'Design Rework Manpower Rate_TE'+'Testing Manpower Rate_TE' Level Undetected Design Errors 0 In Flow Design Error Escape Rate.in 'Design Error Escape Rate' Auxiliary Unit and Re-Unit Testing Rate_ErrM 'Unit Testing Rate'+'Re-Unit Testing Rate' Constant Unit Test Delay Time 10<<da>> Auxiliary Unit Test Delay Time_TM 'Unit Test Delay Time' Constant Unit Test Effectiveness 1 Auxiliary Unit Test Effectiveness_ErrM 'Unit Test Effectiveness' Constant Unit Test Practice 1 Auxiliary Unit Test Practice_TM 'Unit Test Practice' Auxiliary Unit Testing Rate DELAYPPL('Unit Test Practice_TM'*'Coding Rate','Unit Test Delay Time_TM') Auxiliary Unit Testing Rate_cumulative 'Unit Testing Rate' Constant Use Constant Reject & Defer Rate Model TRUE Auxiliary Use Constant Reject & Defer Rate Model_TM 'Use Constant Reject & Defer Rate Model' Constant Use Constant Task Failure Rate Model FALSE Auxiliary Use Constant Task Failure Rate Model_TM 'Use Constant Task Failure Rate Model' 306 A P P E N D I X : E – D Y N A M I C S M O D E L I N G D A T A E.1 Introduction This appendix provides the baseline test matrix and the results from numerically comparing Madachy’s original iThink implementation of his inspection-based model to the Powersim implementation used here. The comparison results are computed as the iThink value minus the equivalent Powersim value. The small numerical differences are likely caused by subtle details between reservoirs and levels in Powersim versus iThink. The issue has to do with when a level or reservoir goes to zero or below in Powersim due to out flow rates pulling them down, and how iThink implements this. During data runs it appeared that iThink simply returned zeros, while Powersim went below zero. 
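A minimal sketch of this difference, written here in Perl purely for illustration (the single reservoir, the constant outflow demand, the time step value, and the variable names are assumptions made for the example and are not part of the Powersim or iThink models), shows an unclamped update driving a level below zero, while an outflow limited by a MIN term of the form used in the Table 33 equations drains the level to exactly zero and holds it there:

#!/usr/bin/perl -w
use strict;
use List::Util qw(min);

my $dt     = 1.0;   # stands in for TIMESTEP
my $demand = 0.4;   # requested outflow per time step
my ($unclamped, $limited) = (1.0, 1.0);   # same starting level for both variants

for my $step (1 .. 4) {
    # Unclamped update: the outflow keeps pulling the level down, so it goes negative.
    $unclamped -= $demand * $dt;

    # MIN-limited outflow (the first-order control pattern used in Table 33):
    # never drain more in one step than the level can supply during that step.
    my $outflow = min($demand, $limited / $dt);
    $limited   -= $outflow * $dt;

    printf "step %d: unclamped = %5.2f   MIN-limited = %5.2f\n",
           $step, $unclamped, $limited;
}

In the model itself the same pattern appears in terms such as MIN(..., 'Code Errors'/TIMESTEP) in Table 33, so each out flow is capped by what its reservoir can supply within a single TIMESTEP.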
Thus, the implementation in the Powersim model discussed in Chapter 4 and the equations shown above in Table 33 of appendix D uses first order control minimization (MIN functions) to gracefully take the reservoirs to zero. This appendix also provides an augmented test matrix and corresponding results to probe the implementation of the unit testing and integration test feedback effects on the model. Unless specifically noted, all of Madachy’s default values are used throughout all the test cases. E.2 Model Comparison Tables 307 Madachy ID My ID Job Size (tasks) Rel. Sched. COCOMO Constant Design Insp. Practice Code Insp. Practice Design Error Density Code Error Density Average Design Error Amp. Testing Effort Per Error 1.1 BD1.1 533.3 1 3.6 1 1 1.5 1.5 1 0.58 1.2 BD1.2 533.3 1 3.6 0.5 0.5 1.5 1.5 1 0.58 1.3 BD1.3 533.3 1 3.6 0 0 1.5 1.5 1 0.58 2.1 BD2.1 1066.7 1 3.6 1 1 1.5 1.5 1 0.58 2.2 BD2.2 1066.7 1 3.6 0.5 0.5 1.5 1.5 1 0.58 2.3 BD2.3 1066.7 1 3.6 0 0 1.5 1.5 1 0.58 3.1 BD3.1 533.3 1 7.2 1 1 1.5 1.5 1 1.15 3.2 BD3.2 533.3 1 7.2 0.5 0.5 1.5 1.5 1 1.15 3.3 BD3.3 533.3 1 7.2 0 0 1.5 1.5 1 1.15 4.1 BD4.1 1066.7 1 7.2 1 1 1.5 1.5 1 1.15 4.2 BD4.2 1066.7 1 7.2 0.5 0.5 1.5 1.5 1 1.15 4.3 BD4.3 1066.7 1 7.2 0 0 1.5 1.5 1 1.15 8.1 BD8.1 533.3 1 3.6 1 1 2.4 2.4 1 0.58 8.2 BD8.2 533.3 1 3.6 1 1 1.2 1.2 1 0.58 8.3 BD8.3 533.3 1 3.6 1 1 1.8 1.2 1 0.58 8.4 BD8.4 533.3 1 3.6 1 1 2.1 0.9 1 0.58 8.5 BD8.5 533.3 1 3.6 1 1 0.6 0.6 1 0.58 8.6 BD8.6 533.3 1 3.6 0 0 2.4 2.4 1 0.58 8.7 BD8.7 533.3 1 3.6 0 0 1.2 1.2 1 0.58 8.8 BD8.8 533.3 1 3.6 0 0 1.8 1.2 1 0.58 8.9 BD8.9 533.3 1 3.6 0 0 2.1 0.9 1 0.58 8.10 BD8.10 533.3 1 3.6 0 0 0.6 0.6 1 0.58 8.11 BD8.11 533.3 1 3.6 1 1 0.3 0.3 1 0.58 8.12 BD8.12 533.3 1 3.6 0 0 0.3 0.3 1 0.58 8.13 BD8.13 533.3 1 3.6 1 1 2.1 2.1 1 0.58 8.14 BD8.14 533.3 1 3.6 0 0 2.1 2.1 1 0.58 8.15 BD8.15 533.3 1 3.6 1 1 1.8 1.8 1 0.58 8.16 BD8.16 533.3 1 3.6 0 0 1.8 1.8 1 0.58 8.17 BD8.17 533.3 1 3.6 1 1 0.9 0.9 1 0.58 8.18 BD8.18 533.3 1 3.6 0 0 0.9 0.9 1 0.58 Table 34: Baseline Test Matrix 308 Madachy ID My ID Job Size (tasks) Rel. Sched. COCOMO Constant Design Insp. Practice Code Insp. Practice Design Error Density Code Error Density Average Design Error Amp. Testing Effort Per Error 9.2 BD9.2 533.3 1 3.6 1 1 1.5 1.5 2.5 0.58 9.3 BD9.3 533.3 1 3.6 1 1 1.5 1.5 5 0.58 9.4 BD9.4 533.3 1 3.6 1 1 1.5 1.5 7.5 0.58 9.5 BD9.5 533.3 1 3.6 1 1 1.5 1.5 10 0.58 9.7 BD9.7 533.3 1 3.6 0 0 1.5 1.5 2.5 0.58 9.8 BD9.8 533.3 1 3.6 0 0 1.5 1.5 5 0.58 9.9 BD9.9 533.3 1 3.6 0 0 1.5 1.5 7.5 0.58 9.10 BD9.10 533.3 1 3.6 0 0 1.5 1.5 10 0.58 11.2 BD11.2 533.3 0.9 3.6 1 1 1.5 1.5 1 0.58 11.3 BD11.3 533.3 0.8 3.6 1 1 1.5 1.5 1 0.58 11.4 BD11.4 533.3 0.7 3.6 1 1 1.5 1.5 1 0.58 Table 34: Continued 309 My ID Completion Time (days) Cum Design Effort Cum Coding Effort Cum Design Insp. Effort Cum Design Rework Effort Cum Code Insp. Effort Cum Code Rework Effort Cum Testing Effort Cum Total Effort Undet. 
Design Errors Errors Fixed in Test BD1.1 0.00 -0.13 0.12 -0.01 0.00 -0.01 0.00 0.06 0.03 -0.02 -0.03 BD1.2 0.00 -0.13 0.12 0.00 0.00 0.00 0.01 0.06 0.06 -0.03 -0.06 BD1.3 -1.00 -0.13 0.12 0.00 0.00 0.00 0.00 0.14 0.13 -0.05 -0.46 BD2.1 0.00 -0.29 0.27 -0.01 0.00 -0.01 -0.01 0.16 0.10 -0.04 -0.06 BD2.2 0.00 -0.29 0.27 -0.01 0.00 -0.01 -0.01 0.17 0.13 -0.07 -0.12 BD2.3 0.00 -0.29 0.27 0.00 0.00 0.00 0.00 0.31 0.29 -0.10 -0.69 BD3.1 0.00 -0.26 0.24 -0.01 0.00 -0.01 0.00 0.13 0.10 -0.02 -0.03 BD3.2 0.00 -0.26 0.24 0.00 0.00 0.00 0.00 0.14 0.11 -0.03 -0.06 BD3.3 0.00 -0.26 0.24 0.00 0.00 0.00 0.00 0.26 0.25 -0.05 -0.32 BD4.1 0.00 -0.58 0.54 -0.01 0.00 -0.01 -0.01 0.31 0.24 -0.04 -0.06 BD4.2 0.00 -0.58 0.54 -0.01 0.00 -0.01 -0.01 0.37 0.31 -0.07 -0.12 BD4.3 0.00 -0.58 0.54 0.00 0.00 0.00 0.00 0.62 0.58 -0.10 -0.45 BD8.1 0.00 -0.13 0.12 -0.01 0.00 -0.01 -0.01 0.08 0.05 -0.03 -0.05 BD8.2 0.00 -0.13 0.12 -0.01 0.00 -0.01 0.00 0.06 0.03 -0.02 -0.02 BD8.3 0.00 -0.13 0.12 -0.01 0.00 -0.01 0.00 0.06 0.03 -0.02 -0.03 BD8.4 0.00 -0.13 0.12 -0.01 0.00 -0.01 0.00 0.06 0.03 -0.03 -0.03 BD8.5 0.00 -0.13 0.12 -0.01 0.00 -0.01 0.00 0.05 0.02 -0.01 -0.01 BD8.6 -1.00 -0.13 0.12 0.00 0.00 0.00 0.00 0.19 0.18 -0.08 -0.11 BD8.7 0.00 -0.13 0.12 0.00 0.00 0.00 0.00 0.12 0.10 -0.04 -0.37 BD8.8 -1.00 -0.13 0.12 0.00 0.00 0.00 0.00 0.14 0.13 -0.06 -0.39 BD8.9 -1.00 -0.13 0.12 0.00 0.00 0.00 0.00 0.14 0.13 -0.07 -0.31 BD8.10 0.00 -0.13 0.12 0.00 0.00 0.00 0.00 0.08 0.07 -0.02 -0.18 BD8.11 2.00 -0.13 0.12 -0.01 0.00 -0.01 0.00 0.04 0.02 0.00 -0.01 BD8.12 0.00 -0.13 0.12 0.00 0.00 0.00 0.00 0.06 0.05 -0.01 -0.09 BD8.13 0.00 -0.13 0.12 -0.01 0.00 -0.01 -0.01 0.08 0.04 -0.03 -0.04 BD8.14 0.00 -0.13 0.12 0.00 0.00 0.00 0.00 0.17 0.16 -0.07 -0.11 BD8.15 0.00 -0.13 0.12 -0.01 0.00 -0.01 -0.01 0.07 0.04 -0.02 -0.04 BD8.16 0.00 -0.13 0.12 0.00 0.00 0.00 0.00 0.16 0.14 -0.06 -0.10 Table 35: Comparison Results (iThink minus Powersim) 310 My ID Completion Time (days) Cum Design Effort Cum Coding Effort Cum Design Insp. Effort Cum Design Rework Effort Cum Code Insp. Effort Cum Code Rework Effort Cum Testing Effort Cum Total Effort Undet. Design Errors Errors Fixed in Test BD8.17 0.00 -0.13 0.12 -0.01 0.00 -0.01 0.00 0.05 0.02 -0.01 -0.02 BD8.18 0.00 -0.13 0.12 0.00 0.00 0.00 0.00 0.09 0.08 -0.03 -0.27 BD9.2 0.00 -0.13 0.12 -0.01 0.00 -0.01 -0.01 0.08 0.04 -0.02 -0.04 BD9.3 0.00 -0.13 0.12 -0.01 0.00 -0.01 -0.01 0.09 0.05 -0.02 -0.07 BD9.4 0.00 -0.13 0.12 -0.01 0.00 -0.01 -0.02 0.11 0.06 -0.02 -0.09 BD9.5 0.00 -0.13 0.12 -0.01 0.00 -0.01 -0.02 0.12 0.08 -0.02 -0.10 BD9.7 0.00 -0.13 0.12 0.00 0.00 0.00 0.00 0.20 0.19 -0.05 -0.11 BD9.8 0.00 -0.13 0.12 0.00 0.00 0.00 0.00 0.32 0.31 -0.05 -0.12 BD9.9 0.00 -0.13 0.12 0.00 0.00 0.00 0.00 0.44 0.43 -0.05 -0.13 BD9.10 -1.00 -0.13 0.12 0.00 0.00 0.00 0.00 0.56 0.55 -0.05 -0.13 BD11.2 0.00 -0.11 0.10 -0.01 0.00 -0.01 0.00 0.06 0.03 -0.02 -0.03 BD11.3 0.00 -0.10 0.09 -0.01 0.00 -0.01 0.00 0.05 0.03 -0.02 -0.03 BD11.4 0.00 -0.09 0.08 -0.01 0.00 -0.01 0.00 0.04 0.02 -0.02 -0.03 Table 35: Continued 311 E.3 Modified Model Test Matrix and Results Tables 312 My ID Job Size (tasks) Rel. Sched. COCO. Const. Design Insp. Prac. Code Insp. Prac. Design Error Density Code Error Density Ave. Design Error Amp. Frac Des. Err. Full Unit Test Prac. Unit Test Effic. Frac. of Tests that Fail Integrat. Test Effic. Integrat. Test Feedbck Loop En. 
BD20.1 1666.7 1 3.6 1 1 1.5 1.5 1 0 0 0 0 1 0 BD20.2 5000.0 1 3.6 1 1 1.5 1.5 1 0 0 0 0 1 0 BD20.3 8333.3 1 3.6 1 1 1.5 1.5 1 0 0 0 0 1 0 BD20.4 16666.7 1 3.6 1 1 1.5 1.5 1 0 0 0 0 1 0 BD20.5 25000.0 1 3.6 1 1 1.5 1.5 1 0 0 0 0 1 0 BD20.6 33333.3 1 3.6 1 1 1.5 1.5 1 0 1 1 0 1 0 BD20.7 533.3 1 3.6 1 1 1.5 1.5 1 0 1 1 0 1 0 BD20.8 1066.7 1 3.6 1 1 1.5 1.5 1 0 1 1 0 1 0 BD20.9 1666.7 1 3.6 1 1 1.5 1.5 1 0 1 1 0 1 0 BD20.10 5000.0 1 3.6 1 1 1.5 1.5 1 0 1 1 0 1 0 BD20.11 8333.3 1 3.6 1 1 1.5 1.5 1 0 1 1 0 1 0 BD20.12 16666.7 1 3.6 1 1 1.5 1.5 1 0 1 1 0 1 0 BD20.13 25000.0 1 3.6 1 1 1.5 1.5 1 0 1 1 0 1 0 BD20.14 33333.3 1 3.6 1 1 1.5 1.5 1 0 1 1 0 1 0 BD20.15 33333.3 1 3.6 0 0 1.5 1.5 1 0 1 1 0 1 0 U1.1 533.3 1 3.6 1 1 1.5 1.5 1 0.01 1 1 0.1 0.85 1 U1.2 533.3 1 3.6 0.5 0.5 1.5 1.5 1 0.01 1 1 0.1 0.85 1 U1.3 533.3 1 3.6 0 0 1.5 1.5 1 0.01 1 1 0.1 0.85 1 U1.4 533.3 1 3.6 1 1 1.5 1.5 1 0.01 1 0.85 0.1 0.85 1 U1.5 533.3 1 3.6 0.5 0.5 1.5 1.5 1 0.01 1 0.85 0.1 0.85 1 U1.6 533.3 1 3.6 0 0 1.5 1.5 1 0.01 1 0.85 0.1 0.85 1 U1.7 533.3 1 3.6 1 1 1.5 1.5 1 0.01 1 0.5 0.1 0.85 1 U1.8 533.3 1 3.6 0.5 0.5 1.5 1.5 1 0.01 1 0.5 0.1 0.85 1 U1.9 533.3 1 3.6 0 0 1.5 1.5 1 0.01 1 0.5 0.1 0.85 1 U1.10 533.3 1 3.6 1 1 1.5 1.5 1 0.01 1 0.15 0.1 0.85 1 U1.11 533.3 1 3.6 0.5 0.5 1.5 1.5 1 0.01 1 0.15 0.1 0.85 1 U1.12 533.3 1 3.6 0 0 1.5 1.5 1 0.01 1 0.15 0.1 0.85 1 U1.13 533.3 1 3.6 1 1 1.5 1.5 1 0.01 1 1 0.2 0.85 1 U1.14 533.3 1 3.6 0.5 0.5 1.5 1.5 1 0.01 1 1 0.2 0.85 1 U1.15 533.3 1 3.6 0 0 1.5 1.5 1 0.01 1 1 0.2 0.85 1 Table 36: Augmented Test Matrix 313 My ID Job Size (tasks) Rel. Sched. COCO. Const. Design Insp. Prac. Code Insp. Prac. Design Error Density Code Error Density Ave. Design Error Amp. Test Effort Per Error Unit Test Prac. Unit Test Eff. Frac. of Tests that Fail Integrat. Test Eff. Integrat. Test Feedbck Loop En. 
U1.16 533.3 1 3.6 1 1 1.5 1.5 1 0.05 1 1 0.1 0.85 1 U1.17 533.3 1 3.6 0.5 0.5 1.5 1.5 1 0.05 1 1 0.1 0.85 1 U1.18 533.3 1 3.6 0 0 1.5 1.5 1 0.05 1 1 0.1 0.85 1 U1.19 533.3 1 3.6 1 1 1.5 1.5 1 0.01 1 1 0.3 0.85 1 U1.20 533.3 1 3.6 0.5 0.5 1.5 1.5 1 0.01 1 1 0.3 0.85 1 U1.21 533.3 1 3.6 0 0 1.5 1.5 1 0.01 1 1 0.3 0.85 1 U1.22 533.3 1 3.6 1 1 1.5 1.5 1 0.01 1 1 0.1 0.75 1 U1.23 533.3 1 3.6 0.5 0.5 1.5 1.5 1 0.01 1 1 0.1 0.75 1 U1.24 533.3 1 3.6 0 0 1.5 1.5 1 0.01 1 1 0.1 0.75 1 U1.25 533.3 1 3.6 1 1 1.5 1.5 1 0.01 1 1 0.1 0.5 1 U1.26 533.3 1 3.6 0.5 0.5 1.5 1.5 1 0.01 1 1 0.1 0.5 1 U1.27 533.3 1 3.6 0 0 1.5 1.5 1 0.01 1 1 0.1 0.5 1 U1.28 533.3 1 3.6 1 1 1.5 1.5 1 0.01 0.5 1 0.1 0.85 1 U1.29 533.3 1 3.6 0.5 0.5 1.5 1.5 1 0.01 0.5 1 0.1 0.85 1 U1.30 533.3 1 3.6 0 0 1.5 1.5 1 0.01 0.5 1 0.1 0.85 1 U1.31 533.3 1 3.6 1 1 1.5 1.5 1 0.01 0 1 0.1 0.85 1 U1.32 533.3 1 3.6 0.5 0.5 1.5 1.5 1 0.01 0 1 0.1 0.85 1 U1.33 533.3 1 3.6 0 0 1.5 1.5 1 0.01 0 1 0.1 0.85 1 U1.34 533.3 1 3.6 1 1 1.5 1.5 1 0.01 0.5 0.85 0.1 0.85 1 U1.35 533.3 1 3.6 0.5 0.5 1.5 1.5 1 0.01 0.5 0.85 0.1 0.85 1 U1.36 533.3 1 3.6 0 0 1.5 1.5 1 0.01 0.5 0.85 0.1 0.85 1 U1.37 533.3 1 3.6 1 1 1.5 1.5 1 0.01 0.5 0.5 0.1 0.85 1 U1.38 533.3 1 3.6 0.5 0.5 1.5 1.5 1 0.01 0.5 0.5 0.1 0.85 1 U1.39 533.3 1 3.6 0 0 1.5 1.5 1 0.01 0.5 0.5 0.1 0.85 1 U2.1 1066.7 1 3.6 1 1 1.5 1.5 1 0.01 1 1 0.1 0.85 1 U2.2 1066.7 1 3.6 0.5 0.5 1.5 1.5 1 0.01 1 1 0.1 0.85 1 U2.3 1066.7 1 3.6 0 0 1.5 1.5 1 0.01 1 1 0.1 0.85 1 U2.4 1066.7 1 3.6 1 1 1.5 1.5 1 0.01 0.5 1 0.1 0.85 1 U2.5 1066.7 1 3.6 0.5 0.5 1.5 1.5 1 0.01 0.5 1 0.1 0.85 1 U2.6 1066.7 1 3.6 0 0 1.5 1.5 1 0.01 0.5 1 0.1 0.85 1 U2.7 1066.7 1 3.6 1 1 1.5 1.5 1 0.01 0.5 0.5 0.1 0.85 1 Table 36: Continued 314 My ID Job Size (tasks) Rel. Sched. COCO. Const. Design Insp. Prac. Code Insp. Prac. Design Error Density Code Error Density Ave. Design Error Amp. Test Effort Per Error Unit Test Prac. Unit Test Eff. Frac. of Tests that Fail Integrat. Test Eff. Integrat. Test Feedbck Loop En. 
U2.8 1066.7 1 3.6 0.5 0.5 1.5 1.5 1 0.01 0.5 0.5 0.1 0.85 1 U2.9 1066.7 1 3.6 0 0 1.5 1.5 1 0.01 0.5 0.5 0.1 0.85 1 U2.10 1066.7 1 3.6 1 1 1.5 1.5 1 0.01 0 1 0.1 0.85 1 U2.11 1066.7 1 3.6 0.5 0.5 1.5 1.5 1 0.01 0 1 0.1 0.85 1 U2.12 1066.7 1 3.6 0 0 1.5 1.5 1 0.01 0 1 0.1 0.85 1 U8.1 533.3 1 3.6 1 1 2.4 2.4 1 0.01 1 1 0.1 0.85 1 U8.2 533.3 1 3.6 1 1 1.2 1.2 1 0.01 1 1 0.1 0.85 1 U8.3 533.3 1 3.6 1 1 1.8 1.2 1 0.01 1 1 0.1 0.85 1 U8.4 533.3 1 3.6 1 1 2.1 0.9 1 0.01 1 1 0.1 0.85 1 U8.5 533.3 1 3.6 1 1 0.6 0.6 1 0.01 1 1 0.1 0.85 1 U8.6 533.3 1 3.6 0 0 2.4 2.4 1 0.01 1 1 0.1 0.85 1 U8.7 533.3 1 3.6 0 0 1.2 1.2 1 0.01 1 1 0.1 0.85 1 U8.8 533.3 1 3.6 0 0 1.8 1.2 1 0.01 1 1 0.1 0.85 1 U8.9 533.3 1 3.6 0 0 2.1 0.9 1 0.01 1 1 0.1 0.85 1 U8.10 533.3 1 3.6 0 0 0.6 0.6 1 0.01 1 1 0.1 0.85 1 U8.11 533.3 1 3.6 1 1 0.3 0.3 1 0.01 1 1 0.1 0.85 1 U8.12 533.3 1 3.6 0 0 0.3 0.3 1 0.01 1 1 0.1 0.85 1 U8.13 533.3 1 3.6 1 1 2.1 2.1 1 0.01 1 1 0.1 0.85 1 U8.14 533.3 1 3.6 0 0 2.1 2.1 1 0.01 1 1 0.1 0.85 1 U8.15 533.3 1 3.6 1 1 1.8 1.8 1 0.01 1 1 0.1 0.85 1 U8.16 533.3 1 3.6 0 0 1.8 1.8 1 0.01 1 1 0.1 0.85 1 U8.17 533.3 1 3.6 1 1 0.9 0.9 1 0.01 1 1 0.1 0.85 1 U8.18 533.3 1 3.6 0 0 0.9 0.9 1 0.01 1 1 0.1 0.85 1 U8.19 533.3 1 3.6 1 1 2.4 2.4 1 0.01 0.5 0.5 0.1 0.85 1 U8.20 533.3 1 3.6 1 1 1.2 1.2 1 0.01 0.5 0.5 0.1 0.85 1 U8.21 533.3 1 3.6 1 1 1.8 1.2 1 0.01 0.5 0.5 0.1 0.85 1 U8.22 533.3 1 3.6 1 1 2.1 0.9 1 0.01 0.5 0.5 0.1 0.85 1 U8.23 533.3 1 3.6 1 1 0.6 0.6 1 0.01 0.5 0.5 0.1 0.85 1 U8.24 533.3 1 3.6 0 0 2.4 2.4 1 0.01 0.5 0.5 0.1 0.85 1 U8.25 533.3 1 3.6 0 0 1.2 1.2 1 0.01 0.5 0.5 0.1 0.85 1 U8.26 533.3 1 3.6 0 0 1.8 1.2 1 0.01 0.5 0.5 0.1 0.85 1 Table 36: Continued 315 My ID Job Size (tasks) Rel. Sched. COCO. Const. Design Insp. Prac. Code Insp. Prac. Design Error Density Code Error Density Ave. Design Error Amp. Test Effort Per Error Unit Test Prac. Unit Test Eff. Frac. of Tests that Fail Integrat. Test Eff. Integrat. Test Feedbck Loop En. 
U8.27 533.3 1 3.6 0 0 2.1 0.9 1 0.01 0.5 0.5 0.1 0.85 1 U8.28 533.3 1 3.6 0 0 0.6 0.6 1 0.01 0.5 0.5 0.1 0.85 1 U8.29 533.3 1 3.6 1 1 0.3 0.3 1 0.01 0.5 0.5 0.1 0.85 1 U8.30 533.3 1 3.6 0 0 0.3 0.3 1 0.01 0.5 0.5 0.1 0.85 1 U8.31 533.3 1 3.6 1 1 2.1 2.1 1 0.01 0.5 0.5 0.1 0.85 1 U8.32 533.3 1 3.6 0 0 2.1 2.1 1 0.01 0.5 0.5 0.1 0.85 1 U8.33 533.3 1 3.6 1 1 1.8 1.8 1 0.01 0.5 0.5 0.1 0.85 1 U8.34 533.3 1 3.6 0 0 1.8 1.8 1 0.01 0.5 0.5 0.1 0.85 1 U8.35 533.3 1 3.6 1 1 0.9 0.9 1 0.01 0.5 0.5 0.1 0.85 1 U8.36 533.3 1 3.6 0 0 0.9 0.9 1 0.01 0.5 0.5 0.1 0.85 1 U9.2 533.3 1 3.6 1 1 1.5 1.5 2.5 0.01 1 1 0.1 0.85 1 U9.3 533.3 1 3.6 1 1 1.5 1.5 5 0.01 1 1 0.1 0.85 1 U9.4 533.3 1 3.6 1 1 1.5 1.5 7.5 0.01 1 1 0.1 0.85 1 U9.5 533.3 1 3.6 1 1 1.5 1.5 10 0.01 1 1 0.1 0.85 1 U9.7 533.3 1 3.6 0 0 1.5 1.5 2.5 0.01 1 1 0.1 0.85 1 U9.8 533.3 1 3.6 0 0 1.5 1.5 5 0.01 1 1 0.1 0.85 1 U9.9 533.3 1 3.6 0 0 1.5 1.5 7.5 0.01 1 1 0.1 0.85 1 U9.10 533.3 1 3.6 0 0 1.5 1.5 10 0.01 1 1 0.1 0.85 1 U9.11 533.3 1 3.6 1 1 1.5 1.5 2.5 0.01 0.5 0.5 0.1 0.85 1 U9.12 533.3 1 3.6 1 1 1.5 1.5 5 0.01 0.5 0.5 0.1 0.85 1 U9.13 533.3 1 3.6 1 1 1.5 1.5 7.5 0.01 0.5 0.5 0.1 0.85 1 U9.14 533.3 1 3.6 1 1 1.5 1.5 10 0.01 0.5 0.5 0.1 0.85 1 U9.15 533.3 1 3.6 0 0 1.5 1.5 2.5 0.01 0.5 0.5 0.1 0.85 1 U9.16 533.3 1 3.6 0 0 1.5 1.5 5 0.01 0.5 0.5 0.1 0.85 1 U9.17 533.3 1 3.6 0 0 1.5 1.5 7.5 0.01 0.5 0.5 0.1 0.85 1 U9.18 533.3 1 3.6 0 0 1.5 1.5 10 0.01 0.5 0.5 0.1 0.85 1 U11.2 533.3 0.9 3.6 1 1 1.5 1.5 1 0.01 1 1 0.1 0.85 1 U11.3 533.3 0.8 3.6 1 1 1.5 1.5 1 0.01 1 1 0.1 0.85 1 U11.4 533.3 0.7 3.6 1 1 1.5 1.5 1 0.01 1 1 0.1 0.85 1 U11.5 533.3 0.9 3.6 1 1 1.5 1.5 1 0.01 0.5 0.5 0.1 0.85 1 U11.6 533.3 0.8 3.6 1 1 1.5 1.5 1 0.01 0.5 0.5 0.1 0.85 1 Table 36: Continued 316 My ID Job Size (tasks) Rel. Sched. COCO. Const. Design Insp. Prac. Code Insp. Prac. Design Error Density Code Error Density Ave. Design Error Amp. Test Effort Per Error Unit Test Prac. Unit Test Eff. Frac. of Tests that Fail Integrat. Test Eff. Integrat. Test Feedbck Loop En. U11.7 533.3 0.7 3.6 1 1 1.5 1.5 1 0.01 0.5 0.5 0.1 0.85 1 U11.8 533.3 0.9 3.6 0 0 1.5 1.5 1 0.01 1 1 0.1 0.85 1 U11.9 533.3 0.8 3.6 0 0 1.5 1.5 1 0.01 1 1 0.1 0.85 1 U11.10 533.3 0.7 3.6 0 0 1.5 1.5 1 0.01 1 1 0.1 0.85 1 U11.11 533.3 0.9 3.6 0 0 1.5 1.5 1 0.01 0.5 0.5 0.1 0.85 1 U11.12 533.3 0.8 3.6 0 0 1.5 1.5 1 0.01 0.5 0.5 0.1 0.85 1 U11.13 533.3 0.7 3.6 0 0 1.5 1.5 1 0.01 0.5 0.5 0.1 0.85 1 U11.14 533.3 0.9 3.6 0 0 1.5 1.5 1 0.01 0 0 0.1 0.85 1 U11.15 533.3 0.8 3.6 0 0 1.5 1.5 1 0.01 0 0 0.1 0.85 1 U11.16 533.3 0.7 3.6 0 0 1.5 1.5 1 0.01 0 0 0.1 0.85 1 U20.1 1666.7 1 3.6 1 1 1.5 1.5 1 0.01 1 1 0.1 0.85 1 U20.2 8333.3 1 3.6 1 1 1.5 1.5 1 0.01 1 1 0.1 0.85 1 U20.3 16666.7 1 3.6 1 1 1.5 1.5 1 0.01 1 1 0.1 0.85 1 U20.4 25000.0 1 3.6 1 1 1.5 1.5 1 0.01 1 1 0.1 0.85 1 U20.5 33333.3 1 3.6 1 1 1.5 1.5 1 0.01 1 1 0.1 0.85 1 U20.6 1666.7 1 3.6 1 1 1.5 1.5 1 0.01 0.5 0.5 0.1 0.85 1 U20.7 8333.3 1 3.6 1 1 1.5 1.5 1 0.01 0.5 0.5 0.1 0.85 1 U20.8 16666.7 1 3.6 1 1 1.5 1.5 1 0.01 0.5 0.5 0.1 0.85 1 U20.9 25000.0 1 3.6 1 1 1.5 1.5 1 0.01 0.5 0.5 0.1 0.85 1 U20.10 33333.3 1 3.6 1 1 1.5 1.5 1 0.01 0.5 0.5 0.1 0.85 1 Table 36: Continued 317 My ID Compl. Time (days) Cum Design Effort Cum Coding Effort Cum Design Insp. Effort Cum Design Rework Effort Cum Code Insp. Effort Cum Code Rework Effort Cum Test Effort Cum Total Effort Undet. 
Design Errors Errors Escaping Integration Test Errors Found in IT BD1.1 265 2090 1224 101 26 101 74 575 4192 320 0 446 BD1.2 277 2090 1224 51 13 51 45 897 4371 560 0 949 BD1.3 286 2090 1224 0 0 0 0 1188 4503 799 0 1598 BD2.1 347 4804 2813 203 53 203 147 1380 9602 640 0 893 BD2.2 361 4804 2813 101 26 101 90 2058 9993 1119 0 1900 BD2.3 373 4804 2813 0 0 0 0 2730 10347 1599 0 3198 BD3.1 331 4181 2449 101 26 101 74 1150 8083 320 0 446 BD3.2 345 4181 2449 51 13 51 45 1771 8560 560 0 950 BD3.3 356 4181 2449 0 0 0 0 2376 9006 800 0 1599 BD4.1 433 9609 5626 203 53 203 147 2761 18602 640 0 893 BD4.2 450 9609 5626 101 26 101 90 4079 19633 1120 0 1901 BD4.3 465 9609 5626 0 0 0 0 5460 20695 1599 0 3199 BD8.1 271 2090 1224 101 42 101 118 715 4392 512 0 714 BD8.2 263 2090 1224 101 21 101 59 528 4125 256 0 357 BD8.3 264 2090 1224 101 32 101 67 554 4169 384 0 407 BD8.4 263 2090 1224 101 37 101 61 532 4147 448 0 369 BD8.5 258 2090 1224 101 11 101 29 434 3991 128 0 178 BD8.6 298 2090 1224 0 0 0 0 1697 5011 1279 0 1812 BD8.7 281 2090 1224 0 0 0 0 1019 4333 639 0 1279 BD8.8 286 2090 1224 0 0 0 0 1188 4503 959 0 1598 BD8.9 286 2090 1224 0 0 0 0 1188 4503 1119 0 1598 BD8.10 269 2090 1224 0 0 0 0 680 3994 320 0 639 BD8.11 255 2090 1224 101 5 101 15 387 3924 64 0 89 BD8.12 262 2090 1224 0 0 0 0 510 3824 160 0 320 BD8.13 269 2090 1224 101 37 101 103 668 4325 448 0 624 BD8.14 294 2090 1224 0 0 0 0 1527 4842 1119 0 1762 BD8.15 267 2090 1224 101 32 101 88 621 4258 384 0 535 BD8.16 290 2090 1224 0 0 0 0 1358 4672 959 0 1699 BD8.17 260 2090 1224 101 16 101 44 481 4058 192 0 268 BD8.18 275 2090 1224 0 0 0 0 849 4163 480 0 959 Table 37: Modified Madachy Model Results 318 My ID Compl. Time (days) Cum Design Effort Cum Coding Effort Cum Design Insp. Effort Cum Design Rework Effort Cum Code Insp. Effort Cum Code Rework Effort Cum Test Effort Cum Total Effort Undet. 
Design Errors Errors Escaping Integration Test Errors Found in IT BD9.2 269 2090 1224 101 26 101 105 671 4320 320 0 635 BD9.3 275 2090 1224 101 26 101 157 833 4533 320 0 951 BD9.4 280 2090 1224 101 26 101 209 994 4747 320 0 1267 BD9.5 285 2090 1224 101 26 101 261 1156 4960 320 0 1583 BD9.7 300 2090 1224 0 0 0 0 1824 5138 799 0 1844 BD9.8 318 2090 1224 0 0 0 0 2884 6198 799 0 2000 BD9.9 332 2090 1224 0 0 0 0 3943 7258 799 0 2072 BD9.10 344 2090 1224 0 0 0 0 5003 8317 799 0 2113 BD11.2 294 1881 1102 101 26 101 74 517 3803 320 0 446 BD11.3 331 1673 979 101 26 101 74 460 3415 320 0 446 BD11.4 378 1464 857 101 26 101 74 403 3026 320 0 447 BD20.1 413 8208 4805 317 82 317 230 2424 16383 1000 0 1396 BD20.2 634 30680 17959 950 247 950 692 9705 61183 2999 0 4192 BD20.3 773 56638 33151 1583 412 1583 1153 18489 113009 4999 0 6988 BD20.4 1013 130131 76162 3167 825 3167 2307 44296 260054 10000 0 13983 BD20.5 1186 211692 123894 4750 1238 4750 3461 73807 423591 15000 0 20978 BD20.6 1327 298980 174974 6334 1650 6334 4616 105999 598886 20001 0 27974 BD20.7 265 2090 1224 101 26 101 93 575 4211 320 0 272 BD20.8 347 4804 2813 203 53 203 179 1380 9634 640 0 610 BD20.9 413 8208 4805 317 82 317 274 2425 16427 1000 0 1006 BD20.10 634 30680 17959 950 247 950 789 9706 61281 2999 0 3324 BD20.11 773 56638 33150 1583 412 1583 1296 18490 113153 4999 0 5716 BD20.12 1013 130131 76161 3167 825 3167 2550 44298 260298 10000 0 11818 BD20.13 1186 211692 123893 4750 1238 4750 3795 73804 423922 15000 0 18000 BD20.14 1327 298980 174974 6334 1650 6334 5036 105965 599271 20001 0 24217 BD20.15 1396 298980 174974 0 0 0 0 169798 643752 50002 0 100003 U1.1 265 2090 1223 101 26 101 94 574 4210 320 43 233 U1.2 278 2090 1224 51 13 51 93 916 4438 560 62 465 U1.3 286 2090 1224 0 0 0 90 1187 4591 800 89 729 U1.4 272 2090 1224 101 26 101 85 755 4384 320 48 306 Table 37: Continued 319 My ID Compl. Time (days) Cum Design Effort Cum Coding Effort Cum Design Insp. Effort Cum Design Rework Effort Cum Code Insp. Effort Cum Code Rework Effort Cum Test Effort Cum Total Effort Undet. 
Design Errors Errors Escaping Integration Test Errors Found in IT U1.5 282 2090 1224 51 13 51 84 1061 4573 560 70 547 U1.6 289 2090 1224 0 0 0 80 1313 4708 800 98 818 U1.7 285 2090 1224 101 26 101 67 1180 4789 320 65 476 U1.8 291 2090 1224 51 13 51 63 1399 4891 560 92 737 U1.9 296 2090 1224 0 0 0 58 1610 4982 800 122 1026 U1.10 296 2090 1224 101 26 101 48 1603 5194 320 85 647 U1.11 298 2090 1224 51 13 51 42 1736 5208 560 115 929 U1.12 302 2090 1224 0 0 0 29 1906 5250 800 154 1288 U1.13 265 2090 1223 101 26 101 94 574 4210 320 44 236 U1.14 278 2090 1224 51 13 51 95 916 4440 560 67 476 U1.15 286 2090 1224 0 0 0 94 1186 4594 800 97 753 U1.16 265 2090 1223 101 26 101 94 574 4210 320 43 233 U1.17 278 2090 1224 51 13 51 93 916 4438 560 62 465 U1.18 286 2090 1224 0 0 0 90 1187 4591 800 89 729 U1.19 265 2090 1223 101 26 101 95 574 4211 320 46 240 U1.20 278 2090 1224 51 13 51 97 916 4442 560 73 488 U1.21 286 2090 1224 0 0 0 97 1187 4598 800 106 776 U1.22 265 2090 1223 101 26 101 94 573 4209 320 62 214 U1.23 278 2090 1224 51 13 51 93 916 4438 560 101 427 U1.24 286 2090 1224 0 0 0 90 1187 4591 800 147 671 U1.25 265 2090 1223 101 26 101 94 573 4210 320 116 161 U1.26 278 2090 1224 51 13 51 94 916 4438 560 210 317 U1.27 286 2090 1224 0 0 0 90 1186 4590 800 314 505 U1.28 287 2090 1224 101 26 101 61 1242 4846 320 70 530 U1.29 293 2090 1224 51 13 51 59 1484 4971 560 99 784 U1.30 298 2090 1224 0 0 0 54 1720 5088 800 129 1071 U1.31 291 2090 1224 101 26 101 46 1400 4989 320 85 661 U1.32 296 2090 1224 51 13 51 40 1612 5081 560 116 952 U1.33 299 2090 1224 0 0 0 0 1785 5100 800 177 1546 U1.34 289 2090 1224 101 26 101 58 1301 4902 320 73 553 Table 37: Continued 320 My ID Compl. Time (days) Cum Design Effort Cum Coding Effort Cum Design Insp. Effort Cum Design Rework Effort Cum Code Insp. Effort Cum Code Rework Effort Cum Test Effort Cum Total Effort Undet. 
Design Errors Errors Escaping Integration Test Errors Found in IT U1.35 294 2090 1224 51 13 51 55 1536 5020 560 101 813 U1.36 299 2090 1224 0 0 0 50 1767 5132 800 132 1103 U1.37 292 2090 1224 101 26 101 52 1439 5034 320 80 609 U1.38 297 2090 1224 51 13 51 48 1655 5132 560 109 880 U1.39 301 2090 1224 0 0 0 42 1877 5234 800 143 1184 U2.1 347 4804 2811 203 53 203 180 1378 9631 640 86 532 U2.2 362 4804 2812 101 26 101 185 2079 10109 1120 120 957 U2.3 373 4804 2812 0 0 0 180 2726 10523 1600 170 1469 U2.4 370 4804 2812 203 53 203 124 2551 10749 640 131 1039 U2.5 379 4804 2812 101 26 101 122 3199 11165 1120 182 1534 U2.6 388 4804 2813 0 0 0 112 3835 11564 1600 245 2119 U2.7 376 4804 2812 203 53 203 107 2953 11134 640 147 1191 U2.8 384 4804 2812 101 26 101 101 3587 11534 1120 206 1723 U2.9 392 4804 2813 0 0 0 88 4201 11906 1600 276 2347 U2.10 373 4804 2812 203 53 203 95 2783 10952 640 155 1301 U2.11 382 4804 2813 101 26 101 83 3400 11329 1120 216 1882 U2.12 389 4804 2813 0 0 0 0 3978 11595 1601 353 3109 U8.1 370 2090 1223 101 42 101 150 714 4423 512 59 387 U8.2 263 2090 1223 101 21 101 75 527 4139 256 36 185 U8.3 264 2090 1223 101 32 101 87 553 4188 384 37 200 U8.4 263 2090 1224 101 37 101 81 532 4166 448 32 167 U8.5 258 2090 1223 101 11 101 37 433 3997 128 20 89 U8.6 298 2090 1224 0 0 0 148 1695 5158 1280 143 1206 U8.7 281 2090 1224 0 0 0 71 1017 4403 640 72 578 U8.8 286 2090 1224 0 0 0 90 1187 4590 959 88 727 U8.9 286 2090 1224 0 0 0 89 1186 4590 1119 86 724 U8.10 269 2090 1223 0 0 0 35 678 4027 320 41 281 U8.11 256 2090 1223 101 5 101 19 387 3927 64 11 44 U8.12 262 2090 1223 0 0 0 18 509 3840 160 23 138 U8.13 269 2090 1224 101 37 101 131 667 4352 448 54 335 Table 37: Continued 321 My ID Compl. Time (days) Cum Design Effort Cum Coding Effort Cum Design Insp. Effort Cum Design Rework Effort Cum Code Insp. Effort Cum Code Rework Effort Cum Test Effort Cum Total Effort Undet. 
Design Errors Errors Escaping Integration Test Errors Found in IT U8.14 294 2090 1224 0 0 0 128 1526 4968 1120 124 1043 U8.15 267 2090 1224 101 32 101 112 621 4281 384 48 284 U8.16 290 2090 1224 0 0 0 109 1356 4779 959 106 884 U8.17 261 2090 1223 101 16 101 56 480 4068 192 29 136 U8.18 275 2090 1224 0 0 0 53 848 4215 480 56 428 U8.19 306 2090 1224 101 101 101 88 2099 5747 512 135 1015 U8.20 286 2090 1224 101 21 101 41 1219 4798 256 65 482 U8.21 289 2090 1224 101 32 101 46 1340 4934 384 74 560 U8.22 287 2090 1224 101 37 101 41 1241 4835 448 66 509 U8.23 273 2090 1224 101 11 101 20 779 4326 128 36 233 U8.24 317 2090 1224 0 0 0 79 2798 6192 1281 246 1988 U8.25 295 2090 1224 0 0 0 32 1570 4916 640 112 935 U8.26 301 2090 1224 0 0 0 37 1877 5228 960 143 1212 U8.27 301 2090 1224 0 0 0 31 1877 5223 1120 143 1239 U8.28 279 2090 1223 0 0 0 15 955 4284 320 58 455 U8.29 264 2090 1223 101 5 101 10 560 4091 64 21 114 U8.30 268 2090 1224 0 0 0 7 647 3968 160 32 223 U8.31 301 2090 1224 101 37 101 76 1879 5508 448 115 875 U8.32 312 2090 1224 0 0 0 65 2492 5872 1120 209 1712 U8.33 297 2090 1224 101 32 101 63 1659 5271 384 96 740 U8.34 307 2090 1224 0 0 0 53 2184 5552 960 176 1444 U8.35 280 2090 1223 101 16 101 30 1000 4562 192 50 356 U8.36 288 2090 1224 0 0 0 23 1262 4599 480 84 690 U9.2 281 2090 1223 101 26 101 140 168 4353 320 48 293 U9.3 275 2090 1223 101 26 101 214 834 4591 320 57 410 U9.4 280 2090 1224 101 26 101 288 997 4827 320 68 534 U9.5 285 2090 1224 101 26 101 362 1158 5064 320 81 651 U9.7 300 2090 1224 0 0 0 159 1822 5296 800 151 1299 U9.8 318 2090 1224 0 0 0 279 2881 6474 802 267 2266 U9.9 332 2090 1224 0 0 0 394 3943 7652 802 311 2525 Table 37: Continued 322 My ID Compl. Time (days) Cum Design Effort Cum Coding Effort Cum Design Insp. Effort Cum Design Rework Effort Cum Code Insp. Effort Cum Code Rework Effort Cum Test Effort Cum Total Effort Undet. 
Design Errors Errors Escaping Integration Test Errors Found in IT U9.10 344 2090 1224 0 0 0 510 5002 8826 803 317 2373 U9.11 302 2090 1224 101 26 101 73 1892 5508 320 115 901 U9.12 315 2090 1224 101 26 101 110 2648 6301 320 179 1388 U9.13 326 2090 1224 101 26 101 149 3403 7096 321 245 1879 U9.14 335 2090 1224 101 26 101 187 4160 7891 321 307 2366 U9.15 320 2090 1224 0 0 0 63 3030 6407 801 269 2237 U9.16 343 2090 1224 0 0 0 93 4955 8363 803 317 2379 U9.17 361 2090 1224 0 0 0 125 6878 11605 805 323 2235 U9.18 375 2090 1224 0 0 0 157 8801 13516 806 327 2161 U11.2 294 1881 1101 101 26 101 92 517 3820 320 42 248 U11.3 331 1673 979 101 26 101 91 460 3431 320 41 263 U11.4 378 1464 857 101 26 101 89 403 3201 320 41 278 U11.5 322 1881 1102 101 26 101 52 1230 4494 320 73 605 U11.6 361 1673 979 101 26 101 53 1045 4324 320 65 601 U11.7 409 1464 857 101 26 101 53 850 3452 320 57 598 U11.8 317 1881 1101 0 0 0 90 1068 4141 800 82 734 U11.9 357 1673 979 0 0 0 89 949 3690 800 73 740 U11.10 408 1464 857 0 0 0 89 831 3241 800 64 747 U11.11 335 1881 1102 0 0 0 41 1683 4708 800 132 1184 U11.12 376 1673 979 0 0 0 42 1491 4185 800 119 1182 U11.13 429 1464 857 0 0 0 42 1274 3636 800 102 1182 U11.14 332 1881 1102 0 0 0 0 1586 4569 800 165 1544 U11.15 373 1673 979 0 0 0 0 1400 4051 800 147 1548 U11.16 427 1464 857 0 0 0 0 1225 3545 800 128 1554 U20.1 413 8208 4804 317 82 317 275 2422 16424 1000 134 885 U20.2 773 56638 33145 1583 412 1583 1304 18481 113146 5000 703 5090 U20.3 1013 130131 76158 3167 825 3167 2569 44281 260297 10000 1459 10532 U20.4 1186 211692 123891 4750 1238 4750 3825 73781 423926 15001 2255 16012 U20.5 1327 298980 174971 6334 1650 6333 5076 105932 599276 20002 3071 21510 U20.6 441 8208 4805 317 82 317 170 4604 18503 1000 220 1835 Table 37: Continued 323 My ID Compl. Time (days) Cum Design Effort Cum Coding Effort Cum Design Insp. Effort Cum Design Rework Effort Cum Code Insp. Effort Cum Code Rework Effort Cum Test Effort Cum Total Effort Undet. Design Errors Errors Escaping Integration Test Errors Found in IT U20.7 804 56638 33147 1583 412 1583 883 27028 121274 5000 1054 8833 U20.8 1046 130131 76158 3167 825 3167 1784 60409 275641 10001 2143 17460 U20.9 1221 211692 123891 4750 1238 4750 2691 97430 446442 15002 3261 25991 U20.10 1363 298979 174971 6334 1650 6333 3597 137152 629017 20003 4414 34477 Table 37: Continued 324 Test ID Rel. Sched. Design Insp. Practice Code Insp. Prac. Design Error Density Code Error Density Unit Test Effect. Unit Test Prac. Int. Test Effect. Rev. Brd Delay Unit Test Delay Disable Test Effort Adj. Use Const. Task Fail. Rate Personnel Mod. Factor Total Errors Found in IT Tot. Err. Escaping Integ. 
Test RW0.1 1 1 1 1.265 1.265 1 1 0.85 7 10 FALSE TRUE 70/43.65 1680 795 RW0.2 1 0.75 0.75 0.876 0.876 0.75 0.75 0.85 7 10 FALSE TRUE 70/43.65 1680 676 RW0.3 1 0.5 0.5 0.7995 0.7995 0.5 0.5 0.85 7 10 FALSE TRUE 70/43.65 1680 683 RW0.4 1 0.25 0.25 0.471 0.471 0.25 0.25 0.85 7 10 FALSE TRUE 70/43.65 1680 680 RW0.5 1 0 0 0.3407 0.3407 0 0 0.85 7 10 FALSE TRUE 70/43.65 1680 677 RW1.1 1 0 0 2 0.01 0 0 0.85 7 10 FALSE TRUE 70/43.65 4859 1906 RW1.2 1 0 0 1.5 0.01 0 0 0.85 7 10 FALSE TRUE 70/43.65 3634 1446 RW1.3 1 0 0 1 0.01 0 0 0.85 7 10 FALSE TRUE 70/43.65 2378 1018 RW1.4 1 0 0 0.5 0.2172 0 0 0.85 7 10 FALSE TRUE 70/43.65 1680 772 RW1.5 1 0 0 0.01 2 0 0 0.85 7 10 FALSE TRUE 70/43.65 5766 2297 RW1.6 1 0 0 0.01 1.5 0 0 0.85 7 10 FALSE TRUE 70/43.65 4119 1748 RW1.7 1 0 0 0.01 1 0 0 0.85 7 10 FALSE TRUE 70/43.65 2612 1125 RW1.8 1 0 0 0.2024 0.5 0 0 0.85 7 10 FALSE TRUE 70/43.65 1680 780 RW1.9 1 0.25 0.25 2 0.01 0.25 0.25 0.85 7 10 FALSE TRUE 70/43.65 3150 1269 RW1.10 1 0.25 0.25 1.5 0.01 0.25 0.25 0.85 7 10 FALSE TRUE 70/43.65 2344 1002 RW1.11 1 0.25 0.25 1 0.0816 0.25 0.25 0.85 7 10 FALSE TRUE 70/43.65 1680 770 RW1.12 1 0.25 0.25 0.5 0.4825 0.25 0.25 0.85 7 10 FALSE TRUE 70/43.65 1681 777 RW1.13 1 0.25 0.25 0.01 2 0.25 0.25 0.85 7 10 FALSE TRUE 70/43.65 4365 1842 RW1.14 1 0.25 0.25 0.01 1.5 0.25 0.25 0.85 7 10 FALSE TRUE 70/43.65 3164 1321 RW1.15 1 0.25 0.25 0.01 1 0.25 0.25 0.85 7 10 FALSE TRUE 70/43.65 1969 888 RW1.16 1 0.25 0.25 0.4775 0.5 0.25 0.25 0.85 7 10 FALSE TRUE 70/43.65 1680 777 RW1.17 1 0.5 0.5 2 0.0492 0.5 0.5 0.85 7 10 FALSE TRUE 70/43.65 1680 774 RW1.18 1 0.5 0.5 1.5 0.386 0.5 0.5 0.85 7 10 FALSE TRUE 70/43.65 1680 778 RW1.19 1 0.5 0.5 1 0.719 0.5 0.5 0.85 7 10 FALSE TRUE 70/43.65 1680 779 RW1.20 1 0.5 0.5 0.5 1.053 0.5 0.5 0.85 7 10 FALSE TRUE 70/43.65 1680 783 RW1.21 1 0.5 0.5 0.01 2 0.5 0.5 0.85 7 10 FALSE TRUE 70/43.65 2590 1116 RW1.22 1 0.5 0.5 0.01 1.5 0.5 0.5 0.85 7 10 FALSE TRUE 70/43.65 1854 847 RW1.23 1 0.5 0.5 0.5783 1 0.5 0.5 0.85 7 10 FALSE TRUE 70/43.65 1680 781 RW1.24 1 0.5 0.5 1.3285 0.5 0.5 0.5 0.85 7 10 FALSE TRUE 70/43.65 1680 777 RW1.25 1 0.75 0.75 2 0.3402 0.75 0.75 0.85 7 10 FALSE TRUE 70/43.65 1680 768 RW1.26 1 0.75 0.75 1.5 0.6025 0.75 0.75 0.85 7 10 FALSE TRUE 70/43.65 1680 770 RW1.27 1 0.75 0.75 1 0.8639 0.75 0.75 0.85 7 10 FALSE TRUE 70/43.65 1680 771 RW1.28 1 0.75 0.75 0.5 1.1255 0.75 0.75 0.85 7 10 FALSE TRUE 70/43.65 1680 773 RW1.29 1 0.75 0.75 0.01 2 0.75 0.75 0.85 7 10 FALSE TRUE 70/43.65 2583 1101 RW1.30 1 0.75 0.75 0.01 1.5 0.75 0.75 0.85 7 10 FALSE TRUE 70/43.65 1848 835 RW1.31 1 0.75 0.75 0.74 1 0.75 0.75 0.85 7 10 FALSE TRUE 70/43.65 1680 773 RW1.32 1 0.75 0.75 1.695 0.5 0.75 0.75 0.85 7 10 FALSE TRUE 70/43.65 1680 770 RW1.33 1 1 1 2 0.9975 1 1 0.85 7 10 FALSE TRUE 70/43.65 1680 793 RW1.34 1 1 1 1.5 1.1795 1 1 0.85 7 10 FALSE TRUE 70/43.65 1680 794 RW1.35 1 1 1 1 1.367 1 1 0.85 7 10 FALSE TRUE 70/43.65 1680 793 RW1.36 1 1 1 0.5 1.562 1 1 0.85 7 10 FALSE TRUE 70/43.65 1680 793 Table 38: Project-A Results: Using Baseline Effort Fraction and an Unmodified Staffing Profile 325 Test ID Rel. Sched. Design Insp. Practice Code Insp. Prac. Design Error Density Code Error Density Unit Test Effect. Unit Test Prac. Int. Test Effect. Rev. Brd Delay Unit Test Delay Disable Test Effort Adj. Use Const. Task Fail. Rate Personnel Mod. Factor Total Errors Found in IT Tot. Err. Escaping Integ. 
Test RW1.37 1 1 1 0.01 2 1 1 0.85 7 10 FALSE TRUE 70/43.65 1963 890 RW1.38 1 1 1 0.6595 1.5 1 1 0.85 7 10 FALSE TRUE 70/43.65 1680 793 RW1.39 1 1 1 1.992 1 1 1 0.85 7 10 FALSE TRUE 70/43.65 1680 793 RW1.40 1 1 1 3.364 0.5 1 1 0.85 7 10 FALSE TRUE 70/43.65 1680 791 RW2.1 1 1 1 1.404 1.404 1 1 0.15 7 10 FALSE TRUE 70/43.65 1680 1088 RW2.2 1 0.75 0.75 1.035 1.035 0.75 0.75 0.15 7 10 FALSE TRUE 70/43.65 1680 1126 RW2.3 1 0.5 0.5 0.9395 0.9395 0.5 0.5 0.15 7 10 FALSE TRUE 70/43.65 1680 1115 RW2.4 1 0.25 0.25 0.5565 0.5565 0.25 0.25 0.15 7 10 FALSE TRUE 70/43.65 1680 1121 RW2.5 1 0 0 0.4035 0.4035 0 0 0.15 7 10 FALSE TRUE 70/43.65 1680 1125 RW3.1 0.9 1 1 1.2848 1.2848 1 1 0.85 7 10 FALSE TRUE 70/43.65 1680 745 RW3.2 0.9 0.75 0.75 0.9245 0.9245 0.75 0.75 0.85 7 10 FALSE TRUE 70/43.65 1680 739 RW3.3 0.9 0.5 0.5 0.8428 0.8428 0.5 0.5 0.85 7 10 FALSE TRUE 70/43.65 1680 742 RW3.4 0.9 0.25 0.25 0.4948 0.4948 0.25 0.25 0.85 7 10 FALSE TRUE 70/43.65 1680 741 RW3.5 0.9 0 0 0.359 0.359 0 0 0.85 7 10 FALSE TRUE 70/43.65 1680 740 RW4.1 1 1 1 1.282 1.282 1 1 0.85 7 10 TRUE TRUE 70/43.65 1680 792 RW4.2 1 0.75 0.75 0.9334 0.9334 0.75 0.75 0.85 7 10 TRUE TRUE 70/43.65 1680 812 RW4.3 1 0.5 0.5 0.8476 0.8476 0.5 0.5 0.85 7 10 TRUE TRUE 70/43.65 1680 809 RW4.4 1 0.25 0.25 0.4934 0.4934 0.25 0.25 0.85 7 10 TRUE TRUE 70/43.65 1680 780 RW4.5 1 0 0 0.3557 0.3557 0 0 0.85 7 10 TRUE TRUE 70/43.65 1680 764 RW5.1 1 1 1 2.0726 2.0726 1 1 0.15 7 10 TRUE TRUE 70/43.65 1680 2462 RW5.2 1 0.75 0.75 1.4565 1.4565 0.75 0.75 0.15 7 10 TRUE TRUE 70/43.65 1680 2297 RW5.3 1 0.5 0.5 1.3312 1.3312 0.5 0.5 0.15 7 10 TRUE TRUE 70/43.65 1680 2308 RW5.4 1 0.25 0.25 0.7954 0.7954 0.25 0.25 0.15 7 10 TRUE TRUE 70/43.65 1680 2346 RW5.5 1 0 0 0.5962 0.5962 0 0 0.15 7 10 TRUE TRUE 70/43.65 1680 2494 RW6.1 1 1 1 1.359 1.359 1 1 0.85 7 10 FALSE FALSE 70/43.65 1680 780 RW6.2 1 0.75 0.75 0.9578 0.9578 0.75 0.75 0.85 7 10 FALSE FALSE 70/43.65 1680 773 RW6.3 1 0.5 0.5 0.8644 0.8644 0.5 0.5 0.85 7 10 FALSE FALSE 70/43.65 1680 769 RW6.4 1 0.25 0.25 0.5048 0.5048 0.25 0.25 0.85 7 10 FALSE FALSE 70/43.65 1680 771 RW6.5 1 0 0 0.3679 0.3679 0 0 0.85 7 10 FALSE FALSE 70/43.65 1680 770 RW7.1 1 1 1 1.261 1.261 1 1 0.85 7 3 FALSE TRUE 70/43.65 1680 793 RW7.2 1 0.75 0.75 0.9119 0.9119 0.75 0.75 0.85 7 3 FALSE TRUE 70/43.65 1680 775 RW7.3 1 0.5 0.5 0.832 0.832 0.5 0.5 0.85 7 3 FALSE TRUE 70/43.65 1680 772 RW7.4 1 0.25 0.25 0.4902 0.4902 0.25 0.25 0.85 7 3 FALSE TRUE 70/43.65 1680 777 RW7.5 1 0 0 0.355 0.355 0 0 0.85 7 3 FALSE TRUE 70/43.65 1680 776 RW8.1 1 1 1 1.265 1.265 1 1 0.85 1 10 FALSE TRUE 70/43.65 1680 794 RW8.2 1 0.75 0.75 0.9107 0.9107 0.75 0.75 0.85 1 10 FALSE TRUE 70/43.65 1680 772 RW8.3 1 0.5 0.5 0.8326 0.8326 0.5 0.5 0.85 1 10 FALSE TRUE 70/43.65 1680 783 RW8.4 1 0.25 0.25 0.4901 0.4901 0.25 0.25 0.85 1 10 FALSE TRUE 70/43.65 1680 777 RW8.5 1 0 0 0.35505 0.35505 0 0 0.85 1 10 FALSE TRUE 70/43.65 1680 776 Table 38: Continued 326 Test ID Rel. Sched. Design Insp. Practice Code Insp. Prac. Design Error Density Code Error Density Unit Test Effect. Unit Test Prac. Int. Test Effect. Rev. Brd Delay Unit Test Delay Disable Test Effort Adj. Use Const. Task Fail. Rate Personnel Mod. Factor Total Errors Found in IT Tot. Err. Escaping Integ. 
Test RW0.1 1 1 1 1.3 1.3 1 1 0.85 7 10 FALSE TRUE 70/43.65 1680 743 RW0.2 1 0.75 0.75 0.7442 0.7442 0.75 0.75 0.85 7 10 FALSE TRUE 70/43.65 1680 745 RW0.3 1 0.5 0.5 0.5659 0.5659 0.5 0.5 0.85 7 10 FALSE TRUE 70/43.65 1680 760 RW0.4 1 0.25 0.25 0.3533 0.3533 0.25 0.25 0.85 7 10 FALSE TRUE 70/43.65 1680 766 RW0.5 1 0 0 0.2694 0.2694 0 0 0.85 7 10 FALSE TRUE 70/43.65 1680 773 RW1.1 1 0 0 2 0.01 0 0 0.85 7 10 FALSE TRUE 70/43.65 8468 2695 RW1.2 1 0 0 1.5 0.01 0 0 0.85 7 10 FALSE TRUE 70/43.65 6175 2264 RW1.3 1 0 0 1 0.01 0 0 0.85 7 10 FALSE TRUE 70/43.65 3968 1569 RW1.4 1 0 0 0.5 0.01 0 0 0.85 7 10 FALSE TRUE 70/43.65 1809 830 RW1.5 1 0 0 0.01 2 0 0 0.85 7 10 FALSE TRUE 70/43.65 5589 2048 RW1.6 1 0 0 0.01 1.5 0 0 0.85 7 10 FALSE TRUE 70/43.65 4006 1558 RW1.7 1 0 0 0.01 1 0 0 0.85 7 10 FALSE TRUE 70/43.65 2529 1002 RW1.8 1 0 0 0.131 0.5 0 0 0.85 7 10 FALSE TRUE 70/43.65 1680 753 RW1.9 1 0.25 0.25 2 0.01 0.25 0.25 0.85 7 10 FALSE TRUE 70/43.65 6054 2214 RW1.10 1 0.25 0.25 1.5 0.01 0.25 0.25 0.85 7 10 FALSE TRUE 70/43.65 4433 1708 RW1.11 1 0.25 0.25 1 0.01 0.25 0.25 0.85 7 10 FALSE TRUE 70/43.65 2782 1164 RW1.12 1 0.25 0.25 0.5 0.156 0.25 0.25 0.85 7 10 FALSE TRUE 70/43.65 1681 789 RW1.13 1 0.25 0.25 0.01 2 0.25 0.25 0.85 7 10 FALSE TRUE 70/43.65 4243 1628 RW1.14 1 0.25 0.25 0.01 1.5 0.25 0.25 0.85 7 10 FALSE TRUE 70/43.65 3057 1169 RW1.15 1 0.25 0.25 0.01 1 0.25 0.25 0.85 7 10 FALSE TRUE 70/43.65 1910 803 RW1.16 1 0.25 0.25 0.2618 0.5 0.25 0.25 0.85 7 10 FALSE TRUE 70/43.65 1680 763 RW1.17 1 0.5 0.5 2 0.01 0.5 0.5 0.85 7 10 FALSE TRUE 70/43.65 3585 1362 RW1.18 1 0.5 0.5 1.5 0.01 0.5 0.5 0.85 7 10 FALSE TRUE 70/43.65 2573 1057 RW1.19 1 0.5 0.5 1 0.0385 0.5 0.5 0.85 7 10 FALSE TRUE 70/43.65 1681 771 RW1.20 1 0.5 0.5 0.5 0.68 0.5 0.5 0.85 7 10 FALSE TRUE 70/43.65 1680 763 RW1.21 1 0.5 0.5 0.01 2 0.5 0.5 0.85 7 10 FALSE TRUE 70/43.65 2522 976 RW1.22 1 0.5 0.5 0.01 1.5 0.5 0.5 0.85 7 10 FALSE TRUE 70/43.65 1799 759 RW1.23 1 0.5 0.5 0.3 1 0.5 0.5 0.85 7 10 FALSE TRUE 70/43.65 1680 747 RW1.24 1 0.5 0.5 0.6035 0.5 0.5 0.5 0.85 7 10 FALSE TRUE 70/43.65 1680 758 RW1.25 1 0.75 0.75 2 0.01 0.75 0.75 0.85 7 10 FALSE TRUE 70/43.65 2014 866 RW1.26 1 0.75 0.75 1.5 0.1515 0.75 0.75 0.85 7 10 FALSE TRUE 70/43.65 1680 770 RW1.27 1 0.75 0.75 1 0.5184 0.75 0.75 0.85 7 10 FALSE TRUE 70/43.65 1680 756 RW1.28 1 0.75 0.75 0.5 0.966 0.75 0.75 0.85 7 10 FALSE TRUE 70/43.65 1681 731 RW1.29 1 0.75 0.75 0.01 2 0.75 0.75 0.85 7 10 FALSE TRUE 70/43.65 2509 974 RW1.30 1 0.75 0.75 0.01 1.5 0.75 0.75 0.85 7 10 FALSE TRUE 70/43.65 1798 747 RW1.31 1 0.75 0.75 0.463 1 0.75 0.75 0.85 7 10 FALSE TRUE 70/43.65 1681 730 RW1.32 1 0.75 0.75 1.0204 0.5 0.75 0.75 0.85 7 10 FALSE TRUE 70/43.65 1681 757 RW1.33 1 1 1 2 1.03 1 1 0.85 7 10 FALSE TRUE 70/43.65 1680 747 RW1.34 1 1 1 1.5 1.223 1 1 0.85 7 10 FALSE TRUE 70/43.65 1680 745 RW1.35 1 1 1 1 1.416 1 1 0.85 7 10 FALSE TRUE 70/43.65 1680 742 Table 39: Project-A Results: Using Switched Effort Fraction and an Unmodified Staffing Profile 327 Test ID Rel. Sched. Design Insp. Practice Code Insp. Prac. Design Error Density Code Error Density Unit Test Effect. Unit Test Prac. Int. Test Effect. Rev. Brd Delay Unit Test Delay Disable Test Effort Adj. Use Const. Task Fail. Rate Personnel Mod. Factor Total Errors Found in IT Tot. Err. Escaping Integ. 
Test RW1.36 1 1 1 0.5 1.61 1 1 0.85 7 10 FALSE TRUE 70/43.65 1680 741 RW1.37 1 1 1 0.01 2 1 1 0.85 7 10 FALSE TRUE 70/43.65 1902 806 RW1.38 1 1 1 0.783 1.5 1 1 0.85 7 10 FALSE TRUE 70/43.65 1680 741 RW1.39 1 1 1 2.077 1 1 1 0.85 7 10 FALSE TRUE 70/43.65 1680 747 RW1.40 1 1 1 3.372 0.5 1 1 0.85 7 10 FALSE TRUE 70/43.65 1680 754 RW2.1 1 1 1 1.488 1.488 1 1 0.15 7 10 FALSE TRUE 70/43.65 1680 1125 RW2.2 1 0.75 0.75 0.867 0.867 0.75 0.75 0.15 7 10 FALSE TRUE 70/43.65 1680 1163 RW2.3 1 0.5 0.5 0.684 0.684 0.5 0.5 0.15 7 10 FALSE TRUE 70/43.65 1680 1278 RW2.4 1 0.25 0.25 0.42 0.42 0.25 0.25 0.15 7 10 FALSE TRUE 70/43.65 1681 1236 RW3.1 0.9 1 1 1.304 1.304 1 1 0.85 7 10 FALSE TRUE 70/43.65 1681 702 RW3.2 0.9 0.75 0.75 0.7436 0.7436 0.75 0.75 0.85 7 10 FALSE TRUE 70/43.65 1681 700 RW3.3 0.9 0.5 0.5 0.5651 0.5651 0.5 0.5 0.85 7 10 FALSE TRUE 70/43.65 1680 713 RW3.4 0.9 0.25 0.25 0.353 0.353 0.25 0.25 0.85 7 10 FALSE TRUE 70/43.65 1680 718 RW3.5 0.9 0 0 0.269 0.269 0 0 0.85 7 10 FALSE TRUE 70/43.65 1679 722 RW4.1 1 1 1 1.3126 1.3126 1 1 0.85 7 10 TRUE TRUE 70/43.65 1680 721 RW4.2 1 0.75 0.75 0.744 0.744 0.75 0.75 0.85 7 10 TRUE TRUE 70/43.65 1680 724 RW4.3 1 0.5 0.5 0.551 0.551 0.5 0.5 0.85 7 10 TRUE TRUE 70/43.65 1680 690 RW4.4 1 0.25 0.25 0.3433 0.3433 0.25 0.25 0.85 7 10 TRUE TRUE 70/43.65 1680 685 RW4.5 1 0 0 0.2613 0.2613 0 0 0.85 7 10 TRUE TRUE 70/43.65 1680 685 RW5.1 1 1 1 0.5058 0.5058 1 1 0.15 7 10 TRUE TRUE 70/43.65 1681 2631 RW5.2 1 0.75 0.75 1.3305 1.3305 0.75 0.75 0.15 7 10 TRUE TRUE 70/43.65 1681 2720 RW5.3 1 0.5 0.5 1.069 1.069 0.5 0.5 0.15 7 10 TRUE TRUE 70/43.65 1681 2954 RW5.4 1 0.25 0.25 0.6645 0.6645 0.25 0.25 0.15 7 10 TRUE TRUE 70/43.65 1680 2958 RW5.5 1 0 0 0.5058 0.5058 0 0 0.15 7 10 TRUE TRUE 70/43.65 1680 2975 RW6.1 1 1 1 1.298 1.298 1 1 0.85 7 10 FALSE FALSE 70/43.65 1681 724 RW6.2 1 0.75 0.75 0.77136 0.77136 0.75 0.75 0.85 7 10 FALSE FALSE 70/43.65 1681 740 RW6.3 1 0.5 0.5 0.5727 0.5727 0.5 0.5 0.85 7 10 FALSE FALSE 70/43.65 1681 761 RW6.4 1 0.25 0.25 0.3627 0.3627 0.25 0.25 0.85 7 10 FALSE FALSE 70/43.65 1681 776 RW6.5 1 0 0 0.2787 0.2787 0 0 0.85 7 10 FALSE FALSE 70/43.65 1680 780 RW7.1 1 1 1 1.3 1.3 1 1 0.85 7 3 FALSE TRUE 70/43.65 1680 740 RW7.2 1 0.75 0.75 0.7477 0.7477 0.75 0.75 0.85 7 3 FALSE TRUE 70/43.65 1681 755 RW7.3 1 0.5 0.5 0.5681 0.5681 0.5 0.5 0.85 7 3 FALSE TRUE 70/43.65 1681 765 RW7.4 1 0.25 0.25 0.3539 0.3539 0.25 0.25 0.85 7 3 FALSE TRUE 70/43.65 1681 769 RW7.5 1 0 0 0.2695 0.2695 0 0 0.85 7 3 FALSE TRUE 70/43.65 1681 773 RW8.1 1 1 1 1.3 1.3 1 1 0.85 1 10 FALSE TRUE 70/43.65 1681 742 RW8.2 1 0.75 0.75 0.7446 0.7446 0.75 0.75 0.85 1 10 FALSE TRUE 70/43.65 1680 746 RW8.3 1 0.5 0.5 0.5662 0.5662 0.5 0.5 0.85 1 10 FALSE TRUE 70/43.65 1680 761 RW8.4 1 0.25 0.25 0.3535 0.3535 0.25 0.25 0.85 1 10 FALSE TRUE 70/43.65 1681 766 RW8.5 1 0 0 0.2695 0.2695 0 0 0.85 1 10 FALSE TRUE 70/43.65 1681 773 Table 39: Continued 328 Test ID Rel. Sched. Design Insp. Practice Code Insp. Prac. Design Error Density Code Error Density Unit Test Effect. Unit Test Prac. Int. Test Effect. Rev. Brd Delay Unit Test Delay Disable Test Effort Adj. Use Const. Task Fail. Rate Personnel Mod. Factor Total Errors Found in IT Tot. Err. Escaping Integ. 
Test MD0.1 1 1 1 2.39 2.39 1 1 0.85 7 10 FALSE TRUE 140/43.65 1680 825 MD0.2 1 0.75 0.75 1.7755 1.7755 0.75 0.75 0.85 7 10 FALSE TRUE 140/43.65 1680 800 MD0.3 1 0.5 0.5 1.6235 1.6235 0.5 0.5 0.85 7 10 FALSE TRUE 140/43.65 1680 804 MD0.4 1 0.25 0.25 0.9872 0.9872 0.25 0.25 0.85 7 10 FALSE TRUE 140/43.65 1680 810 MD0.5 1 0 0 0.7091 0.7091 0 0 0.85 7 10 FALSE TRUE 140/43.65 1680 813 MD1.1 1 0 0 2 0.01 0 0 0.85 7 10 FALSE TRUE 140/43.65 2195 1032 MD1.2 1 0 0 1.5 0.041 0 0 0.85 7 10 FALSE TRUE 140/43.65 1681 801 MD1.3 1 0 0 1 0.463795 0 0 0.85 7 10 FALSE TRUE 140/43.65 1680 809 MD1.4 1 0 0 0.5 0.8855 0 0 0.85 7 10 FALSE TRUE 140/43.65 1680 817 MD1.5 1 0 0 0.01 2 0 0 0.85 7 10 FALSE TRUE 140/43.65 2822 1356 MD1.6 1 0 0 0.01 1.5 0 0 0.85 7 10 FALSE TRUE 140/43.65 2014 975 MD1.7 1 0 0 0.362 1 0 0 0.85 7 10 FALSE TRUE 140/43.65 1680 816 MD1.8 1 0 0 0.9575 0.5 0 0 0.85 7 10 FALSE TRUE 140/43.65 1680 809 MD1.9 1 0.25 0.25 2 0.2945 0.25 0.25 0.85 7 10 FALSE TRUE 140/43.65 1680 801 MD1.10 1 0.25 0.25 1.5 0.6381 0.25 0.25 0.85 7 10 FALSE TRUE 140/43.65 1680 807 MD1.11 1 0.25 0.25 1 0.98 0.25 0.25 0.85 7 10 FALSE TRUE 140/43.65 1680 812 MD1.12 1 0.25 0.25 0.5 1.3174 0.25 0.25 0.85 7 10 FALSE TRUE 140/43.65 1681 815 MD1.13 1 0.25 0.25 0.01 2 0.25 0.25 0.85 7 10 FALSE TRUE 140/43.65 2137 1031 MD1.14 1 0.25 0.25 0.2285 1.5 0.25 0.25 0.85 7 10 FALSE TRUE 140/43.65 1680 817 MD1.15 1 0.25 0.25 0.969 1 0.25 0.25 0.85 7 10 FALSE TRUE 140/43.65 1680 811 MD1.16 1 0.25 0.25 1.701 0.5 0.25 0.25 0.85 7 10 FALSE TRUE 140/43.65 1680 804 MD1.17 1 0.5 0.5 2 1.4035 0.5 0.5 0.85 7 10 FALSE TRUE 140/43.65 1680 804 MD1.18 1 0.5 0.5 1.5 1.696 0.5 0.5 0.85 7 10 FALSE TRUE 140/43.65 1680 804 MD1.19 1 0.5 0.5 1 1.99 0.5 0.5 0.85 7 10 FALSE TRUE 140/43.65 1681 807 MD1.20 1 0.5 0.5 0.5 2.2807 0.5 0.5 0.85 7 10 FALSE TRUE 140/43.65 1680 808 MD1.21 1 0.5 0.5 0.98 2 0.5 0.5 0.85 7 10 FALSE TRUE 140/43.65 1680 806 MD1.22 1 0.5 0.5 1.835 1.5 0.5 0.5 0.85 7 10 FALSE TRUE 140/43.65 1680 804 MD1.23 1 0.5 0.5 2.68 1 0.5 0.5 0.85 7 10 FALSE TRUE 140/43.65 1680 799 MD1.24 1 0.5 0.5 3.5285 0.5 0.5 0.5 0.85 7 10 FALSE TRUE 140/43.65 1680 797 MD1.25 1 0.75 0.75 2 1.672 0.75 0.75 0.85 7 10 FALSE TRUE 140/43.65 1680 799 MD1.26 1 0.75 0.75 1.5 1.9025 0.75 0.75 0.85 7 10 FALSE TRUE 140/43.65 1681 800 MD1.27 1 0.75 0.75 1 2.133 0.75 0.75 0.85 7 10 FALSE TRUE 140/43.65 1680 802 MD1.28 1 0.75 0.75 0.5 2.3655 0.75 0.75 0.85 7 10 FALSE TRUE 140/43.65 1680 806 MD1.29 1 0.75 0.75 1.29 2 0.75 0.75 0.85 7 10 FALSE TRUE 140/43.65 1680 802 MD1.30 1 0.75 0.75 2.37 1.5 0.75 0.75 0.85 7 10 FALSE TRUE 140/43.65 1680 797 MD1.31 1 0.75 0.75 3.4455 1 0.75 0.75 0.85 7 10 FALSE TRUE 140/43.65 1680 792 MD1.32 1 0.75 0.75 4.524 0.5 0.75 0.75 0.85 7 10 FALSE TRUE 140/43.65 1680 789 MD1.33 1 1 1 2 2.543 1 1 0.85 7 10 FALSE TRUE 140/43.65 1680 827 MD1.34 1 1 1 1.5 2.732 1 1 0.85 7 10 FALSE TRUE 140/43.65 1681 823 MD1.35 1 1 1 1 2.926 1 1 0.85 7 10 FALSE TRUE 140/43.65 1680 824 Table 40: Project-A Results: Using Baseline Effort Fraction and a Modified Staffing Profile 329 Test ID Rel. Sched. Design Insp. Practice Code Insp. Prac. Design Error Density Code Error Density Unit Test Effect. Unit Test Prac. Int. Test Effect. Rev. Brd Delay Unit Test Delay Disable Test Effort Adj. Use Const. Task Fail. Rate Personnel Mod. Factor Total Errors Found in IT Tot. Err. Escaping Integ. 
Test MD1.36 1 1 1 0.5 3.121 1 1 0.85 7 10 FALSE TRUE 140/43.65 1680 825 MD1.37 1 1 1 3.3999 2 1 1 0.85 7 10 FALSE TRUE 140/43.65 1680 826 MD1.38 1 1 1 4.705 1.5 1 1 0.85 7 10 FALSE TRUE 140/43.65 1680 828 MD1.39 1 1 1 6.001 1 1 1 0.85 7 10 FALSE TRUE 140/43.65 1680 824 MD1.40 1 1 1 7.335 0.5 1 1 0.85 7 10 FALSE TRUE 140/43.65 1680 829 MD2.1 1 1 1 2.511 2.511 1 1 0.15 7 10 FALSE TRUE 140/43.65 1680 979 MD2.2 1 0.75 0.75 1.9151 1.9151 0.75 0.75 0.15 7 10 FALSE TRUE 140/43.65 1681 1019 MD2.3 1 0.5 0.5 1.745 1.745 0.5 0.5 0.15 7 10 FALSE TRUE 140/43.65 1680 1011 MD2.4 1 0.25 0.25 1.0577 1.0577 0.25 0.25 0.15 7 10 FALSE TRUE 140/43.65 1680 1003 MD2.5 1 0 0 0.7573 0.7573 0 0 0.15 7 10 FALSE TRUE 140/43.65 1680 998 MD3.1 0.9 1 1 2.3735 2.3735 1 1 0.85 7 10 FALSE TRUE 140/43.65 1680 786 MD3.2 0.9 0.75 0.75 1.784 1.784 0.75 0.75 0.85 7 10 FALSE TRUE 140/43.65 1680 769 MD3.3 0.9 0.5 0.5 1.632 1.632 0.5 0.5 0.85 7 10 FALSE TRUE 140/43.65 1680 768 MD3.4 0.9 0.25 0.25 0.9949 0.9949 0.25 0.25 0.85 7 10 FALSE TRUE 140/43.65 1680 774 MD3.5 0.9 0 0 0.7115 0.7115 0 0 0.85 7 10 FALSE TRUE 140/43.65 1680 770 MD4.1 1 1 1 2.58 2.58 1 1 0.85 7 10 TRUE TRUE 140/43.65 1680 987 MD4.2 1 0.75 0.75 1.9132 1.9132 0.75 0.75 0.85 7 10 TRUE TRUE 140/43.65 1680 972 MD4.3 1 0.5 0.5 1.7469 1.7469 0.5 0.5 0.85 7 10 TRUE TRUE 140/43.65 1680 975 MD4.4 1 0.25 0.25 1.03885 1.03885 0.25 0.25 0.85 7 10 TRUE TRUE 140/43.65 1681 925 MD4.5 1 0 0 0.74817 0.74817 0 0 0.85 7 10 TRUE TRUE 140/43.65 1681 930 MD5.1 1 1 1 2.9742 2.9742 1 1 0.15 7 10 TRUE TRUE 140/43.65 1680 1498 MD5.2 1 0.75 0.75 2.2878 2.2878 0.75 0.75 0.15 7 10 TRUE TRUE 140/43.65 1680 1565 MD5.3 1 0.5 0.5 2.078 2.078 0.5 0.5 0.15 7 10 TRUE TRUE 140/43.65 1680 1542 MD5.4 1 0.25 0.25 1.2328 1.2328 0.25 0.25 0.15 7 10 TRUE TRUE 140/43.65 1687 1456 MD5.5 1 0 0 0.9032 0.9032 0 0 0.15 7 10 TRUE TRUE 140/43.65 1677 1534 MD6.1 1 1 1 2.846 2.846 1 1 0.85 7 10 FALSE FALSE 140/43.65 1680 806 MD6.2 1 0.75 0.75 1.992 1.992 0.75 0.75 0.85 7 10 FALSE FALSE 140/43.65 1680 793 MD6.3 1 0.5 0.5 1.795 1.795 0.5 0.5 0.85 7 10 FALSE FALSE 140/43.65 1680 800 MD6.4 1 0.25 0.25 1.0635 1.0635 0.25 0.25 0.85 7 10 FALSE FALSE 140/43.65 1680 802 MD6.5 1 0 0 0.7751 0.7751 0 0 0.85 7 10 FALSE FALSE 140/43.65 1680 802 MD7.1 1 1 1 2.365 2.365 1 1 0.85 7 3 FALSE TRUE 140/43.65 1680 827 MD7.2 1 0.75 0.75 1.78 1.78 0.75 0.75 0.85 7 3 FALSE TRUE 140/43.65 1680 806 MD7.3 1 0.5 0.5 1.63 1.63 0.5 0.5 0.85 7 3 FALSE TRUE 140/43.65 1680 799 MD7.4 1 0.25 0.25 0.9875 0.9875 0.25 0.25 0.85 7 3 FALSE TRUE 140/43.65 1680 810 MD7.5 1 0 0 0.7091 0.7091 0 0 0.85 7 3 FALSE TRUE 140/43.65 1680 813 MD8.1 1 1 1 2.391 2.391 1 1 0.85 1 10 FALSE TRUE 140/43.65 1680 825 MD8.2 1 0.75 0.75 1.777 1.777 0.75 0.75 0.85 1 10 FALSE TRUE 140/43.65 1680 801 MD8.3 1 0.5 0.5 1.624 1.624 0.5 0.5 0.85 1 10 FALSE TRUE 140/43.65 1680 805 MD8.4 1 0.25 0.25 0.988 0.988 0.25 0.25 0.85 1 10 FALSE TRUE 140/43.65 1680 813 MD8.5 1 0 0 0.70917 0.70917 0 0 0.85 1 10 FALSE TRUE 140/43.65 1680 813 Table 40: Continued 330 Test ID Rel. Sched. Design Insp. Practice Code Insp. Prac. Design Error Density Code Error Density Unit Test Effect. Unit Test Prac. Int. Test Effect. Rev. Brd Delay Unit Test Delay Disable Test Effort Adj. Use Const. Task Fail. Rate Personnel Mod. Factor Total Errors Found in IT Tot. Err. Escaping Integ. 
Test MD0.1 1 1 1 1.598 1.598 1 1 0.85 7 10 FALSE TRUE 140/43.65 1680 826 MD0.2 1 0.75 0.75 1.1395 1.1395 0.75 0.75 0.85 7 10 FALSE TRUE 140/43.65 1680 800 MD0.3 1 0.5 0.5 1.01515 1.01515 0.5 0.5 0.85 7 10 FALSE TRUE 140/43.65 1680 809 MD0.4 1 0.25 0.25 0.6129 0.6129 0.25 0.25 0.85 7 10 FALSE TRUE 140/43.65 1679 810 MD0.5 1 0 0 0.4455 0.4455 0 0 0.85 7 10 FALSE TRUE 140/43.65 1680 813 MD1.1 1 0 0 2 0.01 0 0 0.85 7 10 FALSE TRUE 140/43.65 3836 1675 MD1.2 1 0 0 1.5 0.01 0 0 0.85 7 10 FALSE TRUE 140/43.65 2847 1291 MD1.3 1 0 0 1 0.01 0 0 0.85 7 10 FALSE TRUE 140/43.65 1877 888 MD1.4 1 0 0 0.5 0.3936 0 0 0.85 7 10 FALSE TRUE 140/43.65 1681 813 MD1.5 1 0 0 0.01 2 0 0 0.85 7 10 FALSE TRUE 140/43.65 4536 2033 MD1.6 1 0 0 0.01 1.5 0 0 0.85 7 10 FALSE TRUE 140/43.65 3239 1487 MD1.7 1 0 0 0.01 1 0 0 0.85 7 10 FALSE TRUE 140/43.65 1997 951 MD1.8 1 0 0 0.3884 0.5 0 0 0.85 7 10 FALSE TRUE 140/43.65 1680 813 MD1.9 1 0.25 0.25 2 0.01 0.25 0.25 0.85 7 10 FALSE TRUE 140/43.65 2466 1136 MD1.10 1 0.25 0.25 1.5 0.01 0.25 0.25 0.85 7 10 FALSE TRUE 140/43.65 1856 877 MD1.11 1 0.25 0.25 1 0.3025 0.25 0.25 0.85 7 10 FALSE TRUE 140/43.65 1680 811 MD1.12 1 0.25 0.25 0.5 0.70438 0.25 0.25 0.85 7 10 FALSE TRUE 140/43.65 1680 812 MD1.13 1 0.25 0.25 0.01 2 0.25 0.25 0.85 7 10 FALSE TRUE 140/43.65 3429 1570 MD1.14 1 0.25 0.25 0.01 1.5 0.25 0.25 0.85 7 10 FALSE TRUE 140/43.65 2422 1134 MD1.15 1 0.25 0.25 0.132 1 0.25 0.25 0.85 7 10 FALSE TRUE 140/43.65 1680 812 MD1.16 1 0.25 0.25 0.753 0.5 0.25 0.25 0.85 7 10 FALSE TRUE 140/43.65 1681 808 MD1.17 1 0.5 0.5 2 0.31 0.5 0.5 0.85 7 10 FALSE TRUE 140/43.65 1680 817 MD1.18 1 0.5 0.5 1.5 0.667 0.5 0.5 0.85 7 10 FALSE TRUE 140/43.65 1680 813 MD1.19 1 0.5 0.5 1 1.026 0.5 0.5 0.85 7 10 FALSE TRUE 140/43.65 1680 808 MD1.20 1 0.5 0.5 0.5 1.392 0.5 0.5 0.85 7 10 FALSE TRUE 140/43.65 1680 811 MD1.21 1 0.5 0.5 0.01 2 0.5 0.5 0.85 7 10 FALSE TRUE 140/43.65 1997 947 MD1.22 1 0.5 0.5 0.344 1.5 0.5 0.5 0.85 7 10 FALSE TRUE 140/43.65 1680 810 MD1.23 1 0.5 0.5 1.0386 1 0.5 0.5 0.85 7 10 FALSE TRUE 140/43.65 1680 811 MD1.24 1 0.5 0.5 1.7344 0.5 0.5 0.5 0.85 7 10 FALSE TRUE 140/43.65 1681 815 MD1.25 1 0.75 0.75 2 0.688 0.75 0.75 0.85 7 10 FALSE TRUE 140/43.65 1681 803 MD1.26 1 0.75 0.75 1.5 0.948 0.75 0.75 0.85 7 10 FALSE TRUE 140/43.65 1681 797 MD1.27 1 0.75 0.75 1 1.212 0.75 0.75 0.85 7 10 FALSE TRUE 140/43.65 1680 798 MD1.28 1 0.75 0.75 0.5 1.479 0.75 0.75 0.85 7 10 FALSE TRUE 140/43.65 1680 803 MD1.29 1 0.75 0.75 0.01 2 0.75 0.75 0.85 7 10 FALSE TRUE 140/43.65 1982 930 MD1.30 1 0.75 0.75 0.459 1.5 0.75 0.75 0.85 7 10 FALSE TRUE 140/43.65 1680 802 MD1.31 1 0.75 0.75 1.402 1 0.75 0.75 0.85 7 10 FALSE TRUE 140/43.65 1680 798 MD1.32 1 0.75 0.75 2.3515 0.5 0.75 0.75 0.85 7 10 FALSE TRUE 140/43.65 1680 800 MD1.33 1 1 1 2 1.445 1 1 0.85 7 10 FALSE TRUE 140/43.65 1680 828 MD1.34 1 1 1 1.5 1.638 1 1 0.85 7 10 FALSE TRUE 140/43.65 1680 827 MD1.35 1 1 1 1 1.831 1 1 0.85 7 10 FALSE TRUE 140/43.65 1681 825 Table 41: Project-A Results: Using Switched Effort Fraction and a Modified Staffing Profile 331 Test ID Rel. Sched. Design Insp. Practice Code Insp. Prac. Design Error Density Code Error Density Unit Test Effect. Unit Test Prac. Int. Test Effect. Rev. Brd Delay Unit Test Delay Disable Test Effort Adj. Use Const. Task Fail. Rate Personnel Mod. Factor Total Errors Found in IT Tot. Err. Escaping Integ. 
Test MD1.36 1 1 1 0.5 2.027 1 1 0.85 7 10 FALSE TRUE 140/43.65 1681 823 MD1.37 1 1 1 0.565 2 1 1 0.85 7 10 FALSE TRUE 140/43.65 1680 823 MD1.38 1 1 1 1.853 1.5 1 1 0.85 7 10 FALSE TRUE 140/43.65 1681 826 MD1.39 1 1 1 3.24 1 1 1 0.85 7 10 FALSE TRUE 140/43.65 1680 826 MD1.40 1 1 1 4.63 0.5 1 1 0.85 7 10 FALSE TRUE 140/43.65 1680 824 MD2.1 1 1 1 1.7155 1.7155 1 1 0.15 7 10 FALSE TRUE 140/43.65 1680 1035 MD2.2 1 0.75 0.75 1.24 1.24 0.75 0.75 0.15 7 10 FALSE TRUE 140/43.65 1680 1037 MD2.3 1 0.5 0.5 1.099 1.099 0.5 0.5 0.15 7 10 FALSE TRUE 140/43.65 1680 1025 MD2.4 1 0.25 0.25 0.665 0.665 0.25 0.25 0.15 7 10 FALSE TRUE 140/43.65 1680 1032 MD2.5 1 0 0 0.4835 0.4835 0 0 0.15 7 10 FALSE TRUE 140/43.65 1680 1039 MD3.1 0.9 1 1 1.655 1.655 1 1 0.85 7 10 FALSE TRUE 140/43.65 1680 782 MD3.2 0.9 0.75 0.75 1.174 1.174 0.75 0.75 0.85 7 10 FALSE TRUE 140/43.65 1680 759 MD3.3 0.9 0.5 0.5 1.049 1.049 0.5 0.5 0.85 7 10 FALSE TRUE 140/43.65 1681 771 MD3.4 0.9 0.25 0.25 0.63 0.63 0.25 0.25 0.85 7 10 FALSE TRUE 140/43.65 1680 776 MD3.5 0.9 0 0 0.4584 0.4584 0 0 0.85 7 10 FALSE TRUE 140/43.65 1680 778 MD4.1 1 1 1 1.7335 1.7335 1 1 0.85 7 10 TRUE TRUE 140/43.65 1681 994 MD4.2 1 0.75 0.75 1.229 1.229 0.75 0.75 0.85 7 10 TRUE TRUE 140/43.65 1680 969 MD4.3 1 0.5 0.5 1.094 1.094 0.5 0.5 0.85 7 10 TRUE TRUE 140/43.65 1681 982 MD4.4 1 0.25 0.25 0.6534 0.6534 0.25 0.25 0.85 7 10 TRUE TRUE 140/43.65 1680 957 MD4.5 1 0 0 0.4758 0.4758 0 0 0.85 7 10 TRUE TRUE 140/43.65 1680 961 MD5.1 1 1 1 2.2 2.2 1 1 0.15 7 10 TRUE TRUE 140/43.65 1680 1821 MD5.2 1 0.75 0.75 1.628991 1.628991 0.75 0.75 0.15 7 10 TRUE TRUE 140/43.65 1681 1903 MD5.3 1 0.5 0.5 1.415 1.415 0.5 0.5 0.15 7 10 TRUE TRUE 140/43.65 1680 1809 MD5.4 1 0.25 0.25 0.8512 0.8512 0.25 0.25 0.15 7 10 TRUE TRUE 140/43.65 1680 1801 MD5.5 1 0 0 0.62768 0.62768 0 0 0.15 7 10 TRUE TRUE 140/43.65 1680 1862 MD6.1 1 1 1 1.774 1.774 1 1 0.85 7 10 FALSE FALSE 140/43.65 1681 803 MD6.2 1 0.75 0.75 1.218 1.218 0.75 0.75 0.85 7 10 FALSE FALSE 140/43.65 1680 788 MD6.3 1 0.5 0.5 1.061 1.061 0.5 0.5 0.85 7 10 FALSE FALSE 140/43.65 1680 802 MD6.4 1 0.25 0.25 0.64 0.64 0.25 0.25 0.85 7 10 FALSE FALSE 140/43.65 1680 802 MD6.5 1 0 0 0.4687 0.4687 0 0 0.85 7 10 FALSE FALSE 140/43.65 1680 802 MD7.1 1 1 1 1.5867 1.5867 1 1 0.85 7 3 FALSE TRUE 140/43.65 1681 825 MD7.2 1 0.75 0.75 1.1442 1.1442 0.75 0.75 0.85 7 3 FALSE TRUE 140/43.65 1680 810 MD7.3 1 0.5 0.5 1.0227 1.0227 0.5 0.5 0.85 7 3 FALSE TRUE 140/43.65 1680 814 MD7.4 1 0.25 0.25 0.6135 0.6135 0.25 0.25 0.85 7 3 FALSE TRUE 140/43.65 1680 811 MD7.5 1 0 0 0.4455 0.4455 0 0 0.85 7 3 FALSE TRUE 140/43.65 1680 813 MD8.1 1 1 1 1.599 1.599 1 1 0.85 1 10 FALSE TRUE 140/43.65 1680 827 MD8.2 1 0.75 0.75 1.139 1.139 0.75 0.75 0.85 1 10 FALSE TRUE 140/43.65 1680 799 MD8.3 1 0.5 0.5 1.016 1.016 0.5 0.5 0.85 1 10 FALSE TRUE 140/43.65 1680 810 MD8.4 1 0.25 0.25 0.6135 0.6135 0.25 0.25 0.85 1 10 FALSE TRUE 140/43.65 1680 812 MD8.5 1 0 0 0.4455 0.4455 0 0 0.85 1 10 FALSE TRUE 140/43.65 1680 813 Table 41: Continued 332 Test ID Rel. Sched. Design Insp. Practice Code Insp. Prac. Design Error Density Code Error Density Unit Test Effect. Unit Test Prac. Int. Test Effect. Rev. Brd Delay Unit Test Delay Disable Test Effort Adj. Use Const. Task Fail. Rate Personnel Mod. Factor Total Errors Found in IT Tot. Err. Escaping Integ. 
Test RW0.1 1 1 1 2.0305 2.0305 1 1 0.85 7 10 FALSE TRUE 70/59.31 2318 1065 RW0.2 1 0.75 0.75 1.463 1.463 0.75 0.75 0.85 7 10 FALSE TRUE 70/59.31 2318 1045 RW0.3 1 0.5 0.5 1.348 1.348 0.5 0.5 0.85 7 10 FALSE TRUE 70/59.31 2318 1091 RW0.4 1 0.25 0.25 0.7945 0.7945 0.25 0.25 0.85 7 10 FALSE TRUE 70/59.31 2318 1053 RW0.5 1 0 0 0.5755 0.5755 0 0 0.85 7 10 FALSE TRUE 70/59.31 2318 1048 RW1.1 1 0 0 2 0.01 0 0 0.85 7 10 FALSE TRUE 70/59.31 4153 1530 RW1.2 1 0 0 1.5 0.01 0 0 0.85 7 10 FALSE TRUE 70/59.31 3024 1244 RW1.3 1 0 0 1 0.176 0 0 0.85 7 10 FALSE TRUE 70/59.31 2318 1039 RW1.4 1 0 0 0.5 0.6467 0 0 0.85 7 10 FALSE TRUE 70/59.31 2318 1050 RW1.5 1 0 0 0.01 2 0 0 0.85 7 10 FALSE TRUE 70/59.31 4683 1704 RW1.6 1 0 0 0.01 1.5 0 0 0.85 7 10 FALSE TRUE 70/59.31 3359 1307 RW1.7 1 0 0 0.125 1 0 0 0.85 7 10 FALSE TRUE 70/59.31 2318 1058 RW1.8 1 0 0 0.656 0.5 0 0 0.85 7 10 FALSE TRUE 70/59.31 2318 1047 RW1.9 1 0.25 0.25 2 0.01 0.25 0.25 0.85 7 10 FALSE TRUE 70/59.31 2604 1120 RW1.10 1 0.25 0.25 1.5 0.241 0.25 0.25 0.85 7 10 FALSE TRUE 70/59.31 2318 1039 RW1.11 1 0.25 0.25 1 0.6332 0.25 0.25 0.85 7 10 FALSE TRUE 70/59.31 2318 1049 RW1.12 1 0.25 0.25 0.5 1.0261 0.25 0.25 0.85 7 10 FALSE TRUE 70/59.31 2318 1058 RW1.13 1 0.25 0.25 0.01 2 0.25 0.25 0.85 7 10 FALSE TRUE 70/59.31 3567 1358 RW1.14 1 0.25 0.25 0.01 1.5 0.25 0.25 0.85 7 10 FALSE TRUE 70/59.31 2488 1123 RW1.15 1 0.25 0.25 0.5332 1 0.25 0.25 0.85 7 10 FALSE TRUE 70/59.31 2318 1058 RW1.16 1 0.25 0.25 1.1704 0.5 0.25 0.25 0.85 7 10 FALSE TRUE 70/59.31 2318 1047 RW1.17 1 0.5 0.5 2 0.9199 0.5 0.5 0.85 7 10 FALSE TRUE 70/59.31 2318 1083 RW1.18 1 0.5 0.5 1.5 1.2476 0.5 0.5 0.85 7 10 FALSE TRUE 70/59.31 2318 1089 RW1.19 1 0.5 0.5 1 1.57105 0.5 0.5 0.85 7 10 FALSE TRUE 70/59.31 2318 1085 RW1.20 1 0.5 0.5 0.5 1.88703 0.5 0.5 0.85 7 10 FALSE TRUE 70/59.31 2318 1070 RW1.21 1 0.5 0.5 0.328 2 0.5 0.5 0.85 7 10 FALSE TRUE 70/59.31 2318 1071 RW1.22 1 0.5 0.5 1.116 1.5 0.5 0.5 0.85 7 10 FALSE TRUE 70/59.31 2318 1093 RW1.23 1 0.5 0.5 1.877 1 0.5 0.5 0.85 7 10 FALSE TRUE 70/59.31 2318 1084 RW1.24 1 0.5 0.5 2.638 0.5 0.5 0.5 0.85 7 10 FALSE TRUE 70/59.31 2318 1074 RW1.25 1 0.75 0.75 2 1.184 0.75 0.75 0.85 7 10 FALSE TRUE 70/59.31 2318 1038 RW1.26 1 0.75 0.75 1.5 1.4439 0.75 0.75 0.85 7 10 FALSE TRUE 70/59.31 2318 1044 RW1.27 1 0.75 0.75 1 1.701 0.75 0.75 0.85 7 10 FALSE TRUE 70/59.31 2318 1046 RW1.28 1 0.75 0.75 0.5 1.9585 0.75 0.75 0.85 7 10 FALSE TRUE 70/59.31 2318 1048 RW1.29 1 0.75 0.75 0.418 2 0.75 0.75 0.85 7 10 FALSE TRUE 70/59.31 2318 1047 RW1.30 1 0.75 0.75 1.391 1.5 0.75 0.75 0.85 7 10 FALSE TRUE 70/59.31 2318 1045 RW1.31 1 0.75 0.75 2.36 1 0.75 0.75 0.85 7 10 FALSE TRUE 70/59.31 2318 1038 RW1.32 1 0.75 0.75 3.333 0.5 0.75 0.75 0.85 7 10 FALSE TRUE 70/59.31 2318 1034 RW1.33 1 1 1 2 2.0423 1 1 0.85 7 10 FALSE TRUE 70/59.31 2318 1065 RW1.34 1 1 1 1.5 2.235 1 1 0.85 7 10 FALSE TRUE 70/59.31 2318 1066 RW1.35 1 1 1 1 2.427 1 1 0.85 7 10 FALSE TRUE 70/59.31 2318 1066 Table 42: Project-C Results: Using Baseline Effort Fraction and an Unmodified Staffing Profile 333 Test ID Rel. Sched. Design Insp. Practice Code Insp. Prac. Design Error Density Code Error Density Unit Test Effect. Unit Test Prac. Int. Test Effect. Rev. Brd Delay Unit Test Delay Disable Test Effort Adj. Use Const. Task Fail. Rate Personnel Mod. Factor Total Errors Found in IT Tot. Err. Escaping Integ. 
Test RW1.36 1 1 1 0.5 2.62 1 1 0.85 7 10 FALSE TRUE 70/59.31 2318 1066 RW1.37 1 1 1 2.108 2 1 1 0.85 7 10 FALSE TRUE 70/59.31 2318 1065 RW1.38 1 1 1 3.409 1.5 1 1 0.85 7 10 FALSE TRUE 70/59.31 2318 1065 RW1.39 1 1 1 4.71 1 1 1 0.85 7 10 FALSE TRUE 70/59.31 2318 1064 RW1.40 1 1 1 6.006 0.5 1 1 0.85 7 10 FALSE TRUE 70/59.31 2318 1062 RW2.1 1 1 1 2.468 2.468 1 1 0.15 7 10 FALSE TRUE 70/59.31 2318 1848 RW2.2 1 0.75 0.75 1.816 1.816 0.75 0.75 0.15 7 10 FALSE TRUE 70/59.31 2318 1906 RW2.3 1 0.5 0.5 1.62549 1.62549 0.5 0.5 0.15 7 10 FALSE TRUE 70/59.31 2318 1843 RW2.4 1 0.25 0.25 0.95388 0.95388 0.25 0.25 0.15 7 10 FALSE TRUE 70/59.31 2318 1754 RW2.5 1 0 0 0.6908 0.6908 0 0 0.15 7 10 FALSE TRUE 70/59.31 2318 1750 RW3.1 0.9 1 1 2.0037 2.0037 1 1 0.85 7 10 FALSE TRUE 70/59.31 2318 1031 RW3.2 0.9 0.75 0.75 1.44805 1.44805 0.75 0.75 0.85 7 10 FALSE TRUE 70/59.31 2318 1010 RW3.3 0.9 0.5 0.5 1.3245 1.3245 0.5 0.5 0.85 7 10 FALSE TRUE 70/59.31 2318 1029 RW3.4 0.9 0.25 0.25 0.7821 0.7821 0.25 0.25 0.85 7 10 FALSE TRUE 70/59.31 2318 1000 RW3.5 0.9 0 0 0.5675 0.5675 0 0 0.85 7 10 FALSE TRUE 70/59.31 2318 1002 RW4.1 1 1 1 1.99578 1.99578 1 1 0.85 7 10 TRUE TRUE 70/59.31 2318 944 RW4.2 1 0.75 0.75 1.4183 1.4183 0.75 0.75 0.85 7 10 TRUE TRUE 70/59.31 2318 902 RW4.3 1 0.5 0.5 1.3224 1.3224 0.5 0.5 0.85 7 10 TRUE TRUE 70/59.31 2318 985 RW4.4 1 0.25 0.25 0.7733 0.7733 0.25 0.25 0.85 7 10 TRUE TRUE 70/59.31 2318 938 RW4.5 1 0 0 0.5583 0.5583 0 0 0.85 7 10 TRUE TRUE 70/59.31 2318 918 RW5.1 1 1 1 3.271 3.271 1 1 0.15 7 10 TRUE TRUE 70/59.31 2318 3228 RW5.2 1 0.75 0.75 2.293317 2.293317 0.75 0.75 0.15 7 10 TRUE TRUE 70/59.31 2318 3028 RW5.3 1 0.5 0.5 2.0399 2.0399 0.5 0.5 0.15 7 10 TRUE TRUE 70/59.31 2319 2883 RW5.4 1 0.25 0.25 1.2233 1.2233 0.25 0.25 0.15 7 10 TRUE TRUE 70/59.31 2318 2912 RW5.5 1 0 0 0.9105 0.9105 0 0 0.15 7 10 TRUE TRUE 70/59.31 2318 3057 RW6.1 1 1 1 2.166 2.166 1 1 0.85 7 10 FALSE FALSE 70/59.31 2318 1059 RW6.2 1 0.75 0.75 1.5315 1.5315 0.75 0.75 0.85 7 10 FALSE FALSE 70/59.31 2318 1040 RW6.3 1 0.5 0.5 1.389 1.389 0.5 0.5 0.85 7 10 FALSE FALSE 70/59.31 2318 1042 RW6.4 1 0.25 0.25 0.817 0.817 0.25 0.25 0.85 7 10 FALSE FALSE 70/59.31 2318 1047 RW6.5 1 0 0 0.5945 0.5945 0 0 0.85 7 10 FALSE FALSE 70/59.31 2318 1042 RW7.1 1 1 1 2.01 2.01 1 1 0.85 7 3 FALSE TRUE 70/59.31 2318 1051 RW7.2 1 0.75 0.75 1.464 1.464 0.75 0.75 0.85 7 3 FALSE TRUE 70/59.31 2318 1042 RW7.3 1 0.5 0.5 1.3407 1.3407 0.5 0.5 0.85 7 3 FALSE TRUE 70/59.31 2318 1056 RW7.4 1 0.25 0.25 0.789 0.789 0.25 0.25 0.85 7 3 FALSE TRUE 70/59.31 2318 1028 RW7.5 1 0 0 0.5756 0.5756 0 0 0.85 7 3 FALSE TRUE 70/59.31 2318 1048 RW8.1 1 1 1 2.03 2.03 1 1 0.85 1 10 FALSE TRUE 70/59.31 2318 1064 RW8.2 1 0.75 0.75 1.4636 1.4636 0.75 0.75 0.85 1 10 FALSE TRUE 70/59.31 2318 1045 RW8.3 1 0.5 0.5 1.348 1.348 0.5 0.5 0.85 1 10 FALSE TRUE 70/59.31 2318 1091 RW8.4 1 0.25 0.25 0.7948 0.7948 0.25 0.25 0.85 1 10 FALSE TRUE 70/59.31 2318 1053 RW8.5 1 0 0 0.5756 0.5756 0 0 0.85 1 10 FALSE TRUE 70/59.31 2318 1049 Table 42: Continued 334 Test ID Rel. Sched. Design Insp. Practice Code Insp. Prac. Design Error Density Code Error Density Unit Test Effect. Unit Test Prac. Int. Test Effect. Rev. Brd Delay Unit Test Delay Disable Test Effort Adj. Use Const. Task Fail. Rate Personnel Mod. Factor Total Errors Found in IT Tot. Err. Escaping Integ. 
Test RW9.1 1 1 1 1.293 1.293 1 1 0.85 7 10 FALSE TRUE 70/59.31 2318 1041 RW9.2 1 0.75 0.75 0.883 0.883 0.75 0.75 0.85 7 10 FALSE TRUE 70/59.31 2318 960 RW9.3 1 0.5 0.5 0.804 0.804 0.5 0.5 0.85 7 10 FALSE TRUE 70/59.31 2318 1015 RW9.4 1 0.25 0.25 0.479 0.479 0.25 0.25 0.85 7 10 FALSE TRUE 70/59.31 2318 1038 RW9.5 1 0 0 0.350175 0.350175 0 0 0.85 7 10 FALSE TRUE 70/59.31 2318 1037 Table 43: Project-C Results: Using Switched Effort Fraction and an Unmodified Staffing Profile 335 Test ID Rel. Sched. Design Insp. Practice Code Insp. Prac. Design Error Density Code Error Density Unit Test Effect. Unit Test Prac. Int. Test Effect. Rev. Brd Delay Unit Test Delay Disable Test Effort Adj. Use Const. Task Fail. Rate Personnel Mod. Factor Total Errors Found in IT Tot. Err. Escaping Integ. Test MD0.1 1 1 1 1.404 1.404 1 1 0.85 7 10 FALSE TRUE 70/59.31 2318 436 MD0.2 1 0.75 0.75 0.9665 0.9665 0.75 0.75 0.85 7 10 FALSE TRUE 70/59.31 2318 410 MD0.3 1 0.5 0.5 0.751476 0.751476 0.5 0.5 0.85 7 10 FALSE TRUE 70/59.31 2318 383 MD0.4 1 0.25 0.25 0.5162 0.5162 0.25 0.25 0.85 7 10 FALSE TRUE 70/59.31 2318 409 MD0.5 1 0 0 0.37755 0.37755 0 0 0.85 7 10 FALSE TRUE 70/59.31 2318 416 MD1.1 1 0 0 2 0.01 0 0 0.85 7 10 FALSE TRUE 70/59.31 6381 799 MD1.2 1 0 0 1.5 0.01 0 0 0.85 7 10 FALSE TRUE 70/59.31 4752 642 MD1.3 1 0 0 1 0.01 0 0 0.85 7 10 FALSE TRUE 70/59.31 3136 471 MD1.4 1 0 0 0.5 0.2579 0 0 0.85 7 10 FALSE TRUE 70/59.31 2318 414 MD1.5 1 0 0 0.01 2 0 0 0.85 7 10 FALSE TRUE 70/59.31 6770 924 MD1.6 1 0 0 0.01 1.5 0 0 0.85 7 10 FALSE TRUE 70/59.31 4956 727 MD1.7 1 0 0 0.01 1 0 0 0.85 7 10 FALSE TRUE 70/59.31 3215 524 MD1.8 1 0 0 0.2525 0.5 0 0 0.85 7 10 FALSE TRUE 70/59.31 2318 419 MD1.9 1 0.25 0.25 2 0.01 0.25 0.25 0.85 7 10 FALSE TRUE 70/59.31 4426 585 MD1.10 1 0.25 0.25 1.5 0.01 0.25 0.25 0.85 7 10 FALSE TRUE 70/59.31 3312 462 MD1.11 1 0.25 0.25 1 0.067 0.25 0.25 0.85 7 10 FALSE TRUE 70/59.31 2318 353 MD1.12 1 0.25 0.25 0.5 0.5299 0.25 0.25 0.85 7 10 FALSE TRUE 70/59.31 2318 411 MD1.13 1 0.25 0.25 0.01 2 0.25 0.25 0.85 7 10 FALSE TRUE 70/59.31 5243 755 MD1.14 1 0.25 0.25 0.01 1.5 0.25 0.25 0.85 7 10 FALSE TRUE 70/59.31 3859 587 MD1.15 1 0.25 0.25 0.01 1 0.25 0.25 0.85 7 10 FALSE TRUE 70/59.31 2490 443 MD1.16 1 0.25 0.25 0.53579 0.5 0.25 0.25 0.85 7 10 FALSE TRUE 70/59.31 2318 409 MD1.17 1 0.5 0.5 2 0.01 0.5 0.5 0.85 7 10 FALSE TRUE 70/59.31 2928 376 MD1.18 1 0.5 0.5 1.5 0.086 0.5 0.5 0.85 7 10 FALSE TRUE 70/59.31 2318 324 MD1.19 1 0.5 0.5 1 0.5153 0.5 0.5 0.85 7 10 FALSE TRUE 70/59.31 2318 363 MD1.20 1 0.5 0.5 0.5 0.989 0.5 0.5 0.85 7 10 FALSE TRUE 70/59.31 2318 397 MD1.21 1 0.5 0.5 0.01 2 0.5 0.5 0.85 7 10 FALSE TRUE 70/59.31 3267 526 MD1.22 1 0.5 0.5 0.01 1.5 0.5 0.5 0.85 7 10 FALSE TRUE 70/59.31 2386 436 MD1.23 1 0.5 0.5 0.488 1 0.5 0.5 0.85 7 10 FALSE TRUE 70/59.31 2318 397 MD1.24 1 0.5 0.5 1.0138 0.5 0.5 0.5 0.85 7 10 FALSE TRUE 70/59.31 2318 357 MD1.25 1 0.75 0.75 2 0.4101 0.75 0.75 0.85 7 10 FALSE TRUE 70/59.31 2318 402 MD1.26 1 0.75 0.75 1.5 0.68329 0.75 0.75 0.85 7 10 FALSE TRUE 70/59.31 2318 413 MD1.27 1 0.75 0.75 1 0.949 0.75 0.75 0.85 7 10 FALSE TRUE 70/59.31 2318 411 MD1.28 1 0.75 0.75 0.5 1.2201 0.75 0.75 0.85 7 10 FALSE TRUE 70/59.31 2318 425 MD1.29 1 0.75 0.75 0.01 2 0.75 0.75 0.85 7 10 FALSE TRUE 70/59.31 3199 528 MD1.30 1 0.75 0.75 0.01 1.5 0.75 0.75 0.85 7 10 FALSE TRUE 70/59.31 2339 435 MD1.31 1 0.75 0.75 0.904 1 0.75 0.75 0.85 7 10 FALSE TRUE 70/59.31 2318 411 MD1.32 1 0.75 0.75 1.835 0.5 0.75 0.75 0.85 7 10 FALSE TRUE 70/59.31 2318 405 MD1.33 1 1 1 2 1.267 1 1 0.85 7 10 FALSE TRUE 70/59.31 
2317 450 MD1.34 1 1 1 1.5 1.3775 1 1 0.85 7 10 FALSE TRUE 70/59.31 2318 438 MD1.35 1 1 1 1 1.556 1 1 0.85 7 10 FALSE TRUE 70/59.31 2318 438 Table 44: Project-C Results: Using Baseline Effort Fraction and a Modified Staffing Profile 336 Test ID Rel. Sched. Design Insp. Practice Code Insp. Prac. Design Error Density Code Error Density Unit Test Effect. Unit Test Prac. Int. Test Effect. Rev. Brd Delay Unit Test Delay Disable Test Effort Adj. Use Const. Task Fail. Rate Personnel Mod. Factor Total Errors Found in IT Tot. Err. Escaping Integ. Test MD1.36 1 1 1 0.5 1.753 1 1 0.85 7 10 FALSE TRUE 70/59.31 2318 441 MD1.37 1 1 1 0.03225 2 1 1 0.85 7 10 FALSE TRUE 70/59.31 2318 328 MD1.38 1 1 1 1.143 1.5 1 1 0.85 7 10 FALSE TRUE 70/59.31 2318 436 MD1.39 1 1 1 3.259 1 1 1 0.85 7 10 FALSE TRUE 70/59.31 2318 479 MD1.40 1 1 1 5.27 0.5 1 1 0.85 7 10 FALSE TRUE 70/59.31 2318 508 MD2.1 1 1 1 2.852 2.852 1 1 0.15 7 10 FALSE TRUE 70/59.31 2318 3454 MD2.2 1 0.75 0.75 2.35 2.35 0.75 0.75 0.15 7 10 FALSE TRUE 70/59.31 2318 4474 MD2.3 1 0.5 0.5 2.304 2.304 0.5 0.5 0.15 7 10 FALSE TRUE 70/59.31 2315 6245 MD2.4 1 0.25 0.25 1.37 1.37 0.25 0.25 0.15 7 10 FALSE TRUE 70/59.31 2318 5058 MD2.5 1 0 0 0.8217 0.8217 0 0 0.15 7 10 FALSE TRUE 70/59.31 2318 3711 MD3.1 0.9 1 1 1.3911 1.3911 1 1 0.85 7 10 FALSE TRUE 70/59.31 2318 426 MD3.2 0.9 0.75 0.75 0.9599 0.9599 0.75 0.75 0.85 7 10 FALSE TRUE 70/59.31 2318 396 MD3.3 0.9 0.5 0.5 0.7414 0.7414 0.5 0.5 0.85 7 10 FALSE TRUE 70/59.31 2318 347 MD3.4 0.9 0.25 0.25 0.51525 0.51525 0.25 0.25 0.85 7 10 FALSE TRUE 70/59.31 2318 408 MD3.5 0.9 0 0 0.37605 0.37605 0 0 0.85 7 10 FALSE TRUE 70/59.31 2318 408 MD4.1 1 1 1 1.398 1.398 1 1 0.85 7 10 TRUE TRUE 70/59.31 2318 415 MD4.2 1 0.75 0.75 0.972 0.972 0.75 0.75 0.85 7 10 TRUE TRUE 70/59.31 2318 416 MD4.3 1 0.5 0.5 0.7551 0.7551 0.5 0.5 0.85 7 10 TRUE TRUE 70/59.31 2318 394 MD4.4 1 0.25 0.25 0.517 0.517 0.25 0.25 0.85 7 10 TRUE TRUE 70/59.31 2318 409 MD4.5 1 0 0 0.378 0.378 0 0 0.85 7 10 TRUE TRUE 70/59.31 2318 412 MD5.1 1 1 1 4.398 4.398 1 1 0.15 7 10 TRUE TRUE 70/59.31 2318 6687 MD5.2 1 0.75 0.75 2.324 2.324 0.75 0.75 0.15 7 10 TRUE TRUE 70/59.31 2318 4396 MD5.3 1 0.5 0.5 1.5815 1.5815 0.5 0.5 0.15 7 10 TRUE TRUE 70/59.31 2318 3456 MD5.4 1 0.25 0.25 1.1799 1.1799 0.25 0.25 0.15 7 10 TRUE TRUE 70/59.31 2318 4004 MD5.5 1 0 0 1.091 1.091 0 0 0.15 7 10 TRUE TRUE 70/59.31 2318 5754 MD6.1 1 1 1 1.437 1.437 1 1 0.85 7 10 FALSE FALSE 70/59.31 2318 421 MD6.2 1 0.75 0.75 0.9875 0.9875 0.75 0.75 0.85 7 10 FALSE FALSE 70/59.31 2318 414 MD6.3 1 0.5 0.5 0.75901 0.75901 0.5 0.5 0.85 7 10 FALSE FALSE 70/59.31 2318 372 MD6.4 1 0.25 0.25 0.52221 0.52221 0.25 0.25 0.85 7 10 FALSE FALSE 70/59.31 2318 406 MD6.5 1 0 0 0.3812 0.3812 0 0 0.85 7 10 FALSE FALSE 70/59.31 2318 403 MD7.1 1 1 1 1.3795 1.3795 1 1 0.85 7 3 FALSE TRUE 70/59.31 2318 424 MD7.2 1 0.75 0.75 0.9745 0.9745 0.75 0.75 0.85 7 3 FALSE TRUE 70/59.31 2318 427 MD7.3 1 0.5 0.5 0.7531 0.7531 0.5 0.5 0.85 7 3 FALSE TRUE 70/59.31 2318 375 MD7.4 1 0.25 0.25 0.5173 0.5173 0.25 0.25 0.85 7 3 FALSE TRUE 70/59.31 2318 411 MD7.5 1 0 0 0.37751 0.37751 0 0 0.85 7 3 FALSE TRUE 70/59.31 2318 416 MD8.1 1 1 1 1.4045 1.4045 1 1 0.85 1 10 FALSE TRUE 70/59.31 2318 436 MD8.2 1 0.75 0.75 0.9665 0.9665 0.75 0.75 0.85 1 10 FALSE TRUE 70/59.31 2318 410 MD8.3 1 0.5 0.5 0.7506 0.7506 0.5 0.5 0.85 1 10 FALSE TRUE 70/59.31 2318 380 MD8.4 1 0.25 0.25 0.5163 0.5163 0.25 0.25 0.85 1 10 FALSE TRUE 70/59.31 2318 409 MD8.5 1 0 0 0.3776 0.3776 0 0 0.85 1 10 FALSE TRUE 70/59.31 2318 415 Table 44: Continued 337 Test ID Rel. Sched. 
Design Insp. Practice Code Insp. Prac. Design Error Density Code Error Density Unit Test Effect. Unit Test Prac. Int. Test Effect. Rev. Brd Delay Unit Test Delay Disable Test Effort Adj. Use Const. Task Fail. Rate Personnel Mod. Factor Total Errors Found in IT Tot. Err. Escaping Integ. Test MD10.1 1 1 1 0.03 2.303 1 1 0.85 7 10 FALSE TRUE 70/59.31 2318 328 MD10.2 1 1 1 0.04 1.988 1 1 0.85 7 10 FALSE TRUE 70/59.31 2318 365 MD10.3 1 1 1 0.05 1.993 1 1 0.85 7 10 FALSE TRUE 70/59.31 2318 384 MD10.4 1 1 1 0.06 1.99109 1 1 0.85 7 10 FALSE TRUE 70/59.31 2318 401 MD10.5 1 1 1 7 0.03 1 1 0.85 7 10 FALSE TRUE 70/59.31 2318 521 Table 45: Project-C Results: Low Defect Density Effects with Switched Effort Fraction and an Unmodified Staffing Profile 338 Test ID Rel. Sched. Design Insp. Practice Code Insp. Prac. Design Error Density Code Error Density Unit Test Effect. Unit Test Prac. Int. Test Effect. Rev. Brd Delay Unit Test Delay Disable Test Effort Adj. Use Const. Task Fail. Rate Personnel Mod. Factor Total Errors Found in IT Tot. Err. Escaping Integ. Test MD0.1 1 1 1 1.344 1.344 1 1 0.85 7 10 FALSE TRUE 70/59.31 2318 764 MD0.2 1 0.75 0.75 0.7732 0.7732 0.75 0.75 0.85 7 10 FALSE TRUE 70/59.31 2318 624 MD0.3 1 0.5 0.5 0.5322 0.5322 0.5 0.5 0.85 7 10 FALSE TRUE 70/59.31 2318 514 MD0.4 1 0.25 0.25 0.3676 0.3676 0.25 0.25 0.85 7 10 FALSE TRUE 70/59.31 2318 539 MD0.5 1 0 0 0.2795 0.2795 0 0 0.85 7 10 FALSE TRUE 70/59.31 2318 561 MD1.1 1 0 0 2 0.01 0 0 0.85 7 10 FALSE TRUE 70/59.31 11066 1181 MD1.2 1 0 0 1.5 0.01 0 0 0.85 7 10 FALSE TRUE 70/59.31 8250 944 MD1.3 1 0 0 1 0.01 0 0 0.85 7 10 FALSE TRUE 70/59.31 5322 821 MD1.4 1 0 0 0.5 0.01 0 0 0.85 7 10 FALSE TRUE 70/59.31 2520 569 MD1.5 1 0 0 0.01 2 0 0 0.85 7 10 FALSE TRUE 70/59.31 7510 1468 MD1.6 1 0 0 0.01 1.5 0 0 0.85 7 10 FALSE TRUE 70/59.31 5342 1285 MD1.7 1 0 0 0.01 1 0 0 0.85 7 10 FALSE TRUE 70/59.31 3361 993 MD1.8 1 0 0 0.14595 0.5 0 0 0.85 7 10 FALSE TRUE 70/59.31 2318 684 MD1.9 1 0.25 0.25 2 0.01 0.25 0.25 0.85 7 10 FALSE TRUE 70/59.31 8069 809 MD1.10 1 0.25 0.25 1.5 0.01 0.25 0.25 0.85 7 10 FALSE TRUE 70/59.31 5937 738 MD1.11 1 0.25 0.25 1 0.01 0.25 0.25 0.85 7 10 FALSE TRUE 70/59.31 3816 650 MD1.12 1 0.25 0.25 0.5 0.1799 0.25 0.25 0.85 7 10 FALSE TRUE 70/59.31 2318 493 MD1.13 1 0.25 0.25 0.01 2 0.25 0.25 0.85 7 10 FALSE TRUE 70/59.31 5658 1361 MD1.14 1 0.25 0.25 0.01 1.5 0.25 0.25 0.85 7 10 FALSE TRUE 70/59.31 4045 1143 MD1.15 1 0.25 0.25 0.01 1 0.25 0.25 0.85 7 10 FALSE TRUE 70/59.31 2561 853 MD1.16 1 0.25 0.25 0.277119 0.5 0.25 0.25 0.85 7 10 FALSE TRUE 70/59.31 2318 590 MD1.17 1 0.5 0.5 2 0.01 0.5 0.5 0.85 7 10 FALSE TRUE 70/59.31 5219 753 MD1.18 1 0.5 0.5 1.5 0.01 0.5 0.5 0.85 7 10 FALSE TRUE 70/59.31 3822 666 MD1.19 1 0.5 0.5 1 0.01 0.5 0.5 0.85 7 10 FALSE TRUE 70/59.31 2462 540 MD1.20 1 0.5 0.5 0.5 0.5805 0.5 0.5 0.85 7 10 FALSE TRUE 70/59.31 2318 525 MD1.21 1 0.5 0.5 0.01 2 0.5 0.5 0.85 7 10 FALSE TRUE 70/59.31 3448 1012 MD1.22 1 0.5 0.5 0.01 1.5 0.5 0.5 0.85 7 10 FALSE TRUE 70/59.31 2471 832 MD1.23 1 0.5 0.5 0.24769 1 0.5 0.5 0.85 7 10 FALSE TRUE 70/59.31 2318 656 MD1.24 1 0.5 0.5 0.554 0.5 0.5 0.5 0.85 7 10 FALSE TRUE 70/59.31 2318 508 MD1.25 1 0.75 0.75 2 0.01 0.75 0.75 0.85 7 10 FALSE TRUE 70/59.31 2745 590 MD1.26 1 0.75 0.75 1.5 0.178 0.75 0.75 0.85 7 10 FALSE TRUE 70/59.31 2318 537 MD1.27 1 0.75 0.75 1 0.581 0.75 0.75 0.85 7 10 FALSE TRUE 70/59.31 2318 585 MD1.28 1 0.75 0.75 0.5 0.9987 0.75 0.75 0.85 7 10 FALSE TRUE 70/59.31 2318 663 MD1.29 1 0.75 0.75 0.01 2 0.75 0.75 0.85 7 10 FALSE TRUE 70/59.31 3393 944 MD1.30 1 0.75 0.75 0.01 
1.5 0.75 0.75 0.85 7 10 FALSE TRUE 70/59.31 2463 755 MD1.31 1 0.75 0.75 0.50468 1 0.75 0.75 0.85 7 10 FALSE TRUE 70/59.31 2318 674 MD1.32 1 0.75 0.75 1.098 0.5 0.75 0.75 0.85 7 10 FALSE TRUE 70/59.31 2318 570 MD1.33 1 1 1 2 1.1303 1 1 0.85 7 10 FALSE TRUE 70/59.31 2318 746 MD1.34 1 1 1 1.5 1.286 1 1 0.85 7 10 FALSE TRUE 70/59.31 2318 759 Table 46: Project-C Results: Using Switched Effort Fraction and a Modified Staffing Profile 339 Test ID Rel. Sched. Design Insp. Practice Code Insp. Prac. Design Error Density Code Error Density Unit Test Effect. Unit Test Prac. Int. Test Effect. Rev. Brd Delay Unit Test Delay Disable Test Effort Adj. Use Const. Task Fail. Rate Personnel Mod. Factor Total Errors Found in IT Tot. Err. Escaping Integ. Test MD1.35 1 1 1 1 1.4775 1 1 0.85 7 10 FALSE TRUE 70/59.31 2318 774 MD1.36 1 1 1 0.5 1.674 1 1 0.85 7 10 FALSE TRUE 70/59.31 2318 787 MD1.37 1 1 1 0.016929 2 1 1 0.85 7 10 FALSE TRUE 70/59.31 2318 328 MD1.38 1 1 1 0.942 1.5 1 1 0.85 7 10 FALSE TRUE 70/59.31 2318 775 MD1.39 1 1 1 2.39 1 1 1 0.85 7 10 FALSE TRUE 70/59.31 2318 711 MD1.40 1 1 1 3.862 0.5 1 1 0.85 7 10 FALSE TRUE 70/59.31 2318 680 MD2.1 1 1 1 2.121 2.121 1 1 0.15 7 10 FALSE TRUE 70/59.31 2318 2697 MD2.2 1 0.75 0.75 1.3855 1.3855 0.75 0.75 0.15 7 10 FALSE TRUE 70/59.31 2318 3031 MD2.3 1 0.5 0.5 1.1227 1.1227 0.5 0.5 0.15 7 10 FALSE TRUE 70/59.31 2318 3723 MD2.4 1 0.25 0.25 0.7352 0.7352 0.25 0.25 0.15 7 10 FALSE TRUE 70/59.31 2318 3445 MD2.5 1 0 0 0.5239 0.5239 0 0 0.15 7 10 FALSE TRUE 70/59.31 2318 3154 MD3.1 0.9 1 1 1.3285 1.3285 1 1 0.85 7 10 FALSE TRUE 70/59.31 2318 736 MD3.2 0.9 0.75 0.75 0.769634 0.769634 0.75 0.75 0.85 7 10 FALSE TRUE 70/59.31 2318 609 MD3.3 0.9 0.5 0.5 0.53 0.53 0.5 0.5 0.85 7 10 FALSE TRUE 70/59.31 2318 504 MD3.4 0.9 0.25 0.25 0.3657 0.3657 0.25 0.25 0.85 7 10 FALSE TRUE 70/59.31 2318 526 MD3.5 0.9 0 0 0.278 0.278 0 0 0.85 7 10 FALSE TRUE 70/59.31 2318 547 MD4.1 1 1 1 1.1871 1.1871 1 1 0.85 7 10 TRUE TRUE 70/59.31 2318 428 MD4.2 1 0.75 0.75 0.7462 0.7462 0.75 0.75 0.85 7 10 TRUE TRUE 70/59.31 2318 502 MD4.3 1 0.5 0.5 0.5257 0.5257 0.5 0.5 0.85 7 10 TRUE TRUE 70/59.31 2318 472 MD4.4 1 0.25 0.25 0.3594 0.3594 0.25 0.25 0.85 7 10 TRUE TRUE 70/59.31 2318 467 MD4.5 1 0 0 0.2676 0.2676 0 0 0.85 7 10 TRUE TRUE 70/59.31 2318 439 MD5.1 1 1 1 3.928 3.928 1 1 0.15 7 10 TRUE TRUE 70/59.31 2318 7147 MD5.2 1 0.75 0.75 1.91 1.91 0.75 0.75 0.15 7 10 TRUE TRUE 70/59.31 2318 5080 MD5.3 1 0.5 0.5 1.274 1.274 0.5 0.5 0.15 7 10 TRUE TRUE 70/59.31 2318 4548 MD5.4 1 0.25 0.25 0.9234 0.9234 0.25 0.25 0.15 7 10 TRUE TRUE 70/59.31 2318 4936 MD5.5 1 0 0 0.8170997 0.8170997 0 0 0.15 7 10 TRUE TRUE 70/59.31 2318 6261 MD6.1 1 1 1 1.3799 1.3799 1 1 0.85 7 10 FALSE FALSE 70/59.31 2318 768 MD6.2 1 0.75 0.75 0.7879 0.7879 0.75 0.75 0.85 7 10 FALSE FALSE 70/59.31 2318 616 MD6.3 1 0.5 0.5 0.5379 0.5379 0.5 0.5 0.85 7 10 FALSE FALSE 70/59.31 2318 509 MD6.4 1 0.25 0.25 0.37113 0.37113 0.25 0.25 0.85 7 10 FALSE FALSE 70/59.31 2318 534 MD6.5 1 0 0 0.2824 0.2824 0 0 0.85 7 10 FALSE FALSE 70/59.31 2318 557 MD7.1 1 1 1 1.3375 1.3375 1 1 0.85 7 3 FALSE TRUE 70/59.31 2318 752 MD7.2 1 0.75 0.75 0.77518 0.77518 0.75 0.75 0.85 7 3 FALSE TRUE 70/59.31 2318 623 MD7.3 1 0.5 0.5 0.534 0.534 0.5 0.5 0.85 7 3 FALSE TRUE 70/59.31 2318 514 MD7.4 1 0.25 0.25 0.3677 0.3677 0.25 0.25 0.85 7 3 FALSE TRUE 70/59.31 2318 538 MD7.5 1 0 0 0.2795 0.2795 0 0 0.85 7 3 FALSE TRUE 70/59.31 2318 561 Table 46: Continued 340 Test ID Rel. Sched. Design Insp. Practice Code Insp. Prac. Design Error Density Code Error Density Unit Test Effect. 
Unit Test Prac. Int. Test Effect. Rev. Brd Delay Unit Test Delay Disable Test Effort Adj. Use Const. Task Fail. Rate Personnel Mod. Factor Total Errors Found in IT Tot. Err. Escaping Integ. Test MD10.1 1 1 1 0.03 1.805 1 1 0.85 7 10 FALSE TRUE 70/59.31 2318 492 MD10.2 1 1 1 0.04 1.83 1 1 0.85 7 10 FALSE TRUE 70/59.31 2318 579 MD10.3 1 1 1 0.05 1.8542 1 1 0.85 7 10 FALSE TRUE 70/59.31 2318 652 MD10.4 1 1 1 0.06 1.8523 1 1 0.85 7 10 FALSE TRUE 70/59.31 2318 686 MD10.5 1 1 1 5.234 0.03 1 1 0.85 7 10 FALSE TRUE 70/59.31 2318 631 Table 47: Project-C Results: Low Defect Density Effects with Switched Effort Fraction and a Modified Staffing Profile 341 Table 48: Project-A Raw Staffing and Modification Curves # Raw Design Staffing Curve Design Staff Modification Curve Raw Coding Staff Curve Coding Staff Modification Curve Raw Test Staff Curve Test Staff Modification Curve 1 0 1 0 1 0 100 2 0.001568137 1 0.003136275 1 0.005401362 0.5 3 0.003136275 1 0.00627255 1 0.010802725 0.5 4 0.004704412 1 0.009408825 1 0.016204087 0.3 5 0.00627255 1 0.0125451 1 0.021605449 0.3 6 0.007840687 1 0.015681375 1 0.027006812 0.3 7 0.009258593 0.5 0.018472272 1 0.032903783 0.3 8 0.010392729 0.5 0.02061079 1 0.039736904 0.3 9 0.011226755 0.5 0.023777407 1 0.045842035 0.3 10 0.01189812 0.5 0.026999579 1 0.052054274 0.3 11 0.013734709 0.5 0.030002514 1 0.0577727 0.3 12 0.017168544 0.2 0.033032035 1 0.062964095 0.1 13 0.020606496 0.2 0.036103419 1 0.067507489 0.1 14 0.02435731 0.2 0.039230735 1 0.071374755 0.1 15 0.028420986 0.2 0.042431525 1 0.074531351 0.1 16 0.032797525 0.2 0.045727697 1 0.076944979 0.1 17 0.037486926 0.2 0.049145976 1 0.07859132 0.1 18 0.04248919 0.2 0.052720137 1 0.079448994 0.1 19 0.047804317 0.2 0.056493134 1 0.079499838 0.1 20 0.053432305 0.2 0.060519174 1 0.078731463 0.1 21 0.059373156 0.2 0.06486941 1 0.077131999 0.1 22 0.06562687 0.2 0.069639031 1 0.074690604 0.1 23 0.072193446 0.2 0.074957753 1 0.071398261 0.1 24 0.079072885 0.2 0.081005126 1 0.067248406 0.1 25 0.086265186 0.2 0.088041879 1 0.062233779 0.1 26 0.093770349 0.2 0.096457562 1 0.05634856 0.1 27 0.101588375 0.2 0.106862986 1 0.049587486 0.1 28 0.110286121 0.2 0.120392643 1 0.041320747 0.4 29 0.107263537 0.2 0.138357763 1 0.039101644 0.4 30 0.115755662 0.2 0.139165407 1 0.038488411 0.4 31 0.115661856 0.2 0.126103412 1 0.059710178 0.4 32 0.105764281 0.2 0.11498791 1 0.092526826 0.4 33 0.094368562 0.2 0.102341137 1 0.126801834 0.4 34 0.104563187 0.2 0.10641176 1 0.124684235 0.4 35 0.127890529 0.1 0.125747072 1 0.130732869 0.4 36 0.166221994 0.1 0.16133611 1 0.140596399 0.5 37 0.207858711 0.1 0.204404846 1 0.139815612 0.5 38 0.254427027 0.1 0.260071303 1 0.12721132 0.5 39 0.277911947 0.1 0.31741566 1 0.101329293 0.5 40 0.262695404 0.1 0.339996169 1 0.092872536 0.5 342 Table 48: Continued # Raw Design Staffing Curve Design Staff Modification Curve Raw Coding Staff Curve Coding Staff Modification Curve Raw Test Staff Curve Test Staff Modification Curve 41 0.274306722 0.1 0.320112509 1 0.095124806 0.5 42 0.254698959 0.1 0.274804649 1 0.164980019 0.5 43 0.225815788 0.1 0.245351001 1 0.222601684 0.5 44 0.185634262 0.1 0.202929977 1 0.305269644 0.4 45 0.159248207 0.1 0.177181916 1 0.357506031 0.4 46 0.140592863 0.1 0.159705458 1 0.393701905 0.4 47 0.126664422 0.1 0.147041321 1 0.420326738 0.4 48 0.115894127 0.1 0.13746554 1 0.440684423 0.4 49 0.107337395 0.1 0.129989005 1 0.456716746 0.4 50 0.10039272 0.2 0.124004706 1 0.469637401 0.4 51 0.09465804 0.2 0.119118751 1 0.480245565 0.4 52 0.089855183 0.2 0.115065051 1 0.489087491 0.4 53 
0.085785183 0.3 0.11165705 1 0.496549935 0.4 54 0.082301561 0.3 0.108759744 1 0.502915133 0.4 55 0.079294679 0.5 0.106273514 1 0.508392806 0.4 56 0.076680588 0.5 0.104123036 1 0.513142499 0.4 57 0.074393957 0.5 0.102250329 1 0.51728768 0.4 58 0.071058658 0.5 0.098768996 1 0.511393102 0.4 59 0.068022916 0.5 0.095536928 1 0.50495099 0.4 60 0.065242243 0.5 0.092516003 1 0.498043911 0.5 61 0.062680223 0.5 0.089675235 1 0.490739201 0.5 62 0.06030678 0.5 0.08698919 1 0.483092304 0.5 63 0.058096838 0.5 0.084436784 1 0.475149302 0.5 64 0.056029315 0.5 0.082000382 1 0.466948838 0.5 65 0.054086371 0.5 0.079665133 1 0.458523532 0.5 66 0.052252803 0.5 0.077418434 1 0.449901117 0.5 67 0.050515577 0.5 0.075249529 1 0.441105319 0.5 68 0.048863458 0.5 0.073149175 1 0.432156556 0.5 69 0.047286703 0.5 0.071109388 1 0.423072494 0.5 70 0.045776827 0.5 0.069123238 1 0.413868495 0.5 71 0.044326405 0.5 0.067184677 1 0.404557985 0.5 72 0.04292891 0.5 0.065288405 1 0.395152744 0.5 73 0.041578582 0.5 0.063429755 1 0.385663154 0.5 74 0.04027032 0.5 0.061604606 1 0.376098401 0.5 75 0.038999587 0.5 0.059809302 1 0.36646664 0.5 76 0.037762338 0.5 0.058040589 1 0.356775137 0.5 77 0.036554952 0.5 0.056295562 1 0.347030389 0.5 78 0.03537418 0.5 0.054571616 1 0.337238219 0.5 79 0.034217095 0.5 0.052866413 1 0.327403865 0.5 343 Table 48: Continued # Raw Design Staffing Curve Design Staff Modification Curve Raw Coding Staff Curve Coding Staff Modification Curve Raw Test Staff Curve Test Staff Modification Curve 80 0.033081057 0.5 0.051177845 1 0.317532052 0.5 81 0.031963676 0.5 0.049504009 1 0.307627051 0.5 82 0.030862788 0.5 0.047843183 1 0.297692725 0.5 83 0.02977643 0.3 0.046193807 1 0.287732579 0.5 84 0.028901505 0.3 0.044862893 1 0.279672612 0.5 85 0.040246982 0.3 0.062501381 1 0.389867024 0.5 86 0.051155763 0.3 0.079465466 1 0.495887211 0.5 87 0.062067576 0.3 0.096431939 1 0.601902011 0.5 88 0.078268072 0.3 0.113669402 1 0.702357315 0.5 89 0.111263611 0.3 0.128270807 1 0.760465579 0.5 90 0.136010375 0.3 0.129486799 1 0.734502826 0.5 91 0.16004268 0.1 0.130667551 1 0.709289769 0.5 92 0.184074986 0.1 0.131848302 1 0.684076712 0.5 93 0.208107291 0.1 0.133029054 1 0.658863655 0.5 94 0.232139596 0.1 0.134209806 1 0.633650598 0.5 95 0.256363916 0.1 0.135399991 1 0.608236092 0.5 96 0.283041592 0.1 0.136710715 1 0.580247693 0.5 97 0.294117647 0.1 0.137254902 1 0.568627451 0.5 98 0.294117647 0.1 0.137254902 1 0.568627451 0.5 99 0.294117647 0.1 0.137254902 1 0.568627451 0.5 100 0.275576977 0.1 0.128602589 1 0.532782156 0.5 101 0.254855052 0.1 0.118932358 1 0.492719767 0.5 102 0.234133127 0.1 0.109262126 1 0.452657379 0.5 103 0.213411202 0.1 0.099591894 1 0.41259499 0.5 104 0.192689277 0.1 0.089921662 1 0.372532602 0.5 105 0.171967352 0.1 0.080251431 1 0.332470213 0.5 106 0.151245426 0.1 0.070581199 1 0.292407824 0.5 107 0.130523501 0.5 0.060910967 1 0.252345436 0.5 108 0.119617225 0.5 0.055821372 1 0.231259968 1 109 0.119617225 0.5 0.055821372 1 0.231259968 1 110 0.119617225 0.5 0.055821372 1 0.231259968 1 111 0.119617225 0.5 0.055821372 1 0.231259968 1 112 0.119617225 0.5 0.055821372 1 0.231259968 1 113 0.119617225 0.5 0.055821372 1 0.231259968 1 114 0.119617225 0.5 0.055821372 1 0.231259968 1 115 0.119617225 0.5 0.055821372 1 0.231259968 1 116 0.119617225 0.5 0.055821372 1 0.231259968 1 117 0.119617225 0.5 0.055821372 1 0.231259968 1 118 0.119617225 0.5 0.055821372 1 0.231259968 1 119 0.119617225 0.5 0.055821372 1 0.231259968 1 120 0.119617225 0.5 0.055821372 1 0.231259968 1 121 0.01313444 0.5 0.022985271 1 0.141195234 1 344 
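Table 48 above (and Table 49 below for Project-C) pairs an interpolated raw staffing fraction for each activity with a staff modification curve. As a minimal illustrative sketch only (the assumption that a "modified staffing profile" is the pointwise product of the raw curve and its modification curve is made here purely for illustration, and is not a statement of how the modeling tool combines these curves; the sample points are hypothetical and are not taken from either table), such a profile could be derived as follows:

# Illustrative sketch: combine a raw staffing curve with a staff modification
# curve to obtain a modified staffing profile. The pointwise product is an
# assumption for illustration; the sample points below are hypothetical and
# are not taken from Table 48 or Table 49.

raw_design_staff = [0.00, 0.05, 0.20, 0.10]   # fraction of total staff over time
design_staff_mod = [1.0, 0.5, 0.2, 0.5]       # modification factor over time

modified_design_staff = [raw * mod
                         for raw, mod in zip(raw_design_staff, design_staff_mod)]

print(modified_design_staff)   # approximately [0.0, 0.025, 0.04, 0.05]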
Table 49: Project-C Raw Staffing and Modification Curves # Raw Design Staffing Curve Design Staff Modification Curve Raw Coding Staff Curve Coding Staff Modification Curve Raw Test Staff Curve Test Staff Modification Curve 1 0 0 0 1 0 0 2 0.063260341 0 0.069593041 1 0.092457421 0 3 0.063260341 0 0.120481233 1 0.092457421 0 4 0.063260341 0 0.159312292 1 0.092457421 0 5 0.063260341 0 0.189917457 1 0.092457421 0 6 0.063260341 0 0.214660224 1 0.092457421 0 7 0.063260341 0 0.235077778 1 0.092457421 0 8 0.063260341 0 0.252213077 1 0.092457421 0 9 0.063260341 0 0.266798703 1 0.092457421 0 10 0.063260341 0 0.279364324 1 0.092457421 0 11 0.062449311 5 0.295446646 0 0.092395921 0 12 0.061367937 5 0.308176894 0 0.092065648 0.1 13 0.060286564 5 0.316208701 0 0.091291753 0.1 14 0.059205191 5 0.320306406 0 0.08973127 0.1 15 0.060218978 5 0.321167883 0 0.085139788 0.1 16 0.067518248 5 0.321167883 0 0.076410278 0.2 17 0.072452014 5 0.320627197 0 0.074885104 0.2 18 0.070289267 5 0.31846445 0 0.082454717 0.2 19 0.068126521 5 0.316301703 0 0.090024331 0.2 20 0.065963774 5 0.314138956 0 0.097593944 0.3 21 0.063801027 5 0.31197621 0 0.105163558 0.3 22 0.063536775 5 0.3115807 0 0.105338486 0.3 23 0.064478526 5 0.31223553 0 0.103048519 0.3 24 0.065786961 5 0.313435342 0 0.100758552 0.3 25 0.067310282 5 0.315200489 0 0.098468585 0.3 26 0.068361233 5 0.316898776 0 0.097443351 0.3 27 0.068714387 5 0.31806108 0.25 0.098409425 0.3 28 0.06895683 5 0.31919013 0.25 0.100362545 0.3 29 0.069126453 5 0.320299589 0.25 0.103325232 0.3 30 0.06925042 5 0.321406831 0.25 0.107320697 0.3 31 0.069349649 5 0.322533976 0.25 0.112372868 0.3 32 0.069441552 5 0.323709267 0.25 0.118506419 0.35 33 0.06954189 5 0.324968935 0.25 0.125746796 0.4 34 0.069666216 5 0.326359753 0.25 0.134120252 0.45 35 0.069831208 3 0.327942606 0.5 0.143653876 0.5 36 0.070056128 3 0.329797567 0.5 0.154375628 0.5 37 0.070364695 3 0.332031269 0.5 0.166314371 0.5 38 0.071211139 3 0.34322965 0.5 0.176273216 0.5 39 0.074959974 3 0.362949487 0.5 0.183707348 0.5 40 0.081579401 3 0.382548737 0.5 0.191529276 0.5 41 0.090674893 3 0.401988559 0.5 0.199595274 0.5 42 0.101919401 3 0.421211421 0.5 0.207771001 0.5 43 0.11503818 1 0.440128331 0.75 0.215930286 0.55 44 0.129795908 1 0.458593911 0.75 0.223954068 0.6 345 Table 49: Continued # Raw Design Staffing Curve Design Staff Modification Curve Raw Coding Staff Curve Coding Staff Modification Curve Raw Test Staff Curve Test Staff Modification Curve 45 0.14598297 1 0.476353352 0.75 0.231729456 0.6 46 0.163391362 1 0.492915561 0.75 0.239148914 0.6 47 0.181727191 1 0.507196644 0.75 0.24610952 0.65 48 0.199740612 1 0.51623883 0.75 0.252512321 0.65 49 0.212724734 1 0.516329401 0.75 0.258183761 0.65 50 0.223442558 1 0.514124832 0.75 0.263051225 0.7 51 0.231700032 0.5 0.509746314 0.8 0.267452239 0.7 52 0.237721152 0.5 0.50269789 0.85 0.271798654 0.75 53 0.241703528 0.5 0.492320324 0.9 0.276592435 0.75 54 0.243822149 0.5 0.477717881 0.95 0.282468555 0.75 55 0.240539023 0.5 0.459728579 1 0.290256565 0.75 56 0.225566161 0.5 0.4423158 1 0.30056291 0.75 57 0.2105933 0.5 0.421303326 1 0.313100582 0.75 58 0.195620438 0.5 0.400000000 1 0.327007299 0.75 59 0.202087748 0.5 0.361755323 1 0.364352036 0.75 60 0.208555058 0.5 0.328481386 1 0.39919965 0.75 61 0.215022369 0.5 0.300271535 1 0.43106336 0.75 62 0.221489679 0.5 0.277221469 1 0.459321072 0.75 63 0.227956989 0.5 0.259429315 1 0.483164848 0.75 64 0.2344243 0.5 0.246995703 1 0.501525906 0.75 65 0.24089161 0.5 0.240023846 1 0.512960109 0.75 66 0.240875912 0.5 0.241686942 1 0.515815085 0.75 
67 0.221411192 0.5 0.254663423 1 0.515815085 0.75 68 0.201946472 0.5 0.267639903 1 0.515815085 1 69 0.223844282 0.5 0.187347932 1 0.420924574 1 70 0.165450122 0.5 0.231143552 1 0.399026764 1 71 0.130586305 0.5 0.249378359 1 0.369431967 1 72 0.11644907 0.5 0.255935803 1 0.35242089 1 73 0.11207128 0.5 0.257844497 1 0.345759281 1 74 0.111922141 0.5 0.257907543 1 0.345498783 1 75 0.113793749 0.5 0.257896335 1 0.348493356 1 76 0.117536964 0.5 0.257795246 1 0.3544825 0.75 77 0.12128018 0.5 0.257555737 1 0.360471645 0.75 78 0.125023395 0.5 0.25711695 1 0.36646079 0.75 79 0.128766611 0.5 0.256376227 1 0.372449934 0.75 80 0.132509826 0.5 0.25514578 1 0.378439079 0.75 81 0.136253041 0.5 0.253041363 1 0.384428224 0.75 82 0.136253041 0.5 0.237670784 1 0.379936365 0.75 83 0.136253041 0.5 0.222829992 1 0.375444507 0.75 84 0.136253041 0.5 0.210448918 1 0.370952648 0.5 85 0.136253041 0.5 0.209245742 1 0.396798566 0.5 86 0.135929997 0.5 0.212614636 1 0.428357914 0.5 87 0.133957589 0.5 0.217106494 1 0.438063841 0.5 88 0.128095657 0.5 0.221598353 1 0.448640244 0.5 346 Table 49: Continued # Raw Design Staffing Curve Design Staff Modification Curve Raw Coding Staff Curve Coding Staff Modification Curve Raw Test Staff Curve Test Staff Modification Curve 89 0.094939619 0.5 0.196593674 1 0.478031353 0.5 90 0.091849148 0.5 0.158614895 1 0.444038929 0.5 91 0.1628007 0.5 0.189051606 0.5 0.301703163 0.5 92 0.157713228 0.5 0.198725892 0.5 0.301703163 0.25 93 0.146158402 0.5 0.20318956 0.5 0.301703163 0.25 94 0.128497111 0.5 0.205405255 0.5 0.301703163 0.25 95 0.105063882 0.5 0.20644748 0.5 0.301703163 0.25 96 0.076169246 0.5 0.206800053 0.5 0.301703163 0.25 97 0.056611672 0.5 0.206812652 0.5 0.301703163 0.25 98 0.04172369 0.5 0.206812652 0.5 0.301703163 0.25 99 0.02734243 0.5 0.206812652 0.5 0.301703163 0.25 100 0.013442454 0.5 0.206812652 0.5 0.301703163 0.25 101 0 0.5 0 0.5 0 0.25 347 A P P E N D I X : F – S I M U L A T E D D Y N A M I C D E F E C T D A T A F.1 Introduction This appendix provides the dynamic plots of the simulated defect data for projects A and C co- plotted with the actual project’s total number of database defects for reference. 
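To give the reader an intuition for how such defect curves arise, the fragment below is a minimal sketch of an inspection/unit-test/integration-test defect flow. It is not the Modified Madachy Model implementation used in this research (which was built in the system dynamics modeling tool used for the simulations), and every parameter value shown is a hypothetical illustration rather than a calibrated value; it only shows the general shape of the feedback between error injection, early removal, and integration test discovery.

# Minimal sketch (not the MMM implementation) of a defect flow with
# inspections, unit test, and integration test; all values are hypothetical.

dt = 1.0                           # time step (e.g., weeks)
dev_weeks = 60                     # hypothetical development duration
sim_weeks = 120                    # hypothetical total simulation length
error_gen_rate = 20.0              # errors injected per week during development
inspection_practice = 0.5          # fraction of artifacts inspected
inspection_effectiveness = 0.6     # fraction of inspected errors caught
unit_test_practice = 0.5           # fraction of units actually unit tested
unit_test_effectiveness = 0.5      # fraction of remaining errors caught by unit test
integration_test_effectiveness = 0.85
integration_find_rate = 0.1        # fraction of the undetected pool exposed per week in IT

undetected = 0.0                   # stock: errors not yet found
found_in_integration = 0.0         # stock: errors found in integration test (IT)

for week in range(sim_weeks):
    injected = error_gen_rate * dt if week < dev_weeks else 0.0
    caught_by_inspection = injected * inspection_practice * inspection_effectiveness
    caught_by_unit_test = (injected - caught_by_inspection) * unit_test_practice * unit_test_effectiveness
    undetected += injected - caught_by_inspection - caught_by_unit_test
    found = undetected * integration_test_effectiveness * integration_find_rate * dt
    undetected -= found
    found_in_integration += found

print("found in IT:", round(found_in_integration), " escaping IT:", round(undetected))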
F.2 Plots of Simulated Dynamic Defects from Integration Testing

Figure 73: Project-A Reference Cases Using Interpolated (Raw) Staff Curves
Figure 74: Project-A Reference Cases Using Interpolated (Modified) Staff Curves
Figure 75: Project-A Dynamics of Varying Defect Densities with Baseline Effort
Figure 76: Project-A Dynamics of Varying Defect Densities with Switched Effort
Figure 77: Project-A Unmodified Staff Dynamics with Moderate Quality Practices
Figure 78: Project-A Modified Staff Dynamics with Moderate Quality Practices
Figure 79: Project-C Reference Cases Using Interpolated (Raw) Staff Curves
Figure 80: Project-C Reference Cases Using Interpolated (Modified) Staff Curves
Figure 81: Project-C Dynamics of Varying Defect Densities with Baseline Effort
Figure 82: Project-C Dynamics of Varying Defect Densities with Switched Effort
Figure 83: Project-C Unmodified Staff Dynamics with High Quality Practices
Figure 84: Project-C Modified Staff Dynamics with High Quality Practices
Figure 85: Project-C Dynamics for Ultra-low Design Defect Densities
Figure 86: Project-A 'Starved' Requirements Task in the RW8.5 Baseline (Reservoir without 1st-order Control Going Below Zero)
Figure 87: Project-A Noisy Behavior in Test Case MD5.5 Desired 'Errors Found in IT' Value

A P P E N D I X : G – L A T I N H Y P E R C U B E S A M P L I N G

G.1 Introduction

This appendix provides the distribution settings used for Latin Hypercube sampling (100 iterations and a seed value of 3137) for projects that are simulated as schedule-driven, quality-driven, or schedule-and-cost-driven using Madachy's 533.3-task (32 KSLOC) embedded-mode project.

G.2 Latin Hypercube Sampling Versus the Use of the Monte Carlo Sampling Method

To perform sensitivity analysis of the model used by the research, the author tried both Monte Carlo and Latin Hypercube sampling, since both methods are provided by the modeling tool's risk analysis functionality. Chapter 4 discusses only the results from the Latin Hypercube sampling method, with numerical results provided in Table 51 of this appendix. Use of the Monte Carlo method on the exact same distributions provided in this appendix identified an over-abundance of "starved" reservoirs (an example is provided in Figure 86 of the prior appendix) that, due to the model's implementation, resulted in a large number of undefined numerical results.
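A minimal sketch of how a Latin Hypercube sample like the one used here could be drawn is shown below. It uses SciPy's quasi-Monte Carlo sampler rather than the modeling tool's built-in risk analysis functionality that was actually used for the research, and only three of the Table 50 inputs (the "Hi Quality" column settings) are sampled, purely for illustration.

# Illustrative Latin Hypercube sample for three of the Table 50 inputs
# (Hi Quality column); the dissertation used the modeling tool's own sampler.
import numpy as np
from scipy.stats import qmc, norm, triang, truncnorm

n = 100                                    # 100 iterations, as in this appendix
sampler = qmc.LatinHypercube(d=3, seed=3137)
u = sampler.random(n)                      # stratified uniform [0, 1) samples, one column per input

# Integration test effectiveness ~ Normal(mean 0.85, std 0.09)
int_test_eff = norm(loc=0.85, scale=0.09).ppf(u[:, 0])

# Unit test effectiveness ~ Triangular(min 0.6, peak 0.9, max 1.0)
c = (0.9 - 0.6) / (1.0 - 0.6)
unit_test_eff = triang(c, loc=0.6, scale=0.4).ppf(u[:, 1])

# Design error density ~ truncated Normal(mean 0.75, std 0.2) on [0.01, 2.4]
a, b = (0.01 - 0.75) / 0.2, (2.4 - 0.75) / 0.2
design_err_density = truncnorm(a, b, loc=0.75, scale=0.2).ppf(u[:, 2])

print(int_test_eff.mean(), unit_test_eff.mean(), design_err_density.mean())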
G.3 Sampling Distribution and Results

Table 50: Latin Hypercube Sampling Distributions
Values are listed in the order: Hi Quality | Hi Quality - Low Cost | Short Schedule-Constant SCED | Short Schedule-Variable SCED | Short Schedule-Low Cost Const. SCED | Short Schedule-Low Cost Vari. SCED

Average Design Error Amplification: Fixed | Fixed | Fixed | Fixed | Trunc. Normal | Trunc. Normal
  Expected Value: 1 | 1 | 1 | 1 | 1 | 1
  Standard Deviation: 1 | 1 | 1 | 1 | 2 | 2
  Lower Limit: 1 | 1 | 1 | 1 | 1 | 1
  Upper Limit: 1 | 1 | 1 | 1 | 10 | 10
Integration Test Effectiveness: Normal | Normal | Normal | Normal | Normal | Normal
  Expected Value: 0.85 | 0.85 | 0.85 | 0.85 | 0.85 | 0.85
  Standard Deviation: 0.09 | 0.09 | 0.09 | 0.09 | 0.09 | 0.09
Personnel Modification Factor: Fixed | Triangular | Fixed | Fixed | Triangular | Triangular
  Minimum: 1 | 0.5 | 1 | 1 | 0.5 | 0.5
  Maximum: 1 | 1 | 1 | 1 | 1 | 1
  Peak: 1 | 0.75 | 1 | 1 | 0.75 | 0.75
SCED Schedule Constraint: Fixed | Fixed | Fixed | Triangular | Fixed | Triangular
  Minimum: 1 | 1 | 1 | 0.7 | 1 | 0.7
  Maximum: 1 | 1 | 1 | 1 | 1 | 1
  Peak: 1 | 1 | 1 | 1 | 1 | 1
Unit Test Practice: Fixed | Triangular | Triangular | Triangular | Triangular | Triangular
  Minimum: 1 | 0.75 | 0.25 | 0.25 | 0.25 | 0.25
  Maximum: 1 | 1 | 1 | 1 | 1 | 1
  Peak: 1 | 1 | 0.5 | 0.5 | 0.25 | 0.25
Unit Test Effect.: Triangular | Triangular | Triangular | Triangular | Triangular | Triangular
  Minimum: 0.6 | 0.75 | 0 | 0 | 0 | 0
  Maximum: 1 | 1 | 1 | 1 | 1 | 1
  Peak: 0.9 | 1 | 0.5 | 0.5 | 0.5 | 0.5
Inspection Effect.: Fixed | Fixed | Fixed | Fixed | Fixed | Fixed
  Expected Value: 0.6 | 0.6 | 0.6 | 0.6 | 0.6 | 0.6
  Standard Deviation: 0 | 0 | 0 | 0 | 0 | 0
Design Inspection Practice: Triangular | Triangular | Triangular | Triangular | Triangular | Triangular
  Minimum: 0.95 | 0.75 | 0 | 0 | 0 | 0
  Maximum: 1 | 1 | 1 | 1 | 1 | 1
  Peak: 1 | 1 | 0.3 | 0.3 | 0.3 | 0.3
Code Inspection Practice: Triangular | Triangular | Triangular | Triangular | Triangular | Triangular
  Minimum: 0.98 | 0.75 | 0.25 | 0.25 | 0.25 | 0.25
  Maximum: 1 | 1 | 1 | 1 | 1 | 1
  Peak: 1 | 1 | 0.75 | 0.75 | 0.75 | 0.75
Design Error Density: Trunc. Normal | Trunc. Normal | Normal | Normal | Normal | Normal
  Expected Value: 0.75 | 0.75 | 1.5 | 1.5 | 1.5 | 1.5
  Standard Deviation: 0.2 | 0.2 | 0.5 | 0.5 | 0.5 | 0.5
  Lower Limit: 0.01 | 0.01 | - | - | - | -
  Upper Limit: 2.4 | 2.4 | - | - | - | -
Code Error Density: Trunc. Normal | Trunc. Normal | Normal | Normal | Normal | Normal
  Expected Value: 0.75 | 0.75 | 1.5 | 1.5 | 1.5 | 1.5
  Standard Deviation: 0.2 | 0.2 | 0.5 | 0.5 | 0.5 | 0.5
  Lower Limit: 0.01 | 0.01 | - | - | - | -
  Upper Limit: 2.4 | 2.4 | - | - | - | -
Job Size: 533.3 | 533.3 | 533.3 | 533.3 | 533.3 | 533.3

Table 51: Latin Hypercube Sampling Results
Results are listed in the order: Hi Quality | Hi Quality - Low Cost | Short Schedule-Constant SCED | Short Schedule-Variable SCED | Short Schedule-Low Cost Const. SCED | Short Schedule-Low Cost Vari. SCED
Errors Escaping Integration Test
  Average: 28.05 | 22.41 | 105.73 | 96.5 | 138.18 | 136.83
  5th Percentile: 3.04 | 2.28 | 8.27 | 5.05 | 8.16 | 6.15
  10th Percentile: 11.15 | 9.55 | 26.62 | 24.67 | 29.35 | 26.82
  25th Percentile: 21.57 | 14.9 | 59.42 | 56.33 | 60.93 | 58.15
  75th Percentile: 36.54 | 33.39 | 156.51 | 144.54 | 188.47 | 198.13
  90th Percentile: 42.19 | 39.67 | 197.94 | 1777.94 | 275.9 | 290.73
  95th Percentile: 53.95 | 41.71 | 214.2 | 198.38 | 363.55 | 358.5
Cumulative Total Effort
  Average: 4218.88 | 3193.78 | 5632.19 | 5083.43 | 5215.99 | 4785.71
  5th Percentile: 4087.09 | 2380.41 | 4873.23 | 4075.78 | 3464.68 | 2943.69
  10th Percentile: 4097.34 | 2591.03 | 4966.97 | 4279.22 | 3678.7 | 3379.47
  25th Percentile: 4149.36 | 2898.53 | 5203.71 | 4670.42 | 4370.83 | 3927.47
  75th Percentile: 4285.02 | 3500.63 | 5902.94 | 5474.42 | 5857.28 | 5533.43
  90th Percentile: 4344.73 | 3806.23 | 6274.43 | 5821.57 | 7096.94 | 6418.24
  95th Percentile: 4385.88 | 3955.08 | 6555.78 | 5934.61 | 7467.87 | 6953.81
Errors Found in Integration Testing
  Average: 154.89 | 133.82 | 844.52 | 838.47 | 1093.48 | 1145.5
  5th Percentile: 92.34 | 71.22 | 460.46 | 478.26 | 397.86 | 441.87
  10th Percentile: 103.92 | 85.34 | 515.05 | 495.09 | 493.76 | 564.69
  25th Percentile: 122.43 | 101.1 | 631.06 | 636.36 | 742.76 | 735.28
  75th Percentile: 148.39 | 159.14 | 1002.82 | 860.44 | 1359.53 | 1485.23
  90th Percentile: 210.06 | 189.43 | 1146.12 | 1004.04 | 1722.36 | 1833.29
  95th Percentile: 220.94 | 207.88 | 1256.57 | 1202.71 | 2050.61 | 2063.79

A P P E N D I X : H – A N O T E O N F R E E M A N D Y S O N ' S P R O B L E M

H.1 Introduction

Three wonderful quotes from Freeman Dyson's Disturbing the Universe are used at the beginning of chapters 2, 3, and 6. The first provides a tactfully placed job description for anyone using the title 'engineer', which traditionally is carefully regulated in most industries – except for software. The second provides his prediction of the factors limiting a technological society. We see these same factors as contributing to our ability (or inability) to successfully build software intensive systems. Finally, the third quote alludes to an interesting problem that is essentially the same as the one considered in this research: how do we label our systems as either good or bad, for any number of success critical stakeholders?

Consider for a moment the labeling situation from the varying viewpoints of the numerous stakeholders: there are the government customers eager for the services of the expensive new system, the politicians whose political games approved the funding for the system, the taxpayers paying the bill for it, the stockholder whose company is competing to build it, the engineers tasked with designing and building it, the company managers who are trying to win the contract to build it, and even the competing governments whose own systems, policies, and politically motivated games drive the building of it in the first place. The selection of strategies for all of these players is intrinsically linked. However, some thoughts on Dyson's problem for just the success critical stakeholders (SCS) of the system are provided here.

H.2 Software Intensive System Stakeholders

Cost and schedule information is typically available (although not readily available in a usable form in our case studies) and is usually closely tracked using Earned Value Metrics (EVM). Quality, on the other hand, is notoriously difficult to measure, but it must be measured in some quantifiable manner. For a completed software solution, we could choose no high-severity software defects while executing in the target hardware environment as our readiness criterion.
This "quality" criterion does not, however, guarantee that no high-severity defects remain in the software, as the discovery of high-severity defects is only as good as the methods used to find them. The unobtainable desire is to completely test a system prior to use, while the more realistic criterion is a required probabilistic reliability for the software that is measured during the software development and system testing processes. However, what if the contractor that was selected has difficulties meeting these criteria? Is this due to upstream funding or schedule constraints, or perhaps a flawed system acquisition negotiation process? Or was the initially negotiated expectation one whose result would be significantly higher downstream cost and schedule overruns? For all of these questions, we consider that the lack of training and of incorporated continuous process improvement is a cost and schedule issue, and not due to a lack of desire for quality on the part of the engineers and their management. Numerous books and papers on the subject of quality, and on the resulting lack of use of quality processes, describe what was demonstrated in this dissertation from case study data and modeling.

So, how do we solve this situation for software intensive space system acquisitions? Boehm describes the use of the WinWin negotiation method between success critical stakeholders. An up-front planning and negotiation process that uses a three-axis coordinate system, with cost, schedule, and quality lying along the three axes, provides an alternative conceptual method of visualizing the life cycle dynamics of this situation for software intensive system acquisitions. All of the axes have a functional dependency on a variable we will simply call "Effort". We can generate a three-dimensional graph similar to the hypothetical example in Figure 88 below. The end point for the planned path in Figure 88 winds up in a region that is GOOD for all three players. The negotiated "tube" is used for identifying policies for the control variables that are used for managing the project. The final consideration is to determine whether the path is WinWinWinLose or WinWinWinWin for the "Effort" needed to build the system. Hence, we have pointed out in this dissertation that some of our systems are not properly accounting for this "Effort" variable; the result of this careless oversight is corner cutting by the employees doing the work.

Figure 88: Hypothetical Execution Path that is GOOD for 3-Players. (The figure depicts Schedule, Cost, and Quality axes; the negotiated schedule, cost, and quality; a WinWinWin (GOOD) region and an Excellence region; the program execution path and the negotiated project execution "tube" into the GOOD region; the projected cost risk; and a projection onto the Cost-Schedule axes. All execution paths whose end point is not in the GOOD region are labeled as BAD by someone.)

The deliberation over "Effort" and other variables, using cost and schedule models and negotiation steps by all the success critical stakeholders, should be considered before a system can even be labeled as GOOD to build. Hence, the concept is that all three factors of quality, cost, and schedule have a functional dependency on "Effort" and other factors. One could also construct corresponding diagrams with "Effort" as an axis.
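As a purely illustrative sketch of the labeling idea (the negotiated ranges and the end points below are invented numbers, not values from any program), checking whether an execution path's end point lands in the GOOD region might look like the following:

# Illustrative check of whether a program's end point lands in the
# negotiated WinWinWin (GOOD) region; all numbers are hypothetical.

good_region = {                       # negotiated acceptable ranges
    "cost_musd":   (0, 250),          # total cost, $M
    "schedule_mo": (0, 48),           # delivery time, months
    "quality":     (0.95, 1.0),       # e.g., a required reliability level
}

def label_end_point(end_point, region=good_region):
    """Return 'GOOD' if every negotiated axis is inside its range, else 'BAD'."""
    inside = all(lo <= end_point[axis] <= hi
                 for axis, (lo, hi) in region.items())
    return "GOOD" if inside else "BAD"

print(label_end_point({"cost_musd": 230, "schedule_mo": 45, "quality": 0.97}))  # GOOD
print(label_end_point({"cost_musd": 310, "schedule_mo": 45, "quality": 0.97}))  # BAD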
Moreover, there are a number of functional dependencies; a simple look at the parameters used by software cost models such as COCOMO II or SEER-SEM provides examples of some of the other factors, all of which have varying degrees of impact on the primary four axes and their associated temporal dependency. The diagram provided here is simplified, but could be modified to have criteria-based transition regions between the planning, design, and implementation phases, and 'planning path tubes' surrounding the planned path to accommodate risk and perturbations from shocks in any of the dimensions. Mapped onto a monitored two-dimensional coordinate system, the tube becomes a negotiated band in which the acquisition should execute. Effort and other factors are estimated from cost models and can accommodate numerous risk factors. The risk factors and mitigation strategies affecting software development were provided in chapter 2 and a prior appendix. Design and manufacturing solutions that yield up-front negotiated paths, with management reserve levels based on risk, that enter into the stakeholder solution region are deemed acceptable for creation. During system creation, measured entrance into the negotiated region using a build-to-design process is labeled as good by the success critical stakeholders, while failure to enter this negotiated space following the planned and negotiated path is labeled as bad, and a more robust design and alternative build solutions are sought. An incentive plan for matching the negotiated build path is included in the contract award and accommodates cost and schedule risk with planned mitigations (control variable policies) for "shocks" that will occur to internal variables from external sources, thus giving the contractors a financially incentivized win situation. We should be able to use Bellman's approach or Isaacs's to investigate shocks from staff turnover, cost perturbations, and others.

H.3 N-Dimensional Dynamic Bargaining Theory

The solution to Freeman Dyson's problem for all players requires a dynamic bargaining solution for N players. The bargaining method suggested in chapter 5 calls for a periodic re-negotiation as a player's (or coalition of players') negotiation position changes. While, for long-lead system development situations, the advent of new technology during system development that improves one of the players' negotiation positions provides new bargaining opportunities, the adoption of these new (and presumably proven) technologies during the execution of the program leads to perturbations to the existing pre-negotiated system. This re-negotiation step occurs on a periodic time scale to track what amounts to a negotiated point in N-dimensional space, but the impact on the development of the system becomes one of a dynamic cost-benefit analysis for the adoption of the new technology. Hence, provided here is the mathematical concept of a negotiated "N-dimensional fuzzy ball" with quality, schedule, cost, and effort as orthogonal negotiated axes, with the additional negotiated understanding that there will be periodic re-negotiations (leading to time as the 5th axis) that will allow one's negotiated position to improve as the technology situation evolves for the success critical players in the game. Further concepts can be theorized.

A p p e n d i x H E n d n o t e s

[1] Melvyn Coles, "Dynamic Bargaining Theory," published by the Federal Reserve Bank of Minneapolis, staff report 172; Internet available online at http://ideas.repec.org/p/fip/fedmsr/172.html.
[2] Ziv Hellman, "Bargaining Set Solution Concepts in Dynamic Cooperative Games," Munich Personal RePEc Archive (MPRA) (April 2008); Internet available online at http://mpra.ub.uni-muenchen.de/8798/1/MPRA_paper_8798.pdf.

A P P E N D I X : I – U M L A N A L O G Y F O R N O N - S O F T W A R E P E O P L E

I.1 Introduction

Here we provide an analogy to blueprints for what it looks like to government customers when contractors do not use the UML. The situation is analogous to constructing a building without blueprints; although the use of the analogy is not new [1], it is included here to describe a real situation that occurred on one of our flight software projects and to document the point of view of our technical representatives.

I.2 A Project-D Analogy of UML to Blueprints

Imagine you are about to move into a complex of buildings that a trillionaire customer has contracted to be designed and built – but first let's set the stage. In the negotiation process, the contractor assures you he knows exactly what he is doing and can show you tons of wonderful slide shows about what they will build for you. And so the trillionaire signs a contract worth a few billion to design and construct a very elaborate complex of buildings. The contractor has done such a wonderful sales job that the customer doesn't spend much time paying attention to what the contractor has been doing, and in fact convinces everyone that the best method to save money is to do away with most of the building inspections. Later the contractor comes back and says that he is woefully behind schedule due to various technical reasons and that he expects to be billions overspent. (The tie-in here is to the lack of cost and schedule planning.) The customer then brings in a team of independent technical experts to advise him and his staff about what they need to do to make sure this complex is ready when it is supposed to be, and he will spare no expense making sure that the complex is adequately funded. (Removing the initial cost constraints, but still with significant schedule constraints.) Over the next few years the experts are allowed to watch the minute details of the windows being brought in (analogous to new software functionality) and to make sure the quality checks on the steel girders are occurring (in-process reviews of the critical software architectural framework) on the unfinished parts of the buildings. The buildings begin to take shape, and the various quality problems identified by the customer's small team of experts are quickly attended to. Meanwhile, in the distance there is one particular complex with a wonderful bridge (critical software architectural functionality that allows direct communication in distributed software systems) that runs between two of the buildings (different computers doing different parts of the job) that nobody seems to pay much attention to, as the foreman on that job is an excellent salesman and there are never many findings with the processes that their team is using (the contractor is doing a good job of building the software functionality that is being watched).
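As a minimal numerical sketch of the bargaining building block behind such a periodic re-negotiation, the fragment below maximizes a symmetric Nash bargaining product over a coarse, hypothetical feasible set for three players; the utilities, feasibility constraint, and disagreement point are all invented for illustration and are not the dissertation's game formulation.

# Illustrative symmetric Nash bargaining solution for three players
# (acquirer, user, contractor) over a discretized, hypothetical feasible set.
import itertools

disagreement = (0.2, 0.2, 0.2)        # utilities each player gets if negotiation fails

def utilities(quality, schedule, cost_to_contractor):
    """Toy utilities: the acquirer values quality, the user values early delivery,
    and the contractor values the fee it is paid. All are scaled to [0, 1]."""
    acquirer = quality
    user = 1.0 - schedule
    contractor = cost_to_contractor
    return acquirer, user, contractor

grid = [i / 10 for i in range(11)]
best_point, best_product = None, -1.0
for q, s, c in itertools.product(grid, repeat=3):
    if q + (1.0 - s) + c > 2.0:       # crude feasibility limit: gains cannot all be maximal
        continue
    u = utilities(q, s, c)
    if any(ui <= di for ui, di in zip(u, disagreement)):
        continue                      # every player must do better than disagreement
    nash_product = 1.0
    for ui, di in zip(u, disagreement):
        nash_product *= (ui - di)     # symmetric Nash product
    if nash_product > best_product:
        best_product, best_point = nash_product, (q, s, c)

print("Nash bargaining point (quality, schedule, cost):", best_point)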
Finally, the day to determine whether the complex is ready to be moved into has arrived (the software is installed on THE target hardware), and furniture and file cabinets are beginning to be moved in (getting ready to use the target hardware system). During this initial inspection period, someone opens a door to an office and it falls off the hinges (some part of the software functionality does not work properly due to some minor design errors), and complaints start to arrive that not only does the electricity not work (some other piece of the software functionality used in different parts of the distributed hardware does not work properly)… the plumbing doesn't work either (another piece of the software functionality was not designed properly – if at all). So the trillionaire assembles his team of expert engineers with the contractor's team of expert engineers to determine how long it will take to fix these latest quality issues. During the inspection of the premises, the experts identify that the bridge is warped and the floor appears unstable (the critical communications path for the distributed software functionality does not appear that it will hold under stressing loads). Later the assembled experts determine that if the bridge is to be used, only one person can cross it at a time … and he'd better run (further inspection indicates that the communications path may not even work properly during normal operation). Further, an inspection of the basement identifies that the foundation is crumbling and leaking water (the architectural foundation may not hold during stress, or may not hold up at all). The cement is tested and determined to be completely inadequate to hold the weight of the two different buildings for any period of time (the very architectural foundation on which the software was built will not work once the system is put into normal operation with the computational throughput it is intended to deal with).

The contractor rushes to move the prior plumbers, electricians, carpenters, and the foreman onto other jobs and tells you that the new plumbers, electricians, and carpenters are the finest anywhere (they bring in a new software "A" team – and gee, you told us we had the "A" team). They bring in a new foreman and a new software architect, and these individuals proceed to draw on … napkins … (PowerPoint charts) describing how they are going to fix the customer's problem (with a new "architecture"). The plan they lay out is that they are going to build a new test foundation (prototype the new "architecture") … over there somewhere (in an undefined location the new software team is frantically rewriting software from another program for your project), that they will lift the two buildings off their foundations (reuse much of the existing software that does work), and that they will place the buildings onto the new foundation (get the reused software to work with the new software that the new team is, unbeknownst to you, frantically working on). And over the months that ensue, more napkins (more PowerPoint chart presentations) continue to be provided by the new team leads (in "design review" meetings) describing how these issues (all the original architectural fiascos from the existing system) will all be fixed.

From the start of the entire napkin ordeal, the technical experts hired by the trillionaire continue to ask the question… "You're going to show me the blueprints, right?" (So – your contractually obligated software development plan states that you use the Unified Modeling Language – so where is it? This architectural redesign is aggressive, and since the rest of the software industry uses UML, we naturally assume that you are using it to make sure that the new software foundation will fit together with the existing software that is being reused?) … and are told, "But of course, yes, we'll get those right over to you." … This proceeds for quite some time (months, actually), when finally the technical experts go to the jobsite to sit down and look at the blueprints that were never provided … but the foreman is in fact directing the team of workers … with napkins.

I.3 A Discussion on the Use of UML

UML has become the software industry's de facto standard for documenting and communicating software architecture and design. The analogy of UML to the use of blueprints by architects for the design of complex buildings is exact.
From the start of the entire napkin ordeal, the technical experts hired by the trillionaire continue to keep asking the question… “You’re going to show me the blueprints, right?” (So – your contractually obligated software development plan states that you use the Unified Modeling Language – so where is it? This architectural redesign is aggressive and since the rest of the software industry uses UML, we naturally assume that you are using it to make sure that the new software foundation will fit together with the existing software that is being reused?)… And are told, “But of course, yes, we’ll get those right over to you.” … This proceeds for quite some time (months actually), when finally the technical experts go to the jobsite to sit down and look at the blueprints that were never provided … but the foreman is in fact directing the team of workers … with napkins. I.3 A Discussion on the use of UML UML has become the software industry’s de-facto standard for documenting and communicating the software architecture and design. The analogy of UML to the use of blueprints by 367 architects for the design of complex buildings is exact. The experts the government brings onto large software projects expect to see professionally designed software products, and today that means using UML to design the software, and using that design to communicate how the product will be built to the software development team. On the use of UML, Booch comments that, Many other organizations, however, have not enjoyed the successes they assumed to be implicit by merely using the UML. Success with the UML requires thought and planning accompanied by an understanding of its purpose, limitations, and strengths- much like the usage of any technology. It is only through such awareness that an organization is most capable of applying the UML to address its unique needs, in its own context, and in a value-added manner. Blind adoption and usage of technology for technology's sake is a recipe for disaster for any technology.[2] Later in the same article Booch comments about a “UML fever” that various organizations get as a manifestation of deeper ills in the organization’s software development processes, and that these organizations should launch a self diagnosis campaign to assess the presence of UML in their programs. The reality of our situation in the aerospace industry is that we encounter a number of organizations whose staff view UML as a hindrance to their writing the code, not as a technology that will allow them obtain high quality products on tight schedules. Further, arguments made by the author and others to then retrain those staff, or bring in a team that knows how to design first are always overridden by the establishment due to – schedule pressure. Hence, we suggest that the question isn’t so much whether or not UML is the correct design language for space flight software (even though our experience indicates that those organizations that truly use it as it is intended to be used, and not just documenting the existing code for the government – the resulting code from this design process is higher quality), but that there is the clear need for a standard agreed upon communication language of a design and an architecture between offerors and customers, and a design checking methodology to insure that the costly implementation phase will go through with as few hitches as possible resulting in fewer news articles concerning space software “glitches”. Ad Astra! 
A p p e n d i x I E n d n o t e s

[1] Grady Booch, private communication.
[2] Grady Booch, The Fever is Real; Internet available from http://www.acmqueue.org/modules.php?name=Content&pa=showpage&pid=131.
Abstract
The development of schedule-constrained software-intensive space systems is challenging. Case study data from national security space programs developed at the U.S. Air Force Space and Missile Systems Center (USAF SMC) provide evidence of the strong desire by contractors to skip or severely reduce software development design and early defect detection methods in these schedule-constrained environments. The research findings suggest recommendations to fully address these issues at numerous levels. However, the observations lead us to investigate modeling and theoretical methods to fundamentally understand what motivated this behavior in the first place. As a result, Madachy's inspection-based system dynamics model is modified to include unit testing and an integration test feedback loop. This Modified Madachy Model (MMM) is used as a tool to investigate the consequences of this behavior on the observed defect dynamics for two remarkably different case study software projects. Latin Hypercube sampling of the MMM with sample distributions for quality-, schedule-, and cost-driven strategies demonstrates that the higher cost and effort quality-driven strategies provide consistently better schedule performance than the schedule-driven up-front effort-reduction strategies. Game-theoretic reasoning for schedule-driven engineers cutting corners on inspections and unit testing is then developed, based on the case study evidence and Austin's agency model, to describe the observed phenomena. Game theory concepts are then used to argue that the source of the problem, and hence the solution to developers cutting corners on quality for schedule-driven system acquisitions, ultimately lies with the government. The game theory arguments also lead to the suggestion that the use of a multi-player dynamic Nash bargaining game provides a solution for our observed lack-of-quality game between the government (the acquirer) and "large-corporation" software developers.
Linked assets
University of Southern California Dissertations and Theses
Conceptually similar
A user-centric approach for improving a distributed software system's deployment architecture
Systems engineering and mission design of a lunar South Pole rover mission: a novel approach to the multidisciplinary design problem within a spacecraft systems engineering paradigm
A model for estimating schedule acceleration in agile software development projects
Thwarting adversaries with unpredictability: massive-scale game-theoretic algorithms for real-world security deployments
Impacts of system of system management strategies on system of system capability engineering effort
A synthesis approach to manage complexity in software systems design
A declarative design approach to modeling traditional and non-traditional space systems
The development of an autonomous subsystem reconfiguration algorithm for the guidance, navigation, and control of aggregated multi-satellite systems
Optimal guidance trajectories for proximity maneuvering and close approach with a tumbling resident space object under high fidelity J₂ and quadratic drag perturbation model
Hierarchical planning in security games: a game theoretic approach to strategic, tactical and operational decision making
Theoretical foundations and design methodologies for cyber-neural systems
Empirical methods in control and optimization
Defending industrial control systems: an end-to-end approach for managing cyber-physical risk
System stability effect of large scale of EV and renewable energy deployment
Predicting and planning against real-world adversaries: an end-to-end pipeline to combat illegal wildlife poachers on a global scale
Computationally efficient design of optimal strategies for passive and semiactive damping devices in smart structures
Techniques for analysis and design of temporary capture and resonant motion in astrodynamics
Context-adaptive expandable-compact POMDPs for engineering complex systems
Model based design of porous and patterned surfaces for passive turbulence control
Developing an agent-based simulation model to evaluate competition in private health care markets with an assessment of accountable care organizations
Asset Metadata
Creator
Buettner, Douglas John (author)
Core Title
Designing an optimal software intensive system acquisition: a game theoretic approach
School
Viterbi School of Engineering
Degree
Doctor of Philosophy
Degree Program
Aerospace Engineering (Astronautics)
Publication Date
09/19/2010
Defense Date
09/04/2008
Publisher
University of Southern California (original), University of Southern California. Libraries (digital)
Tag
control theory, game theory, OAI-PMH Harvest, software systems, space, space software, system dynamics
Language
English
Contributor
Electronically uploaded by the author (provenance)
Advisor
Erwin, Daniel A. (committee chair), Bellman, Kirstie L. (committee member), Boehm, Barry W. (committee member), Gruntman, Michael A. (committee member), Kunc, Joseph A. (committee member)
Creator Email
DJBuettner@ca.rr.com,Douglas.J.Buettner@aero.org
Permanent Link (DOI)
https://doi.org/10.25549/usctheses-m1611
Unique identifier
UC1115082
Identifier
etd-Buettner-2432 (filename), usctheses-m40 (legacy collection record id), usctheses-c127-97601 (legacy record id), usctheses-m1611 (legacy record id)
Legacy Identifier
etd-Buettner-2432.pdf
Dmrecord
97601
Document Type
Dissertation
Rights
Buettner, Douglas John
Type
texts
Source
University of Southern California (contributing entity), University of Southern California Dissertations and Theses (collection)
Repository Name
Libraries, University of Southern California
Repository Location
Los Angeles, California
Repository Email