EFFECTIVENESS OF ENGINEERING PRACTICES FOR THE ACQUISITION
AND EMPLOYMENT OF ROBOTIC SYSTEMS
by
DeWitt T. Latimer IV
A Dissertation Presented to the
FACULTY OF THE GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(COMPUTER SCIENCE)
May 2008
Copyright 2008 DeWitt T. Latimer IV
Table of Contents
List Of Tables vii
List Of Figures viii
Abstract x
Chapter 1: Introduction to Robotic Systems Acquisition 1
1.1 Definition of Terms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 The Practice of Acquisition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.1 Importance of Engineering in Acquisition and Systems Engineering . . . 7
Chapter 2: Statement of Research Question 9
Chapter 3: Research Methodology 11
3.1 Methodology Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3.2 Selection and Discussion of Robotic Cases . . . . . . . . . . . . . . . . . . . . . 12
3.2.1 Anatomy of a Robotic System Case Study . . . . . . . . . . . . . . . . . 14
3.2.1.1 Narrative Description of the Case . . . . . . . . . . . . . . . . 14
3.2.1.2 Case Technical Metrics . . . . . . . . . . . . . . . . . . . . . 15
3.2.1.3 Political Analysis . . . . . . . . . . . . . . . . . . . . . . . . 17
3.2.1.4 Funding Analysis . . . . . . . . . . . . . . . . . . . . . . . . 19
3.2.2 Case 1: CMU Surface Assessment Robot . . . . . . . . . . . . . . . . . 20
3.2.3 Case 2: USAF Global Hawk . . . . . . . . . . . . . . . . . . . . . . . . 24
3.2.4 Case 3: Haptic PackBot Explorer Robot Controller . . . . . . . . . . . . 28
3.2.5 Case 4: Autonomous Helicopter Landing Capability . . . . . . . . . . . 31
3.2.6 Case 5: Hoboken, NJ Robot Garage Maintenance Change . . . . . . . . 33
3.2.7 Case 6: Robotic Vacuum User Satisfaction . . . . . . . . . . . . . . . . 35
3.2.8 Case Study Coverage . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.3 Robot Acquisition Engineering Survey . . . . . . . . . . . . . . . . . . . . . . 39
3.3.1 Goals of the Survey . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.3.2 Survey . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.3.3 Survey Statements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.3.3.1 Introduction and Instruction . . . . . . . . . . . . . . . . . . . 40
3.3.3.2 Respondent Background and General Experience Questions . . 40
3.3.3.3 Robotic Acquisition Project Questions . . . . . . . . . . . . . 40
3.3.3.4 Acquisition Engineering Practice Questions . . . . . . . . . . 40
3.3.3.5 Survey Feedback . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.3.4 Survey Verification and Validation . . . . . . . . . . . . . . . . . . . . . 41
3.3.5 Sample Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.3.6 Statistical Analysis Techniques . . . . . . . . . . . . . . . . . . . . . . . 42
3.3.7 Anticipated Results of Survey . . . . . . . . . . . . . . . . . . . . . . . 42
3.4 Revisiting Robotic System Cases . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.4.1 Examination of Survey Results to Cases . . . . . . . . . . . . . . . . . . 42
3.4.2 Analytical Technique . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3.5 Arriving at Feasibility Rationales for Robotic Systems Acquisition . . . . . . . . 45
Chapter 4: Analysis and Discussion of Results 46
4.1 Analysis of Robot Acquisition Engineering Survey . . . . . . . . . . . . . . . . 46
4.1.1 Survey Invitation and Response Rate . . . . . . . . . . . . . . . . . . . 46
4.1.2 Initial Outlier Detection . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.1.3 Responding Population and Trends . . . . . . . . . . . . . . . . . . . . 47
4.1.4 Outcome Factor Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.1.4.1 Predicting Budget Performance . . . . . . . . . . . . . . . . . 52
4.1.4.2 Predicting Schedule Performance . . . . . . . . . . . . . . . . 52
4.1.4.3 Predicting Requirements Performance . . . . . . . . . . . . . 52
4.1.4.4 Predicting Suitability . . . . . . . . . . . . . . . . . . . . . . 58
4.2 Analysis of Robot Acquisition Cases . . . . . . . . . . . . . . . . . . . . . . . . 62
4.2.1 Evaluating Outcome Factors in Cases . . . . . . . . . . . . . . . . . . . 62
4.2.2 Schedule Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
4.2.3 Requirements Performance . . . . . . . . . . . . . . . . . . . . . . . . . 67
4.2.4 Suitability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
4.2.5 Requirements and Suitability Disconnect . . . . . . . . . . . . . . . . . 70
Chapter 5: Conclusions and Directions for Future Work 73
5.1 Evaluation of Hypotheses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
5.1.1 Rejection of H0 (Engineering methods do not impact the success of a
robotic acquisition) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
5.1.2 Rejection of H1 (Engineering Methods are Not Success Critical) . . . . . 73
5.1.3 Rejection of H2 (Engineering Provides a Complete Practice for Robotics) 73
5.1.4 Rejection of H3 (Lack of Robotic Engineering Methods) . . . . . . . . . 74
5.2 Feasibility Rationales for Robotic Systems Acquisition . . . . . . . . . . . . . . 75
5.3 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
5.3.0.1 Expand Project Pool . . . . . . . . . . . . . . . . . . . . . . . 77
5.3.0.2 Robotic Engineering Body of Knowledge . . . . . . . . . . . . 77
References 79
Appendices 89
Appendix A
Survey . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
A.1 Instrument . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
A.1.1 Acquisition Engineering Introduction . . . . . . . . . . . . . . . . . . . 89
A.1.2 Background and General Experience Questions . . . . . . . . . . . . . . 90
A.1.3 Acquisition Project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
A.1.4 Engineering Area Questions . . . . . . . . . . . . . . . . . . . . . . . . 96
A.1.4.1 Acquisition Requirements Development . . . . . . . . . . . . 97
A.1.4.2 Acquisition Technical Management . . . . . . . . . . . . . . . 102
A.1.4.3 Acquisition Verification . . . . . . . . . . . . . . . . . . . . . 105
A.1.4.4 Acquisition Validation . . . . . . . . . . . . . . . . . . . . . . 108
A.1.5 Final Question . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
A.2 Invitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
A.2.1 Invitation to Academics . . . . . . . . . . . . . . . . . . . . . . . . . . 112
A.2.2 Invitation to Government Acquisition Professionals . . . . . . . . . . . . 112
A.2.3 Invitation to Industry Professionals . . . . . . . . . . . . . . . . . . . . 113
Appendix B
Survey Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
B.1 Labels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
B.2 Summary Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
B.3 Response Correlations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
B.3.1 Response Correlations between X18 and Engineering Practice Completeness and Importance . . . . . . . . . . . . . . . 119
B.3.2 Correlations to Outcome Variables . . . . . . . . . . . . . . . . . . . . . 121
B.4 Regression for Meeting Requirements (Y3) . . . . . . . . . . . . . . . . . . . . 124
B.4.1 Y3, Full Engineering Factors, and Submodel Considerations . . . . . . . 124
B.4.2 Y3, Environmental Factors, and Submodel Considerations . . . . . . . . 128
B.4.3 Final Regression for Y3 . . . . . . . . . . . . . . . . . . . . . . . . . . 134
B.5 Regression for Meeting Schedule (Y1) . . . . . . . . . . . . . . . . . . . . . . . 136
B.5.1 Y1, Full Engineering Factors, and Submodel Considerations . . . . . . . 136
B.5.2 Y1, Environmental Factors, and Submodel Considerations . . . . . . . . 140
B.5.3 Final Regression for Y1 . . . . . . . . . . . . . . . . . . . . . . . . . . 147
B.6 Regression for Fitness for Purpose (Y0) . . . . . . . . . . . . . . . . . . . . . . 147
B.6.1 Y0, Full Engineering Factors, and Submodel Considerations . . . . . . . 147
B.6.2 Y0, Environmental Factors, and Submodel Considerations . . . . . . . . 151
B.6.3 Final Regression for Y0 . . . . . . . . . . . . . . . . . . . . . . . . . . 158
Appendix C
Surface Assessment Robot Project Analysis . . . . . . . . . . . . . . . . . . . . . . . 160
C.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
C.2 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
C.2.1 Cast of Players . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
C.2.2 Road Surface Assessment Domain Description . . . . . . . . . . . . . . 161
C.2.3 Acquisition Environment . . . . . . . . . . . . . . . . . . . . . . . . . . 162
C.2.4 Client and Acquiring Organization . . . . . . . . . . . . . . . . . . . . . 163
C.2.5 Developing Organization . . . . . . . . . . . . . . . . . . . . . . . . . . 163
C.3 System Development . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
C.3.1 Concept Formation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
C.3.2 Requirements Development . . . . . . . . . . . . . . . . . . . . . . . . 165
C.3.3 Preliminary Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
C.3.4 Critical Design Review . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
C.3.5 Robot Construction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
C.4 System Delivery and Demonstration . . . . . . . . . . . . . . . . . . . . . . . . 168
Appendix D
Global Hawk Project Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
D.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
D.2 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
D.2.1 Early History of Unmanned Aerial Vehicles . . . . . . . . . . . . . . . . 171
D.2.2 Global Hawk ACTD Program and Transition to a Major Acquisition Program . . . . . . . . . . . . . . . 172
D.2.3 Early Transition to Operations . . . . . . . . . . . . . . . . . . . . . . . 174
D.2.4 Technical Description of the Global Hawk RQ-4A and RQ-4B . . . . . . 174
D.2.5 Recent Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
Appendix E
PackBot Explorer Robot Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
E.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
E.2 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
E.2.1 Cast of Players . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
E.2.2 iRobot PackBot and Operational Environment Discussion . . . . . . . . 180
E.2.3 Haptic and Miniature Based Interfaces . . . . . . . . . . . . . . . . . . . 180
E.2.4 The Haptic Avatar PackBot Controller . . . . . . . . . . . . . . . . . . . 181
E.2.4.1 Final, formal requirements . . . . . . . . . . . . . . . . . . . 181
E.2.4.2 Actual Delivered Properties . . . . . . . . . . . . . . . . . . . 181
E.2.5 Acquisition Environment . . . . . . . . . . . . . . . . . . . . . . . . . . 182
E.2.6 Client and Acquiring Organization . . . . . . . . . . . . . . . . . . . . . 183
E.2.7 Developing Organizations . . . . . . . . . . . . . . . . . . . . . . . . . 183
E.3 System Development . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
E.3.1 Pre-System Analysis Phase . . . . . . . . . . . . . . . . . . . . . . . . . 184
E.3.2 System Concept Phase . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
E.3.3 Design/Built Phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
E.4 System Delivery and Demonstration . . . . . . . . . . . . . . . . . . . . . . . . 187
E.5 Post Delivery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
Appendix F
Autonomous Helicopter Safe and Precise Landing Capability . . . . . . . . . . . . . . 189
F.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
F.2 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
F.2.1 Cast of Players . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
F.2.2 Autonomous Helicopter and Operational Environment Discussion . . . . 189
F.2.3 Acquisition Environment . . . . . . . . . . . . . . . . . . . . . . . . . . 191
F.2.4 Client and Acquiring Organization . . . . . . . . . . . . . . . . . . . . . 191
F.2.5 Developing Organization . . . . . . . . . . . . . . . . . . . . . . . . . . 192
F.3 System Development . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
F.4 Contract Completion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
F.5 Post Delivery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
Appendix G
Hoboken, NJ Robot Garage Maintenance Change . . . . . . . . . . . . . . . . . . . . 194
G.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
G.2 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
G.2.1 Cast of Players . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
G.2.2 Robot Garage and Operational Environment Discussion . . . . . . . . . 194
G.2.3 Client and Acquiring Organization . . . . . . . . . . . . . . . . . . . . . 195
G.2.4 Developing Organization . . . . . . . . . . . . . . . . . . . . . . . . . . 195
G.3 Robotic Parking Garage Timeline . . . . . . . . . . . . . . . . . . . . . . . . . 195
G.3.1 Pre-Contract Modification . . . . . . . . . . . . . . . . . . . . . . . . . 195
G.3.2 Contract Modification . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
G.3.3 Dispute . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
G.3.4 Legal Resolution and Recent Events . . . . . . . . . . . . . . . . . . . . 196
Appendix H
Robotic Vacuum Customer Satisfaction . . . . . . . . . . . . . . . . . . . . . . . . . 198
H.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
H.2 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
H.2.1 Robotic Vacuums . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
H.2.2 Vendors and Robots . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
H.2.3 Internet Product Reviews . . . . . . . . . . . . . . . . . . . . . . . . . . 199
H.3 Robotic Vacuum Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
H.4 Features of Selected Robotic Vacuums . . . . . . . . . . . . . . . . . . . . . . . 201
H.5 Reverse Engineering a Weighted-Sum Trade-Off Matrix . . . . . . . . . . . . . 210
H.5.1 Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
H.5.2 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
H.6 Interpretation of Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
List Of Tables
3.1 Political Facts of Life Impacts on Cases . . . . . . . . . . . . . . . . . . . . . . 19
3.2 CMU Surface Assessment Case Design and Resource Metrics . . . . . . . . . . 21
3.3 Global Hawk Design and Resource Metrics . . . . . . . . . . . . . . . . . . . . 25
3.4 PackBot Controller Design and Resource Metrics . . . . . . . . . . . . . . . . . 29
3.5 Autonomous Helicopter Landing Capability Design and Resource Metrics . . . . 31
3.6 Hoboken Robotic Garage Design and Resource Metrics . . . . . . . . . . . . . . 34
3.7 Robotic Vacuum Design and Resource Metrics . . . . . . . . . . . . . . . . . . 36
4.1 Outcome Variable Pairwise Correlations and t-test Values . . . . . . . . . . . . . 50
4.2 Case Requirements Performance Related to ARD Practices Observed . . . . . . 66
4.3 Case Requirements Performance Related to ARD Practices Observed . . . . . . 67
4.4 Case Suitability Related to AVER Practices Observed . . . . . . . . . . . . . 69
D.1 Global Hawk Technical Measures . . . . . . . . . . . . . . . . . . . . . . . . . 175
H.1 Robotic Vacuum Reviews . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
H.2 Robotic Vacuum Review Rates . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
List Of Figures
3.1 Overview of the Research Methodology . . . . . . . . . . . . . . . . . . . . . . 12
3.2 Case Study Design and Resource Metrics . . . . . . . . . . . . . . . . . . . . . 16
4.1 Box Plot of Completeness of Engineering Practices Conditioned by Student Status (X18) . . . . . . . . . . . . . . . 48
4.2 Scatter-plot Matrix of Overall Engineering Effort to Average Importance of Engineering Practices . . . . . . . . . . . . . . . 50
4.3 Scatter-plot Matrix of Overall Engineering Effort to Average Completeness of Engineering Practices . . . . . . . . . . . . . . . 51
4.4 Expected Value Plot for Schedule Performance and the Completeness of ARD
Given the Other Predictor Variables . . . . . . . . . . . . . . . . . . . . . . . . 53
4.5 Expected Value Plot for Schedule Performance and the Importance of AVER
Given the Other Predictor Variables . . . . . . . . . . . . . . . . . . . . . . . . 54
4.6 Expected Value Plot for Schedule Performance and the Engineer’s Robotics Experience (X05) Given the Other Predictor Variables . . . . . . . . . . . . . . . 55
4.7 Expected Value Plot for Schedule Performance and Desired Robot Autonomy
(X17) Given the Other Predictor Variables . . . . . . . . . . . . . . . . . . . . . 56
4.8 Plot of Response Values for Meeting Requirements Versus Completeness of ARD
Practices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
4.9 Expected Value Plot for Requirements Performance and the Completeness of
ARD Given the Other Predictor Variables . . . . . . . . . . . . . . . . . . . . . 58
4.10 Expected Value Plot for Requirements Performance and the Importance of ATM
Given the Other Predictor Variables . . . . . . . . . . . . . . . . . . . . . . . . 59
4.11 Expected Value Plot for Requirements Performance and the Importance of AVAL
Given the Other Predictor Variables . . . . . . . . . . . . . . . . . . . . . . . . 60
4.12 Outlier Test Graph for Best Suitability Predictors of Suitability . . . . . . . . . . 61
4.13 Expected Value Plot for Suitability and the Completeness of AVER Given the
Other Predictor Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
4.14 Expected Value Plot for Suitability and the Importance of AVAL Given the Other
Predictor Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
4.15 Expected Value Plot for Suitability and the Importance of AVER Given the Other
Predictor Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
4.16 Expected Value Plot for Suitability and Being Designated as NOT Safety Critical
(X16) Given the Other Predictor Variables . . . . . . . . . . . . . . . . . . . . . 65
F.1 USC AVATAR Helicopter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
Abstract
This thesis is concerned with identifying the engineering practices that most influence the
ability of an organization to successfully acquire and employ a robot. Of specific interest are the
matches or mismatches between our technical efforts and achieving robotic systems that are
suitable for the intended purpose. From a survey of engineers (n=18) who have advised on or
performed the acquisition of robots, candidate relations between engineering practices and system
success metrics are proposed. Those relationships are then evaluated against five case studies and
one mini-study to examine more closely how the practices are implemented as specific engineering
methods in context. From those observations, a series of project feasibility rationales is proposed
to aid engineers and managers in evaluating the feasibility of their robotic system acquisition.
1 The views expressed in this report are those of the author and do not reflect the official policy or position of the
United States Air Force, Department of Defense, or the U.S. Government.
Chapter 1
Introduction to Robotic Systems Acquisition
Robotic systems pose a particular challenge to the people buying them, the acquirers. Presently,
the majority of research in robotics is focused on how to develop the specific technical solutions.
However, many robots delivered with the best technological solutions still fail to meet the
needs of the system’s stakeholders. Indeed, many of the issues lie not with the technology of the
robot built by the developer, but with the acquirer’s ability to participate in a meaningful
fashion in the specification, development, or employment of the robot.
This research examines the engineering practices used by acquisition personnel for robots.
It also seeks to determine whether there are any gaps in the current methods available to
engineers supporting acquisition activities.
1.1 Definition of Terms
Many terms used in this thesis come from different disciplines, or are so broad that they must be
constrained to make the research feasible. Below, several terms are defined with concrete examples.
In some cases these definitions are more limiting than common usage, to increase the precision
with which the terms are addressed. Many of these definitions are interrelated; for example,
understanding who the various stakeholders are is needed to understand the acquisition.
A Robot is a mechanism which physically interacts with the world, senses in an uncontrolled
environment, exhibits some level of autonomy, and is physically separated from its operator. In
this way, a robot is considered an integrated hardware/software mechanism which in response to
some sensed stimulus, physically interacts with the world, without being explicitly commanded,
and yet is physically separate from any human operator. Thus a mechatronic device that relays
sensor information to a user and the user explicitly commands all settings of mechanical actua-
tion would not be a robot for the purpose of this thesis. This definition is also consistent with
the Robotics and Automation Society’s position on separating robotic systems from automation
systems (which also sense and interact with the world, but in controlled environments). Two
example robots used as case studies are the Carnegie Mellon University (CMU) surface assessment
robot and Northrop Grumman’s Global Hawk Unmanned Aerial System. The surface assessment
robot measures a road profile and then paints markings for various types of deviations between
the sensed road profile and a desired road profile. The Global Hawk, while typically referred to
as a remotely piloted vehicle, has autonomy to maintain safety of flight during communications
interruptions with the operator.
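The four criteria in this working definition can be sketched as a simple predicate. This is only a reading aid; the type and field names below are mine, not the thesis's.

```python
from dataclasses import dataclass

@dataclass
class Mechanism:
    """Properties used in this thesis's working definition of a robot."""
    interacts_physically: bool      # physically interacts with the world
    senses_uncontrolled_env: bool   # senses in an uncontrolled environment
    has_autonomy: bool              # acts without every actuation being explicitly commanded
    separate_from_operator: bool    # physically separated from any human operator

def is_robot(m: Mechanism) -> bool:
    """A mechanism is a robot, per this definition, only if all four criteria hold."""
    return (m.interacts_physically and m.senses_uncontrolled_env
            and m.has_autonomy and m.separate_from_operator)

# Global Hawk: remotely piloted, yet autonomously maintains safety of flight.
global_hawk = Mechanism(True, True, True, True)

# A teleoperated mechatronic device: every actuation explicitly commanded, so no autonomy.
teleop_device = Mechanism(True, True, False, True)
```

Under this predicate the Global Hawk qualifies, while the purely teleoperated mechatronic device described above does not.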
A robot alone typically does not satisfy a client’s need for a capability. For this the whole
Robotic System must be considered. A robotic system includes the robot, support equipment
for the robot (chargers, programming environment, command/control instrumentation, etc.),
training materials, repair equipment, or any other items and personnel needed for the robot to
provide a capability to a client. For the surface assessment robotic system, this would include
the robot, various instructions (for operation, maintenance, and shipping), and on site computers
and printers for handling the reports. In the case of the Global Hawk, the total robotic system
would include the air vehicle, ground control unit, spares and maintenance facilities, the training
materials for users, and communications infrastructure that enables the system to operate.
Next are the Stakeholders relevant to the acquisition of robotic systems. Stakeholders
include the developer, acquirer, client, and user.
A Developer is the individual or organization that designs, manufactures, and delivers the
robot. Robots are normally referred to by their lead developer. Thus the developers in our cases
are CMU and Northrop Grumman, for the surface assessment robot and the Global Hawk respec-
tively. A developer does not typically entirely produce a robot from scratch, and may employ one
or more sub-contractors or partners to develop the robot. Unqualified references to a developer
will refer to the collection of lead (or prime) developer, partners, and sub-contractors as a single,
unified entity.
An Acquirer is an individual or organization performing the acquisition process (see below).
Acquirers translate client and user needs to the developer, manage and execute the source selection
of a prime developer, maintain oversight/insight into developer activities to keep the client/user
apprised of development risks, and transition the finished product to the users for operational
use. For the surface assessment robot, the case study refers to a specific manager working for the
client. For the Global Hawk system, the acquirer would be the Joint Program Office at Wright-
Patterson Air Force Base.
A Client is the individual or organization that is responsible for providing resources for
the acquisition. Such resources include money, personnel, schedule, and authority. Ultimately,
“make” and “buy” decisions are made by this stakeholder, although in many cases the client may
empower acquirers to make some or all “make” and “buy” decisions for their organization (probably
given some guidelines, procedures, or laws). For the surface assessment robot, the client
is the unnamed sponsoring corporation as a whole. For the Global Hawk system, the Congress
of the United States would be the client, as it is the organization that authorizes the exact
amount of money that may be spent to acquire the system.
A User is the individual or organization which will employ and/or support the robot directly.
Users can be operators, maintainers, and managers who are responsible for ensuring the robot is
used to achieve a goal. In the case of the surface assessment robot, the users are the field engineers
and operators who perform, and use the results of, the surface assessment. For the Global Hawk,
the users are the military squadrons that operate and maintain the robots to perform their flying
mission.
Some literature refers to a Customer, which typically refers to the receiver in a supplier-
receiver business transaction. The customer is defined as the union of acquirer and client. Users
may or may not be customers, as some users may not be part of the customer interaction. The
presence of a user as a customer depends on the specific business model employed by the
client and the acquisition organization. The term customer is avoided where possible
in this thesis, which instead keeps the roles of acquirer and client explicit. For the survey,
which deals with a broad audience, the term customer is used, as is standard in actual engineering
practice, where a customer can take on any of the roles of client, acquirer, or user.
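The stakeholder roles above, and the definition of customer as the union of acquirer and client, can be sketched as a minimal role model. This is purely illustrative; the names are mine, not the thesis's.

```python
from enum import Flag, auto

class Role(Flag):
    """Stakeholder roles in a robotic system acquisition, per this thesis."""
    DEVELOPER = auto()
    ACQUIRER = auto()
    CLIENT = auto()
    USER = auto()

# The customer is defined as the union of the acquirer and client roles.
CUSTOMER = Role.ACQUIRER | Role.CLIENT

def is_customer(roles: Role) -> bool:
    """A stakeholder is part of the customer if it holds either constituent role."""
    return bool(roles & CUSTOMER)

# One party may hold several roles at once, as in the house-buying
# analogy of Section 1.2, where the buyer is acquirer, client, and user.
home_buyer = Role.ACQUIRER | Role.CLIENT | Role.USER
```

A pure developer or a user who holds neither constituent role falls outside the customer, matching the text's note that users may or may not be customers.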
Given the stakeholders, the Practice of Acquisition can be defined as the typical processes
used to “obtain products through contract” as defined by the CMMI-ACQ℠ (Dodson 2006) and
that is “directed and funded” by a client as defined in the US Department of Defense Acquisition
System (DoD 2003a). Another definition of acquisition is proposed by Oberndorf, in which acquisition
is the “set of activities performed to procure, develop, and maintain a system” (Meyers
2001). Acquirers use acquisition processes to obtain a product that is to be employed by users
to fulfill some need of the client. For the surface assessment robot, the acquisition process in-
cluded the funding of research agreements with CMU and the series of interactions between the
client and CMU. For the Global Hawk acquisition, the process is described in the various laws,
policies, and statutes, as tailored by procedures specified by the Joint Program Office. Examples
include the Federal Acquisition Regulation (DAR CAA 2006) and DoD Directive 5000.1:
The Defense Acquisition System (DoD 2003a), which mandates the creation of a series of management
and engineering plans by the government acquisition office, such as a Global Hawk System
Engineering Plan, and the implementation of guidance and processes described and detailed in DoDD
5000.2 (DoD 2003b). These policies and directives are the specific methods employed by the
acquirer.
This brings us to the definition of an Engineering Method. Koen states that the engineering
method is the application of engineering heuristics (Koen 1984), a definition he proposed as the
most compact. An intermediate definition he also proposed, “the engineering method is the use
of heuristics to cause the best change in a poorly understood situation within the available
resources” (Koen 1984), gives guidance on the nature of engineering heuristics. Hence, the
engineering heuristics used to achieve an acquisition practice would be the engineering methods
used by acquirers. One
example of an engineering heuristic would be to use a generally accepted recommended practice
for requirements specification, such as IEEE’s (IEEE 1998), to generate requirements, as was done
in the surface assessment robot case. Another example would be the methods proposed for aircraft
avionics (RTCA 2001), which would be used for avionics software in Global Hawk, as
required by the Federal Aviation Administration (FAA) (FAA 2003) to operate the Global Hawk
in civilian airspace. While both the FAA and the IEEE standard acknowledge that known methods
cannot guarantee that all possible requirements are specified correctly enough to lead to a perfectly
correct solution, the standards act as heuristics that generally lead to a solution. Thus, by Koen,
heuristics such as these would be the methods used to realize the practice of acquisition.
An example of an engineering method in acquisition would be to utilize open systems principles
to architect a technical solution at the acquisition level (as discussed in (Hanratty 1999)).
While both the DoD guidance and the CMMI-ACQ℠ illuminate the need for practices to architect
a technical solution at the acquisition level, this open systems methodology discusses using
the open system concept to achieve that goal, as well as providing a method by which an engineer
can assess whether a given architecture has been designed according to open systems principles.
Other technical methods proposed to support acquisition (such as dependability cases (Weinstock
2004), QUASAR (Firesmith 2006), and the Architecture Trade-off Analysis Method (Kazman 2000))
outline ways in which systems engineering can be performed to support software-intensive
system acquisition and development, but do not help engineers and architects determine how
to engineer and design a robotic system (or indeed provide guidance on what is an appropriate
software architecture).
This finally brings us to the definition of an Engineering Practice. Per the CMMI-ACQ (Hanratty 1999), an engineering practice is more general than a method: it is a goal that many different specific engineering methods may satisfy. For example, the requirements development process area has a practice to perform trade-off analyses for satisfying competing client desires. The specific method employed by the engineer comprises a more specific tool (such as weighted sum of features tables) and notions of which types of technical features should factor into that trade-off.
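As an illustration only (the dissertation does not prescribe any particular tool), a weighted sum of features table of the kind mentioned above might be sketched as follows. The feature names, weights, and scores here are invented for the example, not data from any of the cases:

```python
# Hypothetical weighted-sum trade-off table for competing client desires.
# Weights express client priorities and must sum to 1; scores rate each
# candidate design on a 1-10 scale (10 = best) for each feature.
weights = {"payload": 0.5, "endurance": 0.3, "unit_cost": 0.2}

candidates = {
    "Design A": {"payload": 8, "endurance": 5, "unit_cost": 6},
    "Design B": {"payload": 6, "endurance": 9, "unit_cost": 7},
}

def weighted_score(scores, weights):
    """Sum of feature scores, each weighted by the client's priority."""
    return sum(weights[feature] * scores[feature] for feature in weights)

for name, scores in candidates.items():
    print(name, round(weighted_score(scores, weights), 2))
```

The engineering judgment lies less in the arithmetic than in choosing which technical features belong in the table and how heavily each should weigh.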
1.2 The Practice of Acquisition
Consider the case of buying a house. The acquirer, client, and user is the home buyer. However,
since few users are familiar with all the issues of buying a house, we employ a Realtor to aid in
acquiring the house. This Realtor is responsible for helping us understand the various metrics and
trade-offs, such as what is a quarter-bath, or what is a semi-finished basement. Some questions
may involve the Realtor performing some research on our behalf. When a decision is made to
buy a property, the Realtor helps negotiate final closing issues and helps ensure that the buyer is
protected from any bait-and-switch with the property. Over the centuries, a well-codified understanding has developed of how this acquirer, presently known as a Realtor, does the job and which key aspects of home buying must be represented to clients. Of course, some Realtors are better acquirers for their clients than others; these norms do not preclude differences due to individual abilities, local optimizations, “above and beyond” customer service, or other factors.
For example, the Realtor has a practice to advise a client about the condition of the property
being considered. One method that may be used by the Realtor is to perform a visual inspec-
tion of the property with the buyer and point out problems. Another purpose of this joint walk
through method is that the Realtor can ensure that the property is being truthfully disclosed to
the buyer by the seller (in engineering, we do physical configuration audits which are similar, but
typically more invasive). Another set of methods the Realtor uses is to help the buyer develop
requirements for what kind of property they are going to buy. For example, the Realtor elicits
information about the buyer’s desire for proximity to place of work, family, and friends, as well
as checking the buyer’s family plans to see about being in an appropriate school district. Realtors
will also employ checklists of tasks that a buyer needs to perform, such as securing funding from
a lender, establishing escrow accounts, and performing various buyer inspections to ensure that
their buyer is kept on schedule to complete the transaction in a reasonable amount of time. In totality, the set of methods that satisfies the statutory and customary requirements of a Realtor to a buyer makes up that industry's generally accepted acquisition practice.
The most commonly encountered definition of acquisition practices and methods is given in
the description of the Defense Acquisition System, which “provides management principles and
mandatory policies and procedures for managing all acquisition programs”(DoD 2003a). In
essence, this is the organization specific directive of what acquisition processes must be oper-
ated in the Department of Defense, and not a generic framework to characterize or evaluate all
(including non-defense and non-government) acquisitions.
The Software Engineering Institute has developed a process model, the CMMI-ACQ, that provides a breakdown of the various process areas into practices that would need to be employed at various levels of maturity to be considered a mature acquisition organization. The CMMI-ACQ describes what needs to be done for acquisition, but not how to achieve the given what. Several process areas list engineering practices that need to be accomplished, but do not specify how to achieve those practices. The how comprises the engineering methods of interest for this paper.
For example, the CMMI-ACQ has a process area for Acquisition Verification, which describes the set of practices that must be satisfied for an acquirer to effectively verify work products. One practice, “SP 1.1 - Select Work Products for Verification”, requires that an acquirer consciously select which work products will be verified, as well as establish which verification method will be used on each work product. The CMMI-ACQ does not address which verification methods are appropriate to which work products; that is outside the scope of the process framework. For acquisition verification of an engineering product, the acquirer would need engineers of appropriate skill to determine which method to use and then employ that method to verify the work product. Thus, an engineer performing acquisition verification would have to employ engineering methods from accepted sources. For example, the engineer could use IEEE Standard 1012-2004, “IEEE Standard for Software Verification and Validation”, and IEEE Standard 1062-1998, “IEEE Recommended Practice for Software Acquisition”, to tailor a verification method for a software work product subject to acquisition verification.
Maturity is important in both development and acquisition organizations. A common reason for considering acquisition maturity is that a less mature acquirer may derail a mature developer (Chick 2006). For example, if the acquirer forces a developer to short-circuit disciplined configuration management to get a change done quickly, problems may arise down the road, leading to increased maintenance costs (due to missing documentation) or increased development time (because downstream developers may not have been notified of a change). Ultimately, there are many potential examples of how an acquirer can force a developer to violate engineering methods meant to contain cost and schedule growth in order to meet an immediate goal. Typically, these derailments end up delaying or raising the cost of the whole acquisition. While maturity, as given by the CMMI-ACQ, is important, and this paper will use the CMMI-ACQ as a construct to focus on methods, explicit determination of the level of maturity of a specific acquisition organization is outside the scope of this paper.
In judging the effectiveness of an acquisition method, Kenneth J. Krieg, Under Secretary of Defense (Acquisition, Technology & Logistics), indicates that “customer success has to be the first criteria when judging the effectiveness of an acquisition” practice or method (Krieg 2005). Mr. Krieg's testimony to Congress includes several drives toward effective stewardship, addressing that an acquisition system is accountable to its clients, in this case the Congress, as well as to the warfighters (the users of DoD-acquired systems).
Considering the engineering methods at the acquisition level focuses the examination on what is known as “little-a acquisition” (Krieg 2006). This is differentiated from “big-A Acquisition”, which focuses on how we decide where to invest resources by determining which programs are more or less important. Thus, this paper will examine methods concerned with making sure needs are validated, but not the methods used at the strategic level to determine which programs will receive how much funding.
1.2.1 Importance of Engineering in Acquisition and Systems Engineering
The importance of engineering in acquisition has anecdotal support. (Marciniak 1980) attributes
several activities to the acquirer/buyer, such as reconciliation of independent verification and vali-
dation reports, requirements-cost trade-off decisions, and risk management. However, (Marciniak
1980) mostly noted these as special topics for project management. (Glaseman 1982) noted the
need for dedicated technical professionals, specifically for software, to be present in acquisition
offices. The lack of embedded expertise, Glaseman argued, prevented acquisition programs from
effectively evaluating information coming from prime contract developers, verification and validation authorities, and external think tank analyses.
Recent work in mapping the nature of acquisition tasks has culminated in the Capability Maturity Model (Integration) for Acquisition Organizations (CMMI-ACQ) V1.2 (The CMMI Product Team 2007). Of specific interest here are four process groups that are technically oriented:
acquisition requirements development (ARD), acquisition technical management (ATM), acquisition verification (AVER), and acquisition validation (AVAL). ARD covers practices related to the establishment of operational needs, technical requirements, and contractual provisions, performing various requirements analyses, and allocating requirements. These ARD activities are distinct from the requirements management functions (such functions seek to establish baselines and ensure traceability, and are essential for the engineering of complex systems). ATM addresses technical exchanges between the acquirer and the developer as well as the management of external interfaces. AVER and AVAL cover the base practices for the verification and validation of acquisition-level processes and products.
(Elm 2007) brought the notion of effectiveness in systems engineering to the forefront by
conducting a survey of 64 respondents to determine what correlations exist between CMMI pro-
cess groups and project success criteria. Of note in this survey, individual correlations may have
been based on smaller samples, as not all respondents answered all questions and some responses
may have been dropped as outliers. This study was largely based on the version of the CMMI for
developers. Because the survey asked the developers only a few questions about acquisition capabilities, it was unable to establish a clear relationship between acquirer capability and project performance.
Chapter 2
Statement of Research Question
What robotic engineering methods need to be employed by an acquirer to improve the likelihood
that a robotic system can be acquired on schedule and within budget and employed to the benefit
of the clients or users?
Although this question may appear to be straightforward, answering this question is hampered
by several factors:
1. Lack of a generally accepted definition of robotic engineering methods, especially methods appropriate to the acquisition level.
2. Lack of a body of experience that collects the seminal or canonical examples of acquiring and employing robotic systems that provide motivations for using the various engineering methods.
This question is broken into several parts. The first part is to assess what methods acquirers
currently utilize when making acquisition decisions. The second part is to examine the success
or failure of a small set of acquisitions to meet client and user expectations. The final part is to
examine if this is evidence of any gaps in the methods available to be practiced by engineers.
Gaps are evidenced by issues analyzed in the selected acquisition case studies that do not have
corresponding methods in generally accepted or common use to address them.
Specifically, this thesis will have to address the following null hypothesis and alternate hy-
potheses:
H0 (Null Hypothesis): The absence or presence of engineering methods employed by the
acquirer does not impact the likelihood of a successful robotic acquisition.
H1 (Not Success Critical): No engineering methods exist that could make an unsuccessful
robot acquisition into a successful one.
H2 (Complete Practice): There is no observed gap in engineering methods available to
successfully acquire a robotic system.
H3 (Lack of Robotic Engineering Methods): No engineering methods applied at the
acquisition level are specific to the robotic systems.
Chapter 3
Research Methodology
3.1 Methodology Overview
This thesis conducts a case study investigation into the nature of robotics engineering methods as they are applied to the acquisition and employment of robotic systems. A series of case studies of robotic system acquisitions with sufficient breadth is selected for later examination. A survey is then conducted to determine which broad categories of engineering practices are success critical to the acquisition and employment of robotic systems. The cases are then revisited in light of these selected practices to determine which specific methods are being utilized and their impact on the acquisition effort. Finally, based on the observations in the cases, a direction is cast for future work in building a body of knowledge for the engineering of robotic systems. This methodology is summarized in Figure 3.1.
Per (Yin 2002), the selection of case studies in advance of having theories can help ensure that the cases are selected without bias. The survey then provides several theories based on statistical analysis of the respondents. Running the survey after the selection of the cases ensures no cross-pollination of theories into the case selections. Although not necessary in this research, cases could be eliminated in a second stage if a case did not contain information relevant to the theory (e.g., in this thesis, a case that did not describe the engineering approach taken by the acquirer could be eliminated without prejudice). By then examining the cases using the theories produced by the survey, the cases can provide some level of validation of the survey's results. In the end, the full sample size that supports the research can be considered to be the size of the survey respondent pool plus the number of cases used for validation.
Figure 3.1: Overview of the Research Methodology
3.2 Selection and Discussion of Robotic Cases
Case studies provide a mechanism for us to examine not only an engineering method, but to
describe what was actually done in the context of a development or acquisition (Kardos 1979).
Case studies are used in engineering education to motivate students to participate in the classroom
environment and learn about the practice of engineering in the real world. Yin (Yin 2002) is the most cited reference for designing case study research and has been used in tutorials on applying case study research, including in software engineering (Perry 2004) and software acquisition (Glaseman 1982).
Additionally, case studies have frequently been used in business to build various theories, to the point where the method for building those theories is fairly well established by Eisenhardt (Eisenhardt 1989). This paper presents a straightforward method for conducting case study research, consistent with Yin's (Yin 2002) approach.
Initial work was to define the research question to be addressed by the case studies. For this, a
subset of the main research question was asked: What are the motivations and effects of robotic
engineering methods used by acquirers of robotic systems? This question provides guidance on
the selection of case studies as well as guidance into the next activity, the choosing of a priori
constructs.
The main construct selected to characterize the acquisition domain and the processes and methods used in acquisition was the CMMI-ACQ. Further, we chose the bodies of knowledge
to represent the generally accepted practices and methods of disciplines of software, mechanical,
electrical, and systems engineering. We also selected continued use of the robotic system and
ability to meet budget and schedule constraints as indicators of effectiveness.
The need for quantitative, or empirical, data on acquisition practices was articulated recently by Richard Turner (Turner 2004), who argued that decisions to use engineering methods need to be supported by analytically verifiable information. This work does not refute the need to collect qualitative data, which can be used to put the quantitative data into perspective, but holds that qualitative data should not be used alone. The focus of the work to date has been to analyze cases and systematically examine the qualitative factors.
The goal of the case studies chosen and developed was to demonstrate emerging robotic
engineering methods in the context of an actual acquisition. Since there are not a large number of
case studies of robotic system acquisitions, the focus will be on developing and selecting robotic
case studies that span various engineering process areas (as case study research tends to be too
sparse for statistical sampling methods (Eisenhardt 1989)). This will increase the breadth of
potential methods to be examined, which is beneficial in this case to reduce the likelihood of
examining only domains in which known methods from other disciplines are utilized.
In selecting the process areas to be surveyed, we reduce from the 22 process areas in the CMMI-ACQ. Those 22 process areas are divided into 4 process area groups, of which the “Acquisition Management Process Group” is the most directly applicable to our interest in engineering methods. Within this process area group are the process areas of acquisition requirements development, acquisition technical solution, acquisition verification, and acquisition validation. These process areas are similar to the engineering process areas within the CMMI-DEV (verification, validation, requirements development, and technical solution). The CMMI-DEV and CMMI-ACQ are related, but slightly different, models proposed by the CMMI Steering Group. These models share a common goal of providing a framework for improving an organization's capability to build (CMMI-DEV) or acquire (CMMI-ACQ) systems. Case studies will have to be selected that provide data points in these four areas. Generally, we can expect that case studies will provide data in multiple areas, but that they will primarily provide data in one or two areas. Hence, a minimum of three case studies should be sufficient to provide some information on the methods used by the practices in these four process areas. More case studies may be examined to foster cross-case pattern exploration or to back up observations in one case with related examples.
The developed case studies in the appendix appear to have strong evidence in the process areas of acquisition technical management (surface assessment robot, haptic controller), acquisition validation (Global Hawk, surface assessment robot, autonomous helicopter), acquisition verification (Global Hawk, surface assessment robot), acquisition requirements development (Global Hawk, robotic vacuum), and absence of practice (robotic garage).
Three methods were utilized to help guide the analysis of the quantitative and qualitative factors that impact the case studies. The first method was the examination of the design-to-resource complexity inherent in the case. The second was the examination of the political factors impacting the engineering program. The third was to consider the a priori sufficiency of the resources to mature a robot prototype into a system, product, or system-product.
3.2.1 Anatomy of a Robotic System Case Study
A case study is composed of a narrative description, a measure of its technical complexity, and a measure of its political risks. Attempts were made to determine whether the a priori funding profile was sufficient based on prior related projects, but various problems were encountered in getting sufficient data, as can be seen in the section on funding analysis.
The purpose of these metrics is to understand the context in which each of the selected cases takes place. By examining each case using these metrics and methods up front, the cases can be shown to have been technically feasible, and we can understand how the engineering methods employed were or were not able to adapt to the political and social constraints.
3.2.1.1 Narrative Description of the Case
The narrative description of the case provides for the factual reporting of what occurred during
the acquisition of a robotic system. The purpose of the narrative is to provide information relating
to the case in such a manner as to inform future analysis, but not itself be analytical. Narratives
focus on specific people, doing specific things, at a specific time, with specific resources, and
in a specific environment. The specificity of the case is its power as a research tool, but limits
cases from general utility as a guide on how to perform an acquisition. The specificity enables
theories to be evaluated in the case’s real-world context so that factors not specifically relating to
the theory are available to explain variations or illuminate shortcomings of a theory. This same
specificity also limits general utility of the case as a guidebook, because the case is predicated
on the specific time and specific environment described, which likely will not apply to a different
project.
3.2.1.2 Case Technical Metrics
In order to show that the cases sufficiently span a range of difficulty and resources, a brief evalu-
ation of the design complexity and resources is performed.
(Moody 1997) provides metrics to measure the design difficulty and resources available to
an engineering design case. These metrics have been non-experimentally shown to have some
variability in assignment (ibid), but generally only one class of movement within the assignment
scale. Accurate and precise characterization of the cases is not necessary, as the cases will not
be directly compared based on these metrics. The accuracy demonstrated is sufficient to give
confidence that a set of cases spans the design and resource space.
The metrics for design complexity are:
1. Design Type (refers to the nature of the design effort on a scale of continuous improvement,
original innovative design, to breakthrough design at the highest scores)
2. Knowledge Complexity (refers to the number of people who hold the knowledge required
to build such a system, higher scores reflecting fewer people knowing the information)
3. Steps (refers to the number of process steps or components in the system)
4. Quality (refers to focus on implementing or continuing quality-related techniques)
5. Process Design (refers to the amount of manufacturing process design effort and how taxed
that process will be based on anticipated demand)
6. Aggressive Selling Price (refers to the extent that market forces drive challenging unit
price or other cost goals)
The metrics for resources are:
1. Cost (refers to the ability of the client to pay for the first unit)
2. Time (time from definition of need to delivery of the first unit)
3. Infrastructure (amount of infrastructure, at the time, needed by the project)
Note that several of these measures (such as infrastructure and cost) are based on the views of
the people involved (cost is the client’s view) or at the time (infrastructure considers the utilization
of resources available at the time). Also, the metric for design type is subjective.
We use these measures to determine whether the systems fall into one of three classes: typical engineering projects, politically risky projects, or technology-risky projects. (Moody 1997) proposes that projects within a band of a line running between (0,0) and (50,35) are typical engineering design projects. Those falling below the line (lower right) are politically risky projects that consume more resources than their design complexity would normally indicate and may be highly subject to political changes. Such projects likely have low returns on investment and need non-financial reasons to occur (such as road projects or, as (Moody 1997) cites, the Apollo program). Those falling above the line (upper left) are high-technology-risk projects, which may need to prioritize technical capabilities in anticipation of trade-offs.

Figure 3.2: Case Study Design and Resource Metrics
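The classification rule above can be sketched in a few lines. This is an illustration of the idea only: the axis interpretation (resources on the horizontal axis, design complexity on the vertical) and the band half-width are assumptions of this sketch, not values given by (Moody 1997):

```python
# Sketch of Moody's project classification: points near the line from
# (0, 0) to (50, 35) in (resources, design complexity) space are typical;
# below it, politically risky; above it, technology risky.
SLOPE = 35 / 50  # design complexity expected per unit of resources

def classify(resources: float, design: float, band: float = 5.0) -> str:
    """Classify a project by where it sits relative to the expected line."""
    expected = SLOPE * resources
    if design < expected - band:
        # Lower right: resources exceed what the design complexity warrants.
        return "politically risky"
    if design > expected + band:
        # Upper left: design complexity outruns the available resources.
        return "technology risky"
    return "typical engineering"

print(classify(40, 10))  # far below the line
print(classify(10, 20))  # far above the line
print(classify(30, 21))  # exactly on the line (0.7 * 30 = 21)
```

The band width controls how far a project may drift from the line before being flagged; choosing it is itself a judgment call about how tolerant the "typical" class should be.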
Figure 3.2 summarizes the cases identified for later study based on these metrics. According
to this method, the CMU Surface Assessment Robot ends up being the most politically risky
project. As can be seen in the case description, this appears consistent with what actually hap-
pened. The Global Hawk and Haptic PackBot Explorer Robot Controller are the high technology
risk projects, which matches the types of technological and production problems encountered by
the systems. Finally, the Autonomous Helicopter Safe and Precise Landing Capability, Hobo-
ken Robot Garage Maintenance Change, and Robotic Vacuums appear to be normal engineering
projects. The last group doesn’t imply that there would be no problems, but that problems should
be able to be overcome with reasonable effort.
3.2.1.3 Political Analysis
Dr. Forman proposed the initial 5 political facts of life (FoL) for government-funded engineering projects (Forman 1995). These facts of life are used to explain and understand, from an engineering point of view, how the political and engineering processes differ in their decision-making strategies. At the highest level of “big-A” acquisition, the acquisition engineering processes are concerned with optimizing quantified values to obtain a solution for the whole user enterprise. At the same time, political processes have to allocate scarce resources among many competing and diverse needs whose relative priorities may not be easily measured. By examining these facts of life, the context of a case's engineering methods can be expressed, even if the impact cannot be directly quantified.
The base set of 5 political facts of life proposed by Forman was:
1. Politics, Not Technology, Dictates What Technology Is Allowed to Achieve (refers to
the various laws, regulations, licensing, et cetera that limit what is permitted to be done
with a technology or the path of a technology’s evolution/development)
2. Cost Rules (the political system allocates scarce resources, and the most scarce resource is money; hence there is a preference to reduce spending so that other programs can get potentially more funding)
3. A Strong, Coherent Constituency Is Essential (politicians are elected by large groups
of people voting for them, hence a constituency that has a coherent voice for or against a
certain program may be more important than technical merit alone)
4. Technical Problems Become Political Problems (technical problems may include antici-
pated trial-and-error of prototype development, another agency, especially the GAO, giving
a poor review of a previously supposed successful project, or any other technical argument
that can be used by detractors to modify or eliminate an undesired program)
5. The Best Engineering Solutions Are Not Necessarily The Best Political Solutions (al-
though a full engineering analysis may indicate that a certain option is the most cost-
effective, less-risky, and meets all requirements, politics may allocate resources in another
fashion to meet the needs of the political process)
Beyond the base 5, several other facts of life are proposed by Dr. Cureton (Cureton 2006).
These additional facts of life are:
1. Political problems can become technical problems (the inverse of FoL #4)
2. Perception is Often More Important Than The Truth (a phrase, which has almost become cliché in modern usage, that states that the facts are not as important as people's perception of the facts).
3. Timing is everything (an indication that even a good idea can be presented at the wrong
time, such as after the budget for the year has already been determined)
4. Politics prefers immediate, near-term gratification (a reality due to the fact that politicians typically have to run for re-election on a frequent basis, typically beginning their next run around the time they begin their term in the US House of Representatives)
5. Politics believes in gurus and heroes (the allure of heroes and gurus to the American people is very strong, and politicians understand that and use these people to support their positions)
6. A catchy slogan is essential to getting attention (political systems make thousands, or
even hundreds of thousands of decisions per year, most of which are “very important”. The
easiest ones to remember are the ones with the catchy names and slogans)
7. Staffers shape decision-making (although not the decision makers themselves, a staffer’s
position can shape the decision of the politician intentionally or unintentionally through
errors of omission or commission in what information is presented to the political member.)
Note that these additional facts of life are not necessarily present in all cases. In many ways,
these can act as a checklist for less common political interactions. Also, these can be examined
and elaborated in a case study to associate or differentiate two cases.
We used these political facts of life to illustrate and explain the interaction of the political
process with these highly political engineering programs. By using these facts of life, we have
the ability to characterize the potential root causes of issues that may not have been entirely
controlled by the engineering process.
Also note that these factors are not necessarily unique to interactions in the US Government.
Many of these political style interactions can and do take place in other countries, large corpo-
rations, or even a small team. Although these facts of life are generally applied to government
funded programs, application of these facts to other clients may help put qualitative observations
into a perspective that can then be compared between differing clients.
Table 3.1 summarizes the significance of political problems encountered by the various programs. Note that these measures are not comparable between programs. For example, the Haptic PackBot Explorer Robot Controller is not politically less problematic than the CMU Surface Assessment Robot; only that the distribution of problems in the controller project seems more biased to one area.

Table 3.1: Political Facts of Life Impacts on Cases
3.2.1.4 Funding Analysis
Brooks proposed a factor of three resource guideline for the relative effort to transition software
programs to software systems or software products, and a further factor of three to move from
products or systems to system-products (Brooks 1995). Meaning that if a software program
prototype took $10,000 develop, would take $90,000 to turn into a software system-product. This
heuristic is often used at the outset of a project to help a software engineer understand if the
resources provided are sufficient, within a rough order of magnitude.
However, this notion has not been fully evolved for robotic systems. For example, Brooks considers and defines programs, program products, and program system-products. In a similar way, we can consider the analogue of the program to be a robot: a robotic technology component or single robot that has a single scope of independent operation. Some robots can be combined with other components in novel ways or combined with other independent systems to accomplish greater goals, in which case we have a robotic system. Next, a robot may be considered for commercialization and needs to be packaged with documentation to use and maintain the robot, as well as additional engineering for end-user safety and customization; this is a robot product. Finally, in some cases entire robot systems may be evolved into a product, in which case we have a robot system-product. For robotics, there is no established Brooks-like factor for these transitions. Thus, the transition factor could be used only to compare cases. For example, we would use differences in the transition factors to cue further analysis to determine the cause of the differences between the cases.
As future work, with sufficient cases, it may be possible to determine transition factors that can be used, as Brooks indicated, as an a priori resource-sufficiency heuristic. However, in examining these cases, getting additional cost data proved to be problematic. Organizations that would be willing to discuss the technology and engineering of their systems generally consider their actual costs to be proprietary and competition sensitive.
Note that, for the above reasons, funding sufficiency will not be addressed.
3.2.2 Case 1: CMU Surface Assessment Robot
This case study illustrates a failure to employ a road surface assessment robot for its intended
purpose due to misapplied constraints levied by the acquiring organization. The constraints
illustrate a fundamental issue: robotic systems that perform work are inherently part
of a larger system, and the robot must come from a technical solution that is aware of the
total interaction at a system-of-systems level. Attention will be paid to the interaction between
the acquiring organization and the developers to demonstrate what concerns the acquirer had and
what analysis was required to be completed by the developer as part of the acquisition. Finally,
the robot’s delivery and subsequent demonstration that led to the failure to transition the system
to operations will be examined.
I worked as a graduate student from the early concept designs through the critical design
review (CDR). This case is based upon my own observations, reviews of working project notes
from CDR through final delivery, and personal conversations with the faculty, technical staff, and
fellow graduate students at CMU who worked on this project.
In the case of the surface assessment robot, the system was delivered and demonstrated to
meet all its specifications and requirements for technical performance, process integration, and
business case. However, the robot system has not been used to assess any surfaces after its initial
demonstration. Appendix A describes how the robot was developed and the involvement of the
acquisition community and the client, and examines potential root causes for the failure of this
system to be transitioned to operations. Potential root causes revolve around the treatment of a
design constraint and how that constraint was handled by the acquirer and developer design team.
This case study was between a commercial entity and an academic institution with most of the
participants (up to and including the President of the client company) having engineering degrees
and experience. The political analysis was generally consistent with the experience of the project and
highlights the issues around keeping the various stakeholders happy with the product.
As Table 3.2 indicates, this project fell in the main channel of executable projects according
to (Moody 1997), by using resources in relation to the design difficulty.
Table 3.2: CMU Surface Assessment Case Design and Resource Metrics
Although this is not a government program, the political analysis can still be performed in the
client’s corporate context, paying attention to the various entities within and outside the client
corporation. In this case politics refers to ways senior managers, customers of the client, or
stockholders of the client interact or otherwise have their intentions felt by the project.
Politics Not Technology Dictates What Technology Is Allowed to Achieve
A negative impact from this FoL was evident when the client's engineers requested
that the robot not perform a quarter-car analysis or do anything that an engineer
would determine to be an “engineering decision”. Although the technology definitely exists
and the measurement tools selected supported this analysis, this constraint can be viewed
as a political lien on the technical solution space. Given that the licensed professional
engineers would be operating in multiple jurisdictions in multiple countries, the use of
automated analysis may not have been defendable at all project locations.
Cost Rules
Cost rules was the guiding principle for the funding and schedule of this project, through
the use of the single project Return on Investment (ROI) requirement. This development
was limited in developmental funding to what was anticipated to be saved in the use of the
robot in the first deployment. Hence, there was an extreme bias to eliminate any usage that
consumed time, as the anticipated end-user time savings were in conflict with the development
resources.
The schedule pressure, although not reported as a significant factor by the developers, was
firm in that the system had to be ready for the target usage without any room for negotiation.
Hence, no solutions could be considered that delivered value after the initial assessment.
Ultimately, it cannot be concluded that cost or schedule pressure played a significant
role in this project.
A Strong, Coherent Constituency is Essential
Initially, this FoL was exploited to the benefit of the project. This project was shepherded
from inception to completion due to the involvement of many stakeholders within the
client’s corporation: field engineers, quality assurance personnel, and acquirers. This
early involvement of so many stakeholders in building the business case and process in-
tegration served two purposes. The technical purpose helped keep this project on track
towards success. The more important purpose was that this involvement kept the project
sold within the client’s organization by building up many stakeholders who perceived value
in the project.
However, as pointed out in an upcoming thesis by Apurva Jain (Jain 2007), the lack of
involvement of the new management, after the corporate switch, led to an inevitable success
model clash between the project and the new owning company.
Technical Problems Become Political Problems
In many ways, this project’s main negative driver, one that was never overcome, was an example
of technical problems becoming political problems. In the final demonstration, the
technical issue of how to handle the deviations ended up causing
political problems with the managers and the engineers on one side, and the quality assurance
organization on the other. In many ways, both sides wanted to believe they were part of a
high-quality organization, but the unexpected markings exposed a difference in how the two
parties handled adherence to quality standards. With this problem being brought to the top
and fracturing the political alliance of the stakeholders, it should not be surprising that the
project was not transitioned to operations, as there was no longer a coherent set of stake-
holders to ensure that the project remained funded and focused after the developer’s role
terminated.
The Best Engineering Solutions are not Necessarily the Best Political Solutions
The main problem for this system was the realization of this FoL at the final demonstration,
made obvious by the occurrence of the previously described problem. While a system
that increased accurate detection of deviations was
technically superior, as demonstrated, the reality was that increased accuracy
was not desired. Thus a more accurate system did not aid the general managers in dealing
with the political pressures of the project. One of these pressures, the need to appear to be
a high quality enterprise, is covered in more depth in the next FoL.
Perception Is Often More Important Than The Truth
In some ways, in talking with individuals present at the demonstration, the perception of a
high quality product was more important than the adherence to specific quality standards
(road roughness). The client was always very proud of their high quality products and
processes, hence the attention and involvement of engineers to support the robot develop-
ment. This pride in quality led to the decision to have the robot mark all deviations, since
the road profiler selected was only periodically sampling, rather than continuous like the
manual method. However, what was unanticipated was that the precision and accuracy of
the road profiler would be so much higher than the manual method. When confronted with
new information about the observed deviations from the specification, the client’s staff was
split on how to handle this. On one side, the quality assurance personnel wanted to start
recording and tracking this new information. On the other hand, the managers and field
engineers felt that this new information wasn’t useful, as it did not predict success or fail-
ure to meet the client’s customer’s ride quality metrics. Since the corporate culture was to
be a high quality organization, this left the managers in a quandary that couldn’t be easily
solved. However, it was clear that if they continued the manual method, they would meet
their customer’s expectation. Further, the manual method gave the appearance of intimate
attention to detail. On the other hand, while more accurate and precise, the automated
method gave the appearance of multiple flaws. Unfortunately, most of these flaws were not
significant and did not impact quality. Hence, it was a fairly easy decision to continue with
the manual methods instead of pursuing the increased accuracy and other process savings of
the automated system.
3.2.3 Case 2: USAF Global Hawk
The second case focuses on the interaction between the engineering methods and the political
system in the Global Hawk acquisition. In this case study, an examination of the events around
the Global Hawk’s transformation from a technical demonstration project (an Advanced Capabil-
ity Technology Demonstration, or ACTD) of a limited number of vehicles to a formal acquisition
for a large quantity of vehicles is used to illuminate current budget overruns and other acquisition
problems. A systematic examination of political factors is used to differentiate between routine
political actions that cause change in an acquisition program and those interactions that are due to
engineering problems. Lessons have been documented (Coale 2006) and the technology demon-
stration to acquisition program transition has been studied and improved (Dobbins 2004) in the
time since the transition. Explicit linking of the problems in (Coale 2006) to the political events in
the case study provides insight into potential rationales for the engineering method chosen. The
program began as initial prototyping through a capability technology demonstration and later
transitioned into a major acquisition program; the success of technology demonstrators in
providing capabilities early to users has been noted (Krieg 2006).
As Table 3.3 indicates, Global Hawk fell in the main channel of executable projects according
to (Moody 1997), by using resources in relation to the design difficulty.
Table 3.3: Global Hawk Design and Resource Metrics
The following political observations are made about this case:
Politics Not Technology Dictates What Technology Is Allowed to Achieve
This fact appeared twice: in the budgets for fiscal years 2006 and 2007, Congress set specific
limits not only on the number of vehicles to be procured in the current year, but also on how many
could be placed “in the queue” by starting advance procurement activities meant to accelerate
development.
As cited by both transition studies, the prevention by “color of money” (federal government
slang that refers to public laws dictating that money can only be used for the purpose
for which Congress appropriated and authorized the expenditure) and statutory limits of
certain development activities (such as logistics and manufacturing planning) during the
ACTD phase made the conversion to a Major Defense Acquisition Program (MDAP) very
difficult. In 2005/2006, one of these limits turned out to be problematic: difficulties in
meeting productivity goals, due in part to a lack of spares, stemmed from the one-year
development contracts authorized.
Cost Rules
Overall, since the Global Hawk was so well sold to the operators and warfighters, the goal
of the DoD and the United States Air Force (USAF) was to keep the program sold by trying
to paint the cost to acquire the Global Hawk in the best light possible. The entire effort by
the USAF to rebook costs to another reporting structure was one way of being able to
understate the cost to keep the system sold. Even the DoD came back and had to re-include
some of the costs into the reports to maintain credibility, although even the DoD didn't go
as far in accounting for a cost per unit as the Government Accountability Office (GAO) did.
A Strong, Coherent Constituency is Essential
One vital constituency that has fought to ensure the Global Hawk's capabilities is the Combatant
Commanders (the military commanders who are deployed to various areas to fight
wars). Most of them have included the mission for High-Altitude Long-Endurance (HALE)
systems high in their requirements and even have gone as far as to name Global Hawk
specifically. Since the US has been and is engaged in operations, the opinion of these
commanders carries very heavy weight with the congressional committees to which they
routinely testify. (OMB 2005)
Learning from other programs on building a constituency, the contractors and the govern-
ment offices are trying to ensure that decision makers are well aware of how many US
companies and communities are involved in this project. Even though the two take different
strategies in how to cast a wide net (either by listing companies or by listing major
community sites), the essence is that the program, at both the acquisition and development
levels, is trying to build a picture of how widespread its support is across communities
with congressional members in key states; one Senator proudly proclaimed his
ability to have brought Global Hawk development to his state (Lott 2006).
Technical Problems Become Political Problems
Slower than expected production rates led Congress to direct the program to take the schedule
slips into account in budget requests, and to start reducing program funding and the
number of authorized units.
The Nunn-McCurdy breach in 2005, referenced in the case in Annex A, opened up a deeper
look into the program office after a GAO study concluded a different, high unit-cost growth
percentage. Although the program office claimed that the overruns were justified as upgrades
and changes to requirements, its reporting of essentially half the percentage change that the GAO
noted caused Congress to become very interested in the specific quantity of units to be
acquired and to write that quantity into law with each subsequent year’s appropriation,
rather than allowing the military to determine the number of units to be acquired based on
the funding profile from Congress.
The Best Engineering Solutions are not Necessarily the Best Political Solutions
The clearest example of this fact of life is that Global Hawk should not have been simul-
taneously put into system design and production in 2001 with the existing ACTD team.
As pointed out, many options existed such as re-competing the entire program to find a
contractor capable of handling the much larger program or developing the current team to
add the management and engineering processes needed for the larger program. However,
due to a desire to achieve fast results and quickly be on contract for the fully authorized
budget, neither option was selected.
Global Hawk is also central to the decision of when to retire the aging U-2. Ideally, the
USAF should start reducing the U-2 fleet as the Global Hawk picks up missions and ca-
pabilities. However, Congress (specifically the House) has opted to say that no U-2 can
be decommissioned until the Secretary of Defense certifies that no national or military
intelligence will be lost in the transition from U-2 to Global Hawk. Despite the fact that
some of the funding to maintain the U-2 could move to build improved Global Hawks,
Congress seems unwilling to take risks with intelligence collection capabilities while
troops are committed to action, even given the potential to accelerate the Global Hawk’s
adoption of the U-2 role.
Political Problems Become Technical Problems
The Global War on Terror, started in 2001 with Operation Enduring Freedom (OEF) in
Afghanistan, was a political issue that needed a quick solution. Since Afghanistan was
so far from friendly bases, traditional manned reconnaissance aircraft were going to be
hard pressed to support OEF. The Global Hawks could be pressed into service, despite still
being in the prototype stage. However, this raised several technical problems. One was that
the prototypes actually performed so well that their users set increasing, and eventually
unrealistic, expectations for requirements on the development (stressing the requirements
development and management processes of a research/ACTD project).
Perception Is Often More Important Than The Truth
Congress and the USAF wanted and needed a quick acquisition success. Since the Predator
was doing well on meeting cost goals (GAO 2004), it was thought that the Global Hawk
could be rapidly transferred from ACTD to MDAP since, after all, the plane was flying so
it must be ready for production! The truth was that more effort would be needed to transfer
Global Hawk from a science project to a production system than either party seemed to
want to initially admit.
Politics Believes Whatever Can be Seen, Can be Bought
Despite continual warnings in the engineering community that prototypes should not be
confused with production systems, politicians and managers continually believe that once
a system is seen that it is real and can therefore be purchased in large quantities. Nowhere
is this clearer than in the acquisition strategies for the Global Hawk.
3.2.4 Case 3: Haptic PackBot Explorer Robot Controller
The third case focuses on a zero-cash-cost academic research project into a novel interface to
control a remotely operated ground vehicle. In this project, iRobot worked with the author to
conceive of a project that would reduce the training difficulty associated with learning their Pack-
Bot Explorer Robot. The idea was to do something revolutionary that would replace the normal
operator control unit and enhance the position of iRobot in the marketplace and with their users.
In a similar fashion to the next project (Autonomous Helicopter), this project had an additional
goal of exposing iRobot to up-and-coming robotics students as a method of previewing their work.
The graduate student (who was not interested in a job with iRobot, and hence avoided conflict of
interest) then oversaw a competition between two design teams in the Fall 2006 semester, even-
tually selecting one for development of a prototype in the Spring 2007 semester. The selected
interface was a “haptic-style avatar” in the form of a miniature-style PackBot Explorer, which
would command the sensor payload arm via manipulation of the miniature. Additional haptic
feedback (to close the command loop) was to be on a second generation prototype. The PackBot
was provided by SPAWAR SSC (a government agency) in San Diego, CA as part of their
small robotics loan pool, which is available to government and academic users to test out new
systems. The project was undertaken without expending any cash by iRobot, but did involve their
commitment to bi-weekly telecons and final presentations. The principal developer has filed for a
patent on some of the underlying technology and was invited to apply for an engineering position
with SPAWAR SSC. Despite the final interface meeting objectives, iRobot has not introduced this
style of haptic avatar beyond their immediate development team nor have any other efforts been
made to transition this system into their organization.
As Table 3.4 indicates, the PackBot Controller project fell in the “high technology” region
according to (Moody 1997), due to its having higher design difficulty than the proportion of the
resources allocated.
Table 3.4: PackBot Controller Design and Resource Metrics
The following political observations are made about this case:
Politics Not Technology Dictates What Technology Is Allowed to Achieve
While a minor factor in this case, the politics surrounding export controls and proprietary
interfaces made development activities difficult. Although all involved were US citizens,
the desire to keep information dissemination to a minimum contributed to the acquirer
essentially joining the development team, as the acquirer was the only local individual with
access to the full technical interface specification for the PackBot Explorer.
Cost Rules
Despite being an acknowledged “zero-cost” project up front, this project experienced definite
cost and schedule pressure. On the iRobot front, there was uncertainty as to whether iRobot would
be able to fly the developers out for the final talk. Given that the haptic avatar controller
was largely a physical system, an in-person interview made the most sense. Despite ini-
tial indications that funding would be made available for such a demonstration, come June
2006, iRobot indicated that they preferred movies and telecon for delivery of the system.
On the developer’s end, the self-funded students had a specific maximum they were willing
to spend on their project. When a primary motor was damaged in integration testing, the
answer was to not replace the motor, but use a spare that was known to not have enough
power for haptic feedback. Although resorting to no haptic feedback was an open trade,
the decision was essentially financial and not technical.
A Strong, Coherent Constituency is Essential
Ultimately, this is where numerous problems occurred in the acquisition project. Initially,
tight collaboration between the acquirer and client occurred from summer 2006 through the
early September 2006 site visit and focused on the creation of a train-the-user-in-the-field
graphical user interface concept. After that point, the acquirer worked with a new hire
at iRobot. This new hire was in charge of human factors engineering and, although very
competent, was new to the company. This new representative had different ideas about what
could be helpful for iRobot to consider for improvements to their operator control unit, one
idea being to work on new physical input modalities. The acquirer was then comfortable
in selecting the haptic avatar controller team, as it appeared to be iRobot’s new direction.
However, that assumption of a new direction by the developer and acquirer was observed
not to be true as the project moved to completion.
Also, the project experienced trouble with its name, specifically the “haptic” part of the name.
Although the acquirer and the client representative understood that the haptic feedback
portion could be traded out, this fact was not communicated to the rest of the government
and industrial robots team. Ultimately, when the developer traded out the haptic feedback
to meet their budget constraint, the acquirer and client representative did not communicate
what this would mean to the rest of the team. Thus, during the final presentation, the iRobot
staff focused on the lack of haptic feedback extensively, despite the fact that the system did
satisfactorily span the requirements.
Technical Problems Become Political Problems
This fact was present, but not a driving factor, in this case. Ultimately, the loss of the high-torque
servo motor (a technical problem) precipitated both the budget and constituency problems
that arose. There may have also been some impression that the project was “in trouble”,
despite the fact that the developers quickly implemented new procedural safeguards to
ensure that a similar mistake would not happen again.
The Best Engineering Solutions are not Necessarily the Best Political Solutions
The selection of the physical input device over a graphical user interface ultimately may be
the lead indicator for this political fact of life. Despite the fact that the haptic avatar
controller team was the best performing, the loss of graphical user interface work did not provide
the tie-in that the rest of iRobot was expecting to see in a new operator control unit. iRobot
believed that the best “bang-for-their-buck” was going to be in the graphical user interface
(the new hire didn’t speak up about his beliefs in the final meeting), so the pushing of a
new physical input controller was likely to have been seen as a distraction.
Table 3.5: Autonomous Helicopter Landing Capability Design and Resource Metrics
3.2.5 Case 4: Autonomous Helicopter Landing Capability
The fourth case focuses on a contract for research between the Jet Propulsion Laboratory (JPL)
and the University of Southern California (USC) (contrast with case 1, which was a grant, and
case 3, which was no-fee) for an autonomous helicopter landing capability. JPL had an additional
goal of previewing the work of upcoming graduate students who might be able to
be hired. This work was performed under JPL’s Partnership Research and Development Fund
(PRDF) program, which seeds money to small projects that may have great impact for JPL. This
means that the funds for the project did not come out of the immediate acquiring group’s
funding, but were a plus-up award from higher levels of JPL management. The contract called for
the delivery of a technical report covering reprints of publications, a model and control system
for a class of helicopters, a method for emulating a specific kind of dynamics on a helicopter,
and a draft proposal for future funding opportunities. This project was undertaken in one year,
during which time the autonomous landing capability was demonstrated under a variety of
conditions. Ultimately, the capability was successfully acquired by JPL, but it did involve effort by
the lead graduate student above and beyond contract requirements. Although no draft proposal
was submitted, JPL and the USC lab continue to have collaborative efforts. Also, this project did
lead to the successful hiring of the lead graduate student by JPL.
As Table 3.5 indicates, the Autonomous Helicopter project fell in the main channel of exe-
cutable projects according to (Moody 1997), by using resources in relation to the design difficulty.
The following political observations are made about this case:
Politics Not Technology Dictates What Technology Is Allowed to Achieve
One interesting aspect here was that the Kalman filter for the inertial navigation unit failed
several times. Due to the lack of agreements in place, USC was not allowed to see the code
of the filter to attempt to help fix the problem.
Cost Rules
The contract was fixed-price with a one-year duration. This made the project
fairly insulated from the problems associated with multi-year budgets. Ultimately, when
the call for proposals was put out, the target price was known, and all bidders, including
USC, knew that meeting that cost goal was of paramount importance for being accepted.
A Strong, Coherent Constituency is Essential
One item that worked in USC’s favor was the effort by the lead graduate student to stay
involved with the sponsor and the acquiring organization throughout the life of the project.
By doing this, the graduate student kept the project sold as important to all members of the
acquiring organization and ensured they would know how to use all the different deliver-
ables.
Technical Problems Become Political Problems
Of note here was that despite several crashes of the helicopter and some initial mismatches
in the navigation system’s software, the project remained sold to the acquiring organization.
Interestingly, this did not cause political problems with the acquiring organization, as the
group was familiar with the process of experimentation with physical robots.
The Best Engineering Solutions are not Necessarily the Best Political Solutions
Related to the issue of the Kalman Filter, the best technical solution would have been to
put an agreement in place to allow the lead graduate student direct access to the code.
However, such agreements were beyond the control of the acquiring organization, due to
the lead graduate student not being a US national.
3.2.6 Case 5: Hoboken, NJ Robot Garage Maintenance Change
The fifth case focuses on the change of the support contract for maintenance of a robotic parking
garage in Hoboken, New Jersey (NJ). The garage was built in 2002 by Robotic Parking, a
company specializing in automated garages that store cars more efficiently than human drivers
can by packing the cars in tighter. In 2006, the city of Hoboken desired to terminate the maintenance and
operation contract with Robotic Parking and undertake a new contract with a different vendor,
Unitronics. The city argued that the rise in software license, support and management fees (from
$24,000 to $27,000 per month, the increase being initially attributed to the software license por-
tion of the contract) was excessive and that Robotic Parking was not sufficiently maintaining the
garage. Without evidence of a technical analysis of the impact of removing Robotic Parking, the
city suddenly terminated the current contract and removed the Robotic Parking employees from
the site. Unfortunately, the software to operate the garage was part of that maintenance contract
as a software license. Robotic Parking had established their software so that if the license was
not renewed, the software would cease working. Within a few days of the contract termination,
the license expired and trapped several vehicles within the garage for two weeks. Eventually the
cars were released when a court imposed a new license fee of $5,500 per month for three years
for only the software licenses. The city of Hoboken eventually intended to operate and maintain the
garage with its own employees and not pay software license fees. Ultimately, the city was able to
sever ties with Robotic Parking and have Unitronics provide support in 2007.
As Table 3.6 indicates, the Robotic Garage fell in the main channel of executable projects
according to (Moody 1997), by using resources in relation to the design difficulty.
The following political observations are made about this case:
Politics Not Technology Dictates What Technology Is Allowed to Achieve
This fact played out in three significant ways. First, when the city removed the current vendor
and installed the new vendor, the schedule was politically set and not based on technology
needs. Second, the vendor had installed software that was aware of its licensing, and thus
the garage would stop operating over licensing (a non-technical issue). Finally, when the
garage was inoperative, the courts dictated how a technological solution would release the
parking patrons' vehicles from the dispute.
Table 3.6: Hoboken Robotic Garage Design and Resource Metrics
Cost Rules
The issue with cost is what created the initial impasse between the city and the vendor.
When the vendor raised the licensing rate for the software, as part of their total
month-to-month contract for management and support, by 20%, the city ended up deciding to go
with a different vendor. Ultimately, the city had been plagued with budget problems with
the facility since the beginning of the construction phase (a different company provided
the building, and had several problems in delivery) and was servicing bond payments to
continue the operation of the facility.
A Strong, Coherent Constituency is Essential
The issue of constituency played out to the vendor’s detriment in two ways. First, the
vendor failed to build sufficient constituency with the city political forces to support the
raise in their rate. This enabled politicians that did not like any license fee to propose a new
vendor that would give the city the ability to maintain the garage on their own. Second,
the parking patrons became a very powerful constituency to the courts. The presence of
these constituents, which had no party to the contract dispute between the city and the
vendor, enabled the courts to rule that the software had to be reactivated for the purpose of
removing the vehicles, despite objections over pricing by the vendor.
Technical Problems Become Political Problems
The transition between vendors became very politically charged when the garage ceased to
operate, trapping parking patrons’ vehicles in the structure for weeks. This impasse created
a new constituency to the problem and quickly elevated the problem in political and judicial
arenas.
The Best Engineering Solutions are not Necessarily the Best Political Solutions
The best engineering solution would have been to continue to pay the higher rate for the
software license until the new vendor completed their work. However, the new license fees
were not politically viable for the city, and thus the solution was to immediately remove
the current vendor in favor of the new vendor. Unfortunately, without detailed planning on
the city’s part, the city’s project team did not understand the technical challenge involved.
3.2.7 Case 6: Robotic Vacuum User Satisfaction
The sixth case attempts to understand the relationship between acquirers and developers of a large
marketed commercial robot, in this case robotic vacuums. Robotic vacuums are relatively small
robots intended to automatically perform vacuuming operations in a residential, non-commercial,
setting. Most of the robots operate by executing some coverage algorithm to attempt to vacuum
the entire room, the specifics of the algorithm depend on the sensors available to the robots.
Most of the robots require some human intervention to either prepare the space (by removing
Table 3.7: Robotic Vacuum Design and Resource Metrics
small obstacles, cords, or carpet fringe), empty debris bins, maintain brushes, or perform other small tasks
on a periodic basis. Depending on the vendor, the year of release, and the capabilities of the
robots, prices may vary from a few hundred dollars to over $2,000. User satisfaction with such
systems can be examined considering the reviews posted by users to various online forums such
as Amazon.com, RobotAdvice.com, and Epinions.com.
This case is more unusual, in that no specific acquirer is considered. This case is intended to
show how information is passed between vendors and potential clients/acquirers in a more
market-based economy. Ultimately this case can only be used to verify and validate trend information
and not reveal specific methods of engineering.
As Table 3.7 indicates, robotic vacuums generally fell into the main channel of executable
projects according to (Moody 1997), by using resources in relation to the design difficulty.
As no specific acquirer or client is considered in this case, a surrogate political analysis is
performed, largely as a summary of the robot behaviors described in the online comments. No
formal analysis or inventory of the comments was performed, outside of noting the satisfaction or
rating number, so this section is largely based on the impression gained when reading comments
and the reaction of the author’s family to his own robotic vacuums at home. These observations
will be useful as a hypothetical political environment, but a more detailed study would be needed
for researchers more interested in consumer robotic vacuums.
The following political observations are made about this case:
Politics Not Technology Dictates What Technology Is Allowed to Achieve
This fact of life is only minimally present since, in a consumer electronics market, individual
clients do not have much say in the technology. However, we can imagine that a solution
that required clients to install a positioning network in their house, which would enable truly
efficient room coverage, would be rejected as too invasive. That said, the recent Lighthouse
functions provided by the Roomba 500-series are starting to move in that direction.
Cost Rules
Cost probably has some impact on the market penetration of the individual units. However,
cost did not appear to influence user satisfaction.
A Strong, Coherent Constituency is Essential
No direct evidence of teaming on the client side could be observed based on user
ratings of the robots. Of course, this is where marketing and brand recognition come into
play, as companies try to get clients to be excited about their specific brand.
Technical Problems Become Political Problems
Although the comments were not formally processed, technical problems with the robots
accounted for a portion of the low rankings. Essentially, many technical problems led the
user to give the product a low review. Problems could have been with the life of the
battery (one user noted that after over 2 years of daily cycles the battery would no longer
charge, which is expected with the type of batteries used), negative experience with tech-
nical support, or even failure of the robot to handle carpet tassels or cords (which is noted
in the operations manual for all the robots considered). This negative review for negative
experience is what would be generally expected.
The Best Engineering Solutions are not Necessarily the Best Political Solutions
A few comments were noticed in which users did not like the random coverage and the
resulting longer time it takes a robot to randomly cover a room. Even though, without
a positioning network, random coverage is generally the only engineering solution that
will probabilistically cover most of the floor of an unknown room, some clients seem to
have had a negative reaction to the behavior.
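The trade-off noted above, random coverage versus a positioning network, can be illustrated with a toy simulation. This sketches the general idea of probabilistic coverage only, not any vendor's actual algorithm; the grid size, step count, and boundary handling are arbitrary assumptions.

```python
import random

def random_coverage(width=10, height=10, steps=2000, seed=1):
    """Toy illustration: a robot doing a random walk on a grid room.

    Returns the fraction of grid cells visited at least once. Moves that
    would leave the room are clamped to the boundary.
    """
    random.seed(seed)
    x, y = 0, 0
    visited = {(x, y)}
    for _ in range(steps):
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x = min(max(x + dx, 0), width - 1)
        y = min(max(y + dy, 0), height - 1)
        visited.add((x, y))
    return len(visited) / (width * height)
```

Running this with a generous step budget shows high (but not certain) coverage, which matches the user complaint about the longer time a memoryless strategy needs.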
3.2.8 Case Study Coverage
This section outlines how the cases break down to indicate what kind of coverage we expect these
cases to provide.
Four of the six case studies have identifiable acquirers who differed from the client and user
populations. The JPL acquisition was performed by its intended user population (in this case,
engineers) and the robot vacuum cleaner case does not identify any specific acquirer or user.
Two of the acquisitions, the CMU surface assessment robot and Global Hawk were formal in
their methods. The CMU project was specifically examining its engineering methods and exper-
imenting with concurrent and integrated methods. The Global Hawk program, under Northrop
Grumman, was subject to its corporate process improvement initiatives. Additionally, both acqui-
sitions had a track record of frequent interaction between the acquirer and the developer, imple-
menting the known best practices relating to integrated product and process development (CMMI
Product Team 2006). The remaining projects were less formal in their engineering methods.
Selection is also due to differences in the style of problems being encountered by the cases.
The surface assessment robot did not have problems meeting schedule and budget requirements.
The robot did have a problem being transitioned to operations, potentially due to an issue with
validating a project constraint elicited during the development of the technical solution. The
Global Hawk case study, on the other hand, had problems meeting its production schedule and
is having increases in the unit-cost of individual Global Hawk systems. The haptic controller
project did not get successfully transferred to the client, but its failure was slightly different.
The autonomous landing helicopter project was considered to be successful, despite not fully
implementing all normal engineering practices. The robotic parking project encountered several
problems after delivery that could have been identified by more thorough life cycle engineering.
Also, these case studies demonstrate very different methods of building a case study. The
Global Hawk and Hoboken Garage case studies are based on public information and reports in
the media, most of which are not engineering documents or artifacts (although, in the Global
Hawk case, many of the media reports refer to the outcomes of engineering activities). Some
of the information includes acquisition evaluations performed by independent experts and the
Government Accountability Office. The surface assessment robot and haptic PackBot Explorer
controller cases are based on personal experience and access to extensive engineering documentation and
artifacts. The Autonomous Helicopter project was accomplished within USC and based on in-
terview access to the principal investigator and lead graduate students as well as reading final
technical documents. Finally, the robotic vacuum case study is based on a simple analysis of
user feedback versus published technical specifications.
3.3 Robot Acquisition Engineering Survey
3.3.1 Goals of the Survey
The goals of the survey are to determine what engineering practices are currently being used by
people who acquire robots and to elicit practices not currently documented in the various engineering
bodies of knowledge. Surveys have been used to determine methods and best-practice application
in acquisition organizations in the past (Turner 2002).
3.3.2 Survey
The survey was conducted in an anonymous fashion via a web service, SurveyMonkey.com.
The survey was designed to elicit opinions and perception of completeness in performing
engineering analysis tasks by engineers and others involved in the acquisition of robotic systems.
Please note that this survey is not an assessment of the respondent’s engineering knowledge or
skill, nor an assessment of any individual’s job performance.
3.3.3 Survey Statements
The survey statements are broken into five sections:
1. Introduction and Instruction
2. Respondent Background and General Experience
3. Robotic Acquisition Project
4. Acquisition Engineering Practice
5. Survey Feedback
3.3.3.1 Introduction and Instruction
This section contained no questions for the respondent, but it did log with the service provider that
a survey had been started. The instructions provided the purpose of the survey, survey instructions
(navigation and submission), a definition of acquisition with an example of how and why a robot
might be acquired by a university, and the organization of the remaining sections.
3.3.3.2 Respondent Background and General Experience Questions
This section consisted of eleven (11) questions designed to provide control information about
the field the respondent works in, their total experience with acquisition and robotics, and ed-
ucation levels. The section also asked for the work location, to track geographic dispersion of
respondents.
3.3.3.3 Robotic Acquisition Project Questions
This section consisted of fourteen (14) questions that asked about the most recently completed
acquisition project (or the current project, if none had been completed yet). This “most recent”
designation was chosen in an effort to combat the selection bias of reporting only on very successful
or failed projects. The first question asked how long ago the most recent project was completed.
The next set of questions addressed scale issues: per unit cost, and number of units acquired.
Next, quality and technical issues were addressed: anticipated lifespan in the robot’s intended en-
vironment, human or safety critical designation, commercial-off-the-shelf designation, lifecycle
strategy employed, and autonomy level. The final set of questions elicited the respondent’s opin-
ion of the overall engineering effort performed in addition to project success criteria of meeting
requirements, budget, schedule, and success in terms of fitness for the desired purpose.
3.3.3.4 Acquisition Engineering Practice Questions
This section consisted of 54 questions, of which 50 were paired. The first question in a pair was
about the opinion of how important a type of engineering activity was for the acquisition de-
scribed above. The second question was about the perception of the completeness of the efforts
undertaken, in terms of helping that acquisition. Respondents were instructed that there was no
expectation of an explicit or implied correlation between questions; it is acceptable to indicate
an activity was important, yet not have been completely effective, or to indicate an activity was
unimportant but was successfully completed (in this case, it may have been successful even if
only a small amount of effort was expended). Respondents were also instructed that using the
entire range for either importance or completeness was not necessary. Further, respondents were
instructed to respond based on their perceptions and not the formality of the work. It was possible
for activities to have been important and significant effort was expended, but no formal reports
or documents were generated. The questions in this section were mostly drawn from the spe-
cific practices of the CMMI-ACQ V1.2 (CMMI Product Team 2007), with the addition of two
questions that asked about external verification and validation respectively and the omission of a
question about peer review practices. The question on peer review was deleted as being redun-
dant with the more general verification and validation practice questions when the survey was
evaluated by independent reviewers. Finally, each question allowed for comments and each of
the four process groups asked if further practices would be needed.
Note that, for all engineering practice questions, respondents had a 5-point scale for completeness
or importance and the option not to answer or to indicate that information for that question was not
available for any reason.
3.3.3.5 Survey Feedback
This section asked only one question: whether any other engineering practices should have been covered
by the survey. Respondents were provided a text box for free-form responses.
3.3.4 Survey Verification and Validation
Survey questions and format were examined by three engineers with more than 10 years of experience
who were currently assigned to acquisition programs. Additional examinations were performed by
two engineers with at least 5 years of experience in development, but no direct acquisition
experience.
3.3.5 Sample Group
Surveys of various sizes have been used to elicit practices of professionals in the field. As pointed
out previously, (Elm 2007) used 64 respondents to determine the impact of engineering in product
delivery. (Verner 2005) used 42 respondents for determining which software project management
practices lead to success. (Lehane 2005) used 43 respondents to determine significant factors in
software systems acceptance. (Wojcicki 2006) used 35 respondents to determine the current
state-of-practice in verification and validation practices for concurrent software programming.
And finally, (Surakka 2007) used only 23 data points to determine which skills were important
to software developers in Finland. Viewing engineering as a due-diligence activity, we need
only show what a “reasonable” or average engineer would do when encountering similar issues.
This view indicates that state-of-practice performance should be determinable by examining
only a small population, as has been seen in the cited surveys.
3.3.6 Statistical Analysis Techniques
To determine which variables most correlated with project performance (schedule, requirements
satisfaction, and suitability), linear regressions were performed. The process of examining candidate
regressions was initially to use as many engineering and project factors as possible.
Then sub-models were considered by deleting or adding parameters to the various base models,
examining Mallows’ statistic (Cook 1999) at each step. During this addition and subtraction
process, the r-squared (the amount of variability in the response explained by the model) and p-value
(the probability of observing such a fit by chance) are considered. The goal is a
regression function with a high r-squared, but a p-value below 0.05. In advance of performing
regression analysis, each response variable was centered to its respective average and normal-
ized. This was done so that each response variable has an average of 0 and a standard deviation
of 1, which is normal practice to reduce numerical instability due to underlying statistical tool
assumptions (most of which assume a mean of zero and a standard deviation of 1) (Cook 1999).
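The selection criterion described above can be sketched as follows. This is a minimal illustration of Mallows' statistic for comparing sub-models against a full model, not the actual analysis code; the data layout (a design matrix of engineering and project factors) and variable names are assumptions.

```python
import numpy as np

def sse(X, y):
    """Sum of squared errors of an ordinary least-squares fit with intercept."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return float(resid @ resid)

def mallows_cp(X_full, y, subset):
    """Mallows' Cp for the sub-model using the given columns of X_full.

    Sub-models whose Cp is close to their parameter count p are preferred.
    The error variance is estimated from the full model, per the usual
    definition of the statistic.
    """
    n, k = X_full.shape
    s2 = sse(X_full, y) / (n - k - 1)   # error variance from the full model
    p = len(subset) + 1                  # parameters, including the intercept
    return sse(X_full[:, list(subset)], y) / s2 - n + 2 * p

# Responses were standardized before regression, as described in the text:
# y = (y - y.mean()) / y.std()
```

By construction, the full model always scores Cp equal to its own parameter count, so the statistic is only informative when comparing sub-models that drop candidate predictors.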
3.3.7 Anticipated Results of Survey
By analyzing the survey responses to relate engineering process areas to acquisition success
factors, the null hypothesis (H0: engineering methods have no impact) could be refuted. However,
these relations, correlations, and regressions would not give direct evidence against the other
alternative hypotheses. In that case, the relations, correlations, and regressions would need to be
examined, validated, and elaborated in a case study context to produce theories that would refute
those additional alternative hypotheses.
3.4 Revisiting Robotic System Cases
3.4.1 Examination of Survey Results to Cases
After performing the survey, the cases can then be evaluated for the engineering practices and
methods related to the outcome of the survey. The CMMI-ACQ provided the point of departure
for characterizing the methods employed by acquisition organizations, especially to help
look for flawed methods. This has been done numerous times (Fisher 2002), and the Standard
CMMI Appraisal Method for Process Improvement (SCAMPI Upgrade Team 2006) indicates
how to determine gaps using the CMMI-DEV process model.
Once there were a sufficient number of positive and negative example case studies related to the
engineering practices in question (where sufficiency was determined via the frameworks provided
by Yin and Eisenhardt), the cases were then encoded by the methods observed to be present or
lacking. To support analysis, observations of methods in use in the case studies are encoded to
the CMMI-ACQ. For example, an observed lack of a method relating to acquisition verification
specific practice 1.1 would be encoded as AVER-SP1.1. When a method is present, a relation to
the body of knowledge that specifies the method is given where possible (for example, the
specifying standard or formal body of knowledge section) and also encoded to the CMMI-ACQ.
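The encoding scheme can be made concrete with simple tagged records. The case names below follow the text, but the specific practice IDs, presence flags, and source attributions are purely illustrative assumptions, not findings from the dissertation.

```python
# Illustrative records only; "present" marks whether evidence of the
# practice was observed in the case materials, and "source" names the
# body of knowledge specifying the method (None if unknown or absent).
observations = [
    {"case": "Hoboken Garage", "practice": "AVER-SP1.1",
     "present": False, "source": None},
    {"case": "CMU Surface Assessment", "practice": "AVAL-SP1.1",
     "present": True, "source": "demonstration of engineering models"},
]

def practice_gaps(observations):
    """Return (case, practice) pairs where a practice was observed lacking."""
    return [(o["case"], o["practice"])
            for o in observations if not o["present"]]
```

Keeping the encoding as flat records of this kind is what allows the later pattern analysis to classify observations objectively by practice ID.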
3.4.2 Analytical Technique
The specific encoding of the observed methods and practices from the CMMI forms a pattern that
can be analyzed directly. This forced the observations to be objectively classified by method and
practice. Thus, we could analyze the patterns for variance and covariance in the methods present.
These patterns of method usage could be paired with an indication of whether the method was related
to the absence or presence of problems in the source (case study or survey response). The linkage
of a cause (observed evidence of a practice) with the observed effect (mitigation of risk) was then
formed into potential linked pairs.
In some cases, we did not know the specific methods used, but we could see if there was
evidence that some method was applied to achieve a practice, in the fashion that the Standard CMMI
Appraisal Method for Process Improvement (SCAMPI) (SCAMPI Upgrade Team 2006) assesses
the maturity of an organization against the CMMI process model. Thus, if expected work products
(such as final reports) exist that indicate a practice was performed, the case could be credited with
having satisfied that practice.
The assessment includes which specific practices from the CMMI-ACQ are in evidence, what
method was utilized to realize the practice, and any additional information about the method’s
utility. This profile of practice is then matched with the outcome of the case to form the cause and
effect pair. Initially such pairs are constructed for each considered outcome variable: schedule,
requirements, and performance. For example, a method used in the CMU Surface Assessment
robot was to demonstrate engineering models to the client, which resulted in meeting require-
ments, but not being suitable; that would generate two pairs: (demonstrate engineering models;
met requirements) and (demonstrate engineering models; not suitable).
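The pairing step described above can be sketched as a cross-product of observed methods with per-outcome results. The function and data structure are hypothetical illustrations; the example data reproduces the CMU Surface Assessment pairs from the text.

```python
def effect_pairs(methods, outcomes):
    """Cross each observed method with each outcome's result to form
    candidate cause-and-effect pairs, one per outcome variable."""
    return [(method, outcome, result)
            for method in methods
            for outcome, result in outcomes.items()]

# The CMU Surface Assessment example from the text:
pairs = effect_pairs(
    ["demonstrate engineering models"],
    {"requirements": "met", "suitability": "not suitable"},
)
```

Generating one pair per outcome variable is what later lets conflicting pairs (the same method with opposite results) be detected and refined into more specific methods.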
In the case of conflicting cause and effect pairs, the first attempt was to be more specific in the
method. For example, the CMU system and Global Hawk both demonstrated engineering
models, but with different results for suitability. However, the CMU system demonstrated in a
non-operational environment, while Global Hawk demonstrated in operational environments. If pairs
continued to conflict, then the political analysis and case metrics could be used to compare the
context of these pairs. We could conclude that the project that was more “normal” in terms of its
political risk and complexity would be the more representative case to consider.
Remaining pairs were candidate causes and effects attributable to the robotic system’s nature.
The existence or absence of certain types of these pairs aided our efforts to evaluate the alternative
hypotheses H1, H2, and H3.
Negative effect pairs, by construction, could be viewed as gaps in current practice. Thus we
were able to evaluate H1 (Complete Practice) by observing whether any negative effect pairs
remained after elimination. The presence of negative pairs identified candidate gaps in current
practice, which would refute H1.
H2 (Not Success Critical) would have been harder to directly refute, but again was possible
to show evidence against it. In unsuccessful acquisition examples, matching a predictive
engineering method to an acquisition decision that ran counter to the method’s output yielded evidence
that engineering could have supported the acquisition. Alternately, the use of engineering meth-
ods, whose output is used to change an acquisition’s direction (yielding a successful acquisition)
also lent evidence to refuting this claim. In essence, after common causes were eliminated, both
positive and negative effect pairs disprove H2.
Next was to eliminate known root cause and effect pairs. Specifically, pairs that could be at-
tributed to generally accepted engineering practices as defined by professional organizations and
standards. In addition to IEEE, ANSI, ASME, and ISO, we considered best practices databases
for acquisition (Dangle 2005), RTCA (e.g. (RTCA 2001)), experience reports from the Mobile
Robot Knowledge Base (Joint Robotics Program 2008), and guides for conducting technical
reviews (e.g. (Cheng 2005)). However, care was taken to ensure that the essential element
of previously conflicting or closely related pairs was retained. Continuing the previous example,
the retained pairs were (operationally relevant demonstrations; suitable) and (not operationally
relevant demonstrations; not suitable).
Positive effect pairs could then be viewed as methods that were employed but not currently
addressed by the common practice of engineering as specified by the component disciplines of
system, software, mechanical, and electrical engineering. Thus, with the positive effect pairs,
we had candidates for specific robotic engineering methods, which refuted H3 (Lack of Robotic
Engineering Methods). Even if no positive effect pairs were observed, we could not have
concluded that H3 is true, but this would have been a significant finding that would warrant further
examination to determine whether robotics is a field that possesses engineering methods unique from
the practice of the component disciplines.
3.5 Arriving at Feasibility Rationales for Robotic Systems Acquisition
Ultimately, cause and effect pairs can be distilled into a list of questions for an engineer to
ask about a robotic system acquisition. These pairs capture the core of a heuristic (per Koen)
for the acquisition of robotic systems. These feasibility rationales were similar to a list of 100
questions for technical reviews by Cheng (Cheng 2005). In Cheng’s work, a database of space
system failures was mined for negative examples, and then a root cause analysis led to a single-page
description of what technical question might have illuminated the error earlier in the life
cycle. For example, lesson 81, “Designate a Responsible Engineer for Complex Equipment,”
was reverse-engineered from the failure of any one engineer (structural, manufacturing, or project)
to take responsibility for ensuring program requirements were satisfied for a piece of micrometeoroid
shielding. Since no engineer was responsible for that piece of equipment, several tests were
waived, upper-level engineers assumed the design criteria were met, and failure ensued.
Cheng also related other lessons learned to each other in an interdependent fashion, as problems
arising from failure to perform these activities are rarely a single point of failure in engineering
methods or heuristics.
In this case, since the analysis can include positive examples, the questions can also look for
common success criteria. Continuing our previous example with the positive effect pair
(operationally relevant demonstrations; suitable), a technical reviewer should examine the
evidence presented that argues the demonstrations were operationally relevant. If the
demonstrations are not operationally relevant, the engineer should ask why this project expects
to succeed.
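One minimal way to organize the resulting review questions is a lookup keyed by effect pair. The question wording below restates the example from the text, and the structure itself is only an illustrative assumption about how such a checklist might be held.

```python
# Illustrative mapping from effect pairs to technical-review questions.
review_questions = {
    ("operationally relevant demonstrations", "suitable"):
        "What evidence shows the demonstrations were operationally relevant?",
    ("no operationally relevant demonstrations", "not suitable"):
        "Without operationally relevant demonstrations, why is this project "
        "expected to succeed?",
}

def question_for(method, result):
    """Look up the review question for an observed (method, result) pair,
    or None if the pair has no associated question."""
    return review_questions.get((method, result))
```

A dictionary keyed by pairs naturally supports the many-to-many tracing between questions and heuristics discussed below, since several pairs can map to the same question text.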
These questions used to support feasibility rationale decisions are themselves heuristics. In
full closure, the questions for feasibility examination traced fairly closely to the various
heuristics identified in the survey and the case studies. These questions combined the heuristics
in various ways, through the encoding and analysis done previously. The tracing was not one-
to-one, indeed it is easy to consider that a many-to-many relationship would exist between the
questions and the heuristics. For example, the operationally relevant demonstrations question
may relate to multiple feasibility questions, while a single question may be informed itself by the
description of the operational tests and other heuristics simultaneously.
Chapter 4
Analysis and Discussion of Results
4.1 Analysis of Robot Acquisition Engineering Survey
4.1.1 Survey Invitation and Response Rate
The survey invitation was sent to 39 individuals, 36 of whom were anticipated respondents.
The 3 additional individuals, along with 13 of the anticipated respondents, were asked to propagate
the survey within their organizations. Invited groups included: University of Southern California
Robotics graduate students, faculty, and alumni, Stanford University Faculty, Georgia Tech Fac-
ulty, Carnegie Mellon University Faculty and graduate students, US Department of Commerce
researchers, US Department of Defense robotics program managers and engineers, and 4 different
commercial (non-defense related) entities.
From this invitation pool, 30 surveys were initiated and 21 were completed, giving a response
rate of 53.8%.
Given the nature of similar journal articles, especially (Surakka 2007) which used only 23 re-
spondents, and that the relatively new field of robotics is likely considerably smaller than software
development as a whole, this constituted a representative sample from which to draw conclusions.
4.1.2 Initial Outlier Detection
Two respondents indicated “No Response” for all engineering practice completeness and importance
criteria. Since such respondents would not be matched in any statistical analysis, these two
respondents were dropped from the pool.
Several respondents indicated “No Response” for a number of engineering practices or project
success criteria. All these respondents were kept in the pool, but statistical analysis is based
only on the subset of the pool that responded to all factors of interest. This is noted later as the
number of cases (or “n”) that supports each statistical value.
On examining the responses based on different types of populations, one professional (non-student)
respondent indicated low (1-2) completeness and importance for all engineering practices,
where all other engineers had higher average responses (3-5). This engineer was reporting on a
government, defense-related project, for which such responses were even less characteristic.
As this respondent was over two standard deviations outside of any other potential
matching population (an example for verification is in Figure 4.1), this respondent was treated
as an outlier and excluded from the analysis. This gives a total qualified responding pool of 18
respondents. The remainder of the analysis focuses on these 18 respondents.
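The two screening steps described in this and the preceding subsection can be sketched with NumPy. The thresholds follow the text (all-"No Response" rows dropped, then respondents more than two standard deviations from the pool excluded), but the respondents-by-practices data layout is an assumption.

```python
import numpy as np

def screen_respondents(scores):
    """Screen a respondents x practices array, with NaN for 'No Response'.

    Returns (kept_rows, n_dropped_empty, n_outliers): rows that were all
    NaN are dropped first, then respondents whose mean response lies more
    than two standard deviations from the pool mean are flagged as outliers.
    """
    scores = np.asarray(scores, dtype=float)
    answered = ~np.all(np.isnan(scores), axis=1)   # drop all-NaN respondents
    kept = scores[answered]
    means = np.nanmean(kept, axis=1)               # mean response per respondent
    z = (means - means.mean()) / means.std()       # standardize the means
    outlier = np.abs(z) > 2.0                      # two-standard-deviation rule
    return kept[~outlier], int((~answered).sum()), int(outlier.sum())
```

For example, a pool of nine typical respondents, one uniformly low scorer, and two blank submissions would yield two dropped rows and one flagged outlier, mirroring the screening reported here.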
4.1.3 Responding Population and Trends
The survey respondents were largely from California (United Kingdom=1, Pennsylvania=2,
Massachusetts=1, Maryland=1, North Carolina=1, Virginia=1, California=14). Of this number,
5 of the 14 California respondents were students (2 additional students responded from other areas).
This geographic dispersion is similar to that of the survey invitations.
Overall, 7 students and 11 professionals (non-students) responded to the survey.
The differences between students and professionals (those with t-test scores under 0.05) in-
cluded:
• unit cost (students averaged $5-$10,000, where professionals were in the $100,000’s per unit)
• acquisition experience (professionals averaging 3 times as much experience)
• robot lifespan (professionals acquire robots for over 1,000 hours of use; student median response was for 25-199 hours)
• designation as safety-critical (professionals acquiring with safety-critical concerns)
• level of desired autonomy (students desiring slightly more robotic autonomy)
• opinion of overall engineering effort (professionals reporting strong and complete efforts, students noting fragmentary efforts)
• designation as commercial-off-the-shelf (students tend to buy off-the-shelf robots)
• acquisition strategy (students likely to use a “big-bang” acquisition; professionals were more diverse in their choice)
• acquisition schedule performance (professionals tended to deliver later than students)
Figure 4.1: Box Plot of Completeness of Engineering Practices Conditioned by Student Status
(X18)
There was no statistical difference (t-test scores over 0.05) between students and professionals
in terms of:
• years of experience with robotic systems
• highest level of education achieved
• age of project
• quantity of robots to be acquired
• if the robot met its requirements
• if the robot was suitable for its purpose
For engineering practices, students were only statistically different from professionals on the
average reported value for the completeness of acquisition technical management practices and
on the importance of acquisition verification practices.
For correlations, over all respondents, the overall impression of engineering completeness
corresponded more directly to the importance indicated for the specific engineering practices
than to the completeness responses for the same individual practices (correlations of 0.55, 0.74,
0.58, and 0.49 to importance versus 0.06, 0.47, 0.24, and 0.09 for completeness of requirements,
technical management, validation, and verification, respectively). This is illustrated in the
scatter-plot matrices in Figures 4.2 and 4.3.
On project success variables, we note that although being fit for purpose and meeting require-
ments may appear correlated (0.45), the correlation may be random (t-test=0.29, a t-test at or
below 0.05 is required to reject the null hypothesis that the variables are unrelated). This may
be due to the disconnect between meeting requirements and fitness for purpose with regards to
schedule performance. There was a slight negative correlation (correlation -0.27; t-test 0.00) be-
tween schedule performance and being fit for purpose, which indicates a slight trend to being
more fit for purpose if the robot was delivered late. Further, schedule performance had a slight
(0.15) correlation with meeting requirements (t-test=0.00), which indicates a slight trend to meet-
ing requirements when systems are delivered on schedule (no projects in the response pool were
delivered early). This is summarized in Table 4.1.
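The pairwise correlations and associated t-tests summarized above can be reproduced with a few lines. This is a generic Pearson correlation and its t statistic, not the author's analysis script, and the sample data in the usage note is invented for illustration.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def t_statistic(r, n):
    """t statistic for testing H0: r == 0 with n paired observations;
    compare it against a t distribution with n - 2 degrees of freedom."""
    return r * math.sqrt((n - 2) / (1 - r * r))
```

This is why a moderate correlation such as 0.45 can still fail the significance test: with a small n, the resulting t statistic falls short of the critical value for rejecting the null hypothesis.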
The following responses had too little variation to be considered for analysis: whether a robot was
successfully acquired (16/18 indicated yes or in progress) and whether the robot was within budget
(16/19 indicated yes or declined to answer).
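The pairwise correlation-with-significance screening used for Table 4.1 can be sketched as follows. The response vectors below are illustrative stand-ins, not the survey data, and the helper name `screen_pairs` is ours:

```python
from scipy import stats

def screen_pairs(data, alpha=0.05):
    """Pairwise Pearson correlations with a significance screen.

    Returns {(a, b): (r, p, significant)} for every variable pair,
    mirroring the criterion used above: a correlation is treated as
    meaningful only when its p-value is at or below alpha.
    """
    names = list(data)
    results = {}
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            r, p = stats.pearsonr(data[a], data[b])
            results[(a, b)] = (r, p, p <= alpha)
    return results

# Hypothetical stand-in responses (the real survey data is not shown).
responses = {
    "schedule":    [0, 0, -1, 0, -2, 0, -1, 0, 0, -1],
    "met_reqs":    [1, 1, 0, 1, 0, 1, 0, 1, 1, 0],
    "fit_purpose": [1, 0, 1, 1, 1, 0, 1, 0, 1, 1],
}
for pair, (r, p, sig) in screen_pairs(responses).items():
    print(pair, round(r, 2), round(p, 2), sig)
```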
Figure 4.2: Scatter-plot Matrix of Overall Engineering Effort to Average Importance of Engineering Practices
Table 4.1: Outcome Variable Pairwise Correlations and t-test Values
Figure 4.3: Scatter-plot Matrix of Overall Engineering Effort to Average Completeness of Engineering Practices
4.1.4 Outcome Factor Analysis
Responses within each process group were averaged, giving eight variables, two (completeness and
importance) for each of the engineering process areas of ARD, ATM, AVER, and AVAL. These, along
with the personal and project question responses, were used as predictors of the outcome factors. The
success of the robot was based on budget, schedule, meeting requirements, and being suitable.
4.1.4.1 Predicting Budget Performance
There was insufficient variation in the response variable to support analysis: only two respondents
indicated performance other than "on budget," and one of those was only slightly over budget.
4.1.4.2 Predicting Schedule Performance
The best linear regression for predicting schedule performance included a positive relation to the
completeness of the ARD practices (added-value plot, or AVP, in Figure 4.4), a negative relation to
the importance of AVER (AVP Figure 4.5), a negative relation to experience with robotic systems
(AVP Figure 4.6), and a negative relation to the level of autonomy desired (AVP Figure 4.7).
This regression explained 58% of the variation in the data with 98% confidence (R-squared
= 0.58, p-value = 0.02, n = 17). The only survey respondent not used did not provide data about
schedule performance. Observe, as previously noted, that two of the factors (importance of AVER and
level of autonomy) are strongly correlated with student or professional acquisitions. Models that
included discriminators for student projects were considered but rejected (such models had worse
F-test scores and significantly worse p-values). Hence, although those factors are correlated, the
correlation does not appear to be significant here.
4.1.4.3 Predicting Requirements Performance
To maximize the predictive power, we consider only projects that either fully met requirements
or had the most problems meeting requirements ("partially met specifications, many desired
functions available"). Only three projects reported in the middle range ("partially met specifications,
all high-priority desired functions available"). Excluding these three middle points (which have
normalized response values of 0, so their removal does not impact the probability distribution;
see Figure 4.8) may aid the analysis, because the difference between the response levels may not
be clear (the value was potentially too qualitative).
The best linear regression for predicting satisfaction of requirements is a positive relation to
the completeness of ARD practices (AVP Figure 4.9), a positive relation to the importance of
Figure 4.4: Expected Value Plot for Schedule Performance and the Completeness of ARD Given
the Other Predictor Variables
Figure 4.5: Expected Value Plot for Schedule Performance and the Importance of AVER Given the Other Predictor Variables
Figure 4.6: Expected Value Plot for Schedule Performance and the Engineer's Robotics Experience (X05) Given the Other Predictor Variables
Figure 4.7: Expected Value Plot for Schedule Performance and Desired Robot Autonomy (X17)
Given the Other Predictor Variables
Figure 4.8: Plot of Response Values for Meeting Requirements Versus Completeness of ARD Practices
Figure 4.9: Expected Value Plot for Requirements Performance and the Completeness of ARD
Given the Other Predictor Variables
ATM (AVP Figure 4.10), and a negative relation to the importance of AVAL (AVP Figure 4.11).
The resulting regression explained 74% of the variation in the data with 99.7% confidence
(R-squared = 0.74, p-value = 0.003, n = 14). In addition to the three excluded middle points, one
respondent did not provide data about meeting requirements.
4.1.4.4 Predicting Suitability
One respondent, a professional, was determined to be an outlier via a standard outlier test on
many of the candidate regressions (the outlier test was significant, and the response was four times
the value of the other responses in the finally selected regression; see Figure 4.12).
The best linear regression for predicting whether a robot would be suitable for its purpose was a
positive relation to the completeness of AVER practices (AVP Figure 4.13), a positive relation to
the importance of AVAL (AVP Figure 4.14), a negative relation to the importance of AVER (AVP
Figure 4.10: Expected Value Plot for Requirements Performance and the Importance of ATM
Given the Other Predictor Variables
Figure 4.11: Expected Value Plot for Requirements Performance and the Importance of AVAL Given the Other Predictor Variables
Figure 4.12: Outlier Test Graph for the Best Predictors of Suitability
Figure 4.13: Expected Value Plot for Suitability and the Completeness of AVER Given the Other Predictor Variables
Figure 4.15), and the designation as safety-critical (meaning that a safety-critical system was more
likely to result in a suitable robot; AVP Figure 4.16). The resulting regression explained 80%
of the variation in the data with 98.4% confidence (R-squared = 0.80, p-value = 0.016, n = 15).
Two additional respondents were not used, one for not providing information about fitness for
purpose and one for not providing information about the completeness of AVER practices.
4.2 Analysis of Robot Acquisition Cases
4.2.1 Evaluating Outcome Factors in Cases
In the cases, the important engineering factors to consider are ARD (requirements) and AVER
(verification), because these were the engineering process areas for which completeness appeared
as a predictor in the survey. For each traditional acquisition case, an assessment of the practice
relevant to the outcome is made. The robot vacuum case is used to demonstrate the lack of
Figure 4.14: Expected Value Plot for Suitability and the Importance of AVAL Given the Other Predictor Variables
Figure 4.15: Expected Value Plot for Suitability and the Importance of AVER Given the Other Predictor Variables
Figure 4.16: Expected Value Plot for Suitability and Being Designated as NOT Safety Critical
(X16) Given the Other Predictor Variables
relationship between requirements achievement and suitability/satisfaction in a detailed fashion
across 10 different robot vacuum products.
4.2.2 Schedule Performance
From the case studies, Table 4.2 was constructed regarding the evidence of performance of ARD
practices, the methods used, and the final outcome with regard to schedule performance.
Table 4.2: Case Schedule Performance Related to ARD Practices Observed
The observations illustrate the difficulty in achieving adequate schedule performance without
adequate requirements practices. Generally, the projects that had the majority of ARD processes
exercised in a way relevant to schedule development had fewer problems than those that had less
activity. Interestingly, Global Hawk and the Autonomous Helicopter program both had similar
issues with a lack of explicit prioritization of requirements and performing trade-offs on those
priorities. In the Autonomous Helicopter case, this apparently was not a problem. However, the
Autonomous Helicopter project had the lead graduate student spending non-funded time working
with the acquirer to ensure completion of the project. Due to various federal regulations, companies
are not allowed to volunteer their time on government projects, and this would have turned
into a schedule issue. The third project to have problems was the robotic garage case, which is
expected, because technical issues arose that caused the vehicle lock-in and led to litigation
(which ultimately held up development).
The issue of not having sufficiently prioritized requirements to be able to perform trade-offs is a
known one. Methods such as Schedule-As-an-Independent-Variable (SAIV) (cite)
are meant to utilize the prioritization of requirements to deliver some acceptable
subset on time. Ultimately, in the case of the government and Global Hawk, additional
analysis for SAIV would have revealed that stretching the program and reducing key engineering
efforts naturally resulted in lower production rates and operation with a lower-efficiency
staff profile. This issue is not unique to robotic systems.
4.2.3 Requirements Performance
From the case studies, Table 4.3 was constructed regarding the evidence of performance of ARD
practices, the methods used, and the final outcome with regard to requirements.
Table 4.3: Case Requirements Performance Related to ARD Practices Observed
In the cases, we can see that two systems (the surface assessment robot and the haptic avatar
controller) clearly met their requirements and performed significant portions of the anticipated
requirements tasks.
In two cases, understanding why the requirements were not satisfied requires some further
investigation into what types of requirements were or were not satisfied. In the Global Hawk
and autonomous helicopter projects, requirements satisfaction is a little less than "black and
white." In the Global Hawk case, the places where evidence that the acquisition organization
performed all the requirements practices is inconclusive coincide with the problems meeting
maintenance and sustainment requirements for the program. Global Hawk does show that where
adequate effort was undertaken (e.g., performance of the air vehicle and remote piloting
capability), the system delivered on those requirements adequately. For the autonomous helicopter,
the missed requirement was to deliver a follow-on proposal for additional work. However, given
that JPL hired the lead graduate student and that the USC lab moved away from autonomous
helicopters, the failure to address this requirement is neither surprising nor injurious to either
party (despite the requirement never being formally removed from the project). In both these
cases, requirements performance seems to have tracked how extensively the requirements
development activities were completed and the emphasis placed on those requirements with the developer.
In the case where inadequate requirements development was performed (the robotic garage),
the outcome was, as expected, that the requirements were not met. In this case, it should
have been a requirement of the transition that no patron lose access to their vehicle as a result
of the transition. Although not explicitly mentioned as a requirement, given the litigation, it is
clear that this is a requirement that should have been known, and it was acknowledged to have been
violated by the program. The notion of "not injuring parties uninvolved in the acquisition" seems
sufficiently self-evident as to be known practice for any engineer.
4.2.4 Suitability
From the case studies, the evidence of performance of AVER practices, the methods used, and
the final outcome with regard to suitability for the intended purpose is summarized in Table 4.4.
The cases provide a different insight into satisfaction than the raw survey analysis. In the
cases, we can see that performing all the practices for verification didn’t lead as clearly to a
suitable system. Even in the survey, one respondent indicated minimal satisfaction and strong
achievement of verification tasks. Here, we examine how the methods employed differed from
each other to try to tease out a difference.
Table 4.4: Case Suitability Related to AVER Practices Observed
Of the unsuitable projects, the surface assessment robot had an issue in which the available
development and integration test road was not representative of a new road surface. Although
with hindsight the engineers were able to see that the minor wear would eliminate most of the
deviations detected in an early assessment, those same engineers, with civil engineering licenses
and 20+ years of experience with concrete, did not envision that phenomenon in advance. Another
system, the PackBot controller, used developmental testing but did not perform testing in
operational environments or with operational scenarios. In both cases, the acquirer did not wish
to fund the additional work required to build up operationally relevant testing. For the surface
assessment robot, building a separate road for testing was out of the question. For the PackBot
controller, no funding was available to establish user tests or continue the work past the
two-semester development project that it was.
On the other hand, Global Hawk continued after the ACTD, which provided for operational-style
flight testing that helped to partially demonstrate military utility. The suitability of Global
Hawk is currently debated, but it is generally supported by its user base and continues to be
utilized and requested. The USC autonomous helicopter was very suitable and was flown in
flight conditions identical to those desired by JPL and the engineers in its robotics section.
4.2.5 Requirements and Suitability Disconnect
Principally, the disconnect between requirements and suitability was demonstrated in the robotic
vacuum mini-study, which showed an inability to generate a trade-off matrix of available
technical options (requirements) that would sufficiently differentiate between the different robotic
vacuum products. A number of the other cases also demonstrated either that requirements
satisfaction didn't lead to a suitable system (the CMU system and the PackBot controller) or that
failing to satisfy requirements didn't prevent a suitable system (the autonomous helicopter and,
in a conflicted way, the Global Hawk system).
The robotic vacuum mini-study considered 10 possible technical metrics and cost against a range of
satisfaction responses from users/clients, but failed to generate a good technical reason for
selecting one robot over another. Using linear regression over the parameter sets, with the same
method used to select parameters as in the survey, the best model found was to select for auto-return
after partially discounting the Trilobite and the RV-5000. This metric means that selection
should go to the Roomba 416 and 500 series. Essentially, this selection rule is a partition that
wasn't explicitly created. The result was unexpected because there would be no reason for users
to discount the auto-empty function for the bin while also giving no additional weighting to the
Roomba 530 (which offers a proper subset of the functions of the 560 but performed better in
terms of satisfaction). Interestingly, the price difference was found not to be a useful metric
for building a model to predict satisfaction. Other plausible, but less likely, trade-offs included
selecting any Roomba (note that only Roomba offers a programming interface and extensive
corporate-sponsored user forums) or selecting for battery type (preferring NiMH over NiCD). For
the battery model, it is noted that the two worst robots used NiCD batteries, while all other
robots used NiMH. However, the robots' duty cycles and charging cycles were not tightly coupled
to the type of battery and were not selected by the analysis method. It seems unlikely that a
user actually cared about the specific battery type while ignoring battery performance; more
likely, the regression chose the battery as a set partition due to the accidental association between
satisfaction and battery type. Thus, although there were relations to be formed, the resulting
engineering trade-offs do not make any conversational sense in a client or user context.
Looking at the two cases that didn't meet their acquisition requirements, a common trend of
unspecified benefits and tacit knowledge transfer seems to have been the path to suitability. In
the case of the autonomous helicopter, the technical reports alone didn’t give a full picture of how
to build and operate the autonomous helicopter. Only through the direct involvement of the lead
graduate student (and his eventual hire into JPL) was the technology successfully acquired. This
need for personalized knowledge transfer indicates that the technical reports must have lacked
some form of tacit knowledge that the researchers had but were unable to communicate. Given
their expertise and history with autonomous helicopters and participation in multi-university ef-
forts, it was unlikely that the researchers were merely unaware of an effective means to commu-
nicate. The lead graduate student indicated that the community for autonomous helicopters was
small, and many of the members shared their results typically by working on a joint project. In
this way, the researchers likely tacitly imparted important parameters of the project that may not
be present in their technical reports.
In a different way, Global Hawk simultaneously succeeded and failed. Validation of many of
the sustainment and affordability requirements is failing (according to government testing reports),
yet operational users continue to support the system. This operational support persists in spite of
the system not meeting some of its requirements. In this case, the failing requirements are mostly
related to non-operational issues (such as maintainability, reliability, availability, etc.); however,
this is not to be taken lightly. "Ility" requirements can very much drive the acceptance of a system
over its life cycle, for example in systems that fail more often or are less available than desired. So,
despite problems in the "-ilities," the users continue to be satisfied, indicating that some benefit
is overshadowing the potential "-ilities" issues and that this benefit is not being directly communicated.
In the cases of the surface assessment robot and the PackBot controller, we observed situations
where the requirements remained fairly stable and had buy-in from the acquisition stakeholders.
However, in both cases, organizational changes invalidated the assumptions under which those
requirements were valid. Thus, using Conway's Law (which states that a system's structure tends
to mirror the structure of the organization that produces it) (Brooks 1995), and the generally held
belief that developers tend to organize in a similar fashion to their acquirer, we can see that
this should have been identified as a risk, and the requirements and the success criteria that support
those requirements should have been revisited on these projects. What was surprising, however,
was the extent to which these projects were dropped due to this exception to Conway's Law. While
other work in value-based software engineering (Jain 2008) indicates that these mismatches can
cause a success model clash, it is not clear in the surface assessment robot's case that the success
model clash should have been insurmountable, especially given the recent solicitation for CMU
to develop a similar system again. Somehow, the surface assessment robot having acted in an
undeniable fashion in the real world appears to have accelerated or increased the impact of the
model clash.
Ultimately, we see in the cases the general trend that requirements satisfaction did not help
predict suitability, as was also observed in the survey. One cause in the cases was falling victim to
Conway's Law. Another possible cause, suggested by the lack of correlation between requirements
and satisfaction in the other cases, is that robotics may lack a requirements language sufficient
to express suitability or operational desires and parameters.
Chapter 5
Conclusions and Directions for Future Work
5.1 Evaluation of Hypotheses
5.1.1 Rejection of H0 (Engineering methods do not impact the success of a robotic
acquisition)
From the survey data showing the positive correlation of engineering factors with robotic system
suitability, schedule, and requirements satisfaction, we have evidence to reject the notion that
engineering does not impact acquisition success. Indeed, this result is supported by other work
in systems engineering showing that systems engineering practices can and do have a
significant impact on the success of a complex, technical acquisition program (Honour 2004).
5.1.2 Rejection of H1 (Engineering Methods are Not Success Critical)
From the trivial observation of the robotic garage case, we can see that some minor amount
of engineering analysis would have informed the city of the difficulties with its strategy and
provided at least for the evacuation of the patrons' vehicles before the impasse. Additionally, in
the Global Hawk case, numerous reports have indicated that more upfront engineering would have
prevented the schedule and cost problems currently being experienced by the program. For these
reasons, in addition to the impact of engineering completeness in the survey, we have evidence to
reject the hypothesis that engineering methods are not success critical for the acquisition of a
robotic system.
5.1.3 Rejection of H2 (Engineering Provides a Complete Practice for Robotics)
The lack of correlation between requirements and suitability indicates a lack of engineering
knowledge about how to specify robotic systems. This gap is a first-order problem, as without
adequate technical specification it is hard to perform many of the other engineering tasks or to
perform engineering trades between different robotic system concepts. The two cases that were
based on the most sizable requirements development activities were the least successful in being
suitable to the client. In the survey, the lack of strong or even moderate correlation between
requirements satisfaction and suitability of the final robot continues this trend. The tools engineers
have to specify a robot appear to be inadequate to specify a suitable robot. For this reason we
reject the hypothesis that engineering provides a complete practice to support the acquisition of
robotic systems.
5.1.4 Rejection of H3 (Lack of Robotic Engineering Methods)
Examining the acquisition verification methods employed in the cases, we observe that not all
engineering methods are equally applicable to robotic systems. At this time, the methods used
tend to be comprehensive in nature, preferring that only fully developed robots be tested in
actual situations during the development of the robot. Unfortunately, such activities are more
typical of validation than of in-progress testing. This raises the question of what types of
intermediate tests would have adequately informed the development and acquisition life cycle.
Given that Koen holds that engineers should have heuristics to operate in the face of imperfect
information, the question could instead be phrased as "what heuristic would substitute for
operating the full robot in the actual environment?" No such heuristic appears in the cases.
Further, we see that the use of typical "concept of operations" figures only moderately helps
robotic systems, and that the precision of the concept of operations with respect to its users is
very important to success.
By analogy, many other application system domains have their own methods. Aeronautical
engineering (see the various FAA engineering method and evaluation guides) and nuclear power
engineering (in which adherence to specifically generated Nuclear Regulatory Commission
methods, rather than industry standards, is of paramount importance) both extensively publish
their methods for engineers to study and adopt as they enter those fields. Their focus is on
methods that meet those domains' specific "big challenges," such as stability of flight, reliability,
and fail-safe behaviors. A comparable set of methods to support autonomy, interaction, and
other technical goals for dealing with robotic systems in unstructured environments would seem
to be appropriate, but does not exist.
Although these arguments do not fully show the existence of specific robotic engineering
methods, they do provide evidence that current engineering methods need to be informed about
robotic systems. This need, in turn, provides evidence to reject the hypothesis that there
are no engineering methods unique to robotic systems.
5.2 Feasibility Rationales for Robotic Systems Acquisition
Based on the survey and case studies, four feasibility rationales are proposed:
Changes in organizations destabilize success models
Success models are identified in (Jain 2007) and (Biffl 2006) and are the criteria by which
a stakeholder in the system either “wins” or “loses” (for example, Congress “wins” if a
project stays in budget but also “wins” if jobs are created in many states). Success mod-
els are tracked to ensure that needs are being satisfied while avoiding triggering a “lose”
condition for critical stakeholders.
When key stakeholders are added or change, these new individuals not only need to be
brought up to speed with decisions that have already been made on the project, but should
also be queried as to their own view of success for the project. This is done to ensure that
any new relevant success model is captured (per value-based software engineering practices
(Jain 2007)).
As can be seen in the surface assessment robot and the PackBot controller, the addition
of new stakeholders, even if not directly tied to the project, can sufficiently change
organizational goals as to require a different risk profile (as in the surface assessment case)
or lead the project to take different paths (as in the haptic avatar controller case). The
stability of the stakeholders in the autonomous helicopter case helps support this rationale,
as does the frequent training normally given to new acquisition staff in the Global Hawk
case to keep them focused on the current vision.
Conway's Law may have a greater than normal impact, since the robot may be alternately
viewed as an actor or a tool in the organizational context. In this way, the change of a
stakeholder may change the fundamental identity of the robot from a means to an actor in
the attribution of contributions or responsibilities.
This thesis strengthens the argument in favor of examining and affirming success models
on a regular basis, as has already been recommended by (Biffl 2006) and (Jain 2007).
The fidelity of test environment seems critical for sufficient verification
As an additional feasibility examination, considering how close the verification environment
is to the actual operational environment may give some insight into the potential
for the project to have problems. In the case of the surface assessment robot, the minor
disconnect between the test roadway and a new roadway was sufficient to cause numerous
problems with the fielding of the system. In the Global Hawk case, the verification
environments scaled up to full flight tests, a heritage point from the program
having been an ACTD.
Ultimately, the closer the test environment is to the actual environment, the better, as projects
have a poor track record of communicating environmental factors that may impact the
effort to design a suitable robot. The more a test environment diverges from the actual
environment, the more extensive the justification required as to why the differences are not
relevant to the robot's mission or how other tests will help control for those differences.
Verification with robots throughout the life cycle is critical for acceptance and suitability
In this case, multiple verification and test events need to be established with the acquisi-
tion, client, and user community to ensure that they will understand exactly what they are
acquiring. Without this activity being done, there is little likelihood that the stakeholders
will have sufficient events to become familiar with the system and verify that they have
identified the correct new roles that will result from employing the robot. In the case of the
PackBot controller, the prototype was largely used for technical verification and not oper-
ational utility verification with the stakeholders, which may have contributed to its lack of
acceptance.
Examining the Global Hawk case, the availability of the test units for operational tests in
real battle environments early in 2002 led to the groundswell of support from the operational
users. In the autonomous helicopter case, flying the helicopter on a regular
basis helped the JPL sponsor see the progress and tune their own related internal efforts to
be ready to receive the final system.
If sufficient real robot tests are not being seen frequently in the life cycle, an engineer may
want to question why the project believes it will be able to succeed where others fail. Like
many other rationales, this one isn’t an absolute statement of what must happen, but an
indicator that there may be problems.
Be wary of stable requirements, especially when those requirements appear critical
to operational suitability - formality of the requirements process or language may not
help
The addition of a robot to an operational environment is a disruptive event, as can be seen
from the lack of suitability of the surface assessment robot and the PackBot controller.
Clients and users will need time to understand how their roles will change with the new
robot. This will most likely lead to new requirements as the underlying success models of
the participants change, as new participants are identified who will have to interact with the
robot, and as people change roles. The Global Hawk case also applies here, as the changing
mission requirements of the system seem to be related to the continued operational user
support.
If the requirements experience no change, especially when working with users and clients
who are less familiar with robots, this may indicate a lack of flexibility in the client and
user population to adapt to the changing roles that will come once the robot starts performing its tasks.
Further evidence of this is the lack of relationship identified between requirements and
suitability in the survey and in the robotic vacuum mini-study. An engineer should worry that
overly static robot requirements show little in-progress learning about which requirements
are really needed for suitability purposes.
Ultimately, care would need to be taken to ensure that the rate of requirements changes does
not escalate to the point of being problematic, as happens with gold-plating of requirements
or scope creep. Although not shown in this study, it is likely that a large number of function
changes may be indicative of other problems. This feasibility rationale would have to be
used judiciously with other requirements-related rationales.
5.3 Future Work
5.3.0.1 Expand Project Pool
The rationales are based on only 24 projects, between the cases and the survey. Although the
methodology of this dissertation gives confidence that the results should be stable, an expanded
pool would be able to refine not only the presence of the rationales, but also give information
as to the magnitude of the cost, schedule, and performance impact of the methods that underlie
the rationales.
5.3.0.2 Robotic Engineering Body of Knowledge
Although this work gives some examples of methods that are commonly employed by engineers
on projects, as embodied in the rationales, this work is not sufficiently exhaustive to enumerate
the body of knowledge a robotics engineer should be capable of handling to effectively perform
their job. Many skills, such as the ability to calculate mean time between failures for integrated
systems or the ability to determine whether features of a certain magnitude can be observed in a
signal from a sensor, are not covered by this limited investigation. Both of these skills are likely
to be critical to the suitability of a robot to be employed by an organization or to be suitable for a task.
Ultimately, any sufficiently detailed work to create the body of knowledge would have to involve
many professionals of the field from industry and academia to ensure that practical and upcoming
topics are addressed.
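As a sketch of the two skills just mentioned (assuming independent components with exponentially distributed failures arranged in series, and a crude pixels-on-target detectability criterion; all numeric values are hypothetical, not drawn from this study):

```python
def series_mtbf(component_mtbfs):
    """MTBF of a series system of independent, exponential components.

    Failure rates add in series, so the system MTBF is the reciprocal
    of the summed component failure rates.
    """
    if any(m <= 0 for m in component_mtbfs):
        raise ValueError("component MTBFs must be positive")
    return 1.0 / sum(1.0 / m for m in component_mtbfs)

def feature_observable(feature_size_m, range_m, ifov_rad, pixels_needed=2):
    """Crude detectability check: does the feature span enough pixels?

    ifov_rad is the sensor's instantaneous field of view; pixels_needed
    approximates a Johnson-style criterion and is an assumption here.
    """
    ground_sample = range_m * ifov_rad  # meters covered by one pixel
    return feature_size_m >= pixels_needed * ground_sample

# Hypothetical robot: chassis 2000 h, computer 5000 h, sensor 3000 h
print(round(series_mtbf([2000, 5000, 3000]), 1))  # 967.7
print(feature_observable(0.5, 100.0, 0.001))      # True: 0.5 m >= 2 * 0.1 m
```

Note that the system MTBF falls well below that of its weakest component, which is one reason integrated-system reliability analysis matters for suitability judgments.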
Appendix A
Survey
A.1 Instrument
Each subsection indicates a different web page of the survey.
A.1.1 Acquisition Engineering Introduction
This survey is designed to elicit opinions and perceptions of completeness in performing engi-
neering analysis tasks from engineers and others involved in the acquisition of robotic systems.
Please note that this survey is not an assessment of your engineering knowledge or skill, nor an
assessment of any individual’s job performance.
Please note that there are 80 questions in this survey. You will need to press the "Next >>" button
at the bottom of each page to continue to the next question group. There are 5 pages in this
survey. The server will store your partial response after you press the "Next >>" button. If you
need to take a break, you can leave your browser open to the survey and resume it at any time. If
you close your browser before completing the survey, you will not be able to resume answering
questions; in this case, feel free to start the survey again (incomplete surveys will be dropped from
analysis). There is no time limit, nor is your response being timed.
If you are unfamiliar with acquisition, robotic systems acquisition is the set of practices and
methods used to procure and maintain a robotic system with a desired capability or to generate
a desired effect. For example, imagine you are a graduate student and your advisor asks you to
buy a robot that will be used as an integration platform for new sensor experiments. You are an
acquirer and would respond in this survey about the analysis that led you to select, specify, and
receive the robot from the manufacturer. You would not include the experience of performing
the sensor experiments with the robot (which would be the desired capability), as that is your
experience in using the system or performing independent development.
This survey is conducted in three parts: personal background, project background, and engi-
neering experiences.
In the first part, you will be asked questions about your background and general experience.
Questions will include your overall experience in the robotics and acquisition communities, as
well as information about your role with your company or organization.
The second part will focus on only your most recent robotic systems acquisition. In this
part, you will describe the robotic system and your experience with that acquisition. You may
have been involved in acquiring a whole robot system or in acquiring upgrades for existing robot
systems. The questions will focus on your opinion of the success of the acquisition, as well as
ask several questions about the type of robot.
The third and final part will ask about your perceived importance and experience performing
various types of engineering tasks for the specific acquisition you described in the second part. The
questions in this section are based on the Capability Maturity Model Integration (r) for Acquisi-
tion Organizations by the Software Engineering Institute.
As you may not have experience with all aspects of engineering covered in this survey or may
not feel comfortable answering a given question, the option of "No Answer" or "NA" is made
available for any question.
This survey will not collect any personal identification information. Your answers will be kept
anonymous from the researchers who will be analyzing your responses. Your individual response
will be kept confidential; however, summarized responses from the survey pool may be released
to the public. You may quit the survey at any time, for any reason, without your answers being
recorded before you press the submit survey button. After pressing the submit survey button,
your anonymous answers will be added to the set of responses.
If you are interested in discussing the subject of this survey with the researcher, or if you have
interesting stories about acquiring robots to share, you are encouraged to contact DeWitt Latimer
via email at dlatimer@usc.edu.
A.1.2 Background and General Experience Questions
In this section, you are being asked about your background with robotic systems, the scale of the
systems you have acquired, and some basic demographic information about your field of work,
education, and industry segment. Please select the best answer for your situation.
1. Where do you work?
State/Province:
Country:
2. How many different times have you acquired robots (not quantity of robots, but separate
purchase/contract events)? Prefer not to answer
0
1
2-4
5-10
11 or more
3. What was the unit cost (per robot cost) of your most expensive robot acquisition? NA or
prefer not to answer
$0-99
$100-499
$500-999
$1,000-5,000
$5,000-9,999
$10,000-99,999
$100,000-999,999
$1 million+
4. Which of the following would best describe your employer?
Government, non-defense related
Government, defense related
Academia
Commercial, non-defense/aerospace
Commercial, defense or aerospace
Non-profit
Other (please specify)
5. Which of the following would best describe your role in your organization?
Engineer
Supervising Engineer
Technician
Scientist
Supervising Scientist
Manager
Student
Faculty
Other (please specify)
6. How many years experience do you have with robotic systems? Prefer not to answer
0
1-3
4-9
10-15
16+
7. How many years experience do you have in acquisition? Prefer not to answer
0
1-3
4-9
10-15
16+
8. What is the highest level of education you have completed?
Not Completed High School or Equivalent
High School
Associate
Bachelor
Master
PhD
Professional (e.g. MD, JD)
Other (please specify)
9. Are you currently in progress towards a degree?
Yes
No
10. If so, what level of degree are you pursuing?
NA
High School
Associate
Bachelor
Master
PhD
Professional (e.g. MD, JD)
Other (please specify)
11. In what field is your highest degree (in pursuit or completed)?
Computer Science (including Software Engineering)
Mechanical Engineering
Electrical Engineering
Computer Engineering
Systems Engineering
Robotics
Other Engineering
Management
Other (please specify)
A.1.3 Acquisition Project
In this section, you are being asked about the most recent, completed robotic system acquisition or
your acquisition in progress (if you have not yet completed your first acquisition). By examining
the most recent project, we hope to avoid bias in project selections (such as only successful
projects or only projects with problems).
1. How many years ago was your most recent project completed?
Years (enter 0 for projects completed less than 12 months ago)
2. What was the per robot unit cost of your most recent robot acquisition? (at the time of
completion, or your best estimate if the acquisition is in progress) NA or prefer not to
answer
$0-99
$100-499
$500-999
$1,000-5,000
$5,000-9,999
$10,000-99,999
$100,000-999,999
$1 million+
3. Was a robot acquired as a result of this project?
Yes
No
In Progress
4. How many robots were acquired (or are intended to be acquired) as a result of this project?
(remember, robots may be complete systems, upgrade packages, unit systems, or whatever
the scope of the acquisition was)
0
1
2
3-4
5-9
10-19
20+
5. What is the anticipated lifespan of a robot system, once employed for its purpose? (Lifespan
is in robot-hours; this question asks how long the robot is expected to be powered on and
performing its desired purpose before a fatal failure may occur)
NA or prefer not to answer
Single-use up to 1 hour of usage
Single-use from 1 to 24 hours of usage
Single-use over 24 hours of usage
Multiple-uses up to 1 hour cumulative usage
Multiple-uses from 1 to 24 hours cumulative usage
Multiple-uses from 25 to 199 hours cumulative usage
Multiple-uses from 200 to 999 hours cumulative usage
Multiple-uses over 1000 hours cumulative usage
6. Did/does the acquisition include human-safety or other safety-critical factors?
Yes
No
NA or Not Sure
7. What level of autonomy did/should the robots exhibit?
NA
1 (Complete Human Operation, local or remote teleoperation)
2
3 (Human Executive Planning and unsupervised robot execution)
4
5 (Robot self-tasks and executes)
8. What is your opinion of the overall engineering effort performed to acquire the robot?
NA
No Effort Expended
Meager effort that was inadequate across the board
Fragmentary effort that was disconnected from itself or other management areas
Competent efforts that still had problems in the final system
Competent efforts that prevented any technical problems in the final system
9. Was this robot acquired as a commercial buy? (commercial buy is when an off-the-shelf or
catalog product is purchased)
Yes
No
NA or Not Sure
10. Which of the following statements best describes the strategy employed for the acquisition?
Single Deliverable Acquisition (once through on receipt of robot systems; may em-
ploy various development life cycle models such as spiral or incremental to achieve
that one deliverable system)
Incremental or Blocked Acquisition (known capability goals and known capability
end-state planned to be achieved over multiple, 2+, generations of robot systems)
Evolutionary or Spiral Acquisition (multiple generations of robots where successive
generations are informed by the experiences in the previous generations but the end
state is typically unknown)
NA (Not Sure)
Other (Describe)
11. What is your opinion of how well the robot met/will meet the technical specifications /
requirements that were supplied by the vendor or specified to the manufacturer?
NA
1 (Completely failed to meet specifications)
2 (Partially met specification, some desired functions possible)
3 (Partially met specifications, many desired functions available)
4 (Partially met specifications, all high-priority desired functions available)
5 (Met specifications)
12. What is your opinion of how well the robot was/will be acquired within budget?
NA
1 (Significantly over budget; exceeded margin by over 10%)
2 (Over budget; exceeded margin by up to 10%)
3 (Within budget margins)
4 (Under budget; under margin by up to 10%)
5 (Significantly under budget; under margin by over 10%)
13. What is your opinion of how well the robot was/will-be acquired on-time?
NA
1 (Significantly late; delays caused serious operational problems)
2 (Late; delay caused minor operational problems)
3 (Within schedule margin)
4 (Ahead of schedule; up to 10% ahead)
5 (Significantly ahead of schedule; over 10% ahead)
14. What is your opinion of the success of the robot, in terms of fitness for the desired purpose?
NA
1 (Inappropriate for environment; system unusable in desired environment)
2 (Inappropriate for purpose; system usable in environment but will not be utilized)
3 (Partially appropriate for purpose; system usable in environment but likely to re-
quire modifications to be utilized on a regular basis)
4 (Mostly appropriate for purpose; system will be utilized but not as effectively as
desired)
5 (Completely appropriate for purpose)
A.1.4 Engineering Area Questions
In the next three sections, we are asking a series of matched questions. The first question is
about your opinion of how important a type of engineering activity was for the acquisition you
described above. The second question is about your perception of the completeness of the efforts
undertaken, in terms of helping that acquisition. There is no correct answer, as the factors in
different acquisitions can vary widely. Further, there is no explicit or implied correlation between
questions; it is acceptable to indicate that an activity was important yet not completely effective,
or that an activity was unimportant but was successfully completed (in that case, it may be
successful even if only a small amount of effort was expended). You can also indicate that a task
had no effort expended.
For the first question, you will be asked to give your opinion on the importance of the en-
gineering tasks on a sliding scale. This scale ranges from Completely Unimportant to Critically
Important. How you rank the importance on this scale is project dependent, so you are asked
to put how important these tasks were to your specific acquisition’s environment (organization,
management, reviewers, engineers, etc.). Depending on your situation, it is acceptable not not
utilize the entire response range.
For the second question, you will be asked to give your opinion on the completeness of the
engineering effort for the given task. This scale is slightly more specific in its encoding. The
first option is No Effort Expended (NE), indicating that no work was done to perform the task
described. The second option is Meager (M), indicating that while some work was accomplished,
you feel the efforts were not adequate for the task described. The third option is Fragmentary
(F), indicating that while more work was accomplished, the work was not unified with project
efforts and may not have generated value. The fourth option is Competent but Incomplete (CI),
indicating that you believe responsible and competent work products were generated based on
knowledge and skill at the time, but the products were incomplete given the benefit of hindsight.
The fifth option is Strong (S), indicating that you believe responsible and competent work prod-
ucts were produced with no apparent defects. The final option is No Answer (NA), indicating
you do not feel comfortable answering the given question for any reason.
It is possible that your specific acquisition did not consider activities (either formally or infor-
mally) in some of these areas. You may perceive that the activity should be done by the developers
or by some other individuals. There is no problem with stating that some or many of these prac-
tices were "not important" and/or had "no effort expended" to achieve them. This study seeks to
understand what is done in practice by people who acquire robotic systems, and your report that
some engineering methods were not important is useful information.
Although questions are stated in a formal fashion, formality of your engineering method is
not required. It is possible for things to have been important and that you did significant effort,
but never generated formal reports or documents. Feel free to answer the questions based on the
importance and effectiveness of the work, conversations, and decisions made with respect to the
robotic system acquisition.
These questions ask how important and effective the efforts of the acquisition staff as a whole
were, not those of the developer/vendor of the robot. Depending on your acquisition’s organization,
you may not have directly participated in some of these methods. However, if you feel you
observed enough of what other people in the acquisition organization were doing, feel free to
answer the questions to the best of your observations.
Each question in this section has an optional comment field, in which you can add any infor-
mation that may help clarify your answer, or express any confusion over the question.
A.1.4.1 Acquisition Requirements Development
This section addresses various tasks related to how requirements are developed. Management
of requirements is not addressed explicitly in this section; however, if you feel that exceptionally
good or poor requirements management influenced the ability to perform some of these
requirements development tasks, feel free to include comments.
1. How important was collecting needs, expectations, constraints, and interfaces from stake-
holders for all phases of the product lifecycle in this robotic system acquisition? (stake-
holders include your customers, operators of the robot, engineers or co-workers who will
be integrating the robot into another system, etc.)
Completely Unimportant
Somewhat Unimportant
Important
Very Important
Critically Important
NA - No Answer
Comment
2. How complete do you feel the efforts were to collect needs, expectations, constraints, and
interfaces from stakeholders for all phases of the product lifecycle of the robotic system in
your most recent acquisition?
NE - No Effort Expended
M - Meager
F - Fragmentary
CI - Competent but Incomplete
S - Strong
NA - No Answer/Not Applicable
Comment
3. How important was transforming stakeholder needs, expectations, constraints, and inter-
faces into prioritized customer requirements in this robotic system acquisition? (e.g., cus-
tomer requirements as prioritized statements at the level of detail that your sponsor / super-
visor / management used to describe his/her acquisition wants)
Completely Unimportant
Somewhat Unimportant
Important
Very Important
Critically Important
NA - No Answer
Comment
4. How complete do you feel the efforts were to transform stakeholder needs, expectations,
constraints, and interfaces into prioritized customer requirements?
NE - No Effort Expended
M - Meager
F - Fragmentary
CI - Competent but Incomplete
S - Strong
NA - No Answer/Not Applicable
Comment
5. How important was establishing and maintaining contractual requirements in this robotic
system acquisition? (e.g. document technical requirements to fully specify the product you
desire to procure, add specifications and standards from industry sources, etc.)
Completely Unimportant
Somewhat Unimportant
Important
Very Important
Critically Important
NA - No Answer
Comment
6. How complete do you feel the efforts were to establish and maintain contractual require-
ments?
NE - No Effort Expended
M - Meager
F - Fragmentary
CI - Competent but Incomplete
S - Strong
NA - No Answer/Not Applicable
Comment
7. How important was allocating requirements to supplier deliverables in this robotic system
acquisition? (e.g. if you have multiple suppliers, or are buying multiple components from
the same supplier, do you have a map of how those requirements are spread between the
various products to be procured?)
Completely Unimportant
Somewhat Unimportant
Important
Very Important
Critically Important
NA - No Answer
Comment
8. How complete do you feel the efforts were to allocate requirements to supplier deliverables
in the robotic product acquisition?
NE - No Effort Expended
M - Meager
F - Fragmentary
CI - Competent but Incomplete
S - Strong
NA - No Answer/Not Applicable
Comment
9. How important was analyzing requirements to ensure they were necessary and sufficient in
this robotic system acquisition?
Completely Unimportant
Somewhat Unimportant
Important
Very Important
Critically Important
NA - No Answer
Comment
10. How complete do you feel the efforts were to analyze requirements to ensure they are
necessary and sufficient?
NE - No Effort Expended
M - Meager
F - Fragmentary
CI - Competent but Incomplete
S - Strong
NA - No Answer/Not Applicable
Comment
11. How important was establishing and maintaining operational concepts and associated sce-
narios for the robotic system in this acquisition? (e.g. creation of "usage cartoons", formal
capability descriptions, or descriptive use cases)
Completely Unimportant
Somewhat Unimportant
Important
Very Important
Critically Important
NA - No Answer
Comment
12. How complete do you feel the efforts were to establish and maintain operational concepts
and associated scenarios for the robotic system in this acquisition?
NE - No Effort Expended
M - Meager
F - Fragmentary
CI - Competent but Incomplete
S - Strong
NA - No Answer/Not Applicable
Comment
13. How important was analyzing requirements to balance stakeholder needs and constraints in
this robotic system acquisition? (e.g. assessment of requirements risks, using the results of
proven simulations/models/prototypes to analyze relations between competing stakeholder
needs)?
Completely Unimportant
Somewhat Unimportant
Important
Very Important
Critically Important
NA - No Answer
Comment
14. How complete do you feel the efforts were to analyze requirements to balance stakeholder
needs and constraints?
NE - No Effort Expended
M - Meager
F - Fragmentary
CI - Competent but Incomplete
S - Strong
NA - No Answer/Not Applicable
Comment
15. How important was validating requirements to ensure the resulting product will perform
as intended when in the user’s environment in this robotic system acquisition? (Example
techniques for validating requirements include using proven prototype methods or simu-
lations of the robot in relevant environments, comparison of the requirements with other
successful acquisitions, demonstrations, examining requirements to determine the risk that
the product may not perform appropriately in its intended-use environment, etc.)
Completely Unimportant
Somewhat Unimportant
Important
Very Important
Critically Important
NA - No Answer
Comment
16. How complete do you feel the efforts were to validate requirements to ensure the resulting
product will perform as intended when in the user’s environment?
NE - No Effort Expended
M - Meager
F - Fragmentary
CI - Competent but Incomplete
S - Strong
NA - No Answer/Not Applicable
Comment
17. Are there any other requirements development activities that you feel should have been
addressed by this survey that impacted the acquisition of your system?
No
Yes (Describe)
A.1.4.2 Acquisition Technical Management
This section addresses tasks performed in support of oversight, insight, collaboration,
and/or participation in the design and construction of the robotic system to be acquired.
Unlike the previous section, Acquisition Requirements Development, this section focuses
on how those requirements and specifications are used between you, as the acquirer, and
the developer to evaluate solutions and manage selected interfaces.
18. How important was selecting technical work products for analysis and the analysis
method used for each in this robotic system acquisition? (e.g. choosing the types of designs
or solutions to be evaluated, choosing methods for performing the evaluations, setting
criteria for the evaluations, etc.; an example would be reviewing production designs and
plans before committing resources to full-scale production)
Completely Unimportant
Somewhat Unimportant
Important
Very Important
Critically Important
NA - No Answer
Comment
19. How complete do you feel the efforts were to select technical work products for
analysis and the analysis method used for each?
NE - No Effort Expended
M - Meager
F - Fragmentary
CI - Competent but Incomplete
S - Strong
NA - No Answer/Not Applicable
Comment
20. How important was following through with the planned analysis of the selected supplier
technical work products in this robotic system acquisition?
Completely Unimportant
Somewhat Unimportant
Important
Very Important
Critically Important
NA - No Answer
Comment
21. How complete do you feel the efforts were in following through with the planned analysis
of the selected supplier technical work products?
NE - No Effort Expended
M - Meager
F - Fragmentary
CI - Competent but Incomplete
S - Strong
NA - No Answer/Not Applicable
Comment
22. How important was conducting technical reviews with the supplier in this robotic system
acquisition? (e.g. alternate system review, preliminary design reviews, critical design re-
view, test readiness review, physical configuration audit, etc.)
Completely Unimportant
Somewhat Unimportant
Important
Very Important
Critically Important
NA - No Answer
Comment
23. How complete do you feel the efforts were to conduct technical reviews with the supplier?
NE - No Effort Expended
M - Meager
F - Fragmentary
CI - Competent but Incomplete
S - Strong
NA - No Answer/Not Applicable
Comment
24. How important was selecting interfaces to be managed in this robotic system acquisition?
(e.g. selecting interfaces that impact the use of the system in the operational, support,
verification, validation environments, etc.)
Completely Unimportant
Somewhat Unimportant
Important
Very Important
Critically Important
NA - No Answer
Comment
25. How complete do you feel the efforts were to select interfaces to be managed?
NE - No Effort Expended
M - Meager
F - Fragmentary
CI - Competent but Incomplete
S - Strong
NA - No Answer/Not Applicable
Comment
26. How important was managing selected interfaces in this robotic system acquisition? (e.g.
periodic reviews and analysis of the interface definitions/designs, verifying that sufficient testing
is performed, resolving conflicts/noncompliance/changes, etc.)
Completely Unimportant
Somewhat Unimportant
Important
Very Important
Critically Important
NA - No Answer
Comment
27. How complete do you feel the efforts were to manage the selected interfaces?
NE - No Effort Expended
M - Meager
F - Fragmentary
CI - Competent but Incomplete
S - Strong
NA - No Answer/Not Applicable
Comment
28. Are there any other technical management activities that you feel should have been ad-
dressed by this survey that impacted the acquisition of your system?
No
Yes (Describe)
A.1.4.3 Acquisition Verification
Verification concerns the process of examining the result of a given activity to determine
conformity with the stated requirement for that activity. A system may be verified to meet
the stated requirements, yet be unsuitable for operation by the actual users (as articulated
in ISO 9000:2000). In other words, verification checks that the work product or system
meets its specifications, but does not check if those specifications are correct or will satisfy
the actual user’s needs.
29. How important was selecting aspects of the system for verification activities in this robotic
system acquisition? (e.g. select those aspects to verify for planning purposes)
Completely Unimportant
Somewhat Unimportant
Important
Very Important
Critically Important
NA - No Answer
Comment
30. How complete do you feel the efforts were to select aspects of the robotic system for
verification activities?
NE - No Effort Expended
M - Meager
F - Fragmentary
CI - Competent but Incomplete
S - Strong
NA - No Answer/Not Applicable
Comment
31. How important was establishing a verification environment in this robotic system acquisi-
tion? (e.g. testbed, calibration courses, test equipment, etc.)
Completely Unimportant
Somewhat Unimportant
Important
Very Important
Critically Important
NA - No Answer
Comment
32. How complete do you feel the efforts were to establish a verification environment?
NE - No Effort Expended
M - Meager
F - Fragmentary
CI - Competent but Incomplete
S - Strong
NA - No Answer/Not Applicable
Comment
33. How important was establishing verification procedures and criteria in this robotic system
acquisition? (e.g. Write test plans or test procedure, establish which variables are to be
measured in a verification task)
Completely Unimportant
Somewhat Unimportant
Important
Very Important
Critically Important
NA - No Answer
Comment
34. How complete do you feel the efforts were to establish verification procedures and criteria?
NE - No Effort Expended
M - Meager
F - Fragmentary
CI - Competent but Incomplete
S - Strong
NA - No Answer/Not Applicable
Comment
35. How important was performing all of the selected verification activities in this robotic
system acquisition?
Completely Unimportant
Somewhat Unimportant
Important
Very Important
Critically Important
NA - No Answer
Comment
36. How complete do you feel the efforts were to perform all of the selected verification activ-
ities?
NE - No Effort Expended
M - Meager
F - Fragmentary
CI - Competent but Incomplete
S - Strong
NA - No Answer/Not Applicable
Comment
37. How important was analyzing the results of the verification activities in this robotic system
acquisition?
Completely Unimportant
Somewhat Unimportant
Important
Very Important
Critically Important
NA - No Answer
Comment
38. How complete do you feel the efforts were to analyze the results of the verification activi-
ties?
NE - No Effort Expended
M - Meager
F - Fragmentary
CI - Competent but Incomplete
S - Strong
NA - No Answer/Not Applicable
Comment
39. How important was performing external verification reviews on selected work products?
(e.g. external to the project and/or outside the direct management chain; reviews such as
external audits, process assessments, standards compliance inspections, etc.)
Completely Unimportant
Somewhat Unimportant
Important
Very Important
Critically Important
NA - No Answer
Comment
40. How complete do you feel efforts were to perform external verification reviews on selected
work products?
NE - No Effort Expended
M - Meager
F - Fragmentary
CI - Competent but Incomplete
S - Strong
NA - No Answer/Not Applicable
Comment
41. Are there any other verification activities that you feel should have been addressed by this
survey that impacted the acquisition of your system?
No
Yes (Describe)
A.1.4.4 Acquisition Validation
Validation demonstrates that the system can be used by the users for their specific tasks
(as articulated in ISO 9000:2000). Validation is normally performed on final products under defined operat-
ing conditions. Validation may be necessary in earlier stages and on intermediary products.
Multiple validations may be carried out if there are different intended uses. In other words,
validation checks that a work product or system actually meets the user needs, regardless
if the work product or systems were verified to meet its specifications.
42. How important was selecting aspects of this robotic system for validation activities in this
acquisition? (e.g. select those needs or scenarios to validate for planning purposes)
Completely Unimportant
Somewhat Unimportant
Important
Very Important
Critically Important
NA - No Answer
Comment
43. How complete do you feel the efforts were to select aspects of the robotic system for
validation activities?
NE - No Effort Expended
M - Meager
F - Fragmentary
CI - Competent but Incomplete
S - Strong
NA - No Answer/Not Applicable
Comment
44. How important was establishing a validation environment in this robotic system acquisi-
tion? (e.g. testbed, calibration courses, test equipment, etc.)
Completely Unimportant
Somewhat Unimportant
Important
Very Important
Critically Important
NA - No Answer
Comment
45. How complete do you feel the efforts were to establish a validation environment?
NE - No Effort Expended
M - Meager
F - Fragmentary
CI - Competent but Incomplete
S - Strong
NA - No Answer/Not Applicable
Comment
46. How important was establishing validation procedures and criteria in this robotic system
acquisition? (e.g. Write test plans or test procedure, establish which variables are to be
measured in a validation task)
Completely Unimportant
Somewhat Unimportant
Important
Very Important
Critically Important
NA - No Answer
Comment
47. How complete do you feel the efforts were to establish validation procedures and criteria?
NE - No Effort Expended
M - Meager
F - Fragmentary
CI - Competent but Incomplete
S - Strong
NA - No Answer/Not Applicable
Comment
48. How important was performing all of the selected validation activities in this robotic system
acquisition?
Completely Unimportant
Somewhat Unimportant
Important
Very Important
Critically Important
NA - No Answer
Comment
49. How complete do you feel the efforts were to perform all of the selected validation activi-
ties?
NE - No Effort Expended
M - Meager
F - Fragmentary
CI - Competent but Incomplete
S - Strong
NA - No Answer/Not Applicable
Comment
50. How important was analyzing the results of the validation activities in this robotic system
acquisition?
Completely Unimportant
Somewhat Unimportant
Important
Very Important
Critically Important
NA - No Answer
Comment
51. How complete do you feel the efforts were to analyze the results of the validation activities?
NE - No Effort Expended
M - Meager
F - Fragmentary
CI - Competent but Incomplete
S - Strong
NA - No Answer/Not Applicable
Comment
52. How important was performing external validation reviews on selected work products?
(e.g. external to the project and/or outside the direct management chain; reviews such as
operational suitability assessments, expert technical reviews, fitness for purpose tests)
Completely Unimportant
Somewhat Unimportant
Important
Very Important
Critically Important
NA - No Answer
Comment
53. How complete do you feel efforts were to perform external validation reviews on selected
work products?
NE - No Effort Expended
M - Meager
F - Fragmentary
CI - Competent but Incomplete
S - Strong
NA - No Answer/Not Applicable
Comment
54. Are there any other validation activities that you feel should have been addressed by this
survey that impacted the acquisition of your system?
No
Yes (Describe)
A.1.5 Final Question
1. Are there any other engineering activities that you feel should have been addressed by this
survey that impacted the acquisition of your system?
No
Yes (Describe)
A.2 Invitations
A.2.1 Invitation to Academics
Sir or Ma’am (as appropriate),
I’m writing to invite you to take a survey describing experiences in doing technical analysis
in support of a robotic system acquisition/purchase. The goal of the survey is to determine what
kinds of engineering methods are utilized in support of efforts to identify, purchase, and then
employ a robotic system by the customers.
You can find the survey here: http://www.surveymonkey.com/s.aspx?sm=DJHL_2bYGWGbZHh2vxxAvNuw_3d_3d
Choosing the right robot to support a family of research projects can be very difficult. In this
case, we hope to identify what technical work is performed to support the acquisition of
base robot platforms that support advanced robotics research.
The survey is only 80 multiple choice questions, and should take no more than 30 minutes to
complete. Our goal is to collect responses until the end of December 2007.
Please consider taking this survey and sharing your experience as a customer who has ac-
quired and employed robots. If you know other individuals who have worked on customer-side
technical issues involving robotic systems, please feel free to send this survey link to them to take
as well.
If you have any questions about the survey, feel free to contact me via email (dlatimer@usc.edu)
or phone (310-722-8157).
Many kind regards,
–DeWitt "Tal" Latimer IV
PhD Candidate, Computer Science Department
University of Southern California
dlatimer@usc.edu / dlatimer@ieee.org
A.2.2 Invitation to Government Acquisition Professionals
Sir or Ma’am (as appropriate),
I’m writing to invite you to take a survey describing experiences in doing technical analysis
in support of a robotic system acquisition/purchase. The goal of the survey is to determine what
kinds of engineering methods are utilized in support of efforts to identify, purchase, and then
employ a robotic system by the customers.
You can find the survey here: http://www.surveymonkey.com/s.aspx?sm=DJHL_2bYGWGbZHh2vxxAvNuw_3d_3d
Choosing the right robot to perform a mission can be a difficult prospect when balancing cost,
schedule, and mission performance goals. In this case, we hope to identify what technical work
is performed to support the acquisition of the robotic system.
The survey is only 80 multiple choice questions, and should take no more than 30 minutes to
complete. Our goal is to collect responses until the end of December 2007.
Please consider taking this survey and sharing your experience as a customer who has ac-
quired and employed robots. If you know other individuals who have worked on customer-side
technical issues involving robotic systems, please feel free to send this survey link to them to take
as well.
If you have any questions about the survey, feel free to contact me via email (dlatimer@usc.edu)
or phone (310-722-8157).
Many kind regards,
–DeWitt "Tal" Latimer IV
PhD Candidate, Computer Science Department
University of Southern California
dlatimer@usc.edu / dlatimer@ieee.org
A.2.3 Invitation to Industry Professionals
Sir or Ma’am (as appropriate),
I’m writing to invite you to take a survey describing experiences in doing technical analysis
in support of a robotic system acquisition/purchase. The goal of the survey is to determine what
kinds of engineering methods are utilized in support of efforts to identify, purchase, and then
employ a robotic system by the customers.
You can find the survey here: http://www.surveymonkey.com/s.aspx?sm=DJHL_2bYGWGbZHh2vxxAvNuw_3d_3d
In industry, for example, choosing the right robot is a crucial business decision that can impact
the ability of the company to show profit and reliably perform its core missions. In this case, we
hope to identify what technical work is performed to support the acquisition of the robotic systems
that support commercial endeavors.
The survey is only 80 multiple choice questions, and should take no more than 30 minutes to
complete. Our goal is to collect responses until the end of December 2007.
Please consider taking this survey and sharing your experience as a customer who has ac-
quired and employed robots. If you know other individuals who have worked on customer-side
technical issues involving robotic systems, please feel free to send this survey link to them to take
as well.
If you have any questions about the survey, feel free to contact me via email (dlatimer@usc.edu)
or phone (310-722-8157).
Many kind regards,
–DeWitt "Tal" Latimer IV
PhD Candidate, Computer Science Department
University of Southern California
dlatimer@usc.edu / dlatimer@ieee.org
Appendix B
Survey Analysis
B.1 Labels
For convenience in tables and equations, the following labels are assigned to the survey
questions. Each item in the list below first gives the label in bold, followed by an equal sign (=).
To the right of the equal sign is a plain-language explanation of that parameter. For survey
questions, this includes the question page and number. For engineering questions, this also
includes a tracing back to the relevant CMMI-ACQ specific practice that the question concerns
(e.g. ARD 1.1 for the specific practice about eliciting stakeholder needs in the Acquisition
Requirements Development process group). Note that there are no labels X21 through X24;
those questions are labeled Y3 down to Y0 (respectively), owing to their being response variables.
X00 = Background, Question 1, Random ID assigned to the respondent by surveymon-
key.com
X01 = Background, Question 2, Number of Previous Purchase Events of Robots
X02 = Background, Question 3, Highest Ever Unit Cost of Robots Acquired
X03 = Background, Question 4, Employer Type
X04 = Background, Question 5, Organizational Role
X05 = Background, Question 6, Robotics Experience
X06 = Background, Question 7, Acquisition Experience
X07 = Background, Question 8, Highest Level of Education Completed
X08 = Background, Question 9, Currently enrolled as a student?
X09 = Background, Question 10, Degree Level (being sought)
X10 = Background, Question 11, Field of Study for Degree (completed or as student)
X11 = Acquisition Project, Question 1, Time Since Acquisition?
X12 = Acquisition Project, Question 2, Robot Unit Cost
X13 = Acquisition Project, Question 3, Was a Robot Acquired?
X14 = Acquisition Project, Question 4, Quantity of Robots
X15 = Acquisition Project, Question 5, Robot Life Span
X16 = Acquisition Project, Question 6, Safety Critical?
X17 = Acquisition Project, Question 7, Level of Autonomy
X18 = Acquisition Project, Question 8, Opinion of Overall Engineering Effort
X19 = Acquisition Project, Question 9, Commercial Buy?
X20 = Acquisition Project, Question 10, Acquisition Strategy Employed
Y3 = Acquisition Project, Question 11, Robot Met Requirements?
Y2 = Acquisition Project, Question 12, Robot Acquired On Budget?
Y1 = Acquisition Project, Question 13, Robot Acquired On Schedule?
Y0 = Acquisition Project, Question 14, Robot Fitness for Purpose
X25 = Engineering, Question 1, Importance of ARD 1.1
X26 = Engineering, Question 2, Completeness of ARD 1.1
X27 = Engineering, Question 3, Importance of ARD 1.2
X28 = Engineering, Question 4, Completeness of ARD 1.2
X29 = Engineering, Question 5, Importance of ARD 2.1
X30 = Engineering, Question 6, Completeness of ARD 2.1
X31 = Engineering, Question 7, Importance of ARD 2.2
X32 = Engineering, Question 8, Completeness of ARD 2.2
X33 = Engineering, Question 9, Importance of ARD 3.2
X34 = Engineering, Question 10, Completeness of ARD 3.2
X35 = Engineering, Question 11, Importance of ARD 3.1
X36 = Engineering, Question 12, Completeness of ARD 3.1
X37 = Engineering, Question 13, Importance of ARD 3.3
X38 = Engineering, Question 14, Completeness of ARD 3.3
X39 = Engineering, Question 15, Importance of ARD 3.4
X40 = Engineering, Question 16, Completeness of ARD 3.4
X41 = Engineering, Question 17, Other ARD practices?
X42 = Engineering, Question 18, Importance of ATM 1.1
X43 = Engineering, Question 19, Completeness of ATM 1.1
X44 = Engineering, Question 20, Importance of ATM 1.2
X45 = Engineering, Question 21, Completeness of ATM 1.2
X46 = Engineering, Question 22, Importance of ATM 1.3
X47 = Engineering, Question 23, Completeness of ATM 1.3
X48 = Engineering, Question 24, Importance of ATM 2.1
X49 = Engineering, Question 25, Completeness of ATM 2.1
X50 = Engineering, Question 26, Importance of ATM 2.2
X51 = Engineering, Question 27, Completeness of ATM 2.2
X52 = Engineering, Question 28, Other ATM practices?
X53 = Engineering, Question 29, Importance of AVER 1.1
X54 = Engineering, Question 30, Completeness of AVER 1.1
X55 = Engineering, Question 31, Importance of AVER 1.2
X56 = Engineering, Question 32, Completeness of AVER 1.2
X57 = Engineering, Question 33, Importance of AVER 1.3
X58 = Engineering, Question 34, Completeness of AVER 1.3
X59 = Engineering, Question 35, Importance of AVER 2.1
X60 = Engineering, Question 36, Completeness of AVER 2.1
X61 = Engineering, Question 37, Importance of AVER 2.2
X62 = Engineering, Question 38, Completeness of AVER 2.2
X63 = Engineering, Question 39, Importance of External Acquisition Verification
X64 = Engineering, Question 40, Completeness of External Acquisition Verification
X65 = Engineering, Question 41, Other AVER practices?
X66 = Engineering, Question 42, Importance of AVAL 1.1
X67 = Engineering, Question 43, Completeness of AVAL 1.1
X68 = Engineering, Question 44, Importance of AVAL 1.2
X69 = Engineering, Question 45, Completeness of AVAL 1.2
X70 = Engineering, Question 46, Importance of AVAL 1.3
X71 = Engineering, Question 47, Completeness of AVAL 1.3
X72 = Engineering, Question 48, Importance of AVAL 2.1
X73 = Engineering, Question 49, Completeness of AVAL 2.1
X74 = Engineering, Question 50, Importance of AVAL 2.2
X75 = Engineering, Question 51, Completeness of AVAL 2.2
X76 = Engineering, Question 52, Importance of External Acquisition Validation
X77 = Engineering, Question 53, Completeness of External Acquisition Validation
X78 = Engineering, Question 54, Other AVAL practices?
X79 = Final Question Page, Question 1, Other engineering activities for the acquisition of robots?
ARDIMP = Average response for importance of ARD practices ((X19 + X21 + X23 + X25
+ X27 + X29 + X31 + X33) / 8)
ARDCOM = Average response for completeness of ARD practices ((X20 + X22 + X24 +
X26 + X28 + X30 + X32 + X34) / 8)
ATMIMP = Average response for importance of ATM practices ((X35 + X37 + X39 + X41
+ X43) / 5)
ATMCOM = Average response for completeness of ATM practices ((X36 + X38 + X40 +
X42 + X44) / 5)
AVERIMP = Average response for importance of AVER practices ((X45 + X47 + X49 +
X51 + X53 + X55) / 6)
AVERCOM = Average response for completeness of AVER practices ((X46 + X48 + X50
+ X52 + X54 + X56) / 6)
AVALIMP = Average response for importance of AVAL practices ((X66 + X68 + X70 +
X72 + X74 + X76) / 6)
AVALCOM = Average response for completeness of AVAL practices ((X67 + X69 + X71
+ X73 + X75 + X77) / 6)
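The composite scores above are arithmetic means of standardized per-practice responses. Where a respondent marked a practice unknown, the varying N values in the summary tables below suggest averaging only the available responses; the helper below makes that assumption explicit (a sketch, not the original analysis code):

```python
from statistics import mean

def composite(responses):
    """Average the available per-practice responses, treating
    None as an unknown answer excluded from the mean (assumed
    handling; the formulas above divide by the full item count
    when all responses are present)."""
    known = [r for r in responses if r is not None]
    return mean(known) if known else None

# Hypothetical standardized responses for one respondent's
# eight ARD importance items (X19, X21, ..., X33)
print(composite([0.5, -0.2, 0.1, None, 0.3, 0.0, -0.4, 0.2]))
```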
B.2 Summary Statistics
This section provides summary statistics for the variables that were candidates for regression
analysis. The tables below give, for each variable, the number of respondents who answered it
(N), the average, standard deviation, minimum, median, and maximum values.
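Each row in the tables below can be reproduced from the raw responses with a small helper such as this sketch (None marks a missing response, which is excluded from N and all statistics):

```python
import statistics

def summarize(values):
    """Compute the columns reported below: N, average,
    standard deviation, and minimum / median / maximum."""
    vals = [v for v in values if v is not None]
    return {
        "N": len(vals),
        "average": statistics.mean(vals),
        "std_dev": statistics.stdev(vals),
        "min": min(vals),
        "median": statistics.median(vals),
        "max": max(vals),
    }

print(summarize([0.1, -0.5, 0.3, None, 0.7]))
```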
Data set = RAESurveyAll, Summary Statistics
Variable N Average Std Dev
ARDCOM 18 -0.031111 0.81815
ARDIMP 18 0.00055556 0.69409
ATMCOM 18 -0.039444 0.74397
ATMIMP 18 -0.013889 0.73996
AVALCOM 18 0.027778 0.83003
AVALIMP 18 -0.025 0.81474
AVERCOM 17 0.020588 0.80569
AVERIMP 18 -0.022778 0.85357
X01 17 -0.00000 1.0005
X05 18 -0.00000 0.99823
X06 18 -0.0011111 1.0008
X12 16 0.00000 1.0001
X13 18 -0.0038889 1.0015
X14 18 -0.0011111 1.0022
X15 16 -0.0025 0.99893
X16 18 -0.0022222 1.0033
X17 18 -0.00000 1.0009
X18 18 0.0016667 1.0011
X19 18 0.0022222 1.0033
X20 17 0.0017647 1.0016
Y0 17 0.00000 0.99817
Y1 17 -0.0017647 0.99977
Y2 16 -0.00375 0.99757
Y3 17 -0.00058824 0.99721
Variable Minimum Median Maximum
ARDCOM -1.9 -0.02 1.1
ARDIMP -1.27 0.13 0.97
ATMCOM -1.28 0.13 1.01
ATMIMP -1.56 -0.02 1.2
AVALCOM -2.11 0.31 1.
AVALIMP -1.95 0.155 1.28
AVERCOM -1.66 -0.21 1.17
AVERIMP -1.96 -0.08 1.38
X01 -2.13 -0.06 2.01
X05 -1.1 0 2.2
X06 -1.5 -0.08 1.34
X12 -1.78 0.29 0.81
X13 -0.41 -0.41 3.24
X14 -1.26 -0.71 2.06
X15 -2.12 0.14 0.89
X16 -0.78 -0.78 1.22
X17 -1.68 0.21 2.1
X18 -1.93 0.39 1.16
X19 -1.22 0.78 0.78
X20 -1.71 -0.74 1.2
Y0 -1.63 0 1.63
Y1 -1.84 0.66 0.66
Y2 -3.33 0.34 0.34
Y3 -0.95 0.13 1.2
B.3 Response Correlations
This section provides the response correlations across the variables, computed over the respondents.
B.3.1 Response Correlations between X18 and Engineering Practice Completeness
and Importance
The following table provides the response correlations between the X18 “Overall Engineering
Completeness” question and the completeness and importance of the several engineering process
areas. Several respondents left many practices unknown, meaning the process-area average was
heavily weighted toward only one practice. A second set of correlation tables (and summary
statistics) that excludes respondents who did not give full responses across the practices follows.
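The entries in these tables are sample Pearson correlations; the “Computed from N complete cases” notes indicate that cases missing any value in the table were dropped before computing. A minimal two-variable sketch (the tables delete cases listwise across all variables shown, which this pairwise helper only approximates):

```python
from math import sqrt

def pearson(xs, ys):
    """Sample Pearson correlation over pairs where both values
    are present (None marks a missing response)."""
    pairs = [(x, y) for x, y in zip(xs, ys)
             if x is not None and y is not None]
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    sxx = sum((x - mx) ** 2 for x, _ in pairs)
    syy = sum((y - my) ** 2 for _, y in pairs)
    return sxy / sqrt(sxx * syy)

# The incomplete third pair is dropped; the rest lie on y = 2x
print(pearson([1, 2, None, 3], [2, 4, 5, 6]))  # 1.0
```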
Data set = RAESurveyAll, Sample Correlations
ARDCOM 1.0000 0.5894 0.4214 0.3733 0.8244 0.6010 0.7622
ARDIMP 0.5894 1.0000 0.4255 0.8039 0.6758 0.7826 0.5774
ATMCOM 0.4214 0.4255 1.0000 0.4434 0.4916 0.5068 0.3976
ATMIMP 0.3733 0.8039 0.4434 1.0000 0.4797 0.7028 0.3749
AVALCOM 0.8244 0.6758 0.4916 0.4797 1.0000 0.7790 0.9241
AVALIMP 0.6010 0.7826 0.5068 0.7028 0.7790 1.0000 0.6794
AVERCOM 0.7622 0.5774 0.3976 0.3749 0.9241 0.6794 1.0000
AVERIMP 0.4525 0.7190 0.2673 0.6820 0.6433 0.8798 0.6483
X18 0.0649 0.5548 0.4776 0.7407 0.2503 0.5896 0.0920
ARDCOM ARDIMP ATMCOM ATMIMP AVALCOM AVALIMP AVERCOM
ARDCOM 0.4525 0.0649
ARDIMP 0.7190 0.5548
ATMCOM 0.2673 0.4776
ATMIMP 0.6820 0.7407
AVALCOM 0.6433 0.2503
AVALIMP 0.8798 0.5896
AVERCOM 0.6483 0.0920
AVERIMP 1.0000 0.4942
X18 0.4942 1.0000
AVERIMP X18
Computed from 17 complete cases
Data set = RAESurveyAll, Summary Statistics
Deleted cases are
(555438661 543663840 549890748 543132264 543109967)
Variable N Average Std Dev
ARDCOM 13 0.10538 0.73988
ARDIMP 13 -0.0084615 0.71818
ATMCOM 13 -0.0015385 0.7521
ATMIMP 13 -0.14769 0.79203
AVALCOM 13 0.016923 0.93472
AVALIMP 13 -0.11308 0.85734
AVERCOM 13 0.00076923 0.88752
AVERIMP 13 -0.06 0.63565
X18 13 0.091538 0.97472
Variable Minimum Median Maximum
ARDCOM -1.53 0.07 1.1
ARDIMP -1.27 0.29 0.79
ATMCOM -1.28 0.05 1.01
ATMIMP -1.56 -0.11 1.2
AVALCOM -2.11 0.34 1.
AVALIMP -1.95 0.15 1.04
AVERCOM -1.66 -0.21 1.17
AVERIMP -1.2 -0.09 1.28
X18 -1.93 0.39 1.16
Data set = RAESurveyAll, Sample Correlations
Deleted cases are
(555438661 543663840 549890748 543132264 543109967)
ARDCOM 1.0000 0.5704 0.6040 0.3346 0.8288 0.5837 0.7893
ARDIMP 0.5704 1.0000 0.6574 0.7821 0.6956 0.7587 0.5957
ATMCOM 0.6040 0.6574 1.0000 0.6541 0.6145 0.7681 0.4624
ATMIMP 0.3346 0.7821 0.6541 1.0000 0.4639 0.6637 0.3531
AVALCOM 0.8288 0.6956 0.6145 0.4639 1.0000 0.7983 0.9237
AVALIMP 0.5837 0.7587 0.7681 0.6637 0.7983 1.0000 0.6832
AVERCOM 0.7893 0.5957 0.4624 0.3531 0.9237 0.6832 1.0000
AVERIMP 0.4574 0.7510 0.5138 0.6862 0.6862 0.8807 0.6738
X18 -0.0678 0.5213 0.6812 0.7307 0.1645 0.5421 0.0091
ARDCOM ARDIMP ATMCOM ATMIMP AVALCOM AVALIMP AVERCOM
ARDCOM 0.4574 -0.0678
ARDIMP 0.7510 0.5213
ATMCOM 0.5138 0.6812
ATMIMP 0.6862 0.7307
AVALCOM 0.6862 0.1645
AVALIMP 0.8807 0.5421
AVERCOM 0.6738 0.0091
AVERIMP 1.0000 0.4471
X18 0.4471 1.0000
AVERIMP X18
Computed from 13 complete cases
B.3.2 Correlations to Outcome Variables
This section provides response correlations to the outcome variables (meeting requirements,
meeting schedule, meeting budget, and being fit for purpose/suitable). Because several respondents
did not provide responses for every outcome variable, separate tables are presented to give the
correlations for each response variable.
Data set = RAESurveyAll, Sample Correlations
Y0 1.0000 -0.2659 -0.0000 0.4541
Y1 -0.2659 1.0000 -0.2005 0.1425
Y2 -0.0000 -0.2005 1.0000 -0.2956
Y3 0.4541 0.1425 -0.2956 1.0000
Y0 Y1 Y2 Y3
Computed from 14 complete cases
Data set = RAESurveyAll, Sample Correlations
Y0 1.0000 0.4401
Y3 0.4401 1.0000
Y0 Y3
Computed from 17 complete cases
Data set = RAESurveyAll, Sample Correlations
ARDCOM 1.0000 0.5442 0.4262 0.3488 0.8352 0.6040 0.7547
ARDIMP 0.5442 1.0000 0.4384 0.8128 0.6898 0.8087 0.5596
ATMCOM 0.4262 0.4384 1.0000 0.4418 0.4899 0.5052 0.3960
ATMIMP 0.3488 0.8128 0.4418 1.0000 0.4726 0.7005 0.3581
AVALCOM 0.8352 0.6898 0.4899 0.4726 1.0000 0.7772 0.9258
AVALIMP 0.6040 0.8087 0.5052 0.7005 0.7772 1.0000 0.6775
AVERCOM 0.7547 0.5596 0.3960 0.3581 0.9258 0.6775 1.0000
AVERIMP 0.4558 0.7495 0.2654 0.6822 0.6418 0.8796 0.6494
Y0 0.5626 0.3882 0.4380 0.3524 0.4808 0.4761 0.3307
Y3 0.4799 -0.0142 -0.1487 0.0822 0.1267 -0.0449 0.1244
ARDCOM ARDIMP ATMCOM ATMIMP AVALCOM AVALIMP AVERCOM
ARDCOM 0.4558 0.5626 0.4799
ARDIMP 0.7495 0.3882 -0.0142
ATMCOM 0.2654 0.4380 -0.1487
ATMIMP 0.6822 0.3524 0.0822
AVALCOM 0.6418 0.4808 0.1267
AVALIMP 0.8796 0.4761 -0.0449
AVERCOM 0.6494 0.3307 0.1244
AVERIMP 1.0000 0.2961 -0.0900
Y0 0.2961 1.0000 0.4540
Y3 -0.0900 0.4540 1.0000
AVERIMP Y0 Y3
Computed from 16 complete cases
Data set = RAESurveyAll, Sample Correlations
X01 1.0000 0.3555 0.2232 0.3173
X05 0.3555 1.0000 0.2876 -0.1135
X06 0.2232 0.2876 1.0000 -0.0000
Y0 0.3173 -0.1135 -0.0000 1.0000
X01 X05 X06 Y0
Computed from 16 complete cases
Data set = RAESurveyAll, Sample Correlations
X12 1.0000 -0.4719 -0.0351 0.3653 -0.7348 -0.3940 0.6398
X13 -0.4719 1.0000 0.5289 0.2623 0.3273 0.1047 -0.4795
X14 -0.0351 0.5289 1.0000 -0.2359 0.1679 0.2130 -0.2978
X15 0.3653 0.2623 -0.2359 1.0000 -0.3574 -0.4365 0.4119
X16 -0.7348 0.3273 0.1679 -0.3574 1.0000 0.0528 -0.7335
X17 -0.3940 0.1047 0.2130 -0.4365 0.0528 1.0000 -0.1500
X18 0.6398 -0.4795 -0.2978 0.4119 -0.7335 -0.1500 1.0000
X19 0.4724 -0.2857 0.0207 0.3245 -0.3273 -0.1042 0.5491
X20 0.0465 -0.2352 -0.1068 -0.2166 0.0299 -0.1585 -0.0001
Y0 0.2727 -0.0000 0.1770 -0.0011 -0.4303 -0.1029 0.0818
X12 X13 X14 X15 X16 X17 X18
X12 0.4724 0.0465 0.2727
X13 -0.2857 -0.2352 -0.0000
X14 0.0207 -0.1068 0.1770
X15 0.3245 -0.2166 -0.0011
X16 -0.3273 0.0299 -0.4303
X17 -0.1042 -0.1585 -0.1029
X18 0.5491 -0.0001 0.0818
X19 1.0000 0.2352 0.2113
X20 0.2352 1.0000 -0.2319
Y0 0.2113 -0.2319 1.0000
X19 X20 Y0
Computed from 15 complete cases
Data set = RAESurveyAll, Sample Correlations
X01 -0.0315
X05 -0.3032
X06 -0.4433
Y1 1.0000
Y1
Computed from 16 complete cases
Data set = RAESurveyAll, Sample Correlations
X12 -0.4437
X13 0.1737
X14 -0.1941
X15 0.1547
X16 0.3411
X17 -0.0724
X18 -0.1184
X19 -0.0496
X20 0.0613
Y1 1.0000
Y1
Computed from 15 complete cases
Data set = RAESurveyAll, Sample Correlations
X01 0.0040
X05 -0.1495
X06 -0.0937
Y3 1.0000
Y3
Computed from 16 complete cases
Data set = RAESurveyAll, Sample Correlations
X12 -0.0592
X13 0.3068
X14 0.1395
X15 -0.0153
X16 0.0581
X17 -0.3799
X18 -0.1474
X19 -0.0672
X20 0.1426
Y3 1.0000
Y3
Computed from 15 complete cases
B.4 Regression for Meeting Requirements (Y3)
A full linear regression on all averaged engineering factors was created first; submodels were
then examined both by adding the most appropriate factors to the null model and by deleting the
least useful factors from the full model. Deletion generated the more effective model. Y3 can
be predicted with a linear regression on ARDCOM, ATMIMP, and AVALIMP (p-value 0.003,
R-squared 0.738, 14 cases used after eliminating the middle Y3 cases), with a significant negative
coefficient for AVALIMP and positive coefficients for ARDCOM and ATMIMP.
B.4.1 Y3, Full Engineering Factors, and Submodel Considerations
This subsection contains the regression for predicting Y3 with all the engineering areas, as
well as Mallows' statistics for adding parameters to the null model and deleting parameters from
the full model.
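The C_I column in the selection tables that follow is Mallows' C_p, computed from the residual sum of squares of a candidate submodel against the residual mean square of the full model. The sketch below reproduces one tabulated entry (values taken from the full Y3 engineering-factor fit: residual MS 0.594637, 16 cases used):

```python
def mallows_cp(rss_sub, k_sub, sigma2_full, n):
    """Mallows' C_p for a submodel with k_sub parameters
    (including the intercept): RSS / sigma^2_full - n + 2k.
    The full model scores C_p = k by construction."""
    return rss_sub / sigma2_full - n + 2 * k_sub

# Adding only ARDCOM to the intercept: RSS = 11.509, k = 2
print(round(mallows_cp(11.509, 2, 0.594637, 16), 3))  # 7.355, matching the table
```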
Data set = RAESurveyAll, Name of Fit = L1
Normal Regression
Kernel mean function = Identity
Response = Y3
Terms = (ARDCOM ARDIMP ATMCOM ATMIMP AVALCOM AVALIMP
AVERCOM AVERIMP)
Cases not used and missing at least one value are:
(559609609 543132264)
Coefficient Estimates
Label Estimate Std. Error t-value p-value
Constant -0.0198936 0.227281 -0.088 0.9327
ARDCOM 1.79717 0.538829 3.335 0.0125
ARDIMP -0.581347 0.712051 -0.816 0.4411
ATMCOM -0.674197 0.363130 -1.857 0.1057
ATMIMP 1.00462 0.519660 1.933 0.0945
AVALCOM -1.21464 1.03277 -1.176 0.2780
AVALIMP 0.361790 0.802580 0.451 0.6658
AVERCOM 0.701062 0.872491 0.804 0.4481
AVERIMP -0.943934 0.837522 -1.127 0.2969
R Squared: 0.72163
Sigma hat: 0.771127
Number of cases: 18
Number of cases used: 16
Degrees of freedom: 7
Summary Analysis of Variance Table
Source df SS MS F p-value
Regression 8 10.7905 1.34881 2.27 0.1485
Residual 7 4.16246 0.594637
Data set = RAESurveyAll, Name of Fit = L1
Normal Regression
Kernel mean function = Identity
Response = Y3
Terms = (ARDCOM ARDIMP ATMCOM ATMIMP AVALCOM AVALIMP
AVERCOM AVERIMP)
Cases not used and missing at least one value are:
(559609609 543132264)
Forward Selection: Sequentially add terms
that minimize the value of C_I.
All fits include an intercept.
Base terms: Intercept
df RSS | k C_I
Add: ARDCOM 14 11.509 | 2 7.355
Add: ATMCOM 14 14.6222 | 2 12.590
Add: AVALCOM 14 14.7127 | 2 12.742
Add: AVERCOM 14 14.7216 | 2 12.757
Add: AVERIMP 14 14.8319 | 2 12.943
Add: ATMIMP 14 14.8518 | 2 12.976
Add: AVALIMP 14 14.9229 | 2 13.096
Add: ARDIMP 14 14.95 | 2 13.141
Base terms: (ARDCOM)
df RSS | k C_I
Add: AVALCOM 13 7.79417 | 3 3.107
Add: AVALIMP 13 8.87151 | 3 4.919
Add: ATMCOM 13 9.22816 | 3 5.519
Add: AVERCOM 13 9.54495 | 3 6.052
Add: AVERIMP 13 9.70994 | 3 6.329
Add: ARDIMP 13 9.89772 | 3 6.645
Add: ATMIMP 13 11.3856 | 3 9.147
Base terms: (ARDCOM AVALCOM)
df RSS | k C_I
Add: ATMCOM 12 6.7345 | 4 3.325
Add: AVALIMP 12 7.49996 | 4 4.613
Add: AVERIMP 12 7.6572 | 4 4.877
Add: ATMIMP 12 7.67219 | 4 4.902
Add: ARDIMP 12 7.68316 | 4 4.921
Add: AVERCOM 12 7.69907 | 4 4.948
Base terms: (ARDCOM AVALCOM ATMCOM)
df RSS | k C_I
Add: ATMIMP 11 6.29602 | 5 4.588
Add: AVERIMP 11 6.53999 | 5 4.998
Add: AVALIMP 11 6.63805 | 5 5.163
Add: ARDIMP 11 6.70628 | 5 5.278
Add: AVERCOM 11 6.71677 | 5 5.296
Base terms: (ARDCOM AVALCOM ATMCOM ATMIMP)
df RSS | k C_I
Add: AVERIMP 10 5.20122 | 6 4.747
Add: ARDIMP 10 5.25611 | 6 4.839
Add: AVALIMP 10 5.59811 | 6 5.414
Add: AVERCOM 10 6.21737 | 6 6.456
Base terms: (ARDCOM AVALCOM ATMCOM ATMIMP AVERIMP)
df RSS | k C_I
Add: ARDIMP 9 4.54863 | 7 5.649
Add: AVERCOM 9 4.66739 | 7 5.849
Add: AVALIMP 9 5.19203 | 7 6.731
Base terms: (ARDCOM AVALCOM ATMCOM ATMIMP AVERIMP ARDIMP)
df RSS | k C_I
Add: AVERCOM 8 4.28329 | 8 7.203
Add: AVALIMP 8 4.54638 | 8 7.646
Base terms: (ARDCOM AVALCOM ATMCOM ATMIMP AVERIMP ARDIMP AVERCOM)
df RSS | k C_I
Add: AVALIMP 7 4.16246 | 9 9.000
Data set = RAESurveyAll, Name of Fit = L1
Normal Regression
Kernel mean function = Identity
Response = Y3
Terms = (ARDCOM ARDIMP ATMCOM ATMIMP AVALCOM AVALIMP
AVERCOM AVERIMP)
Cases not used and missing at least one value are:
(559609609 543132264)
Backward Elimination: Sequentially remove terms
that give the smallest change in C_I.
All fits include an intercept.
Current terms: (ARDCOM ARDIMP ATMCOM ATMIMP AVALCOM AVALIMP
AVERCOM AVERIMP)
df RSS | k C_I
Delete: AVALIMP 8 4.28329 | 8 7.203
Delete: AVERCOM 8 4.54638 | 8 7.646
Delete: ARDIMP 8 4.55883 | 8 7.667
Delete: AVERIMP 8 4.9178 | 8 8.270
Delete: AVALCOM 8 4.98497 | 8 8.383
Delete: ATMCOM 8 6.21221 | 8 10.447
Delete: ATMIMP 8 6.38485 | 8 10.737
Delete: ARDCOM 8 10.7774 | 8 18.124
Current terms: (ARDCOM ARDIMP ATMCOM ATMIMP AVALCOM AVERCOM AVERIMP)
df RSS | k C_I
Delete: AVERCOM 9 4.54863 | 7 5.649
Delete: ARDIMP 9 4.66739 | 7 5.849
Delete: AVALCOM 9 5.01224 | 7 6.429
Delete: AVERIMP 9 5.24691 | 7 6.824
Delete: ATMCOM 9 6.31923 | 7 8.627
Delete: ATMIMP 9 6.44599 | 7 8.840
Delete: ARDCOM 9 10.8903 | 7 16.314
Current terms: (ARDCOM ARDIMP ATMCOM ATMIMP AVALCOM AVERIMP)
df RSS | k C_I
Delete: AVALCOM 10 5.08355 | 6 4.549
Delete: ARDIMP 10 5.20122 | 6 4.747
Delete: AVERIMP 10 5.25611 | 6 4.839
Delete: ATMIMP 10 6.52982 | 6 6.981
Delete: ATMCOM 10 6.59363 | 6 7.088
Delete: ARDCOM 10 11.0444 | 6 14.573
Current terms: (ARDCOM ARDIMP ATMCOM ATMIMP AVERIMP)
df RSS | k C_I
Delete: ARDIMP 11 6.21321 | 5 4.449
Delete: AVERIMP 11 6.53966 | 5 4.998
Delete: ATMIMP 11 7.74774 | 5 7.029
Delete: ATMCOM 11 7.92594 | 5 7.329
Delete: ARDCOM 11 13.5452 | 5 16.779
Current terms: (ARDCOM ATMCOM ATMIMP AVERIMP)
df RSS | k C_I
Delete: ATMIMP 12 7.75756 | 4 5.046
Delete: AVERIMP 12 9.19495 | 4 7.463
Delete: ATMCOM 12 9.314 | 4 7.663
Delete: ARDCOM 12 13.5468 | 4 14.782
Current terms: (ARDCOM ATMCOM AVERIMP)
df RSS | k C_I
Delete: AVERIMP 13 9.22816 | 3 5.519
Delete: ATMCOM 13 9.70994 | 3 6.329
Delete: ARDCOM 13 14.5811 | 3 14.521
Current terms: (ARDCOM ATMCOM)
df RSS | k C_I
Delete: ATMCOM 14 11.509 | 2 7.355
Delete: ARDCOM 14 14.6222 | 2 12.590
B.4.2 Y3, Environmental Factors, and Submodel Considerations
This subsection contains the regression for predicting Y3 with all the environmental factors,
as well as Mallows' statistics for adding parameters to the null model and deleting parameters
from the full model.
Data set = RAESurveyAll, Name of Fit = L2
Normal Regression
Kernel mean function = Identity
Response = Y3
Terms = (X01 X05 X06 X12 X13 X14 X15 X16 X17 X18 X19 X20)
Cases not used and missing at least one value are:
(549853339 549317657 559609609)
Coefficient Estimates
Label Estimate Std. Error t-value p-value
Constant 1.09152 0.403515 2.705 0.1138
X01 -1.11487 0.388252 -2.872 0.1029
X05 -0.0356303 0.307101 -0.116 0.9182
X06 -0.847106 0.309472 -2.737 0.1116
X12 0.747254 0.478471 1.562 0.2587
X13 3.51491 1.12160 3.134 0.0885
X14 -0.850294 0.418678 -2.031 0.1794
X15 -1.52155 0.414133 -3.674 0.0667
X16 -0.972253 0.445058 -2.185 0.1605
X17 -1.56890 0.248155 -6.322 0.0241
X18 -0.550974 0.381906 -1.443 0.2859
X19 1.03157 0.343783 3.001 0.0954
X20 0.152419 0.173523 0.878 0.4724
R Squared: 0.974734
Sigma hat: 0.434552
Number of cases: 18
Number of cases used: 15
Degrees of freedom: 2
Summary Analysis of Variance Table
Source df SS MS F p-value
Regression 12 14.5699 1.21416 6.43 0.1423
Residual 2 0.377671 0.188835
Data set = RAESurveyAll, Name of Fit = L2
Normal Regression
Kernel mean function = Identity
Response = Y3
Terms = (X01 X05 X06 X12 X13 X14 X15 X16 X17 X18 X19 X20)
Cases not used and missing at least one value are:
(549853339 549317657 559609609)
Forward Selection: Sequentially add terms
that minimize the value of C_I.
All fits include an intercept.
Base terms: Intercept
df RSS | k C_I
Add: X17 13 12.7897 | 2 56.730
Add: X13 13 13.5404 | 2 60.705
Add: X05 13 14.615 | 2 66.395
Add: X18 13 14.6229 | 2 66.437
Add: X20 13 14.6434 | 2 66.546
Add: X14 13 14.6567 | 2 66.616
Add: X06 13 14.8218 | 2 67.490
Add: X19 13 14.8801 | 2 67.799
Add: X12 13 14.8953 | 2 67.880
Add: X16 13 14.8971 | 2 67.890
Add: X01 13 14.943 | 2 68.133
Add: X15 13 14.944 | 2 68.138
Base terms: (X17)
df RSS | k C_I
Add: X05 12 10.7631 | 3 47.997
Add: X13 12 10.9742 | 3 49.115
Add: X01 12 11.6836 | 3 52.872
Add: X12 12 12.0179 | 3 54.642
Add: X06 12 12.0211 | 3 54.659
Add: X14 12 12.029 | 3 54.701
Add: X18 12 12.1511 | 3 55.348
Add: X15 12 12.1835 | 3 55.519
Add: X19 12 12.6174 | 3 57.817
Add: X20 12 12.6856 | 3 58.178
Add: X16 12 12.6982 | 3 58.245
Base terms: (X17 X05)
df RSS | k C_I
Add: X15 11 6.80595 | 4 29.042
Add: X14 11 8.7656 | 4 39.419
Add: X13 11 9.81733 | 4 44.989
Add: X18 11 9.86187 | 4 45.225
Add: X01 11 10.119 | 4 46.586
Add: X12 11 10.2613 | 4 47.340
Add: X06 11 10.3416 | 4 47.765
Add: X20 11 10.3692 | 4 47.911
Add: X19 11 10.3736 | 4 47.935
Add: X16 11 10.5252 | 4 48.737
Base terms: (X17 X05 X15)
df RSS | k C_I
Add: X14 10 4.59274 | 5 19.321
Add: X13 10 4.69981 | 5 19.888
Add: X16 10 6.73433 | 5 30.662
Add: X06 10 6.74285 | 5 30.708
Add: X01 10 6.75797 | 5 30.788
Add: X18 10 6.78382 | 5 30.924
Add: X20 10 6.79013 | 5 30.958
Add: X19 10 6.80235 | 5 31.023
Add: X12 10 6.80321 | 5 31.027
Base terms: (X17 X05 X15 X14)
df RSS | k C_I
Add: X13 9 4.25177 | 6 19.516
Add: X16 9 4.34399 | 6 20.004
Add: X20 9 4.4687 | 6 20.665
Add: X18 9 4.49678 | 6 20.813
Add: X01 9 4.53145 | 6 20.997
Add: X19 9 4.55484 | 6 21.121
Add: X06 9 4.57281 | 6 21.216
Add: X12 9 4.59018 | 6 21.308
Base terms: (X17 X05 X15 X14 X13)
df RSS | k C_I
Add: X18 8 3.19822 | 7 15.937
Add: X16 8 3.22642 | 7 16.086
Add: X12 8 3.28922 | 7 16.418
Add: X20 8 4.11542 | 7 20.794
Add: X19 8 4.17314 | 7 21.099
Add: X01 8 4.19901 | 7 21.236
Add: X06 8 4.21962 | 7 21.345
Base terms: (X17 X05 X15 X14 X13 X18)
df RSS | k C_I
Add: X12 7 2.39862 | 8 13.702
Add: X16 7 2.83919 | 8 16.035
Add: X06 7 3.00635 | 8 16.920
Add: X20 7 3.11207 | 8 17.480
Add: X01 7 3.14073 | 8 17.632
Add: X19 7 3.16404 | 8 17.756
Base terms: (X17 X05 X15 X14 X13 X18 X12)
df RSS | k C_I
Add: X19 6 2.13347 | 9 14.298
Add: X01 6 2.32588 | 9 15.317
Add: X20 6 2.34025 | 9 15.393
Add: X16 6 2.39291 | 9 15.672
Add: X06 6 2.3984 | 9 15.701
Base terms: (X17 X05 X15 X14 X13 X18 X12 X19)
df RSS | k C_I
Add: X06 5 1.94559 | 10 15.303
Add: X01 5 1.978 | 10 15.475
Add: X20 5 2.13165 | 10 16.288
Add: X16 5 2.13319 | 10 16.297
Base terms: (X17 X05 X15 X14 X13 X18 X12 X19 X06)
df RSS | k C_I
Add: X01 4 1.31735 | 11 13.976
Add: X16 4 1.93478 | 11 17.246
Add: X20 4 1.94559 | 11 17.303
Base terms: (X17 X05 X15 X14 X13 X18 X12 X19 X06 X01)
df RSS | k C_I
Add: X16 3 0.523367 | 12 11.772
Add: X20 3 1.27884 | 12 15.772
Base terms: (X17 X05 X15 X14 X13 X18 X12 X19 X06 X01 X16)
df RSS | k C_I
Add: X20 2 0.377671 | 13 13.000
Data set = RAESurveyAll, Name of Fit = L2
Normal Regression
Kernel mean function = Identity
Response = Y3
Terms = (X01 X05 X06 X12 X13 X14 X15 X16 X17 X18 X19 X20)
Cases not used and missing at least one value are:
(549853339 549317657 559609609)
Backward Elimination: Sequentially remove terms
that give the smallest change in C_I.
All fits include an intercept.
Current terms: (X01 X05 X06 X12 X13 X14 X15 X16 X17 X18 X19 X20)
df RSS | k C_I
Delete: X05 3 0.380213 | 12 11.013
Delete: X20 3 0.523367 | 12 11.772
Delete: X18 3 0.770709 | 12 13.081
Delete: X12 3 0.838254 | 12 13.439
Delete: X14 3 1.15654 | 12 15.125
Delete: X16 3 1.27884 | 12 15.772
Delete: X06 3 1.79254 | 12 18.493
Delete: X01 3 1.93472 | 12 19.246
Delete: X19 3 2.07792 | 12 20.004
Delete: X13 3 2.23222 | 12 20.821
Delete: X15 3 2.9267 | 12 24.499
Delete: X17 3 7.92558 | 12 50.971
Current terms: (X01 X06 X12 X13 X14 X15 X16 X17 X18 X19 X20)
df RSS | k C_I
Delete: X20 4 0.529672 | 11 9.805
Delete: X12 4 0.849522 | 11 11.499
Delete: X18 4 1.11454 | 11 12.902
Delete: X14 4 1.43285 | 11 14.588
Delete: X16 4 1.9209 | 11 17.172
Delete: X13 4 2.48115 | 11 20.139
Delete: X15 4 2.93028 | 11 22.518
Delete: X06 4 3.10449 | 11 23.440
Delete: X01 4 4.01903 | 11 28.283
Delete: X19 4 4.68392 | 11 31.804
Delete: X17 4 7.98251 | 11 49.272
Current terms: (X01 X06 X12 X13 X14 X15 X16 X17 X18 X19)
df RSS | k C_I
Delete: X12 5 1.05567 | 10 10.590
Delete: X18 5 1.14811 | 10 11.080
Delete: X16 5 1.94273 | 10 15.288
Delete: X14 5 2.01838 | 10 15.689
Delete: X06 5 3.12565 | 10 21.552
Delete: X13 5 3.26981 | 10 22.316
Delete: X01 5 4.12785 | 10 26.860
Delete: X15 5 4.39996 | 10 28.301
Delete: X19 5 5.31575 | 10 33.150
Delete: X17 5 8.04162 | 10 47.585
Current terms: (X01 X06 X13 X14 X15 X16 X17 X18 X19)
df RSS | k C_I
Delete: X18 6 1.96794 | 9 13.421
Delete: X14 6 2.28506 | 9 15.101
Delete: X06 6 3.12692 | 9 19.559
Delete: X13 6 4.23802 | 9 25.443
Delete: X19 6 5.3174 | 9 31.159
Delete: X15 6 5.3709 | 9 31.442
Delete: X01 6 5.79142 | 9 33.669
Delete: X16 6 6.13898 | 9 35.510
Delete: X17 6 12.3945 | 9 68.636
Current terms: (X01 X06 X13 X14 X15 X16 X17 X19)
df RSS | k C_I
Delete: X06 7 3.14672 | 8 17.664
Delete: X14 7 3.62589 | 8 20.201
Delete: X19 7 5.34775 | 8 29.320
Delete: X01 7 5.83852 | 8 31.919
Delete: X16 7 7.01927 | 8 38.171
Delete: X13 7 8.14506 | 8 44.133
Delete: X15 7 8.34178 | 8 45.175
Delete: X17 7 12.5113 | 8 67.255
Current terms: (X01 X13 X14 X15 X16 X17 X19)
df RSS | k C_I
Delete: X14 8 5.19874 | 7 26.531
Delete: X19 8 5.49435 | 7 28.096
Delete: X01 8 6.04257 | 7 30.999
Delete: X16 8 7.26497 | 7 37.472
Delete: X15 8 9.3074 | 7 48.288
Delete: X13 8 10.1638 | 7 52.824
Delete: X17 8 12.6007 | 7 65.728
Current terms: (X01 X13 X15 X16 X17 X19)
df RSS | k C_I
Delete: X19 9 5.99273 | 6 28.735
Delete: X01 9 7.79853 | 6 38.298
Delete: X16 9 7.83711 | 6 38.502
Delete: X15 9 9.41529 | 6 46.860
Delete: X13 9 11.0958 | 6 55.759
Delete: X17 9 13.1097 | 6 66.424
Current terms: (X01 X13 X15 X16 X17)
df RSS | k C_I
Delete: X01 10 8.1666 | 5 38.247
Delete: X16 10 8.45212 | 5 39.759
Delete: X15 10 9.41918 | 5 44.880
Delete: X13 10 11.1038 | 5 53.801
Delete: X17 10 13.1673 | 5 64.729
Current terms: (X13 X15 X16 X17)
df RSS | k C_I
Delete: X16 11 9.22331 | 4 41.843
Delete: X15 11 10.9542 | 4 51.009
Delete: X13 11 12.1833 | 4 57.518
Delete: X17 11 13.2236 | 4 63.027
Current terms: (X13 X15 X17)
df RSS | k C_I
Delete: X15 12 10.9742 | 3 49.115
Delete: X13 12 12.1835 | 3 55.519
Delete: X17 12 13.3931 | 3 61.925
Current terms: (X13 X17)
df RSS | k C_I
Delete: X13 13 12.7897 | 2 56.730
Delete: X17 13 13.5404 | 2 60.705
B.4.3 Final Regression for Y3
Using the information in the previous two sections, many parameter sets were attempted. The
best resulting regression (defined here as the one with the best F-value among fits with a p-value
less than 0.05) was:
Data set = RAESurveyAll, Name of Fit = L16
Deleted cases are
(549853339 543663840 541916684)
Normal Regression
Kernel mean function = Identity
Response = Y3
Terms = (ARDCOM ATMIMP AVALIMP)
Cases not used and missing at least one value are:
(559609609)
Coefficient Estimates
Label Estimate Std. Error t-value p-value
Constant 0.182478 0.184730 0.988 0.3465
ARDCOM 1.72269 0.324882 5.303 0.0003
ATMIMP 1.93608 0.553628 3.497 0.0058
AVALIMP -2.14246 0.513191 -4.175 0.0019
R Squared: 0.737921
Sigma hat: 0.644483
Number of cases: 18
Number of cases used: 14
Degrees of freedom: 10
Summary Analysis of Variance Table
Source df SS MS F p-value
Regression 3 11.695 3.89833 9.39 0.0030
Residual 10 4.15359 0.415359
Other considered predictor functions include:
ARDCOM ATMCOM - F = 3.93 - p = 0.0442
ARDCOM ATMCOM AVALIMP - F = 3.2 - p = 0.0588
ARDCOM ATMCOM X13 - F = 2.48 - p = 0.1075
ARDCOM ATMCOM X17 - F = 3.97 - p = 0.0327
ARDCOM ATMCOM X17 X15 - F = 5.34 - p = 0.0123
ARDCOM ATMCOM X17 X05 - F = 2.76 - p = 0.0776
ARDCOM ATMCOM X15 - F = 2.76 - p = 0.0848
ARDCOM AVALIMP - F = 4.67 - p = 0.0279
ARDCOM ATMIMP - F = 2.62 - p = 0.1081
ARDCOM AVALIMP X15 - F = 4.90 - p = 0.0189
ARDCOM ATMIMP X15 - F = 3.80 - p = 0.098
ARDCOM AVALIMP X17 - F = 4.14 - p = 0.0289
ARDCOM ATMIMP X17 - F = 2.41 - p = 0.1141
ARDCOM AVALIMP X15 X17 - F = 5.87 - p = 0.089
ARDCOM ATMIMP X15 X17 - F = 4.56 - p = 0.0205
ARDCOM ATMIMP AVALIMP X17 - F = 4.44 - p = 0.0223
ARDCOM ATMIMP AVALIMP X15 - F = 4.02 - p = 0.0270
ARDCOM AVALCOM AVALIMP - F = 2.95 - p = 0.0721
ARDCOM AVALCOM ATMCOM - F = 2.70 - p = 0.0887
ARDCOM AVALCOM ATMIMP - F = 2.52 - p = 0.1039
ARDCOM AVALCOM X15 - F = 4.26 - p = 0.0288
ARDCOM AVALCOM X17 - F = 3.37 - p = 0.0516
ARDCOM AVALCOM X15 X17 - F = 4.80 - p = 0.0174
ARDCOM - F = 5.56 - p = 0.0324
ATMIMP X15 - F = 0.01 - p = 0.9931
ATMCOM AVALIMP - F = 0.36 - p = 0.7039
X15 X17 - F = 1.08 - p = 0.3689
The list is not exhaustive; as the forward and backward selection via Mallows' statistic shows,
any model not involving ARDCOM would likely be missing too much information (some list
entries confirm this observation).
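The final fit above is an ordinary least-squares regression of Y3 on three composites, produced with the Arc statistics package. The sketch below shows the same kind of fit with NumPy on synthetic data (coefficients chosen only to echo the reported signs, not the survey data itself):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 14  # same number of cases as the final Y3 fit

# Synthetic stand-ins for ARDCOM, ATMIMP, AVALIMP
X = rng.normal(size=(n, 3))
y = (1.7 * X[:, 0] + 1.9 * X[:, 1] - 2.1 * X[:, 2]
     + rng.normal(scale=0.6, size=n))

# Design matrix with an intercept column, solved by least squares
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

resid = y - A @ beta
r_squared = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
print(beta.round(2), round(float(r_squared), 3))
```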
B.5 Regression for Meeting Schedule (Y1)
B.5.1 Y1, Full Engineering Factors, and Submodel Considerations
This subsection contains the regression for predicting Y1 with all the engineering areas, as
well as Mallows' statistics for adding parameters to the null model and deleting parameters from
the full model.
Data set = RAESurveyAll, Name of Fit = L1
Normal Regression
Kernel mean function = Identity
Response = Y1
Terms = (ARDCOM ARDIMP ATMCOM ATMIMP AVALCOM
AVALIMP AVERCOM AVERIMP)
Cases not used and missing at least one value are:
(549853339 543132264)
Coefficient Estimates
Label Estimate Std. Error t-value p-value
Constant -0.0945844 0.277540 -0.341 0.7433
ARDCOM 1.53512 0.637010 2.410 0.0468
ARDIMP 0.364641 0.786799 0.463 0.6571
ATMCOM -0.00434105 0.452949 -0.010 0.9926
ATMIMP 0.0447492 0.793752 0.056 0.9566
AVALCOM -0.911829 1.14242 -0.798 0.4510
AVALIMP -0.247816 1.31582 -0.188 0.8560
AVERCOM 0.219605 0.997425 0.220 0.8320
AVERIMP -0.575985 1.04901 -0.549 0.6000
R Squared: 0.588942
Sigma hat: 0.954885
Number of cases: 18
Number of cases used: 16
Degrees of freedom: 7
Summary Analysis of Variance Table
Source df SS MS F p-value
Regression 8 9.14471 1.14309 1.25 0.3893
Residual 7 6.38264 0.911805
Data set = RAESurveyAll, Name of Fit = L1
Normal Regression
Kernel mean function = Identity
Response = Y1
Terms = (ARDCOM ARDIMP ATMCOM ATMIMP AVALCOM
AVALIMP AVERCOM AVERIMP)
Cases not used and missing at least one value are:
(549853339 543132264)
Forward Selection: Sequentially add terms
that minimize the value of C_I.
All fits include an intercept.
Base terms: Intercept
df RSS | k C_I
Add: ARDCOM 14 12.9862 | 2 2.242
Add: AVERIMP 14 13.9896 | 2 3.343
Add: AVALIMP 14 14.9153 | 2 4.358
Add: ATMIMP 14 14.9584 | 2 4.405
Add: ATMCOM 14 15.5142 | 2 5.015
Add: ARDIMP 14 15.5162 | 2 5.017
Add: AVERCOM 14 15.5222 | 2 5.024
Add: AVALCOM 14 15.5226 | 2 5.024
Base terms: (ARDCOM)
df RSS | k C_I
Add: AVALCOM 13 8.11383 | 3 -1.101
Add: AVERIMP 13 8.14752 | 3 -1.064
Add: AVALIMP 13 8.17088 | 3 -1.039
Add: AVERCOM 13 9.8309 | 3 0.782
Add: ATMIMP 13 10.5283 | 3 1.547
Add: ARDIMP 13 11.0869 | 3 2.159
Add: ATMCOM 13 12.6071 | 3 3.827
Base terms: (ARDCOM AVALCOM)
df RSS | k C_I
Add: AVERIMP 12 6.6771 | 4 -0.677
Add: AVALIMP 12 7.01912 | 4 -0.302
Add: ATMIMP 12 7.62047 | 4 0.358
Add: ARDIMP 12 7.97534 | 4 0.747
Add: ATMCOM 12 8.1135 | 4 0.898
Add: AVERCOM 12 8.11381 | 4 0.899
Base terms: (ARDCOM AVALCOM AVERIMP)
df RSS | k C_I
Add: ARDIMP 11 6.56082 | 5 1.195
Add: AVERCOM 11 6.61886 | 5 1.259
Add: AVALIMP 11 6.65821 | 5 1.302
Add: ATMCOM 11 6.67185 | 5 1.317
Add: ATMIMP 11 6.67499 | 5 1.321
Base terms: (ARDCOM AVALCOM AVERIMP ARDIMP)
df RSS | k C_I
Add: AVERCOM 10 6.42616 | 6 3.048
Add: AVALIMP 10 6.43017 | 6 3.052
Add: ATMIMP 10 6.53714 | 6 3.169
Add: ATMCOM 10 6.54077 | 6 3.173
Base terms: (ARDCOM AVALCOM AVERIMP ARDIMP AVERCOM)
df RSS | k C_I
Add: AVALIMP 9 6.38554 | 7 5.003
Add: ATMCOM 9 6.41505 | 7 5.036
Add: ATMIMP 9 6.42412 | 7 5.045
Base terms: (ARDCOM AVALCOM AVERIMP ARDIMP AVERCOM AVALIMP)
df RSS | k C_I
Add: ATMIMP 8 6.38272 | 8 7.000
Add: ATMCOM 8 6.38554 | 8 7.003
Base terms: (ARDCOM AVALCOM AVERIMP ARDIMP AVERCOM AVALIMP ATMIMP)
df RSS | k C_I
Add: ATMCOM 7 6.38264 | 9 9.000
Data set = RAESurveyAll, Name of Fit = L1
Normal Regression
Kernel mean function = Identity
Response = Y1
Terms = (ARDCOM ARDIMP ATMCOM ATMIMP AVALCOM AVALIMP
AVERCOM AVERIMP)
Cases not used and missing at least one value are:
(549853339 543132264)
Backward Elimination: Sequentially remove terms
that give the smallest change in C_I.
All fits include an intercept.
Current terms: (ARDCOM ARDIMP ATMCOM ATMIMP AVALCOM AVALIMP
AVERCOM AVERIMP)
df RSS | k C_I
Delete: ATMCOM 8 6.38272 | 8 7.000
Delete: ATMIMP 8 6.38554 | 8 7.003
Delete: AVALIMP 8 6.41498 | 8 7.035
Delete: AVERCOM 8 6.42684 | 8 7.048
Delete: ARDIMP 8 6.57848 | 8 7.215
Delete: AVERIMP 8 6.65753 | 8 7.301
Delete: AVALCOM 8 6.96351 | 8 7.637
Delete: ARDCOM 8 11.678 | 8 12.808
Current terms: (ARDCOM ARDIMP ATMIMP AVALCOM AVALIMP AVERCOM
AVERIMP)
df RSS | k C_I
Delete: ATMIMP 9 6.38554 | 7 5.003
Delete: AVALIMP 9 6.42412 | 7 5.045
Delete: AVERCOM 9 6.4282 | 7 5.050
Delete: ARDIMP 9 6.58304 | 7 5.220
Delete: AVERIMP 9 6.74199 | 7 5.394
Delete: AVALCOM 9 6.96877 | 7 5.643
Delete: ARDCOM 9 11.6815 | 7 10.811
Current terms: (ARDCOM ARDIMP AVALCOM AVALIMP AVERCOM AVERIMP)
df RSS | k C_I
Delete: AVALIMP 10 6.42616 | 6 3.048
Delete: AVERCOM 10 6.43017 | 6 3.052
Delete: ARDIMP 10 6.61885 | 6 3.259
Delete: AVERIMP 10 6.74797 | 6 3.401
Delete: AVALCOM 10 6.98532 | 6 3.661
Delete: ARDCOM 10 11.6957 | 6 8.827
Current terms: (ARDCOM ARDIMP AVALCOM AVERCOM AVERIMP)
df RSS | k C_I
Delete: AVERCOM 11 6.56082 | 5 1.195
Delete: ARDIMP 11 6.61886 | 5 1.259
Delete: AVALCOM 11 7.63023 | 5 2.368
Delete: AVERIMP 11 7.97257 | 5 2.744
Delete: ARDCOM 11 11.7133 | 5 6.846
Current terms: (ARDCOM ARDIMP AVALCOM AVERIMP)
df RSS | k C_I
Delete: ARDIMP 12 6.6771 | 4 -0.677
Delete: AVERIMP 12 7.97534 | 4 0.747
Delete: AVALCOM 12 8.13613 | 4 0.923
Delete: ARDCOM 12 12.0849 | 4 5.254
Current terms: (ARDCOM AVALCOM AVERIMP)
df RSS | k C_I
Delete: AVERIMP 13 8.11383 | 3 -1.101
Delete: AVALCOM 13 8.14752 | 3 -1.064
Delete: ARDCOM 13 12.6967 | 3 3.925
Current terms: (ARDCOM AVALCOM)
df RSS | k C_I
Delete: AVALCOM 14 12.9862 | 2 2.242
Delete: ARDCOM 14 15.5226 | 2 5.024
B.5.2 Y1, Environmental Factors, and Submodel Considerations
This subsection contains the regression for predicting Y1 with all the environmental factors,
as well as Mallows' statistics for adding parameters to the null model and deleting parameters
from the full model.
Data set = RAESurveyAll, Name of Fit = L2
Normal Regression
Kernel mean function = Identity
Response = Y1
Terms = (X01 X05 X06 X12 X13 X14 X15 X16 X17 X18 X19 X20)
Cases not used and missing at least one value are:
(549853339 549317657 559609609)
Coefficient Estimates
Label Estimate Std. Error t-value p-value
Constant -0.717843 1.09999 -0.653 0.5810
X01 -0.738740 1.05838 -0.698 0.5574
X05 -0.100169 0.837164 -0.120 0.9157
X06 -0.862444 0.843627 -1.022 0.4142
X12 -0.832737 1.30432 -0.638 0.5885
X13 -2.59988 3.05750 -0.850 0.4847
X14 0.666415 1.14132 0.584 0.6184
X15 1.12440 1.12893 0.996 0.4242
X16 -0.570023 1.21324 -0.470 0.6847
X17 -0.656447 0.676475 -0.970 0.4342
X18 -0.678675 1.04108 -0.652 0.5814
X19 0.287286 0.937158 0.307 0.7882
X20 0.486988 0.473027 1.030 0.4115
R Squared: 0.767735
Sigma hat: 1.1846
Number of cases: 18
Number of cases used: 15
Degrees of freedom: 2
Summary Analysis of Variance Table
Source df SS MS F p-value
Regression 12 9.2768 0.773066 0.55 0.7952
Residual 2 2.80654 1.40327
Data set = RAESurveyAll, Name of Fit = L2
Normal Regression
Kernel mean function = Identity
Response = Y1
Terms = (X01 X05 X06 X12 X13 X14 X15 X16 X17 X18 X19 X20)
Cases not used and missing at least one value are:
(549853339 549317657 559609609)
Forward Selection: Sequentially add terms
that minimize the value of C_I.
All fits include an intercept.
Base terms: Intercept
df RSS | k C_I
Add: X06 13 8.87933 | 2 -4.672
Add: X12 13 9.70422 | 2 -4.085
Add: X05 13 10.5751 | 2 -3.464
Add: X16 13 10.6771 | 2 -3.391
Add: X14 13 11.6282 | 2 -2.713
Add: X13 13 11.7188 | 2 -2.649
Add: X15 13 11.7943 | 2 -2.595
Add: X18 13 11.9139 | 2 -2.510
Add: X17 13 12.02 | 2 -2.434
Add: X01 13 12.0281 | 2 -2.429
Add: X20 13 12.038 | 2 -2.421
Add: X19 13 12.0536 | 2 -2.410
Base terms: (X06)
df RSS | k C_I
Add: X19 12 7.96137 | 3 -3.327
Add: X15 12 8.05755 | 3 -3.258
Add: X17 12 8.10916 | 3 -3.221
Add: X20 12 8.18185 | 3 -3.169
Add: X14 12 8.22154 | 3 -3.141
Add: X05 12 8.32945 | 3 -3.064
Add: X16 12 8.53373 | 3 -2.919
Add: X12 12 8.7085 | 3 -2.794
Add: X18 12 8.86515 | 3 -2.682
Add: X13 12 8.86802 | 3 -2.680
Add: X01 12 8.87881 | 3 -2.673
Base terms: (X06 X19)
df RSS | k C_I
Add: X17 11 7.02322 | 4 -1.995
Add: X14 11 7.17989 | 4 -1.883
Add: X20 11 7.37505 | 4 -1.744
Add: X16 11 7.38029 | 4 -1.741
Add: X15 11 7.50287 | 4 -1.653
Add: X12 11 7.63854 | 4 -1.557
Add: X05 11 7.72857 | 4 -1.492
Add: X18 11 7.80467 | 4 -1.438
Add: X01 11 7.93925 | 4 -1.342
Add: X13 11 7.96108 | 4 -1.327
Base terms: (X06 X19 X17)
df RSS | k C_I
Add: X01 10 6.05754 | 5 -0.683
Add: X05 10 6.11377 | 5 -0.643
Add: X12 10 6.28714 | 5 -0.520
Add: X16 10 6.52006 | 5 -0.354
Add: X14 10 6.52052 | 5 -0.353
Add: X20 10 6.53078 | 5 -0.346
Add: X18 10 6.75654 | 5 -0.185
Add: X15 10 6.94702 | 5 -0.049
Add: X13 10 7.0221 | 5 0.004
Base terms: (X06 X19 X17 X01)
df RSS | k C_I
Add: X20 9 5.24994 | 6 0.741
Add: X05 9 5.58834 | 6 0.982
Add: X14 9 5.65032 | 6 1.027
Add: X12 9 5.65918 | 6 1.033
Add: X18 9 5.78984 | 6 1.126
Add: X15 9 5.95629 | 6 1.245
Add: X16 9 5.9677 | 6 1.253
Add: X13 9 6.05622 | 6 1.316
Base terms: (X06 X19 X17 X01 X20)
df RSS | k C_I
Add: X05 8 4.56231 | 7 2.251
Add: X15 8 4.65624 | 7 2.318
Add: X14 8 4.94416 | 7 2.523
Add: X18 8 5.11582 | 7 2.646
Add: X12 8 5.1477 | 7 2.668
Add: X13 8 5.24452 | 7 2.737
Add: X16 8 5.24864 | 7 2.740
Base terms: (X06 X19 X17 X01 X20 X05)
df RSS | k C_I
Add: X18 7 4.43355 | 8 4.159
Add: X15 7 4.4506 | 8 4.172
Add: X16 7 4.45419 | 8 4.174
Add: X12 7 4.48867 | 8 4.199
Add: X14 7 4.49579 | 8 4.204
Add: X13 7 4.53125 | 8 4.229
Base terms: (X06 X19 X17 X01 X20 X05 X18)
df RSS | k C_I
Add: X15 6 4.22458 | 9 6.011
Add: X14 6 4.23126 | 9 6.015
Add: X13 6 4.27617 | 9 6.047
Add: X12 6 4.43133 | 9 6.158
Add: X16 6 4.43249 | 9 6.159
Base terms: (X06 X19 X17 X01 X20 X05 X18 X15)
df RSS | k C_I
Add: X13 5 3.53407 | 10 7.518
Add: X14 5 3.98242 | 10 7.838
Add: X16 5 4.20955 | 10 8.000
Add: X12 5 4.22458 | 10 8.011
Base terms: (X06 X19 X17 X01 X20 X05 X18 X15 X13)
df RSS | k C_I
Add: X16 4 3.41296 | 11 9.432
Add: X14 4 3.42745 | 11 9.442
Add: X12 4 3.53036 | 11 9.516
Base terms: (X06 X19 X17 X01 X20 X05 X18 X15 X13 X16)
df RSS | k C_I
Add: X12 3 3.28496 | 12 11.341
Add: X14 3 3.37853 | 12 11.408
Base terms: (X06 X19 X17 X01 X20 X05 X18 X15 X13 X16 X12)
df RSS | k C_I
Add: X14 2 2.80654 | 13 13.000
Data set = RAESurveyAll, Name of Fit = L2
Normal Regression
Kernel mean function = Identity
Response = Y1
Terms = (X01 X05 X06 X12 X13 X14 X15 X16 X17 X18 X19 X20)
Cases not used and missing at least one value are:
(549853339 549317657 559609609)
Backward Elimination: Sequentially remove terms
that give the smallest change in C_I.
All fits include an intercept.
Current terms: (X01 X05 X06 X12 X13 X14 X15 X16 X17 X18 X19 X20)
df RSS | k C_I
Delete: X05 3 2.82663 | 12 11.014
Delete: X19 3 2.93841 | 12 11.094
Delete: X16 3 3.1163 | 12 11.221
Delete: X14 3 3.28496 | 12 11.341
Delete: X12 3 3.37853 | 12 11.408
Delete: X18 3 3.40288 | 12 11.425
Delete: X01 3 3.4902 | 12 11.487
Delete: X13 3 3.82119 | 12 11.723
Delete: X17 3 4.12795 | 12 11.942
Delete: X15 3 4.19855 | 12 11.992
Delete: X06 3 4.2731 | 12 12.045
Delete: X20 3 4.29386 | 12 12.060
Current terms: (X01 X06 X12 X13 X14 X15 X16 X17 X18 X19 X20)
df RSS | k C_I
Delete: X14 4 3.32771 | 11 9.371
Delete: X19 4 3.35692 | 11 9.392
Delete: X12 4 3.38497 | 11 9.412
Delete: X16 4 3.49065 | 11 9.488
Delete: X13 4 3.85383 | 11 9.746
Delete: X18 4 4.08484 | 11 9.911
Delete: X17 4 4.17668 | 11 9.976
Delete: X15 4 4.24857 | 11 10.028
Delete: X20 4 4.34876 | 11 10.099
Delete: X01 4 4.73889 | 11 10.377
Delete: X06 4 5.93238 | 11 11.228
Current terms: (X01 X06 X12 X13 X15 X16 X17 X18 X19 X20)
df RSS | k C_I
Delete: X12 5 3.41496 | 10 7.434
Delete: X16 5 3.59027 | 10 7.559
Delete: X18 5 4.22074 | 10 8.008
Delete: X13 5 4.22257 | 10 8.009
Delete: X17 5 4.312 | 10 8.073
Delete: X20 5 4.45192 | 10 8.173
Delete: X19 5 4.68299 | 10 8.337
Delete: X15 5 4.70535 | 10 8.353
Delete: X01 5 4.79473 | 10 8.417
Delete: X06 5 6.93154 | 10 9.940
Current terms: (X01 X06 X13 X15 X16 X17 X18 X19 X20)
df RSS | k C_I
Delete: X16 6 3.59398 | 9 5.561
Delete: X13 6 4.24879 | 9 6.028
Delete: X17 6 4.31248 | 9 6.073
Delete: X18 6 4.35207 | 9 6.101
Delete: X19 6 4.72944 | 9 6.370
Delete: X15 6 4.86271 | 9 6.465
Delete: X01 6 4.88057 | 9 6.478
Delete: X20 6 4.8825 | 9 6.479
Delete: X06 6 8.93924 | 9 9.370
Current terms: (X01 X06 X13 X15 X17 X18 X19 X20)
df RSS | k C_I
Delete: X13 7 4.35146 | 8 4.101
Delete: X17 7 4.36065 | 8 4.107
Delete: X18 7 4.56153 | 8 4.251
Delete: X19 7 4.79698 | 8 4.418
Delete: X20 7 4.91694 | 8 4.504
Delete: X15 7 5.10432 | 8 4.637
Delete: X01 7 5.20438 | 8 4.709
Delete: X06 7 10.7247 | 8 8.643
Current terms: (X01 X06 X15 X17 X18 X19 X20)
df RSS | k C_I
Delete: X18 8 4.65624 | 7 2.318
Delete: X15 8 5.11582 | 7 2.646
Delete: X19 8 5.48568 | 7 2.909
Delete: X17 8 5.52038 | 7 2.934
Delete: X20 8 5.56183 | 7 2.963
Delete: X01 8 5.84448 | 7 3.165
Delete: X06 8 10.7263 | 7 6.644
Current terms: (X01 X06 X15 X17 X19 X20)
df RSS | k C_I
Delete: X15 9 5.24994 | 6 0.741
Delete: X19 9 5.50017 | 6 0.920
Delete: X17 9 5.8211 | 6 1.148
Delete: X20 9 5.95629 | 6 1.245
Delete: X01 9 6.15224 | 6 1.384
Delete: X06 9 11.035 | 6 4.864
Current terms: (X01 X06 X17 X19 X20)
df RSS | k C_I
Delete: X20 10 6.05754 | 5 -0.683
Delete: X01 10 6.53078 | 5 -0.346
Delete: X19 10 6.84103 | 5 -0.125
Delete: X17 10 7.28747 | 5 0.193
Delete: X06 10 11.6849 | 5 3.327
Current terms: (X01 X06 X17 X19)
df RSS | k C_I
Delete: X01 11 7.02322 | 4 -1.995
Delete: X19 11 7.70444 | 4 -1.510
Delete: X17 11 7.93925 | 4 -1.342
Delete: X06 11 11.7677 | 4 1.386
Current terms: (X06 X17 X19)
df RSS | k C_I
Delete: X17 12 7.96137 | 3 -3.327
Delete: X19 12 8.10916 | 3 -3.221
Delete: X06 12 11.98 | 3 -0.463
Current terms: (X06 X19)
df RSS | k C_I
Delete: X19 13 8.87933 | 2 -4.672
Delete: X06 13 12.0536 | 2 -2.410
B.5.3 Final Regression for Y1
Data set = RAESurveyAll, Name of Fit = L8
Normal Regression
Kernel mean function = Identity
Response = Y1
Terms = (ARDCOM AVERIMP X05 X17)
Cases not used and missing at least one value are:
(555438661)
Coefficient Estimates
Label Estimate Std. Error t-value p-value
Constant -0.0363716 0.182295 -0.200 0.8452
ARDCOM 0.541847 0.344757 1.572 0.1420
AVERIMP -0.843127 0.280020 -3.011 0.0108
X05 -0.399378 0.247594 -1.613 0.1327
X17 -0.405184 0.238325 -1.700 0.1149
R Squared: 0.582514
Sigma hat: 0.745917
Number of cases: 18
Number of cases used: 17
Degrees of freedom: 12
Summary Analysis of Variance Table
Source df SS MS F p-value
Regression 4 9.31594 2.32899 4.19 0.0238
Residual 12 6.6767 0.556392
B.6 Regression for Fitness for Purpose (Y0)
We first check whether any project factors (X12-X20) correlate with Y0; X16 shows a strong
correlation. We then check whether any individual factors (X1, X5, X6) correlate with Y0; notably,
X6 (acquisition experience) has no (0.00) correlation with Y0. Adding X16, we find that AVALIMP
and AVERCOM are positively estimated with respect to Y0, while AVERIMP and X16 are negatively
estimated (p-value 0.0016, R-squared 0.80, 15 cases used).
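The screening step described above (correlate each candidate factor with the response and flag the strong ones before fitting) might be expressed as follows; the function name and threshold are illustrative assumptions, not the exact procedure used in this analysis:

```python
import numpy as np

def screen_by_correlation(X, y, names, threshold=0.3):
    """Flag candidate predictors whose |Pearson r| with the response
    meets a screening threshold, mirroring the informal check above."""
    flagged = []
    for j, name in enumerate(names):
        r = float(np.corrcoef(X[:, j], y)[0, 1])
        if abs(r) >= threshold:
            flagged.append((name, round(r, 3)))
    return flagged
```

Factors that survive the screen are then carried into the regression alongside the engineering-area terms.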
B.6.1 Y0, Full Engineering Factors, and Submodel Considerations
This subsection contains the regression for predicting the Y0 response with all the engineering
areas, as well as Mallows's statistics for adding parameters to the null model and deleting from the
full model.
Data set = RAESurveyAll, Name of Fit = L1
Normal Regression
Kernel mean function = Identity
Response = Y0
Terms = (ARDCOM ARDIMP ATMCOM ATMIMP AVALCOM AVALIMP
AVERCOM AVERIMP)
Cases not used and missing at least one value are:
(559609609 543132264)
Coefficient Estimates
Label Estimate Std. Error t-value p-value
Constant -0.00970983 0.329908 -0.029 0.9773
ARDCOM 0.775041 0.782135 0.991 0.3547
ARDIMP -0.376412 1.03357 -0.364 0.7265
ATMCOM 0.223037 0.527100 0.423 0.6849
ATMIMP 0.156079 0.754310 0.207 0.8420
AVALCOM 0.582965 1.49911 0.389 0.7089
AVALIMP 0.294686 1.16498 0.253 0.8076
AVERCOM -0.841625 1.26646 -0.665 0.5276
AVERIMP 0.0231296 1.21570 0.019 0.9854
R Squared: 0.449846
Sigma hat: 1.11933
Number of cases: 18
Number of cases used: 16
Degrees of freedom: 7
Summary Analysis of Variance Table
Source df SS MS F p-value
Regression 8 7.17117 0.896396 0.72 0.6771
Residual 7 8.77023 1.25289
Data set = RAESurveyAll, Name of Fit = L1
Normal Regression
Kernel mean function = Identity
Response = Y0
Terms = (ARDCOM ARDIMP ATMCOM ATMIMP AVALCOM AVALIMP
AVERCOM AVERIMP)
Cases not used and missing at least one value are:
(559609609 543132264)
Forward Selection: Sequentially add terms
that minimize the value of C_I.
All fits include an intercept.
Base terms: Intercept
df RSS | k C_I
Add: ARDCOM 14 10.8952 | 2 -3.304
Add: AVALCOM 14 12.2557 | 2 -2.218
Add: AVALIMP 14 12.3283 | 2 -2.160
Add: ATMCOM 14 12.8832 | 2 -1.717
Add: ARDIMP 14 13.5391 | 2 -1.194
Add: ATMIMP 14 13.9614 | 2 -0.857
Add: AVERCOM 14 14.1977 | 2 -0.668
Add: AVERIMP 14 14.544 | 2 -0.392
Base terms: (ARDCOM)
df RSS | k C_I
Add: ATMCOM 13 10.13 | 3 -1.915
Add: AVALIMP 13 10.4292 | 3 -1.676
Add: ATMIMP 13 10.4523 | 3 -1.657
Add: AVERCOM 13 10.5689 | 3 -1.564
Add: ARDIMP 13 10.7429 | 3 -1.425
Add: AVERIMP 13 10.8636 | 3 -1.329
Add: AVALCOM 13 10.8889 | 3 -1.309
Base terms: (ARDCOM ATMCOM)
df RSS | k C_I
Add: AVERCOM 12 9.65917 | 4 -0.290
Add: AVALIMP 12 9.9645 | 4 -0.047
Add: ATMIMP 12 9.98039 | 4 -0.034
Add: AVALCOM 12 10.1037 | 4 0.064
Add: ARDIMP 12 10.105 | 4 0.065
Add: AVERIMP 12 10.1199 | 4 0.077
Base terms: (ARDCOM ATMCOM AVERCOM)
df RSS | k C_I
Add: AVALIMP 11 9.09196 | 5 1.257
Add: AVALCOM 11 9.16216 | 5 1.313
Add: AVERIMP 11 9.37458 | 5 1.482
Add: ATMIMP 11 9.4364 | 5 1.532
Add: ARDIMP 11 9.54442 | 5 1.618
Base terms: (ARDCOM ATMCOM AVERCOM AVALIMP)
df RSS | k C_I
Add: AVALCOM 10 8.93891 | 6 3.135
Add: ARDIMP 10 9.04383 | 6 3.218
Add: AVERIMP 10 9.04773 | 6 3.221
Add: ATMIMP 10 9.0919 | 6 3.257
Base terms: (ARDCOM ATMCOM AVERCOM AVALIMP AVALCOM)
df RSS | k C_I
Add: ARDIMP 9 8.83113 | 7 5.049
Add: AVERIMP 9 8.93649 | 7 5.133
Add: ATMIMP 9 8.93875 | 7 5.135
Base terms: (ARDCOM ATMCOM AVERCOM AVALIMP AVALCOM ARDIMP)
df RSS | k C_I
Add: ATMIMP 8 8.77068 | 8 7.000
Add: AVERIMP 8 8.82387 | 8 7.043
Base terms: (ARDCOM ATMCOM AVERCOM AVALIMP AVALCOM ARDIMP
ATMIMP)
df RSS | k C_I
Add: AVERIMP 7 8.77023 | 9 9.000
Data set = RAESurveyAll, Name of Fit = L1
Normal Regression
Kernel mean function = Identity
Response = Y0
Terms = (ARDCOM ARDIMP ATMCOM ATMIMP AVALCOM AVALIMP
AVERCOM AVERIMP)
Cases not used and missing at least one value are:
(559609609 543132264)
Backward Elimination: Sequentially remove terms
that give the smallest change in C_I.
All fits include an intercept.
Current terms: (ARDCOM ARDIMP ATMCOM ATMIMP AVALCOM AVALIMP
AVERCOM AVERIMP)
df RSS | k C_I
Delete: AVERIMP 8 8.77068 | 8 7.000
Delete: ATMIMP 8 8.82387 | 8 7.043
Delete: AVALIMP 8 8.8504 | 8 7.064
Delete: ARDIMP 8 8.9364 | 8 7.133
Delete: AVALCOM 8 8.95969 | 8 7.151
Delete: ATMCOM 8 8.99455 | 8 7.179
Delete: AVERCOM 8 9.32354 | 8 7.442
Delete: ARDCOM 8 10.0005 | 8 7.982
Current terms: (ARDCOM ARDIMP ATMCOM ATMIMP AVALCOM AVALIMP
AVERCOM)
df RSS | k C_I
Delete: ATMIMP 9 8.83113 | 7 5.049
Delete: ARDIMP 9 8.93875 | 7 5.135
Delete: AVALCOM 9 9.01098 | 7 5.192
Delete: AVALIMP 9 9.01856 | 7 5.198
Delete: ATMCOM 9 9.05151 | 7 5.225
Delete: AVERCOM 9 9.6438 | 7 5.697
Delete: ARDCOM 9 10.0035 | 7 5.984
Current terms: (ARDCOM ARDIMP ATMCOM AVALCOM AVALIMP AVERCOM)
df RSS | k C_I
Delete: ARDIMP 10 8.93891 | 6 3.135
Delete: AVALCOM 10 9.04383 | 6 3.218
Delete: AVALIMP 10 9.16215 | 6 3.313
Delete: ATMCOM 10 9.18168 | 6 3.328
Delete: AVERCOM 10 9.70713 | 6 3.748
Delete: ARDCOM 10 10.0443 | 6 4.017
Current terms: (ARDCOM ATMCOM AVALCOM AVALIMP AVERCOM)
df RSS | k C_I
Delete: AVALCOM 11 9.09196 | 5 1.257
Delete: AVALIMP 11 9.16216 | 5 1.313
Delete: ATMCOM 11 9.28981 | 5 1.415
Delete: AVERCOM 11 9.72026 | 5 1.758
Delete: ARDCOM 11 10.1751 | 5 2.121
Current terms: (ARDCOM ATMCOM AVALIMP AVERCOM)
df RSS | k C_I
Delete: ATMCOM 12 9.52639 | 4 -0.396
Delete: AVALIMP 12 9.65917 | 4 -0.290
Delete: AVERCOM 12 9.9645 | 4 -0.047
Delete: ARDCOM 12 11.4924 | 4 1.173
Current terms: (ARDCOM AVALIMP AVERCOM)
df RSS | k C_I
Delete: AVERCOM 13 10.4292 | 3 -1.676
Delete: AVALIMP 13 10.5689 | 3 -1.564
Delete: ARDCOM 13 12.3263 | 3 -0.162
Current terms: (ARDCOM AVALIMP)
df RSS | k C_I
Delete: AVALIMP 14 10.8952 | 2 -3.304
Delete: ARDCOM 14 12.3283 | 2 -2.160
B.6.2 Y0, Environmental Factors, and Submodel Considerations
This subsection contains the regression for predicting the Y0 response with all the environmental
factors, as well as Mallows's statistics for adding parameters to the null model and deleting from
the full model.
Data set = RAESurveyAll, Name of Fit = L2
Normal Regression
Kernel mean function = Identity
Response = Y0
Terms = (X01 X05 X06 X12 X13 X14 X15 X16 X17 X18 X19 X20)
Cases not used and missing at least one value are:
(549853339 549317657 559609609)
Coefficient Estimates
Label Estimate Std. Error t-value p-value
Constant 0.584606 0.963889 0.607 0.6059
X01 0.0521032 0.927431 0.056 0.9603
X05 -0.103599 0.733583 -0.141 0.9006
X06 -0.577867 0.739247 -0.782 0.5162
X12 0.606168 1.14294 0.530 0.6489
X13 2.04182 2.67920 0.762 0.5256
X14 -0.637171 1.00011 -0.637 0.5893
X15 -1.39719 0.989253 -1.412 0.2934
X16 -1.05412 1.06313 -0.992 0.4259
X17 -0.696074 0.592776 -1.174 0.3612
X18 -0.783225 0.912271 -0.859 0.4811
X19 1.11132 0.821205 1.353 0.3086
X20 -0.607847 0.414501 -1.466 0.2802
R Squared: 0.864817
Sigma hat: 1.03803
Number of cases: 18
Number of cases used: 15
Degrees of freedom: 2
Summary Analysis of Variance Table
Source df SS MS F p-value
Regression 12 13.7864 1.14887 1.07 0.5816
Residual 2 2.15501 1.0775
Data set = RAESurveyAll, Name of Fit = L2
Normal Regression
Kernel mean function = Identity
Response = Y0
Terms = (X01 X05 X06 X12 X13 X14 X15 X16 X17 X18 X19 X20)
Cases not used and missing at least one value are:
(549853339 549317657 559609609)
Forward Selection: Sequentially add terms
that minimize the value of C_I.
All fits include an intercept.
Base terms: Intercept
df RSS | k C_I
Add: X16 13 12.9893 | 2 1.055
Add: X01 13 13.6382 | 2 1.657
Add: X12 13 14.7556 | 2 2.694
Add: X20 13 15.0843 | 2 2.999
Add: X19 13 15.2297 | 2 3.134
Add: X14 13 15.4421 | 2 3.331
Add: X05 13 15.736 | 2 3.604
Add: X17 13 15.7727 | 2 3.638
Add: X18 13 15.8347 | 2 3.696
Add: X15 13 15.9414 | 2 3.795
Add: X06 13 15.9414 | 2 3.795
Add: X13 13 15.9414 | 2 3.795
Base terms: (X16)
df RSS | k C_I
Add: X18 12 11.1025 | 3 1.304
Add: X14 12 11.9703 | 3 2.109
Add: X01 12 12.0581 | 3 2.191
Add: X20 12 12.2241 | 3 2.345
Add: X15 12 12.5508 | 3 2.648
Add: X06 12 12.5621 | 3 2.659
Add: X13 12 12.635 | 3 2.726
Add: X17 12 12.8867 | 3 2.960
Add: X05 12 12.8973 | 3 2.970
Add: X19 12 12.9007 | 3 2.973
Add: X12 12 12.9238 | 3 2.994
Base terms: (X16 X18)
df RSS | k C_I
Add: X19 11 9.90711 | 4 2.194
Add: X20 11 10.4118 | 4 2.663
Add: X01 11 10.4611 | 4 2.709
Add: X14 11 10.6473 | 4 2.881
Add: X06 11 10.762 | 4 2.988
Add: X17 11 10.7967 | 4 3.020
Add: X15 11 10.9815 | 4 3.192
Add: X05 11 11.0573 | 4 3.262
Add: X13 11 11.0945 | 4 3.296
Add: X12 11 11.1004 | 4 3.302
Base terms: (X16 X18 X19)
df RSS | k C_I
Add: X06 10 8.05009 | 5 2.471
Add: X20 10 8.50964 | 5 2.898
Add: X01 10 9.56405 | 5 3.876
Add: X17 10 9.61956 | 5 3.928
Add: X15 10 9.64785 | 5 3.954
Add: X14 10 9.73037 | 5 4.030
Add: X12 10 9.81952 | 5 4.113
Add: X13 10 9.89331 | 5 4.182
Add: X05 10 9.89516 | 5 4.183
Base terms: (X16 X18 X19 X06)
df RSS | k C_I
Add: X17 9 6.79618 | 6 3.307
Add: X20 9 7.30745 | 6 3.782
Add: X12 9 7.4118 | 6 3.879
Add: X05 9 7.70279 | 6 4.149
Add: X15 9 7.78085 | 6 4.221
Add: X01 9 7.84801 | 6 4.284
Add: X13 9 7.89522 | 6 4.327
Add: X14 9 8.00792 | 6 4.432
Base terms: (X16 X18 X19 X06 X17)
df RSS | k C_I
Add: X15 8 5.41883 | 7 4.029
Add: X20 8 5.90371 | 7 4.479
Add: X01 8 6.34025 | 7 4.884
Add: X13 8 6.52286 | 7 5.054
Add: X12 8 6.65396 | 7 5.175
Add: X14 8 6.6674 | 7 5.188
Add: X05 8 6.75885 | 7 5.273
Base terms: (X16 X18 X19 X06 X17 X15)
df RSS | k C_I
Add: X20 7 3.19248 | 8 3.963
Add: X01 7 4.70283 | 8 5.365
Add: X05 7 5.0997 | 8 5.733
Add: X13 7 5.32178 | 8 5.939
Add: X12 7 5.34885 | 8 5.964
Add: X14 7 5.37944 | 8 5.993
Base terms: (X16 X18 X19 X06 X17 X15 X20)
df RSS | k C_I
Add: X05 6 2.81569 | 9 5.613
Add: X01 6 3.00508 | 9 5.789
Add: X13 6 3.04526 | 9 5.826
Add: X12 6 3.13693 | 9 5.911
Add: X14 6 3.17571 | 9 5.947
Base terms: (X16 X18 X19 X06 X17 X15 X20 X05)
df RSS | k C_I
Add: X13 5 2.61573 | 10 7.428
Add: X14 5 2.79387 | 10 7.593
Add: X12 5 2.81267 | 10 7.610
Add: X01 5 2.81567 | 10 7.613
Base terms: (X16 X18 X19 X06 X17 X15 X20 X05 X13)
df RSS | k C_I
Add: X14 4 2.4583 | 11 9.281
Add: X01 4 2.59872 | 11 9.412
Add: X12 4 2.615 | 11 9.427
Base terms: (X16 X18 X19 X06 X17 X15 X20 X05 X13 X14)
df RSS | k C_I
Add: X12 3 2.15841 | 12 11.003
Add: X01 3 2.45809 | 12 11.281
Base terms: (X16 X18 X19 X06 X17 X15 X20 X05 X13 X14 X12)
df RSS | k C_I
Add: X01 2 2.15501 | 13 13.000
Data set = RAESurveyAll, Name of Fit = L2
Normal Regression
Kernel mean function = Identity
Response = Y0
Terms = (X01 X05 X06 X12 X13 X14 X15 X16 X17 X18 X19 X20)
Cases not used and missing at least one value are:
(549853339 549317657 559609609)
Backward Elimination: Sequentially remove terms
that give the smallest change in C_I.
All fits include an intercept.
Current terms: (X01 X05 X06 X12 X13 X14 X15 X16 X17 X18 X19 X20)
df RSS | k C_I
Delete: X01 3 2.15841 | 12 11.003
Delete: X05 3 2.1765 | 12 11.020
Delete: X12 3 2.45809 | 12 11.281
Delete: X14 3 2.59236 | 12 11.406
Delete: X13 3 2.78082 | 12 11.581
Delete: X06 3 2.81342 | 12 11.611
Delete: X18 3 2.94923 | 12 11.737
Delete: X16 3 3.21433 | 12 11.983
Delete: X17 3 3.64077 | 12 12.379
Delete: X19 3 4.12832 | 12 12.831
Delete: X15 3 4.30439 | 12 12.995
Delete: X20 3 4.47217 | 12 13.150
Current terms: (X05 X06 X12 X13 X14 X15 X16 X17 X18 X19 X20)
df RSS | k C_I
Delete: X05 4 2.18201 | 11 9.025
Delete: X12 4 2.4583 | 11 9.281
Delete: X14 4 2.615 | 11 9.427
Delete: X13 4 2.78373 | 11 9.583
Delete: X06 4 3.57887 | 11 10.321
Delete: X17 4 4.34937 | 11 11.037
Delete: X18 4 4.35615 | 11 11.043
Delete: X15 4 4.4329 | 11 11.114
Delete: X20 4 4.6466 | 11 11.312
Delete: X16 4 5.52441 | 11 12.127
Delete: X19 4 6.0286 | 11 12.595
Current terms: (X06 X12 X13 X14 X15 X16 X17 X18 X19 X20)
df RSS | k C_I
Delete: X12 5 2.55579 | 10 7.372
Delete: X14 5 3.01196 | 10 7.795
Delete: X13 5 3.13488 | 10 7.909
Delete: X06 5 3.89956 | 10 8.619
Delete: X18 5 4.50713 | 10 9.183
Delete: X15 5 4.65982 | 10 9.325
Delete: X20 5 4.92412 | 10 9.570
Delete: X17 5 5.07796 | 10 9.713
Delete: X16 5 5.57437 | 10 10.173
Delete: X19 5 7.99065 | 10 12.416
Current terms: (X06 X13 X14 X15 X16 X17 X18 X19 X20)
df RSS | k C_I
Delete: X14 6 3.04526 | 9 5.826
Delete: X13 6 3.17571 | 9 5.947
Delete: X06 6 3.9516 | 9 6.667
Delete: X15 6 4.95592 | 9 7.599
Delete: X18 6 5.02173 | 9 7.661
Delete: X20 6 5.31851 | 9 7.936
Delete: X17 6 6.63179 | 9 9.155
Delete: X19 6 8.22337 | 9 10.632
Delete: X16 6 11.8994 | 9 14.043
Current terms: (X06 X13 X15 X16 X17 X18 X19 X20)
df RSS | k C_I
Delete: X13 7 3.19248 | 8 3.963
Delete: X06 7 5.04738 | 8 5.684
Delete: X20 7 5.32178 | 8 5.939
Delete: X15 7 5.36234 | 8 5.977
Delete: X18 7 5.94133 | 8 6.514
Delete: X17 7 6.63698 | 8 7.160
Delete: X19 7 8.41774 | 8 8.812
Delete: X16 7 11.9044 | 8 12.048
Current terms: (X06 X15 X16 X17 X18 X19 X20)
df RSS | k C_I
Delete: X20 8 5.41883 | 7 4.029
Delete: X06 8 5.85483 | 7 4.434
Delete: X15 8 5.90371 | 7 4.479
Delete: X17 8 6.64902 | 7 5.171
Delete: X18 8 8.26003 | 7 6.666
Delete: X19 8 8.64564 | 7 7.024
Delete: X16 8 12.1722 | 7 10.297
Current terms: (X06 X15 X16 X17 X18 X19)
df RSS | k C_I
Delete: X15 9 6.79618 | 6 3.307
Delete: X17 9 7.78085 | 6 4.221
Delete: X06 9 8.94977 | 6 5.306
Delete: X19 9 9.64129 | 6 5.948
Delete: X18 9 10.1354 | 6 6.406
Delete: X16 9 14.3143 | 6 10.285
Current terms: (X06 X16 X17 X18 X19)
df RSS | k C_I
Delete: X17 10 8.05009 | 5 2.471
Delete: X06 10 9.61956 | 5 3.928
Delete: X19 10 10.1488 | 5 4.419
Delete: X18 10 11.6795 | 5 5.839
Delete: X16 10 14.5962 | 5 8.546
Current terms: (X06 X16 X18 X19)
df RSS | k C_I
Delete: X06 11 9.90711 | 4 2.194
Delete: X19 11 10.762 | 4 2.988
Delete: X18 11 12.0675 | 4 4.200
Delete: X16 11 14.9007 | 4 6.829
Current terms: (X16 X18 X19)
df RSS | k C_I
Delete: X19 12 11.1025 | 3 1.304
Delete: X18 12 12.9007 | 3 2.973
Delete: X16 12 15.2031 | 3 5.110
Current terms: (X16 X18)
df RSS | k C_I
Delete: X18 13 12.9893 | 2 1.055
Delete: X16 13 15.8347 | 2 3.696
B.6.3 Final Regression for Y0
Data set = RAESurveyAll, Name of Fit = L6
Deleted cases are
(549861855)
Normal Regression
Kernel mean function = Identity
Response = Y0
Terms = (ARDCOM AVERCOM X16 AVERIMP)
Cases not used and missing at least one value are:
(559609609 543132264)
Coefficient Estimates
Label Estimate Std. Error t-value p-value
Constant 0.172752 0.176404 0.979 0.3505
ARDCOM 0.505454 0.391830 1.290 0.2261
AVERCOM 0.603351 0.457424 1.319 0.2166
X16 -0.676788 0.210299 -3.218 0.0092
AVERIMP -0.865139 0.381677 -2.267 0.0468
R Squared: 0.720857
Sigma hat: 0.604882
Number of cases: 18
Number of cases used: 15
Degrees of freedom: 10
Summary Analysis of Variance Table
Source df SS MS F p-value
Regression 4 9.44855 2.36214 6.46 0.0078
Residual 10 3.65883 0.365883
Data set = RAESurveyAll, Name of Fit = L10
Deleted cases are
(549861855)
Normal Regression
Kernel mean function = Identity
Response = Y0
Terms = (X16 AVALIMP AVERCOM AVERIMP)
Cases not used and missing at least one value are:
(559609609 543132264)
Coefficient Estimates
Label Estimate Std. Error t-value p-value
Constant 0.324600 0.140713 2.307 0.0437
X16 -0.553884 0.186632 -2.968 0.0141
AVALIMP 0.899002 0.357597 2.514 0.0307
AVERCOM 0.879613 0.262355 3.353 0.0073
AVERIMP -1.66958 0.407025 -4.102 0.0021
R Squared: 0.800497
Sigma hat: 0.511367
Number of cases: 18
Number of cases used: 15
Degrees of freedom: 10
Summary Analysis of Variance Table
Source df SS MS F p-value
Regression 4 10.4924 2.6231 10.03 0.0016
Residual 10 2.61496 0.261496
Appendix C
Surface Assessment Robot Project Analysis
C.1 Introduction
In this case, we examine a company that desires to reduce the time to inspect the smoothness
of a road surface while maintaining the same inspection quality as the manual method historically
employed. However, although the delivered robotic system reduced the inspection time by a factor
of 100 and met return-on-investment goals, the robot was not transitioned into operational use.
This case is interesting, as the acquisition involved well-exercised methods that had yielded
successes for the developer and acquisition organizations in the past, with most of the people
involved having personal experience in bringing projects to success. The main deviation for the
users was that this was now a robotic system, rather than a structure or manually operated tool.
This case study begins by introducing the acquisition details of interest: the environment, the
organizations involved, the task to be performed by the robot, and the related technologies. The
case then discusses the development of the system from concept exploration to robot construction,
focusing on the interaction points between the developers and acquirers to keep attention on the
acquisition issues. Next, the final demonstrations and the discovery that led to the system not
being put into operational use are examined. Finally, we conclude with a discussion of the various
methods employed and their effectiveness in meeting the needs of this acquisition.
C.2 Background
This section covers the cast of players, a brief description of the technical task that was eventually
selected for automation, the environment in which the acquisition took place, and the organiza-
tional structure of the client, acquisition organization, and the development organization.
C.2.1 Cast of Players
Acquirer - refers to one of many specific individuals who work for the client. At various points,
this individual may be a management person or an engineer who works for the client. This
individual may or may not be part of the Acquisition Organization (AO); an example of an acquirer
who is not part of the AO is an independent engineering expert who advises the AO but does not
report to the AO.
Acquiring Organization - referred to as the “AO”, is a group within a corporation that is
responsible for executing acquisition tasks to bring in new systems or capabilities.
Developers - Collectively, the technical staff and graduate students of Carnegie Mellon
University who are working to design and deliver the robot.
Development Organization - in this case, collectively Carnegie Mellon University and the
Institute for Complex Engineered Systems.
Client - the acquiring corporation, taken as a whole company.
User - the user of the robotic system, in this case a field worker charged with performing an
assessment of a road surface.
C.2.2 Road Surface Assessment Domain Description
Assessment of constructed environments is a time-consuming activity, typically done late in the
build phase of a civil engineering project. Although some predictive measurements can be made
to gauge the quality of the process that will yield a final built component, some requirements are
sufficiently important that the end product must be verified against the quality standard. This is
the construction manager's dilemma: inspections are expensive and time-consuming, but delivering
a structure that does not meet a critical requirement can be even more damaging. Any system
that dramatically reduces the inspection cycle is therefore a straightforward win for the construction
manager. Indeed, in many cases even subcontractors desire quick, thorough inspections, since a
completed inspection triggers release of their contingency payment (held in case of latent defect
discovery). Overall, a solution that reduces the inspection time has an easily demonstrated return
on investment.
The quality of a road surface is usually assessed by its smoothness, which is measured as a
maximum deviation from a desired elevation profile (line) over a specified distance, also termed
roughness. The main standard for determining and expressing roughness is promulgated by the
World Bank (which funds a large percentage of public works road projects worldwide). A road
roughness measuring device will typically trace itself back through various tests and certifications
to this standard, which is the generally accepted standard by which to express quality requirements
for a delivered road surface. A typical US highway road roughness standard is 1/4 inch in
10 feet.
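The straightedge criterion above (maximum deviation from a chord laid over a specified distance) can be sketched for a sampled elevation profile. This is an illustrative reconstruction under the stated standard, not the client's actual inspection code:

```python
import numpy as np

def max_straightedge_deviation(x, z, edge_len):
    """Largest vertical deviation of a sampled profile z(x) from a straight
    edge laid between the endpoints of each window of length edge_len.
    x: sorted longitudinal positions (same units as edge_len); z: elevations."""
    worst = 0.0
    for i in range(len(x)):
        # last sample within edge_len ahead of x[i]
        j = int(np.searchsorted(x, x[i] + edge_len, side="right")) - 1
        if j <= i:
            continue
        # chord (straight edge) between the window's endpoints
        chord = z[i] + (z[j] - z[i]) * (x[i:j + 1] - x[i]) / (x[j] - x[i])
        worst = max(worst, float(np.max(np.abs(z[i:j + 1] - chord))))
    return worst
```

With x in feet, z in inches, and edge_len=10, a profile meeting the typical highway standard would return at most 0.25.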
The client desires to operate vehicles at moderate speeds (30-40 miles per hour) in which people
would be standing. Passengers of this system can range from children to the elderly. Hence, the
vehicle must impart only small amounts of jerk (the 3rd derivative of position, the “velocity of
acceleration”) and yank (the 4th derivative of position, the “acceleration of acceleration”) to
passengers. This becomes the performance requirement for the combined vehicle-roadway system.
To meet this requirement, typical US highway road roughness standards would not suffice for the
client's vehicle system, so a smoother roadway is commissioned. Thus, the client expresses a
desired roughness to a subcontractor that will provide the concrete road surface.
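Given a sampled position trace of the vehicle, jerk and yank can be estimated by repeated finite differencing; a minimal sketch, assuming uniform sampling (not drawn from the case itself):

```python
import numpy as np

def jerk_and_yank(position, dt):
    """Estimate jerk (3rd derivative of position) and yank (4th derivative)
    from a uniformly sampled position trace by repeated finite differencing."""
    d = np.asarray(position, dtype=float)
    for _ in range(3):            # position -> velocity -> acceleration -> jerk
        d = np.diff(d) / dt
    jerk = d
    yank = np.diff(jerk) / dt     # one more difference gives yank
    return jerk, yank
```

Since jerk and yank cannot be measured until the vehicle is installed, roughness serves as the measurable proxy, as the next section describes.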
The road needs to be inspected for suitability before further infrastructure is installed to support
the vehicle (correcting the roadway afterward would incur massive rework costs to remove and
reinstall the additional infrastructure). Since the vehicle cannot be installed and jerk and yank
measured directly, roughness is measured instead. The client has typically done this by manual
means, using an engineer with an extruded metal straight edge to look for dips under the edge
or lever action beyond the tolerated specification. Although this process is very slow, it gives the
engineer two main benefits. First, the engineer could perform dual wheel profile correlation
(determining variation between two related profiles, which was not done by profiling tools at the
time). Second, the method allowed the engineer to mark deviations with pinpoint accuracy as they
were detected, avoiding the cost and time of a survey team relocating a deviation from a
commercial road profiler's report.
Once a deviation is detected and identified, the method for correcting it depends on its nature.
For a “bump” (a deviation above the upper control limit, e.g., more than 1/4 inch on a highway),
a grinder can be brought in to grind the bump down to the level of the rest of the profile. A
“dip” (a deviation below the lower control limit), however, requires more extensive work: a region
5 feet to either side of the dip must be demolished and the concrete recast to reform the road in
that region. No known patch bonds well enough to concrete to fill shallow dips with an adequate
lifespan; patches tend to delaminate or fail to bond sufficiently. This “dip” case is the main
motivation for early inspection, as the additional infrastructure is installed on the roadway before
the vehicle can operate (and thus would have to be removed to correct a dip deviation).
C.2.3 Acquisition Environment
The concept originated in the summer of 2000, after one faculty member and one graduate
student at CMU completed an engineering design course project with the client, and continued
through various concept explorations until the summer of 2001 with the aim of identifying a future
civil engineering inspection tool that would generate significant value for the client. A decision
was made to select one of the engineering designs presented and develop it into a tool to aid the
client's road surface inspection.
This acquisition occurred as a sponsored (a.k.a. contracted) research agreement between
the client and CMU. Also, matching funding was secured from the Pennsylvania Infrastructure
Technology Alliance.
The client and CMU are both in the Pittsburgh area, and the client’s senior manager for R&D
maintained an office on the CMU campus and coordinated meeting rooms and test facilities at the
corporate development site (also in the Pittsburgh area). Travel time between the sites was less
than 30 minutes.
Because CMU is an academic institution, no formal deliverable could be required; thus, hardware
was to be purchased by the client and made available to the developer for modification and
engineering, with the final system remaining the property of the client.
The main road sensor was a commercial product, assessed to be ready for integration, costing $42k.
The project was funded over 3 years at $250k to CMU to engineer the prototype robot system. Some of this cost covered the initial prototyping of the component technology for the marking system; however, this was a fairly low-risk technology, led by an advanced undergraduate student, and hence cost only around $25k in materials and labor.
Thus the total cost of prototypes of the component technologies was just $67k, while $225k
supported integration of the components into the robot and that robot’s design as part of the
client’s road assessment process.
C.2.4 Client and Acquiring Organization
The acquiring organization was the research and development office of the client. The AO had a
dedicated senior manager, who participated in weekly meetings with the developers and coordi-
nated access to various experts on the client’s system. The AO also had an engineer on staff who
helped with various research projects undertaken by the client. This engineer had volunteered to
work for the R&D office and had come from one of their engineering departments. The position
was apparently set up so that people could rotate into research and then back to main projects,
which happened once in the 2 years.
The client delivers large scale civil engineering projects to customers, and those projects are
managed by independent general managers who are directly responsible to the client’s president
and board of directors. The civil engineers who inspect the roadways report through their man-
agement chain to this general manager. Of specific interest is that only one civil engineer was
known to perform highly reliable inspections, and he was significantly past retirement age and
desiring to retire “with peace of mind that the job will be done well”. New engineers had been
trained, but they either disliked the assessment job so much that they requested to be transferred
or were poor performers at this inspection. The senior inspector was in high demand by all gen-
eral managers for inspection, but was available for teleconferences (from whatever job site he
was at) and the occasional in-person meeting. Also, the general manager who was going to uti-
lize this system on his project was also a graduate student in civil engineering at CMU (although
he did not work on this project). The general manager participated in the guidance of this project
through the reviews, major meetings, and document reviews.
The client also had a quality assurance organization that was their 6-Sigma™ and Total Quality Management™ champion. This organization established procedures for tracking and handling
deviations from specifications. They also maintained the corporate database on deviations, aiding
the organization in identifying processes to be improved and, very important to the general man-
agers, which subcontractors were the best performers. The inspecting civil engineer was required
to report deviations through the mechanisms specified by the quality assurance office, but did not
work directly for them.
This focus on total quality permeated the whole organization; consider the example of the engineer who would not retire until he knew the quality would be okay. The engineers believed in their products and believed that high-quality engineering was what differentiated their systems from those of any competitors.
C.2.5 Developing Organization
The Institute for Complex Engineered Systems (ICES) at CMU was the developmental organiza-
tion. Members of the development team included faculty from Civil Engineering and Computer
Science/Robotics departments, technical staff from the Robotics Institute, and graduate (MS and
PhD) students in Civil Engineering and Robotics. The technical staff and the graduate students
were the development team, while faculty performed management functions and acted as senior
engineering internal reviewers. Two team members (one faculty and one graduate student) had
worked with the acquiring organization before in concept studies and/or larger projects. The
civil engineering department also provided access to a licensed civil engineer who specialized in
concrete construction to act as a special advisor for technical issues.
ICES was chosen by the client, faculty, and CMU to lead the development of the system due
to its experience with delivering engineering systems into complex environments and performing
concurrent engineering design and development in multiple disciplines. Further, ICES could pro-
vide concurrent engineering tools (concurrent versioning and authoring software for documents).
Overhead within ICES was minimal; the faculty were able to perform the organizational
management of the project without the need to involve developers (the students and technical
staff). Weekly team meetings kept the faculty and ICES organization, collectively the developer’s
management of the project, informed on status and progress, while developers would meet on a
more frequent “as-needed” basis.
Due to the involvement of ICES, there was some attention paid to the method of develop-
ment. As such, the team did produce a project management plan, a configuration management
plan, and a life-cycle plan for the project. Further, the team performed the tasks in the various
plans, including keeping records of audits of the configuration baseline and minutes of meetings
required by the various plans.
The Robotics Institute was able to provide laboratory and engineering space for the team to
build various test models (to include gaining experience with pouring concrete surfaces), as well
as room to assemble the deliverable system.
C.3 System Development
Briefly, the timeline for development was:
August 2000 through August 2001 - Concept Formation
August 2001 through January 2002 - Requirements Development
January 2002 through July 2002 - Preliminary Design
July 2002 through November 2002 - Critical Design
November 2002 through September 2003 - Robot Construction
September 2003 - Final Demonstrations
C.3.1 Concept Formation
From August 2000 to August 2001, CMU and the client (acquirers, engineers, general managers,
and quality assurance personnel) explored various concepts to increase the effectiveness of the
assessment of the civil subcomponents the client needed to deliver their total system. Studies
included budget, schedule, performance, safety, and corporate image as factors in determining
which concepts would be elaborated. Through several iterations with various engineers throughout the client's organization, the concept for a system to inspect and mark deviations automatically on a roadway
was introduced.
Eventually all the concepts were evaluated side-by-side to determine which project gave the
greatest return on investment in a way that integrated well with their current system delivery
process. In this way, with a projected 100:1 time savings for inspection and a potential to reclaim
the entire R&D cost of the system on an upcoming roadway project, the roadway assessment and
marking system became the selected concept.
Due to this early analysis, many of the business case and engineering process integration goals
were well-established in this phase and the project was quickly sold to all organizations within
the client. This additional attention to the business case and engineering integration enabled this
project to become very popular with the acquiring office, as the project established great cred-
ibility and attracted the attention and even volunteer time of engineers in different departments
within the client corporation to aid in the project.
C.3.2 Requirements Development
Riding the success of the concept review, the formal requirements definition phase began in August 2001 and ended around January 2002 with the acceptance of the baseline requirements document. During this phase, the development team expanded greatly. Most of the new members were
technical staff and graduate students and focused on developing early prototypes of component
technologies and identifying potential long-lead purchasing needs. The original development
concept team that proposed the surface assessment concept continued as the leads for require-
ments development.
The requirements phase had three main goals for the developers: develop requirements for the
objective system, verify and validate the requirements to ensure no “gold-plating” of requirements
and as broad a design space as possible, and prototype as many components as possible to ensure
that requirements could be realized.
Initial requirements were elicited by several traditional methods: direct stakeholder inter-
views, direct observation of the inspection process at an actual job-site, and inspection of con-
struction specifications and contracts. One non-traditional method used was that developers built
a sample concrete roadway via the methods specified to fully understand the process of con-
structing a concrete roadway and observe first hand what all the specifications meant in physical
terms.
Requirements were formally tracked in a requirements document. Each requirement was
evaluated against known standards (World Bank, IEEE, ASME, ASCE) to verify a requirement
was stated as completely as needed. If there were standards, the requirement’s parameters became
the subject of a meeting between the developer and acquirer to ensure that both sides knew what
was being specified and that both had a mutual understanding of the intent of the requirement.
Validation of requirements was accomplished by tracing requirements back to the initial con-
cept, and tracing them to the business propositions or engineering processes from which the
requirements were derived. Further validation of the requirements was performed through the
prototyping and analysis efforts, to ensure that the requirements were reasonable. Finally, re-
quirements were validated through a formal requirements review, which the acquirer brought in
several engineering experts from all areas of competency that this project would impact (software,
civil structure, mechanical and control engineers). The review was not pro-forma, and the acquir-
ing engineers all had prior exposure to the project and time to review the requirements before the
formal acceptance meeting.
Constraints were also addressed in the requirements document, but typically were only traced
to the person or organization that levied the constraint. The idea was that constraints represented
areas where, in the professional judgement of the acquirer, they were unwilling to consider solu-
tions.
C.3.3 Preliminary Design
From January 2002 through July 2002, the preliminary design of the system progressed quickly.
The rapid progress was aided by the early technology prototypes and the apparent stability of the
requirements.
During this phase, one design was to have the assessment and marking system make an engineering judgement as to the severity of a deviation and whether or not to mark it. The
determination was to be made by using an industry standard method of simulating a quarter-car
model over the observed profile and marking those deviations that resulted in expected jerk and
yank conditions that exceeded a configurable safety margin around the system specification. The
concept would be to record the deviation in the formal report that the engineer would review.
Generating a document of deviations was a requirement by both the engineers and the quality
assurance organization, and thus represented no additional work, except to note which devia-
tions had not been physically marked. The engineer could then indicate which deviations, if any,
should be marked in an additional marking pass, if needed.
This solution was disliked by the acquiring engineers and the veteran inspecting engineer. In
their experience there were not many deviations, and that any deviation needed to be marked and
would be corrected. Only rarely would a deviation not be corrected, and that required many engi-
neers to consider the potential impact. Further, since the steel straight edge provided a continuous
physical edge, the manual system was considered by their engineers to be more accurate than the
best robotic technical solution, which used inertial sensors and a discrete, rapid-fire laser range
finder to determine deviations along a profile. It was believed that the laser range finder (which sampled every few millimeters) might miss slight deviations that the continuous straight edge would be able to detect (especially “bumps”, which would cause a straight edge to lift off the road at one end or the other). Also, the quality assurance organization desired
to know about all deviations to track the effectiveness of the oversight of the subcontractors and
the effectiveness of the subcontractors for future project consideration.
Thus, a constraint was added to the requirements that the system make no engineering judge-
ment and physically mark all encountered deviations. Since this additional constraint was prop-
erly traced to the requiring organizations and individuals, and did not appear to cause any design
impact to the system, it was quickly accepted. Indeed, the main difference to the developers was the model that would be used to determine where to place marks. Seeing the potential for a follow-on modification, the designers maintained a design that would facilitate easy adaptation of the system between the control-limit method and the quarter-car model analysis method for determining where to place a mark.
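The two marking policies discussed above can be sketched side by side. This is a hypothetical illustration only: the function names and thresholds are assumptions, and the ride-response argument merely stands in for a full quarter-car simulation, which is not reproduced here.

```python
# Illustrative sketch of the two marking policies described above.
# All names, thresholds, and the stand-in ride-response function are
# assumptions for illustration, not the project's actual implementation.

def control_limit_marks(deviations, limit_in=0.125):
    """Adopted policy: mark every point whose |deviation| (inches)
    exceeds the control limit, with no engineering judgement."""
    return [i for i, d in enumerate(deviations) if abs(d) > limit_in]

def quarter_car_marks(deviations, ride_response, margin):
    """Retained design option: mark only deviations whose simulated
    ride response exceeds a safety margin. `ride_response` stands in
    for a quarter-car model simulated over the observed profile."""
    return [i for i, d in enumerate(deviations)
            if ride_response(d) > margin]

profile = [0.0, 0.2, -0.05, 0.3, 0.01]
print(control_limit_marks(profile))  # every out-of-limit point is marked
```

Keeping both policies behind the same interface is what made the later adaptation between the two methods a small change for the developers.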
Many other subsystems were designed, such as the mobility subsystem (mounted on a Gator™ 4x4), user interface, marking subsystem, environmental enclosures, road profile sensing, and others. Various development methods were used to prototype, verify, and validate each subsystem
design.
A final preliminary design was presented and traced to the requirements in a formal prelim-
inary design review. This design review was similar in method to the requirements review, and
involved many experts who were provided with the chance to critically review the designs and
provide inputs towards building the detailed designs.
C.3.4 Critical Design Review
From July 2002 to November 2002, the detailed design continued. Although minor requirements work continued, the focus shifted extensively to prototyping and design. Also, some engineering development began on long-lead purchase items.
At this phase, requirements and constraints remained largely stable, with only minor value en-
gineering changes to decrease production costs of the objective system. The largest requirements
change was to relax the environmental enclosure requirements for the electronics. Initially, a high
containment system was desired, but upon discovery of the high price of that level of protection,
a lower and more cost effective level was suggested and accepted by the acquirer.
Prototyping effort increased substantially in this phase. Only subsystems designated as low or very low risk were not prototyped; indeed, the criteria for being low risk were that the technology solution had to be straightforward to all members of the team, a member had to have experience performing that specific development work, and thus the subsystem would be highly precedented. Prototypes for higher-risk subsystems were acknowledged to be
candidates for designs and even for inclusion in the final system construction. In retrospect, many
of these prototype subsystems were second or third generation prototypes and the prototypes
generated during this phase were successfully integrated into the objective system during the
robot construction phase.
The design effort continued, the goal being to specify the design considerations for all subsystems and to ensure that all requirements were realized through one or more subsystems in
a specified manner. The specifications also included information as to how the system was to
be transitioned, operated, and maintained by the client. Special attention was paid to software
development requirements for maintenance and robot transportation (between worldwide job-
sites).
Much as the previous two reviews, once the designs and specifications were completed, they
were circulated to the acquirer. Again, the reviews consisted of various experts from the client
who were able to validate that the designs satisfied their requirements or concerns with using the
system. After the review was accepted, the team moved into the robot construction phase.
C.3.5 Robot Construction
Full focus was given to the robot construction phase of the objective system from November 2002 through September 2003. In this phase, effort was focused principally on building final system components. Prototypes that were found to have acceptable quality for the objective system were to be
integrated and tested with other components. Another main activity was to perform testing at the
client’s developmental facility.
Robot construction, while substantial in effort, was essentially straightforward due to the
large quantity of prototypes that provided the necessary experience to build the final system.
Prototypes for the marking system were modified for inclusion to the final product. Experience
with prototyping the marking control software was quickly ported to the operational computa-
tional platform. The commercial road profiler was also integrated onto the platform. In all, the
few integration issues encountered were quickly solved and did not impact the cost, schedule, or
performance - as evidenced through the developmental tests.
The client provided access to their development track, which was at their primary facility
in the Pittsburgh area. This roadway was not fresh, having been built several years before for
the client’s system testing. Initially, several client engineers were present to observe the tests,
to ensure that the system did not damage the test track. This was important, as the track was
also used to test new vehicles that were produced at the facility. An additional requirement of
the system was to be able to inspect the roadway with the additional infrastructure installed,
hence the client engineers were eager to see that the system did not contact or damage any of
the installed infrastructure. Eventually, the amount of oversight was reduced to just one engineer.
Also, acquisition personnel were present to observe the system during later tests.
Because the track had been in use for a few years and was well qualified for testing the client's vehicles, there were no deviations to observe in the track. Hence the developers were forced
to induce deviations (foreign objects) so that the system could detect and mark the deviations.
Inspection rates were observed to be in line with the requirements, meaning that the system was able to meet the business case goal of a 100:1 improvement in inspection time. Acquisition
personnel, and especially the interested general manager were thus able to see the system in action
on the qualification track before deciding to ship the system to the real-world track that it was to
inspect.
C.4 System Delivery and Demonstration
In September 2003, the system was shipped to a project site that had a real-world roadway pro-
duced and ready for inspection. This inspection was the final involvement of the developer with
the assessment and marking system before the system was transferred to the client.
The roadway had been produced by the best-qualified subcontractor for roadways. The client
had extensive experience with this specific subcontractor, and in the client’s opinion, only a few
deviations were expected in the several miles of roadway to be inspected. The acquirer brought
in the general manager, site engineers for the client, and management from the subcontractor to
observe the system in action. However, the system did not operate as expected.
The system operated entirely according to the specification. The problem was that, on the marking pass of the roadway, the system marked two orders of magnitude (100x) more deviations
than expected. Initially, the observers thought the system was malfunctioning. The senior inspec-
tion engineer, present to observe the system that he hoped would enable him to retire in peace,
was then asked to check the marks. The surprise was that the deviations were valid, but the ma-
jority (99%+) were minor and in the engineer’s opinion would not need to be corrected for the
vehicle to meet ride quality requirements. Typically, these deviations were less than 1/32 of an
inch beyond the control limits, and the engineer indicated that the deviations would not even have
been recorded in his field observations.
This revelation by the inspecting engineer was problematic for the quality assurance representative present to observe the system. As the deviation information was essential to determine
which road surface companies were “best qualified” to meet the contractual specification (re-
member that the specification to the subcontractor is in roughness), the improper handling of any
quality information may be construed as favoritism to specific subcontractors.
The general manager, preferring to ensure compliance to the delivery specification for ride
quality, felt that that many deviations would require too much explanation to the client’s customer
as to why there were so many unmitigated deviations being recorded during delivery - the client’s
client acts as an acquirer and reviews the client’s progress. In effect, the general manager felt
that too many deviations were being marked and made the construction effort appear to have
more problems than it really had. Further, expanding the control limit to a non-standard level (the World Bank specifications provide for specified levels, with no level between 1/8” and 1/4” in 10’) would cause cost increases due to the lack of standards support; for example, 5/32” in 10’ would be more expensive than 1/8” in 10’ due to the precision the subcontractor must employ to control to that many decimal places.
As a follow up, the former PI (Garrett 2007) indicated that the company asked again in early
2007 for CMU to develop an automated system to mark defects on the roadways. On further
investigation, Prof. Garrett said that the official reason the previous system was not accepted
was that the interface, while meeting requirements, still required training for the operator to
properly calibrate and initialize the vehicle on the track and that the previous managers felt the
calibration was “too complex”. The new management team asking for this seemed unfamiliar
with the previous work. CMU indicated that replicating the same system again was a development
activity with minimal research gain and hence out of scope.
Appendix D
Global Hawk Project Analysis
D.1 Introduction
The High-Altitude, Long-Endurance Unmanned Aerial Vehicle (HALE UAV) Reconnaissance
program RQ-4A and RQ-4B, dubbed “Global Hawk”, illustrates some of the political frustrations
of converting an Advanced Capability Technology Demonstration (ACTD) to a formal acquisi-
tion program. Initially a research technology demonstrator under the direction of the Defense
Advanced Research Projects Agency (DARPA) until 2001, the program was transferred to the
US Air Force (USAF) as a Major Defense Acquisition Program (MDAP) of the highest oversight
category, “ID”. The category “I” implies programs of the largest size in dollar value, while the
“D” indicates that this project is important to the whole of the Department of Defense.
The Global Hawk is a reconnaissance UAV designed to be piloted remotely and send sensor data back to intelligence analysts for interpretation. The pilot and ground crew can fly the UAV
and operate the sensors from a ground station that can potentially be anywhere in the world, for
example in a safe location thousands of miles away from the Global Hawk over a battlefield.
The government has a method for demonstrating advanced technologies through ACTD projects.
This type of project is supposed to help the government prove technologies before requiring them
in a formal acquisition or procurement program. However, the political process can have is-
sues transferring from the “big science project” mentality of the ACTD to the formal, repeatable,
cost-constrained environment of formal acquisition programs.
As can be imagined, moving from a conceptual technology demonstration to production can
bring headaches to engineers and managers. Global Hawk is no exception here. Indeed, we can
trace some of the political challenges being experienced by the program in 2005/2006 back to
this transition in 2001. In order to trace some of these problems, it is important to consider the
various political facts of life (as proposed by Dr. Forman (Foreman 1995)) that the Global Hawk
program experienced during the 2005/2006 period as candidate reasons for how and why the
political process interacted with the Global Hawk program.
According to RAND, initial development of the Global Hawk ACTD paid $238 million to
contractors, with additional government costs of $40 million for the Phase II development for a
total of $278 million. According to a different RAND report (Drezner 2002b), only $390 million
was added to the budget line for the Global Hawk to cover non-recurring engineering, engineering
for manufacturing and development, as well as purchasing additional Global Hawk vehicles in the
new acquisition program. According to the report, the base amount in the budget was to continue
operation of the ACTD vehicles and procure more air vehicles. Ultimately, the program office
has considerably overrun this initial budget projection.
This case study first examines the history of military UAVs and then more specifically the
history of the Global Hawk system, focusing on events around the transition from ACTD to a
MDAP in 2001 and then on political events in 2005/2006. From there, the political facts of life
are discussed, with examples of how these facts occurred in this case and where there may be connections between the current events and the 2001 transition. Finally, the paper concludes by
pulling together these events and analysis to indicate potential issues that future ACTD to MDAP
transitions may face, with an emphasis on observing which engineering methods may need to be
changed to meet the challenges of delivering a highly robotic system in this political environment.
D.2 Background
D.2.1 Early History of Unmanned Aerial Vehicles
In the beginning, there were balloons. The earliest reported use of an unmanned aerial platform
was by the Austrians in 1849, using balloons to overfly Venice and deliver munitions (Naughton
2003). This early system was neither remotely piloted nor automatically performing flight con-
trol; a fuse was set to deliver and detonate the munition 30 minutes after launch, providing some
unmanned or autonomous operation. However, this initial foray was not entirely successful, as
a small number of the balloons were caught in a wind that brought them back over the Austrian
lines before delivering their munition (and hence dropping some on their own troops!).
Toward the end of World War I, several unmanned aerial vehicles were being considered and
built by the British and by the United States. Specifically, the US Army attempted to acquire a
system that acted as an aerial torpedo, dubbed the “Kettering Bug” (Darling 2008), and was able
to demonstrate the system at Dayton, Ohio in 1918. However, the system was never employed as
World War I ended before the program entered the production phase.
In addition to attempts to deliver munitions remotely, the years before and early in World War II (WWII) saw a new class of unmanned aerial platforms: militarized, remotely piloted vehicles (RPVs). The most common US use for these RPVs was as target practice for anti-aircraft gunners (Klein
2002). Both the US Army and Navy procured many of these for this exact purpose. The US
Army program, OQ-2 Radioplane acquired about 15,000 units from the Radioplane Company in
California.
After WWII, UAVs began to appear in a new role: surveillance. This role was first exercised in the 1960s during the Vietnam War with the AQM-34 Firebee (Klein 2002), the
Teledyne-Ryan Compass Arrow series, and Boeing Compass Cope series (Goebel 2008c). These
early aircraft carried cameras and were capable of carrying infrared cameras and various SIGINT
payloads. Also, many UAV systems were now being developed and deployed by the US Marine Corps (Van Riper 1997) and the US Air Force (Goebel 2008c) in addition to the US Army.
Advertising an error rate of around a half percent (meaning off by 5 km for a 1,000 km flight),
these systems were typically launched and recovered from the air and equipped with self-destruct
mechanisms to prevent the sensitive payloads from falling into enemy hands.
With the success of UAVs, many acquisitions were started in the 1970s and 1980s. Most acquisition programs were very aggressive in their goals, but they shared many problems across the different UAV program offices. Eventually, Congress directed the formation of a Joint
Program Office (JPO) under the United States Air Force (USAF) around 1987 to consolidate all the UAV programs (Goebel 2008b). One of the first programs out of that office was the Hunter UAV, eventually redesignated the RQ-5A.
By this time, a guiding strategy for surveillance UAVs was needed. In answer to this, a framework for UAVs was created to guide their development. The USAF, through the JPO, created a tier system (Bierbaum 2008), as did the USMC (Gitlin 2007). These tier systems, although both have three tiers and are still in use today, are not directly comparable, and the various levels are not satisfied by the same airframes. One reason for this is that the USMC focuses on its marine operations support role, putting more emphasis on man- and small-unit-operated vehicles. For example, the USAF's “Tier II+” requirement was for “endurance UAVs” with mission durations of 40-50 hours at very high altitudes (e.g., 60,000 feet).
D.2.2 Global Hawk ACTD Program and Transition to a Major Acquisition Program
The Defense Advanced Research Projects Agency (DARPA) began a High Altitude Endurance Unmanned Aerial Vehicle (HAE UAV) ACTD in 1994 to meet the “Tier II+” requirements from
the USAF tier system. The government had just canceled a classified program entitled “Advanced
Airborne Reconnaissance System” at the end of 1993 due to budget overruns. It was thought at
the time that an open program would fare better. (Drezner 2002a) (Lott 2006)
The first prototype Global Hawk was delivered in Feb 1998. At the same time this was hap-
pening, the Global Hawk program had incurred funding hits and had been reduced from five down
to one developmental contractor (Ryan, later purchased by Northrop Grumman). The Global
Hawk flew only a few test flights before the USAF took possession of the ACTD in Oct 1998.
There are notes that this transition was a year behind schedule due to some production problems,
and no doubt, the budgetary issues. (Drezner 2002a) (Lott 2006)
The USAF completed the ACTD around Oct 2000, but had yet to have the post-ACTD ac-
tivities approved by the Defense Acquisition Board. Due to various delays in having the board
meet, final approval for entry into low-rate initial production and engineering and manufacturing
development wasn’t granted until March 2001. These approvals constituted the transition of the
program from its ACTD heritage to a MDAP. (Drezner 2002a)
One report, from Col Coale (USAF) (Coale 2006), listed five problems experienced as the
program transitioned between ACTD and MDAP. First, the testing community didn’t leverage
actual battlefield experiences in determining the combat utility of the system. The author asks,
“Why try to simulate the combat environment if we can assess the system in actual combat?” In
many cases, it was felt that the military tests were trying to recreate environments and situations
that the vehicle had already encountered in operations.
Second, logistics planning needed to occur sooner. Specifically, due to the ACTD nature of the
system, neither DARPA nor the USAF invested in logistics planning. Hence, when the system
was converted to an MDAP, logistics planning was already behind.
Third, there was no assessment of the contractor’s ability to perform on the expanded program,
as there is a large difference between building prototypes and building large numbers of
production units. Ideally, the government should have either re-competed the MDAP and
examined the sufficiency of the bidders’ capability to perform on a large contract, or mentored
the ACTD incumbent to develop the technical and management capability to run
the MDAP. Ultimately, Col Coale notes that the government did neither, and selected Teledyne
Ryan. Teledyne Ryan was bought out by Northrop Grumman. When problems occurred, Northrop
Grumman was able to invest corporate assets, although this was not done early enough to prevent
later problems.
Fourth, this program, like many others, experienced requirements creep. Although the system went
from concept to flying in 5 years, that cycle could have been shorter if additional capabilities had
been phased over more development spirals, rather than being front-loaded.
Finally, Col Coale recommended that manufacturing planning be accelerated. Essentially,
since the Global Hawk was launched into engineering and manufacturing development and low-rate
initial production simultaneously, several problems occurred because the manufacturing processes
established for the low-quantity ACTD prototypes were not appropriate for producing the larger
quantities for the MDAP.
RAND (Drezner 2002a) also performed a study on the transition. Their report indicated that
the ACTD nature of Global Hawk prevented certain types of acquisition planning due to “color
of money” (a government term referring to the fact that money can only be used for the limited
purposes for which Congress appropriated the funds) and regulatory issues. This impeded the
ability of the USAF to rapidly and effectively transition the program to an MDAP. The very
designation of the development as an ACTD, which the report cites as an inadequately understood
mechanism, may have contributed to the transition issues, as the traditional acquisition community
didn’t know what to expect to receive. Also, the user bases that service the two types of
activities (ACTD and MDAP) are different. ACTDs are supported by operational users in unified
commands, while MDAPs are supported by users in the force-providing command (corporate
USAF). This distinction is important, as the priorities of users in these different commands
differ, as do their routine funding mechanisms. This can make for very different priorities for
requirements.
A member of the development team revealed that the ACTD method for accounting and
describing the work to be done, the Work-Breakdown Structure (WBS), was very different from
the production system WBS (Anonymous 2006). The new WBS did not give managers and
engineers clear guidance on where to account their costs; hence the cost of working on one item
may have fit many potential categories in the WBS. Further, since typical program management
is structured around the form of the WBS, some costs were lost either because the development
item was thought to be housed in a different part of the WBS or because a change in one item’s
functions wasn’t reflected in the directly related integrating components that were being
accounted in another, distant WBS element.
There were also many problems in taking the prototype system to production. For example,
many of the hardware and software avionic interfaces were non-standard (custom) in order to
fit the space/weight/power constraints of the prototype. The engineering documentation was
lacking, meaning many of these interfaces were not documented, and in some cases their
existence as non-standard interfaces wasn’t recorded. This led to tremendous difficulty taking the
system into Low-Rate Initial Production (LRIP), as the production line didn’t really exist and had
to be created. (Marz 2006)
Of the 7 prototype units produced by the ACTD program, 4 were lost (Peck 2003). The
first two were lost in 1999 during testing. In March 1999, a Global Hawk was flown too high
by operators, lost its control signal, and aborted due to a termination signal from another base. In
December 1999, another Global Hawk was badly damaged when it overran the runway during a
taxiing test. The USAF blamed known software problems, while Northrop Grumman says the USAF
ignored a warning that a ground speed of 155 knots was excessive given both the length of the
Edwards AFB runway and the Global Hawk’s braking capability. The other two were lost
during operational use.
D.2.3 Early Transition to Operations
With the events of September 11th, 2001, the United States entered a new political situation,
the start of the Global War On Terrorism (GWOT). The politics of the time called for quick and
decisive action in Afghanistan, entitled Operation Enduring Freedom (OEF). Unfortunately, OEF
was in Afghanistan, remote from many friendly bases for military operations. Fortunately for the
reconnaissance mission, there were several prototype Global Hawk systems available to support
the needs of US Central Command and the OEF task force. By all accounts, the system performed
admirably and became popular with commanders in the field, despite the loss of two air-vehicles
during operations.
The operational losses, in December 2001 and July 2002, were over Afghanistan. Both crashes
were attributed to maintenance/material problems, pilot error, and restrictive landing policies;
policies that prohibited emergency landings at allied nations’ airstrips. In both crashes, the
air-vehicle was in the air and under control for some time after the initial problems. However,
the Global Hawks were only allowed to land where they had departed from, and both were unable
to complete the many-hour flight to return to station. (Peck 2003)
D.2.4 Technical Description of the Global Hawk RQ-4A and RQ-4B
The Global Hawk Weapon System includes not just the air-vehicle, but also the ground station
and spares. Specifying the whole as the weapon system means that all elements must be present
in order for the system to be combat effective.
Table D.1 summarizes the major technical measures of the two Global Hawk air-vehicle
platforms, RQ-4A and RQ-4B, as collected from various sources (GAO 2004) (USAF 2006) (Goebel
2008) (Northrop Grumman 2008b). Note that the RQ-4A is essentially similar to the ACTD
platform (Goebel 2008b), while the RQ-4B presented is in production and acquisition and is a new
design (although politically it is just a “modification”). Note from this table that the two systems,
while often cited by management and Congress as incremental, actually vary on every
aircraft metric listed. In some respects, one must wonder how much of this system
really had to be redesigned from the ground up or significantly modified from the design
documents, which were known to be lacking (Marz 2006).
Note that demonstrated and required levels of endurance vary greatly by source. Other
features had occasional and minor differences by source.
Beyond the aircraft, the RQ-4B sports improved communications equipment and an “open
architecture” meant to facilitate integration with various military command and control systems
(Goebel 2008); of course this open architecture meant the redesign of many software components,
both on ground station equipment and on the aircraft (Anonymous 2006).
Feature               Global Hawk RQ-4A      Global Hawk RQ-4B
Wingspan              116’2”                 130’10”
Length                44’5”                  47’7”
Height                15’2”                  NA
Empty Weight          8,490#                 NA
Take-Off Weight       26,750#                32,250#
Payload               2,000#                 3,000#
Cruise Speed          404 mph / 350 knots    310 knots (@60,000’)
Service Ceiling       65,000’                60,000’
Endurance             31-35 hours            33-36 hours
Endurance @ 60,000’   14 hours               4 hours
Table D.1: Global Hawk Technical Measures
The ground system is a container module with connections for power and communications.
The ground system does not need to have line of sight to the Global Hawk to control the air-
vehicle. In typical operations, the Global Hawk operations crew is 4,000+ miles from the operat-
ing zone, safely inside the US, while the Global Hawk flies missions over Afghanistan and Iraq
(Northrop Grumman 2008).
Both aircraft mount many sensors. Currently the sensors and their demonstrated performance
are: Synthetic Aperture Radar: 1.0/0.3 m resolution (WAS/Spot); Electro-Optical: NIIRS 6.0/6.5
(WAS/Spot); Infrared: NIIRS 5.0/5.5 (WAS/Spot) (Northrop Grumman 2008b).
For those uninitiated to the sensor community, these modes, sensors, and metrics may not
make much sense. First we’ll cover the two modes common to all three sensors. All sensors are
measured in WAS and Spot modes. WAS means “Wide Area Search”, while Spot mode is used
when studying a specific object. The reason for different modes can be most easily explained
by considering the zoom on a personal camera: initially you may “zoom out” (WAS) to see if
you find anything of interest, then you “zoom in” (Spot) to get a detailed picture of an object of
interest.
The three sensors described are synthetic aperture radar, electro-optical, and infrared.
Synthetic aperture radar integrates radar returns over the motion of the receiver, synthesizing a
larger effective antenna aperture than the physical antenna provides; essentially, we pretend to
have a bigger radar antenna than we really have by listening better and longer. Electro-optical
systems are similar to a digital camera (not exactly, but that is a close analogy); in this case
electro-optical means that optical pictures are taken through electronic means. Finally, infrared
sensors are essentially cameras that take pictures in the infrared spectrum, rather than the
visual spectrum.
Finally, the performance metrics may make sense for radar (1.0 and 0.3 m resolution
for the two modes), but NIIRS is a different story. Civil NIIRS (National Imagery Interpretability
Rating Scale) is a task-based metric based on the concept that people using the images should be
able to do more sophisticated interpretation at successive levels (0-9). For example, civil NIIRS 5
requires that the analyst be able to “detect an open bay door of a vehicle storage area” or “identify
a Christmas tree plantation”, while at NIIRS 6 the analyst should be able to “differentiate between
sedans and station wagons” or “distinguish between row crops and small grain (e.g. wheat) crops”.
At the extreme end of the scale, NIIRS 9 requires the ability to “identify individual grain heads on
small grain crops” (e.g. identify a specific wheat grain on a plant) (FAS 2008).
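Because the scale is cumulative, imagery at a given NIIRS level supports all interpretation tasks at that level and below. The sketch below captures this with the example tasks quoted above; the dictionary and function are illustrative constructs, not part of the NIIRS standard:

```python
# Selected Civil NIIRS levels mapped to the example interpretation
# tasks quoted in the text (FAS 2008). Illustrative only.
NIIRS_EXAMPLE_TASKS = {
    5: ["detect an open bay door of a vehicle storage area",
        "identify a Christmas tree plantation"],
    6: ["differentiate between sedans and station wagons",
        "distinguish between row crops and small grain (e.g. wheat) crops"],
    9: ["identify individual grain heads on small grain crops"],
}

def tasks_supported(niirs_level):
    """Return all example tasks supported at the given NIIRS level.

    The scale is cumulative: higher-rated imagery supports every task
    achievable at lower levels as well.
    """
    return [task
            for level in sorted(NIIRS_EXAMPLE_TASKS)
            if level <= niirs_level
            for task in NIIRS_EXAMPLE_TASKS[level]]
```

For example, NIIRS 6 imagery supports both of the level-5 tasks plus the two level-6 tasks, while NIIRS 4 imagery supports none of the tasks listed here.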
Between the air-vehicle, ground station, sensors, and integration, no one company would
be able to bring all these elements together. Global Hawk is manufactured by a wide range of
contractors led by Northrop Grumman. Northrop Grumman lists the team (Northrop Grumman
2008c) as: Aurora, ATK, BAE Systems, Curtiss Wright Controls, Inc., Goodrich,
Honeywell, Kearfott, L3 Communications, Northrop Grumman, Parker, Raytheon, Rolls-Royce,
Saft, Sierra Nevada Corporation, Smiths, and Vought. The official USAF fact sheet (USAF 2006),
however, notes only the following contractors and their roles: Northrop Grumman’s Ryan
Aeronautical Center (prime contractor), Raytheon Systems Company (ground segment and
sensors), Rolls-Royce (turbofan engine), Vought Aircraft Company (carbon-fiber wing), and L3
Com (communications systems). The fact sheet does include the corporate sites, by city and state,
at which each of these 5 contractors performs Global Hawk development work (6 sites; Raytheon
operates in 2 different states for its two roles).
D.2.5 Recent Events
Examining recent budget requests can provide some context for identifying how the
administration feels about the Global Hawk program. Starting with the FY2004 budget request,
half of the $1.39B requested for UAV programs ($686M) is for Global Hawk development and
production activities (Kosiak 2003). However, at this point Global Hawk is still in the same
program element (program elements are budget line items) as the Predator program.
A 2004 GAO study (GAO 2004) noted that the program had changed from a 10-year production
program to a 20-year production program. Ultimately this was done to keep the per-annum cost of
Global Hawk down. The change forced the 2005/2006 budgets to be constrained, as without this
leveling some years would have cost as much as three times more under the 10-year plan than
under the 20-year plan.
Moving into FY2005, only about one third of the $1.973B UAS program budget request,
$696M, is for Global Hawk development. One item of note is that the overall UAS program
budget request increased $633M over FY2004 (Kosiak 2004). Also, in Feb 2005 documents
supporting the split of Global Hawk and Predator into separate program elements (DTIC 2005)
began to surface, providing for more detailed description and tracking of Global Hawk-unique
expenses apart from the general UAS program element.
However, the budget request for FY2006 UAS acquisition funding is $1.512B for 6 programs,
a substantial decrease of $359M from FY2005 (Kosiak 2005). Half this funding ($706M)
is for Global Hawk. This was first revealed in Feb 2005 in the President’s budget request and
various industry news following UAS developments. Of course, this may have been a prelude to
what happened just two months later.
In April 2005, the USAF joint program office overseeing Global Hawk filed a required Nunn-
McCurdy breach notice to Congress for an 18% increase in the unit cost of the Global Hawk
system (Sullivan 2005). Nunn-McCurdy breach notices are required for a variety of reasons; in
this case the program tripped the 15% unit-cost growth criterion.
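The unit-cost criterion itself is simple arithmetic. A minimal sketch, assuming only the 15% reporting threshold cited above (the full statutory test includes additional thresholds not modeled here):

```python
def unit_cost_growth(baseline_unit_cost, current_unit_cost):
    """Percent growth of program unit cost over a baseline estimate."""
    return (current_unit_cost - baseline_unit_cost) / baseline_unit_cost * 100.0

def trips_reporting_threshold(baseline_unit_cost, current_unit_cost,
                              threshold_pct=15.0):
    """True when unit-cost growth exceeds the reporting threshold.

    The 15% default is the criterion cited in the text; actual
    Nunn-McCurdy law defines several thresholds against both the
    current and original baselines.
    """
    return unit_cost_growth(baseline_unit_cost, current_unit_cost) > threshold_pct
```

With the 18% increase reported in April 2005, any baseline value trips the 15% criterion, e.g. `trips_reporting_threshold(100.0, 118.0)` is `True`.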
Predictably, debate arose over the cost estimate. Sen. John Warner (R-Va, Chair of the Armed
Services Committee) ordered an investigation of the 18% increase reported to the Department
of Defense (DoD) by the USAF, but the investigation by the Government Accountability Office
(GAO, the investigative arm of Congress) concluded a 31% increase (Erwin 2006). That increase
was disputed by the DoD as including sensor upgrades and redesigns that were part of another
approved upgrade, and the DoD stated the actual cost increase at 22.5%. Essentially the DoD
partly concurred with the GAO that the USAF JPO estimates were not fully correct, but didn’t
agree that the unit-cost growth was as bad as the GAO was implying. The final GAO report was
not delivered until December 2005.
At the end of 2005, the Office of Management and Budget listed the overall national UAS
program as “Moderately Effective” (the second highest rating) (OMB 2005). Global Hawk is
mentioned several times in the report, typically in a good light, such as adding user requested
SIGINT capability, following the DoD acquisition framework guidance structure of committees
and required documents, transparency of the re-baselining of the budget, meeting operational
needs, and inclusion of measurable key performance parameters from the outset and their use to
incentivize contractors to meet cost goals. However, Global Hawk was noted for not meeting its
production schedule and for not meeting its target objective endurance (target endurance is 40
hours, while actual for 2006 was 30 hours).
Congress, in building the FY2006 appropriation, received the FY2006 budget request for
Global Hawk of $327.7M for 5 vehicles and $70M for advance procurement costs of 6 additional
vehicles in 2007. The committee reduced 2006 by 2 vehicles and $110M and the advance
procurement by 1 vehicle and $10M. (US House 2005) Of course, this funding amount is different
from the President’s budget, which considered the research and development funding separately
from the procurement funding. The $327.7M requested here is the procurement portion of the
funding. At the same time, Congress was seeing problems in another unmanned system project,
the US Army’s Future Combat Systems. Congress reduced that budget by $449M, attached heavily
restrictive language and required reporting, and heavily criticized the program for slipping its
deployment date to 2014 (US House 2005). Congress was getting sensitive to schedule slips for
autonomous/robotic systems.
In Jan 2006, the GAO revised cost projections for Global Hawk, citing a 35% increase from the
March 2001 unit price of $60.9M to $82.3M in 2006 (unit cost includes aircraft, ground station,
and required spares). However, in Feb 2006, the DoD endorsed Global Hawk in the Quadrennial
Defense Review despite the increasing costs (Erwin 2006).
In Feb 2006, the administration requested a UAS budget of $1.687B for 7 programs in
FY2007 (Kosiak 2006). Half this funding ($752M) would be for the Global Hawk program.
This is a significant increase from both the previous year’s requested (+$175M) and appropriated
amounts.
Never far from trouble, in March 2006 another GAO report criticized the high-risk development
strategy for the RQ-4B (Sullivan 2006). Further, the GAO unit cost had crept to $130.5M,
when calculated as the total cost divided by the number of procurement units. Northrop Grumman
countered that the incremental unit cost is only $56.5M. This new cost of $56.5M is for the RQ-4B
air-vehicle, ground station, and an improved sensor package, against the previous unit price of
$43M for the RQ-4A air-vehicle, ground station, and initial sensor package. (Bigelow 2006)
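The gap between the two figures comes from two different formulas: program (average) unit cost divides the total program cost, including development, by the number of procurement units, while incremental unit cost is the marginal cost of one more unit. A minimal sketch of the distinction; the fixed-cost and quantity figures below are invented for illustration, not sourced:

```python
def program_unit_cost(total_program_cost_m, units):
    """Average unit cost: total program cost (development, ground
    segment, spares, and production) divided by procurement units."""
    return total_program_cost_m / units

def incremental_unit_cost(cost_for_n_units_m, cost_for_n_plus_1_units_m):
    """Marginal cost of buying one additional unit."""
    return cost_for_n_plus_1_units_m - cost_for_n_units_m

# Hypothetical illustration: large fixed costs inflate the average
# unit cost far above the marginal cost of one more air-vehicle.
fixed_m = 3700.0    # hypothetical fixed costs, $M
marginal_m = 56.5   # hypothetical marginal cost per unit, $M
units = 50          # hypothetical procurement quantity
total_m = fixed_m + marginal_m * units
```

Under these invented figures, the average unit cost works out to $130.5M per unit even though each additional unit costs only $56.5M, mirroring how the GAO and Northrop Grumman numbers can both be arithmetically correct.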
Costs continued to rise in April 2006, when the DoD announced that Global Hawk unit costs
had risen over 50% from the original estimate. Since this is a difference from the original baseline,
and a new baseline was approved with the previous Nunn-McCurdy breach certification from
Congress, this is not another Nunn-McCurdy breach.
However, the media did not let go of the various ways of computing cost overruns. Continued
discussion forced a more detailed explanation of the costing differences in April 2006, nearly a
year after the initial cost differences were noticed. In this most recent explanation, many cost
increases accounted by the GAO were being booked by the USAF as operational upgrades, to be
purchased with research funds rather than procurement dollars. (Erwin 2006) cites this as typical
of USAF programs (pushing items into operational upgrades, rather than paying for them upfront
in the acquisition). A USAF spokesman stated that the booking procedure is a long-standing
policy, and that the difference between the USAF and DoD numbers (18 and 22.5%) is due to the
fact that some sensors scheduled to be installed in the acquisition baseline are now scheduled for
field installation.
In May 2006, the intelligence appropriation picked up the cause of the Global Hawk. Congress
questioned the ability of Global Hawk to replace the current manned system, the U-2 “Dragon
Lady” (US House 2006b). Questions had been coming up about the ability of the Global Hawk
to replicate all the capability of the U-2. While there is some opinion that eventually there will be
sensor packages for the full range of missions, that isn’t currently the case (Hess 2006). Retiring
the U-2 would free up $1B in funds, which the DoD wants to use to continue to acquire and
develop next-generation technologies.
Finally, in June 2006, the initial appropriations bill from the House proposed to fund Global
Hawk at $341.3M for procurement, $88M less than the request. The committee’s rationale was
that production had slowed and the USAF was overly optimistic about being able to recover
the production lot delays back to schedule. Included was a reduction of 2 more aircraft from
the FY2007 request and a reduction of advance procurement funding for 2 vehicles intended for
2008. The committee articulated that it understood the delays were due to operational tempo
draining spares, operators, and maintainers to fight the current GWOT; however, the committee
wanted these facts to be considered in subsequent budget requests. (US House 2006)
Recently, in December 2007 (Simpson 2008), the Developmental Operational Test and Evalu-
ation directorate found that the Global Hawk system was not operationally suitable. The analysis
was based on a lower than expected number of flights allowed by the FAA, insufficient spares,
“less-than-predictable reliability”, and insufficient flight testing overall.
Appendix E
PackBot Explorer Robot Controller
E.1 Introduction
The project to develop and acquire an improved operator control unit for the PackBot Explorer,
a remotely piloted reconnaissance robot, demonstrates several problems with attempting a
hands-off, no-direct-cost project between an industrial and an academic entity. The project began
in the summer of 2006, when I started work as a volunteer, independent acquisition manager with
several members of the iRobot Government & Industrial Robots team. Three teams were brought
on over the fall 2006 semester, and one was eventually selected to deliver an operational prototype
of a haptic-style controller by summer 2007. Despite on-time delivery, meeting requirements, and
preliminary patent applications by the students involved, no follow-up has been conducted.
The PackBot Explorer is built on the PackBot base platform with an added pan-tilt sensor
package on the end of a boom arm extending from the body. The PackBot Explorer is used for
reconnaissance applications, and the sensor package mounts a forward optical zooming camera,
a backwards fish-eye camera, and several LED lights for illumination. The operator control unit
was admitted to be one of the areas of the robot that iRobot would most like to improve, as it
requires the most training of any component of the system.
The University of Southern California (USC) and DeVry - Long Beach (DVLB) have courses
that encourage students to partner with industry. At USC, the concept of upgrading the user
interface was put before a team of software engineers in the CSCI577a (Software Engineering I)
graduate course and was taken by a single graduate student as independent research in robotics.
At DVLB, a team of two undergraduate students undertook the project for their design capstone
course. After the fall 2006 semester, the DVLB team’s solution, named “Haptic Avatar”, was
selected for development.
This case first gives an overview of the cast of players in the acquisition. Then a brief
overview of the PackBot itself and its anticipated environment is discussed, and some related work
in haptic and miniature-based control systems is presented as background. The various
organizations involved are then briefly described. The acquisition phases are then addressed, first
looking at the acquisition goals, then the competitive design phase, and finally the development
phase. From there, the final demonstration is considered and the state of the project after the
demonstration is described.
E.2 Background
E.2.1 Cast of Players
Acquirer: DeWitt
Acquiring Organization: USC
Successful Developers: Tracy and Ethan
Successful Development Organization: DeVry Long Beach Student Project Team
Unsuccessful Development Organization 1: CSCI 577a Software Engineering I teams
Unsuccessful Development Organization 2: USC Graduate Student Independent Study
Project, Don
Client: iRobot, Government & Industrial Robots Team; specific points of contact were Orin
and Aaron
User: Arbitrary Operator of a PackBot Explorer
Robot Owner: US Navy SPAWAR SSC, Small Robotics Center of Excellence; specific
point of contact was Ben
E.2.2 iRobot PackBot and Operational Environment Discussion
The PackBot Explorer was offered by iRobot as part of their PackBot line (iRobot 2007). The
PackBot is a tracked vehicle. The payload of the vehicle is a camera head on a neck actuator. The
PackBot mounts two forward cameras and one rear facing camera. The cameras are augmented
by white and infra-red illuminating LEDs. The front flippers are actuated to assist the robot in
overcoming difficult terrain. The robot in this case was employed with a single battery, but it can
carry two battery packs for long-duration operation.
The PackBot’s onboard computer controls robot functions. Communications are handled over
an 802.11b wireless ethernet connection. The control unit is a ruggedized laptop and a USB
device with several buttons and two 3-axis joysticks.
The robot disassembles for easy transportation; it can be assembled in the field with only a
screwdriver and put into operation in a matter of minutes.
E.2.3 Haptic and Miniature Based Interfaces
“Haptic” comes from Greek and relates to the sense of touch. Generically, haptic refers to
providing touch, force, vibration, and/or motion feedback to a user through the physical interface
device. Many papers describing the process of providing haptic feedback to users are available
and were provided to the DVLB team as background material. One of specific note, (Lee 2005),
discussed the use of haptics in the control of a PackBot Scout (similar to the Explorer, but
without the camera head or boom arm).
Miniature-based controllers for robots are not highly common. One inspiration for the project
was the MiniUAV controller, called a “Physical Icon”, which commanded a model aircraft via a
model held and maneuvered by the operator (Quigley 2008). The Icon used sensors to
determine its orientation and relayed that information via radio to the model aircraft. The aircraft
would then attempt to approximate the pose of the Icon. The main problem noted with the
miniature controller was user fatigue, as the user had to hold the miniature plane aloft to provide
control inputs. The initial Icon was another model aircraft, but a later version was done as a
palm-sized, stylized aircraft.
E.2.4 The Haptic Avatar PackBot Controller
This controller operated as a peripheral that connected to the operator control unit (OCU)
laptop. The controller, a physical miniature of the Explorer robot’s payload, would send pose
information about the payload over a serial/USB connection to the OCU, which would then be
responsible for sending the commands to drive the motors appropriately. In this way, an operator
wouldn’t need to know how to achieve any given pose, nor need specialized knowledge about
interface settings; they would merely have to put the controller into the desired pose and the robot
would follow.
In the 2007 prototype, the haptic feedback feature was not developed (having been designated
as low priority versus the control input mode). Also, the temporary code for the OCU only
performed linear interpolation between the PackBot’s current position and the destination
indicated by the controller. No intermediate waypoints were kept.
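That interpolation behavior can be sketched as follows; the function name and the three-joint tuple representation are assumptions for illustration, as the actual OCU code is not reproduced here:

```python
def interpolate_pose(current, target, steps):
    """Linearly interpolate joint angles (degrees) from the robot's
    current pose to the pose indicated by the controller.

    `current` and `target` are (shoulder, pan, tilt) tuples. Returns
    the sequence of intermediate poses, ending exactly at the target.
    No waypoints are retained between calls, matching the prototype's
    stateless behavior.
    """
    poses = []
    for i in range(1, steps + 1):
        t = i / steps  # interpolation parameter in (0, 1]
        poses.append(tuple(c + (g - c) * t for c, g in zip(current, target)))
    return poses
```

For example, moving from a zeroed pose to (90, 45, -30) in three steps yields two intermediate poses and finishes exactly at the target pose.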
The haptic feedback feature, when implemented, was also intended to assist by visualizing
common motions and tasks in the miniature. In this sense, if a behavior needed to be trained, the
operator could ask the OCU to demonstrate the task on the controller. This gives the operator a
real-world perspective on how to accomplish the task, without having to worry about managing a
camera in a 3-D visualization. Ultimately, this feature was anticipated to close the full capability
gap.
E.2.4.1 Final, formal requirements
The controller shall be approximately the size of a laptop bag
The angular positions reported shall be within 5 degrees of the actual model poses
The pose shall be updated at over 15 Hz
The shoulder joint shall have a range of 180 degrees
The head pan and tilt joints shall have a range of 180 degrees each
The controller shall be implemented as a peripheral connecting via either USB or RS-232
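Two of these requirements, the 5-degree accuracy and the 15 Hz update rate, lend themselves to simple acceptance checks. A minimal sketch, with function names and data formats assumed for illustration:

```python
def meets_accuracy(reported_deg, actual_deg, tolerance_deg=5.0):
    """Requirement: reported angular position within 5 degrees of the
    actual model pose."""
    return abs(reported_deg - actual_deg) <= tolerance_deg

def meets_update_rate(timestamps_s, min_hz=15.0):
    """Requirement: pose updates arrive at over 15 Hz, i.e. every
    gap between consecutive update timestamps is shorter than 1/15 s."""
    max_gap = 1.0 / min_hz
    gaps = [b - a for a, b in zip(timestamps_s, timestamps_s[1:])]
    return bool(gaps) and all(g < max_gap for g in gaps)
```

A 20 Hz update stream (50 ms gaps) passes the rate check, while a 10 Hz stream (100 ms gaps) fails it.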
E.2.4.2 Actual Delivered Properties
Total Height: approximately seventeen inches
Base Height: three inches
Width: ten inches
Depth: thirteen inches
Weight: three pounds
Power Consumption
– Motors idle: 17.4 mA
– Motors running: 540 mA
– Motors stalled: 4,900 mA
Torque for Specified Motors
– Arm motor: 18 kg-cm
– Head pan & tilt: 3 kg-cm
Motor Angle Accuracy: each step is within 5 degrees
Kinematics in mirror mode
– Pan head motor: 180 degrees in five seconds
– Tilt head motor: 180 degrees in three seconds
– Arm motor: 180 degrees in seven seconds
Kinematics in command mode
– Pan head motor: 180 degrees
– Tilt head motor: 180 degrees
– Arm motor: 180 degrees
Communication between Haptic Avatar Controller and Operator Control Unit: RS-232 at
19.2 kbps
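As an illustration of what a pose report over that serial link might look like, the sketch below frames three joint angles into a small checksummed packet. The packet layout is entirely hypothetical; the actual Haptic Avatar protocol is not documented here:

```python
import struct

def encode_pose(shoulder_deg, pan_deg, tilt_deg):
    """Pack three joint angles into a fixed 8-byte frame: one sync
    byte, three little-endian int16 angles (hundredths of a degree),
    and a one-byte additive checksum. Hypothetical format."""
    body = struct.pack('<hhh',
                       int(shoulder_deg * 100),
                       int(pan_deg * 100),
                       int(tilt_deg * 100))
    checksum = sum(body) % 256
    return bytes([0xA5]) + body + bytes([checksum])

def decode_pose(frame):
    """Inverse of encode_pose; raises ValueError on a bad frame."""
    if len(frame) != 8 or frame[0] != 0xA5:
        raise ValueError('bad frame')
    body, checksum = frame[1:7], frame[7]
    if sum(body) % 256 != checksum:
        raise ValueError('bad checksum')
    s, p, t = struct.unpack('<hhh', body)
    return s / 100.0, p / 100.0, t / 100.0
```

At 19.2 kbps with 8-N-1 framing, an 8-byte frame takes roughly 4 ms on the wire, comfortably supporting the 15 Hz update requirement.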
E.2.5 Acquisition Environment
In this environment, DeWitt, the acquirer, was supported through his US Air Force graduate
fellowship and had time to invest in helping a company work with students to acquire a novel
robotics technology. DeWitt was a member of the Robotics and Embedded Systems Laboratory
(RESL) at USC. After interviewing six different companies, DeWitt began work with iRobot as
a potential customer because iRobot could make a robot available for the students to utilize and
had project ideas considered feasible within a 2-semester academic timeline. iRobot was one of
two projects selected to be acquisition studies.
iRobot was not interested in spending money to develop the new interface technology, but was
open to having staff communicate the issues with DeWitt and any students. DeWitt had a site visit
to the iRobot facility as part of the pre-systems analysis phase. iRobot assigned Aaron, a human-
robot interaction expert, to be a permanent point of contact for the project duration. Aaron acted
as the first line for questions into iRobot, but once contact was established with another engineer,
DeWitt or the students were allowed to contact people directly as needed. Ultimately, Aaron
ended up holding weekly (initially) or bi-weekly (after entering the design phase) telecons to
answer questions and provide input to the project.
The US Navy’s SPAWAR SSC, through their small robot loan pool, had a PackBot Explorer
that could be checked out by DeWitt for use in research at no charge for up to six months.
SPAWAR asked, but did not require, to see a presentation of the final interface controller after
the project’s completion, as is typical for their loan program.
iRobot considered several elements of the direct API to the robot to be proprietary and subject
to US export restrictions. As such, only US nationals would be allowed to see the API, and
iRobot preferred that only DeWitt do the detailed programming against the API. DeWitt built
and delivered an RS-232 (serial) interface for commanding the Explorer camera payload and arm
without the students having to know the API directly.
Ultimately, no contracts were established, and all individuals operated on a consensus
schedule. Student timelines were considered critical, as the students needed to have work
completed to their course’s satisfaction on required dates.
E.2.6 Client and Acquiring Organization
The client, iRobot, was organized into commercial and government/industrial robot divisions. A
director in the government group, Orin, initially worked on the definition of needs, then handed
the project to a new employee, Aaron, starting in Sep 2006.
The acquiring organization was RESL. DeWitt was an independent graduate student with no obligations to other projects within the lab, as his funding was an individual fellowship. RESL had previously worked in haptics for control of PackBots, but no longer considered haptics part of its research portfolio. DeWitt had access to faculty who were familiar with the issues as independent experts, but generally required no other support from RESL to accomplish this acquisition.
E.2.7 Developing Organizations
Three developing organizations took part in some portion of the effort: USC CSCI577a Software Engineering I, a USC independent study project, and a DVLB senior design project.
The USC CSCI577a course effort consisted of a summer project write-up and the September team selection process; its description here is truncated because no qualified team formed.
USC independent study projects partner graduate students with faculty advisors to undertake research in their field of interest. The RESL director sponsored the student (an MS student) and assigned DeWitt as his day-to-day overseer.
The DVLB senior design project course is a two-quarter sequence. Students self-form teams and self-select their project, subject to the approval of the course instructor. The first quarter focuses on design and planning. The second quarter focuses on detailed design and prototype development, culminating in a design competition between the teams. The course requires delivery of design artifacts in formats prescribed by the instructor.
E.3 System Development
Briefly, the timeline for development was:
Pre-System Analysis Summer 2006
System Concept Phase Fall 2006
Design/Build Phase Spring/Summer 2007
E.3.1 Pre-System Analysis Phase
During this phase, no developers were present. The pre-analysis work consisted of low-effort activities intended to define the specific need, the motivation for addressing that need, and a set of candidate solutions to meet that need. The initial point of contact at iRobot was a director in the government and industrial applications group, Orin.
DeWitt and Orin initially talked about which platform would be suitable for a two-semester student project. The PackBot, with a partially published API, was a good candidate. Although the PackBot was extremely popular with users and acquirers (reference to contract for EOD robots), there were still issues in training. Typically, an organization trains one individual, who then trains their replacement. After several generations of this training, a portion of the robot’s capability was going unused, because those features were not exercised frequently (though still needed in some mission scenarios). Eventually, a client would send a user back for iRobot-sponsored training to reset the skill level. Although this kept return business, because some of the robots are involved in safety- or time-critical applications, improvements to overcome this skill-loss problem could be a selling point to potential customers.
In this vein, DeWitt and Orin set out to improve the Operator Control Unit experience and focused on the PackBot Explorer model. The Explorer was selected because its articulated payload is simpler (3 degrees of freedom) than that of the other robot in the line (the explosive ordnance disposal robot, with over 7 degrees of freedom). Initial concepts called for a self-training interface that would guide an operator on how to achieve different poses of the payload to accomplish various higher-level goals. It was believed that a training mode for the interface, likely virtual, would be a great way for an operator to “test before driving” the robot. Both the statement of needs and this conceptualized solution were communicated to all developers who joined the project.
This pre-system analysis phase concluded with DeWitt performing a site visit to iRobot in early September 2006, experimenting with various PackBot models, and interviewing members of the development and product teams about their positions on decreasing the learning curve and on more trainable interface concepts. This visit solidified the team’s view of the task to be accomplished.
Also at this site visit, a new staff member, Aaron, was introduced. Aaron was hired specifically to deal with human factors issues and was given the lead from the iRobot side to work with DeWitt on the upcoming projects.
E.3.2 System Concept Phase
In Sep 2006, two concept development activities started at USC. Tracy, from DVLB, indicated interest, but due to a different academic calendar (DVLB being on quarters rather than semesters), would not be able to form a team until early Nov 2006.
The first USC team considered was the 5-student team from CSCI577a (Software Engineering I). That course typically focuses on web services deliveries, but has done user interfaces for other systems, including one for a robot previously (REFERENCE). After working with the course faculty, the project was advertised. Initial conversations indicated that most of the students were unfamiliar with robotic systems and preferred web-services-style projects for their resumes. Ultimately, since no student was suitable to lead a team, this project was de-emphasized in favor of an unrelated robot simulator project.
Another graduate student, Don, approached DeWitt and RESL to address the user interface system as a graduate independent study project in robotics in the computer science department. Don’s concept was to improve the graphical layout of the objects in the current OCU. During September and October, he reviewed screen shots provided by iRobot, inquired about the different functions, created some common use cases, and worked to figure out how to develop a virtual training system.
At the end of October, DeWitt and Aaron were eager to see some concepts, but Don was reluctant. At this time, Tracy had formed a 2-person team at DVLB with Ethan and again came back to see if the project was still open. Without any viable prototype from Don, DeWitt and Aaron agreed to give them the chance to pitch a new idea. Since Tracy and Ethan were engineering students rather than computer science students, they quickly moved toward the idea of addressing different input devices, so that they could augment Don’s graphical user interface. DVLB’s design project also required them to utilize a specific micro-controller, which further pushed them toward working on input devices.
During November, the two teams, USC and DVLB, met weekly to attempt to integrate a solution of both graphics and a new physical input device. The DVLB team proposed input devices ranging from new joysticks to a video game controller, as well as the initial idea that eventually became the “Haptic Avatar”. Although several new physical input devices were proposed, the user interface still lacked a prototype. After three meetings, and with Thanksgiving break approaching, a new strategy was needed to determine whether anything would ever be delivered that improved the graphical interface.
DeWitt and Aaron decided to force the submission of design candidates by indicating that only one project would be able to move forward. By the second week of December, both teams presented their ideas to Aaron. Don’s user interface design was determined to be only a minor improvement over the current OCU graphical interface. Tracy and Ethan decided to push for the Haptic Avatar concept.
After the presentations, Aaron and DeWitt quickly agreed that the DVLB team had the most innovative solution. Aaron was impressed by the quality of their proposal, which included a prioritized requirements list and traced the requirements back to the statement of needs from the summer efforts. He was slightly concerned that a miniature haptic controller would be too difficult for an undergraduate team, and felt that other attempts at similar systems in the past had failed. However, since the investment from iRobot was minimal, he felt that the risk was worth the potential payoff. Hence, the DVLB team was selected to proceed into design.
E.3.3 Design/Build Phase
December 2006 started the first quarter of the DVLB senior design course. The first goals were to deliver project plans, a description of the proposed system, a bill of materials and other estimates, and prototypes of subsystems. The goal of the first-quarter prototypes was to validate that the students could acquire and then utilize (set and read positions from) servo motors, and to begin fabrication of a simple miniature body. The motors were purchased from local hobby shops and then calibrated for positional accuracy (to meet requirements) and evaluated for torque strength (to be able to move the miniature and provide force feedback).
In January 2007, the robot was expected to be available at the end of the month, but was delayed to early February. The students proceeded with development using CAD models provided by iRobot. There was some difficulty obtaining appropriate CAD viewers, as the models were in a format not supported by the DVLB labs. Eventually, DeWitt was able to preview the models in a USC lab until Tracy acquired a student version of the CAD programs for their use.
One major requirements change happened in March. The PackBot Explorer head pans 360 degrees with no maximum number of revolutions. Initially, the requirement was to fully imitate that capability, but due to the constraint of using servo motors (chosen as affordable to the students), DeWitt and Aaron agreed to a 180-degree head-pan range for the first prototype.
In February 2007, the PackBot’s arrival was announced to be delayed further, as the robot had not yet been returned by the previous borrower. Mid-February was the quarter break for the DVLB team. In the review of their first-half efforts, the team was graded highly by their faculty advisor, but there was some concern from DVLB that the project was too aggressive to be completed. The new quarter brought in a new faculty advisor, who needed to be brought up to speed.
In March 2007 the robot arrived, but without some critical documentation for the API. As the robot was expensive (over $70,000) and was signed out directly to DeWitt, it remained secured at the USC lab or in DeWitt’s possession, limiting the students’ on-demand access to the unit (typically once a week, but eventually daily as integration activities began in April). “Mirror mode” (in which the miniature assumes a commanded position) was demonstrated on all three motors independently through a simulated controller (the connection to RS-232 was scheduled for implementation in April). Manual driving of each motor independently was demonstrated for the two joints in the head assembly. The shoulder joint, driven by a high-torque servo motor, was stronger than a person could manually back-drive, and alternate modes were considered for setting a desired pose: the pose would be set, and the miniature would then go back and show the linear execution from the start pose to the final pose. Ultimately, this mode ended up not being needed.
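Functionally, mirror mode amounts to streaming clamped joint-angle commands to the miniature over the serial link. The sketch below illustrates the idea; the frame format, joint names, and the symmetric 180-degree limits are assumptions for illustration, not the actual Haptic Avatar or PackBot protocol.

```python
# Illustrative sketch of a mirror-mode pose command for a 3-DOF miniature
# controller. Frame layout, joint names, and limits are assumed, not the
# project's real protocol.

JOINT_LIMITS = {              # degrees; 180-degree range per the revised requirement
    "head_pan": (-90, 90),
    "head_tilt": (-90, 90),
    "shoulder": (-90, 90),
}

def clamp(value, low, high):
    """Keep a commanded angle inside the joint's mechanical range."""
    return max(low, min(high, value))

def pose_frame(head_pan, head_tilt, shoulder):
    """Encode a commanded pose as a newline-terminated ASCII line for RS-232."""
    angles = {"head_pan": head_pan, "head_tilt": head_tilt, "shoulder": shoulder}
    clamped = {name: clamp(a, *JOINT_LIMITS[name]) for name, a in angles.items()}
    # e.g. "POSE 45 -10 30\n" -- one pose command per line
    return "POSE {head_pan:.0f} {head_tilt:.0f} {shoulder:.0f}\n".format(**clamped)
```

In a real system, each frame would be written to the serial port (e.g., via a serial library) at a rate comfortably above the required update frequency.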
In April 2007, the most expensive servo motor (which provided sufficient torque for the shoulder joint) was destroyed during integration testing due to a wiring accident. As the development team was paying for this motor out of their own pockets, they were leery of investing in another one. A quick conversation with DeWitt and Aaron confirmed to the developers that the haptic feedback was a “stretch” requirement, not required for success in the first version, as indicated in the previous requirements documents. Tracy and Ethan quickly adopted a new procedure for checking the wiring of the prototype before enabling power from unregulated sources.
Also in April 2007, the service that set the three angles for the joints of the Explorer payload was completed shortly before the demonstration to the DVLB faculty. The services API had been set in February, and the binary settings had been finalized about 3 weeks before the demonstration. However, due to the intellectual property concerns, only DeWitt could do the API programming for the team, and DeWitt was only available to work on the project full time in the week before the demonstration. Some undocumented features of the PackBot were quickly discovered and overcome before the developers gained connectivity with the robot, a few days before their course’s final design review.
E.4 System Delivery and Demonstration
The project, as delivered, included the Haptic Avatar controller, a brochure describing the interface, a poster, a set of technical documents fully detailing the Avatar’s construction (both hardware and software components), and digital movies demonstrating the system’s utility and function. The controller provided, as specified, plus-or-minus 5-degree reporting accuracy over RS-232 at approximately 3000 Hz (well above the required 15 Hz), with a range of motion over 180 degrees for each joint.
The students first demonstrated their system in a series of design competitions within the DeVry University system in May 2007. Tracy and Ethan took second place overall in the greater Los Angeles area (covering 3 different campuses) and were asked by DeVry to demonstrate their project during various technology-day events for industry and the community. Their final project presentation was in May, and the team earned excellent marks for their project.
The iRobot demonstration, however, went differently. Initially, the team and Aaron believed that they would be flown to the iRobot facility to give their demonstration. However, funds were not available by the time of project completion, forcing a remote presentation. Scheduling the final demonstration was also difficult; it was moved from June to July because key members were unavailable to participate. During the presentation, despite the warning that full haptic feedback would be part of a follow-on project (if desired), considerable attention was paid to the system’s lack of haptic capability. The path for the PackBot to gain a miniature-based controller was unclear.
Finally, in September, Tracy demonstrated the system to SPAWAR SSC. This demonstration was scheduled for September to coincide with the robot’s return to the lending pool. The talk went extremely well and was fortunately timed in the same window as other reviews of alternate input devices occurring at SPAWAR. The demonstration drew about twice the number of staff anticipated, including the associate director of the center. The staff were extremely pleased with the initial demonstration and thought the upgrade path to haptic capability seemed reasonable given the extremely low budget and short turnaround time for implementation.
E.5 Post Delivery
After the formal end of the project, several events happened by which we can attempt to judge the success or failure of this project.
Tracy submitted a patent application for controlling a robot via a miniature-based haptic controller; that application was still being worked at the time of the writing of this case.
Aaron, DeWitt, Tracy, and Ethan collaborated on an ICRA 2008 poster paper in the fall 2007 timeframe. Although the paper was not accepted, other conferences were identified as potential future targets for descriptions of this system.
SPAWAR invited Tracy to apply for a job based on his project presentation. Ethan had another job lined up and was not pursuing a robotics career.
iRobot made no additional inquiries into applying this style of interface, including no effort to complete the anticipated upgrade to full haptic feedback.
On academic metrics, this project did very well, earning the DVLB students high grades, a patent application, an attempt to publish in a major robotics conference, and a job offer. However, on the technology transfer front, the project failed to appeal to the larger iRobot development community. The lack of a “success plan” to work this style of alternate interface into an evaluation program for OCU input devices meant that although the project met its requirements, it was not taken up for further development.
Appendix F
Autonomous Helicopter Safe and Precise Landing Capability
F.1 Introduction
The Jet Propulsion Laboratory (JPL) has an initiative in its Machine Vision Group to study autonomous “safe and precise landing” of spacecraft. One method to study the problem terrestrially is to study the flight dynamics and control of an autonomous helicopter. JPL uses a “Partnership Research and Development Fund” (PRDF) to encourage small projects strategically partnered with outside organizations. In 2005, Dr. Montgomery, in the Machine Vision Group of the Mobility and Robotic Systems Section, successfully worked with the University of Southern California’s (USC) Robotics and Embedded Systems Laboratory (RESL), under Prof Sukhatme, to develop a model for emulating spacecraft dynamics on an autonomous helicopter. This one-year project was intended to supplement autonomous helicopter work already underway at RESL. Ultimately, the project delivered most of its deliverables, JPL hired Srikanth, the lead graduate student, and JPL was able to import software and models into its current efforts.
F.2 Background
F.2.1 Cast of Players
Acquirer Dr. Montgomery, JPL
Acquiring Organization JPL, Machine Vision Group, Mobility and Robotics Systems
Section
Developers Srikanth, Prof Sukhatme
Development Organization USC RESL
Client JPL
User JPL Developers in the Machine Vision Group, Mobility and Robotics Systems Section
F.2.2 Autonomous Helicopter and Operational Environment Discussion
The USC AVATAR (Autonomous Vehicle for Aerial Tracking And Reconnaissance) system (Figure F.1) utilized during this PRDF was the third and final generation of autonomous helicopters developed in RESL (detailed in Saripalli 2007a). Srikanth, the lead graduate student, took over the AVATAR system in 2001 and was tasked to produce this third generation of the AVATAR platform (Saripalli 2007b). (Kelly 2008) maintained the project web page.
Figure F.1: USC AVATAR Helicopter
In Jan 2001, Srikanth arrived at USC and was immediately tasked to upgrade the previous AVATAR to the generation that would eventually be used in the JPL PRDF. From 2001 through 2005, Srikanth switched the base platform, the inertial navigation system (INS), the operating system, the graphical user interface, the GPS, and the onboard computer. Ultimately, this resulted in an almost completely different vehicle, but as these modifications were done partially sequentially, there was time to adjust between each transition. JPL was not the only sponsoring agency that worked with the AVATAR; other sponsors included the National Science Foundation (NSF) and the Defense Advanced Research Projects Agency (DARPA).
The Bergen Industrial Twin RC helicopter was the base platform, chosen to match the one JPL was using. Given this JPL constraint, which forced a US-based manufacturer, Bergen was the only vendor with an RC helicopter that lifted sufficient payload (rated at 10 kg) for under $10,000. However, just upgrading the principal sensors consumed half the payload capacity, and adding other mission systems raised the payload to 10.2 kg. Srikanth indicated that this overloading contributed to several helicopter crashes due to gear stripping on the main motor. These crashes persisted throughout Srikanth’s involvement, typically occurred once every 3-6 months, and would take days or weeks to recover from, depending on the severity of damage.
Of the other systems onboard, the upgrade of the INS involved Srikanth working with various JPL post-doctoral fellows and engineers. Two situations led to problems using the JPL software, which Srikanth was (at the time) not allowed direct access to view: in one case the Kalman filter (KF) used to integrate the output of the INS was improperly parameterized, and in another incident the same KF diverged. Srikanth’s work-around involved developing his own KF to use until the JPL KF was ready. In both cases, JPL was eventually able to integrate the INS into its own KFs.
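As background, the kind of parameterization problem mentioned above can be illustrated with a minimal one-dimensional Kalman filter. This sketch is purely illustrative, not JPL’s or Srikanth’s actual INS-integration code; the process-noise and measurement-noise variances (`q` and `r`) are the parameters that, when badly chosen, make such a filter sluggish or noise-chasing.

```python
def kalman_1d(measurements, q, r, x0=0.0, p0=1.0):
    """Minimal 1-D Kalman filter estimating a roughly constant state.

    q: process-noise variance, r: measurement-noise variance.
    These are the 'parameterization' -- a badly chosen q/r ratio makes
    the filter either ignore the sensor or chase its noise.
    """
    x, p = x0, p0                  # state estimate and its variance
    for z in measurements:
        p = p + q                  # predict: uncertainty grows over time
        k = p / (p + r)            # Kalman gain balances model vs. sensor
        x = x + k * (z - x)        # update estimate toward the measurement
        p = (1.0 - k) * p          # updated uncertainty shrinks
    return x

# With reasonable noise settings, the estimate converges to the sensed value.
estimate = kalman_1d([5.0] * 50, q=1e-4, r=0.1)
```

The full INS integration uses a multidimensional filter, but the same gain-versus-noise trade-off governs whether it converges.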
The project included 2 pilots, who were responsible for the vehicle while in flight. The pilots mimicked the flight of the AVATAR on their own controls while depressing a “dead-man switch”. If a problem was encountered, the pilot would release the switch, and the helicopter would then respond to the pilot’s controls rather than the autonomy system. The pilots had to mimic the flight on their controls so that, when the transition occurred, the pilot’s controls would be in a relevant state for the helicopter; otherwise, if the controls were out of position, the AVATAR would likely have crashed.
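In software terms, the handoff described above is a simple control arbiter. The sketch below is a hypothetical illustration of that logic (the function and argument names are invented), not the AVATAR’s actual safety code.

```python
def select_command(dead_man_pressed, autonomy_cmd, pilot_cmd):
    """Arbitrate between the autonomy system and the safety pilot.

    While the pilot holds the dead-man switch, autonomy flies the
    vehicle; releasing the switch hands control to the pilot. Because
    the pilot mirrors the autonomous flight on their own sticks,
    pilot_cmd is already close to autonomy_cmd at handoff, avoiding a
    sudden control jump that could crash the helicopter.
    """
    return autonomy_cmd if dead_man_pressed else pilot_cmd
```

The design choice here is that safety defaults to the human: any release of the switch, intentional or not, immediately removes the autonomy system from the loop.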
F.2.3 Acquisition Environment
This research was performed via a contract with explicit deliverables (Montgomery 2005 and Sukhatme 2005). USC accepted informal technical meetings with Dr. Montgomery and his staff, although no formal design reviews or other technical management activities were stated in the contract. The contract was small, only about $18,000 for a period of performance of July 2005 through May 2006. The final report to the JPL director (Montgomery 2007) indicates success in the objectives of the project with USC.
The contract (Sukhatme 2005) required the delivery of the following items:
1. a model and control system for a class of helicopters
2. a method and implementation of emulating EDL (entry, descent, and landing)
spacecraft dynamics on a helicopter platform
3. a draft proposal for future funding opportunities, and
4. a final report, including reprints of publications (as available)
(Sukhatme 2005)
F.2.4 Client and Acquiring Organization
JPL is a Federally Funded Research and Development Center (FFRDC), operated by the California Institute of Technology (Caltech) and funded by NASA. The FFRDC status enables JPL to be a “trusted agent” for NASA and perform research and development activities. NASA can fund additional research within JPL, but JPL is not allowed to compete with industry for any public request for proposals (given its trusted-agent role, JPL has inside information about the NASA program and may be helping NASA select the winning bidder). JPL operates about 80% of the civilian space missions and sources many of the development contracts for unmanned spacecraft (providing those systems and data back to NASA’s enterprise).
Internal to JPL, the Mobility and Robotics Systems Section (Section 347, part of the Autonomous Systems Division of the Engineering and Science Directorate) conducts tasks in space and planetary exploration. Within this section is a Robotic Software Systems Group (3472), which focuses on the robotic software needed for the operation of mission ground control and flight systems (flight systems, in this case, are the robots; “flight” refers to the fact that the robots are launched and hence “fly” as NASA-JPL missions). Dr. Montgomery had previously done research with RESL and Prof Sukhatme, including his PhD at USC.
The JPL Director provides funds to groups to conduct partnered research with industry and academia through the PRDF activity. The activity can fund the local JPL group in addition to the external agency. In this way, the JPL group can conduct tasks to ensure the integration and utilization of the outcome of the joint project. The PRDF is not meant to be a grant mechanism, but one to contribute to the accomplishment of developmental research that supports NASA-JPL missions and mission capability.
As mentioned, the sections and groups operate via specific tasks. This PRDF supports the Autonomous Helicopter Testbed (Volpe 2008a), a research system that aids the ALHAT (Autonomous Landing and Hazard Avoidance Technology) task (Volpe 2008b). Through these tasks and systems, requirements flow down from the lead agencies and divisions (ALHAT is led by Johnson Space Center with support from many other labs). In this way, Dr. Montgomery knew what types of system dynamics models would be needed, and on what timeline, to be usable by JPL to address the task.
F.2.5 Developing Organization
RESL is Prof Sukhatme’s vehicle for conducting research in robotic systems. Srikanth was a graduate student in RESL, having begun his PhD studies at USC in 2001. Srikanth’s research focus was the dynamics of autonomous helicopters, so his selection to work on the PRDF contract was consistent with his research. Srikanth also acted as the lab’s lead maintainer and developer of the autonomous helicopter. Several other graduate students in the lab performed experiments onboard the helicopter, but only Srikanth was directly involved in meeting the objectives of this PRDF contract.
At USC, graduate research assistantships are given by faculty to graduate students to provide a stipend and cover tuition in exchange for performing research on the faculty’s projects. Students at USC sign documents indicating that a 50% assistantship is for 20 hours of work per week (USC does not have assistantships for more than 50% time for graduate students). The students must certify that they did not work more than 20 hours per week each semester on the assistantship. According to the certification, this is to ensure that USC complies with employment policies. The remainder of the graduate student’s time is to be devoted to class work or the student’s own dissertation/thesis research.
F.3 System Development
Note that the two main activities, system analysis and experimentation, were not temporally distinct. Both occurred over the project’s duration of July 2005 through May 2006.
As noted previously, the USC-RESL AVATAR was under Srikanth’s developmental control throughout this period. Srikanth admitted to working long days, well above the 20-hour-per-week assistantship. Unfortunately, as time accounting is not standard practice for graduate students, there is little record of which specific activities Srikanth spent his time on. However, Srikanth does state that the 2006-2007 timeframe was his time to focus on helicopter dynamics, which was both related to the PRDF and the topic of his dissertation (Saripalli 2007c).
The time Srikanth spent on this activity went to managing the test flights, generating the models (which became the basis of his dissertation), and working with JPL, both on weekdays and weekends. This additional time with JPL was crucial, according to him, so that the JPL staff knew how to utilize the models and so that he could locate problems in the models himself.
Although the funding was expended by the closeout of the final report (May 2006), work continued on the models until Srikanth graduated in August 2007. Even after the formal completion of the contract, Srikanth stated (Saripalli 2007b) that he continued to meet with the JPL engineers and programmers as they had questions about his models, software, and the use of the same in JPL’s own autonomous helicopter. JPL had previously purchased the same robotic helicopter base platform and autonomy systems as the USC AVATAR, which helped facilitate the exchange of software from USC to JPL (Saripalli 2007b).
F.4 Contract Completion
As required, a final report on the project was jointly written by Dr. Montgomery and Prof Sukhatme (Montgomery 2007). This report indicated completion of all items in the contract except the draft proposal; it made no mention of future funding opportunities, proposals, or other items that would indicate that that deliverable had been accomplished.
F.5 Post Delivery
Discussions with Prof Sukhatme (Sukhatme 2008) confirm that no such proposal was developed.
Prof Sukhatme also indicated that JPL was pursuing different research with RESL, due to Prof
Sukhatme’s decision to have RESL abandon autonomous helicopter research in favor of other
research areas (specifically aquatic and maritime robots).
In September 2007, Srikanth was hired into the robotics software group of JPL (Saripalli 2007b). This practice of hiring graduate students who worked on a contract is typical for research agencies (Sukhatme 2008). Overall, this indicates great satisfaction with the work Srikanth performed on the contract.
According to Srikanth, JPL continues work in autonomous helicopters and unmanned aerial
vehicles and Srikanth is part of that team.
Appendix G
Hoboken, NJ Robot Garage Maintenance Change
G.1 Introduction
The case of the botched change of a maintenance contract for an automated (robotic) parking facility was selected from the news media, partially because it made CNN’s list of “101 Dumbest Moments in Business” for 2007 (Horowitz 2007). This case uses public records and news media to create a timeline of events leading up to the July 2006 garage shutdown that trapped hundreds of Hoboken, NJ parking patrons’ vehicles for about two weeks. Ultimately, this case shows that inadequate attention to technical maintenance issues, such as software licensing and maintenance agreements, can severely impact a client and the users of an automated system.
G.2 Background
G.2.1 Cast of Players
Acquirer: Parking Authority of Hoboken, NJ
Acquiring Organization: Parking Authority of Hoboken, NJ
Development Organization: Robotic Parking Inc. and Ultronics Inc.
Client: City of Hoboken, NJ
User: On-site parking staff and parking patrons in Hoboken, NJ
G.2.2 Robot Garage and Operational Environment Discussion
The garage operates by a series of pulleys, sleds, and elevators that move patrons’ vehicles and store them efficiently in the structure. Software is needed to effectively plan, store, and retrieve patron vehicles. Patrons either use an RFID card or (later) a keypad with a personal identification number to indicate which vehicle is to be retrieved. The Hoboken Parking Utility maintained 2 garage attendants to help patrons with problems. Additionally, Robotic Parking had personnel either living in Hoboken or commuting to Hoboken to provide technical support for the facility.
G.2.3 Client and Acquiring Organization
Initially, the Hoboken Parking Authority (HPA) let the contract for the construction and delivery of a 314-space automated parking structure. The HPA was led by various commissioners, appointed by the city, who were fully empowered to take actions on behalf of the parking program with minimal to no oversight from the city council. As of 2002, the HPA had built up an $8 million surplus in revenue generated from parking fees and enforcement activities.
The HPA was disbanded in 2002; the Authority’s budget surplus was used to close other gaps in the budget, new bonds were issued, and a Hoboken Parking Utility (HPU) was created as the Authority’s successor agency (Hoboken City Council Minutes, 2002). The HPU, as a department of the city, was under the oversight of the city council and city government. Ultimately, this meant that all contractual actions had to be ratified by the city council.
G.2.4 Developing Organization
Robotic Parking Inc. is based in Clearwater, Florida. The company has disclosed its key technologies via a patent (US Patent #5,669,753) and offers them to local clients through licensing agreements. Robotic Parking Inc. is not a general contractor in the state of New Jersey, and hence partners with construction firms, such as Belcor, that deliver the system. Presently, Robotic Parking has several licensed distributors working in different parts of the US to bring its systems to market and deliver them to clients. One example of this arrangement is Parking Solutions LLC, which operates in the New England area.
G.3 Robotic Parking Garage Timeline
Pre-Contract Modification (1998-2006)
Contract Modification (July 2006)
Dispute (August 2006)
Legal Resolution (August 2006 - Present)
G.3.1 Pre-Contract Modification
The parking facility in question is located at 916 Garden Street in Hoboken, NJ, provides 324
parking spaces, has a waiting list for parking spaces, and currently charges $200/month for a
standard vehicle and $250/month for a sports utility vehicle (Hoboken 2008a). A Belcor/Robotic
Parking Inc. team won a contract in 1998 (after two previous rounds of bidding were thrown
out for administrative reasons) to build the facility. Robotic Parking, not a construction
company, had to partner with Belcor to provide construction and general contractor services in the
winning bid (indeed, Robotic Parking was partnered with all submitting general contractors; the
Belcor partnership provided the lowest cost). (Faria 2002) covers the construction period, which
ran several years and millions of dollars over budget, partially due to allegations of intellectual
property theft and failure to perform between Robotic Parking and Belcor. Ultimately, the courts
intervened and allowed Robotic Parking to finish its system despite objections from Hoboken
and Belcor.
The facility initially opened in October 2002. From opening until the contract termination
in 2006, no fewer than three incidents of damage to patrons’ vehicles and one 26-hour shutdown
occurred (Jennemann 2006). Fortunately, in all the mishaps, no patrons were injured, as no people
are allowed in the vehicle storage area.
Note that Robotic Parking only had month-to-month contracts with the city from 2002-2006
(see the Hoboken City Council Minutes from 2002-2006). As the contract was voted on
monthly by the city council, the city was well aware of the transient nature of the relationship
with its software provider. During this time, Robotic Parking was paid $23,250 per month
for software licensing, support, and management of the 916 Garden Street facility (Jennemann
2006).
G.3.2 Contract Modification
In June 2006, Robotic Parking officials demanded a fee increase to $27,900/month (Minutes
of Meetings of the Council of the City of Hoboken, 2006). Robotic Parking officials argued the
increase was needed to pay for the additional on-site support being provided and to account
for software modifications made to address HPU concerns (having transitioned from RFID cards to PIN
codes and addressed safety issues arising from the various problems); HPU rejected the increase.
Robotic Parking gave written notice to the HPU to terminate its services, effective August 1st,
hoping to force the issue to resolution in the city council. However, in the June 12th city council
meeting, the city voted to authorize HPU to let an emergency contract to find another vendor
to support the 916 Garden Street facility. It is not clear what engineering analysis, if any,
was done by the HPU or Hoboken in the week between the request for more funding and the
council meeting on the 12th to support the “after Robotic Parking” scenario.
Although (Craig 2008) notes that extensive work was done on the request for proposals that went
out later, even the city council resolution on the 12th indicated that the city did not presently have
the resources to manage or operate the facility.
G.3.3 Dispute
On July 25th, a few days before the contract for management and support was to end, several
police officers and officials of HPU escorted Robotic Parking personnel from the 916 Garden Street
facility (Quinn 2006). What is not clear is how or when Ultronics was recruited by the city to perform this
task. Robotic Parking sought an injunction against the use of its software, but the request was denied.
The judge, however, permitted Robotic Parking to have an employee present to ensure that its
software was not being copied.
On August 1st, 2006, the 916 Garden Street automated garage stopped functioning, trapping
hundreds of vehicles in the structure (Quinn 2006). It took just over a week for the courts
to impose a settlement that forced Hoboken and Robotic Parking to accept a $5,500/month
license for the software, for up to three years, to continue running the facility and to release the
patrons’ trapped vehicles.
G.3.4 Legal Resolution and Recent Events
In (Campbell 2006), the city announced that Ultronics (now identified as a US company based in
Massachusetts) would install temporary software in December 2006 and have the facility running
by March 2007. However, (Craig 2008) notes that legal battles continued and eventually forced
as much as 10 months of delays and closures, with the garage reopening for use in January
2008. The delay was largely due to Robotic Parking winning injunctions preventing the city or
Ultronics (now a competitor of Robotic Parking) from utilizing its patented technology. This forced
the city to order Ultronics to install its own hardware at the site.
Appendix H
Robotic Vacuum Customer Satisfaction
H.1 Introduction
This study attempts to reverse-engineer a weighted-sum-of-features table (Dym
2000), or to populate a Quality Function Deployment chart connecting the Voice of the Engineer to
the Voice of the Consumer (Akao 1990), for purchasing a robotic vacuum. Presently there are
39 different autonomous, or robotic, vacuum cleaners on the market. The leader, iRobot, has sold
over 2 million units in its Roomba line in the past five years (iRobot 2008). Given this very large
market, a natural question to ask is “how does a customer select among the different
offerings?”. This study takes the top 9 robots by number of Internet reviews, together with the
technical specifications and unit costs offered by the vendors, to build a linear regression model
of which features contribute to or detract from customer satisfaction. Ultimately, this study finds
that the technical differences that discriminate between the robots are combinations of factors
that seem unnatural for a user to consider. Interestingly, the unit cost of the robotic vacuums did
not appear to impact customer satisfaction.
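The weighted-sum-of-features idea can be made concrete with a short sketch. The feature names, weights, and scores below are hypothetical illustrations only, not values taken from this study:

```python
# Illustrative weighted-sum-of-features trade-off table (in the style of Dym 2000).
# Weights express the relative importance of each feature; scores rate how well a
# candidate robot delivers it. All numbers here are made up for illustration.

def weighted_sum_score(weights, scores):
    """Return the weighted-sum score for one candidate."""
    if set(weights) != set(scores):
        raise ValueError("weights and scores must cover the same features")
    return sum(weights[f] * scores[f] for f in weights)

weights = {"duty_time": 0.4, "noise": 0.2, "base_station": 0.4}
robot_a = {"duty_time": 0.9, "noise": 0.5, "base_station": 1.0}
robot_b = {"duty_time": 0.6, "noise": 0.9, "base_station": 0.0}

print(round(weighted_sum_score(weights, robot_a), 2))  # 0.86
print(round(weighted_sum_score(weights, robot_b), 2))  # 0.42
```

The reverse-engineering question posed by this study is the inverse: given observed satisfaction scores, what weights (and which features) best explain them?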
H.2 Background
H.2.1 Robotic Vacuums
The purpose of a robotic vacuum is to vacuum an area without the direct involvement of a human
in the vacuuming operation. Although this goal of “human-free” autonomy is desired by many
users, most robots require that users prepare the area to be cleaned by picking up small objects,
securing cables, ensuring tassels are not on the floor (e.g., as are sometimes attached to the edges
of rugs), and clearly demarcating the area the robot is to clean (by closing doors or
blocking exits by various means). The majority of the robots indicate that they operate on bare
floors (wood, tile, ceramic, linoleum, etc.) and tight-weave carpets, but are not capable of dealing
with deep-pile, or shag, carpeting.
Consumer Reports (Consumer Reports 2008) indicates that robotic vacuums are still largely
a luxury item and are not yet meant to replace normal vacuums. They published a comparison
and evaluation of 3 robotic vacuums: the Karcher, the Electrolux, and the Roomba 416. The comparison
indicates that the Karcher and Roomba perform in a similarly satisfactory manner with respect to
x, y, and z; the Electrolux performed slightly worse in terms of y. Ultimately, Consumer Reports
recommends the Roomba 416 over the Karcher due to the large price difference.
H.2.2 Vendors and Robots
The full set of 39 vacuums identified, along with the number of reviews available for each on
the selected sites, is summarized in Table H.1.
H.2.3 Internet Product Reviews
The following Internet sites were visited on 3-Feb-2008 to determine how many reviews were
present for the robots:
Amazon.com
epinions.com
target.com
sears.com
shopping.com
shopping.msn.com
buy.com
compuplus.com
wize.com
homeclick.com
Additionally, the following editorial review sites were included with equal weight to other
user reviews:
robotadvice.com
ascully.com
howstuffworks.com
pcmag.com
The goal in the end was to collect as many reviews as possible. Editorial reviews were given
no additional weight, but were generally useful at identifying additional robotic vacuums and
providing links to vendor specifications.
Note that although iRobot provides user forums and user reviews on its webpage, that
source was not considered. It was felt that participants in the corporate forum might be biased in
favor of the system. Even with some negative reviews present in the forum, we would expect
that people who actively participate there are generally satisfied customers and more
likely to generate a review.
# Product Name # of Reviews Vendor Specs?
1 Karcher RC 3000 RoboCleaner 6 Yes
2 iRobot-Roomba Original (Silver) 268 No
3 iRobot-Roomba Pro 57 No
4 iRobot-Roomba 3005 Pro 17 No
5 iRobot-Roomba 3100 Pro Elite 24 No
6 iRobot-Roomba 4100(now 410) 478 Yes
7 iRobot-Roomba 4105 (now 416) 94 Yes
8 iRobot-Roomba 4110 (now 416) 20 Yes
with charging base
9 iRobot-Roomba 4150 (see #7) Yes
(4105 except color, exclusive distributors)
10 iRobot-Roomba 4160 (see #7) Yes
(4105)
11 iRobot-Roomba 4188 (see #7) Yes
(4105, color pink, 20% donated to charity)
12 iRobot-Roomba 4199 472 No
13 iRobot-Roomba 4210 Discovery 1,312 No
14 iRobot-Roomba 4230 Scheduler 475 No
15 iRobot-Roomba 4225 (see #14) No
(4230, exclusive distributors)
16 iRobot-Roomba 4326 7 No
17 iRobot-Roomba 4275 0 No
18 iRobot-Roomba 4296 4 No
19 iRobot-Roomba 510 0 Yes
20 iRobot-Roomba 530 55 Yes
21 iRobot-Roomba-540 0 Yes
22 iRobot-Roomba-550 2 No
(model only listed by distributors)
23 iRobot-Roomba 560 110 Yes
24 iRobot-Roomba 570 1 Yes
25 iRobot-Roomba 580 1 Yes
26 Roboking By LG 1 No
(production canceled)
27 CleanMate QQ-2 3 Yes
28 CleanMate QQ-1 17 Yes
29 iTouchless Robotic Vacuum 3 Yes
30 P3 P4900 Robotic Vacuum 2 Yes
31 Koolvac KV-1 7 No
32 Lentek RV01 IntelliVac 4 Yes
33 Black & Decker RV500 Zoombot 17 Yes
34 Black & Decker RV501 Zoombot 1 Yes
35 Electrolux EL520A Trilobite 1.0 13 Yes
36 P3 P4920 Robotic Vacuum 0 Yes
37 P3 P4940 Robotic Vacuum 0 Yes
38 Microbot UBOT 1 Yes
39 Robo Maxx 1 No
Table H.1: Robotic Vacuum Reviews
Robot 5-Star Ratio 4 and 5-Star Ratio
Karcher RC 3000 0.50 0.88
Electrolux Trilobite 0.46 0.69
Black&Decker ZoomBot 0.00 0.29
CleanMate QQ-1 0.06 0.53
iRobot Roomba 410 0.53 0.25
iRobot Roomba 416 0.45 0.26
iRobot Roomba 416 0.60 0.80
(with base station)
iRobot Roomba 530 0.62 0.91
iRobot Roomba 560 0.61 0.79
Table H.2: Robotic Vacuum Review Rates
H.3 Robotic Vacuum Selection
Not all robots identified were eligible for inclusion in this study. Many robots (14) did not have
technical information available directly from the product vendor’s web pages. The use of editorial
reviews to provide technical information was considered, but rejected when several sites had
different technical information available for iRobot Roomba products (weights that varied by
more than 1 kg and dimensions that differed by as much as 2 cm between sites). As only
authoritative information was desired, any product that was no longer supported by its vendor,
or whose vendor did not provide technical information, was excluded. This exclusion is needed
because robots without verifiable technical specifications would be unable to participate in the
linear regression: they would either have no data or add too much uncertainty (in the case of
non-vendor review information).
Many vacuums (17) also had too few reviews. Despite having only 6 reviews, the
RC 3000 is kept in the pool due to its performance and recognition in many unranked comparisons
(e.g., the Consumer Reports comparison).
Ultimately this leads to the selection of the following robots for further analysis: Karcher RC
3000, Electrolux Trilobite 1.0, Black & Decker RV500 ZoomBot, CleanMate QQ-1, and the following
iRobot models: 410, 416, 416 with charging base, 530, and 560. Note that all robots were released
in 2004, except for the Roomba 500s, which were released in 2007.
H.4 Features of Selected Robotic Vacuums
Table H.2 lists the proportion of reviews at the 5-star and 4-or-5-star levels (partial-star reviews
were truncated, except that reviews of less than 1 star were converted to 1-star reviews).
The following list enumerates the technical features of each unit. When a value is unknown a
“?” is listed and that case will not be considered in analyses that examine that variable. Physical
attributes are for the mobile robot only, and thus are not shipping weights and do not include the
weights of associated items (such as base stations or remote controls). Two technical features,
manual start of cleaning and front oriented bump sensors, are present on all systems. Of note,
several expected technical parameters were unavailable except in a limited number of systems:
vacuum power (only 1 reported), speed (only 1 reported), and robot debris bin volume (only 3
reported).
Karcher RC 3000 (http://www.robocleaner.de/english/work1.html)
– US Distributor - False
– Actual Cost - $1,500 (Cost Rank #2)
– Disk Form Factor
– Diameter 28 cm
– Height 11 cm
– Weight 2 kg
– Noise 54 dbA
– Base Station (provides power, enables auto-charging/auto-return, transfers debris from
robot)
– Charge Time 0.3 hours
– NiMH Battery
– Battery Power 1.6 AH
– Cleaning Duty Time 60 minutes
– Overhead sensor - True
– Non-contact front sensor - False
– Dirt Sensor - True
– Debris Bin Sensor - True
– Number of filters included: 1
– UV Surface Disinfecting - False
– Fragrance Slot - True
– Virtual Walls - False
– Schedule Clean Time - False
– Remote Control - False
– Programming Interface Port - False
– Corporate-Sponsored User Forum - False
– Spot cleaning mode - False
Electrolux Trilobite 1.0 (http://trilobite.electrolux.com/node217.asp)
– US Distributor - True
– Actual Cost - $1,600 (Cost Rank #1)
– Disk Form Factor
– Diameter 35 cm
– Height 13 cm
– Weight 5 kg
– Noise 75 dbA
– Base Station (provides power, enables auto-charging/auto-return)
– Charge Time 2 hours
– NiMH Battery
– Battery Power ? AH
– Cleaning Duty Time 60 minutes
– Overhead sensor - False
– Non-contact front sensor - True
– Dirt Sensor - False
– Debris Bin Sensor - True
– Number of filters included: 5
– UV Surface Disinfecting - False
– Fragrance Slot - True
– Virtual Walls - True (permanently installed strips)
– Schedule Clean Time - True
– Remote Control - False
– Programming Interface Port - False
– Corporate-Sponsored User Forum - False
– Spot cleaning mode - False
Black & Decker ZoomBot
(http://www.everydayrobots.com/index.php?option=content&task=view&id=9&Itemid=.)
– US Distributor - True
– Actual Cost - $99 (Cost Rank #8)
– Square Form Factor
– Diameter 36 cm
– Height 10 cm
– Weight ? kg
– Noise ? dbA
– No Base Station
– Charge Time ? hours
– NiCD Battery
– Battery Power ? AH
– Cleaning Duty Time 45 minutes
– Overhead sensor - False
– Non-contact front sensor - False
– Dirt Sensor - False
– Debris Bin Sensor - True
– Number of filters included: ?
– UV Surface Disinfecting - False
– Fragrance Slot - True
– Virtual Walls - False
– Schedule Clean Time - False
– Remote Control - False
– Programming Interface Port - False
– Corporate-Sponsored User Forum - False
– Spot cleaning mode - ?
CleanMate QQ-1 (http://www.metapo.com/products/home/cleanmate.php)
– US Distributor - True
– Actual Cost - $123 (Cost Rank #7)
– Disk Form Factor
– Diameter 36 cm
– Height 9 cm
– Weight 2.7 kg
– Noise 80 dbA
– No Base Station
– Charge Time 2.5 hours
– NiCD Battery
– Battery Power 2.5 AH
– Cleaning Duty Time 80 minutes
– Overhead sensor - True
– Non-contact front sensor - False
– Dirt Sensor - False
– Debris Bin Sensor - False
– Number of filters included: 3
– UV Surface Disinfecting - True
– Fragrance Slot - True
– Virtual Walls - False
– Schedule Clean Time - False
– Remote Control - True
– Programming Interface Port - False
– Corporate-Sponsored User Forum - False
– Spot cleaning mode - False
iRobot Roomba 410 (http://www.irobot.com &http://www.irobot.cz)
– US Distributor - True
– Actual Cost - $150 (Cost Rank #6)
– Disk Form Factor
– Diameter 34.5 cm
– Height 9 cm
– Weight 3 kg
– Noise 79 dbA
– No Base Station
– Charge Time 7 hours
– NiMH Battery
– Battery Power 1.6 AH
– Cleaning Duty Time 90 minutes
– Overhead sensor - False
– Non-contact front sensor - False
– Dirt Sensor - True
– Debris Bin Sensor - False
– Number of filters included: 2
– UV Surface Disinfecting - False
– Fragrance Slot - True
– Virtual Walls - IR from movable unit
– Schedule Clean Time - False
– Remote Control - False
– Programming Interface Port - True
– Corporate-Sponsored User Forum - True
– Spot cleaning mode - True
iRobot Roomba 416 (http://www.irobot.com &http://www.irobot.cz)
– US Distributor - True
– Actual Cost - $200 (Cost Rank #5)
– Disk Form Factor
– Diameter 34.5 cm
– Height 9 cm
– Weight 3 kg
– Noise 79 dbA
– No Base Station
– Charge Time 3 hours
– NiMH Battery
– Battery Power 1.6 AH
– Cleaning Duty Time 120 minutes
– Overhead sensor - False
– Non-contact front sensor - False
– Dirt Sensor - True
– Debris Bin Sensor - False
– Number of filters included: 1
– UV Surface Disinfecting - False
– Fragrance Slot - True
– Virtual Walls - IR from movable unit
– Schedule Clean Time - False
– Remote Control - False
– Programming Interface Port - True
– Corporate-Sponsored User Forum - True
– Spot cleaning mode - True
iRobot Roomba 416 (with base station) (http://www.irobot.com &http://www.
irobot.cz)
– US Distributor - True
– Actual Cost - $? (Cost Rank #?) (although this configuration is no longer sold as a
bundle, information for all components to upgrade from the base 416 is available)
– Disk Form Factor
– Diameter 34.5 cm
– Height 9 cm
– Weight 3 kg
– Noise 79 dbA
– Base Station (provides power, enables auto-charging/auto-return)
– Charge Time 3 hours
– NiMH Battery
– Battery Power 1.6 AH
– Cleaning Duty Time 120 minutes
– Overhead sensor - False
– Non-contact front sensor - False
– Dirt Sensor - True
– Debris Bin Sensor - False
– Number of filters included: 1
– UV Surface Disinfecting - False
– Fragrance Slot - True
– Virtual Walls - IR from movable unit
– Schedule Clean Time - False
– Remote Control - False
– Programming Interface Port - True
– Corporate-Sponsored User Forum - True
– Spot cleaning mode - True
iRobot Roomba 530 (http://www.irobot.com)
– US Distributor - True
– Actual Cost - $300 (Cost Rank #4)
– Disk Form Factor
– Diameter ? cm
– Height ? cm
– Weight ? kg
– Noise ? dbA
– Base Station (provides power, enables auto-charging/auto-return)
– Charge Time 3 hours
– NiMH Battery
– Battery Power 1.6 AH
– Cleaning Duty Time 120 minutes
– Overhead sensor - False
– Non-contact front sensor - True
– Dirt Sensor - True
– Debris Bin Sensor - False
– Number of filters included: 1
– UV Surface Disinfecting - False
– Virtual Walls - IR from movable, schedulable unit
– Schedule Clean Time - False
– Remote Control - False
– Programming Interface Port - True
– Corporate-Sponsored User Forum - True
– Voice demonstration mode - True
– Spot cleaning mode - True
iRobot Roomba 560 (http://www.irobot.com)
– US Distributor - True
– Actual Cost - $350 (Cost Rank #3)
– Disk Form Factor
– Diameter ? cm
– Height ? cm
– Weight ? kg
– Noise ? dbA
– Base Station (provides power, enables auto-charging/auto-return)
– Charge Time 3 hours
– NiMH Battery
– Battery Power 1.6 AH
– Cleaning Duty Time 120 minutes
– Overhead sensor - False
– Non-contact front sensor - True
– Dirt Sensor - True
– Debris Bin Sensor - False
– Number of filters included: 1
– UV Surface Disinfecting - False
– Virtual Walls - IR from movable, schedulable unit, also provides functions to guide
robot back to base and to control room access (contain to one area then allow passage
to move to another area for cleaning)
– Schedule Clean Time - True
– Remote Control - True
– Programming Interface Port - True
– Corporate-Sponsored User Forum - True
– Voice demonstration mode - True
– Spot cleaning mode - True
Overall, 28 feature sets are considered (including price). Many of the features are recorded as
true/false values indicating whether the feature is present (true) or not (false). Because such
parameters contribute either 0 or 1 point, several parameter sets may look identical, form partitions
around specific vendors or robots, or be linear combinations of each other. We must carefully
examine these parameters to ensure that we understand how to interpret the resulting analysis.
Observe that the following features have identical values across all the robots (meaning
there is no way to distinguish the features in a strictly numerical model):
Programming Interface = Corporate Sponsored User Forum
inverse of US Distributor = Base Station Empties Debris Bin of Robot
UV Disinfection = Fragrance Slot = Battery Power (2.5 AH)
Spot cleaning mode = movable virtual wall
Schedule virtual wall = Voice demonstration mode
We must be careful when observing an impact due to one of these features, as the impact may
be due to either or both features. Essentially, the features within each identical group are
indistinguishable, and regressions based on either or both of them will be equally likely. This
examination helps reduce the total number of regressions that need to be examined: because
identical feature sets would be redundant, the pool of potential parameters is reduced by 6, to 22.
Observe that some features are linear combinations of others (meaning that knowing the items on the
left tells us the value of the item on the right). Such combinations may be accidental or may be
natural results of the variables; examples include:
Form factor + UV disinfection = Battery Type
Installable Virtual Wall + Voice Demonstration Mode = Non-contact Front Sensor
Programming Interface + Base Station Empties Debris Bin = Dirt Sensor
Virtual wall supports robot room control and navigation + Installable Virtual Wall = Schedule
Clean Time
Virtual wall supports robot room control and navigation + UV disinfection = Remote Control
Schedule Virtual Wall + Installable Virtual Wall = Non-contact front sensor
Base station empties debris bin of robot + UV Disinfection = Overhead sensor
We must be careful when examining the impact of any of these parameters (especially those
on the right) to ensure that the other factors do not play some role in satisfaction. Ultimately, this
means that regressions utilizing any of the features on the right are indistinguishable
from regressions utilizing all the parameters on the left instead. This also helps reduce the
number of regressions that need to be examined, as variables that are linear combinations of other
factors do not need to be considered. The pool of potential parameters is reduced by 7, to 15.
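Both kinds of redundancy described above, identical columns and linear combinations, can be detected mechanically before regression. The following is a minimal sketch using a small hypothetical 0/1 feature matrix (rows would be robots, columns features), not the study's actual data:

```python
import numpy as np

# Hypothetical true/false feature matrix: 4 robots (rows) x 4 features (columns).
# Column f1 duplicates f0, and f3 = f0 + f2 (a linear combination).
X = np.array([
    [1, 1, 0, 1],
    [0, 0, 1, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
])  # columns: f0, f1, f2, f3

# Identical columns carry the same information and are indistinguishable
# to a purely numerical model: flag every duplicated pair.
dupes = [(i, j)
         for i in range(X.shape[1])
         for j in range(i + 1, X.shape[1])
         if np.array_equal(X[:, i], X[:, j])]
print(dupes)  # [(0, 1)]

# A rank deficit signals that some columns are linear combinations of others.
rank = np.linalg.matrix_rank(X)
print(X.shape[1] - rank)  # 2 dependent directions among the 4 columns
```

With the study's 9 robots and 28 candidate features, a screen like this would surface the same reductions enumerated above (6 columns dropped as duplicates, 7 more as linear combinations) without hand inspection.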
Observe that the following 8 features form set partitions (meaning that the feature only iden-
tifies a specific robot or a specific brand of robot):
Spot Cleaning Mode = iRobot (with 1 unknown for the ZoomBot)
Programming Interface = iRobot
Schedule Virtual Wall = iRobot Roomba 500’s
Virtual wall supports robot room control and navigation = iRobot Roomba 560
Installable Virtual Wall = Electrolux Trilobite
UV Disinfection = CleanMate QQ-1
Base Station Empties Debris Bin of Robot = RC 3000
Form Factor Square = Black & Decker ZoomBot
We must be careful when examining the impact of weights assigned to partitions. Such
weights may not be directly due to that factor and may represent satisfaction or dissatisfaction
based on other, non-stated, features of the robot.
Conversely, there are 9 parameters that do not seem to have any linear relationship with the
other parameters:
Cost
Diameter, height, and weight
Noise
Cleaning Duty Cycle
Debris-level sensor
Base station supports auto-return and auto-recharge
Filters included
H.5 Reverse Engineering a Weighted-Sum Trade-Off Matrix
H.5.1 Method
This work uses linear regressions to match technical features to satisfaction rates. For model
selection, Mallow’s Cp statistic is used to discriminate which model parameters are the best candidates
for selection or elimination from a model. Finally, models are compared with respect to their
F-value, number of cases fit, p-value, residual sum of squares, and other statistical
factors to differentiate between model candidates.
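The Mallow's Cp screening described above can be sketched briefly. The data, dimensions, and variable names below are hypothetical; the point is only the mechanics: for the full model, Cp equals its parameter count by construction, and candidate subsets whose Cp is close to their own parameter count fit well without excess parameters.

```python
import numpy as np

# Hypothetical data: n cases, 3 candidate predictors, response driven by one.
rng = np.random.default_rng(0)
n = 9
X_full = rng.normal(size=(n, 3))
y = 0.4 + 0.38 * X_full[:, 0] + rng.normal(scale=0.1, size=n)

def sse(X, y):
    """Residual sum of squares for OLS of y on [1, X]."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = y - A @ beta
    return float(r @ r)

# Error-variance estimate from the full model (k predictors + intercept).
k = X_full.shape[1]
s2 = sse(X_full, y) / (n - k - 1)

def mallows_cp(X_sub, y):
    p = X_sub.shape[1] + 1  # parameters including the intercept
    return sse(X_sub, y) / s2 - (n - 2 * p)

# Cp for the one-predictor subset; a value near p = 2 marks a good candidate.
print(round(mallows_cp(X_full[:, :1], y), 2))
```

Forward selection would grow the subset (and backward selection shrink it) while tracking which moves bring Cp closest to the subset's parameter count.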
H.5.2 Analysis
After forward and backward selection among parameter sets for the 4-or-5-star reviews (the
“Liked the system” responses), the following regression appears to be the best available (X17 is
the variable code for “Battery Type”):
Data set = robovac, Name of Fit = L31
Normal Regression
Kernel mean function = Identity
Response = Liked
Terms = (X17)
Coefficient Estimates
Label Estimate Std. Error t-value p-value
Constant 0.410000 0.0689720 5.944 0.0006
X17 0.380000 0.0782069 4.859 0.0018
R Squared: 0.771309
Sigma hat: 0.0975412
Number of cases: 9
Degrees of freedom: 7
Summary Analysis of Variance Table
Source df SS MS F p-value
Regression 1 0.224622 0.224622 23.61 0.0018
Residual 7 0.0666 0.00951429
Pure Error 7 0.0666 0.00951429
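As a cross-check of the mechanics, the regression of “Liked” on a binary battery-type indicator can be refit directly from the ratios listed in Table H.2. With a single 0/1 predictor, ordinary least squares reduces to group means: the intercept is the NiCD group mean and the slope is the NiMH mean minus the NiCD mean. The coefficients obtained this way need not reproduce the fit reported above, which may reflect a different weighting of the underlying reviews:

```python
import numpy as np

# "Liked" (4-or-5-star) ratios in Table H.2 order: Karcher, Electrolux,
# ZoomBot, QQ-1, Roomba 410, 416, 416 w/base, 530, 560.
liked = np.array([0.88, 0.69, 0.29, 0.53, 0.25, 0.26, 0.80, 0.91, 0.79])
# Battery-type indicator (1 = NiMH, 0 = NiCD), from the spec lists above.
nimh = np.array([1, 1, 0, 0, 1, 1, 1, 1, 1], dtype=float)

# OLS of liked on [1, nimh].
A = np.column_stack([np.ones_like(nimh), nimh])
(intercept, slope), *_ = np.linalg.lstsq(A, liked, rcond=None)

print(round(intercept, 4))  # NiCD group mean
print(round(slope, 4))      # NiMH mean minus NiCD mean
```

This group-mean view makes the later interpretation concrete: a strong “battery type” coefficient here is really a statement that the two NiCD brands sit well below the NiMH brands in satisfaction.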
Examination of the same regression for 5-star (“X01”) reviews only:
Data set = robovac, Name of Fit = L33
Normal Regression
Kernel mean function = Identity
Response = X01
Terms = (X17)
Coefficient Estimates
Label Estimate Std. Error t-value p-value
Constant 0.0300000 0.0484663 0.619 0.5555
X17 0.508571 0.0549556 9.254 0.0000
R Squared: 0.924439
Sigma hat: 0.0685417
Number of cases: 9
Degrees of freedom: 7
Summary Analysis of Variance Table
Source df SS MS F p-value
Regression 1 0.402337 0.402337 85.64 0.0000
Residual 7 0.0328857 0.00469796
Pure Error 7 0.0328857 0.00469796
Notice that selection by battery type, while strong for 4-and-5-star reviews, is exceptionally
strong for predicting complete satisfaction. Note, however, that regressions involving the duty
cycle or charging time fail to provide a comparably strong model. We also observe that the Black
& Decker and CleanMate systems are significantly worse performers in satisfaction rank than
the other robots.
The following summary for 5-star and “Liked” reviews indicates that we may wish to treat
the Black & Decker and CleanMate as outliers.
Data set = robovac, Summary Statistics
8 cases are missing at least one value.
Variable N Average Std Dev Minimum Median Maximum
X01 9 0.42556 0.23324 0 0.5 0.62
Liked 9 0.70556 0.1908 0.29 0.78 0.93
In this case, we find no good regression for the “Liked” case. However, on examining only
5-star reviews, the following regression appears satisfactory (“X30” is “installable virtual wall”,
“X15” is “Base Station Empties Debris Bin of Robot”, and “X24” is “Base station supports
auto-return and auto-recharge”):
Data set = robovac, Name of Fit = L60
Deleted cases are
(2 3)
Normal Regression
Kernel mean function = Identity
Response = X01
Terms = (X30 X15 X24)
Coefficient Estimates
Label Estimate Std. Error t-value p-value
Constant 0.490000 0.0238048 20.584 0.0003
X30 -0.150000 0.0388730 -3.859 0.0308
X15 -0.110000 0.0388730 -2.830 0.0662
X24 0.120000 0.0307318 3.905 0.0298
R Squared: 0.890625
Sigma hat: 0.033665
Number of cases: 9
Number of cases used: 7
Degrees of freedom: 3
Summary Analysis of Variance Table
Source df SS MS F p-value
Regression 3 0.0276857 0.00922857 8.14 0.0594
Residual 3 0.0034 0.00113333
Pure Error 3 0.0034 0.00113333
Observe the excellent p-value and F-statistic for the model selected for “Liked” over all cases;
this gives very high confidence that the model is “valid” for the data presented.
The best resulting regression for the reduced comparison set is less confident, but still a valid
model. Given that this is a human-perception response, p-values under 0.2 are generally
considered acceptable.
H.6 Interpretation of Results
Observe that for the whole data set, battery type was the most important selected factor,
while user-perceivable criteria such as duty cycle and charge time were not selected;
indeed, models fit to those factors do not perform as well. NiMH batteries are considered
more environmentally friendly than NiCD. However, reading some of the negative reviews of
the ZoomBot and CleanMate QQ-1 systems indicates that battery type was not the reason for
a poor review. In those cases, users seemed dissatisfied with suction power, vacuuming
performance, and coverage patterns. It is not clear from the reviews why battery type would
be critical. Ultimately, this appears to be a “don’t like these brands” partition.
After eliminating the two worst-performing robots, no regression was able to adequately
predict whether a user would “Like” the system (4-or-5-star reviews). For the case of 5-star-only
reviews, the weighting of the parameter set seemed unnatural. Although it is obvious why a user
may prefer a robot that can auto-return and auto-recharge and may not favor installed virtual walls,
it is less clear why users would discount the base station emptying the robot’s debris bin; in the
written comments, the base station emptying the bin was a lauded feature. No clear logic connects
the noted technical differences between the units. Essentially, the feature set selects
for certain models within a specific vendor, becoming a previously unanticipated set partition
within the iRobot line. The preference would seem to be for iRobot systems with auto-return and
auto-recharging. Interestingly, the Roomba 530, which had the best “Liked” satisfaction at over
0.93 (0.1 better than the next competitor), offered only a proper subset of the Roomba 560’s
features, and no technical difference was noted that would differentiate the two.
Abstract
This thesis is concerned with the identification of the engineering practices that most influence the ability of an organization to successfully acquire and employ a robot. Of specific interest are the matches or mismatches between our technical efforts and achieving robotic systems that are suitable for the intended purpose. From a survey of engineers (n=18) who have advised on or performed the acquisition of robots, candidate relations between engineering practices and system success metrics are proposed. Those relationships are then evaluated against 5 case studies and one mini-study to examine more closely how the practices are implemented as specific engineering methods in context. From those observations, a series of project feasibility rationales are proposed to aid engineers and managers in evaluating the feasibility of their robotic system acquisitions.