QUANTIFYING THE IMPACT OF REQUIREMENTS VOLATILITY ON SYSTEMS
ENGINEERING EFFORT
by
Mauricio Eduardo Peña
A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(INDUSTRIAL AND SYSTEMS ENGINEERING)
August 2012
Copyright 2012 Mauricio Eduardo Peña
Dedication
This dissertation is dedicated to my mother, Martha Sonia, and in memory of my father, José Armando.
Acknowledgements
I am eternally grateful to my family, friends, academic advisors, colleagues, and
industry affiliates, for their support, guidance, collaboration, and patience, which made
this dissertation possible.
First, I would like to thank my family; my mother and father, Martha Sonia and
José Armando, for instilling in me the value of academic achievement and the
importance of setting high expectations for oneself. Although my father is no longer
with us, his strength and positive influence are with me always. My mother’s courage
in the face of adversity provided me the motivation that propelled me in this endeavor.
To my girlfriend Crystal, my study partner, whose love, companionship, and support
helped me through this journey. And thank you to my brother and sisters, Armando,
Sonia, and Marcela – your intelligence and achievements inspire me.
This research built upon the solid foundations laid by Dr. Barry Boehm and Dr.
Ricardo Valerdi. I am extremely grateful for their guidance, intellectual generosity, and
creative insights. I also want to thank Dr. Valerdi for his mentorship, encouragement,
and patience that guided me in the path to becoming a researcher. My gratitude also goes
to Dr. Stan Settles and Dr. Roger Ghanem whose suggestions and valuable advice
improved the quality of this dissertation.
Additionally, I would like to thank the Center for Systems and Software
Engineering Corporate Affiliates for participating in the research workshops and
providing resources for this effort. This research also received support from members of
the Los Angeles Chapter of the International Council on Systems Engineering and from
the Practical Software and Systems Measurement organization. I am also grateful to my
management at the Boeing Company for supporting the research and allowing me the
flexibility to complete this dissertation. In particular, I would like to thank Catherine
Keller for her continuing support and insightful recommendations. In addition, my
thanks go to Dr. Tony Lin for his tutorials on statistical methods and his intellectual
inputs.
Table of Contents
Dedication .......................................................................................................................... ii
Acknowledgements........................................................................................................... iii
List of Tables ................................................................................................................... vii
List of Figures ................................................................................................................... ix
Abbreviations ..................................................................................................................... x
Abstract ............................................................................................................................ xii
Chapter 1: Introduction ..................................................................................................... 1
1.1 Importance of the Research .............................................................................. 1
1.2 Requirements Volatility and Cost Estimation .................................................. 4
1.3 Proposition and Hypotheses .............................................................................. 6
Chapter 2: Background ..................................................................................................... 9
2.1 Systems Engineering and Requirements Development .................................. 10
2.1.1 Systems Engineering Definition ........................................................... 10
2.1.2 Requirements Definition ....................................................................... 10
2.1.3 Requirements Development and Management ..................................... 11
2.1.4 Requirements Management and the Acquisition Process ..................... 13
2.2 Requirements Trends and Volatility Metrics .................................................. 17
2.2.1 Requirements Volatility Definitions ..................................................... 17
2.2.2 Requirements Metrics ........................................................................... 18
Chapter 3: Related Research ........................................................................................... 22
3.1 Requirements Volatility Research ................................................................... 22
3.1.1 Causes of Requirements Volatility ....................................................... 23
3.1.2 Effects of Requirements Volatility ....................................................... 26
3.2 Parametric Cost Estimation Models ................................................................ 34
3.2.1 COSYSMO ........................................................................................... 34
3.2.2 COCOMO ............................................................................................ 39
Chapter 4: Model Definition ......................................................................................... 45
4.1 Development of the Model .............................................................................. 45
4.2 Modeling Approach ........................................................................................ 46
4.3 Model Evolution.............................................................................................. 47
Chapter 5: Research Methodology ................................................................................ 58
5.1 Research Design .............................................................................................. 58
5.2 Data Collection ................................................................................................ 60
5.2.1 Workshop Surveys ................................................................................ 60
5.2.2 Project Data Collection ......................................................................... 64
5.3 Data Analysis .................................................................................................. 67
5.3.1 Linear Regression ................................................................................. 67
5.3.2 Bayesian Calibration ............................................................................. 69
5.3.3 Model Evaluation .................................................................................. 70
5.4 Threats to Validity ........................................................................................... 73
Chapter 6: Results ....................................................................................................... 77
6.1 Behavioral Analysis and Identification of Relevant Variables ....................... 77
6.2 Expert Assessment of Model Parameters ........................................................ 87
6.3 Model Selection ............................................................................................... 92
6.4 Data Analysis .................................................................................................. 94
6.5 Cross-Validation and Sensitivity Analysis .................................................... 106
Chapter 7: Conclusions ............................................................................................... 111
7.1 Contributions to the Field of Systems Engineering....................................... 112
7.2 Future Research ............................................................................................. 114
References ...................................................................................................................... 117
Appendices
Appendix A: Process Categories and Activities: ANSI/EIA 632 ......................... 125
Appendix B: Organizations that Participated in the Research .............................. 126
Appendix C: Requirements Volatility Survey ...................................................... 127
Appendix D: PSM Conference Survey Exercise .................................................. 134
Appendix E: Requirements Volatility Survey #2 ................................................. 135
Appendix F: Requirements Volatility Delphi Survey ........................................... 143
List of Tables
Table 1 Requirements Definition ................................................................................. 11
Table 2 Requirements Trends Base Measure Specifications ....................................... 19
Table 3 Requirements Volatility Derived Measures .................................................... 20
Table 4 Summary of Cause and Effect Research of Requirements Volatility ............. 22
Table 5 COSYSMO Drivers of Systems Engineering Effort ....................................... 35
Table 6 Number of System Requirements Definition .................................................. 38
Table 7 Ada COCOMO Σ factor: Requirements Volatility ......................................... 42
Table 8 Requirements Volatility Observations ........................................................... 45
Table 9 Spiral 2 Sub-models ........................................................................................ 54
Table 10 Requirements Volatility Rating Level ............................................................ 56
Table 11 Summary of Survey and Workshops .............................................................. 61
Table 12 Data Collection Measures ............................................................................... 67
Table 13 Comparison of Model Alternatives ................................................................. 93
Table 14 Alternative Scale Factor Constants ................................................................. 94
Table 15 Regression Analysis Results ........................................................................... 97
Table 16 Bayesian-Calibrated Volatility Weighting Factors ......................................... 98
Table 17 Breakdown of Requirements Volatility per Life Cycle Phase ........................ 99
Table 18 Requirements Volatility across Life Cycle Phases ......................................... 99
Table 19 Volatility Ratings with Weighted Scoring .................................................... 101
Table 20 Model Performance Comparison .................................................................. 102
Table 21 Requirements Volatility Scale Factor Correlation Matrix ............................ 105
Table 22 Cross-Validation Results ............................................................................. 106
Table 23 Sensitivity Analysis Scenarios .................................................................... 108
Table 24 Sensitivity Analysis Results: Scenarios 1 and 2 .......................................... 109
Table A-1 ANSI/EIA 632 Standard Systems Engineering Activities .......................... 124
Table B-1 Organizations that Participated in the Research .......................................... 125
List of Figures
Figure 1 Literature map ................................................................................................ 9
Figure 2 Systems Engineering Reviews and Acquisition Life Cycle Phases .............. 14
Figure 3 Requirements and the Evolutionary Acquisition Process .............................. 16
Figure 4 Graphical Representation of Requirements Volatility Trends....................... 21
Figure 5 Types of Requirements Changes over Time .................................................. 21
Figure 6 ISO/IEC 15288 Life Cycle phases and COSYSMO Scope ........................... 37
Figure 7 Seven Step Modeling Methodology .............................................................. 60
Figure 8 Potential Causes of Requirements Volatility ................................................. 78
Figure 9 Requirements Volatility Causal Model Diagram .......................................... 79
Figure 10 Impact of Volatility on Rework and Project Size ......................................... 81
Figure 11 Impacts of Volatility Causal Model Diagram ................................................ 82
Figure 12 Expected Level of Requirements Volatility per Life Cycle Phase ................ 83
Figure 13 Requirements Volatility Life Cycle Profile (N = 9) ...................................... 85
Figure 14 Representative “Ease of Change” Profile ...................................................... 87
Figure 15 Life Cycle Effort Penalty due to Volatility ................................................... 89
Figure 16 Weighting Factors per Change Category ....................................................... 90
Figure 17 Requirements Volatility Profiles ................................................................. 100
Figure 18 Effort and Size Baseline Coefficient of Determination ............................... 103
Figure 19 Effort and Size Coefficient of Determination with Volatility Effects ......... 104
Figure 20 Sensitivity Analysis – Scenario 1 ................................................................ 108
Figure 21 Sensitivity Analysis – Scenario 2 ................................................................ 109
Abbreviations
AAS Advanced Automation System
CDR Critical Design Review
CER Cost Estimating Relationship
COCOMO Constructive Cost Model
COSYSMO Constructive Systems Engineering Cost Model
CSSE Center for Systems and Software Engineering
DAMS Defense Acquisition Management System
DoD Department of Defense
DOORS Dynamic Object Oriented Requirements System
ECP Engineering Change Proposals
EMD Engineering and Manufacturing Development
GAO Government Accountability Office
FA Focus Area
FAA Federal Aviation Administration
INCOSE International Council on Systems Engineering
KSLOC Thousand Source/Software Lines of Code
LAI Lean Aerospace Initiative
MMRE Mean Magnitude of Relative Errors
MSE Mean Square Error
OLS Ordinary Least Squares
PDR Preliminary Design Review
PSM Practical Software and Systems Measurement
QDR Quadrennial Defense Review
REVL Requirements Evolution and Volatility
RSS Residual Sum of Squares
SDRF Software Development Risk Factors
SEK Standard Error of Kurtosis
SES Standard Error of Skewness
SF Scale Factor
SFR System Functional Review
SRR System Requirements Review
WBS Work Breakdown Structure
Abstract
Although changes in requirements are expected as part of a system’s
development, excessive volatility after the requirements baseline is likely to result in
cost overruns and schedule extensions in large complex systems. Furthermore, late
changes in requirements may cause significant rework of engineering products and lead
to project failure. Changes in requirements should be expected, accounted for and
managed within the context of the system of interest. Unfortunately, system developers
lack adequate methods and tools to anticipate and manage the impact of volatile
requirements, and cost estimating techniques often fail to account for their economic
consequences.
This dissertation presents an extension to COSYSMO, a generally-available
parametric systems engineering cost model, which incorporates requirements volatility
as a predictor of systems engineering effort within COSYSMO’s structure and scope
with the aim of improving the model’s cost estimation capabilities. The requirements
volatility model extension was developed through a combination of expert judgment
gathered through surveys and discussions in six different research workshops and
historical data collected from 25 projects. The null hypothesis that the volatility of
requirements throughout a system’s life cycle is not a statistically significant predictor
of systems engineering effort was rejected in favor of the alternative hypothesis with a
P-value of 0.03. A comparison of the estimation accuracy of COSYSMO to that of the
model that includes volatility effects shows an improvement in predictive accuracy
from 52% to 80% at the PRED (20) level and a reduction in the mean magnitude of
relative error (MMRE) from 21% to 16%. In addition, the coefficient of determination
between the predictor, systems engineering size adjusted for diseconomies of scale, and
the response, actual systems engineering effort, improved from an R² of 0.85 to an R² of
0.92 when the volatility factor was applied to the model.
In addition to the mathematical model that quantifies the impact of volatility on
systems engineering effort, the contributions of the research include a-) a set of project
organizational, technical, and contextual factors ranked by subject matter experts in
terms of their influence on requirements volatility; b-) the operationalization of the
requirements volatility parameter in COSYSMO through a 5-point rating scale; and c-) a
documented set of observations, developed from the literature and the research
workshops, that describe the behaviors and effects of requirements volatility throughout
the system life cycle.
Chapter 1: Introduction
1.1 Importance of the Research
Changes to requirements are part of our increasingly complex systems and
dynamic business environment. The customer may not be able to fully specify the
system requirements at the beginning of the project because stakeholder needs evolve
rapidly and new requirements may emerge as knowledge of the system evolves (Reifer,
2000). In addition, a change in the marketplace may obviate the need for a product
feature that was initially desired (Kotonya and Sommerville, 1998). Conversely, changes
in technology during the system development could make new features or interfaces
more desirable, resulting in the addition or modification of requirements (Kulk and
Verhoef, 2008).
Although changes in requirements during the life cycle of a system should be
expected, requirements volatility has been shown to cause significant cost and schedule
overruns (GAO, 2004; Kulk and Verhoef 2008). Requirements have been described as
the foundation upon which the entire system is built (Hammer, Huffman, and
Rosenberg, 1998). Because they convey design authority, late changes in requirements
may ripple through the system design and cause significant rework. In a 2004 report, the
Government Accountability Office concluded that missing, vague, or changing
requirements are a major cause of project failure (GAO, 2004). Furthermore, the 2010
Quadrennial Defense Review Report (QDR) concluded that the U.S. Department of
Defense (DoD) can no longer afford to pursue requirements that continue to change
throughout a program’s life cycle. The QDR also stated that the current system of
defining requirements encourages reliance on overly optimistic cost estimates (DoD,
2010). Consequently, understanding and anticipating the impact of volatile
requirements is crucial to the success of large-scale systems. Nevertheless, cost
estimating techniques frequently fail to account for the economic consequences of
requirements changes (Jones, 1994). The importance of managing changes in
requirements is highlighted in the International Council on Systems Engineering
(INCOSE) handbook (2010):
System requirements are the foundation of the system definition and form the basis
for the architectural design, integration, and verification. Each requirement carries a
cost. It is, therefore, essential that a complete but minimum consistent set of
requirements be established from defined stakeholder requirements early in the
project life cycle. Changes in requirements later in the development cycle can have a
significant cost impact on the project, possibly resulting in cancellation. (p. 70)
The addition, deletion, and modification of requirements over the system life
cycle are collectively referred to as requirements volatility (MIL-STD 498, 1994).
Other terms used in connection with requirements volatility are requirements creep and
requirements churn. The former refers to an increase in the number of requirements,
while the latter indicates frequent changes and instability in the requirements set.
Trends in the growth and modifications of requirements may be used to evaluate the
completeness and correctness of the system objectives and definition. These trends are
considered systems engineering “leading indicators” which provide insights into the
effectiveness of project activities that are likely to affect the system performance
objectives (Rhodes, Valerdi, and Roedler, 2009).
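Derived measures of this kind are typically ratios of change counts to the baselined requirement count. As a hedged illustration (the exact formula, change categories, and reporting period vary by organization; this is not a formula taken from the dissertation), such a measure might be computed as:

```python
# Sketch of a simple requirements-volatility measure: total changes
# relative to the baselined requirement count. The formula and the
# reporting period are illustrative; organizations define these
# derived measures differently.

def requirements_volatility(added, modified, deleted, baseline_total):
    """Volatility as (added + modified + deleted) / baseline size."""
    return (added + modified + deleted) / baseline_total

# Example: a 200-requirement baseline sees 12 additions, 25 modifications,
# and 3 deletions over a reporting period.
v = requirements_volatility(added=12, modified=25, deleted=3, baseline_total=200)
print(f"Requirements volatility: {v:.0%}")
```

Tracked per life-cycle phase, such a ratio gives the kind of trend data that the leading-indicator work cited above relies on.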
Requirements changes can be costly, particularly in the later stages of the system
life cycle, because the change may require rework of the design, verification strategies,
and deployment plans (Kotonya and Sommerville, 1995). In a study of the impact of
requirements uncertainty on project risk and performance, Nidumolu (1996) stated that
“proper management of the requirements can have the single biggest impact on project
performance, and frequent changes create major problems.” According to a Government
Accountability Office (GAO, 2004) report, 37% of the critical problem reports of the
F/A-22 fighter jet avionics system were caused by changes in requirements and design.
Similarly, requirements growth has delayed the completion of the Advanced Automation
System (AAS) by several years. The program was commissioned by the U.S Federal
Aviation Administration (FAA) to modernize the Air Traffic Control System. However,
due to the instability of requirements and associated schedule extensions, AAS has been
declared to be a high-risk program by the U.S. General Accounting Office (Kulk and
Verhoef, 2008). Likewise, the Comanche helicopter program reported numerous
software development problems that were attributed to inadequate requirements analysis
and requirements volatility (GAO, 2004).
The GAO’s conclusions are supported by the Tri-Service Assessment Initiative,
an independent group commissioned by the DoD to evaluate the performance of
software-intensive systems across Army, Air Force, and Navy programs, which found
that requirements development and management shortfalls were primary causes of project
performance issues (Charette, McGarry, and Baldwin, 2003). Specifically, they
concluded that requirements management problems result in poor product quality,
product rework, and progress shortfalls. Furthermore, the cost of repairing specification
requirements errors grows significantly if the errors are not caught in the requirements
phase of a project (Davis et al., 1993; Leffingwell, 1997). These findings suggest that the
effort associated with changing requirements should be taken into account when
estimating the cost of engineering projects (Houston, 2000).
1.2 Requirements Volatility and Cost Estimation
Since requirements changes are an expected and potentially costly occurrence in
the development of a system, their impact should be anticipated and accounted for in the
project plan. However, project managers and systems engineers lack adequate methods
and tools to account for the effects of volatile requirements. Furthermore, cost
estimating techniques often fail to account for the additional effort caused by unstable
requirements (Jones, 1994). Reifer (2000) suggests that project managers can meet their
budget and schedule targets by maintaining and allocating financial reserves aimed at
addressing the potential impact of volatile requirements. This means that improved
estimates of the cost associated with requirements changes could be used as a basis for
requesting additional budget (Boehm et al., 2000).
Despite the logical connection between higher levels of requirements volatility
and the cost of a system, there is no quantifiable evidence that supports the specific
impacts on systems engineering effort. In recent years, cost estimation of systems
engineering effort has improved significantly due to the development of the Constructive
Systems Engineering Cost Model (COSYSMO) at the University of Southern California
Center for Systems and Software Engineering. COSYSMO is an industry-validated
parametric model developed for the purpose of estimating systems engineering effort in
large-scale systems. The model established a set of Cost Estimating Relationships
(CERs) that relate the functional size of a system, along with other technical and
organizational factors, to systems engineering effort (Valerdi, 2005). COSYSMO built
on experience from the Constructive Cost Model (COCOMO), a widely-accepted
software parametric cost model that was also developed at the USC-CSSE (Boehm,
1981; Boehm et al., 2000). During the development of COSYSMO, requirements
volatility was identified as a relevant factor that may contribute to a significant increase
in the functional size of the system and, consequently, systems engineering effort.
However, due to lack of sufficient data, volatility effects were not incorporated in the
initial version of the model (Valerdi, 2005).
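COSYSMO's general structure, effort as a power function of functional size scaled by multiplicative cost drivers, can be sketched as follows. The constants and driver values below are placeholders chosen for illustration only; they are not the model's published calibration:

```python
# Schematic of a COSYSMO-style cost estimating relationship (CER):
#   Effort = A * Size^E * (product of effort multipliers)
# where E > 1 captures diseconomies of scale. The constants A and E and
# the example multiplier values are placeholders, not COSYSMO's
# published calibration.
from math import prod

def se_effort_hours(size, A=40.0, E=1.1, multipliers=()):
    """Systems engineering effort for a given equivalent-requirements size."""
    return A * size ** E * prod(multipliers)  # prod(()) == 1

# Example: 100 equivalent requirements, two drivers rated above nominal.
print(f"Estimated SE effort: {se_effort_hours(100, multipliers=(1.13, 1.05)):.0f} hours")
```

A volatility extension of the kind this dissertation develops would enter such a structure as an additional size adjustment or scale factor.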
The lack of quantifiable evidence that supports the specific impact of
requirements volatility on systems engineering effort persists. Most of the studies of the
consequences of changing requirements conducted to date have focused on software
development projects and they have not specifically addressed systems engineering
effort (Zowghi and Nurmuliani, 2002; Kulk and Verhoef, 2008; Ferreira et al., 2009).
These earlier studies have utilized various qualitative and quantitative research methods
to investigate the causes and effects of requirements volatility on software projects.
These methods include surveys, interviews, computer models, and simulations. Most of
these studies concluded that requirements volatility caused growth in the functional size
of the project and increased the amount of rework, which in turn led to increases in
engineering effort (Houston, 2000; Zowghi and Nurmuliani, 2002; Ferreira et al., 2009).
Overall, requirements instability was found to be a chronic problem in software
development that affects project schedule and cost (Kulk and Verhoef, 2008).
While prior research has provided valuable insights into the factors and
methodologies needed to understand the impact of requirements volatility, additional
studies are needed to cover a broader base of systems and evaluate specific impacts to
systems engineering effort. The work described herein fills this gap in the research and
is centered on the following research question:
How much systems engineering effort, measured in terms of labor hours, should
be allocated to account for the impact of changing requirements during the
conceptualize, development, operational test and evaluation, and transition to operation
phases of large-scale systems?
This dissertation aimed to shed light on this question by investigating the causes
and effects of requirements volatility in large-scale engineering systems using a mixed
methods approach that utilized both field research and quasi-experimental research
methodologies. The focus of the study was to improve the ability to predict the impact
of requirements volatility on systems engineering effort. To this end, an extension to
COSYSMO was developed, within the structure and scope of the model, which
estimates the additional effort due to requirements changes. Since the research focused
on the project factors that affect systems engineering effort specifically, prescriptions for
controlling total project cost are outside of this study’s scope.
1.3 Proposition and Hypotheses
After having discussed the importance of understanding the effects of
requirements volatility on project performance and systems engineering effort, the
central hypothesis of this research is proposed:
There exists a subset of systems engineering activities from a defined application
domain for which it is possible to create a parametric model that will estimate the
amount of systems engineering effort throughout specific life cycle phases (a) for a
specific system of interest that experiences requirements volatility (b) with improved
statistical accuracy over COSYSMO.
This statement defines the objective of the research and establishes its
boundaries. The system of interest will be defined at a specific hierarchical level in
accordance with the framework established by COSYSMO. For example, the system of
interest could be a constellation of satellites or a subsystem within one of the satellites.
Similarly, the life cycle phase under study will be specified within the scope of
COSYSMO. The term systems engineering effort excludes project labor that is outside of
the activities outlined by the systems engineering standard ANSI/EIA 632. For the
purposes of this research, requirements volatility is defined as changes in requirements
(additions, modifications, or deletions) that occur after the requirements baseline has
been agreed upon by the stakeholders. The statement in part (b) explicitly expresses the
goal to improve the cost estimating accuracy of COSYSMO. In most cases, COSYSMO
is capable of estimating systems engineering effort within 30% of the actuals, 50% of the
time (Valerdi, 2005; Fortune, 2009).
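The "within 30% of the actuals, 50% of the time" criterion is the PRED(30) measure; together with MMRE it is the conventional yardstick for estimation accuracy. The following sketch shows how both are computed (the effort figures are invented for illustration and are not data from this research):

```python
# Conventional definitions of MMRE and PRED(L) for effort-estimation
# accuracy. The effort figures below are invented for illustration;
# they are not data from this research.

def mmre(actuals, estimates):
    """Mean Magnitude of Relative Error: mean of |actual - estimate| / actual."""
    return sum(abs(a - e) / a for a, e in zip(actuals, estimates)) / len(actuals)

def pred(actuals, estimates, level=0.30):
    """PRED(L): fraction of estimates within level L of the actual value."""
    hits = sum(1 for a, e in zip(actuals, estimates) if abs(a - e) / a <= level)
    return hits / len(actuals)

actual_hours   = [1000, 2500, 400, 8000, 1200]
estimate_hours = [1100, 2300, 520, 7600, 1150]

print(f"MMRE     = {mmre(actual_hours, estimate_hours):.2f}")
print(f"PRED(30) = {pred(actual_hours, estimate_hours, 0.30):.2f}")
print(f"PRED(20) = {pred(actual_hours, estimate_hours, 0.20):.2f}")
```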
A set of sub-hypotheses was developed to facilitate the validation of the central
hypothesis. They are described below:
Null hypothesis:
H01: The volatility of requirements throughout a system’s life cycle is not a
statistically significant factor in the accurate estimation of systems engineering effort.
Alternative hypothesis:
HA: The volatility of requirements throughout a system’s life cycle is a
statistically significant factor in the accurate estimation of systems engineering effort.
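The machinery behind this pair of hypotheses is a standard significance test on the volatility term: fit the relationship, compute a test statistic, and compare it against a critical value. The sketch below uses synthetic data purely to illustrate the mechanics, not the dissertation's dataset or result:

```python
# Illustrative slope significance test using only the standard library.
# The data are synthetic. If |t| exceeds the two-sided critical value,
# the null hypothesis of no relationship is rejected.
import math

volatility   = [0.05, 0.10, 0.15, 0.20, 0.30, 0.40, 0.50, 0.60]
extra_effort = [120, 260, 300, 450, 610, 820, 1010, 1150]  # labor hours

n = len(volatility)
mx = sum(volatility) / n
my = sum(extra_effort) / n
sxy = sum((x - mx) * (y - my) for x, y in zip(volatility, extra_effort))
sxx = sum((x - mx) ** 2 for x in volatility)
syy = sum((y - my) ** 2 for y in extra_effort)
r = sxy / math.sqrt(sxx * syy)           # Pearson correlation
t = r * math.sqrt((n - 2) / (1 - r * r))  # t statistic for the slope

T_CRIT = 2.447  # two-sided critical t, alpha = 0.05, df = n - 2 = 6
print(f"r = {r:.3f}, t = {t:.1f}")
print("reject H01" if abs(t) > T_CRIT else "fail to reject H01")
```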
The following chapters provide background in systems engineering and the
requirements development process, and a summary of relevant research on the causes and
effects of requirements volatility. Subsequently, the research methodology, data
collection methods, analysis techniques, and results that led to the development of the
COSYSMO extension and validation of the hypotheses are described in detail.
Chapter 2: Background
This chapter provides an overview of systems engineering, requirements
definition and management processes, and a summary of the state of the practice in
systems engineering metrics as they relate to requirements trends and volatility. Chapter
3 summarizes the methodologies and key findings of previous studies on the causes and
impacts of requirements volatility and discusses their implications to this research. In
addition, the Constructive Systems Engineering Cost Model (COSYSMO) and its
predecessor, the Constructive Cost Model (COCOMO), are described and their handling
of requirements volatility as a cost factor is explored. Figure 1 depicts the major
sections that constitute the literature review.
[Figure 1: Literature Map. Chapter 2 covers Systems Engineering, Requirements Development and Management, and Requirements Metrics; Chapter 3 covers Requirements Volatility Cause and Effect Research and Parametric Cost Estimation (COSYSMO and COCOMO); both feed into the Impact of Requirements Volatility on Systems Engineering Effort.]
2.1 System Requirements Development and Management
2.1.1 Systems Engineering Definition
Systems engineering emerged as a formal discipline in the late twentieth century
in order to define and manage engineering endeavors of increasing complexity. Its
methodologies make it possible to decompose large engineering projects into
successively smaller elements, which are then designed, built, and aggregated into the
complete system (Sage, 1992). A system is defined by INCOSE (2010) as a
combination of interacting elements organized to achieve a stated objective, while
systems engineering is defined as:
An interdisciplinary approach and means to enable the realization of successful
systems. It focuses on defining customer needs and required functionality early in
the development cycle, documenting requirements, and then proceeding with
design synthesis and system validation while considering the complete problem:
operations, cost and schedule, performance, training and support, test,
manufacturing, and disposal. SE considers both the business and the technical
needs of all customers with the goal of providing a quality product that meets the
user needs. (p. 6)
2.1.2 Requirements Definition
Requirements have been described as the foundation upon which the entire
system is built (Hammer et al., 1998). Requirements are typically categorized as: 1-)
Customer or operational requirements, which define the expectations of the system in
terms of operational scenarios and environments, mission objectives, and measures of
effectiveness; 2-) functional requirements, which specify what has to be done in terms
of tasks or activities; 3-) performance requirements, which define how well or to what
extent the activities must be accomplished; 4-) design requirements, which dictate how
to build products and what manufacturing processes to follow; 5-) derived requirements,
which are spawned from higher-level requirements; and 6-) allocated requirements,
where high level requirements are partitioned into multiple lower-level requirements
(DoD, 2001). There are many definitions of requirements in the literature, some of
which are captured in Table 1.
“Requirements are descriptions of how the system should behave, application
domain information, constraints on the system operation, or specifications of a
system property or attribute.” (Kotonya and Sommerville, 1998)

“Requirements are intended to change vague desires into explicit and
unambiguous statements of what the customers want.” (Weinberg, 1983)

“A requirement is a capability that a system must supply or a quality that a
system must possess in order to solve a problem or achieve an objective within
the system’s conceptual domain.” (Costello and Liu, 1995)

“A requirement is:
(1) A condition or capability needed by a user to solve a problem or achieve an
objective
(2) A condition or capability that must be met or possessed by a system or
system component to satisfy a contract, standard, specification, or other
formally imposed documents.” (IEEE, 1990)

Table 1: Requirements Definition
2.1.3 Requirements Development and Management
Requirements management is defined in the Systems Engineering Capability
Model (EIA, 2002) as a Technical Problem Focus Area (FA). There are five themes
within this Focus Area, each of which influences the generation and evolution of
requirements: problem refinement; requirements analysis; requirements quality;
requirements evolution and maintenance; and feedback and verification. In addition,
ISO/IEC 15288:2008 addresses requirements in the following systems engineering
technical processes: Stakeholder Requirements Definition and Requirements Analysis
(INCOSE, 2010). In some cases, these activities are collectively referred to as
requirements engineering, which emphasizes the need to use systematic and
repeatable techniques to identify, document, develop, and maintain a complete and
consistent requirement set (Kotonya and Sommerville, 1998).
During the problem refinement stage, the customer’s needs and objectives are
developed into operational concepts to satisfy them. In order to adequately specify the
system, all stakeholders and their needs are identified; the constraints and environments
under which the system must operate are documented, and major deliverables are
identified. Using this information, system functional and performance requirements are
developed which define specifically what the system must do and under what
environments and constraints it must perform. Analyses and trades are performed to
ensure the requirements are clear, verifiable, and feasible. This process is iterative and
requires the involvement of the system stakeholders in key trades and decisions (DoD,
2001).
Once the baseline system requirements and system architecture are established,
they are decomposed through successively lower levels. System level requirements are
allocated to functional partitions, objects, people, or support elements (EIA, 2002). High
level functions are decomposed and performance requirements are allocated to lower
level functions. In addition, the decomposition determines the relationship between
functions such as sequence (concurrent or sequential) and whether some functions are
alternative paths. Trade studies are conducted to evaluate trade-offs between
decomposition/allocation alternatives to physical elements in order to arrive at a physical
architecture.
The physical and functional requirements and characteristics of the system are
then formally documented and placed under configuration control. Modifications to the
baseline set of engineering work products, including specifications, are requested through
Engineering Change Proposals (ECP). Typically, a formal management approval
structure is established in order to prioritize the change proposals, evaluate their impact,
and make decisions regarding their approval and implementation. The specific systems
engineering practices associated with requirements changes and maintenance included in
EIA-731.1 are: 1-) Document changes to Requirements; 2a-) establish a requirements
management process in order to proactively control changes, and 2b-) consider the
impact to all stakeholders when a requirement changes. In addition to Engineering
Change Proposals, other change documents used to control the requirements baseline are
Requests for Deviation and Requests for Waivers. These documents propose a departure
from the baseline by allowing the acceptance of products that are non-conformant or do
not meet the requirements as stated (DoD, 2001).
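The change-control flow described above can be sketched as a minimal data model. This is an illustrative sketch only; the class, field, and status names below are my own assumptions, not terminology drawn from EIA-731.1 or DoD guidance:

```python
from dataclasses import dataclass
from enum import Enum

class ChangeStatus(Enum):
    SUBMITTED = "submitted"
    EVALUATED = "evaluated"
    APPROVED = "approved"
    REJECTED = "rejected"

@dataclass
class EngineeringChangeProposal:
    """Minimal record for a proposed change to the requirements baseline."""
    ecp_id: str
    affected_requirements: list
    priority: int                        # assigned when the board prioritizes proposals
    estimated_impact_hours: float = 0.0
    status: ChangeStatus = ChangeStatus.SUBMITTED

    def evaluate(self, impact_hours: float) -> None:
        # The impact to cost and schedule is assessed before any decision
        self.estimated_impact_hours = impact_hours
        self.status = ChangeStatus.EVALUATED

    def decide(self, approve: bool) -> None:
        # A decision is only valid after the impact evaluation step
        assert self.status is ChangeStatus.EVALUATED, "evaluate impact first"
        self.status = ChangeStatus.APPROVED if approve else ChangeStatus.REJECTED
```

The ordering enforced by `decide` mirrors the process in the text: proposals are prioritized and their impact evaluated before approval and implementation.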
2.1.4 Requirements Management and the Acquisition Process
This section will provide an example of the requirements process in context with
the Defense Acquisition Management System (DAMS), as outlined in DoD Instruction
5000.02 (2008), which describes a management framework for translating capability
needs into stable and affordable acquisition programs of large-scale systems. Figure 2
depicts the life cycle phases of the DAMS along with associated systems engineering
activities such as system definition, technical reviews, and specification release.
Figure 2: Systems Engineering Reviews and Acquisition Life Cycle Phases
Similarly to the system requirements development process, the Defense
Acquisition Management System starts with the identification of a mission capability
need. Different materiel solutions intended to satisfy the capability need are analyzed
and evaluated for technology maturity and risk. The proposed solution and a draft of the
technology development strategy must be reviewed and approved at Milestone “A”
before the project is allowed to pass to the formal technology development phase (DoD,
2008).
[Figure 2 shows the acquisition phases (Materiel Solution Analysis; Technology
Development; Engineering and Manufacturing Development; Production &
Deployment; Operations & Support), Milestones A, B, and C, the technical reviews
(ASR, SRR, SFR, PDR, CDR, FCA, PCA), and the release of the system performance,
item performance, and item detail specifications across the life cycle.]
Different design concepts are explored during the technology development phase.
It is an iterative process intended to refine user requirements while enabling technologies
are evaluated for feasibility and risk of implementation. At this stage, the Technology
Development Strategy includes a rationale for adopting an acquisition approach. A
single-step-to-full capability strategy may be selected if the system involves mature
technologies and the requirements can be fully specified up-front (such as commercial off
the shelf items). An evolutionary approach is typically preferred because user capability
needs and system requirements tend to evolve as knowledge of the system increases
(Boehm and Lane, 2010).
The evolutionary acquisition process acknowledges that the full scope of user
needs may not be known at the beginning of the program. Consequently, the needed
operational capabilities are developed over several incremental phases. These increments
are preceded by a technology development phase and result in a useful operational
capability that can be deployed and sustained. A successful evolutionary approach
recognizes the need for a phased definition of system requirements and evidence that
early architectural decisions will support evolution requirements in subsequent
increments (Boehm and Lane, 2010). Specifying evolution requirements allows the
developer to design a system that is more readily adaptable to change (Boehm, 2000).
The evolutionary acquisition process is depicted in Figure 3.
Figure 3: Requirements and Evolutionary Acquisition Process (reproduced from DoD
Instruction 5000.02)
At Milestone B, the proposed acquisition approach, design and operational
concepts, and technology readiness are reviewed before authorizing the program to
proceed with system acquisition (DoD, 2008). As the program transitions to the
Engineering and Manufacturing Development (EMD) phase, a System Requirements
Review (SRR) is usually held to ensure that the user’s requirements have been
understood and appropriately translated into system requirements. The draft system
specification, concept of operations, functional analysis, and initial design documentation
are evaluated to confirm the contractor is ready to establish a functional baseline. After
completion of the SRR, the system functions are decomposed and allocated to
subsystems and then to progressive lower levels. Trade studies, simulations and
prototype tests are performed to arrive at the released system performance specification
and drafts of the subsystem and control item performance specifications. At this stage,
the system requirements are placed under configuration control as described in the
previous section (DoD, 2001). The system and detailed design are then evaluated across
increasing maturity levels through the System Functional Review (SFR), Preliminary
Design Review (PDR) and Critical Design Review (CDR). Production Readiness is
evaluated post-CDR at Milestone “C” before the program is allowed to proceed to the
Production and Deployment phase.
2.2 Requirements Trends and Volatility Metrics
After having reviewed the requirements development and management process,
the use of requirements metrics and the concept of requirements volatility are explored
further in the following sections.
2.2.1 Requirements Volatility Definitions
Requirements volatility is defined as changes in requirements over a given time
interval during the system’s life cycle. These changes may include additions,
modifications, or deletions (Costello and Liu, 1995). The following terminology is found
in the literature to describe the types of requirements changes. Requirements creep,
also known as scope creep, describes the situation in which the number and scope of
project requirements increase after the initial set of requirements has been baselined;
the term also refers to the failure to anticipate and account for the potential of changing
requirements. Requirements scrap refers to the decrease in the total number of
requirements as individual requirements deemed to be unnecessary are deleted as the
project progresses. Requirements churn indicates instability in the requirements set:
the number of requirements frequently increases or decreases (Jones, 1994; Ferreira,
2002). Even if the size of the project is in the end the same as it was during the
requirements baseline, the frequent change in requirements would have a similar impact
as that of scope creep or requirements scrap (Kulk and Verhoef, 2008).
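To make the creep/scrap/churn distinction concrete, a series of total-requirements counts tracked per baseline interval can be labeled heuristically. The function and its churn threshold below are hypothetical illustrations, not metrics from the cited literature:

```python
def classify_volatility(totals, churn_threshold=2):
    """Label a series of total-requirements counts, one per interval.

    Hypothetical rule of thumb: 'churn' if the direction of change flips
    at least churn_threshold times; otherwise 'creep' for net growth,
    'scrap' for net shrinkage, and 'stable' for no net change.
    """
    # Interval-to-interval changes in the total requirement count
    deltas = [b - a for a, b in zip(totals, totals[1:])]
    # A sign flip between consecutive deltas indicates a direction reversal
    flips = sum(1 for a, b in zip(deltas, deltas[1:]) if a * b < 0)
    if flips >= churn_threshold:
        return "churn"
    net = totals[-1] - totals[0]
    if net > 0:
        return "creep"
    if net < 0:
        return "scrap"
    return "stable"
```

For example, counts of [100, 120, 110, 130, 115] reverse direction three times and are labeled "churn" even though the net growth is modest, matching the observation that churn can impact a project even when final size equals the baseline.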
2.2.2 Requirements Metrics
Metrics associated with requirements, when collected throughout the project life
cycle, can be used in the early detection and mitigation of problems that may impact the
product’s performance, cost, or schedule (Costello and Liu, 1995). As described by the
Systems Engineering Leading Indicators Guide (Roedler and Rhodes, 2007),
requirements trends are used to evaluate the growth, change, completeness and
correctness of system requirements.
The Systems Engineering Leading Indicators guide was developed by the Lean
Advancement Initiative (LAI) Consortium, INCOSE, Practical Software and Systems
Measurement (PSM), and the Systems Engineering Advancement Research Initiative
(SEARI). Leading indicators are distinct from conventional systems engineering
measures because they not only provide status and historical data, but also trends and
interactions that can be used by decision makers to proactively manage a project and
make course corrections if necessary (Roedler and Rhodes, 2007). Leading Indicators
are defined as “measures for evaluating the effectiveness of the systems engineering
activities on a program in a manner that provides information about impacts that are
likely to affect the system or program performance objectives” (Rhodes et al., 2009,
p.21).
Thirteen leading indicators were developed through this collaborative effort
based on inputs from members of the LAI working groups and other industry experts.
One of these leading indicators is Requirements Trends, which “can help determine the
stability and completeness of the system requirements which could potentially impact
design and production.” The requirements trends measurement methods and base
measures defined in the Systems Engineering Leading Indicators Guide are listed in
Table 2 (Roedler and Rhodes, 2007).
Base Measures                        Measurement Methods                   Units
# of Requirements                    Count the # of requirements           Requirements
# of Requirements defects            Count the # of defects per category   Defects
# of Requirements changes            Count the # of changes per category   Changes
Impact of each requirement change    Estimate effort hours per change      Effort hours
Start/complete times of change       Record from actual dates/times        Date and Time

Table 2: Requirements Trends Base Measure Specifications
In addition, requirements derived measures are included to provide metrics for
requirements volatility in terms of the number of requirements added, deleted, or
modified as a percentage of the total number of requirements in the current baseline.
Furthermore, the Leading Indicators Guide recommends estimating the cumulative
impact of the requirements changes to total project effort over a given time interval. The
derived measures associated with requirements volatility and their measurement
methods are listed in Table 3 (Roedler and Rhodes, 2007). For the purposes of this
research, the measurement of the impact of requirements changes will be modified to
include systems engineering effort only as opposed to total project effort.
Derived Measures                        Measurement Methods

% Requirements Growth                   ((# of requirements in current baseline − # of
                                        requirements in previous baseline) / (# of
                                        requirements in previous baseline)) × 100

% Requirements Modified                 (# of requirements modified / total # of
                                        requirements) × 100, as a function of time

Estimated Impact of Requirements        Sum of estimated impacts for changes during
Changes for time interval               the defined time interval
(in effort hours)

Table 3: Requirements Volatility Derived Measures
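The derived measures in Table 3 translate directly into arithmetic. A minimal sketch follows; the function names are mine, not terminology from the Leading Indicators Guide:

```python
def pct_requirements_growth(current_total: int, previous_total: int) -> float:
    """% Requirements Growth between two successive baselines."""
    return (current_total - previous_total) / previous_total * 100

def pct_requirements_modified(modified: int, total: int) -> float:
    """% Requirements Modified relative to the current baseline."""
    return modified / total * 100

def estimated_impact_hours(change_impacts: list) -> float:
    """Cumulative estimated impact (effort hours) of changes over an interval."""
    return sum(change_impacts)
```

As a usage example, a baseline growing from 200 to 220 requirements gives 10% growth, and 30 modified requirements out of 200 gives 15% modified.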
Graphical depictions of requirements trends through line graphs or bar charts are
also recommended. A notional example of such a graph is shown in Figure 4 – the
expected level of requirements volatility is depicted as the number of requirements
changed as a percentage of the total number of requirements over time. The actual level
of volatility over the same time interval is also depicted to aid in determining whether
the system is evolving as expected (Roedler and Rhodes, 2007).
Another view of requirements volatility over time is captured in Figure 5, where
the type of requirements changes are divided into categories (new, modified, and
deleted) and depicted in a bar chart (Hammer et al., 1998). The appropriate thresholds
or expected level of requirements volatility may change depending on the system type
and the life cycle phase under evaluation. As previously mentioned, requirements are
fluid in the concept and technology development phase, but they are expected to
stabilize in the development and operational phase of the system.
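The comparison depicted in Figure 4 amounts to checking each period's actual volatility against the expected level for that period. A minimal sketch, with the function name and example values made up for illustration:

```python
def flag_excess_volatility(actual_pct, expected_pct):
    """Return the indices of periods where actual requirements volatility
    exceeds the expected level for that period (both expressed as % of
    the total number of requirements)."""
    return [i for i, (a, e) in enumerate(zip(actual_pct, expected_pct))
            if a > e]
```

For instance, with actual volatility of [5, 12, 40] percent against a flat 10% expectation per period, the second and third periods would be flagged for management attention.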
Figure 4: Graphical Representation of Requirements Volatility Trends
Figure 5: Types of Requirements Changes over Time
Building upon the systems engineering and requirements management
background, the following Chapter provides a summary and review of prior research on
the causes of requirements volatility and its impact on project performance.
Chapter 3: Related Research
3.1 Requirements Volatility Research
In recent years, several research projects have explored the causes and impacts of
requirements volatility on an engineering project. However, most of these studies have
focused on software development projects and none have undertaken a quantitative
investigation of the impacts of requirements volatility on systems engineering effort. The
methods employed to conduct these research projects have ranged from case studies and
interviews to computer simulations. A small number of studies have utilized surveys to
aid in the development of simulation models to analyze the impact of requirements
volatility on project performance under different scenarios (Houston, 2000; Ferreira,
2002). The most relevant studies of the causes and effects of requirements volatility in
engineering projects are captured in Table 4.
Research Method                             Reference

Parametric cost estimation                  Boehm (1981); Boehm and Royce (1989);
                                            Boehm et al. (2000)

Survey of S/W development                   Jones (1994, 1998); Nidumolu (1996);
organizations                               Zowghi and Nurmuliani (2002)

Data analysis of S/W project portfolio      Finnie et al. (1993); Malaiya and Denton
                                            (1998); Stark et al. (1999)

Interviews and case studies of S/W          Zowghi and Nurmuliani (1998); Javed et al.
development organizations                   (2004); Loconsole and Börstler (2005);
                                            Kulk and Verhoef (2008)

Simulation model of S/W development         Smith et al. (1993); Lin et al. (1997);
                                            Pfahl and Lebsanft (2000); Madachy et al.
                                            (2000); Thakurta and Dasgupta (2011)

Survey and simulation model of S/W          Houston (2000); Ferreira et al. (2009)
development

Table 4: Summary of Cause and Effect Research of Requirements Volatility
3.1.1 Causes of Requirements Volatility
Changes in requirements on large-scale engineering projects may be driven
by a number of factors that are external or internal to the developer’s organization.
External factors include changes in customer priorities and architectures, shifts in the
political and business environment, the addition or change of stakeholders, and the
development of new technologies (Kotonya and Sommerville, 1998; Ferreira et al., 2009).
As described by Reifer (2000), the customer may not have the ability to fully specify the
system requirements at the beginning of the project because new requirements may
emerge as knowledge of the system evolves. In addition, a change in the marketplace
may obviate the need for a product feature that was initially desired (Kotonya and
Sommerville, 1998). An economic downturn may result in a reduction in the budget
available to government and commercial customers; driving decisions to reduce
functionality in order to meet revised cost targets. Furthermore, changes in technology
during the product development phase could make new features or interfaces more
desirable, resulting in the addition or modification of requirements. Conversely, new
technologies may render some of the original product features obsolete, prompting the
scrap of requirements associated with the outdated technology (Kulk and Verhoef, 2008).
The application of immature technology by the system developer could also be
considered an internal factor that makes a project more susceptible to requirements
volatility (Jones, 1994; GAO, 2004). Other internal factors that drive requirements
volatility include: deficient requirements development processes, lack of experienced
systems engineering resources applied to requirements analysis, poor initial
understanding or interpretation of customer needs by the development team, and changes
in organizational structure and policies (Kotonya and Sommerville, 1998; Pfahl and
Lebsanft, 2000; Zowghi and Nurmuliani, 2002; Ferreira, 2002).
In a 2004 report on the Department of Defense’s (DoD) weapons systems
acquisitions, the General Accounting Office (GAO, 2004) concluded that a disciplined
requirements management process, coupled with the use of metrics and periodic reviews
of the project’s technology maturity, generally resulted in positive project outcomes. In
addition, the GAO found that a stable and well-defined requirements baseline was an
essential factor in the reduction of defects and the fulfillment of the system’s desired
functionality. Companies that were considered to have a successful requirements
management process, as evidenced by their process maturity and ability to meet cost and
schedule targets, reported that approximately 98% of their project’s requirements are set
by the end of the design phase. While representatives from these companies
acknowledged that some requirements changes are inevitable, they were able to
minimize volatility after the initial technical baseline through careful evaluation,
documentation, and validation of the requirement set. If a change to a requirement was
proposed, its impact to project cost and schedule would be carefully examined before
approving it. Conversely, programs with significant cost and schedule overruns, such as
the F/A 22 Fighter Jet and the Comanche Helicopter, were found to have experienced
significant requirements growth. These programs lacked strong requirements
management processes and the application of consistent metrics and periodic technical
reviews (GAO, 2004).
Further evidence of the importance of requirements definition and management is
provided by a series of systems engineering case studies conducted by the Center for
Systems Engineering at the Air Force Institute of Technology. The case studies utilized
the Friedman-Sage framework to evaluate the application of systems engineering
principles in several large aerospace programs (Friedman and Sage, 2004). It was found
that customer requirements that are ill-conceived and difficult to achieve lead to technical
issues during the system development. For example, the case study of the development of
the F-111 tactical fighter bomber describes the difficulties in meeting disparate
multi-role/multi-service requirements intended to satisfy Air Force and Navy stakeholders. In
this case, poor requirements development resulted in costly delays and design problems
throughout the development of the aircraft (Richey, 2005).
In addition to poor requirements definition, the dynamics and characteristics of a
project may lead to requirements instability. Some research studies have found that
excessive schedule pressure contributes to a reduction in process rigor and morale
leading to requirements errors (Houston, 2000; Ferreira et al., 2009). These factors may
result in errors, omissions and conflicts in the baseline set of requirements (Kotonya and
Sommerville, 1998; Jones, 1994). In addition, researchers have postulated that there is a
relationship between the functional size of the project and requirements volatility.
However, the results of these studies have not been conclusive. In some cases, large
projects exhibited a higher level of volatility than smaller projects; while in other cases,
project size and volatility had no significant correlation (Zowghi and Nurmuliani, 2002;
Loconsole and Börstler, 2005; Kulk and Verhoef, 2008).
3.1.2 Effects of Requirements Volatility
Although changes in requirements in complex systems are a natural part of the
system evolution, project managers and systems engineers often fail to adequately
anticipate and manage the impact of requirements volatility on project performance
(Jones, 1994; Kotonya and Sommerville, 1998). Consequently, numerous governmental,
industry, and academic studies have concluded that frequent and late changes in
requirements increase project cost and cause schedule extensions (Stark et al., 1999;
Houston, 2000; GAO, 2004; Ferreira et al., 2009; DoD, 2010). Observations from the
literature also indicate that requirements added late in the system life cycle carry an effort
penalty due to added rework and the collateral impact to other engineering products
(Blanchard and Fabrycky, 1998; Houston, 2000; Ferreira et al., 2009). The “ease of
change curve” captured by Blanchard and Fabrycky (1998) in their depiction of “Cost
Commitment on Projects” further illustrates this point. The added number of
requirements and additional rework in turn lead to an increase in effort, cost, and
schedule duration (Jones, 1996; Houston, 2000; Kulk and Verhoef, 2008; Ferreira et al.,
2009).
Zowghi and Nurmuliani (2002) utilized a cross-sectional survey of 450 software
development companies to investigate the impact of requirements volatility on software
project performance. They found that projects with higher levels of requirement
volatility exhibited poor performance as measured against cost and schedule targets.
Other project factors, such as the size of the project and the size of the
development organization, were tested as control variables, but they were found not to
have a significant effect on the results. A survey of software development firms was also
utilized by Nidumolu (1996) for the purposes of investigating the effects of requirements
uncertainty on software project performance. The term “requirements uncertainty”
includes the notion of requirements instability, the diversity in user requirements, and the
extent to which customer needs can be analyzed and translated into system requirements.
While the study concluded that uncertainty in the requirements set increases residual
project performance risk, no specific quantitative impacts in terms of engineering effort,
project cost or schedule were provided.
Other researchers have utilized historical analysis of project data in order to
reconstruct the relationship between changes in requirements and project performance
measures. As is the case in most requirements volatility research to date, these studies
focused on the evaluation of a portfolio of software projects. In some cases, a positive
correlation between the number of change requests and the total number of defects was
found (Javed et al., 2004). Similarly, Malaiya and Denton (1999) examined program data
and found that changes in requirements influence defect density. Furthermore, it was
concluded that there is an exponential temporal dependency between requirements
changes and their effect on defect density – changes later in the project have a much
greater impact on the number of defects than changes that occur in the early stages of
development.
The time-dependent impact of requirements changes was echoed by Kulk and
Verhoef (2008) in their study of three sub-portfolios of software projects of varying
degrees of risk. The compound interest rate formula was used as a basis for calculating
the effect of the rate of requirements change on the size of a software project. It was
argued that adding or changing requirements may trigger the creation of other
requirements or affect the lower level specifications, therefore compounding the impact
of the change. Furthermore, it was theorized that adding requirements later in the project
will tend to lower productivity to a greater degree than if the change had occurred earlier
in the project. The relationship between the monthly requirements volatility rate and the
project size (total number of requirements) was expressed as:
SizeAtEnd = SizeAtStart × (1 + r/100)^t [Eq. 3-1]

r = ((SizeAtEnd / SizeAtStart)^(1/t) − 1) × 100 [Eq. 3-2]

Where,
r = compound monthly volatility rate
t = length of the period of interest in months
SizeAtStart = initial size of the project in terms of the # of requirements
SizeAtEnd = size of the project (# of requirements) at the end of the period of
interest
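Equations 3-1 and 3-2 can be checked numerically. A direct transcription, with function names of my own choosing:

```python
def size_at_end(size_at_start: float, r: float, t: int) -> float:
    """Eq. 3-1: project size after t months at compound monthly
    volatility rate r (in percent)."""
    return size_at_start * (1 + r / 100) ** t

def monthly_volatility_rate(size_at_start: float, size_at_end_: float, t: int) -> float:
    """Eq. 3-2: compound monthly volatility rate (in percent) implied
    by the size change over t months."""
    return ((size_at_end_ / size_at_start) ** (1 / t) - 1) * 100
```

For example, 100 requirements growing at a 10% compound monthly rate reach 121 after two months, and inverting that growth with Eq. 3-2 recovers the 10% rate, illustrating the compounding effect the authors describe.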
Another study of historical project performance used regression analysis to model
the relationship between changes to the requirements baseline and additional engineering
effort, risk, and schedule duration. The engineering effort required to implement the
change was calculated by requirements type (performance, interface, operational, etc.).
Requirements volatility and schedule duration were measured as percentages of their
baseline values. For the 44 software projects under consideration, requirements volatility
was positively correlated to an increase in engineering effort, cost, and schedule
extension (Stark et al., 1999).
In addition to surveys and data analysis of case studies, simulation modeling has
also been used as a method for evaluating the impact of changing requirements on project
performance. Simulations model real system performance through mathematics and
logic relationships using variables that can be manipulated to predict performance under
different scenarios (Pritsker et al., 1997). They are a useful tool in engineering research
because experiments are difficult to conduct in real-life industrial settings. System
dynamics is a modeling methodology utilized in several requirements volatility studies.
Developed by Jay Forrester (1961), systems dynamics is a continuous simulation
approach that uses cause and effect relationships and feedback loops to model the
interaction between elements of complex systems. The dependent variables in systems
dynamics change continuously over time as opposed to a discrete simulation model
where the dependent variables change during specific events (Madachy, 2009). Pfahl and
Lebsanft (2000) used a system dynamics simulation to evaluate the impact of unstable
software requirements on project duration and engineering effort. One of the conclusions
from the study is that increasing the level of requirements engineering effort results in a
higher quality and more stable set of baseline requirements. They also found that
requirements instability was correlated to increases in project schedule duration. The
results of the model were based on qualitative data from process assessments conducted
at a Siemens Corporate Technology business unit.
Through the use of a survey and a system dynamics simulation model of software
development, Houston (2000) concluded that requirements creep results in increases in
job effort due to additional requirements and rework. The model was intended as a tool
in the management of software development risks through stochastic simulations of six
Software Development Risk Factors (SDRFs). The risk factors most commonly cited in
the literature were identified and down-selected to six based on results of a qualitative
survey and their potentially adverse effect on projects. Data on the potential effects of
the six factors were collected through a quantitative survey of software professionals.
One of the SDRFs incorporated into the simulation was creeping user requirements.
According to the survey results, requirements creep occurred in 60% of the projects
studied. The researcher defined the occurrence of a requirements creep problem as (1)
greater than 10% growth in project size due to requirements additions and changes, or (2)
rework of more than 10% of the work products (designs and code). The survey responses
also indicated that requirements additions and changes would result in an average
increase in product size and rework that, combined, typically add 20% to project effort.
These findings suggest that additional effort should be added when estimating a project in
order to account for the likely growth in project size and rework (Houston, 2000).
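Houston's creep criteria and the roughly 20% effort allowance can be expressed as a short sketch. The thresholds and the 20% figure come from the survey findings above; the function names and the packaging are illustrative.

```python
# Hedged sketch of the requirements-creep criteria summarized above
# (Houston, 2000). Thresholds follow the text; names are illustrative.

def creep_problem_occurred(size_growth_pct: float, rework_pct: float) -> bool:
    """True if project size grew more than 10% from requirements additions
    and changes, or more than 10% of work products were reworked."""
    return size_growth_pct > 10.0 or rework_pct > 10.0

def adjusted_effort(base_effort_pm: float, creep_allowance_pct: float = 20.0) -> float:
    """Reserve the ~20% effort margin that survey respondents indicated
    growth and rework typically add to a project."""
    return base_effort_pm * (1.0 + creep_allowance_pct / 100.0)

print(creep_problem_occurred(12.0, 5.0))  # True: size growth exceeds 10%
print(adjusted_effort(100.0))             # 120.0 person months
```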
In order to model requirements creep, Houston (2000) assumed a continuous flow
of requirements additions and changes that increases linearly until reaching a maximum
value after which it decreases linearly. Respondents were asked to estimate the timing of
the peak and the percent of requirements volatility (additions and changes) that occur
after the peak. It was found that a late peak in requirements changes results in a large
percentage of the changes occurring after the peak, suggesting that early management of
volatility is important in controlling changes throughout the life cycle of the project. The
system risk simulation was completed by establishing logical relationships between the
six risk factors, assigning a probability of occurrence to each one, propagating the total
system probability of success/failure using a fault-tree diagram, and performing a
sensitivity analysis by varying the model variables and measuring the impact on the
results.
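The assumed flow of requirements additions and changes can be sketched as a piecewise-linear (triangular) rate that rises to a peak and then declines, as described above. The parameter names and the example values are illustrative, not Houston's calibrated inputs.

```python
# Triangular requirements-creep profile assumed by Houston (2000): the rate
# of additions/changes rises linearly to a peak, then declines linearly to 0.

def creep_rate(t: float, t_peak: float, t_end: float, peak_rate: float) -> float:
    """Requirements additions/changes per unit time at project time t."""
    if t < 0.0 or t > t_end:
        return 0.0
    if t <= t_peak:
        return peak_rate * t / t_peak                  # linear rise to peak
    return peak_rate * (t_end - t) / (t_end - t_peak)  # linear decline

# Total volatility is the area under the triangle: 0.5 * peak_rate * t_end.
total_changes = 0.5 * 4.0 * 10.0
print(creep_rate(5.0, t_peak=5.0, t_end=10.0, peak_rate=4.0))  # 4.0 at the peak
print(total_changes)                                           # 20.0
```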
Two existing system dynamics simulations of software projects were utilized to
compose a base model that provided constructs addressing staffing, effort allocation,
productivity, project planning and control, workflow and quality management (Houston,
2000; Tvedt, 1996; Abdel-Hamid and Madnick, 1991). This base model was extended by
adding the stochastic risk simulation described above. The extended model was called
Software Project Actualized Risk Simulator (SPARS). As previously mentioned, the
potential effects of requirements creep were modeled as increases in job size and rework.
The increase in job size was influenced by random variables that included the timing of
peak requirements creep and the percent of requirements creep after the maximum. The
base model was run to simulate a software project without the use of the risk factors in
order to establish a baseline. The risk factors were then introduced to determine the
effects on the project. Requirements creep was found to be the most significant risk
factor modeled due to its high degree of unpredictability and large impact on project
outcomes (Houston, 2000).
Thakurta and Dasgupta (2011) utilized the Abdel-Hamid and Madnick system
dynamics model of software projects as a starting point in their study of the influence
of requirements change patterns on software project development. The parameters of the
model were calibrated using data from a software development project conducted at an
information technology organization. The simulation was run to evaluate the impact of a
change request generation rate on project performance in terms of workforce level, effort
expended, productivity, schedule completion date, and error generation. The results of
the simulation were compared to the actual project data. The researchers concluded that
the requirements volatility pattern contributed to schedule and effort overruns and an
increase in error generation.
Ferreira et al. (2009) explored the cause and effect relationships between
requirements volatility and software development factors through an extensive empirical
survey. The results of the survey were used to develop a more detailed system dynamics
model to analyze the stochastic effects of requirements volatility on the performance of
software projects. The model introduces relationships between requirements volatility
and project factors such as productivity, staff morale, and schedule pressure. Two cases
were run by the simulator for the purposes of comparison. The first case did not add
requirements volatility effects; the second case included the stochastic effects of
requirements volatility derived from the survey results. Each case was run 100 times.
The study concluded that projects with requirements volatility incur increases in
cost, rework, schedule duration, and number of defects as compared with projects with no
requirements volatility. Specifically, the case with volatility exhibited a 25% increase in
project size, over 50% increase in cost, 40% increase in schedule duration, and
approximately 19% increase in defects per function point as compared to the baseline
project (Ferreira et al., 2009).
The correlation between requirements volatility and increases in engineering
effort, cost, and schedule is a common conclusion across these research studies. The
volatility in requirements often results in a net increase in the functional size of the
project and additional rework, which in turn drive an increase in engineering effort.
There was less agreement, however, in the relationship between the initial size of the
project and the expected level of requirements volatility. Some studies found that large
projects exhibited a higher level of volatility than small projects, while others found no
significant correlation between these two variables.
The approach for modeling the impact of requirements volatility also differed in
prior studies. Some researchers simply counted the number of requirements that were
modified or added as an increase in the functional size of the project resulting in the
associated increase in effort, cost, and schedule. Other studies asserted that the impact of
requirements volatility has a temporal dependency that increases the impact of the change
the later it occurs in the life cycle.
An overarching observation that can be made from the review of the literature is
that all of the relevant studies of the causes and effects of requirements volatility focused
on software projects. While the prior research has advanced the state of the art in
understanding the effects of changing requirements on project performance, additional
study on the subject is required to cover a broader base of systems. The research
described herein utilized these previous studies as building blocks that have provided
valuable insight into the factors and methodologies needed to understand the impact of
volatility on systems engineering effort.
3.2 Parametric Cost Estimation Models
3.2.1 COSYSMO
The Constructive Systems Engineering Cost Model (COSYSMO) is an industry-
validated parametric model developed at the University of Southern California Center
for Systems and Software Engineering (USC-CSSE) for the purpose of estimating
systems engineering effort in large-scale systems. The model established a set of Cost
Estimating Relationships (CERs) that relate the functional size of a system, along with
other technical and organizational factors, to systems engineering effort (Valerdi, 2005).
Parametric techniques focus on system characteristics that have a predominant effect on
system cost, which are also known as cost drivers (NASA, 2002). The drivers of systems
engineering effort in COSYSMO are divided into four size drivers and fourteen cost
drivers, which are shown in table 5. The size drivers represent the functional size of the
system and have an additive effect on systems engineering effort. The cost drivers in
COSYSMO are referred to as effort multipliers because they have a global effect on the
overall system (Valerdi, 2005).
Size Drivers (4):
- Number of System Requirements
- Number of System Algorithms
- Number of Major Interfaces
- Number of Operational Scenarios

Cost Drivers (14):
- Requirements Understanding
- Architecture Understanding
- Level of Service Requirements
- Migration Complexity
- Technology Risk
- Documentation
- Number and Diversity of Installations
- Number of Recursive Levels
- Stakeholder Team Cohesion
- Personnel/Team Capability
- Personnel Experience/Continuity
- Process Capability
- Multisite Coordination
- Tool Support

Table 5: COSYSMO Drivers of Systems Engineering Effort
The COSYSMO cost-estimating relationships are represented by the following
equation (Valerdi, 2005):
PM = A \cdot \Big[ \sum_{k} \big( w_{e,k}\,\Phi_{e,k} + w_{n,k}\,\Phi_{n,k} + w_{d,k}\,\Phi_{d,k} \big) \Big]^{E} \cdot \prod_{j=1}^{14} EM_j    [Eq. 3-3]

Where:
PM = effort in Person Months
A = calibration constant derived from historical project data
k = {Requirements, Interfaces, Algorithms, Scenarios}
w_x = weight for "Easy", "Nominal", or "Difficult" size driver
Φ = quantity of "k" size driver
E = represents (dis)economies of scale
EM_j = effort multiplier for the j-th cost driver
As shown in equation 3-3, the four size drivers are weighted based on level of
complexity using the discrete values of “easy,” “nominal,” and “difficult.” In addition,
COSYSMO includes an exponential factor to account for human diseconomies of scale.
This factor represents global and emergent effects on the system and indicates that as the
size of the project increases, productivity decreases due to greater communication and
system integration overhead. As previously mentioned, the cost drivers have a
multiplicative effect on effort and are rated using a scale (Very Low, Low, Nominal,
High, Very High, and in some cases Extra High) that represents their impact on a given
system. The default rating is “nominal,” which is assigned an effort multiplier of 1.0 and
has no effect on the effort calculations. Ratings above or below “nominal” result in an
effort multiplier less than 1.0 or greater than 1.0, depending on the cost driver. Finally,
the calibration constant “A” is derived from historical performance using industry data.
The constant can be locally calibrated to reflect the context and productivity of a specific
organization (Valerdi, 2005).
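As a reading aid, Eq. 3-3 can be sketched directly in code. The calibration constant, weights, and driver quantities below are illustrative placeholders, not the calibrated COSYSMO values.

```python
import math

def cosysmo_effort(A, size_quantities, weights, effort_multipliers, E=1.06):
    """Eq. 3-3: PM = A * (weighted sum of size drivers)^E * product of the
    fourteen effort multipliers.

    size_quantities / weights: {driver: {'easy': x, 'nominal': x, 'difficult': x}}
    """
    size = sum(weights[k][lvl] * size_quantities[k][lvl]
               for k in size_quantities for lvl in size_quantities[k])
    return A * size ** E * math.prod(effort_multipliers)

# All-nominal cost drivers multiply to 1.0 and leave the estimate unchanged.
pm = cosysmo_effort(
    A=0.25,  # illustrative calibration constant, not the industry value
    size_quantities={'requirements': {'easy': 10, 'nominal': 50, 'difficult': 5}},
    weights={'requirements': {'easy': 0.5, 'nominal': 1.0, 'difficult': 5.0}},
    effort_multipliers=[1.0] * 14,
)
print(round(pm, 1))  # person months for a weighted size of 80
```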
It is important to note that a size adjustment factor was recently added to
COSYSMO (version 2.0) to account for the reuse of systems engineering products. The
size driver quantities can now be placed into five different reuse categories that are each
assigned a weighting factor. The COSYSMO 2.0 cost-estimating relationships are
captured below (Fortune, 2009):
PM = A \cdot \Big[ \sum_{r} w_r \sum_{k} \big( w_{e,k}\,\Phi_{e,k} + w_{n,k}\,\Phi_{n,k} + w_{d,k}\,\Phi_{d,k} \big) \Big]^{E} \cdot \prod_{j=1}^{14} EM_j    [Eq. 3-4]

Where:
r = {New, Design for Reuse, Modified, Deleted, Adopted, Managed}
w_r = weight for reuse category
The New category is assigned a weighting factor of 1.0, while the Modified,
Deleted, Adopted, and Managed categories are assigned weighting factors of less than
1.0 to account for the effort savings expected from reuse. The Design for Reuse
category is assigned a weighting factor greater than 1.0 to represent the investment
required to make a product reusable (Fortune, 2009).
The boundaries of COSYSMO are defined by systems engineering standards.
The systems engineering processes and activities covered by the model are based on the
ANSI/EIA 632 Processes for Engineering a System. This standard facilitates the
definition of a systems engineering Work Breakdown Structure (WBS) that can be used
as a basis for cost estimation (Valerdi, 2005). The ANSI/EIA 632 (1999) systems
engineering process categories and activities are found in Appendix A.
The scope of COSYSMO is further defined by a slightly modified version of the
Systems Engineering standard ISO/IEC 15288 – System Life Cycle Processes (ISO/IEC
2002). As shown in Figure 6, COSYSMO estimates the systems engineering effort for
the first four life cycle phases, namely: Conceptualize, Develop, Operational Test and
Evaluation, and Transition to Operation (Valerdi, 2005).
Figure 6: ISO/IEC 15288 Life Cycle Phases and COSYSMO Scope
[Figure: the life cycle phases Conceptualize, Develop, Operational Test and Evaluation, Transition to Operation, Operate/Maintain or Enhance, and Replace or Dismantle; the COSYSMO scope spans the first four phases.]
The COSYSMO User Manual (Valerdi, 2006) provides guidelines regarding the
counting of requirements, which may be provided at different levels of decomposition.
Because customer requirements may be decomposed into many contractor and sub-
contractor requirements, it is important for the researcher to define the system of interest
and the requirements that are applicable to it. The level of decomposition of interest for
COSYSMO is equivalent to the Type A system specification (MIL-STD 490-A, 1985).
Requirements are counted in the system specification for the level of design in which
systems engineering is taking place. The guidelines recommend decomposing high-level
system objectives into requirements that can be designed and tested, and are therefore
representative of the systems engineering effort required to deliver the system. The table
below contains the COSYSMO definition of the number of system requirements (Valerdi,
2006).
Number of System Requirements:
The number of requirements for the system-of-interest at a specific level of design.
The quantity of requirements includes those related to the effort involved in system
engineering the system interfaces, system specific algorithms, and operational
scenarios. Requirements may be functional, performance, feature, or service-oriented
in nature depending on the methodology used for specification. They may also be
defined by the customer or contractor. Each requirement may have effort associated
with it such as verification and validation, functional decomposition, functional
allocation, etc. System requirements can typically be quantified by counting the
number of applicable shalls/wills/shoulds/mays in the system or marketing
specification.
Table 6: Number of System Requirements Definition
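The counting guideline in Table 6 can be approximated mechanically. The regex below is only a rough illustration: a real count would need to exclude non-binding uses of these words in the specification text.

```python
import re

def count_requirements(spec_text: str) -> int:
    """Count applicable shalls/wills/shoulds/mays in a specification,
    per the COSYSMO counting guideline (rough approximation only)."""
    return len(re.findall(r"\b(?:shall|will|should|may)\b",
                          spec_text, re.IGNORECASE))

spec = ("The system shall log all faults. The operator may override alarms. "
        "Reports will be generated daily.")
print(count_requirements(spec))  # 3
```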
During the development of COSYSMO, volatility was identified as a relevant
factor in cost estimating with the potential of greatly increasing system cost. Along with
the complexity and reuse of requirements, volatility was identified as an adjustment
factor that would account for changes or uncertainties in the size drivers. However, the
volatility factor was not included in the initial version of the model because of lack of
data required to calibrate it (Valerdi, 2005).
3.2.2 COCOMO
COSYSMO is built on a platform that is similar to the one utilized by its
predecessor, the Constructive Cost Model (COCOMO). COCOMO is a widely-accepted
parametric cost model designed to estimate the cost, effort, and schedule required for
software development projects. Both models estimate engineering effort as a function of
project size which COCOMO expresses as thousands of source lines of code (KSLOC)
and COSYSMO defines as the number of system requirements, algorithms, major
interfaces, and operational scenarios (Boehm, 1981; Valerdi, 2005).
In addition, the models account for technical and organizational characteristics
that drive effort such as personnel capability and technology risk. COCOMO operates in
the realm of software engineering while COSYSMO is designed for a broader base of
systems that include both hardware and software (Valerdi, 2005). Consequently,
COCOMO follows a software development life cycle (Boehm et al., 2000) while
COSYSMO operates within the systems engineering life cycle phases defined by
ISO/IEC 15288 (Valerdi, 2005). COCOMO II, the successor to the original COCOMO,
incorporates modern software development processes to improve its estimation
capabilities (Boehm et al., 2000).
An earlier version of the original COCOMO included a requirements volatility
factor to account for software breakage: the number of source instructions that had to be
discarded or rebuilt due to changes in requirements. The effects of requirements
volatility were incorporated as a calibrated effort multiplier that was rated using a scale
(Low, Nominal, High, Very High, and Extra High) resulting in a productivity range of
1.78. Although the change in requirements was considered a significant factor affecting
the cost of software, the requirements volatility factor was not included in the final
version of COCOMO because it was not precisely defined and was considered to be too
subjective (Boehm, 1981).
The effects of requirements volatility were addressed in a revised version of the
original COCOMO intended to capture software development efficiencies specific to the
Ada programming language. This version of COCOMO was called Ada COCOMO and
it utilized a process model that improved software development productivity by reducing
the exponent in the COCOMO embedded mode equation. This exponent represents the
diseconomies of scale associated with the increase in communication overhead and
process integration effort related to the size of a project. The parametric effort equation
for the COCOMO embedded mode is reproduced below (Boehm and Royce, 1989).
MM_{nom} = 2.8\,(KDSI)^{1.20}    [Eq. 3-5]

Where,
MM_{nom} = number of man-months required to develop a nominal software product
KDSI = thousands of delivered source instructions.
The Ada process model strategy attempted to reduce the diseconomies of scale by
mitigating three primary sources of project inefficiency: interpersonal communications
overhead, late rework, and unstable requirements. Some of the methods utilized to
reduce the inefficiencies caused by changing requirements included raising the threshold
of allowable requirements changes and using an incremental development process that
would defer the implementation of requirements changes to future increments (Boehm
and Royce, 1989).
The improvements in diseconomies of scale resulting from the Ada process model
are captured through the updated nominal effort equation shown below:
MM_{nom} = 2.8\,(KDSI)^{1.04 + \sum_{i=1}^{4} W_i}    [Eq. 3-6]
The parameter Σ indicates the degree of implementation of the Ada process model
in terms of: 1) early software architecture definition; 2) mitigation of risks by PDR; 3)
requirements stabilization by PDR; and 4) the development team's experience with the
Ada process model. Each of these four elements is rated using a scale from 0.00 to 0.05,
and the ratings are summed to determine the parameter Σ. The value of Σ is 0 when the
project is fully compliant with the Ada process model. The rating scale for the
requirements volatility element is reproduced below.
Rating scale: .00 = No Changes; .01 = Small Non-critical Changes; .02 = Frequent Non-critical Changes; .03 = Occasional Moderate Changes; .04 = Frequent Moderate Changes; .05 = Many Large Changes

Characteristic ratings (.00 → .05):
- System requirements baselined, under rigorous change control: Fully / Mostly / Generally / Some / Little / None
- Level of uncertainty in key requirements areas (mission, user interfaces, hardware, other interfaces): Very Little / Little / Some / Considerable / Significant / Extreme
- Organizational track record in keeping requirements stable: Excellent / Strong / Good / Moderate / Weak / Very Weak
- Use of incremental development to stabilize requirements: Full / Strong / Good / Some / Little / None
- System architecture modularized around major sources of change: Fully / Mostly / Generally / Some / Little / None

Table 7: Ada COCOMO Σ factor: Requirements Volatility
The expectation of a higher level of requirements volatility, as defined by the
rating levels, would result in larger diseconomies of scale and additional effort. A well-
defined set of requirements, managed under robust organizational and product
development processes is expected to experience less volatility. In this case, a lower
rating would be selected.
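A sketch of Eq. 3-6 shows how the summed element ratings move the diseconomies-of-scale exponent. The ratings and project size below are illustrative.

```python
def ada_nominal_effort(kdsi: float, ratings: list) -> float:
    """Eq. 3-6: MM_nom = 2.8 * KDSI^(1.04 + sum of four 0.00-0.05 ratings).
    Sigma = 0 (all ratings 0.00) means full Ada process-model compliance."""
    assert len(ratings) == 4 and all(0.0 <= w <= 0.05 for w in ratings)
    return 2.8 * kdsi ** (1.04 + sum(ratings))

compliant = ada_nominal_effort(100.0, [0.00] * 4)  # exponent 1.04
volatile = ada_nominal_effort(100.0, [0.05] * 4)   # exponent 1.24
print(round(volatile / compliant, 2))  # 2.51: effort penalty at 100 KDSI
```

At 100 KDSI the worst-case ratings raise effort by a factor of 100^0.20, roughly 2.5, which illustrates why the model treats unstable requirements as a scale penalty rather than a flat surcharge.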
COCOMO II accounts for the impact of requirements volatility on the effective
size of software products through the use of an adjustment factor called REVL
(Requirements Evolution and Volatility). This factor, which is also referred to as the
breakage parameter, is defined as the percentage of code discarded due to changes in
requirements. The relationship between software size and the REVL factor is given by the
following equation (Boehm et al., 2000):
Size = \Big( 1 + \frac{REVL}{100} \Big) \times Size_D    [Eq. 3-7]

Where,
Size_D is the initial equivalent size of the software product adjusted for reuse
The requirements evolution and volatility factor (REVL) can be used to estimate
the risk of cost growth due to uncertainty in the system requirements. This uncertainty
would be expressed in terms of a percentage of the total number of system requirements
that are likely to change due to new technology developments or evolving knowledge of
the system. For example, for a REVL of 20% and software size of 100 KSLOC, the
effective size of the product would be 120 KSLOC. The difference in cost between the
original software size estimate and the product size with the REVL factor could be put
aside as management reserve or used as a basis for requesting additional budget (Boehm
et al., 2000).
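Eq. 3-7 and the worked example above reduce to a one-line calculation:

```python
def effective_size(size_d_ksloc: float, revl_pct: float) -> float:
    """Eq. 3-7: Size = (1 + REVL/100) * Size_D, where Size_D is the
    initial equivalent size already adjusted for reuse."""
    return (1.0 + revl_pct / 100.0) * size_d_ksloc

print(effective_size(100.0, 20.0))  # 120.0 KSLOC effective size
```

The 20 KSLOC difference between the original and effective sizes corresponds to the management reserve or additional budget request discussed above.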
The requirements evolution and volatility factor can also be adapted to the
incremental development life cycle process, where a system is delivered in time phased
increments of increasing functionality. This approach tends to reduce risk as compared to
the traditional single-step development process that requires the full capability of the
system to be met in one delivery. A requirements volatility factor can be specified for
each development increment based on the level of understanding of the system at that
particular stage (Boehm et al., 2000). The approach for estimating the cost impact of
requirements volatility in COCOMO II was considered as a basis for incorporating this
factor in COSYSMO.
The COSYSMO and the COCOMO family of parametric models have
acknowledged the need to account for requirements volatility in project effort estimates.
In some cases, the impact of unstable requirements has been characterized as a
multiplier to the size of the project. As described above, an alternate approach is to
incorporate volatility effects into the diseconomies of scale exponent. These methods
informed the development of the requirements volatility extension to COSYSMO, which
will be described in detail in the following chapter.
Chapter 4: Model Definition
4.1 Development of the Model
Prior studies indicate that requirements volatility is correlated with an increase in
the functional size of a project (Houston, 2000; Ferreira et al., 2009). In addition, late
changes in requirements have been linked to an increase in rework caused by a higher
defect rate and the collateral impact to engineering work products (Houston, 2000; Kulk
and Verhoef, 2008; Ferreira et al., 2009). The increased number of requirements and
additional rework in turn lead to an increase in effort, cost, and schedule duration (Jones,
1996; Houston, 2000; Kulk and Verhoef, 2008; Ferreira et al., 2009). The level of
requirements volatility also changes as a function of the system life cycle, as the
requirements set is expected to stabilize as the system matures (Houston, 2000; Roedler
and Rhodes, 2007). Based on the review of the literature and workshop discussions, a set
of observations regarding the precursors and impacts of volatility were developed and
captured in Table 8.
Observations:
1. Requirements volatility is caused by an identifiable set of project and organizational factors.
2. The level of requirements volatility is a function of the system life cycle phase.
3. Requirements volatility leads to an increase in project size and cost.
4. The cost and effort impact of a requirements change increases the later the change occurs in the system life cycle.
5. The impact of requirements volatility varies depending on the type of change: added, deleted, or modified.

Table 8: Requirements Volatility Observations
4.2 Modeling Approach
The foundations of the model are the cost estimating relationships (CERs)
represented in COCOMO and COSYSMO. Through industry validation, COSYSMO
has demonstrated that the parametric relationships in the model (the four size drivers and
fourteen cost drivers) are good predictors of systems engineering effort (Valerdi, 2005).
Consequently, one of the principles guiding the requirements volatility extension to
COSYSMO is the preservation of the structure and basic parametric equation of the
original model:
PM = A \times (SIZE)^{E} \times \prod_{i=1}^{n} EM_i    [Eq. 4-1]

Where,
PM = Systems engineering effort in person months
A = Calibration constant derived from historical project data
SIZE = measure of functional size of the system (requirements, interfaces,
algorithms, operational scenarios)
n = number of cost drivers (14)
EM_i = Effort multiplier for the i-th cost driver
E = scale factor or diseconomies of scale
The scope of the model extension will be limited by the boundaries of
COSYSMO. The systems engineering processes and activities covered by the model are
based on the standard ANSI/EIA 632 (1999). The scope of COSYSMO is further defined
by a slightly modified version of the Systems Engineering standard ISO/IEC 15288 –
System Life Cycle Processes (ISO/IEC 2002). COSYSMO estimates the systems
engineering effort for the first four phases of the life cycle, namely: Conceptualize,
Develop, Operational Test and Evaluation, and Transition to Operation (Valerdi,
2005).
The starting point for the requirements volatility extension will be the original
COSYSMO. Fortune (2009) recently developed an update to the model, COSYSMO 2.0,
which allows the user to quantify the effects of reuse on systems engineering effort. In
order to isolate the effects of the requirements volatility factors introduced by this
research, the effects of reuse will not be applied to the model.
4.3 Model Evolution
The requirements volatility model evolved in two spirals or model families. The
first iteration of the COSYSMO extension built upon the COCOMO II method of using a
size adjustment factor to account for Requirements Evolution and Volatility (REVL).
The second iteration incorporates the effects of requirements volatility as a scale factor
(SF) that is added to the diseconomies of scale exponent (E). This method was utilized
to model volatility effects in Ada COCOMO (Boehm and Royce, 1989). In addition, the
modification of the diseconomies of scale exponent has been proposed as an appropriate
method for modeling project characteristics with variable life cycle impacts (Wang et al.,
2008).
The effort multipliers in Academic COSYSMO were left unchanged. While
some of the potential contributors to requirements volatility are captured in the model’s
cost drivers (e.g. requirements understanding, technology risk), the potential for overlap
or double counting is limited. According to workshop discussions and survey results,
there are several other project factors that influence volatility that are not currently
captured in the COSYSMO cost drivers. In addition, because the size of the project
represents the majority of the model’s explanatory power, accounting for volatility as an
adjustment to the size drivers has a more direct impact on the effort estimate.
Spiral # 1: COSYSMO’s predecessor in the realm of software engineering,
COCOMO II, utilizes a size adjustment factor to account for Requirements Evolution and
Volatility (REVL). This factor captures the percentage of software code that is likely to
be discarded due to new technology developments or evolving knowledge of the system
(Boehm et al., 2000). For the purposes of estimating the impact of requirements volatility
on systems engineering effort, REVL is redefined as the percentage of the baseline set of
requirements that changes throughout the system life cycle. The effective increase in the
number of requirements results in an associated increase in systems engineering effort,
which is consistent with observation #3.
R_{eq} = \Big( 1 + \frac{REVL}{100} \Big) \times R_0    [Eq. 4-2]

Where,
R_0 = Baseline number of requirements
R_{eq} = Equivalent number of requirements
REVL = % of baseline requirements that changed over the system life cycle
It is important to note that the requirements evolution and volatility factor
(REVL) is applied to the entire requirements set. In COSYSMO, the system
requirements are categorized by level of complexity as “easy,” “nominal,” and
“difficult.” It is assumed that changes are equally likely across the three levels of
complexity. This assumption was made for several reasons. First, the data available
were insufficient to justify a different likelihood for each type of change. Secondly,
feedback from industry representatives indicated that it would not be feasible to obtain
reliable metrics that indicate whether a modification was made to an easy, nominal, or
difficult requirement. Nevertheless, it is expected that a change to a difficult
requirement will involve more effort than a change to an easy requirement. This
expectation is accounted for in the model. The effective number of requirements and
associated effort across the three levels of complexity is scaled linearly by the volatility
factor. Since COSYSMO already allocates additional effort for difficult requirements, a
change in this complexity category would result in proportionally larger effort (Peña
and Valerdi, 2011).
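The uniform-scaling assumption described above can be sketched as follows. The category counts and the 15% REVL value are illustrative.

```python
def equivalent_requirements(baseline: dict, revl_pct: float) -> dict:
    """Scale the easy/nominal/difficult requirement counts by the same
    REVL factor, per the Spiral-1 assumption that changes are equally
    likely across the three complexity levels."""
    factor = 1.0 + revl_pct / 100.0
    return {level: count * factor for level, count in baseline.items()}

baseline = {'easy': 40, 'nominal': 100, 'difficult': 10}
scaled = equivalent_requirements(baseline, 15.0)
print({level: round(n, 1) for level, n in scaled.items()})
# every category grows 15%; difficult requirements carry larger COSYSMO
# weights, so their growth contributes proportionally more effort
```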
In addition to requirements, the functional size of the system is composed of
system interfaces, algorithms, and operational scenarios. During the development of
COSYSMO, it was found that the four size drivers were moderately correlated and they
were combined into a single SIZE predictor. This means that a change in requirements
will likely impact the other size drivers. Consequently, modifying only the
requirements size driver may under represent the effects of volatility in the system.
However, metrics on changes to interfaces, algorithms, and operational scenarios are
not uniformly collected by engineering organizations. This observation is supported by
a survey of 13 systems engineers and project managers conducted during the 2011 USC
Center for Systems and Software Engineering (CSSE) Annual Research Review. The
respondents represented nine different organizations and had an average of over 20
years of experience in either systems engineering or software engineering. Only 15%
of the respondents indicated that their organization tracks metrics of changes to
operational scenarios, while 31% indicated that changes to algorithms are consistently
measured. Approximately half (53%) of the respondents indicated that changes to
interfaces are tracked. By comparison, metrics on requirements changes were more
easily obtained because of the use of electronic requirements databases. Moreover, it
was difficult to acquire reliable expert judgment on the impacts of changes to
interfaces, algorithms, and operational scenarios. Based on discussions with industry
experts conducted during several research workshops, it was evident that systems
engineers have a more intuitive feel for evaluating changes in requirements as opposed
to the other size drivers.
Consequently, requirements volatility was used as a proxy for changes to
interfaces, algorithms, and operational scenarios. This approach reduced the number of
parameters in the model and streamlined the data collection process. In addition,
requirements typically represent over half of the functional size of the systems that
were evaluated by prior researchers (Valerdi, 2005; Fortune, 2009). Thus, equation 4-2
is redefined as:
SIZE_eq = SIZE_0 × (1 + REVL / 100)   [Eq. 4-3]

Where,
SIZE_0 = Baseline Functional Size
SIZE_eq = Equivalent Functional Size
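As a quick numerical illustration of Eq. 4-3 (the values here are hypothetical, not taken from the study data), the adjustment is a one-line computation:

```python
def equivalent_size(baseline_size: float, revl: float) -> float:
    """Eq. 4-3: inflate the baseline functional size by REVL, the %
    of baseline requirements expected to change over the life cycle."""
    return baseline_size * (1 + revl / 100.0)

# A hypothetical baseline of 200 size units with 25% expected volatility:
print(equivalent_size(200, 25))  # 250.0
```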
Spiral #2: The second version of the model utilized a different approach and
incorporated the effects of requirements volatility on SIZE through a scale factor (SF)
that is added to the diseconomies of scale exponent (E) in the COSYSMO equation.
This method was based on a similar approach used to model volatility effects in Ada
COCOMO (Boehm and Royce, 1989) and prior research that points to the compounding
or exponential effect of project factors with variable life cycle impact (Kulk and
Verhoef, 2008; Wang, Valerdi, Boehm, and Shernoff, 2008). Given the variable life
cycle impact of requirements changes previously observed in the literature, this
approach was selected as a more appropriate representation of volatility effects in the
model. In addition, comparisons of the a priori model performance between the two
spirals indicated that the incorporation of a volatility factor in the diseconomies of scale
exponent resulted in a greater prediction accuracy than a multiplicative adjustment to
SIZE. The COSYSMO equation is modified as follows:
PM = A × SIZE^(E + SF) × ∏_{i=1}^{n} EM_i   [Eq. 4-4]
Where,
E = Diseconomies of scale factor (1.06)
SF = Requirements volatility scale factor
In Ada COCOMO, the requirements volatility scale factor SF_1 was rated using a
range of 0.00 to 0.05, with higher values indicating a greater expected level of volatility
(Boehm and Royce, 1989). Utilizing a similar methodology, the COSYSMO scale
factor is calculated using a constant value of 0.05 that is scaled based on REVL, the %
of the baseline requirements that is expected to change over the system life cycle. A
REVL of 0% indicates no volatility and would result in a scale factor equal to 0, while a
REVL of 100% would yield a scale factor of 0.05. This relationship is captured as
follows:

SF = 0.05 × (REVL / 100)   [Eq. 4-5]
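A minimal sketch of Eqs. 4-4 and 4-5 in Python; the calibration constant A and the effort-multiplier values below are placeholders for illustration, not calibrated results:

```python
import math

def scale_factor(revl: float) -> float:
    """Eq. 4-5: SF = 0.05 * (REVL / 100)."""
    return 0.05 * revl / 100.0

def person_months(A: float, size: float, revl: float, effort_multipliers):
    """Eq. 4-4: PM = A * SIZE^(E + SF) * product of effort multipliers,
    with the diseconomies-of-scale exponent E = 1.06."""
    E = 1.06
    return A * size ** (E + scale_factor(revl)) * math.prod(effort_multipliers)

# Hypothetical project: 100 size units, nominal cost drivers, REVL of 40%.
# Raising the exponent from 1.06 to 1.08 compounds with system size.
print(person_months(A=0.5, size=100, revl=40, effort_multipliers=[1.0]))
```

Because the adjustment sits in the exponent, the same REVL rating penalizes large systems proportionally more than small ones, which is the compounding behavior the spiral #2 approach is meant to capture.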
As previously mentioned, observations from the literature indicate that
requirements added or modified after the requirements baseline carry an effort penalty
due to the potential rework and collateral impact to other engineering products (Houston,
2000; Kulk and Verhoef, 2008; Ferreira et al., 2009). It follows that a weighting factor
(w_v) is required to account for this additional effort. These relationships are
captured through the following equation:

SF = 0.05 × (REVL / 100) × w_v   [Eq. 4-6]

Where,
SF = Weighted scale factor
w_v = Volatility weighting factor
Other aspects to consider are captured by observations #2 and #4, which state
that both the level and impact of requirements volatility change as a function of time.
In order to capture this time dependency, different weighting factors for each life cycle
phase (w_l) are included in the model. These weighting factors are then multiplied by
the % of the total number of requirements changes that occurred during each life cycle
phase. The sum of the % breakdown of changes (Θ_l) across all 4 life cycle phases
equals 100%. These factors are summed in order to develop an aggregate volatility
weighting factor as shown below.
w_v1 = Σ_l (w_l × Θ_l)   [Eq. 4-7]

Where,
w_v1 = aggregate life cycle phase volatility weighting factor
w_l = weighting factor for each life cycle phase
Θ_l = % of total requirements changes per life cycle phase
l = life cycle phase

In addition, as indicated by observation #5, the effort penalty may be different
for added, deleted, or modified requirements. Using an approach similar to the one
described above, different weighting factors for each life cycle phase and change type
(w_x,l) were developed. In order to aggregate their impact, the individual weighting
factors are multiplied by the % contribution of the corresponding change category
(added, deleted, or modified) to the total number of changes during a given life cycle
phase. The sum of the % breakdown of changes per category (Θ_x,l) across all 4 life
cycle phases equals 100%. These products are then summed as shown below:

w_v2 = Σ_l (w_a,l × Θ_a,l + w_d,l × Θ_d,l + w_m,l × Θ_m,l)   [Eq. 4-8]

Where,
w_v2 = aggregate volatility factor for change type
w_x,l = weighting factor for added, deleted, or modified requirements
Θ_x,l = % of total requirements changes that were added, deleted, or modified
x = change type: added, modified, and deleted
l = life cycle phase
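The aggregation in Eq. 4-7 amounts to a weighted average of per-phase penalties, and Eq. 4-8 refines it by change type in the same way. A sketch of Eq. 4-7 with invented weights and an invented change profile (not study data):

```python
def aggregate_phase_weight(phase_weights, phase_fractions):
    """Eq. 4-7: w_v1 = sum over phases l of w_l * Theta_l, where the
    Theta_l (fractions of all requirements changes) must sum to 1.0."""
    assert abs(sum(phase_fractions) - 1.0) < 1e-9
    return sum(w * t for w, t in zip(phase_weights, phase_fractions))

# Illustrative penalties for the four life cycle phases and a change
# profile weighted toward the earlier phases:
w_l = [1.0, 1.3, 1.8, 2.5]
theta_l = [0.40, 0.35, 0.15, 0.10]
print(round(aggregate_phase_weight(w_l, theta_l), 3))
```

With these invented numbers the late-phase changes carry the largest per-change penalty, but the early-phase changes dominate the aggregate because most changes occur early.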
Based on these equations and observations, three sub-models were considered for
further evaluation and are listed in Table 9. They are ordered by progressively
increasing number of terms in order to determine the incremental effect of each term
on prediction accuracy.
Model # | Description
2a | SIZE diseconomies of scale exponent (E) adjusted by REVL
2b | SIZE diseconomies of scale exponent (E) adjusted by REVL and w_v1 (aggregate life cycle weighting factor)
2c | SIZE diseconomies of scale exponent (E) adjusted by REVL and w_v2 (aggregate change type and life cycle weighting factor)

Table 9: Spiral 2 Sub-models
The prediction accuracy of the three model variants was compared using a
priori expert data. Models 2b and 2c resulted in greater prediction accuracy than model
2a and the original COSYSMO. However, the incorporation of the additional
weighting factors for each type of change in model 2c did not result in an appreciable
improvement in prediction accuracy over model 2b. For reasons of simplicity and
parsimony, model 2b was selected as the final model.
The operational equation of the COSYSMO requirements volatility extension is
shown below.
PM = A × [Σ_k (w_e,k × Φ_e,k + w_n,k × Φ_n,k + w_d,k × Φ_d,k)]^(E + SF) × ∏_{j=1}^{14} EM_j   [Eq. 4-9]

Where:
PM = effort in Person Months
A = calibration constant derived from historical project data
k = {Requirements, Interfaces, Algorithms, Scenarios}
w_x = weight for “Easy”, “Nominal”, or “Difficult” size driver
Φ = quantity of “k” size driver
E = represents (dis)economies of scale
SF = requirements volatility scale factor
EM_j = effort multiplier for the j-th cost driver
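Taken together, Eq. 4-9 can be sketched end-to-end. All numeric inputs below (A, the driver weights, the counts, REVL, w_v) are invented placeholders; the actual weights and REVL ratings come from the calibration described in the following sections:

```python
import math

def cosysmo_revl_effort(A, counts, weights, em, revl, w_v):
    """Eq. 4-9: PM = A * (sum of weighted size-driver counts)^(E + SF)
    * product(EM_j), with SF = 0.05 * (REVL / 100) * w_v per Eq. 4-6."""
    E = 1.06
    size = sum(w * n for w, n in zip(weights, counts))
    sf = 0.05 * (revl / 100.0) * w_v
    return A * size ** (E + sf) * math.prod(em)

# Hypothetical system sized by requirements only: easy/nominal/difficult
# counts, nominal cost drivers, 30% REVL, aggregate weighting factor 1.4.
pm = cosysmo_revl_effort(A=0.25, counts=[20, 50, 10], weights=[0.5, 1.0, 2.0],
                         em=[1.0], revl=30, w_v=1.4)
print(round(pm, 1))
```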
The difference between the original COSYSMO and the COSYSMO
requirements volatility extension is the scale factor defined by equation 4-6. The
requirements volatility weighting factors are derived from industry data and surveys of
experienced systems engineers, project managers, and software engineers from a variety
of industries but with emphasis on aerospace and defense. The expected level of
volatility (REVL) for a given project is selected using a rating scale ranging from Very
Low to Very High. The specific rating level is determined based on a set of project
characteristics developed from relevant literature and expert judgment captured through
industry workshops and surveys. This approach is consistent with the application of
ratings based on technical and organizational factors that has been utilized in prior
parametric cost models (Boehm, 1981; Boehm and Royce, 1989; Chulani, 1999; Baik,
2000; Boehm et al., 2000; Valerdi, 2005; Nguyen, 2010). The numerical value
associated with each REVL rating level is derived from data gathered through expert
surveys and historical project data. The requirements volatility rating levels and project
characteristics are shown in Table 10.
Characteristic | Very Low | Low | Moderate | High | Very High
System requirements baselined and agreed to by key stakeholders | Fully | Mostly | Generally | Somewhat | No agreement
Level of uncertainty in key customer requirements, mission objectives, and stakeholder needs | Very Low | Low | Moderate | High | Very High
Number of co-dependent systems with influence on system requirements | Very Low | Low | Moderate | High | Very High
Strength of your organization’s requirements development process and level of change control rigor | Very High | High | Moderate | Low | Very Low
Precedentedness of the system, use of mature technology | Very High | High | Moderate | Low | Very Low
Stability of the customer’s organization and business environment | Very High | High | Moderate | Low | Very Low
Experience level of the systems engineering team in requirements analysis and development | Very High | High | Moderate | Low | Very Low

Table 10: Requirements Volatility Rating Level
Following the guidelines of COSYSMO, a systems engineer or project manager
trying to estimate the systems engineering effort required to develop a specific system
would first count the number of system requirements, operational scenarios, interfaces,
and algorithms at the desired level of decomposition. The quantities of each size driver
may be placed in three categories based on level of complexity: “easy”, “nominal” and
“difficult.” The user would then select the expected level of requirements volatility
using the rating scale based on a set of project characteristics. Once the size of the
system is determined, systems engineering effort can be adjusted by rating the cost
drivers as appropriate for the system of interest (Valerdi, 2005).
Having described the model evolution, the following sections provide additional
insights into the methodology used to determine the specific model parameters and the
validation of its improvement in cost estimation accuracy over the original COSYSMO.
Chapter 5: Research Methodology
The selection of an appropriate research methodology requires the careful
examination of the problem context and domain. Systems engineering research is
particularly challenging because it spans a wide range of fields, from the social sciences
to traditional engineering problems with a physical or mathematical base. A mixed
methods research design was selected because it facilitates the gathering of data from
different perspectives. Moreover, quantitative and qualitative approaches by themselves
are often inadequate to deal with complex socio-technical problems (Creswell, 2009). In
addition, a mixed methods design has been described as being crucial in the success of
systems engineering research because it addresses the human-intensive aspect of systems
engineering practice while enabling the testing of cause and effect relationships between
variables and validation of specific hypotheses (Valerdi, Liu, and Fortune, 2010).
5.1 Research Design
The mixed-methods research strategy selected for this dissertation can be
described as sequential exploratory, which involves a qualitative data collection phase
used to explore a subject and develop the hypotheses, followed by a quantitative phase
that builds upon the qualitative findings and tests the hypotheses (Creswell, 2009). This
research design follows both an interpretivist and positivist research approach. The
interpretivist approach relies on individuals’ experiences and views to find meaning in a
complex human situation (Crotty, 1998). Theories are generated based on observations
of the participants’ context and the use of interviews, discussions and surveys. The
positivist approach begins with a theory, collects data, and utilizes quantitative
techniques to determine whether the theory is supported or refuted (Creswell, 2009).
Following this strategy, the first phase of the research was a qualitative exploration
of the perspectives of systems engineering practitioners on the factors influencing
requirements volatility and its relative impact on systems engineering effort. Data were
collected through field research: interviews, discussions and surveys conducted at
industry conferences and workshops. The findings from the field research were used to
frame the problem and further define the hypotheses, which were tested through the use
of a quasi-experimental research approach (Creswell, 2009). Historical project data were
collected to evaluate the quantitative relationship between requirements volatility and
systems engineering effort in large-scale engineering projects. This phase of the
research is considered quasi-experimental because the contextual variables influencing a
systems engineering organization cannot be fully controlled, as they would be in a true
experimental setting. Nevertheless, this method has proven useful in systems
engineering research because it allows the investigation of cause-and-effect relationships
and testing of hypotheses using a variety of projects under different conditions (Valerdi
et al., 2010).
This research approach follows the seven-step methodology used to develop
COSYSMO and COCOMO II as shown in Figure 7 (Boehm et al, 2000; Valerdi, 2005).
The process starts with the analysis of existing literature for factors associated with
requirements volatility that affect systems engineering effort. This is followed by
behavioral analysis of the system under different conditions and the identification of
relative significance between variables. The next step is to conduct expert assessments
of the proposed model parameters, which is followed by the gathering of project data.
Historical project data on requirements volatility, systems engineering effort, and the
COSYSMO size and cost drivers were collected (step 5). Subsequently, the Bayesian
approach is used to formally combine expert judgment with sample data to develop an
optimal a-posteriori update of model parameters (Boehm et al, 2000).
Figure 7: Seven Step Modeling Methodology (1. Analyze existing literature; 2. Perform behavioral analysis; 3. Identify relative significance; 4. Perform expert-judgment Delphi assessment; 5. Gather project data; 6. Determine Bayesian a-posteriori update; 7. Gather more data, refine model; a-priori model + sampling data = a-posteriori model)
5.2 Data Collection
5.2.1 Workshop Surveys
Data on the relative significance of variables associated with requirements
volatility were collected through surveys and interviews of subject-matter experts from
a variety of industries but with specific emphasis on aerospace and defense in six
different workshops. The organizations that participated in the workshops can be found
in Appendix B. The surveys were administered using a purposive or judgmental
sampling method, which selects the sample on the basis of the researcher’s knowledge
of the population as opposed to a random probability (Babbie, 1990). The workshop
participants had, on average, over 20 years of experience in systems engineering,
software engineering, or cost estimation. In addition, the workshop setting facilitated
the collection of data at one point in time and allowed for discussion and clarification of
the questions and concepts being explored. Workshops #1, #2, and #3 involved a
qualitative exploration of the perspectives of subject-matter experts on the factors
influencing requirements volatility and its relative impact on project performance;
workshops #4, #5, and #6 focused on the development of a quantitative characterization
of the impact of requirements volatility on systems engineering effort. A summary of
the surveys and the workshops during which they were administered can be found in
Table 11.
Survey | Workshop | Type of Survey Questions
# 1 | Workshop # 1: 2010 USC-CSSE Annual Research Review; Workshop # 2: 2010 MIT LAI knowledge exchange event | Closed-ended questions using a Likert scale and multiple choice
# 2 | Workshop # 3: 2010 Practical Software and Systems Measurement (PSM) Users Group Conference | Open-ended questions; structured graphs
# 3 | Workshop # 4: 25th annual COCOMO Forum; Workshop # 5: 2011 USC-CSSE Annual Research Review | Constant sum questions; expert ratings
# 4 | Workshop # 6: 2011 Practical Software and Systems Measurement (PSM) Users Group Conference | Delphi survey; expert ratings

Table 11: Summary of Surveys and Workshops
Four different data collection instruments were utilized at these workshops. Each
instrument was intended to gather a different perspective while building an increasingly
deeper understanding of requirements volatility. The use of a variety of techniques and
venues was intended to improve the validity of the study by gathering data through
different sources and methods in order to converge on a theory (Jick, 1979).
The first survey (Appendix C) was designed to elicit the viewpoints of
experienced systems engineers and industry professionals on the causes and effects of
requirements volatility. The survey participants were also asked to discuss the
occurrence of requirements volatility in their past projects and their experience using
metrics to track trends in requirements changes. The survey utilized closed-ended
questions that provided multiple choices or inquired about the participants’ level of
agreement with a selection of responses using a Likert scale (Likert, 1932).
The second survey (Appendix D) asked the participants to graph, based on their
industry experience, the level of requirements volatility they would expect during the
life cycle phases of their system of interest. The use of a freeform graph was intended to
allow the respondents more flexibility in expressing their views and expectations. This
exercise was followed by a set of open-ended questions regarding the influence of
various project characteristics on their responses. The third data collection instrument
(Appendix E) was designed to gather expert judgment of the impacts of requirements
volatility by asking the participants to provide a numerical estimate of the increase or
decrease in systems engineering effort caused by the addition, modification, or deletion
of a requirement during the system life cycle phases. The effect on effort was requested
in terms of a multiplier, where the value of “1” indicated no impact, a value less than 1
signified less effort, and a value greater than 1 represented additional effort due to the
change. In addition, the survey utilized constant sum questions to estimate the %
breakdown of requirements changes per life cycle phase and per type of change (added,
modified, deleted) for the participant’s system of interest.
Expert opinion on the impact and expected level of requirements volatility was
collected using a Wideband Delphi survey (Appendix F). The Delphi technique,
developed by the RAND Corporation, elicits the perspectives of subject-matter experts
and attempts to converge on a group consensus through several rounds of assessment
(Dalkey, 1967). The participants are given a questionnaire which requests their
numerical ratings of factors relevant to the phenomena under study. The responses are
collected, summarized, and provided back to the participants. Armed with the
knowledge of the results of the first round of questioning, the group is once again asked
to assess the questions and revise their responses if needed (Valerdi, 2011). The
Wideband Delphi methodology is a variation of the Delphi technique that allows for
group discussion between assessment rounds and has been found to yield more accurate
results in the development of parametric cost models (Boehm, 1981; Valerdi, 2011).
The survey data and workshop discussions resulted in a set of observations
regarding the expected level and impact of requirements volatility across the system life
cycle. These observations were used to develop a mathematical framework for
quantifying the impact of requirements volatility on systems engineering effort. The
foundations of the modeling approach are based on the cost estimating relationships
(CERs) established by the COCOMO and COSYSMO parametric cost models. The
modeling framework resulted in a requirements volatility adjustment factor that can be
applied to COSYSMO in order to account for the impact of requirements volatility on
systems engineering effort.
5.2.2 Project Data Collection
In order to validate the proposed cost estimating relationships, historical data on
requirements changes, systems engineering effort, and COSYSMO size and cost drivers
were collected for 25 systems engineering projects in a large aerospace company.
Requirements volatility data were collected in terms of the number of added, deleted, and
modified requirements over the life cycle phases covered by COSYSMO: Conceptualize,
Develop, Operational Test and Evaluation, and Transition to Operation. The life
cycle phases were defined using a modified version of the Systems Engineering standard
ISO/IEC 15288 – System Life Cycle Processes (ISO/IEC, 2002).
Several guidelines were developed and consistently applied throughout the
project data collection phase. First, the specific level of design (system, subsystem or
component) that is within the scope of the research was defined. System level
requirements, typically captured in type “A” specifications, were chosen to be the subject
of this study because they are considered to be the major drivers of systems engineering
effort (Valerdi, 2005). Second, only changes in requirements after the initial release of
the specifications were counted. Modifications to a draft specification that occurred
before initial release were not considered volatility. This assumption was supported by
feedback from industry participants during workshop discussions. Furthermore, changes
in requirements while the technical baseline of the system is being negotiated are
considered a healthy part of the requirements development process (Zowghi and
Nurmuliani, 2002). Third, administrative changes to requirements were not counted. A
change was considered administrative if it did not have any effect on the technical
content or intent of the requirement. These changes included corrections of
typographical errors, re-numbering or moving requirements within the same
specification, and changes to organizations’ names with no impact on technical content.
The counting rules for the number of system requirements followed the
procedures used by COSYSMO (Valerdi, 2006). With respect to counting the number
and type of requirements changes, several techniques were utilized based on the format
and availability of historical project data. The first approach was to determine the number
of requirements changes electronically by using a requirements management tool such as
the Dynamic Object Oriented Requirements System (DOORS). These tools help system
engineers manage requirements throughout the system life cycle by storing requirements,
their attributes, and links between system elements in a multi-user database. They also
facilitate the definition of a requirements baseline and keep track of changes and specific
edits to individual requirements (Wiegers, 1999). DOORS has customizable change
management processes that provide metrics on the number of requirements that have
been modified, added, or deleted, and allow the user to view the specific changes made to
a particular requirement.
For projects that did not manage their requirements using a database, a text
differencing tool was utilized as an alternative technique to determine the number and
type of requirements changes between two revisions of a specification. Similar methods
have been successfully implemented in software systems engineering (CSSE, 2010). The
“compare” feature in Microsoft Word was utilized to compare two versions of a
document and highlight the differences between. This technique was labor-intensive
because it required the careful evaluation of the entire document and manual counting of
the changes. However, it facilitated the identification and exclusion of administrative
changes.
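The text-differencing step can be approximated with Python's standard difflib, assuming each specification revision has been reduced to one requirement statement per line. The requirement texts below are invented for illustration, and the study's actual counting still required manual review to exclude administrative changes:

```python
import difflib

old_spec = [
    "The system shall operate from -40 C to +60 C.",
    "The system shall weigh less than 50 kg.",
    "The system shall provide a maintenance port.",
]
new_spec = [
    "The system shall operate from -40 C to +70 C.",  # modified
    "The system shall weigh less than 50 kg.",        # unchanged
    "The system shall log all operator actions.",     # modified (replaced)
]

# Align the two revisions line-by-line and classify the differences.
sm = difflib.SequenceMatcher(a=old_spec, b=new_spec, autojunk=False)
added = deleted = modified = 0
for tag, i1, i2, j1, j2 in sm.get_opcodes():
    if tag == "replace":  # paired old/new lines count as modifications
        modified += min(i2 - i1, j2 - j1)
        added += max(0, (j2 - j1) - (i2 - i1))
        deleted += max(0, (i2 - i1) - (j2 - j1))
    elif tag == "insert":
        added += j2 - j1
    elif tag == "delete":
        deleted += i2 - i1
print(added, deleted, modified)  # 0 0 2
```

Whether a replaced line counts as one modification or as a paired add/delete is a counting-rule choice; the classification above follows the study's convention of counting it as a modification.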
Change notices associated with a specification revision were reviewed when an
electronic copy of a specification was not available or to cross-reference the data
collected through the methods previously discussed. Most of the change notices included
a detailed description of each requirements change. However, some of them only provided
a summary of the changes and were not useful in determining which requirements had
been added, deleted, or modified. In those cases, the change reports from DOORS or the
text differencing technique were used as primary sources of data.
Effort data were collected through an examination of financial reports for each
project. An evaluation of the project’s Work Breakdown Structure (WBS) was performed
first to ensure the scope of the systems engineering team was consistent across the
projects and with systems engineering standards (ANSI/EIA, 1999). In some cases, older
projects utilized a slightly different version of the WBS and the effort data had to be
mapped across WBS elements in order to ensure consistency in the reported scope. Most
of the projects reported systems engineering effort in terms of hours and dollars.
However, in cases where projects reported only cost, the labor rate for that particular year
was utilized to translate dollars to effort hours. Effort data outside of the scope of the
systems engineering WBS or total project effort were not counted.
The base measures that were collected and the measurement actions are
summarized below:
Measures | Measurement Actions
# of requirements in the initial specification release | Count the number of applicable shalls/wills/shoulds/mays in the system specification
# of operational scenarios, interfaces, and algorithms | Follow the COSYSMO counting rules (Valerdi, 2006)
# of requirements changes over time | Count the # of requirements changes; count the # of requirements added, deleted, or modified
Actual systems engineering effort | Obtain the number of systems engineering effort-hours in actual expenditures

Table 12: Requirements Data Collection
5.3 Data Analysis
5.3.1 Linear Regression
COSYSMO can be described as a multiple linear regression model, where
systems engineering effort is the response and the size and cost drivers are the predictors
(Valerdi, 2005). Following the COSYSMO structure, ordinary least squares (OLS)
regression was used to develop the requirements volatility extension to the model. This
technique has been successfully utilized to derive cost estimation relationships from
historical project data in the development of software and systems engineering cost
models (Chulani, 1999; Baik, 2000; Boehm et al., 2000; Valerdi, 2005; Fortune, 2009;
Nguyen, 2010).
The OLS approach carries a number of assumptions. First, the response variable
is assumed to be normally distributed, which may not be the case in real-value data (Cook
and Weisberg, 1999). The preferred method for dealing with data that are not normally
distributed is the use of transformations to linearize the regression model. Logarithmic
transformations were applied to software engineering and systems engineering data in the
development of COCOMO II and COSYSMO. Inspection of the histograms of log
transformed effort and size data confirmed the suitability of this approach (Boehm et al,
2000; Valerdi, 2005). The linearized multiple regression equation used by COSYSMO
can be written as:
ln(SE_Hrs) = β_0 + β_1 ln(SIZE) + β_2 ln(EM_1) + β_3 ln(EM_2) + … + β_i ln(EM_i)   [Eq. 5-1]

Where the response, systems engineering effort, is expressed in terms of labor
hours; SIZE represents the number of system requirements, interfaces, operational
scenarios, and algorithms; EM_i is the i-th effort multiplier; and β_0 … β_i are the
regression coefficients estimated through the OLS technique (Cook and Weisberg, 1999;
Valerdi, 2005).
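The mechanics of fitting Eq. 5-1 can be sketched with NumPy. The data below are synthetic, generated noise-free from known coefficients so that the least-squares fit recovers them exactly; the real calibration used the collected project data:

```python
import numpy as np

# Synthetic projects generated from effort = 0.5 * SIZE^1.06 * EM1^0.8,
# so the fit should recover beta ~ [ln(0.5), 1.06, 0.8].
rng = np.random.default_rng(0)
size = rng.uniform(50, 500, 30)
em1 = rng.uniform(0.7, 1.4, 30)
effort = 0.5 * size ** 1.06 * em1 ** 0.8

# Design matrix for ln(effort) = b0 + b1*ln(SIZE) + b2*ln(EM1)
X = np.column_stack([np.ones_like(size), np.log(size), np.log(em1)])
beta, *_ = np.linalg.lstsq(X, np.log(effort), rcond=None)
print(np.round(beta, 3))  # approximately [-0.693, 1.06, 0.8]
```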
Second, linear regression is vulnerable to the presence of extreme cases or outliers
(Chulani et al, 1999). In most cases, outliers are caused by imprecision in the data
collection process and may be mitigated. This emphasizes the importance of establishing
consistent and reliable data collection procedures (Chulani et al., 1999). Third, the OLS
regression approach requires the number of data points to be large relative to the number
of model parameters. As experienced by prior researchers, data collection is often
challenging and time consuming due to the lack of consistent application of systems
engineering metrics and the sensitive nature of effort data (Valerdi, 2005; Fortune, 2009).
The rule of thumb is that for every parameter calibrated, there should be at least 5 data
points (Chulani, 1999). The OLS approach also assumes that the predictor variables (size
drivers and effort multipliers) are not highly correlated. In practice, because project data
are not typically collected through controlled experiments, some correlation between
predictors cannot be completely avoided. If this is the case, predictor variables with high
correlation may be aggregated (Boehm et al, 2000).
5.3.2 Bayesian Calibration
The Bayesian approach has been utilized in the development of parametric cost
models to combine expert judgment or a-priori knowledge with sample data to develop
an optimal a-posteriori model (Box and Tiao, 1973; Chulani, 1999; Boehm et al., 2000;
Valerdi, 2005; Nguyen, 2010). Bayesian calibration formalizes the incorporation of
prior information into the results and has been shown to yield greater prediction
accuracy over models built solely on sample data (Chulani, 1999). The a-posteriori
mean, b**, and variance, Var(b**), are defined as (Box and Tiao, 1973; Chulani, 1999):

b** = [(1/s²) X′X + H*]⁻¹ × [(1/s²) X′X b + H* b*]   [Eq. 5-2]

And

Var(b**) = [(1/s²) X′X + H*]⁻¹   [Eq. 5-3]

Where:
X = matrix of predictor variables
b = regression coefficient estimate determined from the sample data
s² = variance of the residual for the sample data
H* = Precision (inverse of variance) of prior information
b* = Mean of prior information
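Eqs. 5-2 and 5-3 translate directly into matrix code. The sketch below uses small invented inputs to show the expected limiting behavior: a very precise prior dominates the sample estimate, while a diffuse prior leaves it essentially unchanged:

```python
import numpy as np

def bayes_posterior(X, s2, b_sample, H_prior, b_prior):
    """Eqs. 5-2 and 5-3: a-posteriori mean b** and variance Var(b**)."""
    XtX = X.T @ X
    var_post = np.linalg.inv(XtX / s2 + H_prior)                   # Eq. 5-3
    b_post = var_post @ (XtX @ b_sample / s2 + H_prior @ b_prior)  # Eq. 5-2
    return b_post, var_post

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # toy predictor matrix
b_sample = np.array([1.0, 2.0])   # data-determined estimate
b_prior = np.array([0.0, 0.0])    # expert-judgment (a-priori) mean

tight, _ = bayes_posterior(X, 1.0, b_sample, 1e6 * np.eye(2), b_prior)
diffuse, _ = bayes_posterior(X, 1.0, b_sample, 1e-6 * np.eye(2), b_prior)
print(np.round(tight, 3), np.round(diffuse, 3))
```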
The a-priori data were derived from the expert judgment gathered through the
research workshops. The sample data were collected from 25 systems engineering
projects in the aerospace and defense application domain. Using the a-priori mean and
variance together with the data-determined mean and variance, the a-posteriori mean
and variance are calculated.
5.3.3 Model Evaluation
The requirements volatility model was tested using the coefficient of
determination (R²), predictive accuracy levels, the mean magnitude of relative errors
(MMRE), and the F-test, which are common measures of goodness used to evaluate
effort estimation models (Conte, Dunsmore, and Shen, 1986). The coefficient of
determination (R²) was calculated between the response, systems engineering effort, and
the predictor, systems engineering size. The MMRE is expressed as:
MMRE = (1/n) × Σ_{i=1}^{n} |y_i − ŷ_i| / y_i   [Eq. 5-4]

Where,
n = sample size of projects
y_i = actual value for the i-th project
ŷ_i = estimated value for the i-th project
The prediction accuracy at a particular level (l) is defined as:
PRED(l) = k / n   [Eq. 5-5]
Where,
k = number of projects in the set whose Magnitude of Relative Error is ≤ l
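Both measures are direct computations. The actual and estimated effort values below are invented to illustrate Eqs. 5-4 and 5-5:

```python
def mmre(actuals, estimates):
    """Eq. 5-4: mean magnitude of relative error."""
    return sum(abs(a - e) / a for a, e in zip(actuals, estimates)) / len(actuals)

def pred(actuals, estimates, level):
    """Eq. 5-5: PRED(l) = k/n, the fraction of projects whose
    magnitude of relative error is <= l."""
    k = sum(1 for a, e in zip(actuals, estimates) if abs(a - e) / a <= level)
    return k / len(actuals)

actual = [100, 200, 400, 800]     # hypothetical actual effort values
estimate = [110, 150, 390, 1000]  # hypothetical model estimates
print(mmre(actual, estimate), pred(actual, estimate, 0.30))
```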
COSYSMO has a prediction accuracy in most applications of PRED(.30) = 50%.
This means that the model estimates systems engineering effort within 30% of the
actuals, 50% of the time (Valerdi, 2005). The prediction accuracy levels and MMRE of
Academic COSYSMO and the requirements volatility model were compared under
several scenarios.
The F-test is typically used to compare regression models and determine whether
the null hypothesis is supported or refuted. In this case, the null hypothesis is that the
simpler model (without a requirements volatility factor) is correct. The Residual Sum of
Squares (RSS) of the null hypothesis is compared to the RSS of the alternative
hypothesis, or unrestricted model (with a requirements volatility factor). The difference
in degrees of freedom (df) between the two models is also taken into account as well as
the Mean Square Error (MSE) of the alternative hypothesis, as shown below (Cook and
Weisberg, 1999).
F = [(RSS_NH − RSS_AH) / (df_NH − df_AH)] / MSE_AH   [Eq. 5-6]

Where NH denotes the null-hypothesis (restricted) model and AH the
alternative-hypothesis (unrestricted) model.
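The F ratio of Eq. 5-6 is a short computation; the RSS and degree-of-freedom values below are invented for illustration (the resulting ratio would then be compared against the F distribution to obtain a p-value):

```python
def f_ratio(rss_null, df_null, rss_alt, df_alt):
    """Eq. 5-6: F = [(RSS_NH - RSS_AH) / (df_NH - df_AH)] / MSE_AH,
    with MSE_AH = RSS_AH / df_AH for the unrestricted model."""
    mse_alt = rss_alt / df_alt
    return (rss_null - rss_alt) / (df_null - df_alt) / mse_alt

# Hypothetical fit: adding the volatility factor drops RSS from 12.0
# to 8.0 at the cost of one parameter (df goes from 20 to 19).
print(round(f_ratio(rss_null=12.0, df_null=20, rss_alt=8.0, df_alt=19), 3))
```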
Values of the F ratio near one (1) support the null hypothesis, while larger values
counter it in favor of the alternative hypothesis. The probability that the difference
between the models happened by chance (P-value) is calculated based on the F-ratio
and degrees of freedom. The p-value is then compared to statistical significance levels
(α) associated with this test, which are typically in the 5% to 10% range (Cook and
Weisberg, 1999).
In addition to the model performance tests described above, cross-validation tests
and sensitivity analysis were performed to demonstrate the robustness of the model. The
value of a cost estimation model lies in its ability to accurately predict effort for projects
outside of the data set used to develop it. In order to validate the performance of the
requirements volatility model, a K-fold cross validation was performed. This method
divides the data into K subsets; one of the subsets is excluded from the data set from
which the model is built. The resulting model is used to predict effort for the excluded
cases. This method is repeated for all subsets and the mean magnitude of relative errors
(MMRE) and prediction accuracy PRED (l) across all K trials is calculated (Kohavi,
1995; Nguyen, 2010).
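A minimal sketch of this K-fold procedure is shown below; the one-parameter fit through the origin and the (size, effort) pairs are hypothetical stand-ins for the actual calibration method and project data:

```python
# Illustrative sketch: K-fold cross-validation of a toy effort model.
# The fit and data are hypothetical stand-ins, not the dissertation's.
import random

def fit(train):
    """Least-squares coefficient a for effort = a * size (through origin)."""
    return sum(s * e for s, e in train) / sum(s * s for s, _ in train)

def kfold_mmre(data, k=5, seed=0):
    """Mean magnitude of relative error across K held-out folds."""
    data = data[:]
    random.Random(seed).shuffle(data)
    folds = [data[i::k] for i in range(k)]
    errors = []
    for i in range(k):
        held_out = folds[i]
        train = [p for j, fold in enumerate(folds) if j != i for p in fold]
        a = fit(train)  # model built without the held-out subset
        errors += [abs(e - a * s) / e for s, e in held_out]
    return sum(errors) / len(errors)

# Hypothetical projects: effort roughly 2 person-months per requirement.
data = [(s, 2.0 * s * (1 + 0.05 * ((s % 3) - 1))) for s in range(10, 35)]
print(round(kfold_mmre(data), 3))
```

PRED(l) across the K trials could be accumulated in the same loop by counting held-out errors at or below l.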
Sensitivity analysis is a useful tool for evaluating the impact of the variability in
key model parameters on the results. High sensitivity to changes in the value of variables
may uncover deficiencies in the model. The first step in this process is to identify the
variables to which the model’s results may be most sensitive. The model results are then
re-calculated using a range of alternative values for those variables selected based on
their uncertainty. The sensitivity of the model to those changes is evaluated by
comparing the alternative results to those of the baseline model (Guidelines, 1997).
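The one-at-a-time procedure described above might be sketched as follows; the toy effort function, the 0.05 scale-factor constant, and the REVL value are illustrative placeholders, not the calibrated model:

```python
# Illustrative sketch: one-at-a-time sensitivity analysis. Re-evaluate the
# model output over alternative values of an uncertain parameter and
# compare against the baseline. The effort function is a placeholder.

def effort(size, scale_constant, revl):
    """Toy effort model: a diseconomy-of-scale exponent adjusted by REVL."""
    return size ** (1.0 + scale_constant * revl)

baseline = effort(500, 0.05, 0.20)
for alt in (0.03, 0.04, 0.05, 0.06, 0.07):
    delta = (effort(500, alt, 0.20) - baseline) / baseline
    print(f"scale constant {alt:.2f}: {delta:+.1%} vs. baseline")
```

Small relative deltas across the tested range would indicate that the results are robust to uncertainty in that parameter.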
5.4 Threats to Validity
The validity of the research approach must also be addressed in terms of various
criteria. These include: construct validity, internal validity, external validity and
reliability (Valerdi et al., 2010). Construct validity refers to the correspondence between
the operational constructs and measures and the systems engineering phenomena that are
being studied (Lipsey, 1990). With respect to the requirements volatility effect on
systems engineering effort, the operational construct and measures involve system
engineering size and cost drivers, engineering change metrics, and engineering effort.
Clear definitions of each of these parameters were utilized in order to avoid measuring an
unintended effect. In addition, the hierarchical level of requirements to be counted was
clearly defined by using the same guidelines developed for Academic COSYSMO.
Different interpretations of the proper timing for tracking requirements changes as
volatility posed another potential threat to construct validity. This threat was mitigated by
discussions with the organizations providing the data and by providing clear instructions
that only changes after the initial release of the specifications were to be counted. This
assumption was supported by feedback from industry participants in workshop
discussions.
Internal validity demonstrates a causal relationship between variables. Systems
engineering research in general poses a challenge to internal validity because an
engineering organization is influenced by many factors and all the relevant variables
cannot be fully controlled as they would be in a true experiment (Valerdi, 2005). An
approach to mitigate the threat to internal validity is the use of mixed-methods
techniques where the empirical data are enhanced by discussions and interviews with
subject matter experts and case studies of systems engineering organizations. Six
different research workshops were held to identify and discuss the relative significance of
variables related to requirements volatility with experienced software and systems
engineers.
External validity refers to the ability to apply the research results to other contexts.
The researcher needs to define the domain that the system engineering phenomena can be
applied to. The external validity of this study will be limited by the application domains
of the engineering organizations that contributed data and the background of the industry
experts that participated in the research. The strategy for generalizing the research is to
involve a variety of experts from industry, academia, and governmental agencies.
Although historical project data were collected from only one organization, the systems
represented were diverse in terms of their complexity, scope, and hardware/software
content. In addition, the projects included both government and commercial applications.
Nevertheless, while a model calibrated to a single organization’s data can be useful, it is
less than fully definitive for other organizations. The original COSYSMO research
showed that the calibration coefficient varied depending on the organization and
predictive accuracy improved when the model was calibrated using data specific to the
individual organization that intends to use it (Valerdi, 2005).
The use of experts also carries a threat to validity because it is often difficult to
check the qualifications of the respondents to ensure they truly have expertise in a
particular field. In order to mitigate this threat, the data collection instruments included a
background section that asked the participants to provide information regarding their
industry experience and professional background. Furthermore, the workshop settings
were selected because of the high level of experience of the attendees in the disciplines of
systems engineering, software engineering, measurement, and cost estimation. The
participants had, on average, over 20 years of experience and represented industry,
government, and academia (Valerdi, 2005).
Another threat to validity is the reliability of the measurements. In order to ensure
the reliable measurement of requirements volatility data, several steps were followed.
First, standard counting rules and definitions were established. Administrative changes to
requirements such as corrections of typos and clarifications that do not affect the
technical intent of the requirement were excluded. Second, a consistent definition of
when to start counting requirements changes as volatility was utilized: Only changes in
requirements after the initial release of the specifications were counted. Third, the
specific methods and tools utilized by the organization to count the number and type of
requirements changes were reviewed to ensure they were being applied consistently
across projects. Once the metrics were retrieved, a sanity check was performed with
experts within the organization to ensure the results made sense based on their
experience.
Fourth, the data collection instruments were developed using best practices in
survey design. Expert and historical project data were collected using surveys and
questionnaires. The data collection instruments included: knowledge questions to
establish the credibility of the respondent, consistent measurement scales, clear and
unambiguous questions, logical order of questions - building from easy to difficult,
adequate level of difficulty of questions, concise and efficient questionnaire design, and
sufficient time to fill-out the questionnaire (Babbie, 1990, Sudman and Bradburn, 2004;
Valerdi, 2005).
Chapter 6 – Results
6.1 Behavioral Analysis and Identification of Relevant Variables
The behavior and relative significance of the factors influencing requirements
volatility and its impact on effort were investigated through surveys conducted during
research workshops. The objective of this qualitative phase of the research was to
identify and evaluate potential model parameters. As previously discussed, a set of
observations regarding the causes and effects of requirements volatility were identified
based on a review of existing literature. First, requirements volatility is caused by an
identifiable set of technical, organizational, and project factors. Second, the level of
requirements volatility changes throughout the life cycle of the system. Third, the
changes in requirements have been found to result in a net increase in the functional size
of the project and additional rework, which in turn drive an increase in engineering effort.
Fourth, the effort impact of requirements volatility increases the later the change occurs
in the system life cycle. Fifth, the effort impact of a requirements change varies
depending on the type of change (added, deleted, and modified). These observations
formed the basis for the survey questions and workshop discussions.
In order to gather additional data on the first observation, participants of
workshops #1, #2, and #5 were asked to state their level of agreement with a set of
postulated causes of requirements volatility frequently cited in the literature. The question
provided a 5-point Likert scale ranging from “strongly disagree” to “strongly agree.” A
total of 38 responses were received from subject matter experts representing major
engineering contractors, academic institutions, and the Armed Services. The
organizations represented included: The Aerospace Corporation, Northrop Grumman
Corporation, Lockheed Martin, The Boeing Company, Raytheon, United Launch
Alliance, the US Army, US Navy, and US Air Force, among others. On average, the
survey participants had 24 years of engineering experience. The list of postulated causes
included in the question and the percentage breakdown of responses for each of the
5 points in the Likert scale are depicted in Figure 8.
Figure 8: Potential Causes of Requirements Volatility (N = 38)
In order to rank the causes of volatility by level of agreement, each response was
given a numerical value. “Strongly agree” was given a value of +2; “agree” a value of
+1; “neither agree nor disagree” received a value of 0; “disagree” was assigned a value
of -1; and “strongly disagree” was given a value of -2. The average score for each
proposed cause was calculated. The cause that received the highest level of agreement
with a score of 1.7 was “poor initial understanding of the system and customer needs.”
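The scoring scheme just described can be sketched as follows; the response tallies are hypothetical, not the actual workshop data:

```python
# Illustrative sketch of the scoring scheme: map each Likert response to a
# numeric value (+2 to -2) and average per postulated cause. The tallies
# below are hypothetical, not the workshop responses.

SCORE = {"strongly agree": 2, "agree": 1, "neither agree nor disagree": 0,
         "disagree": -1, "strongly disagree": -2}

def average_score(responses):
    """Mean numeric agreement score for one postulated cause."""
    return sum(SCORE[r] for r in responses) / len(responses)

# Hypothetical tallies for two causes (38 respondents each).
poor_understanding = ["strongly agree"] * 27 + ["agree"] * 11
cots_changes = (["agree"] * 10 + ["neither agree nor disagree"] * 12
                + ["disagree"] * 16)
print(round(average_score(poor_understanding), 2))  # near the top of the scale
print(round(average_score(cots_changes), 2))        # near zero
```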
[Figure 8 is a stacked bar chart showing the percentage of responses, from “Strongly Disagree” to “Strongly Agree,” for each postulated cause: poor initial understanding of the system and customer needs; lack of SE process maturity; inexperienced staff; customer-requested scope change; immature technology; changes in external environment (political/business climate); changes in COTS products; changes in co-dependent systems; and internal factors such as changes in policies and organizational structure.]
Two of the proposed causes received average scores that were near zero or slightly
negative; these were: “changes in COTS products” (-0.02) and “changes in policies and
organizational structure” (0.09). Excluding the 2 lowest-rated items, a causal diagram
was developed in order to provide a graphical representation of the relationship between
these project factors and requirements volatility. As shown in Figure 9, the causes of
requirements volatility are labeled from 1 to 7, indicating the ordinal ranking of
subject-matter expert agreement.
Figure 9: Requirements Volatility Causal Diagram
Causal diagrams help visualize theories and interconnections between variables;
arrows are drawn from the independent to dependent variables and the nature of the
relationships are represented by labeling the arrows with a positive or negative sign. A
positive sign indicates that the variables change in the same direction while a negative
sign indicates that the variables change in opposite direction (Creswell, 2009). The
presence of a plus or minus sign (+/-) indicates that the variables may change in either
direction. This technique was inspired by the methods used in system dynamics
(Forrester, 1961; Blalock, 1985) as applied to software process modeling (Abdel-Hamid
and Madnick, 1989; Madachy, 2008; Ferreira et al., 2009). The causal diagram depicts
requirements volatility as the dependent variable and the project factors influencing
volatility as independent variables. The diagram utilized a similar concept as the causal
diagram developed by Ferreira et al. (2009) which depicted cause and effect
relationships between software development factors and requirements volatility.
The causal diagram was useful in the evaluation of the qualitative behavior of the
variables. For example, as indicated by the positive arrow, an increase in changes in co-
dependent systems results in a higher level of requirements volatility. Conversely, an
increase in technology maturity would result in less volatility. These variables and
relationships are the basis for the project characteristics used to rate the expected level of
requirements volatility for a given project (see Table 10).
Having explored the potential causes of volatility, a similar methodology was
utilized to evaluate the qualitative impact of requirements volatility on project
performance and systems engineering effort. As previously discussed, the observations
from the literature were drawn primarily from studies of software development projects.
In order to gather data from a broader base of application domains, the participants of
Workshops # 1 and # 2 were asked to rate the impact of requirements volatility on rework
and the functional size of the project in terms of the number of system requirements. A
total of 22 responses were received from subject matter experts representing 10 major
engineering contractors, academic research institutions, and the U.S. Armed Services.
On average, the survey participants had 23 years of engineering experience. The
questions used a 5-point Likert (1932) scale defined as follows: “1” large decrease, “2”
moderate decrease, “3” no impact, “4” moderate increase, and “5” large increase; a
summary of the results is shown in Figure 10.
Figure 10: Impact of Volatility on Rework and Project Size
More than half of the respondents (55%) indicated that they expect requirements
volatility to result in a large increase in rework. The rest of the participants (45%)
believed that volatility would result in moderate increase in rework. With respect to the
impact on the functional size of the project, 87% of respondents expected a large to
moderate increase in the total number of requirements, while 9% expected no impact, and
4% stated that there would be a moderate decrease in the number of requirements. These
results support the findings of prior research but also indicate that in some cases the
functional size of the project may decrease due to the deletion of requirements.
Based on the survey results and the observations from the literature, another causal
diagram was developed. In this case, requirements volatility is depicted as the
independent variable and the number of system requirements and level of rework as
dependent variables. In turn, these variables influence the amount of systems engineering
effort, which is then linked to project cost and schedule. It is important to note that prior
studies on the value of systems engineering indicate that an increase in systems
engineering effort may in fact reduce the total program cost and schedule (Honour, 2004,
Frederick and Sauser, 2007). Hence, as shown in Figure 11, the relationship between
these variables is depicted as either positive or negative. Similarly, the relationship
between requirements volatility and systems engineering effort is labeled with a +/- sign
indicating that some requirements changes may reduce work scope and result in less
effort.
Figure 11: Impacts of Volatility Causal Diagram
The variables labeled as “project schedule” and “project cost” are shaded in grey
to indicate that they are outside of the scope of the research. The focus of this study is
on the factors influencing requirements volatility and systems engineering effort, not
total program effort or cost.
The next step in this phase of the research was to gather additional information
on observation # 2: The level of requirements volatility is a function of the system life cycle
phase. The participants of workshops #1 and #2 were asked to estimate the expected
percentage change in the baseline set of requirements during each of the four life cycle
phases covered by COSYSMO: (1) Conceptualize, (2) Development, (3) Operational
Test and Evaluation, and (4) Transition to Operation. They were given a set of volatility
ranges to select from: <5%, 5-10%, 10-20%, >20% and an option for “I don’t know.”
The results of the survey are depicted in Figure 12. The y-axis represents the number
of responses and the x-axis represents the life cycle phases.
Figure 12: Expected Level of Requirements Volatility per Life cycle Phase (N = 22)
Although the results were consistent with the observation that requirements
volatility varies as a function of life cycle phase, the multiple choice selections limited
the responses to the pre-determined ranges. In addition, during the workshop discussions
some participants expressed the opinion that changes during the Conceptualize phase
should not be considered “volatility” because they are part of the typical negotiations and
trades that occur at the early stages of the system development. In order to obtain
additional insights, a second type of survey was administered at the 2010 Practical
Software and Systems Measurement (PSM) Users Group Conference in New Orleans,
LA (workshop #3). Data were provided by nine participants (N = 9) representing a
diverse set of organizations including: Distributed Management, Northrop Grumman,
Lockheed Martin, Ericsson España, Samsung SDS, and the U.S. Navy. This time, the
participants were given a blank graph with a Cartesian coordinate system where the x-
axis represented the life cycle phases and y-axis represented requirements volatility.
They were then asked to draw the expected level of volatility throughout the life cycle for
a typical system within their application domain. The survey instructions are reproduced
in Appendix D. The purpose of the freeform graphs was to allow flexibility in the
responses. In addition, the respondents were encouraged to annotate their graphs with
their assumptions and comments which were later discussed in a group setting. The
curves drawn by the respondents were electronically reproduced and superimposed onto
one graph as shown in Figure 13.
The general trend of the curves is consistent with the prior survey results that
indicate that requirements volatility decreases as the system matures. However, the
graphs showed some additional characteristics. Some of the graphs exhibited a bell shape
that peaks during the Conceptualize, Development, or Operational Test and Evaluation
phases of the life cycle. This is in contrast to the prior survey results which implied a
decrease in volatility as a function of time from a peak during the Conceptualize phase.
Figure 13: Requirements Volatility Life Cycle Profile (N = 9)
In addition, the question of when to start counting requirements changes as
volatility was discussed at length. Nearly half of the participants (44%) indicated in
their graphs that requirement changes during the Conceptualize phase are part of the
typical development process and are already accounted for in effort estimates. Their
comments emphasized the importance of distinguishing between planned iterations and
requirements changes not accounted for in the original plan. In fact, some degree of
volatility during the early stages of a project may result in a better outcome for
the contractor and the customer. However, the conclusion reached through group
discussion is that once the system requirements are agreed upon by the stakeholders and
a baseline is established, subsequent changes may impact systems engineering effort
and should be tracked as volatility. This guideline was used during the data collection
phase of the research.
The last step in the exploratory phase of the research was to evaluate observation
# 4: The effort impact of requirements volatility increases the later the change occurs in
the system life cycle. Inspired by the “ease of change curve” captured by Blanchard and
Fabrycky (1998) in their depiction of “Cost Commitment on Projects,” the participants
of workshop # 3 were asked to draw an “ease of change” profile across the four life
cycle phases previously discussed. The results showed general agreement with the
observation that expects the cost of making a change to increase as a function of time.
In this case, an “ease of change” y-axis scale was provided ranging from a value of “1,”
indicating that a requirements change would not have a cost impact on the system, to a
value of “0” indicating a prohibitive cost impact due to a change. The cost impact of
implementing a requirements change was defined as being inversely proportional to the
“ease of change” factor. For example, a value of 0.5 indicates that a requirement would
cost twice as much if introduced later in the life cycle as compared to the cost of the
same requirement if it had been part of the original baseline. As in the prior diagram,
the x-axis scale represented the system life cycle phases covered by COSYSMO. An
“ease of change” profile that is representative of the workshop participants’ input is
depicted in Figure 14. All of the responses indicated an expectation of a cost penalty
due to requirements changes that increases as a function of time. It is important to note
that this exercise was not intended to quantify the impact on effort but to study the
qualitative behavior of project characteristics.
Figure 14: Representative “Ease of Change” Profile from Workshop #3
The exploratory surveys validated the observations from the literature and provided
valuable insights into the behavior of project characteristics relevant to requirements
volatility. The following section will discuss the quantitative expert judgment of the
effort penalty associated with late changes and the expected level of volatility throughout
the system life cycle.
6.2 Expert Assessment of Model Parameters
Expert assessments of the model parameters were collected through three research
workshops from experienced systems engineers, software engineers, and project
managers. The model parameters for which data were collected are: the requirements
volatility weighting factors (w_l); the percentage of system requirements expected to
change throughout the system life cycle (REVL); and the percentage of the total
requirements changes per life cycle phase (Θ_l).
Based on observation #4 (the effort impact of requirements volatility increases
the later the change occurs in the system life cycle) and observation #3 (requirements
volatility leads to an increase in project size), participants of two conference workshops
(#4 and #5) were asked to estimate the effect that changing a requirement has on systems
engineering effort as a function of the life cycle phase during which the change occurs.
The workshops were attended by a total of 27 participants from 12 different engineering
organizations and academic institutions, which included The Boeing Company, the
Aerospace Corporation, Lockheed Martin, TI Metricas, Rolls Royce, Softstar, MIT,
Texas Tech, and the US Air Force. The participants had an average of 23 years of
experience in a variety of industries but with specific emphasis on aerospace and defense.
The participants were asked to provide their assessment in terms of a numerical
effort multiplier or weighting factor. For example, a rating of “1” indicated no additional
effort is required by the change. A value of “2” indicated the change would require
twice the effort of the original requirement; while a value less than one denoted an effort
savings as a result of the change. The mean and standard deviation of the weighting
factors per life cycle phase provided by the workshop participants are shown in Figure
15. Changes during the Conceptualize phase were rated with a weighting factor of 1.1 (σ
= 0.3), which indicates minimal impact on the planned systems engineering effort.
However, the effort multiplier due to a requirements change increases in subsequent life
cycle phases. The expected weighting factors for changes made during the development
phase and the operational test and evaluation phase were 2.2 (σ = 1.2) and 4.0 (σ = 2.4)
respectively. The average effort multiplier increases to 5.9 (σ = 4.1) in the transition to
operations phase. The diversity in the participants’ backgrounds and application domains
is evident in the standard deviation of the responses. However, the data show a clear
expectation that changing a requirement late in the life cycle has a greater impact on
systems engineering effort.
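One way to see the practical effect of these phase-dependent multipliers is to express a set of requirements changes in effort-equivalent terms; in the sketch below the change counts are hypothetical, while the weights are the workshop means reported above:

```python
# Illustrative sketch: applying the phase-dependent effort multipliers
# (workshop means: 1.1, 2.2, 4.0, 5.9) to a hypothetical set of
# requirements changes, expressing them as effort-equivalent counts.

WEIGHT = {"conceptualize": 1.1, "development": 2.2,
          "operational test & evaluation": 4.0, "transition to operation": 5.9}

def effort_equivalent(changes_per_phase):
    """Sum of changed requirements, each scaled by its phase multiplier."""
    return sum(WEIGHT[phase] * n for phase, n in changes_per_phase.items())

# Hypothetical change counts: 35 raw changes, weighted by when they occur.
changes = {"conceptualize": 20, "development": 10,
           "operational test & evaluation": 4, "transition to operation": 1}
print(effort_equivalent(changes))  # substantially more than the raw count
```

Even with most changes occurring early, the weighted total exceeds the raw count of 35, reflecting the penalty for late changes.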
Figure 15: Life Cycle Effort Penalty due to Volatility
A two-round Wideband Delphi survey (n = 9) was conducted during the 2011
Practical Software and Systems Measurement workshop in Mystic, CT to further explore
the life cycle impacts of requirements volatility. Nine survey participants were also
asked to estimate the effort impact of requirements volatility as a function of the type of
change (added, deleted, or modified – observation # 5). The survey participants had, on
average, over 20 years of experience in a variety of industries and engineering
organizations, which included: BAE, Raytheon, TI Metricas Ltda., IBM, and the
Australian Department of Defense, among others. The average values and standard
deviation of the responses per change category were plotted as shown in Figure 16. The
data showed that the effort penalty of adding a requirement follows a pattern similar to
that of prior findings: minimal impact during the Conceptualize phase and progressively
larger effort penalty as the system progresses through the life cycle. The modification of
requirements also follows a similar profile but its expected impact on effort is less than
that of adding a requirement in the same life cycle phase.
It is interesting to note that, in some cases, deleted requirements were given a
penalty factor of < 1 during the Conceptualize phase. However, as the system matures,
the effort penalty associated with the deletion of a requirement increases to above 1.
This means that deleting a requirement is actually expected to result in an increase in
effort and cost. Discussions during the workshop indicated that deleting a requirement
after the development phase is expected to have a collateral impact on other engineering
products, resulting in a net increase in effort.
Figure 16: Systems Engineering Effort Weighting Factor per Change Category
As mentioned during the model development discussion, incorporating the
weighting factors for added, modified, and deleted requirements for each life cycle phase
(12 parameters) into the calculations did not result in an improvement in prediction
accuracy as compared to using the weighting factors per life cycle phase shown in Figure
15 (4 parameters). For purposes of model parsimony, the weighting factors per type of
change were not incorporated into the final model.
In addition to the volatility weighting factors, the requirements volatility Scale
Factor (SF) includes the parameters REVL, the percentage of the baseline requirements
expected to change throughout the system life cycle, and the breakdown of those changes
per life cycle phase (Θ_l). Expert judgment on the value of these parameters was gathered
through workshops #4, #5 and #6. The participants of those workshops were asked to
estimate: 1) The total percentage of baseline requirements expected to change over the
Conceptualize, Development, Operational Test and Evaluation, and Transition to
Operation life cycle phases; and 2) the level of requirements volatility partitioned per life
cycle phase for a typical project from their application domain. Workshop #6 utilized a
two-round Wideband Delphi survey (n = 9), while workshops #4 and #5 collected expert
ratings in one round. In total, 36 responses were gathered from participants that had, on
average, over 20 years of experience in a variety of industries and engineering
organizations, which included: BAE, Raytheon, TI Metricas Ltda., IBM, the
Australian Department of Defense, The Boeing Company, the Aerospace Corporation,
Lockheed Martin, Rolls Royce, Softstar, MIT, Texas Tech, and the US Air Force, among
others.
The expert-determined weighting factors per life cycle phase (w_l), the expected
level of requirements volatility (REVL), and the percentage of total requirements changes
per life cycle phase (Θ_l) form the basis of the a-priori model. The following sections
discuss the analysis of project data and the Bayesian calibration that results in the
a-posteriori model.
6.3 Model Selection
One of the first steps in the development of the model was to evaluate the
incremental benefit of including additional model parameters. The performance of
different model alternatives was compared using the expert-determined volatility
weighting factors and project sample data. Systems engineering effort, COSYSMO
size driver data (number of requirements, operational scenarios, interfaces, and
algorithms), and cost driver ratings were collected for 25 projects from a large
engineering organization in the aerospace and defense application domain.
The prediction accuracy performance of Academic COSYSMO, in terms of
PRED level and mean magnitude of relative errors (MMRE), was compared to the
model alternatives defined in Table 9:
a) The SIZE diseconomies of scale exponent with a Scale Factor (SF) adjusted by
the percentage of the baseline number of requirements expected to change
(REVL)
b) The SIZE diseconomies of scale exponent with a Scale Factor (SF) adjusted by
REVL multiplied by a requirements volatility weighting factor that accounts for
additional effort as a function of life cycle phase.
c) The SIZE diseconomies of scale exponent with a Scale Factor (SF) adjusted by
REVL multiplied by a requirements volatility weighting factor that accounts for
additional effort as a function of change type (added, modified, and deleted) and
life cycle phase.
The results, captured in Table 13, indicate that a Scale Factor adjusted by
REVL alone offers only a marginal improvement over the performance of
Academic COSYSMO. The performance of the model improves when including the
life cycle effects of the timing of the change. However, the incorporation of the type of
change (added, modified, deleted) in addition to the life cycle effects does not result in
an incremental improvement in performance. For purposes of parsimony, model “b”
was selected for further development and the Bayesian calibration.
#  Case                              PRED(15)  PRED(20)  PRED(30)  MMRE
0  Baseline                          48%       52%       80%       21%
a  SF using REVL only                48%       56%       84%       20%
b  SF using REVL and w_l             72%       80%       88%       16%
c  SF using REVL and w_{al, ml, dl}  68%       80%       88%       16%
Table 13: Comparison of Model Alternatives
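The PRED(x) and MMRE accuracy measures used throughout these comparisons are standard in the cost-model literature; a minimal Python sketch is shown below (the function names are illustrative, not part of COSYSMO):

```python
def mmre(actuals, estimates):
    """Mean magnitude of relative error: mean of |actual - estimate| / actual."""
    mres = [abs(a - e) / a for a, e in zip(actuals, estimates)]
    return sum(mres) / len(mres)

def pred(actuals, estimates, level):
    """PRED(level): fraction of estimates whose relative error is within `level`."""
    mres = [abs(a - e) / a for a, e in zip(actuals, estimates)]
    return sum(1 for m in mres if m <= level) / len(mres)
```

For example, PRED(.30) = 80% means that 80% of the estimates fell within 30% of the actual effort.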
As previously discussed, the initial value of the scale factor constant was selected
as 0.05 based on the Ada COCOMO requirements volatility scale factor rating (Boehm
and Royce, 1989). Alternative values to the 0.05 constant were tested in the range of
0.03 to 0.07 in increments of 0.01. Predictive accuracy degraded slightly with values
less than 0.04 and greater than 0.06. There was no appreciable difference in prediction
accuracy between constant values of 0.04 and 0.06. Consequently, the scale factor
constant of 0.05 was retained. The results of this analysis are shown below.
Model                    PRED(15)  PRED(20)  PRED(30)  MMRE
SF using 0.03 constant   60%       80%       88%       17%
SF using 0.04 constant   76%       80%       88%       16%
SF using 0.05 constant   72%       80%       88%       16%
SF using 0.06 constant   72%       80%       88%       16%
SF using 0.07 constant   64%       76%       88%       17%
Table 14: Alternative Scale Factor Constants
6.4 Data Analysis
As previously discussed, COSYSMO can be described as a linear regression
model, where systems engineering effort is the response and the size and cost drivers
are the predictors (Valerdi, 2005). Following the analysis approach used in the
development of COCOMO II and COSYSMO, ordinary least squares (OLS) regression
was used to derive cost estimation relationships from the historical project data
(Chulani, 1999; Baik, 2000; Boehm et al., 2000; Valerdi, 2005; Fortune, 2009; Nguyen,
2010).
The OLS approach assumes the response variable is normally distributed and is
vulnerable to the presence of extreme values (Cook and Weisberg, 1999). A review of
the histogram of real-value effort and size data revealed the need for logarithmic
transformations. The data were log transformed and the resulting linearized effort and
size histograms were compared to the normal distribution. An inspection of the
histograms confirmed the transformed distribution is approximately normal. In order to
test this assumption of normality, the z-score of the data’s skewness and kurtosis was
calculated. Kurtosis is a measure of how peaked or flat the data are in comparison to the
normal distribution. A flat distribution has negative kurtosis while a peaked distribution
has positive kurtosis. Skewness describes the shape of the probability distribution in
terms of its horizontal symmetry. If the data are concentrated on the left side of the
distribution curve, the skewness is positive. Conversely, if the concentration is on the
right side of the curve, the skewness is considered negative. The z-score is a statistical
test that examines how closely the sample data approximates the normal distribution.
The z-scores for an approximately normal distribution with confidence α = 0.05 must
fall between -1.96 and +1.96 (Corder and Foreman, 2009). Skewness and kurtosis are
defined as:
S = [ Σ_{i=1}^{n} (x_i − μ)³ ] / [ (n − 1) · σ³ ]          [Eq. 6-1]

K = [ Σ_{i=1}^{n} (x_i − μ)⁴ ] / [ (n − 1) · σ⁴ ]          [Eq. 6-2]
Where,
x_i = the i-th data point
μ = sample mean
σ = sample standard deviation
n = number of data points
The z-score for skewness and kurtosis is calculated through the following:
Z_s = S / SES          [Eq. 6-3]

Z_k = G / SEK          [Eq. 6-4]
Where SES and SEK are the skewness and kurtosis standard errors, and G is the
excess kurtosis, K − 3. The skewness and kurtosis z-scores were 0.80 and 0.86 for the
effort data, respectively, and 1.0 and 0.92 for the size data. Both sets of z-scores fell
within the critical range of values at the 95% confidence level, which means the
assumption of normality failed to be rejected (Corder and Foreman, 2009). In addition,
the effort and size data were inspected for outliers. A productivity measure in terms of
SIZE divided by systems engineering effort was used as a test for extreme values
(Valerdi, 2005). The productivity ratios for all 25 projects fell within 3 standard
deviations of the mean, which is a typical threshold used to detect outliers.
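Equations 6-1 through 6-4 can be sketched in Python as follows. Two assumptions are made that are not stated in the text: excess kurtosis is taken as K − 3, and the commonly used approximations sqrt(6/n) and sqrt(24/n) stand in for SES and SEK:

```python
import math

def skewness(x):
    """Eq. 6-1: third central moment scaled by (n - 1) * sigma^3."""
    n, mu = len(x), sum(x) / len(x)
    sigma = math.sqrt(sum((v - mu) ** 2 for v in x) / (n - 1))  # sample std dev
    return sum((v - mu) ** 3 for v in x) / ((n - 1) * sigma ** 3)

def kurtosis(x):
    """Eq. 6-2: fourth central moment scaled by (n - 1) * sigma^4."""
    n, mu = len(x), sum(x) / len(x)
    sigma = math.sqrt(sum((v - mu) ** 2 for v in x) / (n - 1))
    return sum((v - mu) ** 4 for v in x) / ((n - 1) * sigma ** 4)

def normality_z_scores(x):
    """Eqs. 6-3 and 6-4; |z| < 1.96 fails to reject normality at alpha = 0.05."""
    n = len(x)
    ses = math.sqrt(6 / n)   # assumed approximation for the skewness standard error
    sek = math.sqrt(24 / n)  # assumed approximation for the kurtosis standard error
    return skewness(x) / ses, (kurtosis(x) - 3) / sek
```

A perfectly symmetric sample yields zero skewness, so its skewness z-score is also zero.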
The linearized COSYSMO equation that incorporates the volatility scale factor
as a predictor is shown below.
ln(SE_hrs) = ln A + 1.06 · ln(SIZE) + β_1 · SF · ln(SIZE) + Σ_i ln(EM_i)          [Eq. 6-5]
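Exponentiating Eq. 6-5 recovers the multiplicative effort form SE_hrs = A · SIZE^(1.06 + β_1·SF) · Π EM_i. A sketch of that form is shown below; the function and parameter names are illustrative, not part of the model definition:

```python
import math

def se_effort_hours(A, size, sf, effort_multipliers, beta1=1.0, e=1.06):
    """Multiplicative form of Eq. 6-5:
    SE_hrs = A * SIZE**(E + beta1 * SF) * product(EM_i)."""
    em_product = math.prod(effort_multipliers)
    return A * size ** (e + beta1 * sf) * em_product
```

A positive scale factor SF raises the size exponent above the nominal 1.06, so estimated effort grows with the expected requirements volatility.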
As previously stated, one of the guiding principles of the modeling approach was
to extend the capabilities of COSYSMO while maintaining its original form and
structure. Consequently, the diseconomy of scale exponent (E =1.06) and the number
and scope of the effort multipliers were unchanged. The calibration constant “A” was
calculated using the SystemStar software (version 2.01) without the new requirements
volatility predictor in order to reflect the context and productivity of the organization
providing the data. The regression coefficient (β_1) was estimated only for the
requirements volatility scale factor. Equation 6-5 is re-written as:
ln(SE_hrs) − ln A − 1.06 · ln(SIZE) − Σ_i ln(EM_i) = β_1 · SF · ln(SIZE)          [Eq. 6-6]
The term SF is the un-weighted Scale Factor defined by equation 4-5. The
regression coefficient β_1 and its standard error were calculated using the Excel data
analysis tool kit. The results are shown in Table 15.
           Coefficient  Standard Error  t Stat  P-value
Predictor  1.961        0.851           2.304   0.03
Table 15: Regression Analysis Results
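Because Eq. 6-6 isolates a single predictor with no intercept, β_1 can be estimated by no-intercept OLS, β = Σxy / Σx², where y is the adjusted log-effort and x = SF·ln(SIZE). A minimal sketch of that calculation is shown below (not the Excel tool kit actually used for the analysis):

```python
import math

def ols_no_intercept(x, y):
    """No-intercept OLS for Eq. 6-6: beta = sum(x*y) / sum(x^2),
    with the standard error computed from the residual variance
    (n - 1 degrees of freedom for a single estimated coefficient)."""
    n = len(x)
    sxx = sum(xi ** 2 for xi in x)
    beta = sum(xi * yi for xi, yi in zip(x, y)) / sxx
    residuals = [yi - beta * xi for xi, yi in zip(x, y)]
    s2 = sum(r ** 2 for r in residuals) / (n - 1)  # residual variance
    se = math.sqrt(s2 / sxx)
    return beta, se
```

The t statistic then follows directly as beta / se.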
The t statistic is the ratio between the estimated coefficient and the standard
error. The t-value of 2.304 exceeds the commonly used threshold of approximately 2.0,
indicating statistical significance for the predictor variable. A P-value less than 0.05 is
an indication of statistical significance. Using the expert-judgment (a-priori) mean and
variance and the data-determined mean and variance, the a-posteriori mean and variance
were calculated using the Bayesian calibration approach. The coefficient β_1 is
equivalent to the aggregate weighting factor (w_v1) that is calculated using equation
4-7. The a-priori mean and standard deviation of
the aggregate weighting factor were calculated to be 2.09 (σ = 1.3). Using the Bayesian
calibration equations 5-2 and 5-3, the a-posteriori mean and variance were calculated to
be 2.01 and 0.47, respectively. Since the distribution of requirements changes across
the life cycle phases (Θ_l) is known for the sample data, and assuming a constant ratio
between life cycle weighting factors, the Bayesian-calibrated volatility weighting factors
per life cycle phase can be derived and are included in Table 16.
Life Cycle Phase   Conceptualize  Development  Operational Test & Eval.  Transition to Operation
Weighting Factor   1.0            2.1          3.8                       5.6
Table 16: Bayesian-Calibrated (a-posteriori) Volatility Weighting Factors
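Equations 5-2 and 5-3 are not reproduced in this chapter; assuming they take the standard precision-weighted form, the update below reproduces the reported a-posteriori values to within rounding:

```python
def bayesian_update(prior_mean, prior_var, data_mean, data_var):
    """Precision-weighted combination of an a-priori (expert) estimate with a
    data-determined (regression) estimate; the smaller variance gets more weight."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / data_var)
    post_mean = post_var * (prior_mean / prior_var + data_mean / data_var)
    return post_mean, post_var

# a-priori: mean 2.09, sigma 1.3; data: coefficient 1.961, standard error 0.851
mean, var = bayesian_update(2.09, 1.3 ** 2, 1.961, 0.851 ** 2)
```

With these inputs the update yields a mean near 2.0 and a variance near 0.51, close to the reported 2.01 and 0.47; the small differences suggest minor variations in the exact form or rounding of equations 5-2 and 5-3.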
Having determined the a-posteriori weighting factors, the next step in the model
development was to determine the range of expected requirements volatility and the
distribution of those changes across the life cycle phases. As previously discussed,
expert judgment on the expected level of requirements volatility (REVL), and the
percentage of total requirements changes per life cycle phase (Θ_l) was gathered through
the surveys conducted during workshops #4, #5, and #6. The diversity of engineering
companies represented in the workshops, as shown in Appendix B, increases the
external validity of the model. Therefore, the responses from the expert surveys (N = 36)
were combined with the requirements volatility data collected from the sample projects
(N = 25) to develop a composite requirements volatility life cycle profile. The results
indicate that, on average, the percentage of the baseline system requirements expected to
change over the four life cycle phases covered by COSYSMO is 22% (σ = 16%). The
distribution of those changes per life cycle phase (Θ_l) is captured in Table 17.
Life Cycle Phase                         Conceptualize  Development  Operational Test & Eval.  Transition to Operation
% contribution of changes (sum = 100%)   30%            46%          18%                       6%
Table 17: Breakdown of Requirements Volatility per Life Cycle Phase
A requirements volatility life cycle profile was calculated by multiplying the
average volatility over the four life cycle phases (22%) by the percentage contribution of
each phase. A range of expected REVL rating levels (Very Low, Low, Moderate, High,
and Very High) was derived using the mean and standard deviation of the data. A
“Moderate” rating is equivalent to the arithmetic mean of the data. “Low” or “High”
ratings subtract or add one standard deviation to the mean. A “Very High” rating adds
two standard deviations to the mean, while a “Very Low” rating results in no volatility,
or a REVL of 0. The requirements volatility values per rating level across the life cycle
phases covered by COSYSMO are captured in Table 18.
Cumulative      Rating            Volatility per Life Cycle Phase
Volatility (%)  Level             Conceptualize  Development  Op. Test & Eval.  Transition to Operation
6%              Low (−1σ)         2%             3%           1%                0%
22%             Moderate (μ)      7%             10%          4%                1%
38%             High (+1σ)        12%            17%          7%                2%
54%             Very High (+2σ)   16%            25%          10%               3%
Table 18: Requirements Volatility across Life Cycle Phases
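The rows of Table 18 follow from the cumulative volatility at each rating level and the phase breakdown Θ_l of Table 17. The sketch below restates those constants and reproduces, for example, the Moderate and Very High rows to within rounding (small discrepancies in other cells suggest the underlying mean and σ carry more digits than the rounded 22% and 16%):

```python
# Phase breakdown from Table 17 and REVL statistics (mean 22%, sigma 16%)
THETA = {"Conceptualize": 0.30, "Development": 0.46,
         "Operational Test & Eval.": 0.18, "Transition to Operation": 0.06}
MEAN, SIGMA = 22.0, 16.0  # percent

def volatility_profile(n_sigmas):
    """Per-phase volatility (%) for a rating defined as mean + n_sigmas * sigma;
    floored at zero so a 'Very Low' rating yields no volatility."""
    cumulative = max(MEAN + n_sigmas * SIGMA, 0.0)
    return {phase: cumulative * share for phase, share in THETA.items()}
```

For instance, the Moderate (μ) profile allocates 22% × 46% ≈ 10% of the baseline requirements to changes during Development.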
A graphical version of the requirements volatility life cycle profile for each
REVL rating level is shown in Figure 17. The curves follow the same concept as those
collected through the 2010 Practical Software and Systems Measurement (PSM) Users
Group Conference (workshop #3) and shown in Figure 13 (Peña, 2012).
Figure 17: Requirements Volatility Profile
The expected requirements volatility rating level for the system of interest is
selected by systems engineers familiar with the project based on a set of characteristics
developed from relevant literature and expert surveys. The project characteristics
correspond to the seven causes of requirements volatility that received the highest level
of agreement from the survey participants. Once a rating level is selected (Very Low,
Low, Moderate, High, Very High), the corresponding numerical values of volatility per
life cycle phase in Table 18 are used for the calculations (Boehm and Royce, 1989).
Initially, the overall rating level was selected based on a qualitative summary of the
evaluation of each characteristic. Based on respondent feedback, a more quantitative
method of aggregating the responses was developed. The evaluation of each
characteristic was assigned a numerical value of 1-5 as shown in Table 19. The
characteristics were also weighted based on the level of agreement from the survey of the
causes of requirements volatility. As previously discussed, a score was calculated for
each of the postulated causes using the Likert scale responses. The systems engineers
familiar with each project did not see the numerical scores for the characteristics. Rather,
they were asked to provide a qualitative assessment by selecting from a set of multiple
choice responses shown below. The weighted sum of the corresponding project
characteristic scores was then calculated to determine the overall volatility rating. For
example, a score less than 1.5 received a rating of “Very Low,” while a score between 3.5
and 4.5 received a rating of “High.”
Overall rating thresholds (weighted score): Very Low: < 1.5; Low: >1.5-2.5;
Moderate: >2.5-3.5; High: >3.5-4.5; Very High: > 4.5

1. System requirements baselined and agreed to by key stakeholders:
   Fully (1), Mostly (2), Generally (3), Somewhat (4), No Agreement (5); weight 26%
2. Level of uncertainty in key customer requirements, mission objectives, and
   stakeholder needs:
   Very Low (1), Low (2), Moderate (3), High (4), Very High (5); weight 22%
3. Number of co-dependent systems with influence on system requirements:
   Very Low (1), Low (2), Moderate (3), High (4), Very High (5); weight 16%
4. Strength of your organization’s requirements development process and level of
   change control rigor:
   Very High (1), High (2), Moderate (3), Low (4), Very Low (5); weight 8%
5. Precedentedness of the system, use of mature technology:
   Very High (1), High (2), Moderate (3), Low (4), Very Low (5); weight 9%
6. Stability of stakeholders' organizations (developer, customer):
   Very High (1), High (2), Moderate (3), Low (4), Very Low (5); weight 14%
7. Experience level of the systems engineering team in requirements analysis and
   development:
   Very High (1), High (2), Moderate (3), Low (4), Very Low (5); weight 6%

Table 19: Volatility Ratings with Weighted Scoring
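The weighted-sum aggregation described above can be sketched as follows. Note that the weights as printed in Table 19 sum to 101%, presumably due to rounding, so they are used as-is rather than renormalized; the handling of scores exactly on a threshold boundary is an assumption, since the table defines open intervals:

```python
# Characteristic weights from Table 19, in the order listed
WEIGHTS = [0.26, 0.22, 0.16, 0.08, 0.09, 0.14, 0.06]

def volatility_rating(scores):
    """Map the seven 1-5 characteristic scores to an overall REVL rating level
    via a weighted sum and the Table 19 thresholds."""
    total = sum(w * s for w, s in zip(WEIGHTS, scores))
    if total < 1.5:
        return "Very Low"
    if total <= 2.5:
        return "Low"
    if total <= 3.5:
        return "Moderate"
    if total <= 4.5:
        return "High"
    return "Very High"
```

A project rated Moderate on every characteristic therefore receives an overall Moderate volatility rating.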
Using the a posteriori weighting factors and the requirements volatility rating
levels, the prediction accuracy levels and MMRE of Academic COSYSMO and the
requirements volatility model were compared for the 25 projects using two cases. The
first case used the standard COSYSMO calibration constant (A) and the second case
utilized a locally calibrated “A” to further isolate the effects of the requirements
volatility terms. The local calibration was performed using the generally-available
Calico tool for SystemStar (version 2.01) developed by SoftStar systems. The results
can be found in Table 20.
Model                    PRED(.15)  PRED(.20)  PRED(.30)  MMRE
Academic COSYSMO         48%        52%        80%        21%
Req. Volatility Model    72%        80%        88%        16%
Calibrated Model*:
Academic COSYSMO*        52%        64%        84%        20%
Req. Volatility Model*   76%        84%        88%        15%
Table 20: Model Performance Comparison
The model that accounts for requirements volatility improved the estimation
accuracy of Academic COSYSMO across prediction accuracy levels and in terms of the
mean magnitude of relative errors. The results are similar for both the calibrated and un-
calibrated models. In addition, the coefficient of determination (R²) between the
functional size of the system and actual systems engineering effort was calculated for the
Academic COSYSMO model (baseline case) and for the model with the requirements
volatility factor. Systems engineering size was calculated using the COSYSMO Cost
Estimating Relationships as the weighted sum of the measured size drivers for each
project (number of system requirements, operational scenarios, interfaces, and
algorithms). The baseline model adjusts systems engineering size using the diseconomy
of scale exponent E. As shown in Figure 18, the coefficient of determination for the
baseline case is 0.85. When the requirements volatility scale factor SF is incorporated
into the model, the coefficient of determination improves to 0.92. The results for this
case are shown in Figure 19. Because the effort and size data used to calculate the
coefficient of determination were considered proprietary, a relative scale is provided on
the x and y axes instead of the actual values.
Figure 18: Effort and Size Baseline Coefficient of Determination
Figure 19: Effort and Size Coefficient of Determination with Volatility Effects
Along with the prediction accuracy levels, the MMRE, and the coefficient of
determination, an F-test was performed to validate the significance of the model. The F-
test is typically used to compare regression models and determine whether the null
hypothesis is supported or refuted. In this case, the null hypothesis is that the simpler
model (without a requirements volatility factor) is correct; values of the F ratio near one
(1) support the null hypothesis, while larger values counter it in favor of the alternative
hypothesis (Conte, Dunsmore, and Shen, 1986). Given that only one regression
coefficient was estimated in the model, the F-value was calculated to be 7.62 using 24
degrees of freedom. The P-value, the probability that the difference between the models
happened by chance, was calculated to be 0.03, which indicates statistical significance.
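The partial F-test for a nested-model comparison of this kind takes the form below; this is a generic sketch, and the inputs in the test are illustrative rather than the dissertation's actual sums of squares:

```python
def partial_f(sse_reduced, sse_full, extra_params, df_full):
    """F ratio for adding `extra_params` predictors to a nested (reduced) model:
    F = ((SSE_reduced - SSE_full) / extra_params) / (SSE_full / df_full).
    Values near 1 support the simpler model; large values favor the fuller model."""
    return ((sse_reduced - sse_full) / extra_params) / (sse_full / df_full)
```

Here only one predictor (the volatility scale factor) is added, so extra_params = 1.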
As previously discussed, the model was calibrated to a single organization’s data.
Consequently, the results may be less than fully definitive when applied to other
organizations.
In addition to evaluating the performance of the model, the correlation between the
new requirements volatility predictor and the COSYSMO cost drivers was evaluated
using a correlation matrix. Specifically, three cost drivers were identified as having a
potentially confounding effect on the requirements volatility predictor; they are:
Requirements Understanding, Technology Risk, and Personnel Experience/Continuity.
These cost drivers correspond to three causes of volatility captured in Figures 8 and 9:
Poor Initial Understanding of Customer Needs, Immature Technology, and Inexperienced
Staff. A correlation threshold of 0.66 was used to flag strongly correlated predictor
variables, which would be considered for elimination or aggregation with other
predictors. This threshold was used during the development of COCOMO II and
COSYSMO (Chulani, 1999; Valerdi, 2005). As shown in Table 21, the correlations
between the requirements volatility scale factor and the three cost drivers were below the
0.66 criterion. Some correlation between requirements volatility and the level of
requirements understanding and technology risk was expected and is consistent with the
results.
                              Volatility SF  Req. Understanding  Personnel Exp.  Technology Risk
Volatility Scale Factor (SF)  1
Requirements Understanding    0.3878         1
Personnel Experience          0.0074         0.4815              1
Technology Risk               0.4462         0.2060              0.0226          1
Table 21: Requirements Volatility Scale Factor Correlation Matrix
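A correlation matrix such as Table 21 can be built from pairwise Pearson correlations, flagging any pair at or above the 0.66 criterion; a sketch with hypothetical driver names:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def flag_correlated(drivers, threshold=0.66):
    """Return pairs of driver names whose |correlation| meets the threshold,
    i.e., candidates for elimination or aggregation."""
    names = list(drivers)
    flagged = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if abs(pearson(drivers[a], drivers[b])) >= threshold:
                flagged.append((a, b))
    return flagged
```

With the Table 21 values, no pair would be flagged, since the largest observed correlation is 0.4815.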
6.5 Cross-Validation and Sensitivity Analysis
The value of a cost estimation model lies in its ability to accurately predict effort for
projects outside of the data set used to develop it. In order to validate the performance of
the requirements volatility model, a K-fold cross-validation was performed. This method
divides the data into K subsets; one of the subsets is excluded from the data set from
which the model is built (Kohavi, 1995; Nguyen, 2010). The resulting model
is used to predict effort for the excluded cases. The 25 projects were randomly divided
into K=6 subsets: 5 subsets composed of 4 projects and 1 subset with 5 projects.
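The fold construction described above can be sketched as a round-robin split over shuffled project indices; the seed and function name are arbitrary choices for illustration:

```python
import random

def k_fold_splits(n_projects, k, seed=0):
    """Randomly partition project indices into k folds of near-equal size;
    each fold serves once as the held-out validation set, with the remaining
    projects used to fit the model."""
    indices = list(range(n_projects))
    random.Random(seed).shuffle(indices)
    folds = [indices[i::k] for i in range(k)]
    return [(sorted(set(indices) - set(fold)), sorted(fold)) for fold in folds]
```

For 25 projects and k = 6 this yields five folds of 4 projects and one fold of 5, matching the split described above.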
The results of the cross-validation are shown in Table 22, where the MMRE and
prediction accuracy across the K trials are compared to the performance of the calibrated
Academic COSYSMO and the requirements volatility model. While there was a slight
degradation in performance as compared to the Bayesian-calibrated model, the
improvement of the cross-validated results over COSYSMO is still evident (Peña, 2012).
Calibrated Model         PRED(.15)  PRED(.20)  PRED(.30)  MMRE
Academic COSYSMO         52%        64%        84%        20%
Req. Volatility Model    76%        84%        88%        15%
Cross-Validation         76%        80%        88%        16%
Table 22: Cross-Validation Results
Following the model cross-validation, a sensitivity analysis was performed to
evaluate the impact of variability in key model parameters on the results. As
evidenced by workshop discussions, prior studies, and project data, the level of
requirements volatility over the system life cycle varies depending on the project’s
characteristics and the requirements management processes utilized by the organization
developing the system. The requirements volatility profiles drawn by the participants of
workshop # 3, and shown in Figure 13, showed a fairly wide range of expected volatility
profiles across the system life cycle phases. In addition, the experienced systems
engineers who participated in the surveys and workshops had differing opinions
regarding the counting of requirements changes during the conceptualize phase of the
project. In some cases, the experts believed a system would not be mature enough to
count requirements changes as volatility during the conceptualize phase. However, other
system developers baseline the system requirements during the conceptualize phase and
place them under configuration control.
Table 23 includes the average value and standard deviation of the % breakdown
of changes (Θ_l) across the four life cycle phases used to develop the requirements volatility
extension to COSYSMO. The largest variability in the data occurs in the Conceptualize
phase with a standard deviation of roughly equal magnitude as the mean (μ = 30%, σ =
30%). Using the standard deviation as a basis for selecting the alternative values, two
scenarios were developed. The first scenario assumes that none of the requirements
changes occur during the Conceptualize phase. In this case, 100% of the changes are
proportionally distributed across the development, operational test and evaluation, and
transition to operation phases.
The second scenario assumes that 61% of the changes occur during the
Conceptualize phase (μ + 1 σ) and the rest of the changes are proportionally distributed
across the other three life cycle phases. These scenarios along with the baseline case are
captured below.
            % Breakdown of Volatility per Life Cycle Phase (Θ_l)
Case        Conceptualize   Development    Operational Test & Eval.  Transition to Operation
Baseline    30% (σ = 30%)   46% (σ = 29%)  18% (σ = 17%)             6% (σ = 6%)
Scenario 1  0%              66%            26%                       8%
Scenario 2  61%             26%            10%                       3%
Table 23: Sensitivity Analysis Scenarios
A requirements volatility life cycle profile was developed for each scenario by using
the alternative values for Θ_l and the range of expected levels of REVL (Very Low, Low,
Moderate, High, and Very High) defined in Table 18. The resulting set of curves is
shown in Figure 20 for Scenario 1 and Figure 21 for Scenario 2 (Peña, 2012).
Figure 20: Sensitivity Analysis – Scenario 1
Figure 21: Sensitivity Analysis – Scenario 2
The performance of the model under Scenario 1 and 2 was calculated using the K-
fold cross validation method and compared to results of the Bayesian-calibrated model
and Academic COSYSMO. The results are captured in Table 24 in terms of prediction
accuracy and mean magnitude of relative errors (MMRE). The sensitivity analysis cases
resulted in a small reduction in prediction accuracy as compared to the calibrated
requirements volatility model. However, as in the prior cross-validation calculations, the
improvement in performance over Academic COSYSMO remains largely unchanged
(Peña, 2012).
Calibrated Model         PRED(.15)  PRED(.20)  PRED(.30)  MMRE
Academic COSYSMO         52%        64%        84%        20%
Req. Volatility Model    76%        84%        88%        15%
Cross-Val Scenario 1     72%        80%        88%        16%
Cross-Val Scenario 2     72%        80%        88%        16%
Table 24: Sensitivity Analysis Results: Scenarios 1 and 2
The sensitivity analysis was intended to validate the robustness of the model
against the uncertainty in the value of its key variables. The cases selected for analysis
are by no means exhaustive, but represent a reasonable expectation in the variability of
model parameters based on the data.
Chapter 7: Conclusions
A mathematical formulation designed to quantify the effect of requirements
volatility on systems engineering effort was developed and incorporated into
COSYSMO. The model was developed through a combination of expert judgment
gathered through surveys and discussions in six different research workshops and
historical data collected from 25 projects. The aim of the research was to provide a way
for systems engineers to improve their ability to estimate and manage the potential
consequences of volatile requirements.
Restatement of the hypotheses:
Null hypothesis:
H_01: The volatility of requirements throughout a system’s life cycle is not a
statistically significant factor in the accurate estimation of systems engineering effort.
Alternative hypothesis:
H_A: The volatility of requirements throughout a system’s life cycle is a
statistically significant factor in the accurate estimation of systems engineering effort.
The alternative model (incorporating requirements volatility effects) was tested
using the coefficient of determination (R²), predictive accuracy levels, mean magnitude
of relative errors (MMRE), and the F-test. The performance of the model was compared
to that of the null model (Academic COSYSMO). The prediction accuracy at the PRED
(.20) level improved from 52% to 80%; the MMRE decreased from 21% to 16%, and the
coefficient of determination between systems engineering effort and systems
engineering size adjusted for diseconomies of scale increased from 0.85 to 0.92. The
results of the F-test and the statistically significant P-value of 0.03 indicate that the
alternative hypothesis is supported. In addition, a K-fold cross validation was performed
to demonstrate the accuracy of the model in predicting effort for new projects.
7.1 Contributions to the Field of Systems Engineering
The following is a summary of the contributions of this dissertation to the field of
systems engineering:
1-) A mathematical formulation for quantifying the impact of requirements
volatility on systems engineering effort.
A modeling approach aimed to quantify the impact of requirements volatility on
systems engineering was developed by taking into account three key parameters: a-) the
overall level of requirements volatility or total percentage of the baseline requirements
that change over the system life cycle; b-) the life cycle phase during which the
requirements changes occur; and c-) the effort impact caused by the change as a
function of life cycle phase. Although this formulation was incorporated into
COSYSMO, it could be adapted by systems engineering practitioners to other
applications or tailored as a stand-alone model.
2-) Improvement in COSYSMO’s ability to account for requirements volatility.
During the initial development of COSYSMO, there were several areas
recommended for future work. Along with the effects of reuse, requirements volatility
was identified as a relevant parameter that could have a significant impact on systems
engineering size. The addition of a requirements volatility predictor allows systems
engineers and project managers to use COSYSMO to estimate the consequences of
changing requirements on systems engineering effort. The use of the model also allows
systems engineers and project managers to develop a basis for requesting additional
budget or allocating financial reserves aimed at addressing the potential impact of
volatile requirements.
3-) Identified and ranked potential causes of requirements volatility.
Although other research projects have explored the causes of requirements
volatility, the specific contributions of this research are: a-) the ranking of those causes
by subject matter experts and b-) the identification of a set of project characteristics
based on those causes that can be used to rate the expected level of requirements
volatility for the system of interest. In addition, the relative significance of the project
technical, organizational, or contextual factors that influence volatility could be used to
assess the risk of requirements changes during the life cycle of a system.
4-) Operationalized the requirements volatility parameter through a 5-point
rating scale.
In order to ensure the extension to COSYSMO is useful to systems engineers in
the field, the input to the model is a simple 5-point requirements volatility scale (Very
Low, Low, Moderate, High, Very High) that is rated based on the evaluation of a set of
project characteristics derived from relevant literature and expert surveys. The
information required to develop the rating is available early in the system life cycle,
which allows systems engineers to make informed and timely decisions.
5-) Documentation of observations related to requirements volatility.
A set of observations regarding the precursors, behavior, and impact of
requirements volatility throughout a system’s life cycle were developed based on
relevant literature and surveys and discussions conducted during research workshops.
These observations could help systems engineers increase their understanding of the
characteristics and phenomena related to requirements volatility, thus improving their
ability to anticipate and manage the effects of changing requirements.
7.2 Future Research
While this dissertation has provided additional methods and tools to assist
systems engineers in managing and estimating the impact of changing requirements,
additional research is required to enhance these products and improve their applicability
to a wider range of engineering domains. The following are some recommended areas
of future research.
1-) Increase the quantity and diversity of the data.
As with any parametric model, obtaining additional project data from a variety
of organizations would expand the external validity of the model. The data used to
develop the model was obtained from subject matter experts and organizations
representing a variety of industries, but with an emphasis on aerospace and defense.
Furthermore, the historical data were collected from projects using a single-step
development process. Characterizing the level and effects of requirements volatility in
incremental development projects would be a logical expansion to the current scope. In
addition, organizations using the model may choose to tailor it by including a
requirements volatility life cycle profile derived specifically from their project history.
2-) Evaluate the potential interaction between requirements volatility and reuse.
As mentioned in the model development discussion, the starting point for the
requirements volatility extension was Academic COSYSMO, not COSYSMO 2.0. This
choice was made to isolate the effects of incorporating volatility into the
baseline model. Additional research is required to determine the effect of reuse on the
results and its potential interaction with requirements volatility. It could be theorized
that the level of reuse is inversely proportional to changes in requirements. This theory
could be developed further and validated using project data.
3-) Further investigate the impact on systems engineering effort of the type
of change: added, modified, and deleted.
Although observations from the literature and expert judgment indicate that the
impact of requirements volatility on systems engineering effort may be different
depending on the type of change (added, deleted or modified), the results of the model
analysis indicated no additional improvement in estimation accuracy by introducing
parameters associated with the change type. This means that accounting for the number
of changes in general provided equivalent results to accounting for the number of
changes broken down by added, modified, and deleted requirements. More data and
analysis may be required to explore this apparent disconnect.
4-) Increase the model fidelity.
The impact of requirements volatility on systems engineering effort was modeled
as a function of the life cycle phases covered by COSYSMO: Conceptualize,
Development, Operational Test and Evaluation, and Transition to Operation. Different
effort weighting factors were determined for each of the life cycle phases. The results
provide evidence that the impact of changing a requirement increases the later the
change is made in the life cycle. Nevertheless, the life cycle dependencies are fairly
coarse. Instead of using only the four life cycle phases, the fidelity of the model may be
increased by establishing a time-dependent relationship between requirements changes
and effort using shorter time steps (e.g. quarterly, monthly, continuous).
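The phase-weighted formulation described above can be sketched as a weighted sum: each requirement change is scaled by a factor that grows the later the change occurs in the life cycle. The weight values and change counts below are hypothetical placeholders chosen only to grow monotonically, as the results suggest; they are not the dissertation's calibrated parameters. A finer-grained variant would simply replace the four phases with shorter time steps.

```python
# Hypothetical sketch of phase-weighted volatility effort. Phase names follow
# the COSYSMO life cycle; the weights are illustrative only (monotonically
# increasing, consistent with the observed trend), not calibrated values.

PHASE_WEIGHTS = {
    "Conceptualize": 1.0,            # early changes cost roughly baseline effort
    "Development": 1.5,
    "Operational Test & Evaluation": 2.5,
    "Transition to Operation": 4.0,  # late changes are the most expensive
}

def weighted_change_size(changes_per_phase: dict) -> float:
    """Sum requirement changes across phases, each scaled by its phase weight.

    The result is an equivalent-size quantity that could be added to the
    nominal size driver before applying the model's effort equation.
    """
    return sum(
        PHASE_WEIGHTS[phase] * count
        for phase, count in changes_per_phase.items()
    )

changes = {"Conceptualize": 20, "Development": 10,
           "Operational Test & Evaluation": 4}
print(weighted_change_size(changes))  # 45.0
```

Moving to quarterly or monthly steps would replace the four dictionary keys with time intervals, each carrying its own weight derived from the time-dependent relationship.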
Requirements volatility is a complex phenomenon that deserves further study due
to its influence on the success or failure of system development projects. As the
COSYSMO extension is implemented and tested by industry affiliates, new areas of
research will certainly arise.
References
Abdel-Hamid, T., and Madnick, S., (1991). Software Project Dynamics: An Integrated
Approach. Prentice-Hall.
ANSI/EIA (1999). ANSI/EIA-632-1998 Processes for Engineering a System.
Babbie, E. (1990). Survey Research Methods (2nd ed.). Wadsworth Publishing.
Baik, J. (2000). The Effects of Case Tools on Software Development. Ph.D.
Dissertation. University of Southern California. Los Angeles, CA.
Blalock, H. (1985). Causal Models in the Social Sciences. Aldine.
Blanchard, B., and Fabrycky, W. (1998). Systems Engineering & Analysis, Prentice Hall.
Boehm, B. (1981). Software Engineering Economics. Prentice Hall.
Boehm, B. and Royce, W. (1989). “Ada COCOMO and the Ada Process Model.” TRW
Defense Systems Group
Boehm, B. (2000). “Requirements that Handle IKIWISI, COTS, and Rapid Change."
IEEE Computer. Vol. 33 (No. 7), pp: 99-102.
Boehm, B., Abts, C., Brown, A.W., Chulani, S., Clark, B., Horowitz, E., Madachy, R.,
Reifer, D.J., and Steece, B. (2000). Software Cost Estimation with COCOMO II.
Prentice Hall.
Boehm, B., and Lane, J.A. (2010). “DoD Systems Engineering and Management
Implications for Evolutionary Acquisition of Major Defense Systems.” Technical
Report. University of Southern California Center for Systems and Software
Engineering.
Box, G., and Tiao, G. (1973). Bayesian Inference in Statistical Analysis. Addison
Wesley.
Center for Systems and Software Engineering (2010). “Software User’s Manual – UCC
Tool (v.2010.07).” University of Southern California Center for Systems and
Software Engineering.
Charette, R., McGarry, J., and Baldwin, K. (2003). “Tri-Service Assessment Initiative
Phase 2 Systemic Analysis Results.” Conference on the Acquisition of Software
Intensive Systems. Arlington, VA.
Chulani, S. (1999). Bayesian Analysis of Software Cost and Quality Models. Ph.D.
Dissertation. University of Southern California. Los Angeles, CA.
Conte, S.; Dunsmore, H., and Shen, V. (1986). Software Engineering Metrics and
Models. Benjamin/Cummings Publishing Company.
Cook, D., and Weisberg, S. (1999). Applied Regression Including Computing and
Graphics. John Wiley & Sons.
Corder, G., and Foreman, D. (2009) Nonparametric Statistics for Non-Statisticians: A
Step-by-Step Approach. John Wiley & Sons.
Costello, R., and Liu, D. (1995). “Metrics for Requirements Engineering,” Journal of
Systems and Software. Vol. 29 (No. 1), pp: 39-63.
Creswell, J. (2009). Research Design: Qualitative, Quantitative, and Mixed Method
Research Approaches (3rd ed.). SAGE Publications Inc.
Crotty, M. (1998). The Foundations of Social Research: Meaning and Perspective in the
Research Process. SAGE Publications Inc.
Davis, A., Overmyer, S., Jordan, K., Caruso, J., Dandashi, F., Dinh, A., Kincaid,G.,
Ledeboer, G., Reynolds, P., Sitaram, P., Ta, A., and Theofanos, M. (1993).
“Identifying and Measuring Quality in a Software Requirements Specification.”
Proceedings of the First International Software Metrics Symposium, IEEE
Computer Society Press.
Dalkey, N. (1967). The Delphi Method: An Experimental Study of Group Opinion. The
RAND Corporation.
Department of Defense, Systems Management College (2001). Systems Engineering
Fundamentals. Defense Acquisition University Press.
Department of Defense (2008). Instruction 5000.02: Operation of the Defense
Acquisition System. Retrieved from
http://www.dtic.mil/whs/directives/corres/pdf/500002p.pdf
Department of Defense (2010). Quadrennial Defense Review Report.
EIA (2002). EIA-731.1 - Systems Engineering Capability Model.
Economics and Development Resource Center (1997). Guidelines for the Economic
Analysis of Projects.
Ferreira, S. (2002). Measuring the Effects of Requirements Volatility on Software
Development Projects. Ph.D. Dissertation. Arizona State University. Tempe, AZ.
Ferreira, S., Collofello, J., Shunk, D., and Mackulak, G. (2009). “Understanding the
Effects of Requirements Volatility in Software Engineering by Using Analytical
Modeling and Software Process Simulation.” The Journal of Systems and
Software. Vol. 82 (No. 10), pp: 1568-1577.
Finnie, G., Witting, G. and Petkov, D. (1993). Prioritizing Software Development
Productivity Factors Using the Analytic Hierarchy Process. Journal of Systems
and Software. Vol. 22 (No. 2), pp: 129-139.
Flynn, T. (2007). Evaluation and Synthesis of Methods for Measuring System
Engineering Efficacy within a Project and Organization. Master’s Thesis.
Massachusetts Institute of Technology.
Forrester, J. (1961). Industrial Dynamics. The M.I.T. Press.
Fortune, J. (2009). Estimating systems engineering reuse with the constructive systems
engineering cost model (COSYSMO 2.0). Ph.D. Dissertation. University of
Southern California. Los Angeles, CA.
Frederick, C., and Sauser, B. (2007). “Studies on Systems Engineering Benefits.”
5th Conference on Systems Engineering Research. Hoboken, NJ.
Friedman, G., and Sage, A. (2004). “Case Studies of Systems Engineering and
Management in Systems Acquisition.” Systems Engineering Vol. 7 (No. 1), pp:
84-97.
General Accounting Office (2004). Stronger Management Practices are Needed to
Improve DOD’s Software-intensive Weapon Acquisitions (GAO-04-393).
Defense Acquisitions.
Hammer, T., Huffman, L., and Rosenberg, L. (1998). “Doing requirements right the first
time.” Crosstalk, the Journal of Defense Software Engineering. Vol. 11 (No. 12),
pp: 20-25.
Honour, E. (2004). “Understanding the Value of Systems Engineering.” INCOSE 14th
Annual International Symposium: Systems Engineering Managing Complexity
and Change. Toulouse, France.
Houston, D. (2000). A Software Project Simulation Model for Risk Management, Ph.D.
Dissertation. Arizona State University. Tempe, AZ.
IEEE (1990). IEEE Standard Glossary of Software Engineering Terminology.
INCOSE (2010). Systems Engineering Handbook, Version 3.2. INCOSE.
ISO/IEC (2008). ISO/IEC 15288:2008 (E) Systems Engineering - System Life Cycle
Processes.
Javed, T., Maqsoo, M., and Durrani, Q. (2004). “A Study to Investigate the Impact of
Requirements Instability on Software Defects.” ACM Software Engineering
Notes. Vol. 29 (No. 4), pp: 1–7.
Jick, T (1979). “Mixing Qualitative and Quantitative Methods.” Administrative Science
Quarterly. Vol. 24 (No. 4), pp: 602-611.
Jones, C. (1994). Assessment and Control of Software Risks. PTR Prentice Hall.
Jones, C. (1996). “Strategies for managing requirements creep.” IEEE Computer. Vol. 29
(No. 6), pp: 92-94.
Kohavi, R. (1995). “A Study of Cross-validation and Bootstrap for Accuracy Estimation
and Model Selection.” International Joint Conference on Artificial Intelligence.
Montréal, Canada.
Kotonya, G., Sommerville, I., (1998). Requirements Engineering: Processes and
Techniques. John Wiley and Sons.
Kulk, G., and Verhoef, C. (2008). “Quantifying requirements volatility effects.” Science
of Computer Programming. Vol. 72 (No. 3), pp: 136–175.
Leffingwell, D. (1997). “Calculating the Return on Investment from More Effective
Requirements Management.” American Programmer. Vol. 10 (No. 4), pp: 13–16.
Likert, R. (1932). "A Technique for the Measurement of Attitudes.” Archives of
Psychology. Vol. 22 (No. 140), pp: 1–55.
Lin, C., Abdel-Hamid, T., and Sherif, J. (1997). “Software-Engineering Process
Simulation Model (SEPS).” Journal of Systems and Software. Vol. 38 (No. 3), pp:
263-277.
Lipsey, M. (1990). Design Sensitivity: Statistical Power for Experimental Research.
Sage.
Loconsole, A., and Börstler, J. (2005). “An Industrial Case Study on Requirements
Volatility Measures.” Proceedings of the 12th Asia-Pacific Software Engineering
Conference. Taipei, Taiwan.
Madachy, R. and Tarbet, D. (2000). “Initial Experiences in Software Process Modeling.”
Software Quality Professional. Vol. 2 (No. 3), pp: 1-13
Madachy, R. (2008). Software Process Dynamics. John Wiley & Sons.
Malaiya, Y., and Denton, J. (1999). “Requirements Volatility and Defect Density.”
Proceedings of the International Symposium on Software Reliability Engineering.
MIL-STD 490-A. (1985). Specification Practices.
MIL-STD-498. (1994). Software Development and Documentation.
NASA. (2002). NASA Cost Estimating Handbook.
Nguyen, V. (2010). Improved Size and Effort Estimation Models for Software
Maintenance. Ph. D. Dissertation. University of Southern California, Los
Angeles, CA.
Nidumolu, S. (1996). Standardization, Requirements Uncertainty and Software Project
Performance. Information and Management. Vol. 31 (No. 3), pp: 135-150.
Peña, M. (2012). “Mathematical Formulation and Validation of the Impact of
Requirements Volatility on Systems Engineering Effort.” University of Southern
California, Center for Software and Systems Engineering Annual Research
Review. Los Angeles, CA.
Pfahl, D., and Lebsanft, K. (2000). “Using Simulation to Analyze the Impact of Software
Requirements Volatility on Project Performance.” Information and Software
Technology. Vol. 42 (No. 14), pp: 1001-1008.
Pritsker, A., O’Reilly, J., and LaVal, D. (1997). Simulation with Visual SLAM and
AweSim. John Wiley & Sons.
Reifer, Donald J. (2000). “Requirements Management: The Search for Nirvana.” IEEE
Software. Vol. 17 (No. 3), pp: 45-47.
Richey, G. (2005). “F-111 Systems Engineering Case Study.” Center for Systems
Engineering at the Air Force Institute of Technology.
Rhodes, D., Valerdi, R., and Roedler, G. (2009). “Systems engineering leading indicators
for assessing program and technical effectiveness.” Systems Engineering Vol. 12
(No. 1), pp: 21-35.
Roedler, G. and Rhodes, D. (2007). Systems Engineering Leading Indicators Guide.
Version 1. Massachusetts Institute of Technology, INCOSE, and PSM.
Rossman, G., and Wilson B. (1998). Learning in the Field: An Introduction to Qualitative
Research. Sage.
Sage, A. (1992). Systems Engineering, John Wiley & Sons.
Snee, R. (1977). “Validation of Regression Models: Methods and Examples.”
Technometrics. Vol. 19 (No. 4), pp: 415-428.
The Standish Group (1995). The Chaos Report. Retrieved from
http://www.standishgroup.com/chaos.html.
Stark, G., Oman, P., Skillicorn, A., and Ameele, A. (1999). ”An Examination of the
Effects of Requirements Changes on Software Maintenance Releases.” Journal of
Software Maintenance: Research and Practice. Vol. 11 (No. 5), pp: 293-309.
Sudman, S., and Bradburn, N. (2004). Asking Questions: The Definitive Guide to
Questionnaire Design. Jossey Bass Publishers.
Thakurta, R. and Dasgupta, S. (2011). “Impact of Software Requirements Volatility
Pattern on Project Dynamics: Evidences from a Case Study.” International
Journal of Software Engineering & Applications. Vol. 2 (No. 3), pp: 22-33.
Tvedt, J. (1996). An extensible model for evaluating the impact of process improvements
on software development cycle time. Ph. D. Dissertation. Arizona State
University. Tempe, AZ.
Valerdi, R., Rieff, J., Roedler, J. and Wheaton, M. (2004). “Lessons Learned
From Collecting Systems Engineering Data.” Ground Systems Architecture
Workshop. El Segundo, CA.
Valerdi, R. (2005). The Constructive Systems Engineering Cost Model (COSYSMO).
Ph. D. Dissertation. University of Southern California. Los Angeles, CA.
Valerdi, R. (2006). “A Practical Guide for Industry & Government.” Academic
COSYSMO User Manual. Massachusetts Institute of Technology.
Valerdi, R., Brown, S., and Muller, G. (2010). “Towards a Framework of Research
Methodology Choices in Systems Engineering.” 8th Conference on Systems
Engineering Research. Hoboken, NJ.
Valerdi, R, Liu, K., and Fortune, J. (2010). “Lessons Learned about Mixed Methods
Research Strategies in Systems Engineering: Evidence from PhD Dissertations.”
8th Conference on Systems Engineering Research. Hoboken, NJ.
Valerdi, R. (2011). “Convergence of expert opinion via the Wideband Delphi method: An
application in cost estimation models.” 21st Annual INCOSE International
Symposium. Denver, CO.
Wang, G., Boehm, B., Valerdi, R., and Shernoff, A. (2008). “Proposed Modification to
COSYSMO Estimating Relationship.” Technical Report. University of Southern
California, Center for Systems and Software Engineering. Los Angeles, CA.
Weinberg, G. (1993). Quality Software Management: Volume 2 First-Order
Measurement. Dorset House.
Wiegers, K. (1999). “Automating Requirements Management.” Software Development.
Vol. 7 (No. 7), pp: S1-S5.
Yin, R. (2003). Case Study Research: Design and Methods (3rd ed.). Sage.
Zowghi, D., and Nurmuliani, N. (2002). “A Study of the Impact of Requirements
Volatility on Software Project Performance.” Proceedings of the Ninth Asia-
Pacific Software Engineering Conference. Queensland, Australia.
Appendix A: Process Categories and Activities: ANSI/EIA 632
Process Category: Activities
Supply Process: (1) Product Supply
Acquisition Process: (2) Product Acquisition; (3) Supplier Performance
Planning Process: (4) Process Implementation Strategy; (5) Technical Effort Definition;
(6) Schedule and Organization; (7) Technical Plans; (8) Work Directives
Assessment Process: (9) Progress Against Plans and Schedules; (10) Progress Against
Requirements; (11) Technical Reviews
Control Process: (12) Outcomes Management; (13) Information Dissemination
Requirements Definition Process: (14) Acquirer Requirements; (15) Other Stakeholder
Requirements; (16) System Technical Requirements
Solution Definition Process: (17) Logical Solution Representations; (18) Physical
Solution Representations; (19) Specified Requirements
Implementation Process: (20) Implementation
Transition to Use Process: (21) Transition to Use
Systems Analysis Process: (22) Effectiveness Analysis; (23) Tradeoff Analysis;
(24) Risk Analysis
Requirements Validation Process: (25) Requirement Statements Validation; (26) Acquirer
Requirements; (27) Other Stakeholder Requirements; (28) System Technical
Requirements; (29) Logical Solution Representations
System Verification Process: (30) Design Solution Verification; (31) End Product
Verification; (32) Enabling Product Readiness
End Products Validation Process: (33) End Products Validation
Table A-1 ANSI/EIA 632 Standard Systems Engineering Activities
Appendix B: Organizations that Participated in the Research
Workshop #1: 2010 LAI Knowledge Exchange
Air Force Institute of Technology, Raytheon, The Boeing Company, MIT, United Launch
Alliance
Workshop #2: 2010 USC-CSSE Annual Research Review
The Boeing Company, Raytheon, The Aerospace Corporation, Northrop Grumman,
SoftStar Systems, USC
Workshop #3: 2010 Practical Software and Systems Measurement Conference
Distributed Management, Northrop Grumman, Lockheed Martin, Ericsson España,
Samsung SDS, The US Navy, MIT, USC
Workshop #4: 25th Annual COCOMO Forum
The Boeing Company, MIT, The Aerospace Corporation, TI Metricas Ltda., USC,
Lockheed Martin, Drabant Group, SoftStar Systems, Banco Bradesco, Rolls-Royce,
Texas Tech
Workshop #5: 2011 USC-CSSE Annual Research Review
The Aerospace Corporation, Lockheed Martin, The Boeing Company, Northrop
Grumman, TI Metricas, The US Air Force, Quanterion Solutions, USC, IEEE Computer
Society
Workshop #6: 2011 Practical Software and Systems Measurement Conference
TI Metricas Ltda., Software Metrics, Tecolote Research Inc., IBM, Australian
Department of Defence, BAE Systems, Galorath Incorporated, Raytheon, Systems and
S/W Quality Institute, MIT, USC
Table B-1 Organizations that Participated in the Research
Appendix C: Requirements Volatility Survey
1. Research Project Background
I am a doctoral student at the University of Southern California, Industrial and Systems
Engineering Department. You are being invited to participate in a research project that
investigates the impact, causes, and expected level of requirements volatility during an
engineering project’s life cycle.
Requirements volatility is defined as the change in requirements (added, deleted, and
modified) over a given time interval. It is one of the requirements trends leading
indicators included in the Systems Engineering Leading Indicators Guide developed by
the Lean Advancement Initiative (LAI). The guide defines a leading indicator as “a
measure for evaluating the effectiveness of how a specific activity is applied on a
project in a manner that provides information about impacts that are likely to affect the
system performance objectives.”
Your participation in this research study is voluntary. If you choose to take part, we ask
you to provide your name for the purpose of validating your participation. Your e-mail
address and phone number are requested only if you would like to receive additional
information regarding this research project.
The results of this study will be used for scholarly purposes only. The survey should
take approximately 15 minutes to complete. Your participation is greatly appreciated.
If you have any questions about this research project, please feel free to contact me.
Thank you for your interest and support,
Mauricio E. Peña
University of Southern California
E-mail address: mauricip@usc.edu
2. Survey Participant Background Information
2.1 Please provide your contact information:
Name:
Email
Address:
Phone
Number:
The following questions request information regarding your professional background
2.2 How would you describe your role in your organization? (Check all that apply)
Project leader/manager
Functional Manager
Requirements owner/developer
System/software designer
System Analyst
Validation/Verification Engineer
Information manager (configuration/data management)
Quality Assurance
Other, please specify
2.3. Please state your years of experience in systems or software engineering
Years of experience
3. Organizational Background
The following questions request information on the background of your organization or
business.
3.1. Which of the following best describes your organization’s application domain?
(Check all that apply).
Aircraft/Avionics
Automotive
Data Systems / IT
Infrastructure
Military / Defense
Scientific / Research
Space Systems
Telecommunications
Transportation Systems
Other (please specify),
3.2 Which of the following best describes the category of systems that are produced
by your organization? (Check all that apply).
Information processing (i.e. software intensive system)
Command, Control, Communications, Computers, Intelligence, Surveillance and
Reconnaissance (C4ISR)
Machine (i.e. automobile, aircraft, spacecraft)
System of Systems (i.e. GPS network)
Other, please specify
3.3 What is the approximate ratio of H/W and S/W in terms of systems engineering
effort of a typical system produced by your organization (assume the first article in
production)?
100% Hardware
75% Hardware, 25% Software
50% Hardware, 50% Software
25% Hardware, 75% Software
100% Software
3.4 Which of the following best describes the Systems Engineering Capability level
of your organization (Based on CMMI and/or Systems Engineering Capability
Model EIA-731.1)?
Level 0 – Ad hoc approach to project performance
Level 1 – SE processes performed, but not managed or controlled
Level 2 – SE processes are performed and managed; driven by project-specific
needs
Level 3 – SE processes are well defined; organization has standard processes and
tailoring guidelines
Level 4 – SE processes are quantitatively managed; tailored process is measured
and its performance can be predicted
Level 5 – Optimizing SE processes; continuous improvement is achieved
through the setting of effectiveness goals for the standard process
Not Applicable
4. Requirements Volatility
The following questions will ask you to provide information on requirements volatility
based on your professional experience.
Requirements volatility is defined as the percentage of the project's total number of
requirements that are added, modified or deleted over a specified period of time.
4.1 Please state your assessment of the following statement in terms of level of
agreement on a scale of 1-5; from 1: strongly disagree, to 5: strongly agree.
“I believe the use of requirements volatility thresholds/metrics would enable my
organization to monitor and improve the performance of the project.”
1- Strongly disagree
2- Disagree
3- Neither agree nor disagree
4- Agree
5- Strongly agree
4.2 Please state your assessment of the following statement in terms of level of
agreement on a scale of 1-5; from 1: strongly disagree, to 5: strongly agree.
“My organization uses requirements volatility thresholds/metrics to monitor and
improve the performance of the project.”
1- Strongly disagree
2- Disagree
3- Neither agree nor disagree
4- Agree
5- Strongly agree
The following questions deal with the causes of requirements volatility.
4.3 Please state your assessment of the following potential causes of requirements
volatility on a scale of 1-5; from 1: strongly disagree; to 5: strongly agree.
Causes of Requirements Volatility (1: Strongly Disagree, 2: Disagree, 3: Neither agree
nor disagree, 4: Agree, 5: Strongly Agree)
Poor initial understanding of the
system and customer needs
1 2 3 4 5
Lack of SE process maturity 1 2 3 4 5
Inexperienced staff 1 2 3 4 5
Customer-requested scope change 1 2 3 4 5
Immature technology 1 2 3 4 5
Changes in external environment
(political/business climate)
1 2 3 4 5
Changes in COTS products 1 2 3 4 5
Changes in co-dependent systems 1 2 3 4 5
Internal factors: Change in
policies, organizational structure
and/or leadership
1 2 3 4 5
Other causes; please specify:
4.4 Based on your experience, how would you rate the influence of the following
project contextual factors on requirements volatility? Please rate them on a scale of
1-5, from 1 signifying no influence to 5: very high influence.
Project Context (1: No Influence, 2: Very Low Influence, 3: Moderate Influence,
4: High Influence, 5: Very High Influence)
System application domain
(defined in question 3.1)
1 2 3 4 5
System Category (defined in
question 3.2)
1 2 3 4 5
The H/W or S/W breakdown
(defined in question 3.3)
1 2 3 4 5
The type of project (experimental,
development, production, etc.)
1 2 3 4 5
The size of the system (total # of
requirements)
1 2 3 4 5
The duration of the project 1 2 3 4 5
4.5 Please provide your best estimate of the requirements volatility expected during
each of following life cycle phases (as defined by ISO/IEC 15288 and EIA/ANSI
632) of your organization’s typical products/systems (assume stabilized
evolutionary development, not agile development).
Requirements volatility is defined as the percentage of the project's total number
of requirements that are added, modified or deleted over a specified period of
time.
Life Cycle Phase (select one: < 5%, 5-10%, 10-20%, > 20%, Don’t Know)
Conceptualize
Development
Operational Test & Eval
Transition to Operation
Operate, maintain, enhance
Replace or dismantle
Question 4.6 deals with the impact of requirements volatility on a project when the
volatility occurs after the requirements have been baselined (post Systems Requirements
Review).
4.6 Based on your experience, how would you rate the impact of requirements
volatility (post requirements baseline) on the following (from large increase to large
decrease)?
Project Factors (select one: Large Increase, Moderate Increase, No Impact, Moderate
Decrease, Large Decrease)
Rework of work products
Project size (Total # of requirements)
This concludes our survey –
Thank you for participating – Do you have any comments about the survey or this
particular field of study?
If you would like to follow-up on the results of this research project or participate in a
future survey, please feel free to contact me at the e-mail address and/or phone number
provided in the introduction.
Appendix D: Workshop # 3, PSM Conference Survey Exercise
Appendix E: Requirements Volatility Survey # 2
1. Research Project Background
I am a doctoral student at the University of Southern California, Industrial and Systems
Engineering Department. You are being invited to participate in a research project that
investigates the impact, causes, and expected level of requirements volatility during an
engineering project’s life cycle.
Requirements volatility is defined as the change in requirements (added, deleted, and
modified) over a given time interval. It is one of the requirements trends leading
indicators included in the Systems Engineering Leading Indicators Guide developed by
the Lean Advancement Initiative (LAI). The guide defines a leading indicator as “a
measure for evaluating the effectiveness of how a specific activity is applied on a
project in a manner that provides information about impacts that are likely to affect the
system performance objectives.”
Your participation in this research study is voluntary. If you choose to take part, we ask
you to provide your name for the purpose of validating your participation. Your e-mail
address and phone number are requested only if you would like to receive additional
information regarding this research project.
The results of this study will be used for scholarly purposes only. The survey should
take approximately 15 minutes to complete. Your participation is greatly appreciated.
If you have any questions about this research project, please feel free to contact me.
Thank you for your interest and support,
Mauricio E. Peña
E-mail address: mauricip@usc.edu
2. Survey Participant Background Information
2.1 Please provide your contact information:
Name: _____________________________
E-mail Address: _____________________________
Phone Number: _____________________________
The following questions request information regarding your professional background.
2.2 How would you describe your role in your organization? (Check all that apply)
Project leader/manager
Functional Manager
Requirements owner/developer
System/software designer
System Analyst
Validation/Verification Engineer
Information manager (configuration/data management)
Quality Assurance
Other, please specify
2.3. Please state your years of experience in systems or software engineering
Years of experience
3. Organizational Background
The following questions request information on the background of your organization or
business.
3.1. Which of the following best describes your organization’s application domain?
(Check all that apply).
Aircraft/Avionics
Automotive
Data Systems / IT
Infrastructure
Military / Defense
Scientific / Research
Space Systems
Telecommunications
Transportation Systems
Other (please specify),
3.2 Which of the following best describes the category of systems that are produced
by your organization? (Check all that apply).
Information processing (i.e. software intensive system)
Command, Control, Communications, Computers, Intelligence, Surveillance and
Reconnaissance (C4ISR)
Machine (i.e. automobile, aircraft, spacecraft)
System of Systems (i.e. GPS network)
Other, please specify
3.3 What is the approximate ratio of H/W and S/W in terms of systems engineering
effort of a typical system produced by your organization (assume the first article
in production)?
100% Hardware
75% Hardware, 25% Software
50% Hardware, 50% Software
25% Hardware, 75% Software
100% Software
4. Requirements Volatility
The following questions will ask you to provide information on requirements volatility
based on your professional experience.
Requirements volatility is defined as the percentage of the project's total number of
requirements that are added, modified or deleted over a specified period of time.
4.1 Based on your experience, please provide your best estimate of the percentage of
the baseline set of requirements that is likely to change over the following life cycle
phases (as defined by ISO/IEC 15288 and EIA/ANSI 632): Conceptualize,
Development, Operational Test and Evaluation, and Transition to Operation.
Note: Assume stabilized evolutionary development, not agile development.
%
4.2 If available, please estimate the breakdown of requirements changes among each of
the following life cycle phases:
Note: The total should add to 100%
Conceptualize _%
Development _ %
Operational Test & Evaluation _ %
Transition to Operation _
100%
4.3 Based on your experience, please estimate the breakdown of the type of
requirements changes for your system of interest over the life cycle phases defined
in question 4.1.
Note: The total should add to 100%
Added _ %
Modified _ %
Deleted _
100%
The following questions deal with the causes of requirements volatility.
4.4 Please state your assessment of the following potential causes of requirements
volatility on a scale of 1-5; from 1: strongly disagree; to 5: strongly agree.
Causes of Requirements Volatility (1: Strongly Disagree, 2: Disagree, 3: Neither agree
nor disagree, 4: Agree, 5: Strongly Agree)
Poor initial understanding of the
system and customer needs
1 2 3 4 5
Lack of SE process maturity 1 2 3 4 5
Inexperienced staff 1 2 3 4 5
Customer-requested scope change 1 2 3 4 5
Immature technology 1 2 3 4 5
Changes in external environment
(political/business climate)
1 2 3 4 5
Changes in COTS products 1 2 3 4 5
Changes in co-dependent systems 1 2 3 4 5
Internal factors: Change in
policies, organizational structure
and/or leadership
1 2 3 4 5
Other causes; please specify
The following questions deal with the impact of requirements volatility
4.7 Please provide your best estimate of the additional systems engineering effort
required to incorporate a requirements change during the different life cycle
phases (defined by ISO/IEC 15288 and EIA/ANSI 632).
A value of “1” would indicate that no additional effort is required to incorporate
a requirements change, while a value of “2” indicates that a requirement would
require twice the effort if introduced later in the lifecycle as compared to the
effort of the same requirement if it had been part of the original baseline.
Conceptualize
Develop
Oper Test
& Eval
Transition
to Operation
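As a reading aid, the multiplier scale described in this question can be sketched as follows. The phase names mirror the column headers above; the multiplier values are illustrative placeholders, not elicited survey results.

```python
# Hypothetical phase penalty multipliers of the kind this question elicits.
# A value of 1 means no additional effort; 2 means twice the baseline effort.
PHASE_PENALTY = {
    "Conceptualize": 1.0,
    "Develop": 1.5,
    "Oper Test & Eval": 2.0,
    "Transition to Operation": 2.5,
}

def change_effort(baseline_effort_hours, phase):
    """Effort to incorporate a requirements change introduced in `phase`,
    relative to the effort the same requirement would have required had it
    been part of the original baseline."""
    return baseline_effort_hours * PHASE_PENALTY[phase]

# A change that would have taken 40 hours in the baseline costs 80 hours
# if introduced during Operational Test & Evaluation (placeholder values).
print(change_effort(40, "Oper Test & Eval"))  # 80.0
```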
4.8 Please provide your best estimate of the cost/effort penalty per type of requirements
change (added, modified, deleted) for each of the life cycle phases (defined by
ISO/IEC 15288 and EIA/ANSI 632).
A negative value indicates a decrease in effort/cost.
Cost/Effort Penalty Conceptualize
Develop
Oper Test
& Eval
Transition to
Operation
Added
Modified
Deleted
5. Size Driver Volatility
The following questions will ask you to provide information on the volatility of the
other COSYSMO size drivers: operational scenarios, interfaces, and algorithms.
5.1 This question deals with the impact of changing requirements on the other size
drivers. For every change in a system requirement, how many changes would you
expect in the system interfaces, operational scenarios, and algorithms? Please
provide your best estimate in terms of a ratio (e.g. 1:1, 1:5, 1:10).
5.1.1 Interfaces
Ratio 1:
No impact
Please comment:
5.1.2 Operational scenarios
Ratio 1:
No impact
Please comment:
5.1.3 Algorithms
Ratio 1:
No impact
Please comment:
The following questions deal with the tracking of size driver change metrics.
5.2 Does your organization keep metrics on changes to operational scenarios?
This driver represents the number of operational scenarios that a system must
satisfy. Such scenarios include both the nominal stimulus-response thread and
all of the off-nominal threads resulting from bad or missing data, unavailable
processes, network connections, or other exception-handling cases. The number
of scenarios can typically be quantified by counting the number of system test
thread packages or unique end-to-end tests used to validate the system
functionality and performance, or by counting the number of use cases, including
off-nominal extensions, developed as part of the operational architecture.
No
Yes; please clarify how:
Don’t know
5.3 Does your organization keep metrics on changes to system interfaces?
This driver represents the number of shared physical and logical boundaries
between system components or functions (internal interfaces) and those external
to the system (external interfaces). These interfaces typically can be quantified
by counting the number of external and internal system interfaces among
ISO/IEC 15288-defined system elements.
No
Yes; please clarify how:
Don’t know
5.4 Does your organization keep metrics on changes to system algorithms?
This driver represents the number of newly defined or significantly altered
functions that require unique mathematical algorithms to be derived in order to
achieve the system performance requirements. As an example, this could include
a complex aircraft tracking algorithm, such as a Kalman filter, being derived using
existing experience as the basis for the all-aspect search function. Another
example could be a brand-new discrimination algorithm being derived to identify the
friend-or-foe function in space-based applications. The number can be quantified
by counting the number of unique algorithms needed to realize the requirements
specified in the system specification or mode description document.
No
Yes; please clarify how:
Don’t know
The following questions deal with participation in the research project.
5.5 Are you interested in participating in the COSYSMO requirements volatility
research by providing project data?
Yes
No
5.6 If yes, please estimate the number of projects for which your organization is
likely to complete the COSYSMO requirements volatility data collection form.
This concludes our survey.
Thank you for participating. Do you have any comments about the survey or this
particular field of study?
If you would like to follow up on the results of this research project or participate in a
future survey, please feel free to contact me at the e-mail address and/or phone number
provided in the introduction.
Appendix F: Requirements Volatility Delphi Survey
1. Research Project Background
I am a doctoral student at the University of Southern California, Industrial and Systems
Engineering Department. You are being invited to participate in a research project that
investigates the impact, causes, and expected level of requirements volatility during an
engineering project’s lifecycle.
Requirements volatility is defined as the change in requirements (added, deleted, and
modified) over a given time interval. It is one of the requirements trends leading
indicators included in the Systems Engineering Leading Indicators Guide developed by
the Lean Advancement Initiative (LAI). The guide defines a leading indicator as "a
measure for evaluating the effectiveness of how a specific activity is applied on a
project in a manner that provides information about impacts that are likely to affect the
system performance objectives."
Your participation in this research study is voluntary. If you choose to take part, we ask
you to provide your name for the purpose of validating your participation. Your e-mail
address and phone number are requested only if you would like to receive additional
information regarding this research project.
The results of this study will be used for scholarly purposes only. The survey should
take approximately 15 minutes to complete. Your participation is greatly appreciated.
If you have any questions about this research project, please feel free to contact me.
Thank you for your interest and support,
Mauricio E. Peña
University of Southern California
E-mail address: mauricip@usc.edu
2. Survey Participant Background Information
2.1 Please provide your contact information:
Name: _____________________________
E-mail Address: _____________________________
Phone Number: _____________________________
The following questions request information regarding your professional background.
2.2 How would you describe your role in your organization? (Check all that apply)
Project leader/manager
Functional Manager
Requirements owner/developer
System/software designer
System Analyst
Validation/Verification Engineer
Information manager (configuration/data management)
Quality Assurance
Other, please specify
2.3 Please state your years of experience in systems or software engineering.
Years of experience
3 Organizational Background
The following questions request information on the background of your organization or
business.
3.1 Which of the following best describes your organization’s application domain?
(Check all that apply).
Aircraft/Avionics
Automotive
Data Systems / IT
Infrastructure
Military / Defense
Scientific / Research
Space Systems
Telecommunications
Transportation Systems
Other (please specify),
3.2 Which of the following best describes the category of systems that are produced
by your organization? (Check all that apply).
Information processing (e.g., a software-intensive system)
Command, Control, Communications, Computers, Intelligence, Surveillance and
Reconnaissance (C4ISR)
Machine (e.g., automobile, aircraft, spacecraft)
System of Systems (e.g., the GPS network)
Other, please specify
3.3 What is the approximate ratio of hardware to software, in terms of systems
engineering effort, for a typical system produced by your organization (assume
the first article in production)?
100% Hardware
75% Hardware, 25% Software
50% Hardware, 50% Software
25% Hardware, 75% Software
100% Software
4. Requirements Volatility
The following questions will ask you to provide information on requirements volatility
based on your professional experience.
Requirements volatility is defined as the percentage of the project's total number of
requirements that are added, modified or deleted over a specified period of time.
4.1 Based on your experience, please provide your best estimate of the percentage of
the baseline set of requirements that is likely to change over the following life cycle
phases (as defined by ISO/IEC 15288 and EIA/ANSI 632): Conceptualize,
Development, Operational Test and Evaluation, and Transition to Operation.
Note: Assume stabilized evolutionary development, not agile development.
Round 1:
%
Round 2:
%
4.2 Please estimate the breakdown of requirements changes across the
following life cycle phases:
Note: The total should add to 100%
Round #1:
Conceptualize _%
Development _ %
Operational Test & Evaluation _ %
Transition to Operation _ %
100%
Round #2:
Conceptualize _%
Development _ %
Operational Test & Evaluation _ %
Transition to Operation _ %
100%
4.3 Based on your experience, please estimate the breakdown of the type of
requirements changes for your system of interest over the life cycle phases defined
in question 4.1.
Note: The total should add to 100%
Round #1
Added _ %
Modified _ %
Deleted _ %
100%
Round #2
Added _ %
Modified _ %
Deleted _ %
100%
The following questions deal with the impact of requirements volatility.
4.4 Please provide your best estimate of the cost/effort penalty per type of requirements
change (added, modified, deleted) for each of the life cycle phases (defined by
ISO/IEC 15288 and EIA/ANSI 632).
A value of “1” would indicate that no additional effort is required to incorporate
a requirements change, while a value of “2” indicates that a requirement would
require twice the effort. A value less than “1” indicates a decrease in effort/cost.
Round #1
Cost/Effort
Penalty
Conceptualize
Develop
Oper Test
& Eval
Transition to
Operation
Added
Modified
Deleted
Round #2
Cost/Effort
Penalty
Conceptualize
Develop
Oper Test
& Eval
Transition to
Operation
Added
Modified
Deleted
This concludes our survey.
Thank you for participating. Do you have any comments about the survey or this
particular field of study?
If you would like to follow up on the results of this research project or participate in a
future survey, please feel free to contact me at the e-mail address and/or phone number
provided in the introduction.
Abstract
Although changes in requirements are expected as part of a system's development, excessive volatility after the requirements baseline is likely to result in cost overruns and schedule extensions in large complex systems. Furthermore, late changes in requirements may cause significant rework of engineering products and lead to project failure. Changes in requirements should be expected, accounted for, and managed within the context of the system of interest. Unfortunately, system developers lack adequate methods and tools to anticipate and manage the impact of volatile requirements, and cost estimating techniques often fail to account for their economic consequences.

This dissertation presents an extension to COSYSMO, a generally available parametric systems engineering cost model, which incorporates requirements volatility as a predictor of systems engineering effort within COSYSMO's structure and scope, with the aim of improving the model's cost estimation capabilities. The requirements volatility model extension was developed through a combination of expert judgment, gathered through surveys and discussions in six different research workshops, and historical data collected from 25 projects. The null hypothesis that the volatility of requirements throughout a system's life cycle is not a statistically significant predictor of systems engineering effort was rejected in favor of the alternative hypothesis with a p-value of 0.03. A comparison of the estimation accuracy of COSYSMO to that of the model that includes volatility effects shows an improvement in predictive accuracy from 52% to 80% at the PRED(20) level and a reduction in the mean magnitude of relative error (MMRE) from 21% to 16%. In addition, the coefficient of determination between the predictor, systems engineering size adjusted for diseconomies of scale, and the response, actual systems engineering effort, improved from an R² of 0.85 to an R² of 0.92 when the volatility factor was applied to the model.
❧ In addition to the mathematical model that quantifies the impact of volatility on systems engineering effort
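The accuracy metrics cited in the abstract, MMRE and PRED(20), have standard definitions in the cost estimation literature and can be sketched as follows. The sample data below are illustrative only, not the dissertation's 25-project dataset.

```python
def mre(actual, estimated):
    """Magnitude of relative error for a single project."""
    return abs(actual - estimated) / actual

def mmre(actuals, estimates):
    """Mean magnitude of relative error across a set of projects."""
    errors = [mre(a, e) for a, e in zip(actuals, estimates)]
    return sum(errors) / len(errors)

def pred(actuals, estimates, level=0.20):
    """PRED(L): the fraction of estimates whose relative error is
    within L of the actual value (PRED(20) uses L = 0.20)."""
    within = [mre(a, e) <= level for a, e in zip(actuals, estimates)]
    return sum(within) / len(within)

# Illustrative data: four projects with actual vs. estimated effort.
actuals = [100, 200, 300, 400]
estimates = [110, 150, 310, 390]
print(round(mmre(actuals, estimates), 3))  # 0.102
print(pred(actuals, estimates))            # 0.75
```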
Asset Metadata
Creator: Peña, Mauricio Eduardo (author)
Core Title: Quantifying the impact of requirements volatility on systems engineering effort
School: Viterbi School of Engineering
Degree: Doctor of Philosophy
Degree Program: Industrial and Systems Engineering
Publication Date: 07/06/2012
Defense Date: 04/30/2012
Publisher: University of Southern California (original), University of Southern California. Libraries (digital)
Tag: COSYSMO, OAI-PMH Harvest, requirements volatility, systems engineering
Language: English
Contributor: Electronically uploaded by the author (provenance)
Advisor: Boehm, Barry W. (committee chair), Settles, F. Stan (committee chair), Ghanem, Roger G. (committee member), Valerdi, Ricardo (committee member)
Creator Email: mauricip@usc.edu, mauriziopena@yahoo.com
Permanent Link (DOI): https://doi.org/10.25549/usctheses-c3-52794
Unique Identifier: UC11290003
Identifier: usctheses-c3-52794 (legacy record id)
Legacy Identifier: etd-PeaMaurici-917.pdf
Dmrecord: 52794
Document Type: Dissertation
Rights: Peña, Mauricio Eduardo
Type: texts
Source: University of Southern California (contributing entity), University of Southern California Dissertations and Theses (collection)
Access Conditions: The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the a...
Repository Name: University of Southern California Digital Library
Repository Location: USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA