IMPACTS OF SYSTEM OF SYSTEM MANAGEMENT STRATEGIES ON
SYSTEM OF SYSTEM CAPABILITY ENGINEERING EFFORT
by
Jo Ann Lane
A Dissertation Presented to the
FACULTY OF THE GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(INDUSTRIAL AND SYSTEMS ENGINEERING)
May 2009
Copyright 2009 Jo Ann Lane
Acknowledgements
This research endeavor is the culmination of a lifelong dream that has been
supported and encouraged by many. The realization of this dream started with my
soul-mate, Mike, who has always encouraged me to follow my dreams and has
continued to lend an ear and challenge me when I needed to explore new
concepts. As I pursued the possibility of an advanced degree, I found great
enthusiasm and support from my advisor, Dr. Barry W. Boehm. He has guided
me through a wonderful adventure, helped me keep my sense of humor as I
struggled with the reality of “state-of-the-art” and the availability of data from
industry, and kept me out of deep pits and quicksand. The realization of this
research effort also exists because of the tremendous support from my other
committee members, Dr. F. Stan Settles, Dr. George Friedman, and Dr. Paul
Adler, as well as the constructive systems engineering cost model (COSYSMO)
research work done by Dr. Ricardo Valerdi. In addition, this research could not
have been conducted without support from the University of Southern California
Center for Systems and Software Engineering corporate, government, and
academic affiliates. Other key supporters include the Department of Defense
(DoD) Office of the Secretary of Defense (OSD) Systems Engineering Guide for
System of Systems sponsors and researchers (Kristen Baldwin, Dr. Judith
Dahmann, Ralph Lowry, and George Rebovich); Rob Flowe; Dr. David Zubrow;
Dr. Michael Green and his Naval Postgraduate School students; The Aerospace
Corporation (Marilee Wheaton and Richard Adams) and their consultants
(Donald Greer and Dr. Laura Black); and Cheryl Jones (US Army). Finally, I
would also like to acknowledge my other moral supporters: fellow student
Indrajeet Dixit, Dr. Mary Anne Herndon, Wendy Becker Hunt, Tony Jordano,
and the computer science faculty at San Diego State University, especially Dr.
Leland Beck, Dr. Carl Eckberg, Dr. Teresa Larson, and Dr. Marie Roch.
And to the nay-sayers who thought systems of systems engineering is no
different from the systems engineering that engineers have been doing for
decades: you kept me going with your personal observations and questions. It
turned out that we were all correct—it just depends upon your point of view of
the system of systems.
This research has also received support from the following organizations:
International Council on Systems Engineering Foundation/Stevens Doctoral Award
Committee; Practical Software and Systems Measurement; the Space Systems
Cost Analysis Group; and DoD OSD Acquisition, Technology, and Logistics
(AT&L) Software Engineering and Systems Assurance (SSA).
Table of Contents

Acknowledgements
List of Tables
List of Figures
Abstract
Chapter 1: Introduction
1.1 Research Overview
1.2 Motivation
1.3 Proposition and Hypotheses
1.4 Intended Research Contribution
Chapter 2: Background and Related Work
2.1 Overview of Literature Review
2.2 What Is a System of Systems?
2.3 Related Engineering Disciplines
2.4 Traditional SE, SoSE, and Related Industry Standards
2.5 Engineering Cost Models
2.6 Related Organizational Theory Concepts
2.7 Process Models
Chapter 3: Methodology
3.1 Overview of Research Design
3.2 Data Collection Instruments
3.3 Data Analysis
3.4 Potential Threats to Validity and Limitations
Chapter 4: The SoSE Model
4.1 SoSE Process Model Assumptions and Constraints
4.2 SoSE Model Effort Calculations
4.3 SoSE Model Effort Multipliers for Effort Calculations
4.4 SoSE Process Model Parameter Variations
Chapter 5: Research Results
5.1 SoS Size Variation
5.2 Scope of SoS Capability
5.3 Summary of Model Executions
Chapter 6: Conclusions
6.1 General Conclusions
6.2 Future Work
6.3 Summary of Research Contributions
References
Appendix A: DoD SoS Case Study Summaries
Appendix B: System Interdependency Survey
Appendix C: COSYSMO Parameter Definitions Tailored for SoSE Comparison Model
List of Tables

Table 1. SoSE Management Approaches
Table 2. Summary of COSYSMO Parameters [Valerdi, 2005]
Table 3. Mapping of DoD SoSE Core Elements to COSYSMO Parameters
Table 4. SoSE Model Equation Term Definitions
Table 5. SoSE EM for SoS Capability Requirements Rationale
Table 6. SoSE Oversight Requirements Cost Factor Rationale
Table 7. “SE for SoS Requirements, SoSE Support” Cost Factor Rationale
Table 8. “SE for SoS Requirements, No SoSE Support” Cost Factor Rationale
Table 9. SE for Non-SoS-Requirements Cost Factor Rationale
Table 10. SoSE Model Parameters
Table 11. Summary of SoSE Model Executions
Table A-1. DoD SoS Case Study Summaries
Table B-1. Summary of System Interdependency Survey Responses Part I
Table B-2. Summary of System Interdependency Survey Responses Part II
List of Figures

Figure 1. Graphical View of SoSE Model
Figure 2. SoSE EM for SoS Requirements
Figure 3. SoSE EM to Monitor Constituent-system Non-SoS-Requirements
Figure 4. SE EM for SoS Requirements with SoSE Support
Figure 5. SE EM for SoS Requirements without SoSE Support
Figure 6. SE Effort Multiplier for System-Specific Non-SoS-Requirements
Figure 7. SoSE Model Case 1
Figure 8. SoSE Model Case 2
Figure 9. SoSE Model Case 3
Figure 10. SoSE Model Case 4
Figure 11. SoSE Model Case 5
Figure 12. SoSE Model Case 6
Figure 13. SoSE Model Case 7a (SoS Size = 10)
Figure 14. SoSE Model Case 7b (SoS Size = 100)
Figure 15. SoSE Model Case 8
Figure 16. SoSE Model Case 9
Figure 17. SoSE Model Case 10
Figure 18. SoSE Model Case 11
Figure 19. SoSE Model Case 12
Figure 20. Research Summary
Abstract
Today’s need for more complex, more capable systems in a short timeframe
is leading more organizations towards the integration of new and existing systems
with commercial-off-the-shelf (COTS) products into network-centric,
knowledge-based systems of systems (SoS). With this approach, system
development processes to define the new architecture, identify sources to either
supply or develop the required components, and eventually integrate and test
these high level components are evolving and are being referred to as SoS
Engineering (SoSE). In recent years, the systems engineering (SE) community
has struggled to decide if SoSE is really different from traditional SE and, if it is
different, how it differs. Recent research and case studies [DoD, 2008] have
confirmed that there are indeed key differences and that traditional SE processes
are not sufficient for SoSE. However, as with any engineering discipline, how
and how much SoSE differs depends on several factors. This research further
investigated SoSE through the study of several large-scale SoSE programs and
several SE programs that were considered part of one or more SoSs to identify
key SoSE strategies and how these strategies differed based on SoS
characteristics and constituent-systems. The results of these investigations were
then captured in a system dynamics model that allows one to explore SoSE
options with respect to engineering effort and return on SoSE investment. Two
SoS capability development strategies (with and without an SoSE team to guide
capability development) were compared and used to assess the value-added of the
SoSE team with respect to total SE effort expended to engineer an SoS capability.
It is clear from both the Office of the Secretary of Defense (OSD) pilot studies
[DoD, 2008] and the system dynamics model analysis conducted as part of this
research that there exist conditions under which investments in SoSE have
positive and negative returns on investment. This dissertation provides the first
quantitative determination of these conditions, and points out directions for future
research that would strengthen the results.
Chapter 1: Introduction
1.1 Research Overview
Today’s need for more complex, more capable systems in a short timeframe
is leading more organizations towards the integration of new and existing systems
with commercial-off-the-shelf (COTS) products into network-centric,
knowledge-based system of systems (SoS). With this approach, system
development processes to define the new architecture, identify sources to either
supply or develop the required components, and eventually integrate and test
these high level components are evolving and are being referred to as SoS
Engineering (SoSE) [Lane and Valerdi, 2007; Ring and Madni, 2005]. As a result
of recent SoS research [Maier, 1998; Dahmann and Baldwin, 2008], four types of
SoSE management approaches have been identified: virtual, collaborative,
acknowledged, and directed. These categories are primarily based upon the levels
of responsibility and authority overseeing the evolution of the SoS. Table 1
describes these SoSE management approaches.
Table 1. SoSE Management Approaches

- Virtual [Maier, 1998]: Lacks a central management authority and a clear SoS purpose. Often ad hoc and may use a service-oriented architecture where the constituent-systems are not necessarily known.
- Collaborative [Maier, 1998]: Constituent-system engineering teams work together more or less voluntarily to fulfill agreed upon central purposes. There is no SoSE team to guide or manage SoS-related activities of the constituent-systems.
- Acknowledged [Dahmann and Baldwin, 2008]: Has recognized objectives, a designated manager, and resources at the SoS level (SoSE team), but not complete authority over the constituent-systems. Constituent-systems maintain their independent ownership, objectives, funding, and development approaches.
- Directed [Maier, 1998]: The SoS is centrally managed by a government, corporate, or Lead System Integrator (LSI) organization and built to fulfill specific purposes. Constituent-systems maintain the ability to operate independently, but their evolution is predominately controlled by the SoS management organization.
In 2007, the United States Department of Defense (DoD) Office of the
Secretary of Defense (OSD) Acquisition, Technology, and Logistics (AT&L)
Software Engineering and Systems Assurance (SSA) organization sponsored a
group led by Dr. Judith Dahmann (MITRE Corporation) and supported by
George Rebovich (MITRE Corporation), Jo Ann Lane (USC), and Ralph Lowry
(MTSI Incorporated) to conduct case study investigations to better understand
SoSE. Using data collected from these SoSE case studies [DoD, 2008] and single
system information on system interdependencies from other surveys, this
research effort developed an SoSE model based on the constructive systems
engineering cost model (COSYSMO) [Valerdi, 2005]. This SoSE model is used
to compare the effort required to engineer an SoS capability (or capability
modification) using either the collaborative or acknowledged SoSE approach.
The model allows one to modify the SoS size, the size and scope of a proposed
new SoS capability or capability modification, and the concurrent constituent-
system volatility. By varying these parameters and computing the associated
SoSE and systems engineering (SE) effort for the collaborative and
acknowledged approaches, one can find the point, if any, at which the size and
complexity of the SoS or the SoS capability make it more cost-effective to
evolve the SoS using an acknowledged SoSE team.
1.2 Motivation
Recent reports [DoD, 2008; Northrop et al, 2006; USAF SAB, 2005] have
indicated that SoSE activities are considerably different from the more traditional
SE activities. Many SoS programs have been adapting and expanding traditional
SE activities to handle the increased size and scope of SoSs. Many SoSE teams
interviewed as part of the SSA SoSE case studies [DoD, 2008] indicated that
their SoS was managed primarily as a collaborative SoS until it reached a point
where it was either too important, too complex, or not cost effective to continue
managing it in this manner. At this point, an SoSE team was designated to guide
the evolution of the group of systems. Typically, in this first evolutionary step,
the SoSE team has overarching engineering responsibilities and can influence the
constituent-systems, but does not have complete authority over the constituent-
systems, i.e., the acknowledged SoS management approach.
Of particular interest to SoS sponsors is identifying the point at which a
collaborative SoS should be transitioned to an acknowledged SoS. The goal of
this research is to model and analyze SoS complexity and constituent-system
interdependencies to provide the desired insights into the management of SoSs.
1.3 Proposition and Hypotheses
The principal research question being addressed in this research is: When is it
cost effective to create and empower an SoSE team to oversee and guide the
evolution of an SoS?
The proposed central hypothesis is: Subject to a set of reasonably grounded
assumptions, there exists a threshold where it is more cost effective to manage
and engineer changes to an SoS using an SoSE team, and this threshold can be
located by modeling the complexity and desired capability interdependency
characteristics of the SoS.
To conduct the proposed research, a process model based upon the
COSYSMO cost model is used to explore ranges of SoS size, SoS constituent-
system volatility, and size and scope of SoS changes to determine the point, if
any, where it is more cost effective with respect to estimated labor hours to
employ an SoSE team to oversee and guide the evolution of the SoS.
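To make this exploration concrete, the sketch below shows the kind of comparison implied here. The two effort functions are hypothetical stand-ins, not the COSYSMO-based model developed in Chapter 4, and every constant is purely illustrative.

def collaborative_effort(sos_size, capability_size, volatility):
    # Hypothetical stand-in: each affected system does its own SE, with
    # coordination cost growing with the number of constituent-systems.
    return capability_size * (1.0 + 0.05 * sos_size) + 0.2 * volatility

def acknowledged_effort(sos_size, capability_size, volatility):
    # Hypothetical stand-in: a fixed SoSE team overhead, with cheaper
    # per-system SE because the SoSE team guides the work.
    return 40.0 + 0.8 * capability_size + 0.1 * volatility

for sos_size in (2, 5, 10, 50, 100):
    collab = collaborative_effort(sos_size, 100, 50)
    ack = acknowledged_effort(sos_size, 100, 50)
    cheaper = "acknowledged" if ack < collab else "collaborative"
    print(f"SoS size {sos_size:3d}: collaborative={collab:7.1f}, "
          f"acknowledged={ack:7.1f} -> {cheaper}")

Even in this toy version, a crossover point appears as the SoS grows; the research question is where that point lies for realistic, calibrated parameter values.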
1.4 Intended Research Contribution
This research provides valuable insights into management approaches for
SoSs as well as a framework to support the estimation of effort associated with a
proposed new SoS capability or capability modifications. In particular, this
research provides:
- Management guidance to DoD leadership and system sponsors with respect to the management of sets of inter-related systems that are functioning as an SoS.
- Guidance to SoSs in other domains that are managed as collaborative or acknowledged SoSs.
- A model that:
  - Provides a method to conduct trade-off analyses of different approaches for implementing a given SoS capability for a specific SoS.
  - Can evolve into an SoSE cost estimation model through calibration for a given SoS or SoS domain.
Chapter 2: Background and Related Work
2.1 Overview of Literature Review
SoSE is considered by many to be a multidisciplinary area and is, in fact, a
popular topic at many multidisciplinary conferences such as the Integrated
Design and Process Technology conference [SDPS, 2006] and the Complex
Systems conference [The Aerospace Corporation et al., 2007]. These conferences
reach out to researchers in the areas of biology, sociology, psychology, business
process engineering, mathematics, computer science, and engineering, to name a
few, in order to share cross-cutting information and concepts that may help to
understand complex topics of interest.
SoSE includes management and organizational aspects as well as technical
aspects of systems engineering at the SoS level. On the technical side, it includes
systems engineering specialties and software and information management
specialties. As these systems become larger and larger, both engineers and
researchers are looking at the unique aspects of complex systems and working on
the edge of chaos to develop new and innovative approaches for SoSE [Northrop
et al., 2006; Kreitman, 1996; Sheard, 2006; Berryman et al., 2006; Prokopenko et
al., 2006; Highsmith, 2000].
The following sections summarize the literature in the areas key to this
proposed research, namely, system of systems, systems of systems engineering,
traditional systems engineering, engineering cost modeling, organizational
theory, and process modeling.
2.2 What Is a System of Systems?
The earliest references in the literature to “systems within systems” or
“system of systems” can be found in [Berry, 1964] and [Ackoff, 1971]. These
1960-1970 era SoS concepts are early insights into the evolution of today’s
systems of systems. Even though the term “system of systems” was not
commonly used forty years ago, systems of systems were being developed and
deployed. These SoSs are represented by:
- Undersea surveillance and weapons systems such as the Integrated Undersea Surveillance System (IUSS) [FAS, 2006; IUSSCAA, 2006], Sound Surveillance System (SOSUS) [GlobalSecurity.ORG, 2005], and Anti-Submarine Warfare (ASW) system [Smithsonian Institution, 2000] used during the Cold War era to track and evade Russian submarines
- The Global Positioning System (GPS) [NAVSTAR, 2006], which is today considered both an SoS and a constituent-system for other SoSs
- Military command and control centers.
As these types of integrated systems became more common, system
engineering experts and researchers began to define and study them as a special
class of systems. Also, the term has become a popular way to represent a strategic
and economic approach to enhancing existing system capabilities, and we now
have an abundance of definitions.
A review of recent publications [Lane and Valerdi, 2007] shows that the term
“system of systems” means many things to many different people and
organizations. In the business domain, an SoS is the enterprise-wide or multiple
enterprise integration and sharing of core business information across functional
and geographical areas. In the military domain, an SoS is a dynamic
communications infrastructure and a configurable set of constituent-systems to
support operations in a constantly changing, sometimes adversarial, environment.
For some, an SoS may be a multisystem architecture that is planned up-front by a
prime contractor or lead system integrator. For others, an SoS is an architecture
that evolves over time, often driven by organizational needs, new technologies
appearing on the horizon, and available budget and schedule. The evolutionary
SoS architecture is more of a network architecture that is reconfigured and grows
with needs and available resources.
Some SoS definitions refer to “emergent behaviors” or a “common purpose”
where an SoS can perform functions that cannot be provided by any of the
constituent-systems [Cocks, 2006; DoD, 2006; Eisner, 1993; Kriegel, 1999;
Maier, 1998; Sage and Cuppan, 2001; Shenhar, 1994; USAF SAB, 2005].
However, if one reviews definitions of a system [ANSI/EIA, 1999; Blanchard and
Fabrycky, 1998; INCOSE, 2006; ISO/IEC, 2002; Rechtin, 1991], one sees that
many of these definitions indicate that a system is a set of components working
together for a common objective or purpose. So, “emergent behavior” does not
appear to be a system characteristic unique to SoSs. What is controversial in the
SoS arena is whether emergent behaviors should be planned and managed. There
are those that would like to see the development of convergent protocols that
could be implemented in a variety of systems to “SoS-enable” them, allowing
these systems to easily come and go in SoSs with little additional effort [USAF
SAB, 2005]. There are others that are concerned that if one is not careful, there
may be undesirable emergent behaviors (e.g., safety or security problems), and to
avoid these problems, emergent behaviors must be planned, tested, and managed
[DoD, 2008].
In any case, users and nodes in the SoS network may be either fixed or
mobile. Communications between constituent-systems in the SoS are often some
combination of common and custom-defined protocols. Networks may tie
together other networks as well as nodes and users. SoS constituent-systems
typically come and go over time. These constituent-systems can operate both
within the SoS framework and independent of this framework. In a general sense,
it is challenging to define clear boundaries of a specific SoS because of its
dynamic nature.
What is unique to SoSs by many definitions [Maier, 1998; Sage and Cuppan, 2001; USAF SAB, 2005; DoD, 2006; DoD, 2008; Cocks, 2006] is that they are comprised of constituent-systems that possess the following characteristics:
- Operationally independent, meaning that most or all of the constituent-systems can perform useful functions both within the SoS and outside of the SoS.
- Managerially independent, meaning that most or all of the constituent-systems are managed and maintained for their own purposes.
The research described in this dissertation assumes that the SoSs under consideration are those comprised of constituent-systems that possess the characteristics of operational and managerial independence.
2.3 Related Engineering Disciplines
The term “system of systems engineering” is primarily a DoD term to
describe the system development approach of integrating existing military
systems together to create the capability to perform functions that cannot be
performed by any of the constituent-systems alone.
Other domains have similar terms for this type of approach. When most or all
of the constituent-systems are COTS products, a term often used is “COTS
integration.” When the constituent-systems are the business or information
technology (IT) systems of an organization (or a set of related organizations), a
term often used is “enterprise-wide engineering.”
When an organization (or a group of related organizations) decides that the
development and maintenance of their IT systems should not be a core internal
function, they may decide to outsource the development and maintenance of
these constituent-systems. This approach or discipline is often referred to as “IT
outsourcing.”
2.4 Traditional SE, SoSE, and Related Industry Standards
Systems engineering began in the earliest of times and has evolved as
technologies have evolved. The International Council on Systems Engineering
(INCOSE) Systems Engineering Handbook [INCOSE, 2006] provides an
overview of the origins of today’s systems engineering that is often very
multidisciplinary in nature. The INCOSE history of SE starts in 1829 with the development of the rocket locomotive, with the discipline gaining some rigor in the 1930s through the engineering of the British air-defense systems and Bell Labs work. Today, there are several standards for systems engineering processes:
ANSI/EIA Standard 632 [ANSI/EIA, 1999]: This standard defines a set
of five systems engineering process areas and thirty-three tasks or
activities associated with the various process areas. The purpose of this
standard is to support the engineering (or re-engineering) of a system.
The phases addressed are system conceptualization, development, and
transition to operation. The following summarizes the activity categories
for each ANSI/EIA process:
- Acquisition and supply: Product supply, product acquisition, and
supplier performance.
- Technical management: Process implementation strategy, technical
effort definition, schedule and organization, technical plans, work
directives, progress against plans and schedules, progress against
requirements, technical reviews, outcomes management, and
information dissemination.
- System design: Acquirer requirements, other stakeholder
requirements, system technical requirements, logical solution
representation, physical solution representations, and specified
requirements.
- Product realization: Implementation and transition to use.
- Technical evaluation: Effectiveness analysis, tradeoff analysis, risk
analysis, requirements statements validation, acquirer requirements
validation, other stakeholder requirements validation, system technical
requirements validation, logical solution representations validation,
design solution verification, end product verification, enabling product
readiness, and end products validation.
ISO/IEC Standard 15288 [ISO/IEC, 2002]: This ISO standard provides a
framework for describing system lifecycle processes in the areas of
hardware, software, and human interfaces. The scope of the processes is
from system conception through the initial development, operation,
maintenance, and evolution of the system, and all the way through system
retirement.
Defense Acquisition Guidebook (DAG) [DoD, 2006]: The DAG is
designed to provide acquisition personnel and their industry partners with
a comprehensive reference to best business practices and supporting
policies, statutes, and lessons learned that are applicable to the
development of DoD systems. As part of this guidance, the DAG defines
and describes eight technical management processes (technical planning,
requirements management, interface management, risk management,
configuration management, technical data management, technical
assessment, and decision analysis) and eight technical processes
(requirements development, logical analysis, design solution,
implementation, integration, verification, validation, and transition).
Software Engineering Institute’s Capability Maturity Model Integrated
(CMMI) [SEI, 2001]: The CMMI provides a framework to help an
organization define its standard systems engineering processes. It focuses
on process areas that are applicable to both systems and software
engineering (process management, project management, engineering, and
support) and provides guidance for defining standard organizational
processes in these areas in terms of goals, practices, and typical work
products. The framework is organized in a manner that also provides
guidance for continual process improvement and a method for assessing
the maturity of a given organization’s system and software engineering
processes.
While these standards have various phases of interest and describe different
process areas and activities, there is considerable overlap between the phases,
process areas, and activities of each. For example, ANSI/EIA 632 is viewed as a
more detailed description of the ISO/IEC 15288 phases of conceptualization,
development, and transition to operation. The DAG references ANSI/EIA 632,
ISO/IEC 15288, and the SEI CMMI as examples of best practice models and
standards. However, in an attempt to standardize terminology across these
various models and standards, the DAG defines its own set of processes.
Most of these standards address SoS development to some extent and at least
imply that the processes, standards, and models apply to SoSE. The DoD OSD
AT&L SSA organization has gone a step further and developed an SoSE
guidebook that extends the DAG, the Systems Engineering Guide for System of
Systems (hereinafter referred to as the DoD SoSE guidebook) [DoD, 2008]. The
DoD SoSE guidebook is designed to be used in conjunction with the DAG. It
defines and describes seven core SoSE activities. These seven core SoSE
activities both expand upon the activities described in the DAG and incorporate all 16 of the systems engineering technical and technical management
processes. A key message of the DoD SoSE guidebook is that SoSE core
elements are built upon the traditional SE activities, but they are both an
expansion of the traditional activities and organized in a very different manner.
The key SoSE core activities in the DoD SoSE guidebook [DoD, 2008] are:
- Translating capability objectives: The SoSE team must develop a basic understanding of the expectations of the SoS capability and then translate the capability into a set of requirements for meeting the expectations.
- Understanding systems and relationships: In an SoS, the focus is on the systems which contribute to SoS capabilities and their interrelationships instead of the traditional focus on boundaries and interfaces.
- Assessing actual performance to capability objectives: To be able to understand current SoS performance and ascertain the impact of constituent-system changes, the SoSE team establishes SoS metrics, defines methods for assessing performance, and conducts evaluations of actual performance using the metrics and methods.
- Developing, evolving, and maintaining an SoS architecture/design: As soon as systems start interfacing with each other and sharing data, there is an implied architecture for the collection of systems (or SoS). One of the key responsibilities of an SoSE team is to establish and maintain a sustainable framework to support the evolution of the SoS to meet user needs. Evolutionary changes include changes in systems functionality, performance, or interfaces. These needed changes often require systems to migrate from the early “implied” architecture to a more robust architecture or framework.
- Monitoring and assessing changes: The SoSE team must constantly monitor proposed or potential changes to the constituent-systems and assess their impacts to a) identify opportunities for enhanced functionality and performance, and b) preclude or mitigate problems for the SoS and other constituent-systems.
- Addressing new requirements and options: The SoSE team reviews, prioritizes, and determines which SoS requirements to implement next. Part of this activity is evaluating various options for implementing the capability and requires the participation of the affected constituent-systems.
- Orchestrating upgrades to SoS: This activity is the actual implementation of the desired capabilities and includes the planning, coordination, integration, and testing of changes in the constituent-systems to meet SoS needs.
2.5 Engineering Cost Models
As mentioned in the introduction, one of the goals of this research is to better
understand SoSE and support the development of cost models that may be used to
estimate cost and support tradeoff analyses. Considerable work has been done in
the areas of software development, systems engineering, and COTS integration
cost modeling, as well as some preliminary work in the area of SoSE cost
modeling [Boehm et al., 2005]. The cost models focus on the engineering
product, the processes used to develop the product, and the skills and experience
levels of the technical staff responsible for the development of the product. These
parametric cost models require a set (one or more) of size drivers to describe the
size of the product to be developed. The size driver(s) are used to compute a
nominal effort (labor hours) for the project. In addition, there is a set of cost
drivers to adjust the nominal effort either up or down, depending on the
characteristics of the product to be developed, the processes used to develop the
product, and the experience and capabilities of the people developing the product.
Key to these cost models is determining through weighting factors how much
influence each cost driver should have on the estimated effort. The computed
adjustment factor, based on the cost drivers and associated weights, is called an
effort multiplier (EM).
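A minimal sketch of this structure in the general form used by models such as COSYSMO: a weighted sum of the size drivers yields a nominal effort, which is then adjusted by the product of the cost-driver EMs. The constants, weights, and EM values below are placeholders, not calibrated COSYSMO values.

import math

def parametric_effort(size_drivers, weights, cost_driver_ems, A=1.0, E=1.06):
    # Nominal size: weighted sum of the size drivers (requirements,
    # interfaces, algorithms, operational scenarios).
    size = sum(weights[name] * count for name, count in size_drivers.items())
    # Composite effort multiplier: product of the cost-driver ratings,
    # each above or below 1.0 depending on the project's characteristics.
    em = math.prod(cost_driver_ems.values())
    # A is a calibration constant and E a (dis)economy-of-scale exponent;
    # both are placeholder values here.
    return A * size ** E * em

hours = parametric_effort(
    size_drivers={"requirements": 120, "interfaces": 8,
                  "algorithms": 4, "scenarios": 6},
    weights={"requirements": 1.0, "interfaces": 4.5,
             "algorithms": 6.0, "scenarios": 30.0},
    cost_driver_ems={"requirements_understanding": 0.9,
                     "architecture_understanding": 1.1,
                     "tool_support": 1.0},
)
print(f"size- and EM-adjusted SE effort: {hours:.0f} labor units")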
The systems engineering cost model, COSYSMO, is a calibrated cost model
that most closely estimates the effort associated with SoSE activities in a
collaborative or acknowledged SoS. Table 2 describes the COSYSMO Version
1.0 parameters.
As a follow-up to the development of COSYSMO, some heuristics were
developed to guide the distribution of effort across traditional EIA 632 SE
processes [Valerdi and Wheaton, 2005]:
- Acquisition and supply: 7%
- Technical management: 17%
- System design: 30%
- Product realization: 15%
- Technical evaluation: 31%
This distribution guidance is useful when a single organization is responsible
for only a part of the traditional SE effort. This distribution information can also
be used to support the development of staffing plans for a given SE project.
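For example, a total SE effort estimate can be spread across the five ANSI/EIA 632 process areas using these heuristics. A minimal sketch; the 10,000-hour total is illustrative:

# Distribute a total SE effort estimate across the ANSI/EIA 632 process
# areas using the heuristic percentages of [Valerdi and Wheaton, 2005].
DISTRIBUTION = {
    "Acquisition and supply": 0.07,
    "Technical management": 0.17,
    "System design": 0.30,
    "Product realization": 0.15,
    "Technical evaluation": 0.31,
}

total_hours = 10_000  # illustrative total SE estimate
for process_area, fraction in DISTRIBUTION.items():
    print(f"{process_area:25s} {fraction * total_hours:7.0f} hours")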
As various SE organizations began to use COSYSMO, they identified a
significant limitation of the COSYSMO cost model: it does not adequately
address reuse of existing designs and system components. To remedy this, [Wang
et al, 2008] provides a framework for incorporating adjustments for reuse into
COSYSMO.
Table 2. Summary of COSYSMO Parameters [Valerdi, 2005]

Size Drivers
- Number of system requirements: Number of requirements driving the SE effort to develop/modify the system-of-interest, adjusted for complexity.
- Number of system interfaces: Number of shared physical and logical boundaries between system components or functions (internal interfaces) and those external to the system (external interfaces) that must be developed or modified, adjusted for complexity.
- Number of algorithms: Number of newly defined or significantly altered functions that require unique mathematical algorithms to be derived in order to achieve the system performance requirements, adjusted for complexity.
- Number of operational scenarios: Number of operational scenarios that the system must satisfy, adjusted for complexity.

Cost Drivers
- Requirements understanding: Level of understanding of the requirements by stakeholders (customers, users, and developers).
- Architecture understanding: Assessment of relative difficulty in determining and managing the system architecture.
- Level of service requirements: Assessment of the difficulty and criticality of service requirements, e.g., safety, security, interoperability, performance.
- Migration complexity: Extent to which the legacy system(s) limitations impact the migration to the new capability.
- Technology risk: Overall assessment of the planned (or existing) technology maturity, readiness, and obsolescence.
- Documentation: Assessment of the overall level of formality and detail of documentation to be developed for the system of interest.
- Number and diversity of installations/platforms: Assessment of the level of complexity created by the number of different platforms on which the system will be (or is) hosted/installed.
- Number of recursive levels in the design: Assessment of the difficulty associated with the number and complexity of the expected levels of design related to the system-of-interest.
- Stakeholder team cohesion: Assessment of the stakeholder culture, compatibility, familiarity, and trust.
- Personnel/team capability: Assessment of the composite intellectual capability of a team of systems engineers (compared to the national pool of system engineers) to analyze complex problems and synthesize solutions.
- Personnel experience/continuity: Assessment of the overall personnel experience levels and the expected continuity of the staff during the project.
- Process capability: Assessment of the overall consistency and effectiveness of the project team at performing SE processes.
- Multisite coordination: Anticipated impact of geographic distribution of stakeholders, team members, and resources.
- Tool support: Expected level of coverage, integration, and maturity of the tools in the SE environment.
However, there are some additional limitations that must be handled before the COSYSMO cost model can be used to estimate SoSE effort. Initial efforts to develop a separate SoSE cost model for directed SoSs are described in a technical report [Lane and Boehm, 2007]. This report identified several SoSE parameters that are not in the current version of COSYSMO. These include:
- Constituent-system maturity and stability
- Constituent-system readiness
- Cost/schedule compatibility of proposed SE approach
- Level of overall risk resolution
- Number of constituent-systems and associated organizations.
In addition, as the number of organizations and groups participating in the development of the SoS grows, it becomes necessary to characterize the personnel, process, and tool cost factors at a lower organizational level. It is seldom “one size fits all” for these types of efforts. The approach described in [Lane and Boehm, 2007] not only distributes effort across the key stages of directed SoS development, but also allows one to tailor the personnel, process, and tool factors for each stage. However, this still does not provide an adequate framework for estimating effort for constituent-systems that have different complexities and are often developed by different organizations with different personnel, processes, and tools, under different sponsorship.
To address this need, one can look to the approach used in the constructive cost model (COCOMO) for software [Boehm et al, 2000]. This approach allows one to define specific EMs for each software component and to apply each EM to the appropriate portion of the total size.
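A sketch of that idea, using hypothetical constituent-systems and EM values: each component's share of the nominal effort is adjusted by its own EM rather than by a single EM set applied to the whole estimate.

# Apply component-specific effort multipliers to each component's share
# of the size, in the spirit of COCOMO's multi-component estimation.
# All names, sizes, and EM values here are hypothetical.
components = [
    {"name": "radar system", "size": 300, "em": 1.20},
    {"name": "C2 center",    "size": 500, "em": 0.95},
    {"name": "comms relay",  "size": 200, "em": 1.10},
]

A, E = 1.0, 1.06  # placeholder calibration constant and scale exponent
total_size = sum(c["size"] for c in components)
nominal = A * total_size ** E
for c in components:
    # Each component gets its proportional share of the nominal effort,
    # adjusted by its own effort multiplier.
    effort = nominal * (c["size"] / total_size) * c["em"]
    print(f"{c['name']:12s} {effort:8.1f}")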
2.6 Related Organizational Theory Concepts
Many of the systems engineering and SoSE practices and processes (both management and technical) can be traced to those that began to develop in the early to mid-1900s as project-based, multidisciplinary work became the norm in areas such as the construction industry, city planning, civil engineering, aircraft design and development, car design and development, and oil industry systems [Pinney, 2001]. In fact, in many of these industries today, much of the design and production activity is based on systems that are often integrated into what some might consider SoSs.
While it is important to understand the general theory and history of today’s
engineering processes, this is not the focus of this research. Rather, this research attempts to find the “homegrounds” of two different SoSE management approaches: collaborative and acknowledged. Of interest here is organizational
theory that focuses on management and technical processes and communication
dynamics for very large distributed, multidisciplinary engineering projects with
performing organizations that often cross company, political, and national
boundaries.
Processes and Process Modeling: SoSs typically fall into the realm of
complex systems [Sheard, 2006]. As complex systems become more of the norm,
many are writing about developing complex systems on the edge of chaos
[Highsmith, 2000; Markus et al., 2002; Sheard, 2006] and the associated need for
flexible, adaptable, agile development processes to deal with rapid change and to
facilitate innovative solutions in ultra-large solution spaces [Boehm and Lane,
2006; Kreitman, 1996]. Teams must be able to experiment and change processes
when progress is not as expected [Kreitman, 1996]. In order to understand when
progress is not as expected, it is important to have management processes to track
cost, schedule, and completed work so that deviations can be identified early and
adjustments made that will reduce late rework. Some are quick to point out that
many system development projects are overly ambitious and often management
fears that if this is known early on, the program will be cancelled—so true
progress is hidden and management encourages the team to work harder in the
hope that the development team will come up with a miracle to get the program
back on track [Brooks, 1995; Kreitman, 1996; Carlock and Fenton, 2001; Carlock
and Lane, 2006]. By combining accurate management tracking and flexible
technical processes, many problems can be avoided.
Often engineering processes and associated process models are defined to be
tailorable to accommodate differences between projects [SEI, 2001]. Therefore,
through the analysis of project process models, one can often discern key
differences between different types of projects. For example, [Lane, 1999]
discusses how traditional software development processes are tailored to support
the development of software functionality through the integration of COTS
products. By analyzing process models and resource allocations captured in
project work breakdown structures (WBSs), it was found that there were some
key differences. For COTS integration, key activities are performed in a different
order and effort is distributed very differently across the key activities than in
traditional software development projects. For the COTS integration project,
there are considerably more requirements analyses, trade studies, and product
evaluations (testing) prior to software design and development and a considerable
focus on associated business processes to be supported by the software system. In
addition, the COTS integration and implementation activities consist more of user
interface customization, insertion of business rules and initialization data into the
COTS products, and the development of glue code to support integration with
other COTS products. This is instead of the “develop new code” approach in
traditional software development. Because the implementation activities are
considerably different, COTS integration projects require some different skill sets
than are typically applied to new software development projects: more software
product evaluators, domain experts to work with the user community in deciding
on key business rules and initialization data, and system programmers to
implement COTS product integrations with database systems, legacy systems,
and other COTS products. [Lane, 1999] also shows how these process differences
translated into significant differences with respect to software system
development cost and schedule.
Other Coordination, Communication, Collaboration, and Decision Making
Dynamics: As engineering teams get larger, coordination, communications,
collaboration, and the resulting decision making activities become more time-
consuming and difficult. This is especially so when one views the various
specialties that participate in the engineering of a system or SoS. For example,
performance, human factors, security, safety, as well as functional performance
must be addressed in a manner that produces a system or SoS that meets the
customer’s needs. Technical decisions need to be made in the context of all of
these areas—decisions that meet the overall needs, yet are not to the detriment of
any of these system-of-interest aspects. Based on these and other observations,
Stephen Lu suggests that engineering is evolving to more of a collaborative
negotiation process [Lu, 2003]. Others provide a case study showing how the decision making process erodes as the number of decision makers increases over time [Pressman and Wildavsky, 1973]. Dietrich Dorner has identified and
analyzed the roots of catastrophes in strategic planning and associated strategic
decision making and provides techniques for recognizing and avoiding errors at
the strategic level in complex situations [Dorner, 1996]. Dorner’s key
recommendations to overcome these errors in complex situations are to:
a. Understand before you leap, that is, better understand goals in concrete
terms, balance contradictory or incompatible goals, and establish
priorities before focusing on planning and gathering information for a
solution.
b. Avoid “economizing” up-front since this encourages one to omit crucial
steps in the thought process. For example, clarifying complex
relationships among variables (or system components) before down-
selecting to the variable(s) (or systems) of interest may avoid problems of
“unintended” side-effects or long term repercussions (undesired emergent
behaviors).
Related to this, [Kreitman, 1996] talks about the need to make sure ALL
affected organizations are involved in cross-cutting decisions to ensure proper
tradeoffs in situations similar to those that can impact both the SoS level and the
constituent-system level. [Friedman, 2005] also looks at many of these issues on
a very global level, focusing on the importance of building strategic alliances
across geographical and political boundaries and interacting in a very
international culture to develop larger resource bases needed to keep up with the
rapidly increasing pace of business and associated strategic decisions.
Finally, Eberhart Rechtin presents some interesting observations on
communication and coordination issues in his system architecture heuristics
[Rechtin, 1991]:
- The time in days, T, to obtain an approval requiring N signatures can be estimated by T = 2^(N-2); for example, an approval requiring five signatures would take about 2^3 = 8 days. It may also be the case that only 20% of the signatures really count (Pareto principle).
- The probability of successfully implementing a new idea depends on the number of persons in the chain leading to the implementation of that idea and the probability that each person understands and retransmits the idea.
- An optimally sized team producing at the fastest rate possible spends half its time coordinating and interfacing.
Taking all of these observations, heuristics, and findings together, one can
conclude that as systems get larger and more complex, resulting in larger and
more diverse engineering teams, coordination and collaboration need to be both broad and deep. In addition, negotiation and decision making need to happen at appropriate levels and be broad enough to include the right considerations (effective), but not extend unnecessarily to multiple levels in the organization (efficient). These observations were confirmed during the data collection for several of the
DoD SoS case studies [DoD, 2008]: as the collection of systems within the SoS
became more complex or critical to DoD operations or missions, there came a
“tipping point” where the collection of systems could no longer be effectively
managed in the traditional single system way (the collaborative SoS management
approach). When the collaborative SoS management approach was used, there
were many instances where the development of new SoS capabilities was handled
appropriately, but there were other system changes going on within the SoS
collective that adversely impacted the SoS and caught system managers and users
by surprise. In addition, as the number of systems within the SoS grew and all
systems were considered “equal” in the collaborative arrangement, it became
more difficult to make strategic decisions for the good of the SoS because the
number of decision makers (at least one for each system) increased and each
system was attempting to optimize its own development and evolution. And
lastly, as the number of systems within an SoS expands, there comes a point
where the SoS architecture, typically using a point-to-point interconnectivity
approach with a multitude of protocols, is no longer efficient or sustainable and
requires the migration to a better architecture or framework at the SoS level. In
these cases, an SoSE team was designated and made responsible for guiding the
development of the SoS (the acknowledged management approach). DoD SoS
case study summaries are provided in Appendix A.
2.7 Process Models
An analysis of various process modeling tools [Lane et al, 2007] showed that
system dynamics models were better suited for modeling alternative SoSE
strategies than more static models. System dynamics modeling was initially
developed by Forrester [Forrester, 1961] to support the analysis of industrial
dynamics. System dynamics modeling tools are visual modeling tools that allow
one to conceptualize, simulate, and analyze models of dynamic systems and
processes. Simulation models are built from causal loop or stock and flow
diagrams. Relationships among system variables (or influences) are entered as
causal connections. The model is analyzed throughout the building process by
looking at the causes and uses of a variable and at the loops involving the
variable. System dynamics models are also executable, allowing the user to
explore the behavior of the model and conduct “what if” analyses.
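A minimal sketch of the stock-and-flow mechanics underlying such tools, with illustrative values throughout: a stock of unimplemented requirements is drained by a completion flow and partially refilled by a rework flow, integrated over time.

# Minimal stock-and-flow simulation in the spirit of system dynamics:
# a stock of remaining requirements is drained by a completion rate and
# partially refilled by rework. All values are illustrative.
dt = 0.25                # time step (months)
remaining = 200.0        # stock: requirements not yet implemented
completed = 0.0          # stock: requirements implemented
staff = 10.0             # engineers available
productivity = 1.0       # requirements per engineer per month
rework_fraction = 0.15   # share of "completed" work that flows back

t = 0.0
while remaining > 1.0 and t < 60.0:
    completion_rate = min(staff * productivity, remaining / dt)
    rework_rate = rework_fraction * completion_rate
    # Euler integration of the stocks from the flow rates.
    remaining += (rework_rate - completion_rate) * dt
    completed += (completion_rate - rework_rate) * dt
    t += dt
print(f"~{t:.1f} months to implement {completed:.0f} requirements")

Commercial system dynamics tools wrap exactly this kind of numerical integration in a visual causal-loop or stock-and-flow notation.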
Researchers in the areas of software engineering, SE, and SoSE are using
these tools to identify and better understand key influences in various engineering
activities. For example, [Madachy et al., 2006] showed how the use of both agile
and plan-driven processes can be used to develop and deploy large systems in a
constantly changing environment. [Cresswell et al., 2002] described how a model
might be used to better understand collaboration, trust building, and knowledge
sharing in a complex, intergovernmental information system project while [Greer
et al., 2005] investigated “disconnects” in baselines across multiple organizations
in a large software-intensive, space system development program. Through
another model, [Black and Repenning, 2001] investigated the under-allocation of
resources in the early phases of a project and the impacts that it had on increasing
error rates, overworked engineers, and declining performance. [Ferreira, 2002]
developed a model to investigate the impact of requirements volatility on
software development projects. [Ford and Sterman, 2003] used a system
dynamics model to analyze concurrent development projects and interactions
between the technical and behavioral dimensions of the projects.
These sample system dynamics models have been very insightful in
understanding influences and dynamics in situations often common to systems
engineering and development. Using these techniques, it is possible to analyze
high level observations or indicators and better understand how processes can be
selected or changed to take advantage of certain good dynamics and to minimize
the impact of undesirable dynamics.
Chapter 3: Methodology
3.1 Overview of Research Design
The goal of this research effort is to evaluate two management approaches
for implementing a new SoS capability: collaborative and acknowledged.
Because little formal research has been done in this emerging area, a mixed
methods approach (both qualitative and quantitative) has been selected. This
approach seems especially well-suited to this area of SoSE since there are no
definitive standards and little quantitative SoSE information available from those
currently evolving more traditional systems engineering methods to meet the
challenges of SoSE. The foundational elements of this research are the findings
of the DoD SoS case studies [DoD, 2008] (qualitative) and an SoSE system
dynamics model that is based upon the case study findings and the COSYSMO
SE cost estimation algorithm (quantitative).
More specifically, the qualitative information from DoD SoS case studies
along with quantitative data from surveys of SoS constituent-systems was
analyzed and used to build a system dynamics SoSE process model to
quantitatively compare the two SoS management approaches for engineering an
SoS capability. The system dynamics model incorporates the existing
COSYSMO cost estimation algorithm and associated parameters to characterize
each SoS management approach. In addition, the model contains a set of
parameters that can be used to modify the size of the SoS, the size and scope of
the SoS capability, and the concurrent constituent-system volatility. The system
dynamics model then compares the two management approaches as these
parameters change.
3.2 Data Collection Instruments
For this research, there are three primary sets of data: the DoD SoS case
study data, survey data from SoS constituent-system programs, and data
generated from the SoSE process model.
DoD SoS Case Study Data: The DoD case study data was collected by the
DoD SoSE guidebook authors as they interviewed key SoSE representatives.
Interviewees were asked to provide an overview of their SoS and to then go
through a list of typical SE activities and comment on whether that activity was
performed at the SoS level and how that activity was tailored (if at all) for SoSE.
Those conducting the interviews captured and integrated the results into a large spreadsheet, which was further analyzed by the SoS SE guidebook team to
discern the ways in which traditional SE is evolving to support the challenges
faced in SoSE. While the interview data collected and placed in the spreadsheet
has not been published, the results of the analysis qualitatively describe the
emerging SoSE processes and are documented in [DoD, 2008]. The SoS
programs that were interviewed as part of this research effort are [DoD, 2008]:
- Army Battle Command System (ABCS)
- Air Operations Center (AOC)
- Ballistic Missile Defense System (BMDS)
- United States Coast Guard Command and Control Convergence (C2 Convergence)
- Common Aviation Command and Control System (CAC2S)
- Distributed Common Ground Station (DCGS-AF)
- DoD Intelligence Information System (DoDIIS)
- Future Combat Systems (FCS)
- Ground Combat Systems (GCS)
- Military Satellite Communications (MILSATCOM)
- Naval Integrated Fire Control-Counter Air (NIFC-CA)
- National Security Agency (NSA)
- Naval Surface Warfare Center Dahlgren Division (NSWCDD)
- Single Integrated Air Picture (SIAP)
- Space and Missile Systems Center (SMC)
- Space Radar (SR)
- Theater Joint Tactical Networks (TJTN)
- Theater Medical Information Systems-Joint (TMIP).
Note that the findings in [DoD, 2008] are those that were common across the SoSs and, except for clearly identified examples, are not attributed to any single SoS. Additional information on the above SoSs is provided in Appendix A.
SoS Constituent-system Data: To support the development of the SoSE
process model, it was necessary to collect data that characterizes the systems that
comprise SoSs. To collect this data, a brief survey was developed and distributed
to systems engineers assigned to system development projects. The survey form
captured information about the typical range of system interdependencies
(number of external interfaces to other systems), system volatility (rate of change
of the system over time), and percent of system change that could be attributed to
SoS needs for systems that are part of one or more SoSs. The data collection form
and a summary of the responses received are provided in Appendix B. In some
cases, the respondent’s answers were vague or missing information and in these
cases, internet searches were conducted to fill in some of the missing details. The
28
web sites used to complete the information are indicated in the summary of
responses.
Data Generated Using SoSE Process Model: The DoD SoS case study data
and the results of the SoS constituent-system surveys were used to develop a
process model. This model is described in detail in the following chapter. For the
purposes of discussion here, it is sufficient to discuss the inputs and outputs to the
SoSE process model. The SoSE process model allows the user to vary several
SoS characteristics (input parameters) and then graph the results (model outputs)
as the parameter(s) of interest change.
The SoSE process model inputs are:
- SoS size (number of constituent-systems in the SoS of interest)
- SoS capability size (equivalent number of nominal SoSE requirements to be implemented to achieve a new capability, as defined by the COSYSMO model)
- SoS capability complexity (number of constituent-systems affected by the proposed new capability)
- Constituent-system volatility (number of constituent-system non-SoS-requirements being implemented in parallel with SoS capability requirements)
- The associated COSYSMO EMs for SoSE, with values initially based on the work conducted by [Wang et al, 2008] within a single organization and then adjusted based upon anecdotal observations from other organizations:
  - SoS level: SE for the SoS capability, and SoS SE oversight of the non-SoS-requirements being implemented by the constituent-systems concurrently with the SoS requirements
  - Constituent-system level: SE of SoS requirements with SoSE support, SE of SoS requirements without SoSE support, and SE of constituent-system-only/non-SoS-requirements.
The selected values for each of these parameters are intended to characterize
the “typical” or expected types of capability changes being implemented over
time in a given SoS. The ranges of values for the SoS size, SoS capability
complexity and size, and constituent-system volatility as well as the calculated
EM values are based upon the DoD case study SoSs, associated case study
findings, and the constituent-system surveys.
The SoSE process model outputs are: a) estimated total SE effort for the SoS
capability with SoSE support and b) estimated total SE effort for the SoS
capability without SoSE support.
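In code form, the model's interface can be pictured roughly as follows. The names are illustrative only; the actual model is a system dynamics model whose equations and parameters are given in Chapter 4.

from dataclasses import dataclass

@dataclass
class SoSEModelInputs:
    sos_size: int               # number of constituent-systems in the SoS
    capability_size: float      # equivalent nominal SoSE requirements
    capability_complexity: int  # constituent-systems affected by capability
    volatility: float           # concurrent non-SoS-requirements
    ems: dict                   # COSYSMO EMs, keyed by SoS/system level

@dataclass
class SoSEModelOutputs:
    effort_with_sose: float     # acknowledged approach, total SE labor hours
    effort_without_sose: float  # collaborative approach, total SE labor hours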
3.3 Data Analysis
An initial data analysis was performed to determine the appropriate range of
values for the SoSE process model. The actual analysis performed and the
resulting parameter values are discussed in detail in the next chapter, which
describes the SoSE process model.
The final data analysis was performed on the outputs of the SoSE process
model. The SoSE process model outputs, acknowledged SoS SE effort (SE with
SoSE support) and collaborative SoS SE effort (SE without SoSE support), were
compared to determine, for each set of inputs, which approach is the more cost
effective with respect to the total SE effort. The goal of the final data analysis
was to locate the "tipping point" where the SoS and/or SoS capability are
sufficiently complex that the use of an acknowledged SoSE team is the more
cost-effective approach.
3.4 Potential Threats to Validity and Limitations
The key threats and limitations identified for this particular research are in
the areas of SoSE and SE effort data precision, SoS sample size, and the
ability of COSYSMO to estimate SoSE effort. The following describes these
threats and limitations, the potential impacts each might have on this research,
and the steps that have been taken to mitigate these potential threats.
Effort Precision: The key indicator evaluated in the comparison of SoS
collaborative and acknowledged management approaches is the amount of effort
or the relative amount of effort spent on the engineering of an SoS capability. The
ways actual effort data are captured and reported on systems engineering and SoS
projects can be both limiting to this research and potentially pose a threat to the
validity of this research. Effort is typically captured as labor hours. However, this
data is seldom precise on engineering projects. Companies working on large
programs attempt to capture effort using a set of pre-defined categories.
Sometimes the categories are related to the skills and level of experience of the
people working on the project. Other times the categories are related to the
engineering phases or activities of the project. There are cases where both types
of categories are used. When effort is captured by engineering phase or activity, it
is not always easy to determine when one phase/activity ends and the next one
starts. In addition, not all engineering hours are recorded on projects.
Engineering staff and management are often only required to record how they
spent the first eight hours of the work day, but are not required to record any
hours worked after the first eight. This means that these “extra” hours, sometimes
referred to as professional “uncompensated overtime,” are often not captured or
tracked on projects. This can result in problems where the data is not broken out
in a way that supports the research project, values for the specified labor
categories are inaccurate, or labor profiles are skewed.
The COSYSMO research project also had to address this problem.
Considerable effort was expended to capture data consistently across the projects
that were used to calibrate and validate this model. Therefore, any incompleteness
in the effort information provided by these projects should at least be consistent across them.
In addition, the COSYSMO validation process [Valerdi, 2005] showed that the
academic version of COSYSMO was able to predict effort within 30% of actual
values 75% of the time (PRED(30)=75%) when calibration projects were
stratified by domain. As USC CSSE industry affiliates began to implement
COSYSMO using local calibrations with data from their organizations, anecdotal
evidence indicated that they were able to achieve PRED(30)=85%. By using
COSYSMO in the SoSE process model to calculate relative effort (rather than
attempt to calculate an estimate for a specific SoS), a reasonable, consistent
comparison was provided for this research effort.
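As background, the PRED(30) statistic cited above is straightforward to compute from paired estimated and actual efforts. The sketch below is a minimal illustration; the effort values are made up for the example and are not COSYSMO calibration data.

```python
def pred(estimates, actuals, level=0.30):
    """Fraction of estimates within `level` (e.g., 30%) of the actual values."""
    within = sum(
        1 for est, act in zip(estimates, actuals)
        if abs(est - act) <= level * act
    )
    return within / len(actuals)

# Hypothetical effort data (person-months): 3 of 4 estimates fall within 30%,
# so PRED(30) = 0.75, i.e., "within 30% of actuals 75% of the time".
print(pred([90, 210, 55, 400], [100, 200, 80, 390]))  # -> 0.75
```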
Sample Size: SE and SoSE projects tend to span multiple years. In addition,
there are not that many large-scale SoSE projects. Therefore, it can be difficult to
identify a sufficient number of projects for a comparison of SoSE management
approaches.
To mitigate this potential threat, a representative set of projects (eighteen
DoD SoSE projects) was selected. In 2006, during his keynote address at the SoS
Engineering Center of Excellence (SoSECE) Conference, the Honorable Dr.
James Finley, Deputy Under Secretary of Defense for Acquisition and
Technology, indicated that there are about forty identifiable (acknowledged or
directed) SoSs within DoD [Finley, 2006]. Eighteen DoD SoSE projects out of
about forty provide a sampling of over 40%. These eighteen projects in the DoD
case studies are of a similar size and complexity and are also within a single
domain (DoD military systems). The SoSE characteristics used in this research
area are based on the findings of the DoD case studies published in the DoD
SoSE guidebook [DoD, 2008]. The guidebook authors interviewed
representatives from the eighteen projects and distilled the guidebook findings
from the detailed interview notes. The case study analyses indicated that a) SoSE
includes all of the traditional SE activities described in the various SE standards
[ANSI/EIA, 1999; ISO/IEC, 2002; and DoD, 2006], b) these traditional activities
are performed in a different context, c) SoSE includes activities that are not
typically performed when conducting SE for a single system, and d) for activities
that require detailed knowledge of the constituent-system internal design and
capabilities, SoSE includes the participation of system engineers from the
constituent-system(s). To capture the essence of these differences and the context
of SE within an SoS environment, the findings were presented in terms of seven
core SoSE elements. SoS case study project representatives reviewed the draft
findings along with others within the DoD community and provided comments to
the final version of the guidebook. The comments received on the draft DoD
SoSE guidebook indicated that the seven core SoSE elements were an accurate
representation of the key activities performed at the SoS level for each of the case
studies.
COSYSMO Applicability to SoSE Effort Estimation: The COSYSMO cost
estimation model was developed to estimate the traditional systems engineering
effort associated with the development or enhancement of systems. COSYSMO
1.0 (also referred to as Academic COSYSMO) was calibrated primarily with data
from the development of DoD military systems, which is the same domain for the
SoSE process model. In addition, some of the initial COSYSMO calibration data
sets were identified as an SoS and several systems reported multiple external
interfaces, which implies that they are constituents in one or more SoSs.
So, while there is overlap between single system SE and SoSE in the
calibration data sets for COSYSMO, there is still concern regarding the
applicability of COSYSMO for estimating effort for SoSE due to the differences
identified between traditional SE and SoSE (see section 2.5). However,
adjustments can be made to account for many of these differences:
- Using the COSYSMO reuse extension, effort can be calculated for the SoSE oversight of constituent-system development and evolution.
- Using the COCOMO method of applying different EMs to different parts of the system development, adjustments can be made to accommodate different levels of complexity or difficulty.
- By using the distribution of COSYSMO effort reported in [Valerdi and Wheaton, 2005], one can add effort to the constituent-system development effort for the support of SoSE activities.
It has also been stated in section 2.5 that some additional cost model
parameters are required to provide more accurate estimates of SoSE effort. While
the inclusion of additional parameters would potentially provide more accurate
estimates, the goal of this research effort is to model two approaches for
engineering the same SoS capability and determine the relative differences in the
two approaches. For the purposes of this model, precise effort estimates are not
required. The SoSE process model EM calculations assume nominal values for
each cost driver unless there is a reason provided in the DoD SoSE guidebook for
adjusting the parameter up (more effort) or down (less effort). No justification
has been found in the DoD SoSE guidebook for adjusting (either up or down) any
of the additional cost drivers identified in section 2.5. So, while there may be
limitations of the current COSYSMO for the precise estimation of SoSE effort
for larger SoSs, COSYSMO should be sufficient for the comparison purposes of
this research effort when the above adjustments are applied. In addition, table 3
shows that there are COSYSMO parameters that can be used to characterize each
of the SoSE core elements for a given SoS.
By basing the COSYSMO SoSE characterization on the vetted findings in the
DoD SoSE guidebook [DoD, 2008] and assuming that the accuracy of
COSYSMO for SoSE is somewhere between the values obtained for the
academic version of COSYSMO (PRED(30)=75%) and the local calibrations
performed by the USC CSSE industry affiliates (PRED(30)=85%), it follows that
the SoSE process model results are accurate enough to compare SoS management
approaches for the DoD military SoSs.
Table 3. Mapping of DoD SoSE Core Elements to COSYSMO Parameters

SoSE Core Element: Related COSYSMO Parameters
- Translating capability objectives: Requirements understanding
- Understanding systems and relationships: Architecture understanding; Migration complexity; Technology risk; Number of recursive levels in the design
- Assessing actual performance to capability objectives: Level of service requirements
- Developing, evolving, and maintaining an SoS architecture/design: Architecture understanding; Multisite coordination
- Monitoring and assessing changes: Level of service requirements; Multisite coordination
- Addressing new requirements and options: Requirements understanding; Architecture understanding; Migration complexity; Technology risk
- Orchestrating upgrades to SoS: Stakeholder team cohesion; Personnel/team capability; Personnel experience/continuity; Process capability; Multisite coordination; Tool support
Comparison Model Is Not a Cost Model: The SoSE comparison model
developed to determine the conditions under which an SoSE team would be cost
effective in guiding the evolution of the SoS is based upon the cost estimation
model COSYSMO. However, the SoSE comparison model is not a cost model
that can be used to support the estimation of the systems engineering effort
required to implement a new capability. The SoSE comparison model generalizes
SoS and constituent-system characteristics based on the DoD SoS case studies
and the constituent-system surveys. However, no actual effort data was collected
as part of this research to calibrate and validate the model. This will be a follow-
on effort to the current research.
Chapter 4: The SoSE Model
The SoSE model used for the analysis of collaborative and acknowledged
SoSE management approaches was developed as both a system dynamics model
and a Microsoft Excel model. Figure 1 illustrates this model using the iThink
system dynamics modeling conventions [isee Systems, 2007]. The effort
calculations are based on the COSYSMO cost model and identical formulas were
used for both the iThink system dynamics model and the Microsoft Excel model.
Figure 1. Graphical View of SoSE Model
This model is designed to evaluate the effort required to engineer a single
SoS capability in a software-intensive, net-centric SoS. This model uses the input
capability rate and converts it into equivalent nominal requirements (the
underlying COSYSMO aggregated size driver). At this point, there are two
primary flows: one to calculate the associated SE effort using an acknowledged
SoS management approach and one to calculate the associated SE effort using a
collaborative SoS management approach. Note that:
- The acknowledged approach estimates the SE effort at the SoS level and the associated SE effort at the constituent-system level.
- The collaborative approach estimates the SE effort at only the constituent-system level (since there is no SoSE team in this approach).
The primary method used to determine the associated effort values (in
person-months) is the COSYSMO 1.0 algorithm, Effort = 38.55*EM*(size)^1.06 / 152,
where 38.55 and 1.06 are calibration constants for COSYSMO 1.0 [Valerdi, 2005].
For the purposes of this SoSE model, EM values are calculated for various
parts of the engineering process: SoSE for new capability, SoSE for oversight of
the non-SoS constituent-system changes, SE at the constituent-system level for
the SoS capability (both with and without SoSE support), and SE at the
constituent-system level for the non-SoS-requirements being implemented in
parallel with the SoS capability.
The following sections describe various aspects of the model in more detail.
4.1 SoSE Process Model Assumptions and Constraints
Several assumptions and constraints were used in the development of the
SoSE process model. In some cases, these model assumptions and constraints
generate more conservative estimates of cost savings for an acknowledged SoSE
team and therefore strengthen the resulting findings. The assumptions and
constraints are:
1. All constituent-systems currently exist. This means that all of the
constituent-systems are legacy systems undergoing some level of change
(very small to major upgrades). In addition, there are no new “long-lead”
constituent-system development activities that may have extraordinarily
high levels of internal change and/or may not be fielded within the
timeframe of the current SoS capability change. (There could be new
systems under development that may eventually be part of the SoS, but
these are not considered as part of the SoSE comparison model.)
2. The model assumes a relatively mature engineering process at both the
constituent-system and SoS levels. This is primarily because the
COSYSMO cost model has been calibrated using SE projects from
relatively mature SE organizations. This assumption is also related to the
fact that successful system development has been shown to be strongly
correlated to relatively “mature engineering processes” [SEI, 2001]. By
limiting the constituent-systems to existing systems, as stated in the first
assumption above, one can reasonably assume it is appropriate to model
the constituent-system engineering processes as “mature.” As for the
SoSE team processes, they are typically not as mature as the constituent-system
processes because these processes are currently evolving to meet
the new challenges that SoSs present [DoD, 2008]. However, the SoSE
teams typically have a strong foundation upon which to build their
processes given that they leverage the processes of the constituent-
systems and work with the constituent-system engineers.
3. In general, each constituent-system has its own evolutionary path based
on system-level stakeholder needs/desires. This is related to the definition
of an SoS [Maier, 1998]. The exception for some SoSs is the SoS
infrastructure that integrates the constituent-systems together. This is
typically identified as an SoS constituent-system, but it does not
necessarily have its own evolutionary path outside of the SoS or
independent of the SoS.
4. SoS capabilities are software-intensive. Most SoSs of interest today are
net-centric in nature, and their constituent-systems interface with each
other in order to share data or information.
5. There is no SoS capability requirements volatility. The rationale for this
is that “no SoS capability requirements volatility” simplifies the initial
process model and the impact of this volatility would similarly affect both
collaborative and acknowledged SoSs. Intuitively, the presence of an
SoSE team would somewhat streamline the configuration management of
changes across the constituent-systems and be another “value added”
aspect of an acknowledged SoSE team, but this SoSE process dynamic is
left for follow-on research. Also note that this assumption is not
applicable to the constituent-systems and in fact, the model does assume
varying levels of constituent-system volatility.
6. 100% of the SoS capability requirements are allocated to each
constituent-system needed to implement the capability.
7. There is no focus on schedule or the asynchronous nature of SoS
constituent-system upgrades. The scheduling aspects of SoS capability
implementation are typically driven by the asynchronous schedules of the
constituent-systems and preliminary reports do not indicate that there are
significant differences in the two approaches (collaborative and
acknowledged) except for the fact that an acknowledged SoSE team may
have some additional leverage to negotiate different priorities for
constituent-system SE changes.
8. Management of SoS internal interfaces reduces complexity for systems.
This is based on observations [DoD, 2008] and heuristics [Rechtin, 1991]
about the common drivers of complexity within a system or an SoS.
9. SE effort/information provided to the SoSE team in support of SoSE must
be added to SE effort for the subset of the constituent-system upgrade
requirements that are at the SoS level. SoS programs participating in the
DoD case studies reported that it is difficult for an SoSE team to make
reasonable decisions without input from the constituent-system SEs
[DoD, 2008].
These assumptions and constraints (or simplifiers) allowed this research to
focus on the basic question: When is it cost effective to transition an existing
collaborative SoS to an acknowledged SoS in the situation where the SoS is not
currently affected by the development of new constituent-systems? In addition,
these assumptions and constraints limit the modeled differences between the
collaborative and acknowledged approaches to those key SoSE differences
identified in [DoD, 2008] and therefore produce conservative estimates with
respect to each approach.
4.2 SoSE Model Effort Calculations
The SoSE model incorporates several effort calculations. As mentioned
earlier, each of these calculations is based upon the COSYSMO 1.0 algorithm
[Valerdi, 2005], shown in Equation 1, where 38.55 and 1.06 are calibration
constants for COSYSMO 1.0 and the division by 152 converts person-hours to
person-months.

Equation 1: Effort (in person-months) = 38.55*EM*(size)^1.06 / 152
In addition, the SoSE model uses the COCOMO II approach for applying
different EM factors to different parts of the system that have different cost
drivers. The general form of this approach is shown in Equation 2. In Equation 2,
i ranges from one to the number of components and A and B are the calibration
factors.
Equation 2: Effort = A * Σ_i [EM_i * (component_i size / total size)] * (total size)^B
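For readers who wish to experiment with these formulas, the following sketch renders Equations 1 and 2 as executable Python. The calibration constants are the COSYSMO 1.0 values quoted above [Valerdi, 2005]; the EM and size values in the example are hypothetical.

```python
A, B = 38.55, 1.06   # COSYSMO 1.0 calibration constants [Valerdi, 2005]
HOURS_PER_PM = 152   # person-hours per person-month, as in Equation 1

def effort_pm(em, size):
    """Equation 1: COSYSMO effort in person-months for one EM and one size."""
    return A * em * size ** B / HOURS_PER_PM

def composite_effort_pm(parts):
    """Equation 2 (COCOMO II approach): `parts` is a list of (EM_i, size_i) pairs;
    each part's share of the total size carries its own effort multiplier."""
    total = sum(size for _, size in parts)
    return A * sum(em * (size / total) * total ** B for em, size in parts) / HOURS_PER_PM

# Hypothetical example: 100 equivalent nominal requirements, either costed
# uniformly or split into a harder part (EM = 1.3) and an easier part (EM = 0.9).
print(effort_pm(1.0, 100))                          # -> about 33.4 person-months
print(composite_effort_pm([(1.3, 40), (0.9, 60)]))  # -> about 35.4 person-months
```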
In addition, both the acknowledged and collaborative constituent-system
effort calculations include the effort to engineer the non-SoS-requirements being
engineered concurrently with the SoS capability. This is to ensure that the SoSE
model captures the associated diseconomy of scale that occurs as the number of
requirements at the constituent-system level increases (whether they are SoS-
related requirements or constituent-system-only requirements).
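This diseconomy of scale follows directly from the size exponent (1.06) being greater than one: costing a combined batch of requirements yields more effort than costing the batches independently. A quick numeric check, using the Equation 1 form with a nominal EM of 1.0:

```python
A, B, HRS = 38.55, 1.06, 152  # COSYSMO 1.0 calibration constants; hours per person-month

together = A * (100 + 100) ** B / HRS   # SoS and non-SoS requirements costed in one cycle
separate = 2 * (A * 100 ** B / HRS)     # the same requirements costed as independent batches
print(f"together = {together:.1f} PM, separate = {separate:.1f} PM")  # together > separate
```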
The following sections describe each of the SoSE model calculations using
the terms defined in table 4.
Table 4. SoSE Model Equation Term Definitions

- CS_nonSoS: Number of non-SoS constituent-system requirements planned for the upgrade cycle
- CS_TreqSoSE: Total number of requirements planned for the constituent-system upgrade cycle with support from an SoSE team; equal to SoS_CSalloc + CS_nonSoS
- CS_TreqwoSoSE: Total number of constituent-system requirements that must be addressed for the upgrade cycle with no support from an SoSE team
- EM_SoS-CR: Effort multiplier for SoS capability requirements engineered at the SoS level
- EM_SoS-MR: Effort multiplier for constituent-system non-SoS-requirements monitored by the SoSE team
- EM_CS-CRwSoSE: Effort multiplier for constituent-system capability requirements engineering with SoSE team support
- EM_CS-CRnSoSE: Effort multiplier for constituent-system capability requirements engineering with no SoSE team support
- EM_CSnonSoS: Effort multiplier for constituent-system engineering for non-SoS-requirements
- OSF: Oversight adjustment factor to capture SoSE effort associated with monitoring constituent-system non-SoS changes (values used include 5%, 10%, and 15%)
- SoS_CR: Number of SoSE capability requirements (equivalent nominal requirements), based upon the modeled size and complexity of the SoS capability
- SoS_CSalloc: Number of SoS capability requirements allocated to a constituent-system
- SoS_MR: Number of SoSE monitored requirements (equivalent nominal requirements): the sum of all non-SoS-requirements being addressed in parallel with the SoS capability requirements in the upgrade cycle
- SoS_Treq: Total number of SoSE nominal (weighted) requirements: SoS_CR + (SoS_MR * OSF)
SoSE Effort for Acknowledged SoS: Using the COSYSMO algorithm with
different EMs for the SoS capability requirements and the constituent-system
“monitored” requirements, the SoSE model calculation for the SoSE team’s effort
is shown in Equation 3.
Equation 3: SoSE Effort = 38.55*[((SoS_CR / SoS_Treq) * (SoS_Treq)^1.06 * EM_SoS-CR) + ((SoS_MR / SoS_Treq) * (SoS_Treq)^1.06 * EM_SoS-MR * OSF)] / 152
Constituent-system Effort with Support from SoSE Team (for Acknowledged
SoS): This equation calculates the constituent-system effort for engineering the
SoS capability requirements allocated to it plus engineering in parallel the non-
SoS-requirements scheduled for the current upgrade cycle. In this case, the SoS
requirements engineering is led by the SoSE team with some support from the
constituent-system. Therefore, in this calculation, one needs to include the
constituent-system engineering effort required to support system design at the
SoS level. This SoSE "tax" is based on the findings from the DoD case studies
(and documented in the DoD SoSE guidebook) that indicate the SoSE team
requires the support of the constituent-systems in the design of the approach for
meeting the SoS desired capability. Using the distribution of SE effort for system
design in [Valerdi and Wheaton, 2005], this model approximates the SoSE tax by
adding on half of the typical percentage of system design effort (half of 30%, i.e.,
a factor of 1.15) to the constituent-systems for those requirements allocated to the
constituent-system from the SoSE team. This factor was based on anecdotal inputs
from systems engineers with experience in the SoS environment. Thus the calculation
for constituent-system SE effort with an SoSE team is as shown in Equation 4.
Equation 4: Constituent-System SE Effort with SoSE Team = 38.55*[1.15*((SoS_CSalloc / CS_TreqSoSE) * (CS_TreqSoSE)^1.06 * EM_CS-CRwSoSE) + ((CS_nonSoS / CS_TreqSoSE) * (CS_TreqSoSE)^1.06 * EM_CSnonSoS)] / 152
Total Systems Engineering Effort (SoSE and SE) for Acknowledged SoS: The
total concurrent systems engineering effort for the acknowledged SoS is the sum
of the SoSE effort (equation 3) and the constituent-system SE effort with the
SoSE team support (equation 4).
Constituent-system Effort with No SoSE Team Support (for Collaborative
SoS): In the case where there is no SoSE team to support the engineering of the
SoS capability requirements, the constituent-system is responsible for
engineering all of the SoS capability requirements (not just an allocated subset)
as well as the non-SoS-requirements planned for the system upgrade cycle. Thus
the calculation for constituent-system SE effort without SoSE team support is as
shown in Equation 5.
Equation 5: Constituent-system SE Effort without SoSE Team = 38.55*[((SoS_CR / CS_TreqwoSoSE) * (CS_TreqwoSoSE)^1.06 * EM_CS-CRnSoSE) + ((CS_nonSoS / CS_TreqwoSoSE) * (CS_TreqwoSoSE)^1.06 * EM_CSnonSoS)] / 152
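Pulling the pieces together, the sketch below expresses Equations 3 through 5 as Python functions and combines them into the two totals being compared. The EM values in the example are hypothetical placeholders rather than the calibrated settings shown in figures 2 through 6, and the allocation follows assumption 6 in section 4.1 (the full capability is allocated to each affected constituent-system).

```python
A, B, HRS = 38.55, 1.06, 152  # COSYSMO 1.0 calibration constants; hours per person-month

def sose_effort(sos_cr, sos_mr, osf, em_sos_cr, em_sos_mr):
    """Equation 3: SoSE team effort (person-months) for an acknowledged SoS."""
    treq = sos_cr + sos_mr * osf  # SoS_Treq, per table 4
    return A * ((sos_cr / treq) * treq ** B * em_sos_cr
                + (sos_mr / treq) * treq ** B * em_sos_mr * osf) / HRS

def cs_effort_with_sose(sos_csalloc, cs_nonsos, em_cr_w, em_nonsos):
    """Equation 4: one constituent-system's effort with SoSE support;
    1.15 is the SoSE 'tax' (half of the ~30% system-design share)."""
    treq = sos_csalloc + cs_nonsos  # CS_TreqSoSE
    return A * (1.15 * (sos_csalloc / treq) * treq ** B * em_cr_w
                + (cs_nonsos / treq) * treq ** B * em_nonsos) / HRS

def cs_effort_without_sose(sos_cr, cs_nonsos, em_cr_n, em_nonsos):
    """Equation 5: one constituent-system's effort with no SoSE team."""
    treq = sos_cr + cs_nonsos  # CS_TreqwoSoSE
    return A * ((sos_cr / treq) * treq ** B * em_cr_n
                + (cs_nonsos / treq) * treq ** B * em_nonsos) / HRS

# Hypothetical example: 10 systems, 5 affected; SoS capability = 100 reqs;
# 100 non-SoS reqs per system; OSF = 5%; illustrative (not calibrated) EMs.
n_sys, n_aff, cap, vol, osf = 10, 5, 100, 100, 0.05
ack = (sose_effort(cap, n_sys * vol, osf, em_sos_cr=1.5, em_sos_mr=0.5)
       + n_aff * cs_effort_with_sose(cap, vol, em_cr_w=0.9, em_nonsos=1.0))
col = n_aff * cs_effort_without_sose(cap, vol, em_cr_n=1.5, em_nonsos=1.0)
print(f"acknowledged = {ack:.1f} PM, collaborative = {col:.1f} PM, savings = {col - ack:.1f} PM")
```

Note that with identical EMs everywhere, the acknowledged total can only be larger, since it adds the SoSE team effort and the tax; the savings arise when the without-support EM is sufficiently higher than the with-support EM, which is exactly what the cost driver settings in the following section capture.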
4.3 SoSE Model Effort Multipliers for Effort Calculations
As indicated above, the SoSE model uses several EMs to calculate the effort
associated with the engineering of the SoS capabilities. This section presents the
COSYSMO-generated EM factors based on cost driver settings and explains the
rationale for each of the cost driver settings. In general, each cost driver value is
set to “nominal” unless there is some justification within the DoD SoS case study
or constituent-system survey analysis for adjusting it up or down. Definitions for
each of the COSYSMO parameters and descriptions of each value are provided in
Appendix C.
SoSE SoS Capability EM: Figure 2 shows the COSYSMO-generated EM
factor for the SoSE team engineering of the SoS capability.
Figure 2. SoSE EM for SoS Requirements
Table 5 explains the rationale for each setting that is not nominal.
Table 5. SoSE EM for SoS Capability Requirements Rationale

- Requirements Understanding (Low): Reflects the fact that the SoSE team starts with high-level capability objectives that must be analyzed and then translated into high-level SoS requirements. This setting is based upon the findings in [DoD, 2008] that the core SoSE activity, translating capability objectives, is typically not performed (not needed) when engineering requirements for a single system.
- Level of Service Requirements (High): Reflects the level and complexity of service typically required for performance, interoperability, and/or security in a networked SoS, with the "service" level depending upon the interactions of multiple systems.
- Number of Recursive Levels in the Design (High): Reflects the added complexity of interdependencies, coordination, and tradeoff analysis required in an SoS versus a single system.
- Multisite Coordination (Low): Reflects the fact that a DoD SoS is comprised of multiple systems, each developed/maintained by one or more organizations, often with considerable time zone impacts and security restrictions.
SoSE “Oversight” Requirements EM: Figure 3 shows the COSYSMO-
generated EM factor for the constituent-system requirements that are being
implemented concurrently with the SoS requirements and must be monitored by
the SoSE team.
Figure 3. SoSE EM to Monitor Constituent-System Non-SoS-Requirements
Most of the COSYSMO parameters are kept at a nominal rating since the
SoSE team must evaluate these non-SoS-requirements for potential adverse
impacts upon the SoS, as indicated in [DoD, 2008]. Table 6 explains the rationale
for each setting that is not nominal.
Table 6. SoSE Oversight Requirements Cost Factor Rationale

- Technology Risk (Very Low): Reflects the fact that the technology risk for constituent-systems that are characterized as mature legacy systems tends to be nominal and should therefore be even less at the SoS level in those cases where the technology is not directly tied to a new SoS capability.
- Documentation (Very Low): Reflects the fact that typically the SoS develops little formal documentation for the constituent-systems, as indicated in [DoD, 2008].
- Personnel/Team Capability (High): Reflects the fact that the SoSE team is supported by system engineers from the constituent-systems, as indicated in [DoD, 2008].
“Constituent-system SE for SoS Capability with SoSE Support” EM: Figure 4
shows the COSYSMO-generated EM factor for constituent-system SE of the SoS
capability with support from the SoSE team.
Figure 4. SE EM for SoS Requirements with SoSE Support
Table 7 explains the rationale for each setting that is not nominal.
Table 7. “SE for SoS Requirements, SoSE Support” Cost Factor Rationale
Cost Factor Value Rationale
Architecture Understanding High Reflects the fact that in an acknowledged SoS,
the SoS architecture is defined and evolved at
the SoS level by the SoSE team and not at the
constituent-system level.
Level of Service Requirements High Reflects the fact that the SoS adds an additional
set of level of service requirements to manage
at the constituent-system level and these service
levels can be impacted by other constituent-
systems within the SoS.
“Constituent-system SE for SoS Capability Without SoSE Support” EM:
Figure 5 shows the COSYSMO-generated EM factor for constituent-system SE
of the SoS capability with no support from an SoSE team.
Figure 5. SE EM for SoS Requirements Without SoSE Support
Table 8 explains the rationale for each setting that is not nominal. Note that
the rationale for keeping "architecture understanding" at a nominal level in this
situation may not be intuitive, so it is also addressed in table 8.

Table 8. "SE for SoS Requirements, No SoSE Support" Cost Factor Rationale

- Requirements Understanding (Low): Reflects the fact that SoS requirements addressed at the constituent-system level without the help of an SoSE team are often at a much higher level than the typical constituent-system requirements and therefore require additional effort. In this case, the constituent-systems must collaborate to accomplish the "Translating Capability Objectives" core activity before they can proceed with the more typical requirements analysis.
- Architecture Understanding (Nominal): If this parameter were set lower, it would reflect the added effort for the system SE team to figure out how to coordinate architecture changes that affect both their system and other systems without the assistance of an SoSE team. However, there is typically an ad hoc architecture in place for a collaborative SoS and many new capabilities do not require architecture changes, so no additional effort is added through this model parameter.
- Level of Service Requirements (High): Reflects the fact that the SoS adds an additional set of level of service requirements to manage at the constituent-system level.
“Constituent-system SE for Non-SoS-Requirements” EM: Figure 6 shows
the COSYSMO-generated EM factor for constituent-system SE of non-SoS-
requirements being implemented concurrently with the SoS capability.
Figure 6. SE Effort Multiplier for System-Specific Non-SoS-Requirements
These EM parameter settings reflect some of the research assumptions listed
in Section 4.1, SoSE Process Model Assumptions and Constraints. The key
assumptions that apply to this EM are that all constituent-systems currently exist
and that the legacy constituent-system engineering processes are relatively
mature. Table 9 explains the rationale for each setting that is not nominal.
Table 9. SE for Non-SoS-Requirements Cost Factor Rationale

- Architecture Understanding (High): Reflects the fact that the constituent-systems already exist and the SE team should have a good understanding of the existing architecture.
- Number of Recursive Levels in the Design (Low): Reflects the fact that non-SoS-requirements tend to only affect the single (existing) constituent-system and that the constituent-system engineers are already familiar with the existing levels.
4.4 SoSE Process Model Parameter Variations
To compare the collaborative and acknowledged SoSE management
approaches, four SoS characteristics (or parameters) are varied: SoS Size,
constituent-system volatility, SoS capability size, and the scope of SoS capability.
The range of values for each parameter was determined by the characteristics of
the SoSs in the DoD case studies [DoD, 2008] and the survey results from the
CSSE System Interdependency Survey (see Appendix B for a copy of the survey
form used and a summary of the responses). Table 10 describes the parameters
used in the SoSE process model to compare the collaborative and acknowledged
SoSE management approaches.
Table 10. SoSE Model Parameters

- SoS Size: Number of constituent-systems within the SoS. Range of values: 2-200.
- SoS Capability Size: Number of equivalent nominal requirements as defined by COSYSMO. Range of values: 1-1000.
- Constituent-system Volatility: Number of non-SoS changes being implemented in each constituent-system in parallel with SoS capability changes. Range of values: 0-2000.
- Scope of SoS Capability: Number of constituent-systems that must be changed to support the capability. Range of values: one to SoS Size (total number of constituent-systems within the SoS).
- SoSE Oversight Factor: Oversight adjustment factor to capture SoSE effort associated with monitoring constituent-system non-SoS changes. Values: 5%, 10%, and 15%.
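To illustrate how such a comparison can be swept across a parameter range, the sketch below varies SoS size in the spirit of Case 1 in the next chapter. The effort functions are compressed restatements of Equations 3 through 5, and the EM values are hypothetical, so the computed tipping point is illustrative and will not reproduce table 11 exactly.

```python
A, B, HRS = 38.55, 1.06, 152  # COSYSMO 1.0 calibration constants; hours per person-month

def pm(parts):
    """Composite COSYSMO effort (person-months) for (EM, size) parts of one estimate."""
    total = sum(size for _, size in parts)
    return A * sum(em * (size / total) * total ** B for em, size in parts) / HRS

def savings(n_sys, cap=100, vol=100, osf=0.05, affected_frac=0.5,
            em_sos=1.5, em_mon=0.5, em_w=0.9, em_n=1.5, em_other=1.0):
    """Collaborative minus acknowledged total SE effort (EM values are hypothetical)."""
    n_aff = max(1, round(n_sys * affected_frac))
    ack = pm([(em_sos, cap), (em_mon, n_sys * vol * osf)])    # Equation 3 (SoSE team)
    ack += n_aff * pm([(1.15 * em_w, cap), (em_other, vol)])  # Equation 4 per affected system
    col = n_aff * pm([(em_n, cap), (em_other, vol)])          # Equation 5 per affected system
    return col - ack

# Locate the tipping point: the smallest SoS size with positive savings.
tipping_point = next((n for n in range(2, 201) if savings(n) > 0), None)
print(f"SoSE team becomes cost effective at SoS size {tipping_point} (hypothetical EMs)")
```

Varying cap, vol, osf, and affected_frac over the ranges in table 10 produces result sets analogous to the cases reported in the next chapter.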
Chapter 5: Research Results
This section presents the results of the various cost model executions. The
results are grouped by the parameters that were varied for each execution of the
model. Note that there were actually three executions of each case, each with a
different SoSE oversight factor (OSF): 5%, 10%, and 15%. In general, as the
SoSE OSF decreased (i.e., as less time was spent monitoring the non-SoS
constituent-system changes), the SoSE team became more cost effective.
5.1 SoS Size Variation
The first set of model results is the set of cases that varied the size of the SoS.
This was accomplished by changing the number of systems within the SoS. SoS
size generally ranged from two systems to 200 systems.
Case 1: Figure 7 shows the case where constituent-system volatility (non-
SoS changes) and the size of the SoS capability requirements are equal and the
capability changes affect half of the constituent-systems.
Figure 7. SoSE Model Case 1. Relative cost of collaborative and acknowledged SoSE: savings (person-months) versus number of systems, for OSF = 5%, 10%, and 15%; capability affects half of the systems; system volatility = 100 reqs and SoS capability = 100 reqs.
In this case, the earliest point at which the SoSE team becomes cost effective
is when the SoS is of size 10 or higher and this is for the sub-case where the OSF
factor is 5%. As the OSF factor changes from 5% to 10%, the cost effective point
changes by one (to 11 systems or higher) and as the OSF factor changes from
10% to 15%, the cost effective point again changes by one (to 12 systems or
higher).
Case 2: Figure 8 shows the case where the constituent-system volatility is
twice the size of the SoS capability requirements and the capability requirements
affect half of the constituent-systems.
Figure 8. SoSE Model Case 2. Relative cost of collaborative and acknowledged SoSE: savings (person-months) versus number of systems, for OSF = 5%, 10%, and 15%; capability affects half of the systems; system volatility = 100 reqs and SoS capability = 50 reqs.
In this case, the earliest point at which the SoSE team becomes cost effective
is when the SoS is of size 11 or higher and this is for the sub-case where the OSF
factor is 5%. As the OSF factor changes from 5% to 10%, the cost effective point
changes by three (to 14 systems or higher). As the OSF factor changes from 10%
to 15%, the cost effective point changes by six (to 20 systems or higher). This
case also shows that as the constituent-system volatility becomes greater in size
than the SoS capability size, the SoSE team only becomes effective for larger
SoSs.
Case 3: Figure 9 shows the case where the constituent-system volatility is
four times the size of the SoS capability requirements and the capability
requirements affect half of the constituent-systems.
Figure 9. SoSE Model Case 3. Relative cost of collaborative and acknowledged SoSE: savings (person-months) versus number of systems, for OSF = 5%, 10%, and 15%; capability affects half of the systems; system volatility = 100 reqs and SoS capability = 25 reqs.
In this case, the earliest point at which the SoSE team becomes cost effective
is when the SoS is of size 13 or higher and this is for the sub-case where the OSF
factor is 5%. As the OSF factor changes from 5% to 10%, the cost effective point
more than doubles (to 31 systems or higher). And as the OSF factor changes from
10% to 15%, the SoSE team is no longer cost effective for any size SoS. This last
sub-case shows that when the SoS capability size is relatively low compared to
the constituent-system changes, the SoSE oversight effort overwhelms the SoS
engineering effort and therefore, the SoSE team is not cost effective. This case
also shows that when the SoSE oversight effort is kept at a lower level and the
constituent-system volatility continues to grow in size relative to the capability
requirements, the SoSE team only becomes effective for larger SoSs.
Case 4: Figure 10 shows the case where the constituent-system volatility is
again equal to the size of the SoS capability requirements, but the capability
requirements affect only a fourth of the constituent-systems.
Figure 10. SoSE Model Case 4. Relative cost of collaborative and acknowledged SoSE: savings (person-months) versus number of systems, for OSF = 5%, 10%, and 15%; capability affects one-fourth of the systems; system volatility = 100 reqs and SoS capability = 100 reqs.
In this case, the earliest point at which the SoSE team becomes cost effective
is when the SoS is of size 10 or higher and this is for the sub-case where the OSF
factor is 5%. As the OSF factor changes from 5% to 10%, the cost effective point
changes by one (to 11 systems or higher) and as the OSF factor changes from
10% to 15%, the cost effective point again changes by one (to 12 systems or
higher). This observation is the same as for case 1 where the constituent-system
volatility was equal to the number of SoS requirements, but the number of
affected systems was a half instead of a quarter of the total number of
constituent-systems. This tends to show that the relative differences between
constituent-system volatility and SoS capability size have more of an effect than
the percentage of systems affected by the SoS capability changes.
Case 5: Figure 11 shows the case where the constituent-system volatility is
more than an order of magnitude greater in size than the size of the SoS
capability requirements and the SoS capability requirements affect half of the
constituent-systems.
Figure 11. SoSE Model Case 5. Relative cost of collaborative and acknowledged SoSE: savings (person-months) versus number of systems, for OSF = 5%, 10%, and 15%; capability affects half of the systems; system volatility = 2000 reqs and SoS capability = 100 reqs.
In this case, the SoSE team is not cost effective for any SoS size.
Case 6: This case, illustrated in figure 12, is similar to case 5 except that the
SoS capability requirements affect all of the constituent-systems. In both case 5
and case 6, the constituent-system volatility size is more than an order of
magnitude greater than the SoS capability requirement size.
Figure 12. SoSE Model Case 6. Relative cost of collaborative and acknowledged SoSE: savings (person-months) versus number of systems, for OSF = 5%, 10%, and 15%; capability affects all of the systems; system volatility = 2000 reqs and SoS capability = 100 reqs.
In this case, the SoSE team is not cost effective for any SoS size when OSF is
either 15% or 10%. For the sub-case where OSF is 5%, the SoSE team becomes
only slightly cost effective when the SoS size is 28 constituent-systems or more.
5.2 Scope of SoS Capability
The second set of model results is the set of cases that varied the number of
constituent-systems affected by the SoS capability. This was accomplished by
changing the number of systems within the SoS that must implement changes to
support the desired new or modified SoS capability.
Case 7: For the initial case 7, shown in figure 13, the constituent-system
volatility size and the number of SoS capabilities requirements were equal
(1000), the SoS size was set to 10, and the number of constituent-systems within
the SoS that were affected by the SoS capability were varied from one to all
systems.
Figure 13. SoSE Model Case 7a (SoS Size = 10). Savings (person-months) versus number of systems affected by the capability, for OSF = 5%, 10%, and 15%; system volatility = 1000 reqs and SoS capability = 1000 reqs.
In this case, the SoSE team is cost effective when the number of systems
affected is at least 6 (60%) out of 10 (100%) and the OSF is 15% or 10%. The SoSE
team is cost effective when the number of systems is 5 (50%) for the sub-case
where OSF is 5%. To evaluate this case further, the total SoS size was changed
from 10 to 100, and the percentage of systems affected by the capability was
varied over the same range (10% to 100%). Figure 14 illustrates case 7b.
Figure 14. SoSE Model Case 7b (SoS Size = 100). Savings (person-months) versus number of systems affected by the capability, for OSF = 5%, 10%, and 15%; system volatility = 1000 reqs and SoS capability = 1000 reqs.
The same relative profile exists as in case 7a, except that the cost effective
point shifts to the left when viewing the results in terms of the percentage of
systems affected by the capability. The SoSE team is cost effective when at least
a) 19% of the constituent-systems are affected and OSF is 15%, b) 14% of the
constituent-systems are affected and OSF is 10%, and c) 10% of the constituent-
systems are affected and OSF is 5%.
Case 8: This case represents the situation where the SoS is comprised of
stable legacy systems with basically no volatility (constituent-system volatility
equals zero). Figure 15 illustrates this case where the SoS size is 10 and the scope
of the SoS change ranges from one system to all systems. Note that because there
is no constituent-system volatility to monitor, the data for all OSF values (5%,
10%, and 15%) is identical.
Figure 15. SoSE Model Case 8. Savings (person-months) versus number of systems affected by the capability, for OSF = 5%, 10%, and 15%; system volatility = none and SoS capability = 1000 reqs.
In this case, the SoSE team becomes cost effective when the SoS capability
requirements affect at least 5 (50%) of the 10 systems. This case was run a
second time with the SoS size set to 100 (instead of 10). The results of this
second run were somewhat similar: the SoSE team becomes cost effective when
the SoS capability requirements affect at least 5 of the constituent-systems. So, in
this situation where there is no constituent-system volatility, it is the actual
number of systems affected (not the percentage of systems affected) that
determines when the SoSE team becomes cost effective.
Case 9: This case represents the situation where the desired SoS capabilities
and performance are fairly stable and little change is occurring at the SoS level.
Figure 16 illustrates this case where the SoS size is 10 systems, the SoS
capability size is one requirement, the constituent-system volatility is 1000
requirements, and the number of systems affected by the SoS capability
requirement varies from one system to all 10 systems.
Figure 16. SoSE Model Case 9. Savings (person-months) versus number of systems affected by the capability, for OSF = 5%, 10%, and 15%; system volatility = 1000 reqs and SoS capability = 1 req.
In this case, the SoSE team is not cost effective for any number of affected
constituent-systems.
Cases 10 through 12: To further investigate cases 7 through 9 for a smaller
SoS, cases 7 through 9 were modified to reflect an SoS size of 5 constituent-
systems.
Case 10, illustrated in figure 17, is a modified version of case 7 where the
SoS capability size is equal to the constituent-system volatility size (1000
requirements) and the number of systems affected by the capability changes
ranges from one system to all systems. This case presents results similar to those
in case 7: the SoSE team becomes cost effective when the number of constituent-
systems affected by the changes is 5. However, since the total number of systems
in the SoS is 5, it is clear that there is only a limited situation where the SoSE
team would be cost effective: when the capability to be implemented affects all of
the systems.
Figure 17. SoSE Model Case 10 (SoS Size = 5). Savings (person-months) versus number of systems affected by the capability, for OSF = 5%, 10%, and 15%; system volatility = 1000 reqs and SoS capability = 1000 reqs.
Case 11, illustrated in figure 18, is a modified version of case 8 where the
SoS capability size is 1000 requirements, the constituent-systems are stable
legacy systems with no current volatility, and the number of systems affected by
the capability changes ranges from one system to all systems. Note that there is
no influence from the OSF factor since there is no constituent-system volatility.
Figure 18. SoSE Model Case 11 (SoS Size = 5). Savings (person-months) versus number of systems affected by the capability, for OSF = 5%, 10%, and 15%; system volatility = none and SoS capability = 1000 reqs.
This case presents results similar to those in case 8: the SoSE team becomes
cost effective when the number of constituent-systems affected by the changes is
5. However, since the total number of systems in the SoS is 5, it is clear that there
is only a limited situation where the SoSE team would be cost effective: when the
capability to be implemented affects all of the systems.
The last case for the small SoS (SoS size equals 5), shown in figure 19, is
similar to case 9 and represents the situation where the SoS capability size is
small relative to the on-going system volatility. This case also presents results
similar to those in case 9: the SoSE team is not cost effective as the capability
scope changes.
Figure 19. SoSE Model Case 12 (SoS Size = 5). Savings (person-months) versus number of systems affected by the capability, for OSF = 5%, 10%, and 15%; system volatility = 1000 reqs and SoS capability = 1 req.
When the results of cases 7 through 9 (for an SoS consisting of 10
constituent-systems) are compared to cases 10 through 12 (for an SoS consisting
of 5 constituent-systems), the comparison findings indicate that, in general, an
SoSE team will be cost effective when the SoS capability affects about 5 or 6
systems. However, when one also compares these results to case 7b, where the
size of the SoS increases to 100 systems, one finds that the percentage of systems
affected by the capability required to make the SoSE team cost effective
decreases considerably.
5.3 Summary of Model Executions
Table 11 provides a summary of each of the SoSE model runs.
Table 11. Summary of SoSE Model Executions

Each entry lists the case parameters (SoS size; SoS capability size; constituent-system volatility; scope of SoS capability), followed by the SoSE team effectiveness point at OSF = 5% / 10% / 15%.

- Case 1 (2-200; 100; 100; half of CSs): SoS size 10 or higher / SoS size 11 or higher / SoS size 12 or higher
- Case 2 (2-200; 50; 100; half of CSs): SoS size 11 or higher / SoS size 14 or higher / SoS size 20 or higher
- Case 3 (2-200; 25; 100; half of CSs): SoS size 13 or higher / SoS size 31 or higher / not cost effective for any size
- Case 4 (4-200; 100; 100; quarter of CSs): SoS size 10 or higher / SoS size 11 or higher / SoS size 12 or higher
- Case 5 (2-200; 100; 2000; half of CSs): not cost effective for any size at any OSF level
- Case 6 (2-200; 100; 2000; all CSs): SoS size 28 or higher / not cost effective / not cost effective
- Case 7a (10; 1000; 1000; 1-10): SoS change affects 50% of CSs or more / 60% or more / 60% or more
- Case 7b (100; 1000; 1000; 10% to 100%): SoS change affects 10% or more of CSs / 14% or more / 19% or more
- Case 8 (10; 1000; 0; 1-10): SoS change affects 5 CSs or more, at all OSF levels
- Case 9 (10; 1; 1000; 1-10): not cost effective for any scope at any OSF level
- Case 10 (5; 1000; 1000; 1-5): SoS change affects all 5 CSs, at all OSF levels
- Case 11 (5; 1000; 0; 1-5): SoS change affects all 5 CSs, at all OSF levels
- Case 12 (5; 1; 1000; 1-5): not cost effective at any OSF level
Looking at all of these SoSE model runs together, one can discern a set of
heuristics that indicate when an SoSE team will be cost effective from an effort
perspective:
1. When the SoS capability size is greater than or equal to the non-SoS
changes being implemented in parallel with the SoS changes, there is
usually an SoS size at which the SoSE team becomes cost effective.
2. Cases 2 and 3 show that for larger SoSs, there are also situations
where the SoS capability size is smaller than the non-SoS changes
and the SoSE team is cost effective.
3. When the SoS size is small (5 or fewer systems) there are few
situations where an SoSE team is cost effective. The situation
identified in the model executions where an SoSE team would be cost
effective for a small SoS is when
a. The SoS capability size is greater than or equal to the non-SoS
changes being implemented in each constituent-system in
parallel with the SoS changes.
b. The SoS capability changes affect all of the constituent-
systems within the SoS.
4. When the size of the SoS capability change is extremely small
compared to the non-SoS changes being implemented in the
constituent-systems, the SoSE team is generally not cost effective.
5. The oversight factor (OSF), which indicates the relative amount of
time spent monitoring constituent-system non-SoS changes, also
impacts the cost effectiveness of the SoSE team. This is a parameter
that deserves further investigation with respect to a) the level of
constituent-system understanding required at the SoS level to
adequately engineer SoS changes, b) the potential impacts of
constituent-system non-SoS changes, and c) the effects of rework
when non-SoS changes adversely impact the goals of the SoS.
It is also important to note that these heuristics are derived from a set of cases
that are based upon a set of simplifying assumptions and constraints (see section
4.1). As was noted in section 4.1, these model assumptions and constraints
generate relatively conservative estimates of cost savings for an acknowledged
SoSE team and as a result, the actual threshold points for a specific SoS may be
lower than indicated in table 11. To determine the cost effectiveness for a specific
SoS implementing a specific set of SoS changes while the constituent-systems are
implementing their specific non-SoS changes, one needs to model those specific
characteristics to determine the cost effectiveness of an SoSE team for that SoS.
Chapter 6: Conclusions
6.1 General Conclusions
Returning to the original research question posed for this research effort and
the associated hypothesis illustrated in figure 20, this research has shown that key
SoS and SoSE characteristics can be modeled using COSYSMO and that the
extensions provided to COSYSMO through this research can be used to evaluate
the cost effectiveness of an SoSE team.
Figure 20. Research Summary: the research question (When is it cost effective to create and empower an SoSE team to oversee and guide the evolution of an SoS?); the hypothesis (subject to a set of reasonably grounded assumptions, there exists a threshold where it is more cost effective to manage and engineer changes to an SoS using an SoSE team, and this threshold can be located by modeling the interdependency and complexity characteristics of the SoS); and the SoSE model parameters (SoS size, scope/size of SoS change, CS volatility, and SoSE oversight).
More specifically, this research has shown that there is a "home-ground" for
an SoSE team supporting the evolution of a given SoS. Smaller SoSs, and SoSs
where the SoS changes are relatively small compared to the constituent-system
changes, can generally be managed cost effectively using a collaborative SoS
management approach. As the SoS grows in size or as the SoS changes grow in
size and scope (for example, the SoS overall architecture needs to change), it will
generally be more cost effective to put an SoSE team in place to manage the SoS
as an acknowledged SoS. However, these findings were obtained in the context of
evaluating approaches for implementing a single SoS capability or capability
change in a single incremental upgrade to the SoS constituent-systems. So, while
there is a discernible cost savings in person-months using one approach or the
other, it is important to evaluate the cost effectiveness in a much broader context.
One must ask questions such as:
- What is the typical rate of SoS change over time?
- How long will the current rate of change continue?
- Will SoS-level changes or constituent-system (non-SoS) changes predominate over time?
An SoSE team is typically composed of several systems engineers, each with a
different specialty area. Over time, one must evaluate whether the SoS oversight
activities and the rate of SoS capability modifications/changes being
implemented are sufficient to keep the SoSE team fully engaged (i.e., little to no
slack time). If there is too much slack time for the SoSE team, then the cost
benefits may disappear.
6.2 Future Work
During this research effort, additional issues have been identified that might
also affect the cost-effectiveness of an SoSE team. These include:
- The types of SoS requirements being implemented (performance versus functionality).
- How the quality of the SoS engineering activities is affected by not engineering changes at a level above the constituent changes, and the amount of rework that is generated when changes are not adequately engineered.
These issues lead to future opportunities to expand on this research:
- Incorporate a rework factor to show the rework impacts of not engineering cross-cutting changes at the SoS level.
- Incorporate the evaluation of other measures of SoS effectiveness related to complexity and performance.
- Develop an SoS capability cost model by characterizing the SoSE team and all of the constituent-system engineering teams in COSYSMO, then calibrating that model.
6.3 Summary of Research Contributions
In summary, with the basic SoSE comparison model developed as part of this
research effort, one can perform some quick assessments to determine the cost
effectiveness of an SoSE team for the engineering of SoS capabilities as well as
investigate relative costs of multiple options for implementing a given SoS
capability. This model can also be easily tailored to capture the specific
characteristics of a single SoS, and with further calibration, can support actual
cost estimation, not just the relative evaluation of alternative approaches to the
implementation of a given capability.
References
Ackoff, R. 1971. Towards a system of systems concepts. Management Science
Theory Series 17, no. 11: 661-671.
ANSI/EIA. 1999. ANSI/EIA-632-1998 Processes for engineering a system.
Berry, B. 1964. Cities as systems within systems of cities. The Regional Science
Association Papers 13.
Berryman, M., A. Allison, and D. Abbott. 2007. Optimizing genetic algorithm
strategies for evolving networks. Paper presented at the symposium on
complex systems engineering, January 11-12, in Santa Monica, CA.
http://cs.calstatela.edu/wiki/index.php/Symposium_on_Complex_Systems_Engineering
(accessed on 1/11/2007).
Black, L. and N. Repenning. 2001. Why firefighting is never enough: Preserving
high-quality product development. System Dynamics Review 17, no. 1: 33-
62.
Blanchard, B. and W. Fabrycky. 1998. Systems engineering and analysis. Upper
Saddle River: Prentice Hall.
Boehm, B., C. Abts, A. Brown, S. Chulani, B. Clark, E. Horowitz, R. Madachy,
D. Reifer, and B. Steece. 2000. Software cost estimation with COCOMO II.
Upper Saddle River: Prentice Hall.
Boehm, B. and J. Lane. 2006. 21st century processes for acquiring 21st century
software-intensive systems of systems. CrossTalk - The Journal of Defense
Software Engineering 19, no. 5: 4-9.
Boehm, B., R. Valerdi, J. Lane, and A. Brown. 2005. COCOMO suite
methodology and evolution. CrossTalk - The Journal of Defense Software
Engineering 18, no. 4: 20-25.
Brooks, F. 1995. The mythical man-month: Essays on software engineering. New
York: Addison-Wesley.
Carlock, P. and R. Fenton. 2001. System of systems (SoS) enterprise systems for
information-intensive organizations. Systems Engineering 4, no. 4: 242-261.
Carlock, P. and J. Lane. 2006. System of systems enterprise systems
engineering, the enterprise architecture management framework, and system
of systems cost estimation. Proceedings of the 21st International Forum on
COCOMO and Systems/Software Cost Modeling, November 7-9, in
Herndon, VA.
Cocks, D. 2006. How should we use the term "system of systems" and why
should we care? Proceedings of the 16th Annual INCOSE International
Symposium, July 9-13, in Orlando, FL.
Cresswell, A., T. Pardo, F. Thompson, D. Canestraro, M. Cook, L. Black, L.
Luna, I. Martinez, D. Anderson, and G. Richardson. 2002. Modeling
intergovernmental collaboration: A system dynamics approach. Proceedings
of the 35th Annual Hawaii International Conference on System Sciences,
January 7-10, in Waikoloa, Hawaii.
Dahmann, J. and K. Baldwin. 2008. Understanding the current state of US
defense systems of systems and the implications for systems engineering.
Proceedings of the IEEE Systems Conference, April 7-10, in Montreal,
Canada.
Department of Defense. 2006. Defense acquisition guidebook, version 1.6.
http://akss.dau.mil/dag/ (accessed on 2/2/2007).
Department of Defense. 2008. Systems engineering guide for system of systems,
version 1.0.
Dorner, D. 1996. The logic of failure. Metropolitan Books.
Eisner, H. 1993. RCASSE: Rapid computer-aided systems of systems
engineering. Proceedings of the 3rd International Symposium of the National
Council of System Engineering (NCOSE) 1: 267-273.
Federation of American Scientists (FAS). Integrated undersea surveillance
system (IUSS). http://www.fas.org/irp/program/collect/iuss.htm (accessed
on 12/27/2006).
Ferreira, S. 2002. Measuring the effects of requirements volatility on software
development projects. PhD Dissertation, Arizona State University.
Finley, J. 2006. Keynote address. Presented at the 2nd Annual System of
Systems Engineering Conference, July 25-26, in Fort Belvoir, VA.
Ford, D. and J. Sterman. 2003. Iteration management for reduced cycle time in
concurrent development projects. Concurrent Engineering Research and
Application (CERA) Journal, special issue, March.
Forrester, J. 1961. Industrial dynamics. Pegasus Communications.
Friedman, T. 2005. The world is flat: A brief history of the twenty-first century.
New York: Farrar, Straus and Giroux.
GlobalSecurity.ORG. 2005. Sound surveillance system (SOSUS).
http://www.globalsecurity.org/intell/systems/sosus.htm (accessed on
1/20/2007).
Greer, D., L. Black, and R. Adams. 2005. Improving inter-organizational
baseline alignment in large space system development programs.
Proceedings of the IEEE Aerospace Conference.
Highsmith, J. 2000. Adaptive software development: A collaborative approach to
managing complex systems. New York: Dorset House Publishing.
INCOSE. 2006. Systems engineering handbook, Version 3, INCOSE-TP-2003-
002-03.
isee Systems. 2007. iThink.
http://www.iseesystems.com/Softwares/Business/ithinkSoftware.aspx
(accessed on 2/10/2007).
ISO/IEC. 2002. ISO/IEC 15288:2002(E) Systems engineering - system life cycle
processes.
IUSS-Caesar Alumni Association (IUSSCAA). 2006. IUSS history.
http://www.iusscaa.org/history.htm (accessed on 12/27/2006).
Kreitman, K. 1996. From the magic gig to reliable organizations: A new
paradigm for the control of complex systems. Paper presented at the
symposium on complex systems engineering.
http://cs.calstatela.edu/wiki/index.php/Symposium_on_Complex_Systems_
Engineering (accessed on 1/11/2007).
Krygiel, A. 1999. Behind the wizard’s curtain: An integration environment for a
system of systems. CCRP Publication Series.
Lane, J. 1999. Quantitative assessment of rapid system development using
COTS integration. Proceedings of the Eleventh Annual Software Technology
Conference, May 2-6, in Salt Lake City, UT.
Lane, J. and B. Boehm. 2007. Modern tools to support DoD software intensive
system of systems cost estimation: A DACS state-of-the-art report. DACS
Report Number 347336. New York: Data and Analysis Center for Software.
Lane, J., S. Settles, and B. Boehm. 2007. Assessment of process modeling tools
to support the analysis of system of systems engineering activities.
Proceedings of the Fifth Annual Conference on Systems Engineering
Research, March 14-16, in Hoboken, NJ.
Lane, J. and R. Valerdi. 2007. Synthesizing system-of-systems concepts for use
in cost estimation. Systems Engineering 10, no. 4:297-307.
Lu, S. 2003. Engineering as collaborative negotiation: A new paradigm for
collaborative engineering. http://wisdom.usc.edu/ecn/
about_ECN_what_is_ECN.htm (accessed on 2/14/2007).
Madachy, R., B. Boehm, and J. Lane. 2007. Assessing hybrid incremental
processes for SISOS development. Software process: Improvement and
practice 12, no. 5: 461-473.
Maier, M. 1998. Architecting principles for systems-of-systems. Systems
Engineering 1, no. 4: 267-284.
Markus, M., A. Majchrzak, and L. Gasser. 2002. A design theory for systems that
support emergent knowledge processes. MIS Quarterly 26, no.3.
NAVSTAR Global Positioning System Joint Program Office.
http://gps.losangeles.af.mil/ (accessed on 12/6/2006).
Northrop, L., P. Feiler, R. Gabriel, J. Goodenough, R. Linger, T. Longstaff, R.
Kazman, M. Klein, D. Schmidt, K. Sullivan, and K. Wallnau. 2006. Ultra-
large-scale systems: The software challenge of the future. Pittsburgh:
Software Engineering Institute.
Pinney, B. 2001. Projects, management, and protean times: Engineering
enterprise in the United States, 1870-1960. PhD Dissertation, Massachusetts
Institute of Technology.
Pressman, J. and A. Wildavsky. 1973. Implementation: How great expectations
in Washington are dashed in Oakland; Or, why it’s amazing that Federal
programs work at all, this being a saga of the Economic Development
Administration as told by two sympathetic observers who seek to build
morals on a foundation of ruined hopes, Berkeley: University of California
Press.
Prokopenko, M., F. Bochetti, and A. Ryan. 2006. An information-theoretic
primer on complexity, self-organisation and emergence. Paper presented at
the symposium on complex systems engineering.
http://cs.calstatela.edu/wiki/
index.php/Symposium_on_Complex_Systems_Engineering (accessed on
1/11/2007).
Rechtin, E. 1991. Systems architecting: Creating and building complex systems.
Upper Saddle River: Prentice Hall.
Ring, J., and A. Madni. 2005. Key challenges and opportunities in system of
systems engineering. Proceedings of IEEE International Conference on
Systems, Man and Cybernetics, October 10-12, Waikoloa, Hawaii.
Sage, A. and C. Cuppan. 2001. On the systems engineering and management of
systems of systems and federations of systems. Information, Knowledge, and
Systems Management 2: 325-345.
SEI. 2001. Capability maturity model integration (CMMI), CMU/SEI-2002-TR-
001.
Sheard, S. 2006. Foundations of complexity theory for systems engineering of
systems of systems. Proceedings of the IEEE Conference on System of
Systems Engineering, April 24-26, Los Angeles, CA.
Shenhar, A. 1994. A new systems engineering taxonomy. Proceedings of the
4th International Symposium of the National Council on System Engineering
2: 261-276.
Smithsonian Institution, National Museum of American History. 2000. Submarine
missions: Anti-submarine warfare. http://americanhistory.si.edu/subs/work/
missions/warfare/index.html (accessed on 1/20/2007).
Society for Design and Process Science (SDPS). 2006. Proceedings of the Ninth
World Conference on Integrated Design and Process Technology, Volume 1.
The Aerospace Corporation, MITRE, RAND, and Third Millennium Systems.
2007. Symposium on complex systems engineering.
http://cs.calstatela.edu/wiki/index.php/Symposium_on_Complex_Systems_
Engineering (accessed on 1/11/2007).
United States Air Force (USAF) Scientific Advisory Board (SAB). 2005. Report
on system-of-systems engineering for Air Force capability development.
Public Release SAB-TR-05-04.
Valerdi, R. 2005. Constructive systems engineering cost model. PhD
Dissertation, University of Southern California.
Valerdi, R. and M. Wheaton. 2005. ANSI/EIA 632 as a standardized WBS for
COSYSMO. AIAA-2005-7373. Proceedings of the AIAA 5th Aviation,
Technology, Integration, and Operations Conference, Arlington, Virginia.
Wang, G., R. Valerdi, A. Ankrum, C. Millar, and G. Roedler. 2008. COSYSMO
reuse extension. Proceedings of the 18th Annual International Symposium of
INCOSE, The Netherlands.
Appendix A: DoD SoS Case Study Summaries
Table A-1. DoD SoS Case Study Summaries
SoS Name | SoS Description | Number of Constituent-Systems | Comments
ABCS | Army Battle Command System | 11 |
AOC Weapon System | Air Operations Center Weapon System (USAF) | 40+ |
BMDS | Ballistic Missile Defense System (Missile Defense Agency) | 264 | Multiple sensors, C2 systems, and weapons
USCG C2 Systems Convergence | United States Coast Guard Command and Control Systems Convergence | 25 | Number of core systems
CAC2S | Common Aviation Command and Control System (USMC) | 10 | 22 external systems
DCGS | Distributed Common Ground System (USAF) | Many | Many: INT providers, other service DCGS systems, DCGS Integration Backbone
DoDIIS | Department of Defense Intelligence Information System | Many | Many: Regional service centers, multiple providers and consumers. Integrates with DCGS. Over 100 sites.
FCS | Future Combat Systems (Army) | 18+ |
GCS | Ground Combat Systems (Army) | Many | Many: Works with FCS; includes Heavy Brigade Combat systems and Stryker Brigade Combat teams
MILSATCOM | Military Satellite Communications (USAF) | 16 |
NIFC-CA | Naval Integrated Fire Control-Counter Air | 12 |
NSA | National Security Agency | | Responses based upon multiple SoS programs under NSA. Specific SoS characteristics not known.
NSWCDD | Naval Surface Warfare Center Dahlgren Division | | Responses based upon multiple SoS programs at NSWCDD. Specific SoS characteristics not known.
SIAP | Single Integrated Air Picture (Joint) | Many | Many: IABM and service sensors (legacy and development)
SMC | Space and Missile Systems Center | | Responses based upon multiple SoS programs at SMC. Specific SoS characteristics not known.
SR | Space Radar | Not Applicable | A system designed to be part of multiple SoSs (http://www.globalsecurity.org/space/systems/sr.htm, accessed on 12/22/2008)
TJTN | Theater Joint Tactical Networks | Many | Based on information at https://eatss1.sed.monmouth.army.mil/ (accessed on 12/22/2008)
TMIP-J | Theater Medical Information Program-Joint | 9 |
Center for Systems & Software Engineering
941 W. 37th Pl., Salvatori 328, Los Angeles, CA 90089-0781
Phone: (213) 740-5703, FAX: (213) 740-4927
Appendix B: System Interdependency Survey
Purpose of Survey
The United States Department of Defense has indicated that “most military systems today are part
of a system of systems (SoS) whether or not explicitly recognized”. To better understand SoS and
the impacts these have on the constituent-systems comprising the SoS, the University of Southern
California (USC) Center for Systems and Software Engineering (CSSE) is conducting SoS and
SoS engineering research. The long term goals are to better understand management structures,
cost, schedule, and risk implications of approaches being used to develop and evolve both formal
and informal SoSs. This particular survey investigates the rate of change of systems that are
connected in some way into a collection of systems or an SoS, how this rate varies as the number
of interfaces changes, and what the implications are for SoS engineering teams managing/guiding
the evolution of an SoS as the number of interconnected systems increases.
Survey Participant Information (optional)
Name: Title
Organization: Phone: Email:
System Description
System name: System sponsor:
Version (or release) of system that survey data corresponds to:
Duration of upgrade cycle: Year that upgrade was deployed:
Number of new/modified requirements associated with release:
Number of requirements changes accepted during release development cycle:
Approximate percentage of these requirements and requirements changes that are related to
external capabilities or potentially impact interfacing systems:
Number of system external interfaces
Number of incoming interfaces: Number of outgoing interfaces:
Do you consider your system to be an SoS (see definition on next page)? Yes No
Is your system a member of one or more SoSs? Yes No
If yes,
Which SoSs:
How many of these SoSs are explicitly managed by an SoS engineering team?
Directions: Please fill out this survey using the definitions provided on the second
page and return your response to Jo Ann Lane (jolane@usc.edu). If these definitions are
not sufficient, please contact Jo Ann Lane for additional clarification.
Definition of Terms
Duration of upgrade cycle: This is the length of time, in months, from start of
requirements analysis for system upgrade through formal test and deployment of upgrade.
Initial number of new/modified requirements: This is the number of new and
modified requirements associated with the indicated system release that initially define the system
upgrade. These are requirements specified to be implemented on top of the previous system
version/release.
Number of requirements changes: This is the number of requirement changes that are
accepted after the start of the current release development effort and implemented as part of the
current release.
Requirements and requirements changes that are related to external
capabilities or potentially impact interfacing systems: Examples of these include
requirements that are related to external interfaces, data processing or algorithms used to generate
or process data sent/received over an external interface, or requirements related to overall
performance, security, or safety of the collection of interfacing systems.
Number of system external interfaces: The number of interfaces between the system
and other systems. If the specific number of interfaces is unknown due to the architecture style
(e.g., a service-oriented architecture has been implemented and the number of systems using the
service is unknown), indicate “many (or several), but number unknown”. (Question: should this
data be collected by incoming and outgoing interfaces?)
System of Systems: A system is considered to be a system of systems if the key components
comprising the SoS can be thought of as systems, the collection of components/systems
comprising the SoS has its own purpose that cannot be fulfilled by any single system within the
collection, and the SoS key components more or less possess the additional properties described
by [Maier, 1998]:
• Components have their own purpose and can operate independently of the system of
systems
• Components are managed and evolved over time by their own sponsors or organizations.
System Upgrade: A major new release of an existing system that starts with a set of new or
modified requirements and culminates in the formal test and deployment of a new version.
How to Return Responses:
You may email your response to Jo Ann Lane at jolane at usc.edu or you can return your response
via mail or fax:
Jo Ann Lane
USC CSSE
941 West 37th Street, SAL 328
Los Angeles, CA 90089-0781
FAX: 213-740-4927
Thank you for your time and consideration!
Table B-1. Summary of System Interdependency Survey Responses Part I
System Name | Description | System Sponsor | Version/Release | Upgrade Cycle Duration | Year Deployed | # of Release Reqs | # of Release Req Chgs | % of Chgs Related to External Capabilities
RAM GMLS | Rolling Airframe Missile Guided Missile Launching System | NAVSEA | MOD 3 | 5 years | 2004 | 200 | 50 | 60%
AN/SPS-73 | Surface Search Radar | PEO IWS2 | SW: 11.7 | 18 months | 2004 | 10 | 3 | 0
AN/SPS-67 | Surface Search Radar | PEO IWS 2R114 | Sys: V5, SW: 1.9.C | | 2007 | 15 | 15 | 24%
Tomahawk | Tomahawk Weapons System | | | 2-3 years | 2003 | 4 | 5 | 50%
MK 84 | Free-fall, nonguided GP 2,000-pound bomb | PEO IWS | | | | | |
JTRS | Joint Tactical Radio System | DoD | Initial | | | | |
TAS | Target Acquisition System | US Navy | 9 | 25 years | 1983 | 22 | 15 | 30%
JABMD | Japan Aegis Ballistic Missile Defense | MDA | JB1.0 | 2.5 years | 2007 | 10 | 5 | 5
NAVSSI AN/SSN-6F(V)4 | Navigation Sensor System Interface | SPAWAR | Block 4 Build 2.1.95 | 12 months | 2006 | 11 | 11 (est) | 50%
NFCS AN/SQY-27 | Naval Fires Control System | PEO-IWS | 3.4 | 1 year | 2007 | 1 | 1 | 95%
SSDS | Ship Self Defense System | USN/PEO (IWS) | SSDS MK 2 | 4 years | 2002 | 2 | 2 | 100%
AEGIS | Weapon System | ISW1/PEO S | BL 7 | 4 years | 2005 | 5 | 5 | 0%
PHALANX (CIWS) | Close-In Weapons System | PEO IWS 3B | BLK IB B/L 2 | 4 years | 2006 | 5 | 15 | 10%
Table B-2. Summary of System Interdependency Survey Responses Part II
System Name | Description | # of I/Fs: Incoming | # of I/Fs: Outgoing | Is System an SoS? | Part of an SoS? | Which SoSs | SoS Mgmt Type
RAM GMLS | Rolling Airframe Missile Guided Missile Launching System | 3 | 2 | Yes | Yes | Ship Self Defense System | Unknown
AN/SPS-73 | Surface Search Radar | 4 | 2 | No | No(?) | |
AN/SPS-67 | Surface Search Radar | 7 | 9 | No | Yes | AEGIS Combat System Element on DDG51 Class | Managed by SoSE team
Tomahawk | Tomahawk Weapons System | 5 | 3 | No | No(?) | |
MK 84 | Free-fall, nonguided GP 2,000-pound bomb | 1 | 3 | No | No | |
JTRS | Joint Tactical Radio System | many | many | No | Yes | FCS |
TAS | Target Acquisition System | 4 | 4 | Yes | Yes | 2: LHD/LHA class, CV/CVN class | No SoSE team for either
JABMD | Japan Aegis Ballistic Missile Defense | 8 | 4 | Yes | Yes | Aegis Weapon System | Managed by SoSE team
NAVSSI AN/SSN-6F(V)4 | Navigation Sensor System Interface | 5 | 5 | Yes | Yes | 4: Ship Navigation, LPD-17 Combat System (CS), CVN-76, LHD CS | All managed by SoSE team
NFCS AN/SQY-27 | Naval Fires Control System | 5 | 3 | Yes | Yes | Naval Surface Fire Support (NSFS) | Managed by SoSE team
SSDS | Ship Self Defense System | 9 | 7 | Yes | Yes | SSDS MK 2 MOD 1/2/3/4 | Managed by SoSE team
AEGIS | Weapon System | | | Yes | No | |
PHALANX (CIWS) | Close-In Weapons System | 4 | 6 | Yes | Yes | 3: AEGIS, SSDS, LPWS |
Appendix C: COSYSMO Parameter Definitions Tailored
for SoSE Comparison Model
The COSYSMO model parameter definitions tailored for the SoSE comparison model are based
upon the COSYSMO model parameter definitions in [Valerdi, 2005].
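COSYSMO computes systems engineering effort as a nonlinear function of weighted size and a
product of effort multipliers. The minimal sketch below (in Python) shows that general form; the
constants A and E are placeholders standing in for the calibrated values in [Valerdi, 2005], and the
example multiplier values are assumptions for illustration only:

    # General COSYSMO form: effort = A * (size ** E) * product(effort multipliers).
    # A and E below are placeholders for the calibrated constants in [Valerdi, 2005].
    from math import prod

    def cosysmo_effort(size, effort_multipliers, A=38.55, E=1.06):
        """Systems engineering effort under this placeholder calibration."""
        return A * (size ** E) * prod(effort_multipliers)

    # Example: mostly nominal (1.0) multipliers with two assumed penalties.
    effort = cosysmo_effort(size=100.0,
                            effort_multipliers=[1.0] * 12 + [1.3, 1.2])
    print(f"Estimated SE effort: {effort:,.0f}")

For the SoSE comparison model, the same form can be evaluated once for the SoSE team and
once for each constituent-system engineering team, and the results summed.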
SIZE DRIVER
Number of System Requirements
This driver represents the number of requirements for the system-of-interest at a specific level of
design. The quantity of requirements includes those related to the effort involved in system
engineering the system interfaces, system specific algorithms, and operational scenarios.
Requirements may be functional, performance, feature, or service-oriented in nature depending on
the methodology used for specification. They may also be defined by the customer or contractor.
Each requirement may have effort associated with it such as verification and validation, functional
decomposition, functional allocation, etc. System requirements can typically be quantified by
counting the number of applicable shalls/wills/shoulds/mays in the system or marketing
specification. Ratings:
Easy: Simple to implement, traceable to source, little requirements overlap
Nominal: Familiar, can be traced to source with some effort, some overlap
Difficult: Complex to implement or engineer, hard to trace to source, high degree of
requirements overlap
For system of system (SoS) responses, only provide data for SoS-related parameters. For
constituent-systems, provide two separate counts: one for requirements that are related to changes
within the system of interest (non-SoS) and another count for the requirements that are related to
SoS needs (e.g., sharing of information or coordinating activities with other members of the SoS).
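A minimal sketch of this dual counting (in Python) appears below; the difficulty weights shown
are placeholders for the published COSYSMO weights, and the counts are invented for
illustration:

    # Weighted requirements size from counts by difficulty rating.
    # Weights are placeholders, not the published COSYSMO values.
    def weighted_req_size(easy, nominal, difficult, weights=(0.5, 1.0, 5.0)):
        return weights[0] * easy + weights[1] * nominal + weights[2] * difficult

    # A constituent system reports two separate sizes:
    non_sos_size = weighted_req_size(easy=20, nominal=10, difficult=2)  # internal changes
    sos_size = weighted_req_size(easy=5, nominal=4, difficult=3)        # SoS-driven changes
    print(f"non-SoS size: {non_sos_size}, SoS-related size: {sos_size}")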
COST PARAMETERS
NOTE: SoS responses address only the SoS-related parameters. Constituent systems provide both
non-SoS and SoS-related ratings, as applicable.
Requirements understanding
This cost driver rates the level of understanding of the system requirements by all stakeholders
including systems, software, hardware, customers, team members, users, etc. Primary sources of
added systems engineering effort are unprecedented systems, unfamiliar domains, or systems
whose requirements are emergent with use. Ratings:
Very Low: Poor, emergent requirements or unprecedented system
Low: Minimal, many undefined areas
Nominal: Reasonable, some areas undefined
High: Strong, few undefined areas
Very High: Full understanding of requirements, familiar system
Architecture understanding
This cost driver rates the relative difficulty of determining and managing the system architecture
in terms of platforms, standards, components (COTS/GOTS/NDI/new), connectors (protocols),
and constraints. This includes tasks like systems analysis, tradeoff analysis, modeling, simulation,
case studies, etc. Ratings:
Very Low: Poor understanding of architecture and COTS, unprecedented system
Low: Minimal understanding of architecture and COTS, many unfamiliar areas
Nominal: Reasonable understanding of architecture and COTS, some unfamiliar areas
High: Strong understanding of architecture and COTS, few unfamiliar areas
Very High: Full understanding of architecture, familiar system and COTS
Level of service requirements
This cost driver rates the difficulty and criticality of satisfying the ensemble of level of service
requirements, such as security, safety, response time, interoperability, maintainability, Key
Performance Parameters (KPPs), the “ilities”, etc. Ratings:
Very Low: Simple; single dominant KPP, Criticality: slight inconvenience
Low: Low, some coupling among KPPs, Criticality: easily recoverable losses
Nominal: Moderately complex, coupled KPPs, Criticality: some loss
High: Difficult, coupled KPPs, Criticality: high financial loss
Very High: Very complex, tightly coupled KPPs, Criticality: risk to human life
Migration complexity
This cost driver rates the extent to which the legacy system affects the migration complexity, if
any. Legacy system components, databases, workflows, environments, etc., may affect the new
system implementation due to new technology introductions, planned upgrades, increased
performance, business process reengineering, etc. Ratings:
Nominal: Legacy contractor is self, legacy system is well documented, original team is
largely available; effect of legacy system on new system: everything is new, legacy is
completely replaced or non-existent
High: Legacy contractor is self, original development team not available, most
documentation is available; effect of legacy system on new system: migration is restricted to
integration only
Very High: Legacy contractor is a different contractor, limited documentation; effect of
legacy system on new system: migration is related to integration and development
Extra High: Original contractor is out of business, no documentation available; effect of
legacy system on new system: migration is related to integration, development, architecture,
and design
Technology Risk
The maturity, readiness, and obsolescence of the technology being implemented. Immature or
obsolescent technology will require more Systems Engineering effort. Ratings:
Very Low: Technology proven and widely used throughout industry (TRL 9), no
obsolescence issues
Low: Proven through actual use and ready for widespread adoption (TRL 8), no
obsolescence issues
Nominal: Proven on pilot projects and ready to roll-out for production jobs (TRL 7),
technology is state-of-the-practice, emerging technology could compete in future
High: Ready for pilot use (TRL 5 or 6) OR technology stale and/or new and better
technology ready for pilot use
Very High: Still in laboratory (TRL 3 or 4) OR technology is outdated and use should be
avoided in new systems/spare parts supply is scarce
Documentation match to life cycle needs
The formality and detail of documentation required to be formally delivered based on the life
cycle needs of the system. Ratings:
Very Low: Minimal or no specified document and review requirements relative to life cycle
needs (general goals, stories)
Low: Relaxed documentation and review requirements relative to life cycle needs (broad
guidance, flexibility is allowed)
Nominal: Risk-driven degree of formality, amount of documentation and reviews in sync and
consistent with life cycle needs of the system (risk-driven degree of formality)
High: High amounts of documentation, more rigorous relative to life cycle needs, some
revisions required (partially streamlined process, largely standards-driven)
Very High: Extensive documentation and review requirements relative to life cycle needs,
multiple revisions required (rigorous, follows strict standards and requirements)
# and diversity of installations/platforms
The number of different platforms that the system will be hosted and installed on. The complexity
in the operating environment (space, sea, land, fixed, mobile, portable, information
assurance/security, constraints on size, weight, and power). For example, in a wireless network it
could be the number of unique installation sites and the number of and types of fixed clients,
mobile clients, and servers. Number of platforms being implemented should be added to the
number being phased out (dual count). Ratings:
Nominal: Single installation site or configuration; existing facility meets all known
environmental operating requirements; less than 3 types of platforms being installed and/or
being phased out/replaced, homogeneous platforms, typically networked using single
industry standard protocol
High: 2-3 sites or diverse installation configurations; moderate environmental constraints,
controlled environment (i.e., A/C, electrical); 4-7 types of platforms being installed and/or
being phased out/replaced, compatible platforms, typically networked using a single industry
protocol and multiple operating systems
Very High: 4-5 sites or diverse installation configurations; ruggedized mobile land-based
requirements, some information security requirements, coordination between one or two
regulatory or cross functional agencies required; 8-10 types of platforms being installed
and/or phased out/replaced, heterogeneous, but compatible platforms, typically networked
using a mix of industry standard protocols and proprietary protocols, single operating
systems
Extra High: 6 or more sites or diverse installation configuration; harsh environment (space,
sea, airborne), sensitive information security requirements, coordination between three or
more regulatory or cross functional agencies required; greater than 10 types of platforms
being installed and/or being phased out/replaced, heterogeneous/incompatible platforms,
typically networked using a mix of industry standard protocols and proprietary protocols,
multiple operating systems
# of recursive levels in the design
The number of levels of design related to the system-of-interest (as defined by ISO/IEC 15288)
and the amount of required SE effort for each level. Ratings:
Very Low: Single level, focused on single product
Low: 2 levels, some vertical and horizontal coordination
Nominal: 3-5 levels, more complex interdependencies, coordination and tradeoff analysis
High: 6-7 levels, very complex interdependencies, coordination, tradeoff analysis
Very High: Greater than 7 levels, extremely complex interdependencies, coordination, and
tradeoff analysis
Stakeholder team cohesion
Represents a multi-attribute parameter which includes leadership, shared vision, diversity of
stakeholders, approval cycles, group dynamics, Integrated Product Team framework, team
dynamics, trust, and amount of change in responsibilities. It further represents the heterogeneity
in stakeholder community of the end users, customers, implementers, and development team.
Ratings:
Very Low: Culture: Stakeholders with diverse expertise, task nature, language, culture,
infrastructure, highly heterogeneous stakeholder communities; Compatibility: Highly
conflicting organization objectives; Familiarity and trust: Lack of trust
Low: Culture: Heterogeneous stakeholder community, some similarities in language and
culture; Compatibility: Converging organizational objectives; Familiarity and trust: Willing
to collaborate, little experience
Nominal: Culture: Shared project culture; Compatibility: Compatible organizational
objectives; Familiarity and trust: Some familiarity and trust
High: Culture: Strong team cohesion/project culture, multiple similarities in language and
expertise; Compatibility: Clear roles/responsibilities; Familiarity and trust: Extensive
successful collaboration
Very High: Culture: Virtually homogeneous stakeholder communities, institutionalized
project culture; Compatibility: Strong mutual advantage to collaboration; Familiarity and
trust: Very high level of familiarity/trust
Personnel/team capability
Composite intellectual capability of a team of Systems Engineers (compared to the national pool
of SEs) to analyze complex problems and synthesize solutions. Ratings:
Very Low: 15th percentile
Low: 35th percentile
Nominal: 55th percentile
High: 75th percentile
Very High: 90th percentile
Personnel experience/continuity
The applicability and consistency of the staff at the initial stage of the project with respect to the
domain, customer, user, technology, tools, etc. Ratings:
Very Low: Less than 2 months experience, 48% annual turnover
Low: 1 year continuous experience, other technical experience in similar job, 24% annual
turnover
Nominal: 3 years continuous experience, 12% annual turnover
High: 5 years of continuous experience, 6% annual turnover
Very High: 10 years continuous experience, 3% annual turnover
Process capability
The consistency and effectiveness of the project team at performing SE processes. This may be
based on assessment ratings from a published process model (e.g., CMMI, EIA-731, SE-CMM,
ISO/IEC15504). It can alternatively be based on project team behavioral characteristics, if no
assessment has been made. Ratings:
Very Low: CMMI Level 0 (if continuous model)
Low: CMMI Level 1
Nominal: CMMI Level 2
High: CMMI Level 3
Very High: CMMI Level 4
Extra High: CMMI Level 5
Multisite coordination
Location of stakeholders, team members, resources, corporate collaboration barriers. Ratings:
Very Low: Collocation: International, severe time zone impact; Communications: Some
phone, mail; Corporate Collaboration Barriers: Severe export and security restrictions
Low: Collocation: Multi-city and multi-national, considerable time zone impact;
Communications: Individual phone, FAX; Corporate Collaboration Barriers: Mild export
and security restrictions
Nominal: Collocation: Multi-city or multi-company, some time zone effects;
Communications: Narrowband email; Corporate Collaboration Barriers: Some contractual
and intellectual property constraints
High: Collocation: Same city or metro area; Communications: Wideband electronic
communication; Corporate Collaboration Barriers: Some collaborative tools and processes
in place to facilitate or overcome, mitigate barriers
Very High: Collocation: Same building or complex, some co-located stakeholders or onsite
representation; Communications: Wideband electronic communication, occasional video
conference; Corporate Collaboration Barriers: Widely used and accepted collaborative tools
and processes in place to facilitate or overcome/mitigate barriers
Extra High: Collocation: Fully co-located stakeholders; Communications: Interactive media;
Corporate Collaboration Barriers: Virtual team environment fully supported by interactive,
collaborative tools environment
Tool support
Coverage, integration, and maturity of the tools in the Systems Engineering environment. Ratings:
Very Low: No SE tools
Low: Simple SE tools, little integration
Nominal: Basic SE tools moderately integrated throughout the systems engineering process
High: Strong, mature SE tools, moderately integrated with other disciplines
Very High: Strong, mature proactive use of SE tools integrated with process, model-based
SE and management systems
Abstract
Today's need for more complex, more capable systems in a short timeframe is leading more organizations toward the integration of new and existing systems with commercial-off-the-shelf (COTS) products into network-centric, knowledge-based systems of systems (SoS). With this approach, the system development processes used to define the new architecture, identify sources to either supply or develop the required components, and eventually integrate and test these high-level components are evolving and are being referred to as SoS Engineering (SoSE). In recent years, the systems engineering (SE) community has struggled to decide whether SoSE is really different from traditional SE and, if so, how it differs. Recent research and case studies conducted by the Department of Defense have confirmed that there are indeed key differences and that traditional SE processes are not sufficient for SoSE. However, as with any engineering discipline, how and how much SoSE differs depends on several factors. This research further investigated SoSE through the study of several large-scale SoSE programs and several SE programs that were considered part of one or more SoSs, in order to identify key SoSE strategies and how these strategies differed based on SoS characteristics and constituent systems. The results of these investigations were then captured in a system dynamics model that allows one to explore SoSE options with respect to engineering effort and return on SoSE investment. Two SoS capability development strategies (with and without an SoSE team to guide capability development) were compared and used to assess the value added by the SoSE team with respect to the total SE effort expended to engineer an SoS capability. It is clear from both the Office of the Secretary of Defense pilot studies and the system dynamics model analysis conducted as part of this research that there are conditions under which investments in SoSE have positive returns on investment and conditions under which the returns are negative.