A MODEL FOR ESTIMATING SCHEDULE ACCELERATION
IN AGILE SOFTWARE DEVELOPMENT PROJECTS
by
Dan Ingold
A Dissertation Presented to the
FACULTY OF THE GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the Requirements for the Degree
DOCTOR OF PHILOSOPHY
(COMPUTER SCIENCE)
March 2015
Copyright 2015 Dan Ingold
Dedication
I would like to thank the many people and organizations that made this research
possible, and helped me through the years to realize my goal of receiving a doctorate.
Financial support by the Systems Engineering Research Consortium, Lockheed Martin,
the Metropolitan Water District of Southern California, and other sponsors made my
research feasible, and I hope my contributions merit the investment these organizations so
graciously made. Individuals who have intellectually challenged and guided me through
many fascinating research projects include Jo Ann Lane, Ray Madachy, and Rich Turner,
without whose help my work could not have progressed, and I offer special thanks to
John Alms for being the “mother hen” who nagged me toward completion. I am deeply
grateful, too, for the insightful comments, criticisms, and advice provided by my
committee members—Leana Golubchik, William GJ Halfond, and Behrokh Khoshnevis.
Words seem inadequate to express my gratitude for my advisor and committee chair
Barry Boehm, such a giant in this field, who guided, advised, defended and trusted me
through the ups and downs of research and dissertation preparation, and who never gave
up even when I was discouraged about finishing. Finally, this journey would have been
impossible were it not for the loving support of my wife Monica, and the constant
encouragement of Daniel, Kathryn, Jacqueline, and William, my now-grown children of
whom I am so proud. If I have omitted others from this note, the error is mine and I
apologize, for my gratitude extends to them as well.
The glory of God is intelligence, or, in other words, light and truth.
(Doctrine & Covenants 93:36)
The fear of the Lord is the beginning of knowledge: but fools despise
wisdom and instruction. (Proverbs 1:7)
Table of Contents
List of figures
List of tables
Abstract
Chapter 1 Introduction
1.1 Motivation
1.2 Research questions
1.3 Intended research contribution
Chapter 2 Background and related work
2.1 Introduction
2.2 Schedule estimation
2.3 Schedule acceleration
2.4 CORADMO schedule acceleration factors
2.5 Agile development methodologies
2.6 Taxonomy of ISD agility
Chapter 3 Methodology
3.1 Design of experiment
3.2 Static model approach
3.3 Dynamic model approach
3.4 System dynamics modeling
3.5 Hypotheses
Chapter 4 Model
4.1 Model overview
4.2 CORADMO module
4.2.1 Overview
4.2.2 Application of CORADMO factors
4.3 Planning module
4.4 Refactoring module
4.5 Change module
4.6 Customer module
4.7 QA module
4.8 Productivity module
4.9 Estimation module
4.10 Manpower module
Chapter 5 Results
5.1 Dynamic model calibration
5.2 Running test cases
5.3 Quantitative results
5.4 Qualitative observations
5.4.1 Requirements growth
5.4.2 Change traffic
5.4.3 Refactoring and quality
5.4.4 Productivity
5.4.5 Schedule and scope
Chapter 6 Discussion
6.1 Hypotheses
6.2 Threats to validity
6.2.1 Internal validity
6.2.2 Construct validity
6.2.3 External validity
References
Appendix A System dynamics flow equations
A.1 Change module
A.2 CORADMO module
A.3 Customer
A.4 Estimation
A.5 Manpower
A.6 Planning
A.7 Productivity
A.8 Quality Assurance (QA)
A.9 Refactoring
Appendix B Graphs of results for each sample project
B.1 Project 1: Applied Systems
B.2 Project 2: DAS Profume
B.3 Project 3: Finatus
B.4 Project 4: Argent Trading
B.5 Project 5: CBOE 2
B.6 Project 6: CTC
B.7 Project 7: CBOE 1
B.8 Project 8: Chronos
B.9 Project 9: eMerge
B.10 Project 10: Appointments 123
B.11 Project 11: VisiBILLity
B.12 Project 12: Motient
List of figures
Figure 1. Agile adoption rates, from (West et al. 2010)
Figure 2. Cost/schedule tradeoff (Boehm 1981)
Figure 3. Effort vs. Duration, ISBSG 2010
Figure 4. Effort vs. Duration, AgileTek 2004
Figure 5. Duration reduction
Figure 6. “Impossible zone” of schedule compression (McConnell 2006, 226)
Figure 7. Problem domain vs. level of abstraction
Figure 8. Modeling approach vs. level of abstraction
Figure 9. CORADMO model overview
Figure 10. CORADMO module
Figure 11. Planning module
Figure 12. Refactoring module
Figure 13. Change module
Figure 14. Customer module
Figure 15. QA module
Figure 16. Productivity module
Figure 17. Estimation module
Figure 18. Manpower module
Figure 19. Nominal schedule derived through iteration (typical)
Figure 20. Requirements growth and schedule
Figure 21. Change traffic behavior, Project 3 (Finatus)
Figure 22. Other change traffic behavior, Project 3 (Finatus)
Figure 23. Early vs. late refactoring
Figure 24. “Bad smell” refactoring
Figure 25. Productivity decline
Figure 26. Schedule and scope interactions
List of tables
Table 1. Traditional vs. agile development
Table 2. CORADMO factor contributions to ISD agility attributes
Table 3. Schedule accelerators and rating factors
Table 4. Initial calibration against commercial projects
Table 5. System dynamics modeling elements
Table 6. CORADMO module variables
Table 7. Mapping of CORADMO factors to Cao modules
Table 8. Simplicity sub-factor rating values
Table 9. Element reuse sub-factor rating values
Table 10. Low-priority deferrals sub-factor rating values
Table 11. Models vs documents sub-factor rating values
Table 12. Technology maturity sub-factor rating values
Table 13. Process streamlining sub-factor rating values
Table 14. Tool support sub-factor rating values
Table 15. Collaboration support sub-factor rating values
Table 16. MMPTs sub-factor rating values
Table 17. General KSA sub-factor rating values
Table 18. Team compatibility sub-factor rating values
Table 19. Risk acceptance sub-factor rating values
Table 20. Planning module variables
Table 21. Refactoring module variables
Table 22. Change module variables
Table 23. Customer module variables
Table 24. QA module variables
Table 25. Productivity module variables
Table 26. Estimation module variables
Table 27. Manpower module variables
Table 28. Sample project values (supplement to (Manzo 2004))
Table 29. Original factor ratings summary, by project
Table 30. Original factor ratings schedule results, by project
Table 31. Final factor ratings summary, by project
Table 32. Revised factor ratings detail, by project
Table 33. Revised factor ratings schedule results, by project
Abstract
This research assesses the effect of product, project, process, people and risk factors on
schedule for software development projects that employ agile methods. Prior research identified
these factors as being significant within lean/agile organizations with a history of rapid-response
to new product development needs. This work integrates these factors into CORADMO, the
Constructive Rapid Application Development Model, an offshoot of the COCOMO family of
effort and schedule estimation models.
CORADMO is based on a system dynamics model of the agile development process,
which simulates the flow of development tasks and change items through the process. The five
major factors are elaborated into twelve sub-factors, most having a second-, third- or higher-
order effect on schedule. Each of the factors and sub-factors is rated along a six-element Likert
scale, which determines a set of weighting multipliers derived from COCOMO, COSYSMO, and
other models. These multipliers are applied to the system dynamics model elements that affect
task production, change rates, defect insertion, refactoring, and other processes, and the schedule
effects assessed.
The results of this modeling show very good ability to predict the schedule outcomes of
agile projects. The research evaluates the dynamic model against twelve commercial projects,
which show from 2% schedule overrun to 56% underrun, and that implement a variety of product
types using diverse languages. The twelve factors were rated for each project based on
information the projects provided, and the simulated schedule results compared with the actual
schedules realized. Although wide-range validation is limited due to the availability of test data,
the CORADMO model is able to predict accurately the actual schedule outcomes of these
commercial projects.
Chapter 1
Introduction
1.1 Motivation
Accelerating development schedules is increasingly important in a competitive world.
Reduced time-to-market is a key response to competitive threats in the commercial sphere, and
rapid response in deploying military systems may save lives and deter adversaries in a
geopolitical environment characterized by rapidly emerging and ever-changing physical threats.
Agile/lean development methodologies show promise in providing the desired schedule
acceleration, within certain problem domains and organizational characteristics (Boehm and
Turner 2004).
This research develops and explores the Constructive Rapid Application Development
Model (CORADMO), a derivative of the revised Constructive Cost Model (COCOMO II)
(Boehm et al. 2000). CORADMO attempts to quantify both the positive and the negative effects
of key schedule drivers, and thus enables planners to estimate the relative schedule for projects
with these characteristics. Earlier efforts at defining CORADMO envisioned it as addressing
primarily the early phases of the development lifecycle, then a common use of the “rapid
application development” (RAD) concept (2000, 214). As such, the earlier CORADMO
explicitly examined the distribution of schedule by phase as calculated by COPSEMO (2000,
197). More recently, the idea of RAD, while related to that originally conceived by James Martin
(1991), has come to encompass a range of development techniques now more commonly called
agile or lean.
Figure 1. Agile adoption rates, from (West et al. 2010)
As Figure 1 illustrates, agile/lean methods dominate the development landscape, making
research on their scheduling particularly relevant (West et al. 2010; de Cesare et al. 2010).
Agile/lean development methods are often chosen when time-certain delivery is required, and
when cost and/or features can be allowed to vary to meet the schedule constraint (Beck 2005;
Boehm and Turner 2004). Agile methods encourage schedule planning at the team level, and
such plans frequently encompass only the next iteration or release with any specificity
(Schwaber 1997; Beck 2005). Development organizations must still employ some means of
estimating the total effort required to produce a product, and its requisite schedule. Little
research is available, though, and few practical guidelines exist for estimating an appropriate
schedule in agile projects, regardless of the source of the effort estimate.
COCOMO estimates project duration as being proportional to the cube root of the project
effort (Boehm 1981; Oligny et al. 2000). This relationship was calibrated against larger projects,
however, which are typically optimized to reduce cost, whereas the goal of projects using
agile/lean techniques is often to accelerate schedule. Further, COCOMO II generates
unreasonably high duration estimates for projects of fewer than two person-years of effort
(Boehm et al. 2000, 202), and does not explicitly consider rapid development techniques.
Smaller projects are reported to have durations proportional to the square root of the project
effort.
Following the guidance established by Basili (1986), the author considers the following
motivations for pursuing the CORADMO research:
• To determine whether agile projects follow the square root baseline duration curve
• To identify the factors that accelerate or extend agile project duration from its baseline
• To assess the quantitative effects of these factors on schedule, in both large and small
agile projects
1.2 Research questions
This research is intended to address the following questions:
• RQ 1: Do the durations of larger-scale agile projects remain proportional to the square-
root of development effort?
• RQ 2: What factors are predictive of schedule reduction or extension in agile
development projects?
• RQ 3: What is the quantitative effect of the application of these factors on project
schedule?
1.3 Intended research contribution
This research is intended to provide the following contributions:
• Examine relationship of development effort and schedule duration in agile projects
• Identify key factors affecting agile software project development schedule duration
• Quantify effect of each factor on schedule acceleration/deceleration
• Create a model to allow predictive modeling of schedule acceleration factors
The CORADMO model described in this research can be used to estimate the effect of
choices about project, process, product, people, and risk acceptance on the project, by rating the
model factors for a given software development effort. CORADMO can be used to evaluate the
tradeoffs between model factors—for example, to decide whether the schedule expansion caused
by a non-collocated team might be offset by better collaboration support. Even where tradeoffs
are not being evaluated, rating the factors for an effort before commencing it may assist project
management by defining the feasibility of a proposed schedule.
Note that CORADMO does not itself estimate schedule. That is left to other tools such as
COCOMO II. Given the nominal schedule estimate other tools provide, however, CORADMO
can help determine whether that schedule might be accelerated through management choices that
affect the CORADMO factors, or similarly if the schedule might be extended past its nominal
estimate because of those choices.
The system dynamics model used to evaluate and support CORADMO in this research
may itself be used to evaluate the effects of software development methods, with respect to
CORADMO factors. This research has postulated the likely effects of those factors on the
development process, and evaluated the results of those choices on the development schedule.
Future researchers may extend or modify the underlying model to support non-agile development
processes, or more complex or hybrid processes. Others might also find the CORADMO factors
affect other aspects of development not considered in this research. Both CORADMO and the
supporting system dynamics model are intended to be extensible by future researchers interested
in factors that affect project duration.
Chapter 2
Background and related work
2.1 Introduction
Increased competition in the commercial sphere makes time-to-market a key advantage.
Agile software development methods (SDMs) have been effective in reducing schedules and
increasing productivity in commercial software projects, and helping to decrease time-to-market
(Reifer 2013). Recent surveys of commercial firms show that agile SDMs have now overtaken
traditional plan-driven, serial methods as the preferred choice for development (West et al. 2010;
de Cesare et al. 2010).
Likewise, in the military world, the rapidly changing tactics of adversaries demand
equally rapid production of counter-measures. In an era of decreasing defense budgets and
increasing asymmetric threats, rapid response at low cost is vital to our success.
Defense contractors and the government are eager to find ways these agile SDMs might provide
similar productivity gains and schedule improvement in their complex and critical systems.
Regardless of whether a traditional or an agile SDM will be used, planners must estimate
the labor effort required to produce a software-intensive system. This effort estimate will be
proportional to the size and complexity of the system to be constructed, the environment in
which it will be produced, the capabilities of the development staff, and many other factors, and
can be developed from estimation models and/or expert judgement.
Estimation methods include judgement-, heuristic- and model-based strategies. The latter
comprise several popular tools, including COCOMO II, SEER/SEM, SLIM, and so forth.
Regardless of the estimation method, planners determine the required effort (notionally in
person-months) necessary to produce the desired system. This effort may be realized in varying
combinations of schedule (in months) and staff loading (in persons). For example, a 42 person-
month effort might be scheduled for six months with seven persons, or with six persons for seven
months.
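The person-month identity underlying this tradeoff is simple to compute. The following is a minimal Python sketch; the 42 person-month figure is the example above, while the candidate durations are illustrative choices, not output of any estimation model:

    # Effort (person-months) = duration (months) x average staff (persons).
    effort_pm = 42.0
    for months in (6.0, 7.0, 10.5):
        staff = effort_pm / months
        print(f"{effort_pm:.0f} PM = {months:.1f} months x {staff:.1f} persons")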
Models such as COCOMO II can recommend an “optimal” schedule and staff loading;
this optimum, however, is usually chosen to minimize total cost (Boehm 1981). As illustrated in
Figure 2, many models assume a convex relationship between schedule and cost. Schedules that
deviate from “nominal” are presumed to increase cost (1981). To realize a given effort in a
shorter schedule requires additional staff and/or longer hours, which increases interpersonal
communication and/or lowers productivity (Brooks 1995). A longer schedule requires fewer
staff, which may reduce staff productivity due to lower schedule pressure, and possibly increase
“social loafing” (Alnuaimi, Robert, and Maruping 2010; Nan and Harter 2009).
Figure 2. Cost/schedule tradeoff (Boehm 1981)
Development projects may be more concerned with minimizing schedule to meet a
competitive threat or required deadline, however, than with minimizing cost. Agile proponents
have claimed schedule reductions of 40-60%, some of which have been validated through
independent analysis (Reifer 2013). Some research has been done on the effect of specific agile
practices on schedule and quality, including pair programming (Williams et al. 2000; Hulkko and
Abrahamsson 2005; Balijepally et al. 2009), refactoring (Cao, Ramesh, and Abdel-Hamid 2010),
and other agile practices (Parsons, Ryu, and Lal 2007). No research has been done, however, on
the effect of larger-scale influences (the organizational milieu) on schedule.
This research aims to identify a set of product, process, project, people, and risk-tolerance
factors, and to determine their quantitative influence on project schedule.
2.2 Schedule estimation
Historically, software development cost models have concentrated on estimating the
effort required for implementation. Schedule estimation has been a secondary aspect of the
estimation models, which typically derive the total development schedule as

D = C × PM^F

where D is the duration in months, PM is the effort in person-months, C is a constant in the
range of 2.0 to 4.0, and F is approximately 0.33 (Boehm et al. 2000; McConnell 2006, 221).
Many authors have used this approximately cube-root relationship of effort to duration, with
(Walston and Felix 1977) being among the earliest to document it.
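To make this relationship concrete, the following is a minimal Python sketch of the generic schedule equation, assuming illustrative parameter values of C = 3.0 (within the 2.0–4.0 range above) and F = 0.33:

    # D = C * PM**F: duration in months from effort in person-months.
    def duration_months(effort_pm: float, c: float = 3.0, f: float = 0.33) -> float:
        return c * effort_pm ** f

    for pm in (16, 64, 256):
        print(f"{pm} PM -> {duration_months(pm):.1f} months")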
Figure 3. Effort vs. Duration, ISBSG 2010
Figure 3 plots the effort vs. duration of 36 recent projects from data provided by
(Menzies et al. 2012) in the PRedictOr Models In Software Engineering (PROMISE) database,
collected by the International Software Benchmarking Standards Group (ISBSG). Automated
curve fitting by the Eureqa data analysis package fits the cube-root (CUBRT) curve to this data,
with a fitted equation of the form D = C × PM^(1/3), which agrees well with the relationship
described in (Boehm et al. 2000, 57), and is consistent with the observation in (McConnell
2006, 222) that the coefficient of this equation falls in the range 2.0–4.0.
The COnstructive Phased Schedule and Effort MOdel (COPSEMO) (Boehm et al. 2000,
205) asserts that in traditional “phased” projects, the durations of projects smaller than 16
person-months of effort fall along the square-root curve (SQRT) in Figure 3, and those of
projects above 64 person-months of effort along the cube-root curve (CUBRT), with efforts
between these limits
following the black line (COPSEMO). This research hypothesizes that the duration of agile
projects continues to follow the square-root curve through the entire range of efforts.
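A minimal Python sketch of the piecewise behavior COPSEMO asserts for traditional projects appears below. The 16 and 64 person-month breakpoints are from the text; the coefficients and the log-linear blend across the transition region are illustrative assumptions, not the calibrated COPSEMO relationship:

    import math

    def copsemo_duration(effort_pm: float, c_sqrt: float = 2.0, c_cbrt: float = 3.0) -> float:
        sqrt_curve = c_sqrt * effort_pm ** 0.5          # small-project regime
        cbrt_curve = c_cbrt * effort_pm ** (1.0 / 3.0)  # large-project regime
        if effort_pm <= 16:
            return sqrt_curve
        if effort_pm >= 64:
            return cbrt_curve
        # Assumed blend between the two curves for 16-64 PM efforts.
        w = (math.log(effort_pm) - math.log(16.0)) / (math.log(64.0) - math.log(16.0))
        return (1.0 - w) * sqrt_curve + w * cbrt_curve

Under the research hypothesis, an agile project's duration would instead follow the square-root curve over the entire effort range.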
Figure 4. Effort vs. Duration, AgileTek 2004
Figure 4 shows a data set of 12 agile projects provided by AgileTek (supplement to
Manzo 2004) to the Center for Systems and Software Engineering (CSSE). These projects span
the effort range considered by COPSEMO, and yet even at effort levels exceeding 64 person-
months still show project durations along the square-root curve. Although further data is needed,
these data appear to support the hypothesis that agile projects have a duration proportional to the
square-root of effort.
Figure 5. Duration reduction
Agile SDM proponents claim a key benefit of agile methods is a reduction in project
duration (Beck 2005). Figure 5 compares the cube-root and square-root curves relating effort and
duration. It shows an expected reduction in duration ranging from 50% in projects of about 10
person-months duration to 10% in three person-year projects. This is consistent with the actual
10–60% schedule reduction for agile projects reported by (Reifer 2013, 4).
2.3 Schedule acceleration
Figure 6 illustrates a concept common to many software schedule estimation models—
that is, a prediction that deviation from the nominal schedule increases costs. Furthermore,
schedule reduction incurs exponentially increasing costs, and reduction below about 75% of the
nominal schedule is effectively “impossible” (McConnell 2006). The fundamental causes of this
growth are thought to be the additional coordination needed, the increased communication
overhead of additional staff, and the limits of executing tasks in parallel (Brooks 1995).
Figure 6. “Impossible zone” of schedule compression (McConnell 2006, 226)
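A minimal sketch of this rule of thumb, assuming a hard 75% threshold; the function name and boundary handling are illustrative, not part of McConnell's formulation:

    def compression_feasible(planned_months: float, nominal_months: float,
                             limit: float = 0.75) -> bool:
        # McConnell (2006): compression below ~75% of nominal is "impossible".
        return planned_months >= limit * nominal_months

    print(compression_feasible(9.0, 12.0))   # 75% of nominal: at the boundary
    print(compression_feasible(8.0, 12.0))   # ~67% of nominal: inside the zone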
Yet agile development projects appear able to achieve schedules within this impossible
zone. The AgileTek data shows a 31–78% reduction below the nominal schedule in all but the
largest effort, where the reduction is 22%. In its analysis of ISBSG data, (Reifer 2013) reports
evidence of 10–60% schedule reductions. While the difference between the cube-root and
square-root curves described above may explain this achievement, some distinguishing features
of the agile SDMs must enable them to perform on this lower curve. Table 1 outlines some of the
potentially distinguishing characteristics of agile SDMs (adapted from Nerur, Mahapatra, and
Mangalaraj 2005; Dyba and Dingsoyr 2008; Dingsøyr et al. 2012).
Table 1. Traditional vs. agile development

Fundamental assumption. Traditional: Systems are fully specifiable, predictable, and are built through meticulous and extensive planning. Agile: High-quality adaptive software is developed by small teams using the principles of continuous design improvement and testing based on rapid feedback and change.
Control. Traditional: Process centric. Agile: People centric.
Management style. Traditional: Command and control. Agile: Leadership and collaboration.
Knowledge management. Traditional: Explicit; comprehensive written documentation. Agile: Tacit, shared understanding; only strictly-necessary documentation.
Role assignment. Traditional: Individual—favors specialization. Agile: Self-organizing teams—encourages role interchangeability.
Communication. Traditional: Formal; high “ceremony”. Agile: Informal; low “ceremony”.
Customer's role. Traditional: Important; passive review of development process. Agile: Critical; active participation in development process.
Project cycle. Traditional: Preplanned; guided by tasks or activities; resistant to change. Agile: Emergent; guided by needed product features; responsive to change.
Development model. Traditional: Life-cycle model (waterfall, spiral or some variation). Agile: Incremental- or evolutionary-delivery model.
Desired organizational form/structure. Traditional: Mechanistic (bureaucratic with high formalization). Agile: Organic (flexible and participative, encouraging cooperative social action).
Quality control. Traditional: Heavy planning and strict control; late, heavy testing and integration. Agile: Continuous control of requirements, design and solutions; continuous, ongoing testing and integration.
Although these factors have been held to distinguish agile SDMs from traditional SDMs,
and hence to affect schedule, current research suggests other contributing factors as discussed in
the following section.
2.4 CORADMO schedule acceleration factors
This research stems from Research Task 34 (RT-34) of the System Engineering Research
Center (SERC). RT-34 studied ways that systems engineering might be expedited, particularly
within the aerospace/defense community. Through industry and government contacts, the study
sought out and interviewed over thirty organizations that had a history of successfully
compressing the development time of projects. The objective of the RT-34 research was the
qualitative identification of project and organizational attributes that foster rapid development.
In a series of onsite visits and in-depth follow-up interviews for RT-34, Lepore and
Colombi (2012) and Ford et al. (2012) identified three major factors that characterize these
organizations, in the areas of people, process, and product (also see (Zirger and Hartley 1996)),
and also noted that rapid organizations are risk-tolerant. In earlier work with some of the same
organizations, Lane et al. (2010) had also identified project-related factors as critical to rapid
development success. These five major factors became the basis for CORADMO, and the intent
of this research is to determine the quantitative effect of these factors on schedule performance in
software-intensive systems development.
A short discussion of the five major factors and their sub-factors follows:
Product factors describe the nature of the system to be developed across five sub-
factors: simplicity (Zirger and Hartley 1996), ability to reuse existing elements (Brooks 1995;
Daft and Lengel 1986; Zirger and Hartley 1996), ability to defer lower-priority requirements
(Beck 2005), degree that models (prototypes, simulations, etc.) can be substituted for written
documentation (Beck 2005), and maturity of the component technologies (Boehm et al. 1995).
Process factors characterize the development methodology using three sub-factors:
concurrency of artifact development (operational concept, requirements, code, etc.) (Zirger and
Hartley 1996); degree of process streamlining; and the coverage, integration, and maturity (CIM)
of tools used to support the development process (Baik and Boehm 2000). Use of concurrent vs.
sequential processes has been consistently observed to accelerate schedule in the use of such
methods as the spiral model (Boehm 1988), the Rational Unified Process (Kruchten 2004), and
agile methods (Beck 2005), although with the need of mechanisms to synchronize and stabilize
the concurrently-developed elements via buffered phases in the Microsoft approach (Cusumano
and Selby 1995), and over multiple levels of activities and artifacts (Strode et al. 2012), and
evidence-based milestones in the Incremental Commitment Spiral Model (Lane and Boehm
2007).
The process streamlining sub-factor drew on the Development Process Reengineering
and Streamlining factor in the original CORADMO (Boehm et al. 2000, 215), which primarily
addressed removal of bureaucratic and procedural delays, or the presence of enabling vs.
coercive bureaucracy (Adler and Borys 1996). The RT-34 study also identified other key
contributors such as Kaizen performer-identified streamlining (Deming 2000; Ono 1988), and
lean approaches such as Kanban (Anderson and Reinertsen 2010). In the COCOMO database
analysis (Boehm 1988), the effect of tool support was found to be due about 50% to tool
coverage and 25% each to toolset integration and maturity.
Project factors span four sub-factors describing execution of the development effort:
project staff size (Jeffery 1987); degree and nature of team collaboration (Jeffery 1987; Hannay
and Benestad 2010); CIM of the single-domain models, methods, processes, and tools (MMPTs)
employed (Baik and Boehm 2000; Baik, Boehm, and Steece 2002); and CIM of the multi-
domain MMPTs used, where required (Baik, Boehm, and Steece 2002).
People factors describe the project staff using four sub-factors: general knowledge,
skills, and agility (or, ability to thrive with the more concurrent nature of the agile/lean process)
(Jeffery 1987; Boehm and Turner 2004; Druskat and Pescosolido 2002); KSAs specific to the
primary problem domain; KSAs spanning multiple problem domains, where needed; and team
compatibility (Boehm et al. 2010; Hannay and Benestad 2010; Zirger and Hartley 1996).
In addition, the author used work by (Anderson and Reinertsen 2010; Arthur 1992;
Highsmith 2000; McConnell 1996; Murman 2002; United States Air Force 2011; Womack and
Jones 2003) on rapid development, rapid fielding, and schedule acceleration in developing the
rating scales for the product, process, project, and people factors.
Finally, the risk factor characterizes the project stakeholders’ willingness to accept rapid
but imperfect solutions (Boehm and Turner 2004; J. S. Ford, Colburn, and Morris 2012);
stakeholders may range from highly risk-averse, to strongly risk-accepting.
2.5 Agile development methodologies
Agile development methodologies (ADMs) are a family of software development
processes, philosophically related to “lean” manufacturing (Poppendieck and Poppendieck
2003), and typically characterized by incremental planning and design, time-boxed scheduling,
strong customer involvement, short development cycles, continuous integration, and a focus on
software over documentation (Beck 2005, xvi). Although many of these have been common
elements of software development for decades, Beck was perhaps the first to articulate them
concisely as a unified set of practices, and to name them “agile.” Sutherland and Schwaber
described the similar Scrum methodology somewhat earlier (1995).
Beck et al. clearly articulate the guiding principles of agile as the Agile Manifesto
(2001):
We are uncovering better ways of developing software by doing it and helping
others do it. Through this work we have come to value:
• Individuals and interactions over processes and tools
• Working software over comprehensive documentation
• Customer collaboration over contract negotiation
• Responding to change over following a plan
That is, while there is value in the items on the right, we value the items on the left
more.
Agile is juxtaposed to more traditional plan-driven development. In the latter, adherence
to a predefined schedule requires careful control of requirements changes. In contrast, agile
proponents recognize that customers frequently cannot articulate their needs precisely or fully,
and that development must easily accommodate change. To manage change, plan-driven methods
usually emphasize written documentation that customers must review and approve before
development begins. Agile development emphasizes the delivery of working software, which
customers can evaluate in context. That is, whereas documentation requires customers to
envision their needs in abstract form, working software allows them to evaluate a concrete
realization of their needs, and to redirect development as necessary.
This concept of changing customer needs and feedback is central to ADMs, and an
important element of the simulation model described in this work. In the model, the project
scope changes over time, as an interaction between the customer and development team. Hunt
expresses this relationship well (2006, 3):
Agile software development is an attempt to put the software being developed first and to
acknowledge that the user requirements change. It is agile because it can respond quickly to the
users’ changing needs.
Again in contrast to typical plan-driven development, agile projects deliver software
more frequently, often on a fixed, periodic interval of one to four weeks. This interval is
frequently called an iteration. This rapid delivery schedule supports the ability of agile
development to respond quickly to changes, by enabling frequent customer evaluation and
feedback. Again quoting Hunt (2006, 3):
Agile software development, in general, advocates frequent, and regular, software
delivery. In turn, users can respond frequently and quickly to these releases with feedback,
changing requirements, and general comments....
Agile projects deliver more frequently by executing their phases more concurrently, as
opposed to sequentially. Plan-driven software projects typically use a “waterfall” development
model, with sequential analysis, design, development, testing, and delivery phases (Royce 1970).
This approach is less responsive to changing requirements, or to changes in any upstream phase once it is
complete (although plan-driven projects do allow some backtracking). Each agile iteration, on
the other hand, encompasses a bit of each of these phases, like a mini-project. This approach
allows the requirements, design, and other aspects to be revisited each iteration, which in turn
enables the project to be more adaptive to change. It also enables features only conceptualized in
the design phase to be demonstrated in delivery after each iteration, which can be used to
validate those decisions frequently, and to rework them as necessary.
One goal of the waterfall process is to minimize rework. If upstream steps are done well,
downstream steps can build on that earlier work without fear of changes. That is, ideally the
requirements are perfect, enabling a perfect design, which can be coded perfectly, and so forth.
Practically the upstream steps cannot be perfect, of course, but considerable effort is spent in
plan-driven methods to make them as correct as possible. This approach has problems if the
requirements are not well known or change, or other defects exist, because the effects of such
defects can cause exponentially increasing rework in downstream phases. In contrast, changes
and defects are expected, and rework is an accepted attribute of agile processes. That is, agile
assumes the requirements (or design, or coding) will be wrong, but the incorrect parts can be
detected in each iteration, and the cascading effects of needed corrections minimized.¹ The
process of this expected rework is called refactoring in ADMs.
Some ADMs describe a unit of customer requirements as a user story. This is similar to a
use case, although it is intended as a placeholder for the requirement rather than a fully
elucidated description (Beck 2005, 44). User stories, and high-level requirements in general, are
elaborated prior to development into a set of concrete tasks on which developers work (2005,
47). Tasks are the unit of work for the software development simulation model described in this
work.
Agile and plan-driven development methods each have their “home grounds”—domains
and project characteristics where each method is better suited for use (Boehm and Turner 2004).
Agile methods work best in projects with smaller, more experienced teams (Cockburn 2002, 178–
180; Boehm and Turner 2004, 46–48). The ADM preference for lightweight documentation
makes the development team rely on “tacit knowledge” shared amongst team members (Leonard
and Sensiper 1998; Takeuchi and Nonaka 2000, 143; Cockburn 2002, 91), rather than on formal,
written documentation, which makes an ADM project less tolerant of personnel turnover. This
reliance on tacit knowledge may also limit the ability of ADMs to employ larger teams (Boehm
and Turner 2004, 84, but see Auer and Miller 2001). The lower emphasis of ADMs on formal
documentation may make them less suitable for developing critical or highly complex systems
(Boehm and Turner 2004, 26); at the least, more critical projects require a higher degree of
“ceremony”² (Cockburn 2002, 151–153).
¹ This assumption is a frequent criticism of ADMs, which further assume the “cost of change” accommodation is flat, rather than exponential as in a pure plan-driven process.
² Cockburn defines ceremony as “the amount of precision and tightness of tolerance in the methodology” (2002, 123).
The model described in this work incorporates some of these ADM attributes as
assumptions. Project staffing is assumed to be stable throughout, and the current model does not
address personnel turnover. Because maintaining and conveying tacit knowledge requires a level
of stability and compatibility amongst team members, though, the “team compatibility” model
parameter may partially address this assumption. Although the model can evaluate teams with a
mix of experience levels, limitations in the available calibration data required the author to
assume a team with highly experienced members. Similarly, the projects in the calibration data
all employed relatively small staffs, and so testing the effects of larger staffs was not possible.³
Finally, all of the projects in the calibration data were commercial systems with low criticality⁴,
although they reportedly employed sophisticated design and risk management strategies (Manzo
2004).
2.6 Taxonomy of ISD agility
The factors and sub-factors described above are not traditionally associated with agile
projects. Although they are complementary to the principles of the Agile Manifesto (Beck et al.
2001), they are not the core practices of agile SDMs such as Extreme Programming (XP) (Beck
2005), the management practices of Scrum (Schwaber 1997), or similar agile methods as
discussed in (Cockburn 2002). As noted above, however, the factors were developed through
behavioral analysis of projects exhibiting schedule acceleration, including both hardware and
software components (Lepore and Colombi 2012), and from theory.
³ The underlying system dynamics model by Abdel-Hamid and Madnick (1991) was itself calibrated against only small projects, and so its applicability to larger projects is questionable.
⁴ As described, probably at or below the “loss of discretionary monies” level, using Cockburn’s criticality criteria (2002, 152).
A direct correspondence between the observed general agility attributes in CORADMO,
and the specific practices in various agile SDMs, is not necessary. Through an analysis from
first principles of the contributing attributes of agility, (Conboy 2009) develops a set of general,
high-level principles of information system development (ISD) agility, which are independent of
the particular agile SDM that might be used, as follows:
• To be agile, an ISD method component⁵ must contribute to one or more of the following:
• creation of change
• proaction in advance of change
• reaction to change
• learning from change
• To be agile, an ISD method component must contribute to one or more of the following,
and must not detract from any:
• perceived economy
• perceived quality
• perceived simplicity
• To be agile, an ISD method component must be continually ready—that is, minimal time
and cost to prepare the component for use.
⁵ An ISD method component refers to any distinct part of an ISD method.
Table 2. CORADMO factor contributions to ISD agility attributes

(ISD agility attributes, in column order: change creation, proaction, reaction, and learning; perceived customer value in economy, quality, and simplicity; and continual readiness. An X marks a contribution.)

Simplicity: X X X X X X X X
Element Reuse: X X X X X X X X
Low-Priority Deferrals: X X X X
Models vs Documents: X X X X X X
Technology Maturity: X X X X X
Concurrent Operations: X X X X X
Process Streamlining: X X X X X X
Tool Support CIM: X X X X X X
Project Staff Size: X X X X
Collaboration Support: X X X X X
Single-domain MMPTs: X X X X X
Multi-domain MMPTs: X X X X X
Team Compatibility: X X X X X
Risk Acceptance: X X X X X X

Table 2 examines these agile attributes with respect to the CORADMO schedule
acceleration factors. These ISD agility attributes appear to fit the CORADMO factors well. The
author therefore asserts that the factors may themselves be considered as contributing to
agility in the projects that exhibit them, and thence to schedule acceleration, as this research will
demonstrate.
Chapter 3
Methodology
3.1 Design of experiment
Following the definitions of (Basili, Selby, and Hutchens 1986), the object of study (“unit
of analysis”) for this research is the project, analyzed from the perspective of the project
manager. The study examines multiple projects, considering the staff of each project as a single
team, thus classifying the scope of the experimental domain as a multi-project variation. The
purposes of the study are to characterize the duration of agile projects, and to evaluate the
contribution of various factors to the acceleration or expansion of the project duration with
respect to its baseline value.
In preparation for this research experiment, the author established a set of candidate
schedule factors through a literature search. These were then supplemented by factors derived
from a study of more than thirty organizations with the reported ability to execute projects in a
rapid manner. Evaluation of these factors and weights in a pilot study conducted against a small
set of commercial projects validated the concept. The next section describes this initial
evaluation process in further detail; the main body of this dissertation then reviews the final
approach chosen, a system dynamics model.
3.2 Static model approach
As an initial step toward calibration of CORADMO, the author decomposed the five
factors into the sub-factors described earlier, each rated along a six-item Likert rating scale
ranging from Very Low (VL) to Extra High (XH). This decomposition was based on earlier
research into macro-risk factors (Boehm, Ingold, and Madachy 2008), performance- and
personnel-competency-risk factors (Boehm et al. 2010), and on the agile factors as summarized
by (Parsons, Ryu, and Lal 2007), among many others. Once these sub-factors were defined, the
author and colleagues conducted a preliminary Wideband Delphi (Boehm 1981; Linstone and
Turoff 2002) to assign initial multiplier values to the rating scale, as noted in Table 3.
Table 3. Schedule accelerators and rating factors.
The initial CORADMO approach considered duration (D) as the product of the
multipliers associated with the rating factors (F_i) in Table 3 and a nominal duration equal to the
square root of the baseline effort in person-months (PM):

D = √PM × ∏ F_i

As seen in Table 3, each of the factors is rated along a six-value scale, where factors
rating lower in the scale tend to extend the schedule, and those rating higher to reduce it.
Ratings: Very Low (VL), Low (L), Nominal (N), High (H), Very High (VH), Extra High (XH).

PRODUCT FACTORS (multipliers: VL 1.09, L 1.05, N 1.00, H 0.96, VH 0.92, XH 0.87)
Simplicity: VL Extremely complex; L Highly complex; N Moderately complex; H Moderately simple; VH Highly simple; XH Extremely simple
Element Reuse: VL None (0%); L Minimal (15%); N Some (30%); H Moderate (50%); VH Considerable (70%); XH Extensive (90%)
Low-Priority Deferrals: VL Never; L Rarely; N Sometimes; H Often; VH Usually; XH Anytime
Models vs. Documents: VL None (0%); L Minimal (15%); N Some (30%); H Moderate (50%); VH Considerable (70%); XH Extensive (90%)
Key Technology Maturity: VL >0 TRL 1,2 or >1 TRL 3; L 1 TRL 3 or >1 TRL 4; N 1 TRL 4 or >2 TRL 5; H 1–2 TRL 5 or >2 TRL 6; VH 1–2 TRL 6; XH All > TRL 7

PROCESS FACTORS (multipliers: VL 1.09, L 1.05, N 1.00, H 0.96, VH 0.92, XH 0.87)
Concurrent Operational Concept, Requirements, Architecture, V&V: VL Highly sequential; L Mostly sequential; N 2 artifacts mostly concurrent; H 3 artifacts mostly concurrent; VH All artifacts mostly concurrent; XH Fully concurrent
Process Streamlining: VL Heavily bureaucratic; L Largely bureaucratic; N Conservative bureaucratic; H Moderate streamline; VH Mostly streamlined; XH Fully streamlined
General SE tool support CIM (Coverage, Integration, Maturity): VL Simple tools, weak integration; L Minimal CIM; N Some CIM; H Moderate CIM; VH Considerable CIM; XH Extensive CIM

PROJECT FACTORS (multipliers: VL 1.08, L 1.04, N 1.00, H 0.96, VH 0.93, XH 0.90)
Project size (peak # of personnel): VL Over 300; L Over 100; N Over 30; H Over 10; VH Over 3; XH ≤ 3
Collaboration support: VL Globally distributed, weak comm. and data sharing; L Nationally distributed, some sharing; N Regionally distributed, moderate sharing; H Metro-area distributed, good sharing; VH Simple campus, strong sharing; XH Largely collocated, very strong sharing
Single-domain MMPTs (Models, Methods, Processes, Tools): VL Simple MMPTs, weak integration; L Minimal CIM; N Some CIM; H Moderate CIM; VH Considerable CIM; XH Extensive CIM
Multi-domain MMPTs: VL Simple, weak integration; L Minimal CIM; N Some CIM or not needed; H Moderate CIM; VH Considerable CIM; XH Extensive CIM

PEOPLE FACTORS (multipliers: VL 1.13, L 1.06, N 1.00, H 0.94, VH 0.89, XH 0.84)
General SE KSAs (Knowledge, Skills, Agility): VL Weak KSAs; L Some KSAs; N Moderate KSAs; H Good KSAs; VH Strong KSAs; XH Very strong KSAs
Single-Domain KSAs: VL Weak; L Some; N Moderate; H Good; VH Strong; XH Very strong
Multi-Domain KSAs: VL Weak; L Some; N Moderate or not needed; H Good; VH Strong; XH Very strong
Team Compatibility: VL Very difficult interactions; L Some difficult interactions; N Basically cooperative interactions; H Largely cooperative; VH Highly cooperative; XH Seamless interactions

RISK ACCEPTANCE FACTOR (multipliers: VL 1.13, L 1.06, N 1.00, H 0.94, VH 0.89, XH 0.84)
Risk acceptance: VL Highly risk-averse; L Partly risk-averse; N Balanced risk aversion and acceptance; H Moderately risk-accepting; VH Considerably risk-accepting; XH Strongly risk-accepting
26
values of the schedule acceleration multipliers were chosen to span a relatively small range of
duration expansion and reduction, pending model calibration. Our evaluation of rapid
development projects in this research, however, suggests that people factors (J. S. Ford, Colburn,
and Morris 2012) and risk tolerance (Boehm and Turner 2004; Lepore and Colombi 2012)—
which tracks willingness to accept some product imperfections to improve schedule—have
greater effects than the other factors, which is reflected in the greater span of their associated
schedule multipliers.
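To make the static model concrete, the following minimal Python sketch computes an
estimated duration from baseline effort and a set of factor ratings. The function and variable
names are illustrative, not part of CORADMO itself; the top-level multipliers are the Wideband
Delphi values from Table 3, with sub-factor detail omitted.

```python
import math

RATINGS = ["VL", "L", "N", "H", "VH", "XH"]

# Illustrative top-level factor multipliers, taken from Table 3.
MULTIPLIERS = {
    "Product": [1.09, 1.05, 1.00, 0.96, 0.92, 0.87],
    "Process": [1.09, 1.05, 1.00, 0.96, 0.92, 0.87],
    "Project": [1.08, 1.04, 1.00, 0.96, 0.93, 0.90],
    "People":  [1.13, 1.06, 1.00, 0.94, 0.89, 0.84],
    "Risk":    [1.13, 1.06, 1.00, 0.94, 0.89, 0.84],
}

def estimated_duration(person_months, ratings):
    """D = sqrt(PM) * product of the rating-factor multipliers F_i."""
    f = 1.0
    for factor, rating in ratings.items():
        f *= MULTIPLIERS[factor][RATINGS.index(rating)]
    return math.sqrt(person_months) * f

# Example: a hypothetical 36 PM project rated VH/VH/XH/VH/N.
print(estimated_duration(36.0, {"Product": "VH", "Process": "VH",
                                "Project": "XH", "People": "VH",
                                "Risk": "N"}))  # ~3.9 months vs. 6.0 nominal
```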
The author then evaluated this initial CORADMO model, with its provisional rating
factor values, against a 12-project dataset of diverse but single-company projects executed by a
Midwest software development firm that used agile practices (AgileTek), and that supplemented
those practices with architectural processes distinguishing their approach from typical
BigDesignUpFront-avoiding agile projects (Manzo 2004). These architectural practices included
detailed business process analyses, Delphi estimates of software testing effort, risk-based
situation audits, and componentized architectures, among others. Use of systematic architectural
processes by the firm was considered to make these projects more comparable to the practices
applied in the more complex aerospace/defense projects from which the factors were derived.
Table 4. Initial calibration against commercial projects

| Application Type | Technology | Person Months | Duration (Months) | Duration/√PM | Product | Process | Project | People | Risk | Multiplier | Error % |
| Insurance agency system | HTML/VB | 34.94 | 3.82 | 0.65 | VH | VH | XH | VH | N | 0.68 | 5% |
| Scientific/engineering | C++ | 18.66 | 3.72 | 0.86 | L | VH | VH | VH | N | 0.8 | –7% |
| Compliance - expert | HTML/VB | 17.89 | 3.36 | 0.79 | VH | VH | XH | VH | N | 0.68 | –15% |
| Barter exchange | SQL/VB/HTML | 112.58 | 9.54 | 0.9 | VH | H | H | VH | N | 0.75 | –16% |
| Options exchange site | HTML/SQL | 13.94 | 2.67 | 0.72 | VH | VH | XH | VH | N | 0.68 | –5% |
| Commercial HMI | C++ | 205.27 | 13.81 | 0.96 | L | N | N | VH | N | 0.93 | –3% |
| Options exchange site | HTML | 42.41 | 4.48 | 0.69 | VH | VH | XH | VH | N | 0.68 | –1% |
| Time and billing | C++/VB | 26.87 | 4.8 | 0.93 | L | VH | VH | VH | N | 0.8 | –14% |
| Hybrid Web/client-server | VB/HTML | 70.93 | 8.62 | 1.02 | L | N | VH | VH | N | 0.87 | –15% |
| ASP | HTML/VB/SQL | 9.79 | 1.39 | 0.44 | VH | VH | XH | VH | N | 0.68 | 53% |
| On-line billing/tracking | VB/HTML | 17.2 | 2.7 | 0.65 | VH | VH | XH | VH | N | 0.68 | 4% |
| Palm email client | C/HTML | 4.53 | 1.45 | 0.68 | N | VH | VH | VH | N | 0.76 | 12% |

Table 4 presents the results of that initial evaluation on a set of rapid development
projects ranging in size from 10 KLOC (thousands of source lines of code) to 400 KLOC, of
varying complexity and technology. The author rated these projects against the Product, Process,
Project, People and Risk factors discussed earlier to compute the product of the schedule
acceleration factors, and to compare that product against the Duration/√PM value calculated
from the reported project duration and effort.

Factor ratings were selected based upon the reported characteristics of each project, and
of the firm as a whole. The projects that employed C++ technologies received Low (L) Product
Simplicity ratings as compared with the other HTML/Visual Basic projects and the described
product complexity; the “Hybrid Web/Client Server” product was rated Low (L) due to its high
degree of innovation and requirements churn. For the Process factor, most projects used a highly
concurrent development process, resulting in a Very High (VH) rating; some projects reported
using more complex mixes of technology that suggest less concurrency, and therefore received
lower ratings. Reported variation in project staff sizes is the primary reason for the varying
Project ratings. The staff was described as being very capable and senior-level, and so the People
factor was rated Very High (VH) across the board. Similarly, the firm documented a consistent
and rigorous development approach, balancing good engineering against development speed, and
hence all projects were rated at Nominal (N) Risk acceptance.
The product of the selected rating factors is shown in the Multiplier column of Table 4,
and should be compared against the value in the Duration/√PM column, calculated from actual
duration and effort. The close correspondence of these values, reflected in the Error column,
suggests that the acceleration-deceleration factors are appropriate, although additional work
remains: the calculated factors suggest greater schedule acceleration than was actually observed.
The “ASP” project is an outlier that the author cannot explain from the data reported; it had a
team of 7 people produce a 16,875 SLOC product in just 1.39 months.
The combination of the original CORADMO model and the additional insights on
product, process, project, people, and risk factors provided by the SERC RT-34 analyses enabled
the revised CORADMO model to explain the variations in schedule acceleration among the
projects in Table 4. This is encouraging, but it is unknown to what extent the model will
accurately describe projects outside this limited set. At a minimum, though, this static model
might be useful as a checklist for assessing an organization’s status and prospects with respect to
schedule acceleration.
This initial static model enabled the author to develop the CORADMO factors, to
estimate their ranges, and to perform preliminary validation of the multiplier factors. Without
data from additional agile projects, however, it is impossible to verify if the model remains
viable outside the domain of projects studied. After many months of unsuccessful attempts to
gather additional project data, it became clear that another approach was required to validate the
model, as discussed in the next section.
3.3 Dynamic model approach
The CORADMO approach discussed in the previous section did not investigate how the
multiplier factors actually affected schedule. Rather, they were assumed to be the net result on
schedule of unknown underlying processes that make up the agile development methods
(ADMs). This suggested another approach to validating the CORADMO model, by investigating
its effects on those underlying processes, using a dynamic model of them.
The literature search into factors affecting schedule uncovered a number of such dynamic
models, which studied other aspects of ADMs, such as the effect of pair programming or
refactoring (A. S. White 2014; Cao, Ramesh, and Abdel-Hamid 2010; Lyneis, Cooper, and Els
2001; Ferreira et al. 2009; He Zhang, Kitchenham, and Pfahl 2008). Cao presents the most
complete and best described of these in her dissertation, which develops “a tool to examine the
impact of agile practices and management policies on critical project variables including project
scope, schedule, and cost” (2005). Cao’s work provided the basis that allowed the author to
examine the influences of the CORADMO factors on the underlying ADM processes.
A principal advantage of this approach is that it disaggregates the direct effect of the
CORADMO factors on schedule. Rather than assume the CORADMO factors directly modify
schedule or productivity, the implicit result of the static model, the factors instead could be
applied to modeled elements of the ADM, and their effect observed. Although this approach does
not support the extension of the CORADMO factors to projects outside the data set, it does begin
to provide the basis of a theory for how these factors influence agile projects. As suggested in
Eisenhardt (1989) and Eisenhardt and Graebner (2007), and coupled with the qualitative and
quantitative results a dynamic model provides, the 12 available projects allow the author to
deduce and examine these influences.
Details of Cao’s model and the CORADMO extensions made to support this research are
outlined in the Model section of this dissertation.
3.4 System dynamics modeling
This research uses a system dynamics model to simulate the behavior of software
development projects. The modeling and simulation technique chosen to analyze a problem
depends upon the type of system to be analyzed, how well its underlying details are understood,
and the desired level of abstraction. Figure 7 illustrates how different problem domains map to
levels of abstraction (from Borshchev and Filippov 2004). Analysis of the schedule effects of
CORADMO factors is closely related to the “Manpower & Personnel” and “R&D Project
Management” problem domains, which suggests a higher level of abstraction is appropriate.
Figure 8 suggests the modeling approaches appropriate to different levels of abstraction (also
from Borshchev and Filippov 2004). An appropriate analysis for CORADMO models the macro-
level schedule of a software development project—its aggregate behavior—and not the micro-
level details of how and when individual development tasks are addressed. This suggests that
system dynamics is an appropriate modeling technique for this research.
Figure 7. Problem domain vs. level of abstraction
Figure 8. Modeling approach vs. level of abstraction
System dynamics is a continuous simulation technique, which models concepts not as
discrete entities but rather as continuous quantities. Madachy summarizes the major elements of
system dynamics models as follows (2007, 8):
Quantities are expressed as levels, rates, and information links representing
feedback loops. Levels represent real-world accumulations and serve as the state
variables describing a system at any point in time (e.g., the amount of software
developed, number of defects, number of personnel on the team, etc.). Rates are
the flows over time that affect the levels.
In the system dynamics model used in this research, the measured quantities represent
software development tasks, which “flow” through different stages of the development process.
Task flow occurs at a rate—that is, tasks per unit of time—and accumulates at these stages into
levels (or stocks) by integrating the task flow over time.[6] Information links (or connectors)
connect model elements together to communicate values, such as flow rates or levels. Table 5
illustrates the symbology used for the various components of a system dynamics model, as
employed in this research.
Table 5. System dynamics modeling elements

| Model Element | Description of use |
| Level (stock) | Accumulates a flow over time, acting like a water tank. |
| Rate | Acts like a valve, connecting two levels and regulating the flow per unit time between them. |
| Auxiliary (variable) | Holds or computes a value. |
| Link (connector) | Communicates information in a directed path between two model entities. |
| Source or sink | A special-purpose level that can provide to or receive from a rate element an infinite amount of flow. |
[6] The timestep in a system dynamics model is fixed, and is typically set at some fraction of the
time unit (here, 0.25 days) to reduce rounding errors in the integration calculation.
A system dynamics model is governed by feedback loops formed amongst its elements.
These feedback loops modulate over time the rates of flow, and thus govern the rates at which
levels increase or decline. In the model used in this research, these levels represent software
development tasks as they transition through various production stages. The model measures the
rates and levels over time, which allows one to measure the effects of different strategies
represented by the CORADMO model factors on the software development process and
schedule.
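As a minimal illustration of these mechanics, the following Python sketch integrates a
single level over fixed timesteps, with the flow rate recomputed at each step. The names and
parameters are hypothetical, not the model's actual equation set.

```python
DT = 0.25  # fixed timestep, a fraction of the time unit (here, days)

def simulate(scheduled_tasks, staff, productivity, horizon_days):
    """Integrate a single 'developed tasks' level over fixed timesteps.

    The flow rate (tasks per day) is recomputed each step, forming a
    trivial feedback loop: development stops once no tasks remain.
    """
    developed = 0.0                  # level (stock)
    t = 0.0
    while t < horizon_days:
        remaining = scheduled_tasks - developed
        rate = min(staff * productivity, remaining / DT)  # flow (rate)
        developed += rate * DT       # Euler integration of the level
        t += DT
    return developed
```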
3.5 Hypotheses
• H1: The durations of agile projects are proportional to the square root of their effort.
• H2: Each of the rating scales is positively correlated with schedule acceleration or
deceleration.
• H3: There exist domains where the predicted schedule for a given effort is within 30%
of the actual schedule at least 70% of the time—that is, PRED(0.30) ≥ 70% (a minimal
sketch of this metric follows the list).
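PRED is the standard prediction-accuracy metric used in H3; a minimal sketch of its
computation, with illustrative names, follows.

```python
def pred(actual_schedules, predicted_schedules, threshold=0.30):
    """PRED(threshold): the fraction of projects whose schedule prediction
    falls within the given relative error of the actual schedule."""
    pairs = list(zip(actual_schedules, predicted_schedules))
    within = sum(1 for actual, predicted in pairs
                 if abs(predicted - actual) / actual <= threshold)
    return within / len(pairs)

# H3 asserts that domains exist where pred(actuals, predictions) >= 0.70.
```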
Chapter 4
Model
4.1 Model overview
CORADMO is a system dynamics model based on the agile software development model
described in the (Cao 2005) dissertation, which in turn is adapted from elements of the
Integrated Project Dynamics Model (IPDM) in (Abdel-Hamid and Madnick 1991). Figure 9
presents the eight modules from Cao’s original model,[7] along with the CORADMO module this
research adds. Modules are simply an organizational technique used to divide the model into
more manageable parts. Linkages amongst the modules represent model parameter data
references, not the flow of software artifacts through the development process. That the overall
model thus appears to be a fully-connected graph is due to the explicit module structure this
author imposed in his revision; the original model was not modularized, and these intra-model
links were implicit.

[7] Cao’s dissertation describes only those modules with a “D” suffix in this diagram, although it
refers to the other modules. This author adapted the modules with an “AH” suffix from the work
of Abdel-Hamid and Madnick, based on those references.
Figure 9. CORADMO model overview
The principal unit of analysis in the model is the task, which represents an indivisible unit
of software development work. Following the convention of the IPDM, the model arbitrarily
defines a task to be 60 lines of source code. In the context of agile software development, a set of
tasks comprises a user story as described by (Beck 2005) and (Cockburn 2002). The Scrum agile
development process described in (Schwaber 1997) similarly describes work in terms of backlog
items, which also can be considered to be composed of tasks. The model employs user stories
only as a convenience, as stories are frequently used to describe the size and productivity of an
agile project. Stories are mapped to tasks for analysis in the model workflow.
In terms of overall model workflow, some number of tasks are initially scheduled for
production. The scheduled tasks can increase in number as new work is discovered, and may
decrease if the measured productivity indicates the development team cannot complete the work
within the allotted schedule. This dynamic is consistent with the application of team velocity and
“yesterday’s weather” (measured productivity) for planning in agile development processes
(Beck and Fowler 2001, 33). As development proceeds, the model also introduces changes that
add to the workload, and accumulates technical debt that reduces quality. Addressing change and
improving quality requires staff attention, which reduces their productivity for task development
and delivery.
These basic development processes and feedback loops are part of the original Cao
model.[8] This author’s contribution is to add CORADMO factors that moderate the existing
model parameters and rates, using multiplicative factors derived from COCOMO II (Boehm et
al. 2000; Baik, Boehm, and Steece 2002), COSYSMO (Valerdi 2005), and other models. With
limited exception, the CORADMO factors introduce higher-order rather than direct effects. That
is, rather than affect productivity, which would directly affect schedule, the new factors influence
change rates, requirements volatility, quality processes, and other components, which have an
indirect effect on productivity and schedule. These moderating factors are further discussed in
the CORADMO module section, and in the context of the original Cao model module
descriptions.

[8] As part of this work, the author also identified and corrected various errors and omissions in
Cao’s original model, as she described it.
A principal insight of this research is the integration of effort multipliers from other
estimation models, such as COCOMO II and COSYSMO, with Cao’s existing process model.
Effort multipliers from those models are paired with CORADMO model factors, with the
assumption that the function of each factor in its original model might cause a real-world effect,
as modeled by Cao. That is, whereas in COCOMO and COSYSMO the effort multiplier factors
are derived through regression analysis of empirical data, this research posits that those factors
have an indirect effect on the software development process through the parameters of Cao’s
model, whose ultimate effect may be observed on the development schedule. As part of the
validation of CORADMO using Cao’s model, these factors were applied to the Cao model to
moderate the behavior of the software development process.
The following sub-sections describe each of the modules in further detail. The remainder
of this section provides a brief overview of module functions to help orient the reader:
• CORADMO. This module defines the influence factors identified in the CORADMO
research, and quantifies their rating scale into multiplicative factors, which are applied to
elements of the original Cao model.
• Planning. This module is the heart of the model, and describes software task scheduling
and development.
• Refactoring. This module models the accrual and reduction of technical debt
(Cunningham 1992) and its effect on design quality.
• Change. This module describes the accumulation of adaptive and corrective changes, and
the manpower necessary to address them.
• Customer. This module provides a simple model of the trust relationship between the
customer and project staff, and its effect on feedback.
• QA. This module describes the accrual and resolution of defects, and affects the rate of
identifying changes.
• Productivity. This module provides a bookkeeping function to track and update team
productivity across iteration timeboxes.
• Estimation. This module establishes the initial conditions of the model, by estimating the
project productivity and schedule.
• Manpower. This module defines the staff available to the project, and allocates that staff
amongst project functions.
Equations for the CORADMO system dynamics model may be found in Appendix A.
4.2 CORADMO module
4.2.1 Overview
The CORADMO module in Figure 10 is a placeholder for the CORADMO sub-factors,
which are in turn applied to the Cao model. Calculations in this module transform the discrete
six-level Likert scale rating for each CORADMO sub-factor, which ranges from Very Low (VL)
to Extra High (XH), into a multiplicative factor, with the Nominal rating value being 1.0. While a
continuous rating scale might have been employed, the author chose to retain discrete ratings as
in COCOMO. This may avoid a false sense of precision, since the choice of rating is somewhat
subjective, although it is guided by descriptions of the attributes of each rating level.
Figure 10. CORADMO module.
The CORADMO model posits five major factors: Project, Process, Product, People and
Risk, which are discussed in detail earlier in this work. The full CORADMO model decomposes
these major factors into sixteen sub-factors, plus Risk which has no sub-factors. Of these, the
system dynamics model retains only eleven, plus Risk. Although the additional sub-factors may
have value in future elaboration of the model, the Cao system dynamics model (2005), used to
validate CORADMO, lacks the sophistication to simulate some distinctions.
The system dynamics CORADMO validation model omits the Concurrency sub-factor of
the Process factor. This was unavoidable, as the Cao model assumes a fully-concurrent agile
development process. A hybrid model with a mix of concurrent and serial development as
discussed in (Boehm and Turner 2004) would require this parameter to analyze its effect on
schedule, along with a more sophisticated model. The project data needed to validate such a
model is unavailable, and so that work will be left to follow-on research. The current model does
not evaluate the effects of concurrency, as full concurrency is implicit in the model.
The CORADMO validation model also omits the distinction between the use of single-
domain models, methods, processes, and tools (MMPTs) and multi-domain MMPTs. Again, the
Cao model does not include multi-domain development, nor is validation data available for
projects that performed multi-domain work. The model might be expanded to have multiple,
semi-independent task workflow streams to test these sub-factors, and such a model might also
allow the testing of the concurrency parameter. This expansion work will be left to future
research, though, and the current model evaluates only single-domain MMPTs.
Similarly, in the People factor, the CORADMO validation model does not distinguish
between single-domain knowledge, skills, and agility (KSAs) and multi-domain KSAs, for the
same reasons as cited above for MMPTs. The current model therefore evaluates only single-
domain KSAs.
Table 6 lists the simulation input variables for the CORADMO module.

Table 6. CORADMO module variables.

| Variable | Value |
| Simplicity index | (per project) |
| Reuse index | (per project) |
| Deferrals index | (per project) |
| Models index | (per project) |
| TRL index | (per project) |
| Streamlining index | (per project) |
| Tool index | (per project) |
| Collaboration index | (per project) |
| MMPT index | (per project) |
| KSA index | (per project) |
| Compatibility index | (per project) |
| Risk index | (per project) |
Each of the following per-module sections first discusses the underlying structure and
behavior of the Cao module, and then describes how the CORADMO factors are applied to the
existing components of that module, and their effect. Thus, in addition to positing and validating
the multiplier values of the CORADMO model, another contribution of this work is to show the
application of CORADMO factors to the Cao process model. That is, this research demonstrates
that mapping the CORADMO factors to effort multiplier factors previously derived in
COCOMO and COSYSMO, and then applying these multipliers to elements of the process
model, produces project schedules consistent with CORADMO predictions and with the limited
empirical data available.
This success in mapping the COCOMO and COSYSMO models suggests that their effort
multipliers, despite being derived in those models through regression analysis, have some (at
least, simulated) real-world analog. These multipliers are often applied to the Cao model “as-
is”—that is, used to modify effort (or quality) magnitudes directly. The consistent effect on the
final project schedules of applying these multipliers bolsters the correctness of their underlying
values in their respective models. Future researchers might build upon this work to find the
effects of other COCOMO-family multiplier factors on this model, to study other effects by
expanding this modified Cao process model, or to explore the effects observed in this research in
further detail.
4.2.2 Application of CORADMO factors
CORADMO investigates the effects of product, process, project, people and risk-
acceptance factors on agile software development schedules. It makes a fundamental assumption
from COPSEMO that the duration of smaller software development projects (16 person-months
and smaller) is proportional to the square root of the effort expended (or predicted) on the project
(Boehm et al. 2000, 202). Evidence suggests that projects using agile development methods
(ADMs) with a staff of ten or fewer might also achieve schedules that follow this square-root
rule (Reifer 2014). Experience reports from (Manzo 2004) corroborate this square-root schedule
rule for projects with staff up to 15 persons and effort levels up to 205 person-months.
Research into the performance of over 30 rapid-response organizations identified sub-
factors within these five areas that characterize and distinguish these organizations from their
lower-performing peers (J. S. Ford, Colburn, and Morris 2012; Lepore and Colombi 2012).
Although none of these organizations used specific ADMs, many would be considered as lean,
and as espousing the principles underlying agile (Beck et al. 2001; Poppendieck and
Poppendieck 2003). The author therefore hypothesizes that these factors and sub-factors might
also distinguish higher- and lower-performing projects using ADMs. This research determines
how these factors might be applied, and what value ranges might characterize them, and then
validates this model against data provided by Manzo.
The original form of the CORADMO model used static effort-multipliers, as used in
COCOMO II, its related models (Boehm et al. 2000), and its predecessors (Boehm 1981). This
static model proved difficult to validate due to lack of concrete performance data from agile
projects, and concerns that the factors might interact in unpredictable ways. These difficulties
encouraged the development of a dynamic model. After surveying a variety of ADM simulation
models and approaches (Cao, Ramesh, and Abdel-Hamid 2010; A. S. White 2014; Kuppuswami,
Vivekanandan, and Rodrigues 2003; X. Zhang et al. 2009; H. Zhang, Kitchenham, and Pfahl
2010), the author chose the system dynamics model described in (Cao 2005) as the starting point
for an expanded model to validate CORADMO.
The resulting expansion of Cao’s model forms the basis for a new dynamic CORADMO
model. This approach encodes the CORADMO factors to affect processes that characterize
ADMs, including development, scheduling, change management, refactoring, project scoping,
and so forth. In this dynamic model, the CORADMO factors no longer affect schedule directly,
as they did in the static model, but rather indirectly influence processes that affect schedule in
non-linear, interdependent ways, often as elements of feedback loops.
Table 7 summarizes where each CORADMO factor is applied to the Cao process model.
Further details of the application of these factors to the process model may be found in the
sections following, which discuss each of the Cao process model modules.
Table 7. Mapping of CORADMO factors to Cao modules

| CORADMO Factors | Cao Modules |
| PRODUCT | |
| Simplicity | Planning |
| Element reuse | Planning |
| Low-priority deferrals | Planning |
| Models vs. Documents | Customer |
| Key technology maturity | Planning |
| PROCESS | |
| Concurrency [9] | (Omitted) |
| Process streamlining | Planning, Change, Customer, Manpower |
| General tool-support | Refactoring, Change, QA, Estimation |
| PROJECT | |
| Project staff size [10] | Productivity, Estimation, Manpower |
| Collaboration support | Change, Customer |
| Models, methods, processes and tools [11] | Refactoring, QA |
| PEOPLE | |
| Knowledge, skills, and agility (KSAs) [12] | Refactoring, Change, QA, Estimation |
| Team compatibility | (Omitted) |
| RISK | Planning, Customer |

[9] Concurrency is not modeled because the underlying Cao model assumes the process is fully
concurrent, and the sample data also used only fully concurrent processes.
[10] Staff size is used directly in the Cao model, not as a multiplier.
[11] Single- and multiple-domain MMPTs are not distinguished.
[12] General, single-, and multi-domain KSAs are not separately modeled, due to lack of
supporting empirical data, and absence of the distinction in the Cao process model.
The following sub-sections describe each of the CORADMO factors and sub-factors,
how their value ranges were chosen, and where they are applied to extend the Cao agile system
dynamics model.
4.2.2.1 Product factors
The Product factors comprise the Simplicity, Reuse, Key technology maturity (TRL),
Deferrals, and Models vs. Documents sub-factors, and have a large influence on schedule. The
first three sub-factors are the
“blunt force instruments” of the CORADMO model, because they directly affect the rate at
which tasks are developed in the Planning module. Their ratings must be considered carefully to
avoid swamping the other factors. For example, going from a Nominal (N) to High (H) rating of
Simplicity directly reduces effort, and therefore schedule, by 4% (subject to the moderating
effects of other variables).
The following sub-sections discuss these sub-factors in further detail.
4.2.2.1.1 Simplicity sub-factor
The author initially attempted calibration of the system dynamics validation model using
the COCOMO II Product Complexity (CPLX) factor (Boehm et al. 2000, 42). The productivity
range of CPLX, from 1.74 to 0.73, proved too wide, as it caused extreme schedule effects that
were not supported by the available data. As such, the author employed the more
modest multiplier range determined in the initial Wideband Delphi calibration of the
CORADMO static model, as shown in Table 8.
Table 8. Simplicity sub-factor rating values
VL L N H VH XH
1.09 1.05 1.00 0.96 0.92 0.87
4.2.2.1.2 Element reuse sub-factor
Boehm, et al. uses a non-linear estimation model in COCOMO II to determine the effect
of code reuse (2000, 22–25) on the effective number of software source lines of code (SLOC) in
a project, which the author adapted for use in the system dynamics validation model. Since the
reuse factor of the CORADMO model specifies only the degree of code reuse, and does not
characterize the difficulty of incorporating reused code into the product, several simplifying
assumptions were made.
First, the author assumes that reused modules will be employed without modifying their
design (design percent modified, DM) or code (percent code modified, CM), and that 100% of
the effort will be applied to integrating the adapted software (percent of integration required,
IM), so DM = 0, CM = 0, and IM = 100. Calculating the adaptation adjustment factor (AAF)
with these assumptions results in a value of 30:

$$AAF = 0.4(DM) + 0.3(CM) + 0.3(IM) = 0.3 \times 100 = 30$$

Since AAF ≤ 50, this in turn allows the following form of the adaptation adjustment modifier
(AAM) equation:

$$AAM = \frac{AA + AAF \times [1 + 0.02 \times SU \times UNFM]}{100}$$
Here, further simplifying assumptions were made to set the software understanding (SU),
assessment and assimilation increment (AA), and programmer unfamiliarity (UNFM)
components of the COCOMO II reuse model to their nominal values (SU=30, AA=4,
UNFM=0.4). This results in an AAM value of 0.412. The equivalent source lines of code
(ESLOC) from the adapted source code (SLOC) is calculated as (where the number of
automatically translated lines of code, AT, is assumed zero):

$$ESLOC = SLOC \times \left(1 - \frac{AT}{100}\right) \times AAM$$
The CORADMO rating factor is not multiplicative, but rather specifies the percent of
reuse across the project. The original model assumed a range of reuse from 0–90%, and use of
such an extreme range in the validation model results in ridiculous schedules, as at the extreme
only 10% of the estimated work is required. As such, the range of reuse values was modified to
those shown in Table 9.
Table 9. Element reuse sub-factor rating values
VL L N H VH XH
0.00 0.05 0.10 0.20 0.30 0.50
These are relative percentages of reuse, rather than absolute SLOC reused, and so the
resulting ESLOC is therefore the relative amount of code required, with respect to the estimated
SLOC in the project. In the system dynamics model, each task is assumed to comprise a fixed
number of SLOC. Therefore, the calculated ESLOC value effectively modifies (reduces) the
number of tasks that must be delivered to complete the project.
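Assuming, as a simplification, that a reuse fraction r means r of the estimated SLOC is
reused (weighted by AAM) while the remainder is developed new, the reuse computation can be
sketched as follows; the function names are illustrative, and the defaults are the simplifying
assumptions stated above.

```python
def aam(dm=0.0, cm=0.0, im=100.0, su=30.0, aa=4.0, unfm=0.4):
    """COCOMO II adaptation adjustment modifier (form for AAF <= 50),
    with the simplifying assumptions used here as defaults."""
    aaf = 0.4 * dm + 0.3 * cm + 0.3 * im   # adaptation adjustment factor
    return (aa + aaf * (1 + 0.02 * su * unfm)) / 100

def relative_esloc(reuse_fraction, at=0.0):
    """Equivalent SLOC as a fraction of estimated SLOC: the reused portion
    costs AAM (integration only), while the rest costs full development."""
    reused = reuse_fraction * (1 - at / 100) * aam()
    return (1 - reuse_fraction) + reused

# Example: an XH reuse rating (50% reuse, Table 9) requires ~71% of the
# nominal work, which the model applies as a productivity increase.
print(relative_esloc(0.50))  # ~0.71
```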
Directly modifying the number of tasks completed, however, makes it difficult to
compare the simulation runs of the system dynamics model with before-and-after values—that
is, to compare the nominal schedule with the accelerated (or extended) schedule. Therefore, the
model employs the equivalent action of causing reuse to modify (increase) the productivity of the
team in developing tasks. This modification occurs by increasing the task production rate in the
Planning module.
4.2.2.1.3 Low-priority deferrals sub-factor
The low-priority deferrals sub-factor rates how often requirements might be deferred.
Simple deferral only reprioritizes when a given lower-priority requirement is worked on, by
putting higher-priority requirements before it in the work queue; it does not reduce the total
number of requirements to be completed. Requirements are not treated by themselves in the
system dynamics validation model used here, and are embodied as tasks. Since a system
dynamics model cannot distinguish individual tasks, however, a simple approach of reordering
tasks cannot be modeled. The author therefore assumes that requirements are deferred only if
tasks implementing those requirements remain incomplete when the project ends.
Deferrals are therefore measured as the difference between the Scheduled tasks and the
Delivered tasks in the Planning module. A project may be considered “complete” only if the
48
fraction of deferred tasks is less than the factor rating value. The sub-factor values presented in
the Table 10 ratings are therefore not used multiplicatively, but rather as this threshold criterion
to determine project completion.
Table 10. Low-priority deferrals sub-factor rating values
VL L N H VH XH
0.00 0.05 0.10 0.15 0.20 0.25
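A sketch of this threshold test follows; the names are illustrative, and the criterion is
read here as at-or-below the threshold, so that a Very Low ("Never") rating requires zero
deferrals.

```python
DEFERRAL_THRESHOLD = {"VL": 0.00, "L": 0.05, "N": 0.10,
                      "H": 0.15, "VH": 0.20, "XH": 0.25}

def run_is_complete(scheduled_tasks, delivered_tasks, rating):
    """A run counts as complete only if the fraction of deferred
    (scheduled but undelivered) tasks is at or below the Table 10
    threshold for the project's deferrals rating."""
    deferred = (scheduled_tasks - delivered_tasks) / scheduled_tasks
    return deferred <= DEFERRAL_THRESHOLD[rating]
```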
4.2.2.1.4 Models vs. documents sub-factor
The second principle of the Agile Manifesto states that adherents prefer “working
software over comprehensive documentation” (Beck et al. 2001). For complex projects,
especially in early phases, working software can also be interpreted to include executable models
(Highsmith 2002, 160; Shinde 2008). Working software (and models) are often more
understandable by both users and developers than written documentation when interpreting
requirements (Boehm 2000), and hence may be expected to reduce development effort.
Although not directly addressing this aspect of preferring models over documents, the
COCOMO II documentation match to life-cycle needs (DOCU) factor (Boehm et al. 2000, 42)
seems a reasonable analog, as it estimates the relative effort necessary to produce varying levels
of documentation. Table 11 adapts the DOCU rating scale, inverting it so that increased
documentation leads to increased schedule.
Table 11. Models vs documents sub-factor rating values
VL L N H VH XH
1.23 1.10 1.00 0.90 0.81 0.81
4.2.2.1.5 Key technology maturity sub-factor
Low technology maturity increases the risks of software development. An agile project
may require more “spikes”—efforts to evaluate and mitigate an unknown—if the technology is
poorly understood or less mature (Leffingwell 2010). Spikes may not necessarily lead to
observable customer value, as their goal is to reduce risk and help team understanding. As such,
technical immaturity may increase effort without increasing deliverable products.
Although the author could find no literature directly relating technology maturity (as
opposed to process maturity) to increased software development effort or schedule, Valerdi’s
COSYSMO model estimates the effect of many factors on systems engineering efforts (Valerdi
2005). This research therefore employs the COSYSMO technology risk driver (TRD) factor
(Valerdi and Kohl 2004) as a proxy for this effect on software projects. Table 12 adapts the TRD
rating scale, inverting it so that lower technology maturity increases schedule.
Table 12. Technology maturity sub-factor rating values
VL L N H VH XH
1.30 1.15 1.00 0.82 0.68 0.68
4.2.2.2 Process factors
The Process factors, which comprise the Concurrency, Streamlining and Tools sub-
factors, have a subtle influence on schedule. They moderate existing variables throughout the
model, and affect schedule only through their effects on other processes.
The following sub-sections discuss these sub-factors in further detail.
4.2.2.2.1 Concurrent operations
In contrast to the highly sequential operations of the traditional waterfall process (Royce
1970), agile software development seeks to overlap the design, development, testing, and
integration activities, or at least to execute them in short, iterative cycles that approximate
concurrency (Beck 2005). In the presence of high rates of change, or when users poorly
understand their own needs, highly sequential processes can lead to developing the wrong
product (Boehm and Turner 2004). More concurrent (or rapidly iterating) processes seek to avoid
this problem by producing intermediate results quickly so that users can evaluate them and
instigate change if necessary, before the product diverges too far from the need.
The CORADMO model includes a sub-factor for evaluating the concurrency of
engineering, which was a significant enabler of the rapid-response organizations studied in (J.
Ford, Colburn, and Morris 2012) and (Lepore and Colombi 2012). For this version of
CORADMO, however, we assume the project is already executing using agile processes, making
the concurrency of operations a foregone conclusion, and so this factor is omitted. Further, the
sample projects whose data was available for calibration of CORADMO, and the system
dynamics model used for validation, employ fully concurrent processes, making evaluation of
this parameter impossible. Future research may re-introduce the Concurrency factor and evaluate
it, when data become available.
4.2.2.2.2 Process streamlining sub-factor
Adler and Borys characterize bureaucracies on a spectrum from “enabling” to
“coercive” (1996), and earlier work by Caiden discusses how “over-bureaucratized
organizations” become dysfunctional (1985). Caiden’s J-curve suggests an optimal degree of
bureaucracy that leads to increased productivity, beyond which point productivity is stifled. The
first principle of the Agile Manifesto seeks to emphasize “individuals and interactions over
processes and tools,” and its fourth principle to prefer “responding to change over following a
plan” (Beck et al. 2001).
In this research, we assume that the organization lies at or to the right of the optimal point of
bureaucratization. Although overly-bureaucratic organizations and processes are known to
decrease efficiency, the exact effect has not been measured. Lacking research that provides
specific values, this work adapts the development process re-engineering (DPRS) factor from the
CORADMO 1999 model (Boehm et al. 2000, 221). The DPRS factor has different effects,
depending on the software development phase. This work adapts the construction multiplier
values, however, given that agile development methods concentrate on development, rather than
design. Table 13 provides the Streamlining factor multiplier, which describes processes along a
scale that varies from Heavily bureaucratic (VL) to Fully streamlined (XH).
Table 13. Process streamlining sub-factor rating values
VL L N H VH XH
1.15 1.06 1.00 0.98 0.95 0.95
4.2.2.2.3 General tool-support CIM sub-factor
While the first principle of the Agile Manifesto emphasizes “individuals and interactions
over processes and tools” (Beck et al. 2001), the right development tools can nonetheless help
software developers work more quickly and efficiently. Integrated development environments
(IDEs) like Eclipse and Microsoft Visual Studio can seamlessly integrate software design,
development, testing, bug reporting, and configuration management, while other tools like wikis
can assist in documentation, team communication, and customer review. Software developers
have come to expect these capabilities.
The CORADMO model considers different aspects of tool support: coverage, integration,
and maturity. For example, a given set of tools may provide great capabilities (provide coverage
and integration), but be bug-ridden or unstable (lack maturity). This sub-factor is intended to
holistically consider these three aspects of tool use on the project. Its evaluation is somewhat
subjective, and depends on the expectations and capabilities of the developers using the tools.
Lacking quantified results on the effects of these tools, this research adapts the use of software
tools (TOOL) effort multiplier factor from COCOMO II as a proxy for the effect (Boehm et al.
2000, 49), as presented in Table 14.
Table 14. Tool support sub-factor rating values
VL L N H VH XH
1.17 1.09 1.00 0.90 0.78 0.78
4.2.2.3 Project factors
The Project factors describe the organization of the project, and the facilities that
support it. It includes sub-factors for Size, Collaboration Support, and Single- and Multi-Domain
Models, Methods, Processes, and Tools (MMPTs). These sub-factors moderate existing variables
throughout the model, and affect schedule only through their effects on other processes.
The following sub-sections discuss these sub-factors in further detail.
4.2.2.3.1 Project staff size sub-factor
Project staff size is not used in the system dynamics model as an explicit multiplier
factor. The Abdel-Hamid and Madnick Integrated Project Dynamics Model, which is the basis
for the Cao model on which CORADMO builds, instead reduces productivity with increasing
staff size (1991) to account for the overhead of project communication. The IPDM was itself
calibrated against smaller projects, and addresses staff size only up to 30 persons, and so this
feature must be re-examined before sample data for larger projects is modeled.
4.2.2.3.2 Collaboration support sub-factor
Agile projects, especially those using the Extreme Programming (XP) paradigm, were
initially considered viable only with geographically collocated teams (Teasley et al. 2002; Beck
and Fowler 2001). The prevalence and success of geographically distributed teams has
necessarily modified this assumption, but the types of communication and coordination
mechanisms employed can greatly affect team performance (Pikkarainen et al. 2008; McChesney
and Gallagher 2004; Te’eni 2001; Crowston and Kammerer 1998; Hartwick and Barki 2001).
Although several studies have examined the role of communication on performance, none
quantify the effect. The author has therefore adapted the multisite development (SITE) factor of
COCOMO II as a proxy (Boehm et al. 2000, 49), using the values in Table 15.
Table 15. Collaboration support sub-factor rating values
VL L N H VH XH
1.22 1.09 1.00 0.93 0.86 0.80
4.2.2.3.3 Models, methods, processes and tools sub-factor
Models, methods, processes and tools (MMPTs) (or, more commonly, just methods,
processes and tools [MPTs]) is a term more typically associated with systems engineering than
with software engineering. [M]MPTs is a general term that encompasses the techniques used for
planning, engineering, managing and maintaining products or projects (Lee and Muthig 2006). In
the context of CORADMO, the MMPT sub-factors describe how well these techniques provide
coverage (capabilities), integration (the ability to interoperate with each other), and maturity
(stability), abbreviated CIM, in support of the project. Good MMPT CIM might comprise a set of
interacting models, and tools to support those models, that are well integrated with the processes
and methods used on a project, such as those used in the “Concept Design Center” described by
(Lepore and Colombi 2012).
Some MMPTs support a single technical domain, while others may span multiple
technical domains that must be integrated across a project. Multi-domain MMPTs would be more
common in a large project spanning both hardware and software components, and better MMPTs
in that application provide CIM across those multiple domains. Single-domain MMPTs are more
common in a software-only project. MMPTs themselves are somewhat amorphous, in that they
may comprise some combination of software tools, development methods, organizational
processes, and simulation models, among other possibilities. Baik, et al. discusses the respective
roles of coverage, integration, and maturity in MMPTs (2002).
Lacking literature support for the quantitative effect of MMPT support on software
project schedule, the author instead adapts the use of software tools (TOOL) effort multiplier
factor from COCOMO II (Boehm et al. 2000, 49), as presented in Table 16.
Table 16. MMPTs sub-factor rating values
VL L N H VH XH
1.17 1.09 1.00 0.90 0.78 0.78
4.2.2.4 People factors
The People factors describe the capabilities of the human members of the development
team, and their interactions within the team, and with other stakeholders. The sub-factors include
the knowledge, skills, and agility (KSA) of team members, and the compatibility of team
interactions.
The following sub-sections discuss these sub-factors in further detail.
4.2.2.4.1 General knowledge, skills, and agilities sub-factor
Boehm et al. identify critical success factors related to knowledge, skills and ability for
systems engineering personnel competency (2010), many of which are shared with software
engineering and development. Lepore and Colombi discuss the need to “acquire people with
the right education, experience, and personality” (2012, 47), and to find personnel with good
“knowledge, skills, and agility” (KSAs) in general, in specific domains, and across multiple
domains (2012, 77). The latter term, agility, reflects the need for development team members to
be comfortable with adapting to change and with working in a potentially chaotic environment
(Boehm and Turner 2004, 46–57).
Following the approach of Lepore and Colombi, the CORADMO static model
distinguishes general, single-domain, and multi-domain KSAs. It is difficult to see how these
might be separately applied within the Cao model, however, and validation is further
complicated by the absence of these distinctions in the available empirical data from AgileTek.
Hence, for validating the static model against the dynamic one in this work, the author collapses
these distinctions into a single-dimension KSA concept.
COCOMO II identifies two knowledge-related aspects of teams, the analyst capability
(ACAP) and programmer capability (PCAP) factors (Boehm et al. 2000, 47). Agile projects,
however, do not distinguish between the design phase (analysts) and development phase
(programmers), as the rapid iteration causes these phases effectively to overlap and intermingle.
The capabilities needed by analysts in COCOMO II include “analysis and design capability,
efficiency and thoroughness, and the ability to communicate and cooperate” (Boehm et al. 2000,
47). The cross-functional work of agile team members seems more consistent with the
description of analysts than of programmers, and so the author chose to adapt the ACAP
factor, as shown in Table 17.
Table 17. General KSA sub-factor rating values
VL L N H VH XH
1.42 1.19 1.00 0.85 0.71 0.71
4.2.2.4.2 Team compatibility sub-factor
The first principle of the Agile Manifesto emphasizes the importance of “individuals and
interactions” (Beck et al. 2001). Agile teams are by nature tightly knit, and depend upon
compatibility both within the development team, and between the team and its customer (Beck
2005, 38; Boehm and Turner 2004, 46). This research adapts the collaboration (CLAB) factor
scale from the original CORADMO 1999 model described in (Boehm et al. 2000, 221–226),
which itself is based on a fuzzy average of the multisite development (SITE) effort multiplier,
and the team cohesion (TEAM) and personnel experience (PREX) cost-drivers. The
Construction phase values were deemed most relevant to the emphasis in agile development of
“constructing software,” and Table 18 presents the multipliers used.
Table 18. Team compatibility sub-factor rating values
VL L N H VH XH
1.10 1.05 1.00 0.98 0.95 0.93
4.2.2.5 Risk acceptance factor
Lepore and Colombi observe that “the willingness to accept some types of risk buys
down the cost” of projects (Lepore and Colombi 2012, 33), and perhaps more importantly, that
“designing out all risk takes forever” (Lepore and Colombi 2012, 28). Absent literature support
for the quantitative effects of risk-acceptance, though, and without COCOMO or other model
factors relating to this topic, the author chose to retain the multiplier factors identified in the
original Wideband Delphi estimation of CORADMO factors, as presented in Table 19.
Table 19. Risk acceptance sub-factor rating values
VL L N H VH XH
1.13 1.06 1.00 0.94 0.89 0.84
4.3 Planning module
The Planning module (Figure 11) is the heart of the CORADMO model, where
development tasks are scheduled, developed, and delivered. Tasks are the unit of work in the
model, and following the convention in (Cao 2005) and (Abdel-Hamid and Madnick 1991), are
considered to be 60 source lines of code (SLOC) in length.[13] Without belaboring the details of the
simulation process, the following discusses the general concepts in the Planning module, and the
important decisions therein.
Scheduled tasks is a system dynamics “stock,” initialized with the Estimated number of
tasks, as calculated at simulation start in the Estimation module. To model requirements
volatility, the underestimation of requirements, and the incorporation of adaptive changes in
requirements, the number of scheduled tasks may increase during the simulation through the
Changes to the estimated job size inflow. Correspondingly, to simulate the adaptively-changing
scope of agile projects, the scheduled tasks may decrease in number through the Rate adjusting
scope outflow. The scheduled tasks therefore represent the dynamically-varying number of tasks
to complete in the development process.
Developed tasks and Delivered tasks are also stocks, whose initial value is zero at the
start of simulation, before development begins. The rate at which tasks are developed, the Task
production rate, is determined by the available staff, and their current productivity, as moderated
by CORADMO factors. Tasks are delivered at the Acceptance rate, nominally the same as the
production rate, but reduced by the rate that corrective changes are required. Developed tasks
may also be reduced by customer rejection at the Rejection rate. At any point in time, the
delivered tasks therefore represent the number of tasks that have been completed.
[13] This length is arbitrary, and not considered in the model except to map the lines of code in
sample data to tasks in the model.

Figure 11. Planning module
The model compares the scheduled and delivered tasks to determine the state of
completion. Tasks remaining is the difference of these values, and represents the work remaining
to be done. This difference is used with the Cumulative productivity (commonly known in agile
development as the team velocity (Beck 2005; Schwaber 1997)), the available workforce (WF
level), and the remaining schedule time (Time remaining) to determine the effort and time
required to complete the project. When this effort (Indicated tasks) exceeds the available
schedule, the Adjusted scope indicates the number of tasks that can be completed, a value that
varies based on the willingness of the customer to reduce the project scope.
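A simplified sketch of this scope-adjustment logic follows; the variable names are
illustrative, and the actual model expresses the same idea through its rate equations.

```python
def adjust_scope(tasks_remaining, velocity, workforce, time_remaining,
                 willingness=0.5):
    """If the achievable work falls short of the tasks remaining, reduce
    scope toward what the team can deliver, scaled by the customer's
    willingness to adjust (0 = never cut scope, 1 = cut to fit)."""
    achievable = velocity * workforce * time_remaining
    if achievable >= tasks_remaining:
        return tasks_remaining            # schedule suffices; no change
    shortfall = tasks_remaining - achievable
    return tasks_remaining - willingness * shortfall
```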
The Scheduled completion date stock is initialized at simulation start with the Estimated
schedule calculated in the Estimation module. The inability to complete tasks within the
currently available schedule, as discussed above, may cause the completion date to increase up to
the Maximum tolerable completion date. (The maximum permitted percent of schedule overrun
is a model input parameter.) The rate at which the schedule is adjusted depends on the iteration
duration, since planning in agile projects is usually done at each iteration (Beck 2005), possibly
moderated by the CORADMO factor representing the degree of process streamlining.
Some CORADMO factors have a direct influence on task productivity, while others have
more subtle, higher-order effects. The Effort multiplier input to the Task production rate is a
direct effect, and comprises the CORADMO Product sub-factors of Simplicity, degree of Reuse,
and Technology readiness (TRL). These in turn are directly traceable to the complexity (CPLX)
and reuse (RUSE) effort multiplier factors of COCOMO II (Boehm et al. 2000, 42), and the
technology risk driver (TRD) factor of COSYSMO (Valerdi 2005; Valerdi and Kohl 2004).
In COCOMO and COSYSMO the effort multipliers affect the effective source lines of
code (ESLOC) and engineering effort, respectively. In the CORADMO system dynamics model,
this could be represented by adjusting the number of tasks to complete. Such an approach,
however, would have made it difficult to track that the number of tasks in the calibration projects
were actually completed. Instead, the model uses these factors to adjust the amount of staff effort
needed to complete the tasks, without adjusting their number. This has the same effect on the
number of tasks produced per unit of effort, while preserving the comparability of modeled tasks
to actuals.
For these CORADMO factors that directly affect effort, a lower rating corresponds to
greater effort to produce tasks, and a higher rating to lesser effort. These factors are applied to
the Task production rate to reflect this effect. Because of this direct application to increase or
decrease effort, the factors have a profound effect on the number of tasks delivered, and therefore
schedule. That is, if the multiplier increases effort by fifteen percent, it produces almost a fifteen
percent increase in schedule. This suggests that the effort-related factors must be selected
conservatively while modeling, lest their effects overwhelm the other CORADMO factors.
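This direct effect can be captured in a one-line sketch; the names are illustrative, and the
model's actual rate equation includes further terms.

```python
def task_production_rate(staff, productivity, effort_multiplier):
    """The Product effort multipliers scale the effort needed per task,
    so the production rate varies inversely with them: a multiplier of
    1.15 (15% more effort) slows production, and hence stretches the
    schedule, by roughly 15%."""
    return staff * productivity / effort_multiplier
```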
Other CORADMO factors used in the Planning module have a more subtle effect.
The CORADMO factor for Streamlining affects the feedback loop time constants for the delays
in incorporating scope changes, and the delay in making schedule adjustments. A higher rating of
the Streamlining factor leads to more responsive adjustment of schedule and scope. Similarly, the
Risk acceptance CORADMO factor affects the Willingness to adjust scope. Here higher ratings
imply more willingness to adapt the project scope to the capabilities of the team, a characteristic
of the acceptance of less capability in trade for improved schedule.
The Planning module contains the simulation input variables listed in Table 20.
Table 20. Planning module variables

| Variable | Value |
| Assim delay | 30 |
| Hiring delay | 90 |
| Max schedule overrun percent | 0.0 |
| Max tolerable WF | (per project) |
Further details of the selection of the CORADMO factor multiplier values are provided in
the CORADMO module section.
4.4 Refactoring module
The Refactoring module shown in Figure 12 models the accumulation and reduction of
technical debt, and its effect on design quality (Cunningham 1992; Kruchten, Nord, and Ozkaya
2012). Technical debt is modeled here as a system dynamics stock accumulating person-days of
deferred effort, the Cumulative gap of refactoring. As technical debt increases, it reduces the
Quality of evolutionary design, which in other parts of the model increases the incidence of
defects and changes, and lowers productivity (Garcia et al. 2009). Refactoring applies effort to
reduce the technical debt, and thus to improve the architectural quality of design (Beck 2005;
Cockburn 2002).
Figure 12. Refactoring module
In Cao’s model, technical debt accumulates because the effort needed for perfecting the
design (Needed fractional MP for refactoring) exceeds the effort actually applied to perfecting
the design (Actual fractional MP for refactoring).[14] Technical debt is directly modeled as this
difference of effort over time, which accumulates as person-days of required refactoring work.
The model further assumes that deferred design effort takes longer to rework than it would have
had it been done immediately, and therefore increases the technical debt by 50% over the number
of person-days actually deferred. Refactoring work, when being performed, distracts the
available staff from producing tasks and handling change, and thus reduces productivity.
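A sketch of one timestep of this debt stock, under the assumptions just described, follows;
the names are illustrative.

```python
def step_technical_debt(debt, needed_mp, actual_mp, dt=0.25):
    """One timestep of the 'Cumulative gap of refactoring' stock.

    Deferred design effort (needed minus applied, in person-days per day)
    accrues at 1.5x, reflecting the assumption that deferred work costs
    50% more to rework later; surplus refactoring effort pays debt down.
    """
    gap = needed_mp - actual_mp
    if gap > 0:
        debt += 1.5 * gap * dt               # accrue, with the 50% penalty
    else:
        debt = max(0.0, debt + gap * dt)     # refactoring reduces the debt
    return debt
```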
Product, project, people, and process factors affect the effort needed, and effort available,
to keep technical debt in check. The model postulates that complex projects require more such
effort than simple ones, and larger projects more than smaller. It assumes that more senior people
design code that requires less refactoring, as do projects with better tools or processes.
Correspondingly, it postulates that schedule pressure reduces the availability of staff to improve
design, and that projects have varying degrees of aversion to or desire for refactoring. These
factors exist in the base model that Cao provides, and are supplemented by additional
CORADMO factors in this author’s extensions.
The CORADMO factors applied to the Refactoring module come from the project,
people, and process areas. To avoid cluttering the model, the KSA, Tools, and MMPT sub-factors
are multiplied to create a temporary CORADMO multiplier, which is then applied to the Needed
fractional MP for refactoring and Gap adjust rate calculations. That is, these factors are
postulated to affect both the need for refactoring, and the efficiency of performing refactoring
when needed. Specifically, more skilled staff, tools with better coverage, integration and maturity
(CIM) (Baik and Boehm 2000; Baik, Boehm, and Steece 2002), and improved models, methods, processes, and tools (MMPT) (Damian and Moitra 2006) are expected to improve design quality, and hence reduce the need for refactoring. Similarly, improved capabilities in these sub-factors should reduce the effort required for refactoring, once it is undertaken.
[14] In her model, Cao calls this needed vs. actual “refactoring” effort, but this is a misnomer. It is actually the needed and actual effort to keep technical debt under control. Refactoring is the effort to reduce technical debt already accumulated.
Table 21 lists the simulation input variables used in the Refactoring module.
Table 21. Refactoring module variables
Variable Value
Degree of complexity 0.5
Lack of refactoring 0.5
Planned degree of PP 0.5
Planned unit tests 0.5
Quality objective 0.8
Ratio of pros 1.0
Further details of the selection of the CORADMO factor multiplier values are provided in
the CORADMO module section.
4.5 Change module
The Change module shown in Figure 13 simulates the adaptive and corrective change
traffic on the project, independent of the baseline task development in the Planning module and
perfective change rework in the Refactoring module (Cao 2005). Change requests are triggered
by requirements volatility (adaptive change) and detected errors (corrective changes), and may
be addressed immediately or after some delay. Cao’s model assumes that delayed changes cause
additional work above the baseline effort needed to address the change. Once identified, changes
may be rejected, scheduled as new discrete tasks in the Planning module, or addressed here as
level-of-effort work.
Figure 13. Change module.
This last alternative is the principal effect of this module on the model as a whole. Once
changes are accepted for work (added to the Changes verified stock), Cao assumes that each
change takes a fixed amount of effort to address (MP needed per change), as modeled in the
Rate of fix change flow. The staff used for performing changes draws from those used to perform
development work (Daily MP for DesignDev) and refactoring (Daily MP for refactoring), and
change work is of higher priority than either activity. That is, the manpower used to address
change requests reduces that available for development work and refactoring, and thereby
significantly affects the delivered productivity of the project.
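The priority ordering can be expressed as a simple daily allocation step; the function below is an illustrative sketch, not code from the model, but it captures the preemption described above:

    # Sketch: change traffic preempts manpower before refactoring,
    # and development receives whatever remains.
    def allocate_manpower(total_mp, change_demand, refactor_demand):
        """Allocate person-days per day, giving change work first claim."""
        change_mp = min(change_demand, total_mp)
        remaining = total_mp - change_mp
        refactor_mp = min(refactor_demand, remaining)
        dev_mp = remaining - refactor_mp       # development gets the rest
        return change_mp, refactor_mp, dev_mp

    print(allocate_manpower(10.0, 6.0, 2.0))   # -> (6.0, 2.0, 2.0)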
The CORADMO factors make several additions to this module, moderating existing
model variables. From the Project factor, the Collaboration sub-factor moderates the effort
necessary to address changes, with the reasoning that better or worse collaboration on the project
affects the research and coordination effort necessary to enact change. Similarly, the KSA and
Tool sub-factors from the People and Process factors also affect change effort, due to differing
skill levels that improve or hinder execution, and tools that facilitate or impede detecting the
effects of changes. Finally, the Streamlining sub-factor of Process positively or negatively affects
the delay components of incorporating change requests, and performing the fixes themselves. All
the CORADMO factors have a higher-order effect on overall project schedule, by influencing the
rate at which changes are enacted, and the required manpower.
The Change module contains the simulation input variables listed in Table 22.
Table 22. Change module variables
Variable Value
Desired change delay (iteration length)
External change rate (iteration length)
Impact of delay 0.5
Percent rescheduled as new task 0.5
Person-days per change 5.2
Further details of the selection of the CORADMO factor multiplier values are provided in
the CORADMO module section.
4.6 Customer module
The Customer module in Figure 14 provides a simple model of the relationship between
the customer and the project. The customer has perceptions of the quality and speed of the work,
abilities that help or hinder interactions with the project, and trust that influences the willingness
of the customer to cooperate with the project. Some of these factors are inherent in the customer,
and represented by input variables in the original Cao model; others are influenced by project
performance, and themselves influence performance in feedback loops.
Figure 14. Customer module
With respect to trust, the author hypothesizes an inverse relationship between the
CORADMO sub-factors for Risk and Models vs documents. Customers with below-nominal risk
tolerance are thought to prefer documents over models, and to assign higher trust to documents
and lower to models. The Customer trust variable is thus modified to reflect this relationship.
The converse is not true: customers with above-nominal risk acceptance have no such
preference, and the customer trust variable is not modified.
The CORADMO Collaboration sub-factor affects two customer/team relationship
variables in this module: Customer involvement and Feedback quality. Above-nominal
collaboration improves both involvement and feedback, and, conversely, a below-nominal value
reduces both. The Streamlining sub-factor also affects customer involvement, in that above-
nominal (less bureaucratic) processes facilitate increased customer involvement, and below-
nominal inhibit it. As in other modules, Streamlining also influences delay factors to improve or
worsen the delay; here it affects the Delay on change request variable.
The Customer module contains the simulation input variables listed in Table 23.
Table 23. Customer module variables
Variable Value
CRACK index 0.5
Involvement 0.707
Nominal errors committed per task 5
Participation 0.707
Previous experience 0.5
Team reputation 0.5
Further details of the selection of the CORADMO factor multiplier values are provided in
the CORADMO module section.
4.7 QA module
The QA (Quality Assurance) module in Figure 15 provides a simple model of defect
insertion and detection. While (Cao 2005) briefly references a QA function, it neither describes the function nor provides it in her model; the model exposes only two of its outputs, Errors last iteration and Errors per task last iteration, referenced in the Change module. Given this information, and knowledge of the (Abdel-Hamid and Madnick 1991) Integrated Project Dynamics Model (IPDM) from which Cao’s model derives, the author recreated the module. While it may not represent the full intentions of Cao, the IPDM QA model is well-known and widely referenced, if simplistic. The IPDM QA module [15] assumes defects are added at a rate proportional to the code size, and removed at a rate proportional to the QA manpower available.
Figure 15. QA module
The author used the IPDM QA module as a starting point, and removed the parts inapplicable to agile development, leaving only the defect insertion and detection capabilities. [16] The author then added new model components tying defect generation and detection to the task production rate and iteration cycle of Cao’s agile model. The model detects defects continuously, as is typical in system dynamics, and then accumulates them on a per-iteration basis in the Errors last iteration stock, from which the Errors per task last iteration output is calculated.
[15] The IPDM model, as realized in modern system dynamics tools, is usually represented as a “sector”---that is, a simple grouping within the larger model. The author re-implemented it as a module, a partitioning mechanism with the potential for lower intra-model coupling.
[16] The original IPDM model is represented in the diagram in black; this author’s additions are in blue and red.
The CORADMO factors applied to the QA module come from the Project, People, and
Process areas. As in the Refactoring module, to avoid cluttering the model, the KSA, Tool, and
MMPT sub-factors are multiplied to create a temporary CORADMO multiplier, which is then
applied to other calculations. Similar to their use in Refactoring, these factors are postulated to
affect the efficiency both of detecting defects, and of removing them. Specifically, more skilled
staff, tools with better coverage, integration and maturity (CIM) (Baik, Boehm, and Steece
2002), and improved models, methods, processes, and tools (MMPT) (Damian and Moitra 2006)
improve the ability to detect defects, and the same sub-factors also increase the defect correction
rate (Madachy 2007).
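Under the IPDM assumptions described above, the defect flows can be sketched as follows; the rate constants are illustrative, not calibrated values, and the per-iteration roll-up mirrors the Errors last iteration stock:

    # Sketch: defects injected in proportion to code produced, detected
    # in proportion to QA manpower, and rolled up once per iteration.
    DEFECTS_PER_TASK = 0.8       # illustrative injection rate
    DETECT_PER_QA_DAY = 0.5      # defects found per QA person-day
    ITERATION = 20               # days per iteration

    undetected, errors_last_iter = 0.0, 0.0
    for day in range(1, 61):
        tasks_today, qa_mp = 5.0, 4.0
        undetected += DEFECTS_PER_TASK * tasks_today
        found = min(DETECT_PER_QA_DAY * qa_mp, undetected)
        undetected -= found
        errors_last_iter += found
        if day % ITERATION == 0:                      # close out the iteration
            errors_per_task = errors_last_iter / (tasks_today * ITERATION)
            errors_last_iter = 0.0

    print(round(errors_per_task, 2))   # errors per task, last iteration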
Table 24 lists the simulation input variables used in the QA module.
Table 24. QA module variables
Variable Value
Average QA delay 10
DSI per task 60
Further details of the selection of the CORADMO factor multiplier values are provided in
the CORADMO module section.
4.8 Productivity module
The Productivity module shown in Figure 16 is primarily a bookkeeping function that
calculates per-iteration productivity (Tasks developed in last iteration, Tasks delivered in last
iteration, and Productivity estimation) and Cumulative productivity, and makes them available to
other parts of the model. (Cao 2005) describes this module also as providing a Software
development productivity output, but omits this calculation from her model. As with the QA
module, however, knowledge of the (Abdel-Hamid and Madnick 1991) Integrated Project
Dynamics Model (IPDM) allowed this author to supplement Cao’s model with the IPDM
software productivity calculations. [17]
Figure 16. Productivity module
[17] Again, as in the QA module, the IPDM components are represented in black. The red and blue components are a mixture of Cao’s and this author’s contributions.
This module describes two types of productivity measures: one the author considers
intrinsic productivity, and the other empirical productivity. Intrinsic productivity is in the sense
of (Van Heeringen and Dijkwel 1987), the “level of productivity irrespective of influences of
specific variables but typical of the type of person,” although considered here at the aggregate
team level. This is represented as a model input, the Nominal potential productivity, which is
moderated by several other variables to compute the module output, Software development
productivity. Empirical productivity is effectively team velocity (Beck 2005), the aggregate
measure of tasks produced per person-day.
The Nominal potential productivity (intrinsic) is a fundamental input to the model. [18] For
calibration to a given project, the intrinsic productivity is adjusted so that all tasks the project
actually completed are finished in the simulation on the nominal schedule, assumed to equal the
square-root of the effort expended. This productivity value is then left constant as CORADMO
model factors are adjusted, and the effect on the simulated schedule observed. That is, a basic
assumption is that the CORADMO factors do not alter this intrinsic productivity, but rather
change how effectively that productivity can be applied to the project.
The Productivity module contains the simulation input variables listed in Table 25.
Table 25. Productivity module variables
Variable Value
Nominal potential prod: pros (per project)
Nominal potential prod: rookies -
Ratio of pros to rookies 1.0
No CORADMO factors directly influence productivity in this module.
[18] The IPDM distinguishes the productivity for both “pros” and “rookies,” but as a simplifying assumption this research ignores the distinction and assumes uniform productivity.
4.9 Estimation module
The Estimation module in Figure 17 establishes the initial conditions of the model, by
estimating the productivity and schedule, based on the estimated size of the project and an initial
estimate of its productivity. As in the (Abdel-Hamid and Madnick 1991) Integrated Project
Dynamics Model (IPDM), this model assumes a fixed, inherent underestimation of the project
size, represented by the Underestimate factor input variable. The IPDM sets this value at 0.67,
which the author has maintained. That is, projects are assumed to be underestimated in size by
33%.
Figure 17. Estimation module
The estimated size of an agile project is typically measured in user stories, one of the
model inputs. Since the unit of work in the Cao model is the task, some transformation must
occur between these values. (Cao 2005) assumes that each story comprises exactly four tasks,
because her calibration projects did not provide a count of the source lines of code (SLOC).
Since the sample data collected by this author includes both SLOC and story counts, the number
of tasks is made variable, and calculated as SLOC / (60 SLOC/task), where the constant (60
SLOC/task) is assumed in both Cao’s model and the IPDM.
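For example, the Argent Trading project comprises 140,134 SLOC and 103 stories (Table 28), giving 140,134 / 60 ≈ 2,336 tasks and 2,336 / 103 ≈ 22.7 tasks per story, the values shown in the table.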
CORADMO factors make some adjustments to these assumptions. The Underestimate
factor is assumed to be moderated by the skill level of the team (KSA), and the use of tools
(Tools). Specifically, above-nominal values of KSA and Tools reduce the degree of
underestimation, making the estimated number of tasks agree more closely with the actual
number of tasks. Similarly, it is assumed that the use of tools affects Requirements volatility, with
above-nominal values reducing volatility.
The Estimation module contains the simulation input variables listed in Table 26.
Table 26. Estimation module variables
Variable Value
Avg number of tasks per story (per project)
Avg story velocity (per project)
Initial design percent 0.1
Iteration duration (per project)
(Raw) requirements volatility 0.32
Real number of stories (per project)
Underestimate factor 0.67
Further details of the selection of the CORADMO factor multiplier values are provided in
the CORADMO module section.
4.10 Manpower module
The Manpower module in Figure 18, like the QA module, is referenced in (Cao 2005) but
not provided, nor described in detail. The Integrated Project Dynamics Model (IPDM) of
(Abdel-Hamid and Madnick 1991) includes an extensive manpower model, in which the staffing
level dynamically varies based on staff morale and burnout, recruitment and training of new
staff, and the willingness to change the workforce level, among other variables. The CORADMO
adaptation of this model assumes a stable, level workforce, and so avoids these complexities. A
fixed staff is available, which the project allocates amongst design and development, quality
assurance, refactoring, and change management.
Figure 18. Manpower module
A fundamental assumption in the IPDM is the fraction of time staff are actually available
to do work (Load factor), which it sets at 0.6. The value is less than unity because it assumes
staff availability will be effectively reduced by meetings, non-productive work, and other
distractions. While the author has retained this assumption, he also applies a CORADMO factor
to the Load Factor based on the Streamlining sub-factor. Specifically, the author hypothesizes
that a reduction in bureaucracy will increase staff availability, and vice versa.
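For example, with the nominal Load factor of 0.6, a team of ten contributes only six effective person-days of project work per day; an above-nominal Streamlining rating raises this fraction, and a below-nominal rating lowers it.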
Table 27 lists the simulation input variables contained in the Manpower module.
Table 27. Manpower module variables
Variable Value
Actual fraction of MP for QA 0.1
Load factor 0.6
Total work force (per project)
Further details of the selection of the CORADMO factor multiplier values are provided in
the CORADMO module section.
Chapter 5
Results
This section discusses the results obtained in running the CORADMO system dynamics
model. It presents a discussion of how the model was calibrated, and how tests are run. It
provides a summary of the primary results comparing the modeled schedule to the actual
schedule, and some qualitative secondary results arising from observing different projects'
behavior in the model.
5.1 Dynamic model calibration
Before testing hypotheses on the effect of CORADMO factors on project schedule, it is
first necessary to calibrate the system dynamics model. The base system dynamics model
presented in (Cao 2005) includes 36 variables, listed in the respective module sub-sections of the
Model section of this paper. Most of these variables assume the fixed values noted in those
sections. The following model variables must be set for each project in the sample data:
• Estimation: Avg number of tasks per story, Avg story velocity, Iteration duration, Real number of stories
• Planning: Max tolerable WF
• Productivity: Nominal potential prod: pros
• Manpower: Total work force
Values for the Avg story velocity, Iteration duration, Real number of stories, Max tolerable WF and Total work force variables [19] are taken directly from the project being studied. The Integrated Project Dynamics Model (IPDM) (Abdel-Hamid and Madnick 1991) arbitrarily defines the task size (in lines-of-code) at 60, a decision that was carried forward into the Cao model. The Avg number of tasks per story is calculated from the source lines of code (SLOC) and number of stories in each project, as follows:

Avg number of tasks per story = (SLOC / 60) / Real number of stories

[19] The IPDM distinguishes between the current workforce level and the size to which the workforce might grow. The workforce is held constant in this research, and so these variables are set to the same value.
As discussed earlier, a fundamental assumption of this research is that the nominal development schedule TDEV (in months) for agile software development projects that expend Effort person-months is:

TDEV = √Effort
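For example, the Argent Trading project expended 112.58 person-months, giving TDEV = √112.58 ≈ 10.61 months, or about 202 days at the model’s 19 working days per month (Table 28).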
Figure 19. Nominal schedule derived through iteration (typical)
For each project, the system dynamics model must first be run to determine the intrinsic productivity [20] required to achieve this nominal schedule, with the actual number of tasks delivered, and with all CORADMO sub-factors set to their nominal values. [21] Figure 19 illustrates the results of one such run, for the Argent Trading project, which resulted in a nominal completion date of 202 days, and 2,336 tasks delivered. Deriving the required productivity is an iterative, manual process of (re)estimating the productivity, running the model, observing the resulting schedule, and repeating the process until the modeled schedule matches the nominal schedule, as derived from the effort expended. Table 28 shows the results of determining the intrinsic productivity (Nominal prod’ty) for all of the sample projects, to achieve the computed nominal schedule (Nominal (months)). [22]
[20] See the discussion of “intrinsic productivity” in the Productivity module section.
[21] Productivity is measured in tasks/person-day.
[22] The unit of time in the model is days, which is derived by multiplying months by 19.
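The manual search just described can be summarized in code; run_model here stands in for a complete simulation run, and the proportional update step is an illustrative simplification of the author’s by-hand procedure:

    # Sketch: adjust intrinsic productivity until the simulated
    # schedule matches the nominal schedule (tolerance in days).
    def calibrate(run_model, nominal_days, tolerance=1.0):
        """run_model(productivity) -> simulated completion time in days."""
        prod = 1.0                        # initial guess, tasks/person-day
        for _ in range(50):
            simulated = run_model(prod)
            if abs(simulated - nominal_days) <= tolerance:
                break
            # schedule too long -> raise productivity, and vice versa
            prod *= simulated / nominal_days
        return prod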
Table 28. Sample project values (supplement to (Manzo 2004))
Project | Application Type | Technologies | Physical LOC | Stories | Staff | Tasks/Story | Tasks | Person Months | Duration (months) | Nominal (months) | Duration (days) | Nominal (days) | Nominal velocity | Nominal prod’ty
Applied Systems | Insurance agency system | HTML/VB | 34,920 | 12 | 9.1 | 48.5 | 582 | 34.94 | 3.82 | 5.91 | 72.58 | 112.31 | 76 | 3.26
DAS Profume | Scientific/engineering | C++ | 64,816 | 23 | 5 | 46.97 | 1,080 | 18.66 | 3.72 | 4.32 | 70.68 | 82.07 | 16 | 14
Finatus | Compliance - expert | HTML/VB | 43,036 | 22 | 5.3 | 32.6 | 717 | 17.89 | 3.36 | 4.23 | 63.84 | 80.36 | 17.25 | 8.1
Argent Trading | Barter exchange | SQL/VB/HTML | 140,134 | 103 | 11.8 | 22.68 | 2,336 | 112.58 | 9.54 | 10.61 | 181.26 | 201.6 | 20.77 | 7.9
CBOE 2 | Options exchange site | HTML/SQL | 25,011 | 19 | 5.2 | 21.94 | 417 | 13.94 | 2.67 | 3.73 | 50.73 | 70.94 | 17.5 | 5.13
CTC | Commercial HMI | C++ | 409,008 | 261 | 14.9 | 26.12 | 6,817 | 205.27 | 13.81 | 14.33 | 262.39 | 272.21 | 14.94 | 19
CBOE 1 | Options exchange site | HTML | 53,731 | 48 | 9.5 | 18.66 | 896 | 42.41 | 4.48 | 6.51 | 85.12 | 123.73 | 21.9 | 4.7
Cronos | Time and billing | C++/VB | 76,766 | 71 | 5.6 | 18.02 | 1,279 | 26.87 | 4.8 | 5.18 | 91.2 | 98.5 | 6.98 | 13.14
eMerge | Hybrid Web/client-server | VB/HTML | 38,968 | 43 | 8.2 | 15.1 | 649 | 70.93 | 8.62 | 8.42 | 163.78 | 160.02 | 27.3 | 3.28
Appointments 123 | ASP | HTML/VB/SQL | 16,875 | 21 | 7 | 13.39 | 281 | 9.79 | 1.39 | 3.13 | 26.41 | 59.44 | 15.75 | 2.945
VisiBILLity | On-line billing/tracking | VB/HTML | 64,277 | 111 | 6.4 | 9.65 | 1,071 | 17.2 | 2.7 | 4.15 | 51.3 | 78.79 | 4.1 | 10.41
Motient | Palm email client | C/HTML | 10,173 | 22 | 3.1 | 7.71 | 170 | 4.53 | 1.45 | 2.13 | 27.55 | 40.43 | 3.8 | 5.87
The productivity required to achieve the nominal schedule varies markedly across
projects in the sample data, even though a single company executed all projects. Note, however,
that the projects implemented in C++ all show similar productivity, as do most of the HTML projects. The actual productivity does not matter to
the CORADMO model, as productivity is held constant when varying CORADMO factors to
reach the actual schedule. That is, the CORADMO model evaluates what factors might cause the
project to deviate from its nominally achievable schedule, for a given (fixed) productivity.
After calibrating the productivity to each project, the author rated each project by its
CORADMO factors, which are entered as indices (1–6, corresponding to Very-Low-to-Extra-High
ratings) into the model. Lookup tables in the CORADMO module translate these indices into
factors that are applied as discussed in the Model section. The Results section reviews this
process and discusses the findings.
5.2 Running test cases
As discussed in Calibration, the project size (in staff and tasks) is first loaded into the model, and an intrinsic productivity is determined that allows the project to complete at the
nominal schedule, which is proportional to the square-root of the effort expended on the project.
Once this step is completed for a project, the CORADMO Product, Process, Project, People, and
Risk factor ratings are entered. In the original work done early in this research, these ratings were
applied only at the factor level, as presented in Table 29. Because the new system dynamics
model rates the individual sub-factors, these are expanded uniformly into the sub-factors. That is,
each of the sub-factors initially receives the same VL–XH (1–6) rating. The model is then run
and the schedule results examined.
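A sketch of this uniform expansion follows; the sub-factor groupings match those listed in Table 32, though the function itself is illustrative rather than part of the model:

    # Sketch: expand factor-level ratings (1-6, VL..XH) uniformly
    # into the sub-factor ratings used by the system dynamics model.
    SUB_FACTORS = {
        "Product": ["Simplicity", "Reuse", "Deferrals", "Models", "TRL"],
        "Process": ["Streamlining", "Tools"],
        "Project": ["Collaboration", "MMPTs"],
        "People":  ["KSAs", "Compatibility"],
        "Risk":    ["Risk"],
    }

    def expand(factor_ratings):
        """factor_ratings: e.g. {"Product": 5, "Process": 5, ...}"""
        return {sub: factor_ratings[factor]
                for factor, subs in SUB_FACTORS.items()
                for sub in subs}

    print(expand({"Product": 5, "Process": 5, "Project": 6,
                  "People": 5, "Risk": 3})["Simplicity"])    # -> 5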
Table 29. Original factor ratings summary, by project
Projects: 1 Applied Sys, 2 DAS Profume, 3 Finatus, 4 Argent Trading, 5 CBOE 2, 6 CTC, 7 CBOE 1, 8 Chronos, 9 eMerge, 10 Appts 123, 11 VisiBILL, 12 Motient
Factor | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12
Product | VH | L | VH | VH | VH | L | VH | L | L | VH | VH | N
Process | VH | VH | VH | H | VH | N | VH | VH | N | VH | VH | VH
Project | XH | VH | XH | H | XH | N | VH | VH | VH | XH | XH | VH
People | VH | VH | VH | VH | VH | VH | VH | VH | VH | VH | VH | VH
Risk | N | N | N | N | N | N | N | N | N | N | N | N

Table 30. Original factor ratings schedule results, by project (project columns numbered as in Table 29)
Schedule (days) | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12
Actual schedule | 73 | 71 | 64 | 181 | 51 | 262 | 85 | 91 | 164 | 26 | 51 | 28
Modeled schedule | 39 | 79 | 30 | 54 | 28 | 313 | 39 | 92 | 161 | 24 | 24 | 29
% error | -47% | 11% | -53% | -70% | -45% | 19% | -54% | 1% | -2% | -8% | -54% | 3%

Table 30 presents the results of this first-cut model run. While the results are very good for some projects, it appears some factors may be incorrect, or over-influencing schedule. An understanding of how the factors apply in the dynamic model shows it is particularly sensitive to the Product ratings, because several of the sub-factors directly affect the task production rate, and therefore schedule. The author therefore undertook a more conservative approach, first returning the Simplicity, Reuse, and TRL sub-factors to their nominal ratings, and re-running the model. The source project data was re-examined to determine the sub-factor ratings with more granularity. Simplicity was then re-determined based on the project application type, its technology, and (where available) notes from (Manzo 2004). Reuse was set based on the technology used and domain, using the author’s experience to estimate how much reuse was likely. [23] Finally, TRL was changed from nominal only if the project characteristics strongly suggested use of particularly mature or immature technology. This more conservative approach resulted in more credible schedules in this rough-cut.
As discussed in Application of CORADMO factors, other factors and sub-factors have indirect, and much more subtle, effects on schedule. These affect such behaviors as the growth of requirements, the speed in handling change, the magnitude of change encountered, and many
other second- and higher-order effects. As with the Product sub-factors, these remaining sub-
factors were initially set to the original project ratings. To complete the schedule estimation,
these values were then adjusted upward or downward until the modeled schedule was close to
the actual schedule for each project. The author’s preference was to keep the average sub-factor
rating as close to the original rating as possible, since those were arrived at by Wideband Delphi
consensus, and to make changes only where the project sample data suggested the alterations
made sense. The results of these adjustments may be seen in the next section.
[23] For example, HTML project code tends to be quite repetitive, and also to reuse
standard Javascript libraries. User interface code in C++ also frequently uses display libraries,
and its Standard Template Library (STL).
5.3 Quantitative results
Table 31 presents the quantitative results of the sub-factor adjustments, summarized at
the CORADMO factor level. Differences in the factor ratings from the originals in Table 29 are marked here with an asterisk. Six of the twelve Product factor ratings were changed, all but
one reduced by one rating level from the original. The Product rating for Project 2 (DAS
Profume) was raised by one level. Two People factor ratings were changed, one raised by one
level, and the other reduced by two. Only one Project rating was changed, raised by one level.
Table 31. Final factor ratings summary, by project (project columns numbered as in Table 29; changes from Table 29 marked *)
Factor | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12
Product | H* | N* | H* | H* | H* | L | H* | L | L | VH | VH | N
Process | VH | VH | VH | H | VH | N | VH | VH | N | VH | VH | VH
Project | XH | XH* | XH | H | XH | N | VH | VH | VH | XH | XH | VH
People | VH | XH* | VH | VH | VH | VH | N* | VH | VH | VH | VH | VH
Risk | N | N | N | N | N | N | N | N | N | N | N | N

Details of the CORADMO sub-factor revised ratings, in numeric form, are presented in Table 32 for all projects, along with the average rating for each factor. Table 33 presents the schedule as estimated by the CORADMO model with these revised sub-factor ratings, and the actual schedule realized. Close agreement may be seen between the modeled and actual schedules.
Table 32. Revised factor ratings detail, by project (project columns numbered as in Table 29; factor rows show the average of their sub-factors)
Factor / sub-factor | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12
Product | 3.6 | 2.6 | 3.8 | 4.0 | 3.8 | 2.2 | 3.6 | 2.2 | 2.2 | 5.0 | 5.0 | 3.4
Simplicity | 4 | 2 | 3 | 5 | 4 | 1 | 3 | 2 | 2 | 5 | 5 | 4
Reuse | 3 | 3 | 3 | 4 | 4 | 2 | 4 | 2 | 2 | 5 | 5 | 4
Deferrals | 3 | 3 | 5 | 3 | 3 | 3 | 4 | 3 | 3 | 5 | 5 | 3
Models | 5 | 3 | 5 | 5 | 5 | 2 | 4 | 2 | 2 | 5 | 5 | 3
TRL | 3 | 2 | 3 | 3 | 3 | 3 | 3 | 2 | 2 | 5 | 5 | 3
Process | 5.0 | 5.0 | 4.5 | 3.5 | 5.0 | 3.0 | 4.5 | 5.0 | 3.0 | 5.0 | 5.0 | 5.0
Streamlining | 5 | 5 | 5 | 4 | 5 | 3 | 4 | 5 | 3 | 5 | 5 | 5
Tools | 5 | 5 | 4 | 3 | 5 | 3 | 5 | 5 | 3 | 5 | 5 | 5
Project | 6.0 | 6.0 | 6.0 | 4.0 | 6.0 | 3.0 | 5.0 | 5.0 | 5.0 | 6.0 | 6.0 | 5.0
Collaboration | 6 | 6 | 6 | 4 | 6 | 3 | 5 | 5 | 5 | 6 | 6 | 5
MMPTs | 6 | 6 | 6 | 4 | 6 | 3 | 5 | 5 | 5 | 6 | 6 | 5
People | 5.0 | 6.0 | 5.0 | 4.5 | 5.0 | 5.0 | 3.0 | 5.0 | 5.0 | 5.0 | 5.0 | 5.0
KSAs | 5 | 6 | 5 | 4 | 5 | 5 | 3 | 5 | 5 | 5 | 5 | 5
Compatibility | 5 | 6 | 5 | 5 | 5 | 5 | 3 | 5 | 5 | 5 | 5 | 5
Risk | 3.0 | 3.0 | 3.0 | 3.0 | 3.0 | 3.0 | 3.0 | 3.0 | 3.0 | 3.0 | 3.0 | 3.0
Risk | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3

Table 33. Revised factor ratings schedule results, by project (project columns numbered as in Table 29)
Schedule (days) | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12
Actual schedule | 73 | 71 | 64 | 181 | 51 | 262 | 85 | 91 | 164 | 26 | 51 | 28
Modeled schedule | 74 | 72 | 64 | 180 | 50 | 262 | 86 | 92 | 161 | 26 | 51 | 28
% error | 1% | 1% | 0% | -1% | -2% | 0% | 1% | 1% | -2% | 0% | 0% | 0%

To summarize, the CORADMO system dynamics model can be calibrated to a nominal agile schedule, produce static quantitative results that are credible, and demonstrate dynamic behavior that is relatable to real-world experience. Similarly, when its twelve sub-factors are rated according to objective project criteria, CORADMO is capable of predicting the actual schedule with very good accuracy, and of demonstrating dynamic behavior that is consistent with actual or reasonably expected real-world experience.
5.4 Qualitative observations
The CORADMO model was shown above to achieve very good estimation accuracy of
schedules, within the limitations of the sample data. This section explores some of the qualitative
results of the system dynamics model that seem to correspond with project behaviors seen in the
real world. The model repeats many of these behaviors, to a greater or lesser magnitude, on several projects. Rather than belabor these in repeated discussion of similar behaviors on each project, the following sub-sections illustrate examples of these behaviors. Detailed graphs from all of the studied projects may be found in Appendix B.
5.4.1 Requirements growth
Requirements growth (also volatility or creep) is a well-known source of project cost and
schedule overrun (Jones 1995; Zowghi and Nurmuliani 2002; Nurmuliani, Zowghi, and Powell
2004). Figure 20 illustrates the difference that tool use, knowledgeable staff, and good processes
made on requirements growth, and thereby schedule, in an example taken from Project 5 (CBOE
2). Comparing the slopes of the scheduled tasks (line 2) in each graph, one can note far less
requirements growth in graph (b) than in (a). Within the model, this restraint is due to better
initial requirements analysis (the higher starting point of line 2 in graph (b)), and to better control
of change traffic throughout the project (lesser slope of line 2 in graph (b)).
Figure 20. Requirements growth and schedule
Evident in the dynamic behavior of the model, but difficult to illustrate in static graphs, is
the reaction of requirements growth to productivity. Many of the sample projects show high
productivity in their early phases. Varying no other parameter but productivity in the model, one
also sees a faster increase in requirements when the task productivity is higher. It would appear
that the customer reacts to good performance by asking for still more, a behavior many
practitioners (including the author) have seen in real-world projects.
In graph (a), when the productivity levels out around day 50 (due to a high level of
change rework, discussed elsewhere), the customer is slow to react (or perhaps greedy) and
continues to demand more tasks, even though the already delivered tasks have nearly reached the
then-current scheduled tasks. That is, the project could have ended around day 50 having
sufficiently met the customer requirements, but instead was extended to day 71 (the nominal
completion date). More details of the need for, and reaction to mid-project change traffic are
discussed in a later sub-section.
5.4.2 Change traffic
Repositioning manpower from design and development to handle change traffic can
significantly and adversely affect project schedule. Figure 21 from Project 3 (Finatus) illustrates
this effect.
Figure 21. Change traffic behavior, Project 3 (Finatus)
In graph (a), the nominal behavior, manpower to support change traffic (line 2) begins
building early in the project, and peaks around day 55, by which time it consumes nearly all the
available design and development staff (line 3). While change traffic tapers off after this point, it
continues to use more than one-half the available staff through the end of the project. Overall,
manpower to handle change traffic consumes roughly one-third of the entire staffing budget for
the project. Improving this performance by reducing change traffic, as seen by the actual
behavior in graph (b), is a major contributing factor to reducing the project schedule by 20%.
Figure 22. Other change traffic behavior, Project 4 (Argent Trading)
Not all of the sample projects exhibit such limited change traffic, although they
nonetheless are able to improve schedule. Figure 22 from Project 4 (Argent Trading) shows
significant staff being used to address changes in both the nominal (a) and actual (b) manpower
graphs. (An error in the graph omits the line for the total design and development staff, which is
approximately 10 persons.) In both nominal and actual graphs, at the peak more than one-half of
the staff is busy handling change traffic. The increase in change traffic of Project 4 over Project 3 is attributable to the much larger scope and schedule of the former, and to its poorer Process and Project ratings.
The dips in line (2) around day 105 in graph (a), and around day 140 in graph (b), are due
to staff redirection to handle refactoring, a topic which is discussed further in a later sub-section.
5.4.3 Refactoring and quality
Due to the lack of (or at least lower reliance on) up-front design, agile development methods
usually rely on planned re-work, also known as refactoring, to improve the design periodically
(Beck 2005). (Cao, Ramesh, and Abdel-Hamid 2010) argues that the lower cost-of-change in
agile projects “largely relies on refactoring practice.” Figure 23 from Project 4 (Argent Trading)
shows the effect of refactoring on the quality of design.
Figure 23. Early vs. late refactoring
The nominal behavior in graph (a) shows the design quality (line 1) decreasing at a more rapid rate than the actual behavior in graph (b). This is due largely to the above-nominal values of Product, Process, and People factors in the actual performance, which keep the design from degrading as quickly. [24] This behavior allows Project 4 to delay refactoring longer than in the nominal case, where the earlier refactoring allows later degradation of the design quality, but too late for another refactoring. The higher Process and People factors also allow fewer staff to be dedicated to refactoring than in the nominal case, but with the same improvement in quality. Finally, the lower rate of quality degradation in the actual case means the product is of higher quality at project end.
[24] These factors are not as high in Project 4 as in other projects, however, which can be seen in those projects’ still-lower rates of quality degradation.
Figure 24. “Bad smell” refactoring
Figure 24 illustrates related refactoring behavior from Project 7 (CBOE 1). In the
nominal case of graph (a), the design quality at the end of the project has degraded to the point
(caused a “bad smell”) that late refactoring is required, as in (Elssamadisy and Schalliol 2002).
By contrast in the actual case of graph (b), because of higher quality performance and a faster
schedule, the design quality has not degraded so far by project end as to require refactoring.
5.4.4 Productivity
The CORADMO system dynamics model calculates the productivity across the project,
in both nominal and actual cases, as illustrated in Figure 25. Here line (1) is the actual
productivity, calculated from the tasks-delivered per person-day in the previous iteration (after an
initial estimate), and also known as “yesterday’s weather” (Beck 2005). Line (2) is the intrinsic
productivity (defined earlier), as modified by team size, learning, and other factors. Line (3) is
the cumulative actual productivity measured across the entire schedule to date.
Figure 25. Productivity decline
The modified intrinsic productivity increases across the duration of the project, in both
nominal (a) and actual (b) scenarios. This growth is due largely to team learning, a component of
productivity in the Integrated Project Dynamics Model (IPDM) of (Abdel-Hamid and Madnick
1991), on which this work is based. In the nominal case (a), a “kink” in line (2) occurs around
day 110, due to the quality improvement from refactoring at that time; no such
refactoring occurs in the actual case (b), and line (2) is smooth.
Regardless of the growth in modified intrinsic productivity, the productivity actually
realized declines throughout the project. This decline is due to the increasing diversion of staff to
handle change traffic, instead of developing and delivering new tasks. The productivity decline is
steeper in the nominal case (a) because the level of change traffic is much higher than in the
actual case (b). Productivity recovers somewhat in the last two iterations of case (a), due to
reduced change traffic, although it begins another decline. Case (b) shows a slight improvement
of productivity in the last two iterations.
Since agile development methods deliver incrementally, where each iteration builds upon
the work of the previous, this observation may be an example of the Incremental Development
Productivity Decline (IDPD) noted in (Moazeni, Link, and Boehm 2013). The IDPD predicts an
increased number of required builds due to the decline of productivity, which is directly
analogous to the additional iterations necessary because of an extended schedule.
5.4.5 Schedule and scope
A defining characteristic of agile development methods is the intertwining of schedule
and scope, where schedule may be shortened by shedding tasks, or lengthened by adding tasks,
as the customer desires. The CORADMO system dynamics model, augmenting the model in (Cao
2005), illustrates the results of this interplay in Figure 26.
In these graphs from Project 11 (VisiBILL), line (1) represents the schedule pressure,
defined as the ratio of the time remaining to the time perceived needed to complete the scheduled
work. Line (2) is the time perceived remaining, which is the lesser of the time perceived needed
and the maximum tolerable schedule. The time remaining, line (3), is the time remaining until the
end of the schedule as currently known. Finally, the adjusted scope, line (4), is the number of
tasks that “should” be scheduled, given the current productivity and remaining time.
Figure 26. Schedule and scope interactions
In the nominal schedule (a), the expectation of what can be completed (line 2) varies until
it is realized, around day 50, that the deadline is approaching. Correspondingly, the schedule
pressure remains low until that time. Since the customer is slow to react to the looming time
constraint, though, the adjusted scope rises late in the project, which further increases the
schedule pressure. This combination of increasing customer expectations and decreasing time
available sets the stage for a “death march” (Yourdon 2003), or at least portends a stressed and
overworked staff.
By contrast in the actual schedule (b), whose rating factors were very-high or extra-high,
the expectation of what can be completed (line 2) peaks around day 23, and then decreases as the
schedule progresses. The schedule pressure remains minimal, because the time perceived
remaining is always below the actual time remaining. The adjusted scope rises slowly from the
beginning, but never increases to the point the project cannot be completed. These behaviors are
the result of a well-controlled development process.
Chapter 6
Discussion
The CORADMO model presents quantitative schedule estimates that agree well with the
actual schedule results observed, often predicting the schedule within a day or two. These results
were achieved not by directly modifying the productivity with multiplicative factors, which can
generate any result desired, but by second-, third- or higher-order effects on productivity, through
the related processes of estimating, managing change, controlling technical debt, and
streamlining operations, among others.
The multiplicative values used for the CORADMO sub-factors were derived from existing
sources, and applied to parts of the model whose behavior was reasonably dependent on the sub-
factor. While the exact values of these multiplicative sub-factors may be incorrect, the ratio of
the smallest-to-largest value appears to be correct, at least within the ranges exercised. That is,
running the model by varying the sub-factor indices within reasonable ranges, as constrained by
objective project characteristics, still generated both subjectively plausible and objectively
correct results.
6.1 Hypotheses
• H1: The duration of agile projects is proportional to the square-root of their effort.
This hypothesis is conditionally confirmed, with respect to the available sample data.
• H2: Each of the rating scales is positively correlated with schedule acceleration or
deceleration.
This hypothesis is partially confirmed, with the exception of the Risk acceptance factor,
the Deferrals sub-factor of Product, and the Tools sub-factor of Process.
Risk acceptance did not vary across the projects in the sample data set, and so its effect
on schedule cannot be verified; varying it experimentally, however, showed a negligible effect on
schedule.
Deferrals does not directly affect project schedule in the model, as it only identifies
points during performance where the customer might consider the project complete because a
high enough percentage of the scheduled tasks have been delivered. It is positively correlated,
however, in that varying it does move that point forward or backward in schedule.
Tool use appears to be negatively correlated with schedule acceleration under some
conditions. If requirements volatility is high, then more extensive tool use actually extends the
schedule. This appears to occur because better tool use provides more accurate information when
initially estimating schedule, with the result that more schedule time is (correctly) allotted.
• H3: There exist domains where the accuracy of the predicted schedule for a given effort
is within 30% of the actual schedule, at least 70% of the time—that is, PRED(0.30) ≥
70%.
This hypothesis is conditionally confirmed, with respect to the available sample data. The
actual completion date of all projects in the sample data set fell within a small delta of the
predicted completion.
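The criterion can be checked directly against the modeled and actual schedules of Table 33; the calculation below restates the PRED criterion rather than reproducing any tooling from the research:

    # PRED(0.30): fraction of projects whose modeled schedule falls
    # within 30% of the actual schedule (data from Table 33).
    actual  = [73, 71, 64, 181, 51, 262, 85, 91, 164, 26, 51, 28]
    modeled = [74, 72, 64, 180, 50, 262, 86, 92, 161, 26, 51, 28]
    pred_30 = sum(abs(m - a) / a <= 0.30
                  for a, m in zip(actual, modeled)) / len(actual)
    print(pred_30)   # -> 1.0, i.e., 100% of projects within 30%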
6.2 Threats to validity
The nature of this research problem does not permit traditional experiment design, where
model independent variables are varied individually to assess their effect on model outcome
dependent variables. The author cannot control the variables, nor establish a control group, and
must observe existing populations using passive techniques. Data collection in this research
therefore falls into the category of quasi-experimentation, with its associated threats to validity.
According to (Cook and Campbell 1979), validity threats for quasi-experiments fall into the
following four major categories:
• Statistical conclusion validity
• Internal validity
• Construct validity
• External validity
The following sections discuss how these threats were controlled in the research.
6.2.1 Internal validity
(Lavrakas 2008a) describes internal validity as the degree to which the research design
can provide evidence to test the possible cause-and-effect relationship between an independent
variable (the CORADMO model factors), and a dependent variable (the schedule duration). If
the research design lacks internal validity, although the researchers may speculate about the
observed cause-effect relationship, the research cannot support it.
The system dynamics model may not adequately establish causality. In this context,
the association of higher or lower values of factors may not be causally related to the
acceleration or extension of schedule duration. One possible cause of such a threat would be the
incorrect application of CORADMO model factors to simulation elements of the (Cao 2005)
system dynamics model. Another cause might be that the simulation elements of the Cao model
might not reflect the correct influence of that element on projects—that is, Cao’s model may
itself be incorrect. A third cause might be that the CORADMO model factors are actually
unrelated to schedule acceleration, and the results obtained are due to chance or other unknown
factors.
6.2.2 Construct validity
Construct validity addresses whether the model constructs are representative of the
hypothetical area of interest, whether the research approach is sufficient to collect meaningful
data addressing those constructs, and whether the actual realization of the approach is capable of
measuring the phenomena accurately (Lavrakas 2008b). The following threats to validity exist for
constructs:
6.2.2.1 General construct validity
Model factors may inaccurately represent the concept to be measured. This potential
problem may be assessed at a high level by evaluating the “face validity” of the model factors—
that is, whether each factor is reasonably related to schedule acceleration. This potential threat
has been mitigated by a thorough literature review, and by examination of and expert judgement
regarding the model factors by outside experts, including the advisor to this research.
6.2.2.2 Modeling errors
Underlying model constructs may have been misunderstood, or errors introduced.
Potential causes may include: 1) misunderstanding or misinterpretation of the (Cao 2005) model
constructs by the author, 2) mis-entering the Cao model when reconstructing it, and 3)
misunderstanding the application of CORADMO model factors to Cao model elements. This
potential threat has been reduced by:
• Establishing, verifying, and maintaining unit consistency within the model
• Comparing model constructs with expert sources, such as (Madachy 2007)
• Comparing model constructs with alternative and related models, such as (Abdel-Hamid
and Madnick 1991) and (A. S. White 2014)
• Verifying model baseline results against the original results published by Cao
• Performing sensitivity analysis of CORADMO model factor adjustments
6.2.2.3 Hypothesis-guessing
Being aware of the hypotheses, the author may have biased the modeling results
(Cook and Campbell 1979). That is, the author may have chosen CORADMO model factor
multiplier values, applied the factors to underlying model components, or selected CORADMO
factor indices to create the expected result. This threat has been mitigated by:
• Selecting model factor multiplier values from independent sources, such as COCOMO II
• Applying model factors to underlying model components conservatively, based on the
relationship of each factor to the underlying component
• Running the final model after identifying the factor multipliers and their application to
the underlying model
• Minimizing the differences of CORADMO factor indices from the original Wideband
Delphi indices assignment
6.2.2.4 Confounding constructs (Cook and Campbell 1979)
Model factors interact, reducing sensitivity. The effect of many of the individual model
sub-factors is small, and the schedule contributions of these factors are difficult to separate. This
threat was mitigated from the initial static CORADMO model by the use of the system dynamics
model, which explicitly calculates the effect of each factor. Model factors were found to interact,
and this interaction is explicitly modeled and tested.
6.2.3 External validity
External validity threats affect the generalizability of the results to other, similar settings.
These threats may include (a) sample characteristics, and (b) setting characteristics (Kalaian and
Kasim 2008).
6.2.3.1 Sample characteristics
The project samples may not represent the population. In this case, the research
results may be inapplicable outside of the projects modeled. (Manzo 2004) may have selected
sample projects that showed specific behavior, such as schedule acceleration, and omitted
projects that showed schedule extension. In future work, the threat may be mitigated by
identifying and testing a wider range of projects, from a variety of sources.
6.2.3.2 Setting characteristics
The study setting may be influenced by organization-specific conditions. The
research was performed against projects from a single organization, which limits the applicability
of the study outside this sample set. In future work, the threat may be mitigated by identifying
and testing a wider range of projects, from a variety of sources.
References
Abdel-Hamid, Tarek, and Stuart E. Madnick. 1991. Software Project Dynamics: An Integrated
Approach. Prentice-Hall, Inc. http://dl.acm.org/citation.cfm?id=124574.
Adler, Paul S., and Bryan Borys. 1996. “Two Types of Bureaucracy: Enabling and Coercive.”
Administrative Science Quarterly 41 (1): 61–89. doi:10.2307/2393986.
Alnuaimi, Omar A., Lionel P. Robert, and Likoebe M. Maruping. 2010. “Team Size, Dispersion,
and Social Loafing in Technology-Supported Teams: A Perspective on the Theory of
Moral Disengagement.” Journal of Management Information Systems 27 (1): 203–30.
Anderson, David J., and Donald G. Reinertsen. 2010. Kanban: Successful Evolutionary Change
for Your Technology Business. Sequim, Washington: Blue Hole Press.
Arthur, Lowell Jay. 1992. Rapid Evolutionary Development: Requirements, Prototyping &
Software Creation. New York: Wiley.
http://catdir.loc.gov/catdir/description/wiley033/91016670.html.
A. S. White. 2014. “An Agile Project System Dynamics Simulation Model.” International
Journal of Information Technologies and Systems Approach (IJITSA) 7 (1): 55–79.
doi:10.4018/ijitsa.2014010104.
Auer, Ken, and Roy Miller. 2001. Extreme Programming Applied: Playing to Win. 1 edition.
Boston: Addison-Wesley Professional.
Baik, Jongmoon, and Barry Boehm. 2000. “Empirical Analysis of CASE Tool Effects on
Software Development Effort.” ACIS International Journal of Computer & Information
Science 1 (1): 1–10.
Baik, Jongmoon, Barry Boehm, and Bert Steece. 2002. “Disaggregating and Calibrating the
CASE Tool Variable in COCOMO II.” IEEE Transactions on Software Engineering 28
(11): 1009–22. doi:10.1109/TSE.2002.1049401.
Balijepally, VenuGopal, Radha K. Mahapatra, Sridhar P. Nerur, and Kenneth Price. 2009. “Are
Two Heads Better than One for Software Development? The Productivity Paradox of Pair
Programming.” Management Information Systems Quarterly 33 (1): 7.
Basili, Victor R., Richard W. Selby, and David H. Hutchens. 1986. “Experimentation in Software
Engineering.” Software Engineering, IEEE Transactions on, no. 7: 733–43.
Beck, Kent. 2005. Extreme Programming Explained. 2nd ed. The XP Series. Boston: Addison-
Wesley.
Beck, Kent, Mike Beedle, Arie van Bennekum, Alistair Cockburn, Ward Cunningham, Martin
Fowler, James Grenning, et al. 2001. “Manifesto for Agile Software Development.”
http://www.agilemanifesto.org.
Beck, Kent, and Martin Fowler. 2001. Planning Extreme Programming. Boston: Addison-
Wesley.
Boehm, Barry. 1981. Software Engineering Economics. Prentice-Hall Englewood Cliffs, NJ.
———. 1988. “A Spiral Model of Software Development and Enhancement.” Computer 21 (5):
61–72.
———. 2000. “Requirements That Handle IKIWISI, COTS, and Rapid Change.” IEEE
Computer 33 (7): 99–102.
Boehm, Barry, Bradford Clark, Ellis Horowitz, A. Winsor Brown, Reifer Reifer, Sunita Chulani,
Ray Madachy, and Bert Steece. 2000. Software Cost Estimation with Cocomo II. 1st ed.
Upper Saddle River, NJ, USA: Prentice Hall PTR.
Boehm, Barry, Bradford Clark, Ellis Horowitz, Chris Westland, Ray Madachy, and Richard
Selby. 1995. “Cost Models for Future Software Life Cycle Processes: COCOMO 2.0.”
Annals of Software Engineering 1 (1): 57–94.
Boehm, Barry, Dan Ingold, Kathleen Dangle, Rich Turner, and Paul Componation. 2010. Early
Identification of SE-Related Program Risks. http://stinet.dtic.mil/oai/oai?&verb=getRecord&metadataPrefix=html&identifier=ADA548664.
Boehm, Barry, Dan Ingold, and Raymond Madachy. 2008. “The Macro Risk Model: An Early
Warning Tool for Software-Intensive Systems Projects.” In Proceedings of the 18th
Annual International Symposium of INCOSE. The Netherlands.
Boehm, Barry, and Richard Turner. 2004. Balancing Agility and Discipline: A Guide for the
Perplexed. Boston: Addison-Wesley.
Borshchev, Andrei, and Alexei Filippov. 2004. “From System Dynamics and Discrete Event to
Practical Agent Based Modeling: Reasons, Techniques, Tools.” In Proceedings of the
22nd International Conference of the System Dynamics Society.
http://www.econ.iastate.edu/tesfatsi/systemdyndiscreteeventabmcompared.borshchevfilippov04.pdf.
Brooks, Frederick. 1995. The Mythical Man-Month: Essays on Software Engineering. Reading, Mass.: Addison-Wesley.
Caiden, Gerald E. 1985. “Excessive Bureaucratization: The J-Curve Theory of Bureaucracy and
Max Weber through the Looking Glass.” Dialogue 7 (4): 21–33.
Cao, Lan. 2005. “Modeling Dynamics in Agile Software Development.” Doctoral dissertation,
Atlanta, Georgia: Georgia State University.
http://search.proquest.com/docview/304999802/abstract?accountid=14749.
Cao, Lan, Balasubramaniam Ramesh, and Tarek Abdel-Hamid. 2010. “Modeling Dynamics in
Agile Software Development.” ACM Trans. Manage. Inf. Syst. 1 (1): 5:1–5:26.
doi:10.1145/1877725.1877730.
Cockburn, Alistair. 2002. Agile Software Development. Boston: Addison-Wesley.
Conboy, Kieran. 2009. “Agility from First Principles: Reconstructing the Concept of Agility in
Information Systems Development.” Information Systems Research 20 (3): 329–54.
doi:10.1287/isre.1090.0236.
Cook, Thomas D, and Donald T Campbell. 1979. Quasi-Experimentation: Design & Analysis
Issues for Field Settings. Boston: Houghton Mifflin.
Crowston, K., and E. E. Kammerer. 1998. “Coordination and Collective Mind in Software
Requirements Development.” IBM Systems Journal 37 (2).
doi:10.1147/sj.372.0227.
Cunningham, Ward. 1992. “Technical Debt.” http://c2.com/cgi/wiki?TechnicalDebt.
Cusumano, Michael A, and Richard Selby. 1995. Microsoft Secrets: How the World’s Most Powerful Software Company Creates Technology, Shapes Markets, and Manages People.
New York: Simon & Schuster.
Daft, Richard L, and Robert H Lengel. 1986. “Organizational Information Requirements, Media
Richness and Structural Design.” Management Science 32 (5): 554.
Damian, D., and D. Moitra. 2006. “Guest Editors’ Introduction: Global Software Development:
How Far Have We Come?” IEEE Software 23 (5): 17–19. doi:10.1109/MS.2006.126.
De Cesare, Sergio, Mark Lycett, Robert D. Macredie, Chaitali Patel, and Ray Paul. 2010.
“Examining Perceptions of Agility in Software Development Practice.” Commun. ACM
53 (6): 126–30. doi:10.1145/1743546.1743580.
Deming, W. Edwards. 2000. Out of the Crisis. Cambridge, Mass.: MIT Press.
Dingsøyr, Torgeir, Sridhar Nerur, VenuGopal Balijepally, and Nils Brede Moe. 2012. “A Decade
of Agile Methodologies: Towards Explaining Agile Software Development.” Journal of
Systems and Software 85 (6): 1213–21. doi:10.1016/j.jss.2012.02.033.
Druskat, Vanessa Urch, and Anthony T. Pescosolido. 2002. “The Content of Effective Teamwork
Mental Models in Self-Managing Teams: Ownership, Learning and Heedful
Interrelating.” Human Relations 55 (3): 283–314. doi:10.1177/0018726702553001.
Dyba, Tore, and Torgeir Dingsoyr. 2008. “Empirical Studies of Agile Software Development: A
Systematic Review.” Information and Software Technology 50 (9–10): 833–59.
doi:10.1016/j.infsof.2008.01.006.
Eisenhardt, Kathleen M. 1989. “Building Theories from Case Study Research.” Academy of
Management Review 14 (4): 532–50.
Eisenhardt, Kathleen M., and Melissa E. Graebner. 2007. “Theory Building from Cases:
Opportunities and Challenges.” Academy of Management Journal 50 (1): 25–32.
Elssamadisy, Amr, and Gregory Schalliol. 2002. “Recognizing and Responding to ‘Bad Smells’
in Extreme Programming.” In Proceedings of the 24th International Conference on
Software Engineering, 617–22. Orlando, Florida: ACM. doi:10.1145/581339.581418.
Ferreira, Susan, James Collofello, Dan Shunk, and Gerald Mackulak. 2009. “Understanding the
Effects of Requirements Volatility in Software Engineering by Using Analytical
Modeling and Software Process Simulation.” Journal of Systems and Software 82 (10):
1568–77. doi:10.1016/j.jss.2009.03.014.
Ford, Jennifer, Ryan Colburn, and Yosef Morris. 2012. “Principles of Rapid Acquisition and
Systems Engineering.” Wright-Patterson Air Force Base, Ohio: Air Force Institute of
Technology.
Garcia, J., D. Popescu, G. Edwards, and N. Medvidovic. 2009. “Identifying Architectural Bad
Smells.” In 13th European Conference on Software Maintenance and Reengineering,
2009. CSMR ’09, 255–58. IEEE. doi:10.1109/CSMR.2009.59.
Hannay, Jo E., and Hans Christian Benestad. 2010. “Perceived Productivity Threats in Large
Agile Development Projects.” In Proceedings of the 2010 ACM-IEEE International
Symposium on Empirical Software Engineering and Measurement, 15:1–15:10. ESEM
’10. New York, NY, USA: ACM. doi:10.1145/1852786.1852806.
Hartwick, J., and H. Barki. 2001. “Communication as a Dimension of User Participation.”
Professional Communication, IEEE Transactions on 44 (1): 21.
Zhang, He, Barbara Kitchenham, and Dietmar Pfahl. 2008. “Software Process Simulation Modeling: Facts,
Trends and Directions.” In Software Engineering Conference, 2008. APSEC ’08. 15th
Asia-Pacific, 59–66. IEEE. doi:10.1109/APSEC.2008.50.
Highsmith, James A. 2000. Adaptive Software Development: A Collaborative Approach to
Managing Complex Systems. New York: Dorset House Pub.
Highsmith, James A. 2002. Agile Software Development Ecosystems. Addison-Wesley
Professional.
Hulkko, Hanna, and Pekka Abrahamsson. 2005. “A Multiple Case Study on the Impact of Pair
Programming on Product Quality.” In Proceedings of the 27th International Conference
on Software Engineering, 495–504. ACM. http://dl.acm.org/citation.cfm?id=1062545.
Hunt, John. 2006. Agile Software Construction. Springer London.
http://link.springer.com/chapter/10.1007/1-84628-262-4_1.
Jeffery, D. Ross. 1987. “Time-Sensitive Cost Models in the Commercial MIS Environment.”
Software Engineering, IEEE Transactions on, no. 7: 852–59.
Jones, C. 1995. “Determining Software Schedules.” Computer 28 (2): 73–75.
doi:10.1109/2.348003.
Kalaian, Sema A., and Rafa M. Kasim. 2008. “External Validity.” Edited by Paul J. Lavrakas.
Encyclopedia of Survey Research Methods. Thousand Oaks, CA: Sage Publications, Inc.
http://knowledge.sagepub.com/view/survey/n172.xml.
Kruchten, Philippe. 2004. The Rational Unified Process: An Introduction. Addison-Wesley
Professional.
Kruchten, Philippe, Robert L. Nord, and Ipek Ozkaya. 2012. “Technical Debt: From Metaphor to
Theory and Practice.” IEEE Software 29 (6): 18–21. doi:10.1109/MS.2012.167.
Kuppuswami, S., Kalimuthu Vivekanandan, and Paul Rodrigues. 2003. “A System Dynamics
Simulation Model to Find the Effects of XP on Cost of Change Curve.” In Extreme
Programming and Agile Processes in Software Engineering, 54–62. Springer.
http://link.springer.com/chapter/10.1007/3-540-44870-5_8.
Lane, Jo Ann, and Barry Boehm. 2007. Modern Tools to Support DoD Software Intensive System
of Systems Cost Estimation. DAN 347336. USC Center for Systems and Software
Engineering.
Lane, Jo Ann, Barry Boehm, Mark Bolas, Azad Madni, and Richard Turner. 2010. “Critical
Success Factors for Rapid, Innovative Solutions.” In New Modeling Concepts for Today’s
Software Processes, 52–61. Springer. http://link.springer.com/chapter/10.1007/978-3-
642-14347-2_6.
Lavrakas, Paul J. 2008a. “Internal Validity.” Edited by Paul J. Lavrakas. Encyclopedia of Survey
Research Methods. Thousand Oaks, CA: Sage Publications, Inc.
http://knowledge.sagepub.com/view/survey/n229.xml.
———. 2008b. “Construct Validity.” Edited by Paul J. Lavrakas. Encyclopedia of Survey
Research Methods. Thousand Oaks, CA: Sage Publications, Inc.
http://knowledge.sagepub.com/view/survey/n92.xml.
Lee, Jaejoon, and Dirk Muthig. 2006. “Feature-Oriented Variability Management in Product Line
Engineering.” Communications of the ACM 49 (12): 55–59.
Leffingwell, Dean. 2010. Agile Software Requirements: Lean Requirements Practices for Teams,
Programs, and the Enterprise. 1 edition. Addison-Wesley Professional.
Leonard, Dorothy, and Sylvia Sensiper. 1998. “The Role of Tacit Knowledge in Group
Innovation.” California Management Review 40 (3): 112.
Lepore, Deborah, and John Colombi. 2012. Expedited Systems Engineering for Rapid Capability
and Urgent Needs. SERC-2012-TR-034. Stevens Institute of Technology.
http://sercuarc.org/uploads/files/SERC%20RT-34%20Expedited%20SE%20Final
%20Report%20-%20FINAL%5B1%5D.pdf.
Linstone, Harold A, and Murray Turoff. 2002. The Delphi Method: Techniques and Applications.
Vol. 2006.
Lyneis, James M., Kenneth G. Cooper, and Sharon A. Els. 2001. “Strategic Management of
Complex Projects: A Case Study Using System Dynamics.” System Dynamics Review 17
(3): 237–60. doi:10.1002/sdr.213.
Madachy, Raymond J. 2007. Software Process Dynamics. Hoboken, New Jersey: John Wiley &
Sons.
Manzo, John. 2004. “Guilt-Free Agile Development: Plan-Driven or Agile--Why Choose When
You Can Have The Benefits of Both?” presented at the CSE Annual Research Review,
University of Southern California, Los Angeles, CA, March 18.
http://csse.usc.edu/events/2004/arr/presentations/friday/2_Agile/2_ADM.ppt.
Martin, James. 1991. Rapid Application Development. New York; Toronto; New York:
Macmillan Pub. Co.; Collier Macmillan Canada; Maxwell Macmillan International.
McChesney, Ian R., and Seamus Gallagher. 2004. “Communication and Co-Ordination Practices
in Software Engineering Projects.” Information and Software Technology 46 (7): 473–89.
doi:10.1016/j.infsof.2003.10.001.
McConnell, Steve. 1996. Rapid Development: Taming Wild Software Schedules. Redmond,
Wash.: Microsoft Press.
———. 2006. Software Estimation: Demystifying the Black Art.
O’Reilly Media, Inc.
Menzies, T, B Caglayan, E Kocaguneli, J Krall, F Peters, and B Turhan. 2012. “The PROMISE
Repository of Empirical Software Engineering Data.” West Virginia University,
Department of Computer Science. http://promisedata.googlecode.com/.
Moazeni, Ramin, Daniel Link, and Barry Boehm. 2013. “Incremental Development Productivity
Decline.” In Proceedings of the 9th International Conference on Predictive Models in
Software Engineering, 7. ACM. http://dl.acm.org/citation.cfm?id=2499403.
Murman, Earll M. 2002. Lean Enterprise Value: Insights from MIT’s Lean Aerospace Initiative.
New York: Palgrave.
Nan, Ning, and D.E. Harter. 2009. “Impact of Budget and Schedule Pressure on Software
Development Cycle Time and Effort.” IEEE Transactions on Software Engineering 35
(5): 624–37. doi:10.1109/TSE.2009.18.
Nerur, Sridhar, RadhaKanta Mahapatra, and George Mangalaraj. 2005. “Challenges of Migrating
to Agile Methodologies.” Commun. ACM 48 (5): 72–78. doi:10.1145/1060710.1060712.
Nurmuliani, Nur, Didar Zowghi, and S. Powell. 2004. “Analysis of Requirements Volatility
during Software Development Life Cycle.” In Software Engineering Conference, 2004.
Proceedings. 2004 Australian, 28–37. IEEE. http://ieeexplore.ieee.org/xpls/abs_all.jsp?
arnumber=1290455.
Oligny, Serge, Pierre Bourque, Alain Abran, and Bertrand Fournier. 2000. “Exploring the
Relation between Effort and Duration in Software Engineering Projects.” In Proceedings
of the World Computer Congress, 175–78.
Ono, Taiichi. 1988. Toyota Production System: Beyond Large-Scale Production. Portland, Or.:
Productivity Press.
Parsons, David, Hokyoung Ryu, and Ramesh Lal. 2007. “The Impact of Methods and Techniques
on Outcomes from Agile Software Development Projects.” In Organizational Dynamics
of Technology-Based Innovation: Diversifying the Research Agenda, 235–49. Springer.
http://link.springer.com/chapter/10.1007/978-0-387-72804-9_16.
Pikkarainen, M., J. Haikara, O. Salo, P. Abrahamsson, and J. Still. 2008. “The Impact of Agile
Practices on Communication in Software Development.” Empirical Software
Engineering 13 (3): 303–37. doi:10.1007/s10664-008-9065-9.
Poppendieck, Mary, and Tom Poppendieck. 2003. Lean Software Development: An Agile Toolkit.
The Agile Software Development Series. Boston: Addison-Wesley.
Reifer, Donald. 2013. Ten “Take Aways” from the Reifer “Quantitative Analysis of Agile
Methods” Study. http://www.isbsg.com/products/10-take-aways-from-the-reifer-agile-
report.
———. 2014. “Agile Estimating: Straightforward and Simple.” Reifer Consulting LLC.
Royce, Winston W. 1970. “Managing the Development of Large Software Systems.” In
Proceedings of IEEE WESCON. Vol. 26. Los Angeles.
http://leadinganswers.typepad.com/leading_answers/files/original_waterfall_paper_winst
on_royce.pdf.
Schwaber, Ken. 1997. “SCRUM Development Process.” In Business Object Design and
Implementation, edited by Dr Jeff Sutherland, Cory Casanave, Joaquin Miller, Dr Philip
Patel, and Glenn Hollowell, 117–34. Springer London.
http://link.springer.com/chapter/10.1007/978-1-4471-0947-1_11.
Shinde, Chirag. 2008. “Independent Signatories of The Manifesto for Agile Software
Development.” The Agile Manifesto. August.
http://www.agilemanifesto.org/sign/display.cgi?ms=000000123.
Sutherland, J. V., and K. Schwaber. 1995. “The SCRUM Methodology.” In Business Object
Design and Implementation: OOPSLA Workshop.
Takeuchi, Hirotaka, and Ikujiro Nonaka. 2000. “Classic Work: Theory of Organizational
Knowledge Creation.” In Knowledge Management: Classic and Contemporary Works,
139–82. The MIT Press.
Teasley, S.D., L.A. Covi, M.S. Krishnan, and J.S. Olson. 2002. “Rapid Software Development
through Team Collocation.” IEEE Transactions on Software Engineering 28 (7): 671–83.
doi:10.1109/TSE.2002.1019481.
Te’eni, Dov. 2001. “Review: A Cognitive-Affective Model of Organizational Communication for
Designing IT.” MIS Quarterly 25 (2): 251.
United States Air Force. 2011. “Air Force Instruction 63-114.” http://static.e-
publishing.af.mil/production/1/saf_aq/publication/afi63-114/afi63-114.pdf.
Valerdi, Ricardo. 2005. “The Constructive Systems Engineering Cost Model (COSYSMO).”
University of Southern California.
http://csse.usc.edu/csse/TECHRPTS/PhD_Dissertations/files/Ricardo%20Valerdi
%20dissertation%20(COSYSMO).pdf.
Valerdi, Ricardo, and Ron J. Kohl. 2004. “An Approach to Technology Risk Management.” In
Engineering Systems Division Symposium, 3:29–31.
http://www.researchgate.net/publication/228752216_An_approach_to_technology_risk_
management/file/3deec529e0378270be.pdf.
Van Heeringen, A., and P. A. Dijkwel. 1987. “The Relationships between Age, Mobility and
Scientific Productivity. Part I.” Scientometrics 11 (5-6): 267–80.
Walston, C. E., and C. P. Felix. 1977. “A Method of Programming Measurement and
Estimation.” IBM Systems Journal 16 (1): 54–73. doi:10.1147/sj.161.0054.
West, Dave, Tom Grant, M. Gerush, and D. D’Silva. 2010. “Agile Development: Mainstream
Adoption Has Changed Agility.” Forrester Research.
http://www.ca.com/us/~/media/Files/IndustryResearch/forrester-agile-development-
mainstream-adoption.pdf.
Williams, Laurie, Robert R. Kessler, Ward Cunningham, and Ron Jeffries. 2000. “Strengthening
the Case for Pair Programming.” Software, IEEE 17 (4): 19–25.
Womack, James P, and Daniel T Jones. 2003. Lean Thinking: Banish Waste and Create Wealth in
Your Corporation. New York: Free Press.
Yourdon, Edward. 2003. Death March. Pearson Education. http://dl.acm.org/citation.cfm?
id=940374.
Zhang, He, Barbara Kitchenham, and Dietmar Pfahl. 2010. “Software Process Simulation
Modeling: An Extended Systematic Review.” In New Modeling Concepts for Today’s
Software Processes, edited by Jürgen Münch, Ye Yang, and Wilhelm Schäfer, 6195:309–
20. Berlin, Heidelberg: Springer Berlin Heidelberg.
http://www.springerlink.com/content/1026116w64103568/.
Zhang, Xiaodong, Le Luo, Yu Yang, Yingzi Li, Christopher Schlick, and Morten Grandt. 2009.
“A Simulation Approach for Evaluation and Improvement of Organisational Planning in
Collaborative Product Development Projects.” International Journal of Production
Research 47 (13): 3471–3501. doi:10.1080/00207540802356770.
Zirger, B.J., and J.L. Hartley. 1996. “The Effect of Acceleration Techniques on Product
Development Time.” IEEE Transactions on Engineering Management 43 (2): 143–52.
doi:10.1109/17.509980.
Zowghi, Didar, and Nur Nurmuliani. 2002. “A Study of the Impact of Requirements Volatility on
Software Project Performance.” In Software Engineering Conference, 2002. Ninth Asia-
Pacific, 3–11. IEEE. http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1182970.
Appendix A
System dynamics flow equations
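The equations below were exported from the system dynamics modeling tool. Each stock follows the pattern X(t) = X(t - dt) + (inflows - outflows) * dt, that is, Euler integration of the net flow over a fixed step dt; GRAPH(...) denotes a piecewise-linear lookup table, DELAY(...) a material delay, SMTH1(...) a first-order smooth, and the COOK TIME / OVEN OUTFLOW entries the tool's batch ("oven") stocks. Purely as an illustration of the stock-update pattern (not part of the model itself), a minimal Python sketch with hypothetical inflow and outflow rate functions might read:

# Minimal sketch of the stock pattern X(t) = X(t - dt) + (in - out) * dt.
# The rate functions passed in are hypothetical placeholders, not model equations.
def simulate_stock(inflow, outflow, init=0.0, dt=0.25, t_end=10.0):
    stock, t, history = init, 0.0, []
    while t < t_end:
        history.append((t, stock))
        stock += (inflow(t, stock) - outflow(t, stock)) * dt  # Euler step
        t += dt
    return history

# Example: constant inflow of 2 tasks/day, outflow of 10% of the level per day.
trace = simulate_stock(lambda t, s: 2.0, lambda t, s: 0.1 * s)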
A.1 Change module
Changes_Verified(t) = Changes_Verified(t - dt) + (Rate_of__Verifying_Changes
+ Rate_of_External__Rej_Change - Rate_of_Fix_Change) * dt
INIT
Changes_Verified = 0
INFLOWS:
Rate_of__Verifying_Changes = DELAY( Rate_Corrective_Change_Requests
* ( 1 - Percent_Rescheduled_As_New_Task ) * Percent_Verified,
Estimation.Iteration_Duration )
Rate_of_External__Rej_Change = External_Change_Rate
OUTFLOWS:
Rate_of_Fix_Change = IF MP_Needed_Per_Change > 0 THEN
Daily_MP_For_Change / MP_Needed_Per_Change
ELSE
0
Change_as_New_Task(t) = Change_as_New_Task(t - dt) +
(Rate_of_Change_as_New_Task - Rate_Adaptive_Change_Incorporation) * dt
INIT
Change_as_New_Task = 0
INFLOWS:
Rate_of_Change_as_New_Task = Rate_Corrective_Change_Requests -
Rate_of_Response - Rate_of__Verifying_Changes
OUTFLOWS:
Rate_Adaptive_Change_Incorporation =
DELAY( Rate_of_Change_as_New_Task, Estimation.Iteration_Duration,
0 )
Change_Request(t) = Change_Request(t - dt) + (Rate_Corrective_Change_Requests
- Rate_of__Verifying_Changes - Rate_of_Change_as_New_Task -
Rate_of_Response) * dt
INIT
Change_Request = 0
INFLOWS:
Rate_Corrective_Change_Requests = ( Changes_on_Last_Iteration +
QA_AH.Errors_Last_Iteration ) / ( CORADMO.Streamlining_Multiplier
* Estimation.Iteration_Duration ) + Delayed_Change_Out_Rate * ( 1
+ Impact_of_Delay )
OUTFLOWS:
Rate_of__Verifying_Changes = DELAY( Rate_Corrective_Change_Requests
* ( 1 - Percent_Rescheduled_As_New_Task ) * Percent_Verified,
Estimation.Iteration_Duration )
Rate_of_Change_as_New_Task = Rate_Corrective_Change_Requests -
Rate_of_Response - Rate_of__Verifying_Changes
Rate_of_Response = DELAY( ( 1 - ( Percent_Rescheduled_As_New_Task +
Percent_Verified ) ) * Rate_Corrective_Change_Requests,
Estimation.Iteration_Duration )
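The DELAY( rate, delay time [, initial] ) calls above return their input rate delayed by the stated time, here one iteration duration. As a rough sketch only, and assuming a constant simulation step dt, such a fixed-lag delay can be modeled with a ring buffer; this is an illustration, not the tool's actual implementation:

from collections import deque

class FixedDelay:
    # Sketch of DELAY(input, delay_time, initial): emits each input value
    # delay_time later; until then it emits the initial value.
    def __init__(self, delay_time, dt, initial=0.0):
        steps = max(1, round(delay_time / dt))
        self.buf = deque([initial] * steps, maxlen=steps)
    def step(self, value):
        out = self.buf[0]       # oldest buffered value, delay_time old
        self.buf.append(value)  # the maxlen deque drops the value just read
        return out

For example, Rate_of__Verifying_Changes delays the verified share of corrective change requests by Estimation.Iteration_Duration (10 days under the nominal settings in section A.4).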
Delayed_Changes(t) = Delayed_Changes(t - dt) + (Delayed_Change_Rate -
Delayed_Change_Out_Rate) * dt
INIT
Delayed_Changes = 0
INFLOWS:
Delayed_Change_Rate = Customer_D.Delay_on_Change_Request *
Productivity_D.Tasks_Developed_in_Last_Iteration *
Estimation.Requirements_Volatility /
Estimation.Iteration_Duration
OUTFLOWS:
Delayed_Change_Out_Rate = Delayed_Changes *
Percent_of_Delayed_Change_Per_Day / DT
Fixed_Changes(t) = Fixed_Changes(t - dt) + (Rate_of_Fix_Change) * dt
INIT
Fixed_Changes = 0
INFLOWS:
Rate_of_Fix_Change = IF MP_Needed_Per_Change > 0 THEN
Daily_MP_For_Change / MP_Needed_Per_Change
ELSE
0
Rejected_or_Deferred_Changes(t) = Rejected_or_Deferred_Changes(t - dt) +
(Rate_of_Response) * dt
INIT
Rejected_or_Deferred_Changes = 0
INFLOWS:
Rate_of_Response = DELAY( ( 1 - ( Percent_Rescheduled_As_New_Task +
Percent_Verified ) ) * Rate_Corrective_Change_Requests,
Estimation.Iteration_Duration )
Alignment_to_Customer_Needs = IF ( Change_Request > 0 ) THEN
( Fixed_Changes / Change_Request ) * ( ( Changes_Verified -
Rejected_or_Deferred_Changes ) / Change_Request ) *
( Delayed_Changes / Change_Request ) *
( Change_as_New_Task / Change_Request ) *
Customer_D.Representative_of_Need
ELSE
1
Changes_on_Last_Iteration = ( 1 - Customer_D.Delay_on_Change_Request ) *
Productivity_D.Tasks_Developed_in_Last_Iteration *
Estimation.Requirements_Volatility
Daily_MP_For_Change = IF Changes_Verified > 0 THEN
MIN( MP_Needed_Per_Change * Desired_Change_Fix_Rate,
Manpower_AH.Daily_MP_for_DesignDev -
Refactoring_D.Daily_MP_for_Refactor ) *
( 1 - Planning_D.Schedule_Pressure )
ELSE
0
Desired_Change_Delay = CORADMO.Streamlining_Multiplier *
Estimation.Iteration_Duration
Desired_Change_Fix_Rate = Changes_Verified / Desired_Change_Delay
External_Change_Rate = 0.1
Impact_of_Delay = 0.5
MP_Needed_Per_Change = Refactoring_D.Impact_on_New_and_Change_Tasks *
Person_Days_per_Change *CORADMO.Collaboration_Multiplier *
CORADMO.Tool_Multiplier * CORADMO.KSA__Multiplier
Percent_of_Delayed_Change_Per_Day =
GRAPH(Planning_D.Percentage_Time_Remaining) (0.00, 0.04), (0.2, 0.025),
(0.4, 0.015), (0.6, 0.005), (0.8, 0.002), (1.00, 0.002)
Percent_Rescheduled_As_New_Task = 0.2
Percent_Verified = GRAPH(Planning_D.Schedule_Pressure) (0.00, 1.00), (0.1,
1.00), (0.2, 0.95), (0.3, 0.9), (0.4, 0.8), (0.5, 0.6), (0.6, 0.35), (0.7,
0.15), (0.8, 0.05), (0.9, 0.025), (1.00, 0.00)
Person_Days_per_Change = 5.2
Response_Index = IF ( Rate_Corrective_Change_Requests > 0 ) THEN
Estimation.Iteration_Duration * Rate_of_Fix_Change /
Rate_Corrective_Change_Requests
ELSE
0
Responsiveness = GRAPH(Response_Index) (0.00, 1.00), (9.00, 0.75), (18.0,
0.547), (27.0, 0.4), (36.0, 0.27), (45.0, 0.164), (54.0, 0.1), (63.0,
0.05), (72.0, 0.025), (81.0, 0.0125), (90.0, 0.00)
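GRAPH(x) (x1, y1), (x2, y2), ... definitions such as Percent_Verified above are graphical functions: the tool interpolates linearly between the listed points and clamps outside the table range. A small sketch of that evaluation, reusing the Percent_Verified table as data (illustration only):

def graph_lookup(x, points):
    # Piecewise-linear lookup with clamping, as in the GRAPH(...) tables.
    if x <= points[0][0]:
        return points[0][1]
    if x >= points[-1][0]:
        return points[-1][1]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

PERCENT_VERIFIED = [(0.0, 1.0), (0.1, 1.0), (0.2, 0.95), (0.3, 0.9),
                    (0.4, 0.8), (0.5, 0.6), (0.6, 0.35), (0.7, 0.15),
                    (0.8, 0.05), (0.9, 0.025), (1.0, 0.0)]

print(graph_lookup(0.45, PERCENT_VERIFIED))  # ~0.7 at schedule pressure 0.45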
A.2 CORADMO module
AA = 4
AAF = 0.30 * 100.0
AAM = ( AA + AAF * ( 1 + ( 0.02 * SU * UNFM ) ) ) / 100
Collaboration_Index = 3
Collaboration_Multiplier = GRAPH(Collaboration_Index)
(1.00, 1.22), (2.00, 1.09), (3.00, 1.00), (4.00, 0.93), (5.00, 0.86),
(6.00, 0.8)
Compatibility_Index = 3
Compatibility__Multiplier = GRAPH(Compatibility_Index) (1.00, 1.10), (2.00,
1.05), (3.00, 1.00), (4.00, 0.98), (5.00, 0.95), (6.00, 0.93)
Complexity = GRAPH(Simplicity_Index) (1.00, 1.09), (2.00, 1.05), (3.00,
1.00), (4.00, 0.96), (5.00, 0.92), (6.00, 0.87)
Deferrals_Difference = GRAPH(Deferrals_Index) (1.00, 0.00), (2.00, 0.05),
(3.00, 0.1), (4.00, 0.15), (5.00, 0.2), (6.00, 0.25)
Deferrals_Index = 3
Effort_Multiplier = Complexity * Technology_Maturity * Reuse_Multiplier
KSA_Index = 3
KSA__Multiplier = GRAPH(KSA_Index) (1.00, 1.42), (2.00, 1.19), (3.00, 1.00),
(4.00, 0.85), (5.00, 0.71), (6.00, 0.71)
MMPT_Index = 3
MMPT__Multiplier = GRAPH(MMPT_Index) (1.00, 1.17), (2.00, 1.09), (3.00,
1.00), (4.00, 0.9), (5.00, 0.78), (6.00, 0.78)
Models_Index = 3
Models_Multiplier = GRAPH(Models_Index) (1.00, 1.23), (2.00, 1.10), (3.00,
1.00), (4.00, 0.9), (5.00, 0.81), (6.00, 0.81)
Reuse_Index = 3
Reuse_Multiplier = 1 - AAM * Reuse_Percent
Reuse_Percent = GRAPH(Reuse_Index) (1.00, 0.00), (2.00, 0.05), (3.00, 0.1),
(4.00, 0.2), (5.00, 0.3), (6.00, 0.5)
Risk_Index = 3
Risk_Multiplier = GRAPH(Risk_Index) (1.00, 1.13), (2.00, 1.06), (3.00, 1.00),
(4.00, 0.94), (5.00, 0.89), (6.00, 0.84)
Simplicity_Index = 3
Streamlining_Index = 3
Streamlining_Multiplier = GRAPH(Streamlining_Index) (1.00, 1.15), (2.00,
1.06), (3.00, 1.00), (4.00, 0.98), (5.00, 0.95), (6.00, 0.95)
SU = 30
Technology_Maturity = GRAPH(TRL_Index) (1.00, 1.30), (2.00, 1.15), (3.00,
1.00), (4.00, 0.82), (5.00, 0.68), (6.00, 0.68)
Tool_Index = 3
Tool_Multiplier = GRAPH(Tool_Index) (1.00, 1.17), (2.00, 1.09), (3.00, 1.00),
(4.00, 0.9), (5.00, 0.78), (6.00, 0.78)
TRL_Index = 3
UNFM = 0.4
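With the nominal settings above (AA = 4, AAF = 30, SU = 30, UNFM = 0.4, and every index at 3), the COCOMO-style reuse chain evaluates as follows. This is only a numeric check of the equations as written, not model output:

# Numeric check of the reuse chain under the nominal index settings above.
AA, AAF, SU, UNFM = 4, 0.30 * 100.0, 30, 0.4
AAM = (AA + AAF * (1 + 0.02 * SU * UNFM)) / 100     # (4 + 30 * 1.24) / 100 = 0.412
reuse_percent = 0.1                                  # Reuse_Percent GRAPH at index 3
reuse_multiplier = 1 - AAM * reuse_percent           # ~0.9588
# Complexity and Technology_Maturity are both 1.00 at index 3, so:
effort_multiplier = 1.00 * 1.00 * reuse_multiplier   # ~0.959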
A.3 Customer
CRACK_Index = 0.75
Customer_Ability = Impact_of_Customer_Knowledge_on_Project * CRACK_Index
Customer_Involvement = Participation * Involvement *
( 1 / CORADMO.Streamlining_Multiplier ) *
( 1 / CORADMO.Collaboration_Multiplier )
Customer_Learning = Customer_Involvement * Customer_Ability
Customer_Process_Satisfaction = IF ( PREVIOUS( SELF, 1 ) <=
Effect_of_Productivity + Change_D.Responsiveness * Perceived_Quality )
THEN
DELAY( MAX( 0, MIN( Effect_of_Productivity + Change_D.Responsiveness
* Perceived_Quality, 1 ) ), Estimation.Iteration_Duration, 1 )
ELSE
DELAY( MAX( 0, MIN( Effect_of_Productivity + Change_D.Responsiveness
* Perceived_Quality, 1 ) ), 2 * Estimation.Iteration_Duration, 1 )
Customer_Trust = ( ( 1 - Weight_of_Initial_Trust ) * ( 1 -
Customer_Process_Satisfaction ) * ( Customer_Learning +
Customer_Process_Satisfaction ) * Sufficiency_of_Documentation ) +
( Weight_of_Initial_Trust * Initial_Trust )
Delay_on_Change_Request = ( 1 - Feedback_Quality ) *
CORADMO.Streamlining_Multiplier
Effect_of_Productivity = GRAPH(Productivity_D.Productivity_Estimation /
Estimation.Initial_Estimate_of_Productivity) (0.00, -0.95), (0.2, -0.9),
(0.4, -0.8), (0.6, -0.6), (0.8, -0.2), (1.00, 0.00), (1.20, 0.1), (1.40,
0.15), (1.60, 0.2), (1.80, 0.25), (2.00, 0.3)
Feedback_Quality = Customer_Learning * ( 1 / CORADMO.Collaboration_Multiplier
)
Impact_of_Customer_Knowledge_on_Project =
GRAPH(Planning_D.Percentage_Time_Remaining) (0.00, 0.5), (0.5, 0.9),
(1.00, 0.9)
Initial_Trust = MEAN( Customer_Involvement,
Refactoring_D.Quality_Initial_Design, Previous_Experience, Team_Reputation
)
Involvement = 0.5
Nominal_Errors_Committed_per_Task = 5
Participation = 0.6
Perceived_Quality = MAX( 0, MIN( 1 - QA_AH.Errors_per_Task_Last_Iteration /
Nominal_Errors_Committed_per_Task, 1 ) )
Previous_Experience = 0.5
Representative_of_Need = Feedback_Quality
Sufficiency_of_Documentation = IF ( CORADMO.Risk_Multiplier > 1 ) THEN
CORADMO.Risk_Multiplier * CORADMO.Models_Multiplier
ELSE
1.0
Team_Reputation = 0.5
Weight_of_Initial_Trust = GRAPH(Planning_D.Percentage_Time_Remaining) (0.00,
0.00), (0.1, 0.02), (0.2, 0.05), (0.3, 0.0997), (0.4, 0.2), (0.5, 0.5),
(0.6, 0.8), (0.7, 0.9), (0.8, 0.95), (0.9, 0.98), (1.00, 1.00)
A.4 Estimation
Avg_Number_of_Tasks_Per_Story = 4
Avg_Story_Velocity = 4.5
Avg_Task_Velocity = Avg_Story_Velocity / Avg_Number_of_Tasks_Per_Story
Day_in_Iteration = COUNTER( 0, Iteration_Duration )
Estimated_Coding_Time = Estimated_Schedule * ( 1 - Initial_Design_Percent )
Estimated_Cost = Manpower_AH.Total_Daily_Manpower * Estimated_Coding_Time
Estimated_Number_of_Stories = MIN( Real_Number_of_Stories *
Underestimate_Factor / ( CORADMO.Tool_Multiplier * CORADMO.KSA__Multiplier
), Real_Number_of_Stories )
Estimated_Number_of_Tasks = Estimated_Number_of_Stories *
Avg_Number_of_Tasks_Per_Story
Estimated_Schedule = ( Estimated_Number_of_Stories * Avg_Story_Velocity ) / (
Manpower_AH.Total_Daily_Manpower * Manpower_AH.Load_Factor )
Initial_Design_Percent = 0.25
Initial_Estimate_of_Productivity = Manpower_AH.Load_Factor /
Avg_Task_Velocity
Iteration_Duration = 10
Raw_Requirements_Volatility = 0.1
Real_Number_of_Stories = 166
Real_Number_of_Tasks = Avg_Number_of_Tasks_Per_Story * Real_Number_of_Stories
Requirements_Volatility = Raw_Requirements_Volatility *
CORADMO.Tool_Multiplier
Underestimate_Factor = 0.67
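Under the constants above, and borrowing Total_Daily_Manpower = 20 and Load_Factor = 1 from the Manpower module (section A.5) with all CORADMO multipliers at their nominal 1.0, the initial estimate works out roughly as below; note that Avg_Story_Velocity acts as person-days per story in the schedule formula. A worked illustration only:

# Worked example of the nominal estimate; assumes all CORADMO multipliers = 1.0
# and the Manpower-module values Total_Daily_Manpower = 20, Load_Factor = 1.
real_stories, underestimate_factor = 166, 0.67
est_stories = min(real_stories * underestimate_factor / (1.0 * 1.0),
                  real_stories)                      # ~111.2 stories
avg_story_velocity = 4.5                             # person-days per story
est_schedule = est_stories * avg_story_velocity / (20 * 1)
print(round(est_schedule, 1))  # ~25.0 days, before undiscovered tasks appear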
A.5 Manpower
Cumul_Devel_Man_Days(t) = Cumul_Devel_Man_Days(t - dt) +
(Devel_Man_Days_Rate) * dt
INIT
Cumul_Devel_Man_Days = 0
INFLOWS:
Devel_Man_Days_Rate = Daily_MP_for_DesignDev
Cumul_Man_Days_Expended(t) = Cumul_Man_Days_Expended(t - dt) +
(Expended_Rate) * dt
INIT
Cumul_Man_Days_Expended = 0.0001
INFLOWS:
Expended_Rate = Total_Daily_Manpower
Actual_Fraction_of_MP_for_QA = 0.15
Avg_Daily_MP_per_Staff = 1
Daily_MP_for_DesignDev = Total_Daily_Manpower * ( 1 -
Actual_Fraction_of_MP_for_QA ) * ( Load_Factor /
CORADMO.Streamlining_Multiplier )
Daily_MP_for_QA = Actual_Fraction_of_MP_for_QA * Total_Daily_Manpower *
( Load_Factor / CORADMO.Streamlining_Multiplier )
Load_Factor = 1
Total_Daily_Manpower = Total_Work_Force * Avg_Daily_MP_per_Staff
Total_Work_Force = 20
A.6 Planning
Delivered_Tasks(t) = Delivered_Tasks(t - dt) + (Acceptance_Rate -
Rejection_Rate) * dt
INIT Delivered_Tasks = 0
INFLOWS:
Acceptance_Rate = Task_Production_Rate -
Change_D.Rate_Corrective_Change_Requests
OUTFLOWS:
Rejection_Rate = Change_D.Rate_Corrective_Change_Requests -
Task_Production_Rate
Developed_Tasks(t) = Developed_Tasks(t - dt) + (Task_Production_Rate -
Acceptance_Rate) * dt
INIT
Developed_Tasks = 0
INFLOWS:
Task_Production_Rate = IF Tasks_Remaining > 0 THEN
( Manpower_AH.Daily_MP_for_DesignDev -
Refactoring_D.Daily_MP_for_Refactor -
Change_D.Daily_MP_For_Change ) *
Productivity_D.Software_Development_Productivity *
( 1 / CORADMO.Effort_Multiplier )
ELSE
0
OUTFLOWS:
Acceptance_Rate = Task_Production_Rate -
Change_D.Rate_Corrective_Change_Requests
Discovered_Tasks(t) = Discovered_Tasks(t - dt) + (Rate_of_Discovering_Tasks -
Rate_of_Incorporating_Discovered_Tasks) * dt
INIT
Discovered_Tasks = 0
INFLOWS:
Rate_of_Discovering_Tasks = 3 * ( Estimation.Real_Number_of_Tasks -
Estimation.Estimated_Number_of_Tasks ) *
Percent_of_Undiscovered_Tasks_Discovered_Per_Day /
Scheduled_Completion_Date
OUTFLOWS:
Rate_of_Incorporating_Discovered_Tasks = Rate_of_Incorporating
Scheduled_Completion_Date(t) = Scheduled_Completion_Date(t - dt) +
(Rate_of_Adjusting_Schedule) * dt
INIT
Scheduled_Completion_Date = Estimation.Estimated_Schedule
INFLOWS:
Rate_of_Adjusting_Schedule = IF ( Indicated_Completion_Date >
Scheduled_Completion_Date ) THEN
( Indicated_Completion_Date - Scheduled_Completion_Date ) /
Schedule_Adjust_Time
ELSE
0
Scheduled_Tasks(t) = Scheduled_Tasks(t - dt) +
(Changes_to_the_Estimated_Job_Size - Rate_Adjusting_Scope) * dt
INIT
Scheduled_Tasks = Estimation.Estimated_Number_of_Tasks
INFLOWS:
Changes_to_the_Estimated_Job_Size =
Change_D.Rate_Adaptive_Change_Incorporation + Rate_of_Incorporating
OUTFLOWS:
Rate_Adjusting_Scope = IF ( Adjusted_Scope < Scheduled_Tasks ) THEN
( Scheduled_Tasks - Adjusted_Scope ) / Task_Adjusting_Time
ELSE
0
Undiscovered_Tasks(t) = Undiscovered_Tasks(t - dt) + (-
Rate_of_Discovering_Tasks) * dt
INIT
Undiscovered_Tasks = Estimation.Real_Number_of_Tasks -
Estimation.Estimated_Number_of_Tasks
OUTFLOWS:
Rate_of_Discovering_Tasks = 3 * ( Estimation.Real_Number_of_Tasks -
Estimation.Estimated_Number_of_Tasks ) *
Percent_of_Undiscovered_Tasks_Discovered_Per_Day /
Scheduled_Completion_Date
Adjusted_Scope = Indicated_Tasks * Willingness_to_Adjust_Scope + ( 1 -
Willingness_to_Adjust_Scope ) * Scheduled_Tasks
Assim_Delay = 30
Baseline_Tasks_Complete = Delivered_Tasks / Estimation.Real_Number_of_Tasks
Delay_of_Incorporating_Tasks = Estimation.Iteration_Duration *
CORADMO.Streamlining_Multiplier
Delivered_Stories = Delivered_Tasks /
Estimation.Avg_Number_of_Tasks_Per_Story
Hiring_Delay = 90
Indicated_Completion_Date = TIME + MAX( 0, MIN( Time_Perceived_Remaining,
Max_Tolerable_Completion_Date - TIME ) )
Indicated_Tasks = Productivity_D.Cumul_Productivity * WF_Needed *
Time_Perceived_Remaining + Delivered_Tasks
Indicated_WF_Level = IF ( Time_Remaining > 0 ) THEN
SMTH1( Man_Days_Remaining / Time_Remaining, 10 )
ELSE
0
Man_Days_Remaining = Tasks_Remaining / Productivity_D.Cumul_Productivity
Max_Schedule_Overrun_Percent = 0.25
Max_Tolerable_Completion_Date = Estimation.Estimated_Schedule * ( 1 +
Max_Schedule_Overrun_Percent )
Max_Tolerable_WF = 10
Percentage_Time_Remaining = IF ( TIME < Scheduled_Completion_Date ) THEN
( Scheduled_Completion_Date - TIME ) /
Scheduled_Completion_Date
ELSE
0
Percent_Job_Delivered = Delivered_Tasks / Scheduled_Tasks
Percent_of_Undiscovered_Tasks_Discovered_Per_Day = GRAPH(1 -
Percentage_Time_Remaining) (0.00, 0.2), (0.333, 0.2), (0.667, 0.6), (1.00,
0.6)
Project_Elapsed_Time = TIME
Rate_of_Incorporating = DELAY( Rate_of_Discovering_Tasks,
Delay_of_Incorporating_Tasks )
Scheduled_Stories = Scheduled_Tasks /
Estimation.Avg_Number_of_Tasks_Per_Story
Schedule_Adjust_Time = CORADMO.Streamlining_Multiplier *
Estimation.Iteration_Duration
Schedule_Overrun = Scheduled_Completion_Date / Estimation.Estimated_Schedule
Schedule_Pressure = IF ( Time_Remaining > 0 ) THEN
SMTH1( MAX( 0, MIN( ( Time_Perceived_Needed -
Time_Remaining ) / Time_Remaining, 1 ) ), 10 )
ELSE
1
Sufficiently_Complete = IF Percent_Job_Delivered >= ( 1 -
CORADMO.Deferrals_Difference ) THEN
1.1
ELSE
0
Tasks_Remaining = MAX( 0, Scheduled_Tasks - Delivered_Tasks )
Task_Adjusting_Time = CORADMO.Streamlining_Multiplier *
CORADMO.Risk_Multiplier * Estimation.Iteration_Duration
Time_Perceived_Needed = IF ( WF_Needed > 0 ) THEN
Man_Days_Remaining / WF_Needed
ELSE
0
Time_Perceived_Remaining = MAX( 0, MIN( Time_Perceived_Needed,
Max_Tolerable_Completion_Date - TIME ) )
Time_Remaining = MAX( 0, Scheduled_Completion_Date - TIME )
Total_Cost = Scheduled_Tasks / Productivity_D.Cumul_Productivity
WCWF = MAX( WCWF1, Willingness_to_Change_WF )
WCWF1 = GRAPH(Time_Remaining / ( Hiring_Delay + Assim_Delay )) (0.00, 0.00),
(0.2, 0.00), (0.4, 0.1), (0.6, 0.4), (0.8, 0.85), (1.00, 1.00), (1.20,
1.00), (1.40, 1.00), (1.60, 1.00), (1.80, 1.00), (2.00, 1.00)
WF_Needed = MIN( MIN( ( WCWF * Indicated_WF_Level + ( 1 - WCWF ) *
Manpower_AH.Total_Daily_Manpower ), Indicated_WF_Level ), Max_Tolerable_WF
)
Willingness_to_Adjust_Scope = MAX( Willingness_to_Adjust_Scope_1,
Willingness_to_Adjust_Scope_2 ) / CORADMO.Risk_Multiplier
Willingness_to_Adjust_Scope_1 = GRAPH(1 - Percentage_Time_Remaining) (0.00,
0.0482), (0.1, 0.0289), (0.2, 0.00643), (0.3, 0.00965), (0.4, 0.0225),
(0.5, 0.0804), (0.6, 0.183), (0.7, 0.305), (0.8, 0.434), (0.9, 0.605),
(1.00, 0.801)
Willingness_to_Adjust_Scope_2 = 0.25 * Customer_D.Customer_Trust + 0.05
Willingness_to_Change_WF = GRAPH(Scheduled_Completion_Date /
Max_Tolerable_Completion_Date) (0.00, 0.00), (0.2, 0.1), (0.4, 0.2), (0.6,
0.35), (0.8, 0.6), (1.00, 0.7), (1.20, 0.77), (1.40, 0.8), (1.60, 0.8),
(1.80, 0.8), (2.00, 0.8)
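Several quantities above (Indicated_WF_Level, Schedule_Pressure) pass through SMTH1( input, averaging time ), the tool's first-order exponential smooth. A minimal sketch of that operator, assuming a fixed step dt (illustration only):

class Smooth1:
    # Sketch of SMTH1(input, avg_time): ds/dt = (input - s) / avg_time,
    # integrated with a fixed step dt.
    def __init__(self, avg_time, dt=1.0, initial=0.0):
        self.s, self.avg_time, self.dt = initial, avg_time, dt
    def step(self, value):
        self.s += (value - self.s) * self.dt / self.avg_time
        return self.s

# A unit step in raw schedule pressure is only ~65% reflected after one
# averaging time (10 days here), which damps day-to-day staffing swings.
sm = Smooth1(avg_time=10)
trace = [sm.step(1.0) for _ in range(10)]  # trace[-1] ~= 0.65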
A.7 Productivity
Person_Days_Last_Iteration(t) = Person_Days_Last_Iteration(t - dt) +
(ManDays_Rate - Person_Days_Out_Rate) * dt
INIT
Person_Days_Last_Iteration = 0
COOK TIME = Estimation.Iteration_Duration
CAPACITY = INF
FILL TIME = DT
INFLOWS:
ManDays_Rate = ( Manpower_AH.Daily_MP_for_DesignDev -
Refactoring_D.Daily_MP_for_Refactor ) * Estimation.Iteration_Duration
OUTFLOWS:
Person_Days_Out_Rate = OVEN OUTFLOW
Productivity_in__Last_Iteration(t) = Productivity_in__Last_Iteration(t - dt)
+ (Productivity_In_Rate - Productivity_Out_Rate) * dt
INIT
Productivity_in__Last_Iteration =
Estimation.Initial_Estimate_of_Productivity
COOK TIME = Estimation.Iteration_Duration
CAPACITY = INF
FILL TIME = DT
INFLOWS:
Productivity_In_Rate = Productivity_End_of_Iteration / DT
OUTFLOWS:
Productivity_Out_Rate = OVEN OUTFLOW
Tasks_Delivered_in_Last_Iteration(t) = Tasks_Delivered_in_Last_Iteration(t -
dt) + (Task_Dev_Rate - Task_Out_Rate) * dt
INIT
Tasks_Delivered_in_Last_Iteration = 0
COOK TIME = Estimation.Iteration_Duration
CAPACITY = INF
FILL TIME = DT
INFLOWS:
Task_Dev_Rate = Planning_D.Acceptance_Rate * Estimation.Iteration_Duration
OUTFLOWS:
Task_Out_Rate = OVEN OUTFLOW
Tasks_Developed_in_Last_Iteration(t) = Tasks_Developed_in_Last_Iteration(t -
dt) + (Task_Accept_Rate_1 - Task_Out_Rate_1) * dt
INIT
Tasks_Developed_in_Last_Iteration = 0
COOK TIME = Estimation.Iteration_Duration
CAPACITY = INF
FILL TIME = DT
INFLOWS:
Task_Accept_Rate_1 = Planning_D.Task_Production_Rate *
Estimation.Iteration_Duration
OUTFLOWS:
Task_Out_Rate_1 = OVEN OUTFLOW
Avg_Nominal_Potential_Prod = Ratio_of_Pros_to_Rookies *
Nominal_Potential_Prod:_Pros + ( 1 - Ratio_of_Pros_to_Rookies ) *
Nominal_Potential_Prod:_Rookies
Communication_Overhead = GRAPH(Manpower_AH.Total_Daily_Manpower) (0.00,
0.00), (5.00, 0.015), (10.0, 0.06), (15.0, 0.135), (20.0, 0.24), (25.0,
0.375), (30.0, 0.54)
Cumul_Productivity = IF ( Planning_D.Delivered_Tasks > 0 ) AND
( Manpower_AH.Cumul_Man_Days_Expended > 0 )
THEN
Planning_D.Delivered_Tasks /
Manpower_AH.Cumul_Man_Days_Expended
ELSE
Estimation.Initial_Estimate_of_Productivity
Multiplier_Due_to_Learning = GRAPH(1 - Planning_D.Percentage_Time_Remaining)
(0.00, 1.00), (0.1, 1.01), (0.2, 1.03), (0.3, 1.06), (0.4, 1.09), (0.5,
1.15), (0.6, 1.20), (0.7, 1.22), (0.8, 1.25), (0.9, 1.25), (1.00, 1.25)
Multiplier_for_Losses = Manpower_AH.Load_Factor * ( 1 -
Communication_Overhead )
Nominal_Potential_Prod:_Pros = 0.26
Nominal_Potential_Prod:_Rookies = 0.18
Potential_Productivity = Avg_Nominal_Potential_Prod *
Multiplier_Due_to_Learning
Productivity_End_of_Iteration = IF ( Person_Days_Out_Rate > 0 ) THEN
Task_Out_Rate / Person_Days_Out_Rate
ELSE
0
Productivity_Estimation = IF ( OSTATE( Productivity_in__Last_Iteration ) =
1 ) AND ( Productivity_in__Last_Iteration > 0.001 ) THEN
Productivity_in__Last_Iteration
ELSE
PREVIOUS( SELF, Estimation.Initial_Estimate_of_Productivity )
Ratio_of_Pros_to_Rookies = 0.6
Software_Development_Productivity = Potential_Productivity *
Multiplier_for_Losses * ( 1 - Refactoring_D.Impact_by_Design )
Story_Productivity = Productivity_Estimation /
Estimation.Avg_Number_of_Tasks_Per_Story
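With the nominal constants above (60% pros, a 20-person staff from the Manpower module, and an initial design quality of 1 from the Refactoring module), start-of-project productivity evaluates roughly as below; a numeric sketch of the equations only:

# Numeric check of nominal software development productivity at project start.
ratio_pros = 0.6
avg_nominal = ratio_pros * 0.26 + (1 - ratio_pros) * 0.18  # 0.228 tasks/person-day
learning = 1.00             # Multiplier_Due_to_Learning at time zero
comm_overhead = 0.24        # Communication_Overhead GRAPH at 20 staff
losses = 1 * (1 - comm_overhead)        # Load_Factor = 1, so 0.76
impact_by_design = 0.3 * (1 - 1.0)      # Quality_of_Design starts at 1
sdp = avg_nominal * learning * losses * (1 - impact_by_design)
print(round(sdp, 3))        # ~0.173 tasks per person-day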
A.8 Quality Assurance (QA)
Cumulative_Detected_Errors(t) = Cumulative_Detected_Errors(t - dt) +
(Count_Detected_Errors) * dt
INIT
Cumulative_Detected_Errors = 0
INFLOWS:
Count_Detected_Errors = Error_Detection_Rate
Cumulative_Errors(t) = Cumulative_Errors(t - dt) + (Generation_Rate) * dt
INIT
Cumulative_Errors = 0
INFLOWS:
Generation_Rate = Error_Generation_Rate
Detected_Errors(t) = Detected_Errors(t - dt) + (Error_Detection_Rate -
Errors_per_Iteration_In) * dt
INIT
Detected_Errors = 0
INFLOWS:
Error_Detection_Rate = MIN( Potential_Error_Detection_Rate,
Potentially_Detectable_Errors / DT ) / CORADMO_Multiplier
OUTFLOWS:
Errors_per_Iteration_In = 10000000
Escaped_Errors(t) = Escaped_Errors(t - dt) + (Error_Escape_Rate) * dt
INIT
Escaped_Errors = 0
INFLOWS:
Error_Escape_Rate = QA_Rate * Average_#_Errors_per_Task
Potentially_Detectable_Errors(t) = Potentially_Detectable_Errors(t - dt) +
(Error_Generation_Rate - Error_Detection_Rate - Error_Escape_Rate) * dt
INIT
Potentially_Detectable_Errors = 0
INFLOWS:
Error_Generation_Rate = Planning_D.Task_Production_Rate *
Nominal_Errors_Committed_per_Task * Multiplier_Due_to_Schedule_Pressure
* Multiplier_Due_to_Workforce_Mix * CORADMO_Multiplier
OUTFLOWS:
Error_Detection_Rate = MIN( Potential_Error_Detection_Rate,
Potentially_Detectable_Errors / DT ) / CORADMO_Multiplier
Error_Escape_Rate = QA_Rate * Average_#_Errors_per_Task
Tasks_Worked(t) = Tasks_Worked(t - dt) + (SD_Rate - QA__RATE) * dt
INIT
Tasks_Worked = 0
INFLOWS:
SD_Rate = Planning_D.Task_Production_Rate
OUTFLOWS:
QA__RATE = QA_Rate
Errors_Last_Iteration(t) = Errors_Last_Iteration(t - dt) +
(Errors_per_Iteration_In - Errors_per_Iteration_Out) * dt
INIT
Errors_Last_Iteration = 0
COOK TIME = Estimation.Iteration_Duration
CAPACITY = INF
FILL TIME = DT
INFLOWS:
Errors_per_Iteration_In = 10000000
OUTFLOWS:
Errors_per_Iteration_Out = OVEN OUTFLOW
Actual_Rework_Manpower_Needed_per_Error =
Nominal_Rework_Manpower_Needed_per_Error / Productivity_D.Multiplier_for_Losses
Average_#_Errors_per_Task = IF Tasks_Worked <> 0 THEN
Potentially_Detectable_Errors / Tasks_Worked
ELSE
0
Average_QA_Delay = 10
Captured_Errors_per_Task = 0
CORADMO_Multiplier = CORADMO.Tool_Multiplier * CORADMO.KSA__Multiplier *
CORADMO.MMPT__Multiplier
DSI_per_Task = 60
Errors_per_Task_Last_Iteration = IF
Productivity_D.Tasks_Delivered_in_Last_Iteration > 0 THEN
Errors_Last_Iteration /
Productivity_D.Tasks_Delivered_in_Last_Iteration
ELSE
0
Error_Density = Average_#_Errors_per_Task * ( 1000 / DSI_per_Task )
Multiplier_Due_to_Error_Density = GRAPH(Error_Density) (0.00, 50.0), (1.00,
36.0), (2.00, 26.0), (3.00, 17.5), (4.00, 10.0), (5.00, 4.00), (6.00,
1.75), (7.00, 1.20), (8.00, 1.00), (9.00, 1.00), (10.0, 1.00)
Multiplier_Due_to_Schedule_Pressure = GRAPH(Planning_D.Schedule_Pressure)
(-0.4, 0.9), (-0.2, 0.94), (0.00, 1.00), (0.2, 1.05), (0.4, 1.14),
(0.6, 1.24), (0.8, 1.36), (1.00, 1.50)
Multiplier_Due_to_Workforce_Mix =
GRAPH(Productivity_D.Ratio_of_Pros_to_Rookies) (0.00, 2.00), (0.2, 1.80),
(0.4, 1.60), (0.6, 1.40), (0.8, 1.20), (1, 1.00)
Nominal_Errors_Committed_per_DSI = GRAPH(Planning_D.Percent_Job_Delivered)
(0.00, 25.0), (0.2, 23.9), (0.4, 21.6), (0.6, 15.9), (0.8, 13.6), (1.00,
12.5)
Nominal_Errors_Committed_per_Task = Nominal_Errors_Committed_per_DSI *
DSI_per_Task / 1000
Nominal_QA_Manpower_Needed_per_Error =
GRAPH(Planning_D.Percent_Job_Delivered) (0.00, 0.4), (0.1, 0.4), (0.2,
0.39), (0.3, 0.375), (0.4, 0.35), (0.5, 0.3), (0.6, 0.25), (0.7, 0.225),
(0.8, 0.21), (0.9, 0.2), (1.00, 0.2)
Nominal_Rework_Manpower_Needed_per_Error =
GRAPH(Planning_D.Percent_Job_Delivered) (0.00, 0.6), (0.2, 0.575), (0.4,
0.5), (0.6, 0.4), (0.8, 0.325), (1.00, 0.3)
Percent_Errors_Detected = IF Cumulative_Errors <> 0 THEN
Cumulative_Detected_Errors / Cumulative_Errors
ELSE
0
Potential_Error_Detection_Rate = Manpower_AH.Daily_MP_for_QA /
QA_Manpower_Needed_to_Detect_an_Error
QA_Manpower_Needed_to_Detect_an_Error = Nominal_QA_Manpower_Needed_per_Error
* ( 1 / Productivity_D.Multiplier_for_Losses ) *
Multiplier_Due_to_Error_Density
QA_Rate = DELAY( Planning_D.Task_Production_Rate, Average_QA_Delay, 0 )
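As a numeric illustration of the error-generation chain above: at project start (Percent_Job_Delivered = 0, zero schedule pressure, nominal CORADMO indices, and the 60/40 pro/rookie mix from the Productivity module), each developed task injects roughly the following number of potentially detectable errors. An arithmetic sketch only:

# Errors injected per developed task at project start, from the GRAPHs above.
errors_per_dsi = 25.0 / 1000     # 25 errors per 1000 DSI at 0% job delivered
dsi_per_task = 60
nominal_errors_per_task = errors_per_dsi * dsi_per_task   # 1.5
pressure_mult = 1.00             # Schedule_Pressure = 0
mix_mult = 1.40                  # Ratio_of_Pros_to_Rookies = 0.6
coradmo_mult = 1.00              # all indices at nominal 3
errors = nominal_errors_per_task * pressure_mult * mix_mult * coradmo_mult
print(round(errors, 2))          # 2.1 errors per developed task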
A.9 Refactoring
Cumul_Gap_of_Refactoring(t) = Cumul_Gap_of_Refactoring(t - dt) + (Gap_Rate -
Gap_Adjust_Rate) * dt
INIT
Cumul_Gap_of_Refactoring = 0
INFLOWS:
Gap_Rate = 1.5 * Manpower_AH.Daily_MP_for_DesignDev * Gap_of_Refactoring
OUTFLOWS:
Gap_Adjust_Rate = IF ( Captured_Quality < Quality_Objective ) AND
( Estimation.Day_in_Iteration <> 0 )
THEN IF ( ( 1 / CORADMO_Multiplier ) * Captured_Gap / ( Deadline_Effect *
Manpower_AH.Daily_MP_for_DesignDev ) <
Estimation.Iteration_Duration )
THEN
( 1 / CORADMO_Multiplier ) * Captured_Gap /
( Estimation.Iteration_Duration - 1 )
ELSE
( 1 / CORADMO_Multiplier ) * Deadline_Effect *
Manpower_AH.Daily_MP_for_DesignDev
ELSE
0
Quality_of_Evolutionary_Design(t) = Quality_of_Evolutionary_Design(t - dt) +
(Rate_Improving_Quality - Rate_Reducing_Quality) * dt
INIT
Quality_of_Evolutionary_Design = 1
INFLOWS:
Rate_Improving_Quality = IF ( Quality_of_Evolutionary_Design < 1 ) AND
( Cumul_Gap_of_Refactoring > 0 )
THEN
Quality_of_Evolutionary_Design * ( Gap_Adjust_Rate /
Cumul_Gap_of_Refactoring )
ELSE
0
OUTFLOWS:
Rate_Reducing_Quality = Insufficiency_of_Refactoring / 30
Actual_Fractional_MP_for_Refactoring = IF Gap_Adjust_Rate > 0 THEN
Needed_Fractional_MP_for_Refactoring
ELSE
Needed_Fractional_MP_for_Refactoring * ( 1 +
Adj_Schedule_Pressure ) * ( 1 - ( 1 - Degree_of_PP ) * 0.5 ) *
( 1 - Lack_of_Refactoring ) * Deadline_Effect
Adj_Schedule_Pressure = GRAPH(Planning_D.Schedule_Pressure) (0.00, 0.00),
(0.2, -0.025), (0.4, -0.0997), (0.6, -0.35), (0.8, -0.475), (1.00, -0.5),
(1.20, -0.5), (1.40, -0.5), (1.60, -0.5), (1.80, -0.5), (2.00, -0.5)
Captured_Gap = IF ( Estimation.Day_in_Iteration = 0 ) THEN
Cumul_Gap_of_Refactoring
ELSE
PREVIOUS( SELF, 0 )
Captured_Quality = IF ( Estimation.Day_in_Iteration = 0 ) THEN
Quality_of_Design
ELSE
PREVIOUS( SELF, Quality_of_Design )
Complexity_of_Project = Size_Impact * Degree_of_Complexity
CORADMO_Multiplier = CORADMO.Tool_Multiplier * CORADMO.KSA__Multiplier *
CORADMO.MMPT__Multiplier
Daily_MP_for_Refactor = MIN( Manpower_AH.Daily_MP_for_DesignDev,
Gap_Adjust_Rate )
Deadline_Effect = GRAPH(Planning_D.Percentage_Time_Remaining) (0.00, 0.00),
(0.1, 0.75), (0.2, 0.95), (0.3, 0.98), (0.4, 1.00), (0.5, 1.00), (0.6,
1.00), (0.7, 1.00), (0.8, 1.00), (0.9, 1.00), (1.00, 1.00)
Degree_of_Complexity = 0.5
Degree_of_PP = Planned_Degree__of_PP * Degree_of_PP_Fractural
Degree_of_PP_Fractural = GRAPH(Planning_D.Schedule_Pressure) (0.00, 1.00),
(0.2, 1.00), (0.4, 0.8), (0.6, 0.3), (0.8, 0.1), (1.00, 0.00)
Degree_of_Unit_Testing = Planned_Unit_Tests * ( 1 - ( 1 - Degree_of_PP ) *
0.2 ) * ( 1 + Adj_Schedule_Pressure )
Gap_of_Refactoring = MAX( Needed_Fractional_MP_for_Refactoring -
Actual_Fractional_MP_for_Refactoring, 0 )
Impact_by_Design = 0.3 * ( 1 - Quality_of_Design )
Impact_by_Stage = -0.6 * Planning_D.Percentage_Time_Remaining + .80
Impact_on_New_and_Change_Tasks = MIN( Impact_by_Design + Impact_by_Stage, 1 )
Insufficiency_of_Refactoring = DELAY( Gap_of_Refactoring, 30 )
Lack_of_Refactoring = 0.1
Modularity = Quality_Initial_Design * Ratio_of_Pros * Complexity_of_Project
Needed_Fractional_MP_for_Refactoring = Needed_Fraction_MP_for_Refactoring * (
Quality_Objective ) * ( 1 - Modularity ) * Degree_of_Unit_Testing *
CORADMO_Multiplier
Needed_Fraction_MP_for_Refactoring =
GRAPH(Planning_D.Percentage_Time_Remaining) (0.00, 0.3), (0.2, 0.3), (0.4,
0.3), (0.6, 0.3), (0.8, 0.3), (1.00, 0.3), (1.20, 0.3), (1.40, 0.3),
(1.60, 0.3), (1.80, 0.3), (2.00, 0.3)
Planned_Degree__of_PP = 0.7
Planned_Unit_Tests = 0.9
Quality_Initial_Design = GRAPH(Estimation.Initial_Design_Percent * ( 1 -
Estimation.Requirements_Volatility )) (0.00, 0.00), (0.06, 0.15), (0.12,
0.55), (0.18, 0.65), (0.24, 0.75), (0.3, 0.9)
Quality_Objective = 0.9
Quality_of_Design = IF ( TIME > 0 ) THEN
MIN( Quality_of_Evolutionary_Design, 1 )
ELSE
Quality_Initial_Design
Ratio_of_Pros = 0.6
Size_Impact = GRAPH(Estimation.Estimated_Number_of_Tasks) (0.00, 0.00),
(50.0, 0.01), (100, 0.0289), (150, 0.0418), (200, 0.0675), (250, 0.0997),
(300, 0.151), (350, 0.215), (400, 0.305), (450, 0.559), (500, 1.00)
Appendix B
Graphs of results for each sample project
The following sections graph the simulation results for each of the 12 projects in the sample data
set. In each side-by-side graph, the left side shows project performance under the nominal
(square-root) schedule, and the right side shows performance under the schedule actually
realized. Five graph pairs are presented for each project, as follows, with the specified
data lines:
• Scheduled vs. Delivered Tasks
◦ 1: Delivered Tasks
◦ 2: Scheduled Tasks
• Schedule and Scope
◦ 1: Schedule Pressure
◦ 2: Time Perceived Remaining
◦ 3: Time Remaining
◦ 4: Adjusted Scope
• Refactoring and Quality
◦ 1: Quality of Design
◦ 2: Daily MP (manpower) for Refactoring
◦ 3: Cumulative Gap of Refactoring
• Productivity
◦ 1: Productivity Estimation
◦ 2: Software Development Productivity
◦ 3: Cumulative Productivity
• Manpower Allocation
◦ 1: Daily MP (manpower) for Refactoring
◦ 2: Daily MP (manpower) for Change
◦ 3: Daily MP (manpower) for DesignDev (design & development)
B.1 Project 1: Applied Systems
[Graph pairs: Nominal (left) vs. Actual (right)]
B.2 Project 2: DAS Profume
[Graph pairs: Nominal (left) vs. Actual (right)]
B.3 Project 3: Finatus
[Graph pairs: Nominal (left) vs. Actual (right)]
B.4 Project 4: Argent Trading
[Graph pairs: Nominal (left) vs. Actual (right)]
B.5 Project 5: CBOE 2
[Graph pairs: Nominal (left) vs. Actual (right)]
B.6 Project 6: CTC
[Graph pairs: Nominal (left) vs. Actual (right)]
B.7 Project 7: CBOE 1
[Graph pairs: Nominal (left) vs. Actual (right)]
B.8 Project 8: Chronos
[Graph pairs: Nominal (left) vs. Actual (right)]
B.9 Project 9: eMerge
[Graph pairs: Nominal (left) vs. Actual (right)]
B.10 Project 10: Appointments 123
[Graph pairs: Nominal (left) vs. Actual (right)]
B.11 Project 11: VisiBILLity
[Graph pairs: Nominal (left) vs. Actual (right)]
B.12 Project 12: Motient
[Graph pairs: Nominal (left) vs. Actual (right)]
Abstract
This research assesses the effect of product, project, process, people, and risk factors on schedule for software development projects that employ agile methods. Prior research identified these factors as significant within lean/agile organizations with a history of rapid response to new product development needs. This work integrates these factors into CORADMO, the Constructive Rapid Application Development Model, an offshoot of the COCOMO family of effort and schedule estimation models.
CORADMO is based on a system dynamics model of the agile development process, which simulates the flow of development tasks and change items through the process. The five major factors are elaborated into twelve sub-factors, most having a second-, third-, or higher-order effect on schedule. Each factor and sub-factor is rated on a six-point Likert scale, which determines a set of weighting multipliers derived from COCOMO, COSYSMO, and other models. These multipliers are applied to the system dynamics model elements that affect task production, change rates, defect insertion, refactoring, and other processes, and the resulting schedule effects are assessed.
The results of this modeling show very good ability to predict the schedule outcomes of agile projects. The research evaluates the dynamic model against twelve commercial projects, ranging from a 2% schedule overrun to a 56% underrun, that implement a variety of product types in diverse languages. The twelve factors were rated for each project from information the projects provided, and the simulated schedule results were compared with the actual schedules realized. Although wide-range validation is limited by the availability of test data, the CORADMO model accurately predicts the actual schedule outcomes of these commercial projects.