A MODEL FOR ESTIMATING CROSS-PROJECT MULTITASKING OVERHEAD
IN SOFTWARE DEVELOPMENT PROJECTS
by
Alexey Tregubov
A Dissertation Presented to the
FACULTY OF THE GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the Requirements for the Degree
DOCTOR OF PHILOSOPHY
(COMPUTER SCIENCE)
December 2017
Copyright 2017 Alexey Tregubov
Acknowledgments
I would like to thank all the people and organizations that made this research possible.
This research could not have been conducted without support from the University of Southern
California Center for Systems and Software Engineering and its government and academic affiliates.
I found great enthusiasm and support from my advisor, Dr. Barry W. Boehm. I would
also like to thank my Ph.D. committee members, Dr. Leana Golubchik, Dr. Aiichiro Nakano,
and Dr. Behrokh Khoshnevis, for their support of this work. I am deeply grateful for the insightful
comments, criticisms, and advice provided by my qualifying exam committee members, Dr.
William GJ Halfond and Dr. Azad Madni. Individuals who have guided me through many
interesting research projects include Jo Ann Lane and Rich Turner, without whose help this work
could not have been done.
The completion of the Ph.D. program is also a moment for me to thank my parents.
My mother and father did everything they could, over many years, to support me in pursuing
higher education.
Simulation frameworks and other tools, developed and heavily used in this research, are
based upon work supported, in whole or in part, by the U.S. Department of Defense through the
Systems Engineering Research Center (SERC) under Contract HQ0034-13-D-0004 and Contract
H98230-08-D-0171. SERC is a federally funded University Affiliated Research Center managed
by Stevens Institute of Technology.
Table of Contents

Acknowledgments
Table of Contents
List of Figures
List of Tables
Abstract
Chapter 1 Introduction
    1.1 Motivation
    1.2 Terms and definitions
    1.3 The problems
    1.4 Research questions and hypotheses
    1.5 Intended research contributions
Chapter 2 Background and related work
    2.1 Productivity and multitasking overhead in work environments
    2.2 Work interruptions, reimmersion time
    2.3 Effort and schedule estimation models
    2.4 Simulation approaches overview
Chapter 3 Research methodology
    3.1 Overview of research design
    3.2 Data collection and analysis
Chapter 4 Model framework
    4.1 Reimmersion time evaluation
    4.2 Work log analysis algorithm and MEM evaluation
    4.3 Simulation overview (alternative work log configuration generation)
    4.4 COCOMO II calibration and extension
    4.5 DES framework - simulation tool overview
Chapter 5 Results evaluation and analysis
    5.2 Projects and source data overview
    5.3 Cross-project work interruptions
        5.3.1 The association between the average number of cross-project work interruptions and the average number of projects
        5.3.2 The number of cross-project work interruptions and the number of projects per week
    5.4 Cross-project multitasking effort multiplier
    5.5 Comparison with G. Weinberg's heuristic
    5.6 The impact of cross-project multitasking on quality of work
    5.7 MEM evaluation for projects
        5.7.1 Comparison of self-evaluated reimmersion time in CSCI577 class projects to prior studies
    5.8 COCOMO II calibration
    5.9 Hypotheses summary
    5.10 Threats to validity
Chapter 6 Conclusion and future work
    6.1 Conclusion
    6.2 Future work
Bibliography
Appendix A Multitasking survey
Appendix B DES framework simulation tool
    B.1. Simulation model overview
    B.2. Types of analysis
    B.3. Architecture
    B.4. Development team
Appendix C Source data
    C.1. Cross-project work interruptions in CSCI577 class projects
    C.2. Cross-project work interruptions in industry projects
    C.3. Cross-project work interruptions and grade deduction in CSCI577 projects
    C.5. Cross-project work interruptions and reopened tasks in industry projects
Appendix D Effort multipliers for COCOMO II
    D.1. Effort multipliers in industry projects
List of Figures

Figure 1. Multitasking overhead evaluated by Weinberg's heuristic.
Figure 2. Interruption in the work environment and scope of the research project.
Figure 3. Reimmersion time distribution.
Figure 4. Approaches in Simulation on Abstraction Level Scale (adapted from [46]).
Figure 5. Causal Graphical Model.
Figure 6. Research methodology overview.
Figure 7. Structure of each data point.
Figure 8. Detailed structure of source data.
Figure 9. Reimmersion time when work is interrupted.
Figure 10. Reimmersion time.
Figure 11. Work log analysis algorithm.
Figure 12. Multitasking overhead evaluated by Weinberg's heuristic as MEM.
Figure 13. Simulation: a new work log generation algorithm.
Figure 14. Simulation example.
Figure 15. The average number of interruptions vs. the average number of projects per week per developer in CSCI577 class projects.
Figure 16. The average number of interruptions vs. the average number of projects per week per developer in industry projects.
Figure 17. The average number of interruptions vs. the average number of projects per week per developer in CSCI577 class projects (in log space).
Figure 18. The average number of interruptions vs. the average number of projects per week per developer in industry projects (in log space).
Figure 19. The number of cross-project interruptions vs. the number of projects per week per developer in CSCI577 class projects.
Figure 20. The number of cross-project interruptions vs. the number of projects per week per developer in industry projects.
Figure 21. MEM vs. the average number of projects per week per developer in CSCI577 class projects.
Figure 22. The average number of interruptions vs. the average number of projects per week per developer in industry projects.
Figure 23. MEM vs. the average number of projects per week per developer in CSCI577 class projects in log space.
Figure 24. The average number of interruptions vs. the average number of projects per week per developer in industry projects in log space.
Figure 25. MEM vs. the average number of projects per week per developer in CSCI577 class projects vs. G. Weinberg's heuristic.
Figure 26. MEM vs. the average number of projects per week per developer in industry projects vs. G. Weinberg's heuristic.
Figure 27. Average % of effort spent on cross-project interruptions vs. G. Weinberg's heuristic prediction in class projects.
Figure 28. % of effort spent on cross-project interruptions vs. G. Weinberg's heuristic prediction in class projects.
Figure 29. Grade deduction vs. the average number of cross-project interruptions.
Figure 30. The average number of reopened tasks vs. the average number of cross-project interruptions.
Figure 31. Reimmersion time distribution.
Figure 32. Reimmersion time in different studies.
Figure 33. Models evaluation: COCOMO II, locally calibrated COCOMO II, and locally calibrated COCOMO II with MEM. Prediction errors for industry projects. Dashed line shows "perfect" prediction line.
Figure 34. Summary of RQ1.
Figure 35. Multitasking survey.
Figure 36. DES framework architecture.
List of Tables

Table 1. Terms and definitions
Table 2. Reimmersion time in different studies.
Table 3. Metrics and measurements.
Table 4. Three values of reimmersion time.
Table 5. Multitasking overhead evaluated by Weinberg's heuristic as MEM
Table 6. Analyzed project groups.
Table 7. Correlation between the average number of interruptions vs. the average number of projects per week per developer in CSCI577 class projects.
Table 8. Correlation between the average number of interruptions vs. the average number of projects per week per developer in industry projects
Table 9. Correlation between the average number of interruptions vs. the average number of projects per week per developer in CSCI577 class projects (in log space).
Table 10. Correlation between the average number of interruptions vs. the average number of projects per week per developer in industry projects (in log space).
Table 11. Correlation between MEM and the average number of projects per week per developer in CSCI577 class projects.
Table 12. Correlation between the average number of interruptions and the average number of projects per week per developer in industry projects.
Table 13. Correlation between MEM and the average number of projects per week per developer in CSCI577 class projects in log space.
Table 14. Correlation between the average number of interruptions and the average number of projects per week per developer in industry projects in log space.
Table 15. Model fit of G. Weinberg's heuristic and observations of CSCI577 class projects
Table 16. Model fit of G. Weinberg's heuristic and observations of industry projects
Table 17. Grade deduction vs. the average number of cross-project interruptions.
Table 18. The average number of reopened tasks vs. the average number of cross-project interruptions.
Table 19. MEM in industry projects
Table 20. MEM in CSCI577 class projects
Table 21. Two sample Kolmogorov–Smirnov test
Table 22. Reimmersion time in different studies.
Table 23. The correlation between COCOMO II, locally calibrated COCOMO II, and locally calibrated COCOMO II with MEM vs. perfect prediction (actual effort)
Table 24. Prediction accuracies of COCOMO II, locally calibrated COCOMO II, and locally calibrated COCOMO II with MEM on a simulated data set (industry projects)
Table 25. Data summary for CSCI577 class projects. Sample of students not working in industry.
Table 26. Data summary for industry projects.
Table 27. Cross-project work interruptions and grade deduction in CSCI577 projects.
Table 28. Cross-project work interruptions and reopened tasks in industry projects.
Table 29. Effort multipliers in industry projects.
Abstract
This research assesses the effect of cross-project multitasking on the productivity of software
development teams working in environments where resources are shared across several projects.
Cross-project multitasking is typical for matrix organizational structures and facilitates the
horizontal cross-project flow of skills and information. Depending on how heavily people
multitask, it may introduce an excessive number of cross-project interruptions in the work flow,
affecting the productivity of software developers. Cross-project interruptions are considered harmful
to the productivity of information and knowledge workers because they may require time for
switching between independent work contexts. If cross-project multitasking in the work
environment is not taken into account, cost and schedule estimates can be significantly off.
This research investigates cross-project multitasking and its quantitative effect on
software development effort and quality. It introduces a cross-project multitasking
evaluation model, which improves cost estimates in environments with cross-project
multitasking. The model calculates a multitasking effort multiplier, which can be evaluated from
work log observations, and incorporates it into the base COCOMO II® model to achieve
better accuracy.
Chapter 1
Introduction
1.1 Motivation
To accelerate development schedules and optimize resource utilization, software
development organizations may choose to work on several projects in parallel. Depending on
team and organization structure, this may lead to situations where scarce resources are shared
among several projects at a time [1, 2]. While concurrent work on several projects can accelerate
development schedules via better resource utilization, there is a downside: resource
multitasking introduces additional overhead that affects overall productivity.

Cross-project multitasking and work interruptions may appear in different forms. For
example:

- Developers are often shared between projects in organizations with a matrix
structure for better resource utilization [3, 4].
- Developers are shared between multiple releases within a single project when several
releases of the project are maintained.
- In System of Systems (SoS) environments, if a constituent system is developed for
several customers (e.g., different software distributions/releases for each customer),
developers are shared between different contexts. The context here is the customer-specific
requirements, success-critical stakeholders, and everything that makes each
system installation unique [4, 5, 6, 7].
- Developers are required to attend organizational training related to organizational
policies and practices.
Various Lean- and Agile-based engineering processes acknowledge the problem of
multitasking overhead [8, 9, 10, 11, 12]. These practices are embedded in different forms in
various methodologies such as Scrum, XP, and Kanban. One of the lean principles states that we
have to "map the value stream and eliminate waste" [11]. Eliminating waste of time and effort
should be done in all aspects of the process, including personal work processes. Most of
the research in this area has studied the effects of multitasking on individuals, and very little
attention has been paid to the effects of multitasking on the overall project or the
organization.
According to the studies in [13-18], multitasking is one of the causes of unproductive work,
and various project management frameworks have developed practices to deal with its negative
effects. For example, many agile-based methodologies propose the role of a
process facilitator. The process facilitator not only guides the team to follow the process but also
serves as a buffer between the team and external distractions. Kanban-based processes explicitly
limit work in progress, reducing switching between tasks. The Scrum methodology explicitly
regulates meetings, including their length, time, and frequency, to prevent unnecessary work
interruptions and wasted time.
Not all multitasking and interruptions at work are necessarily inefficient. Working on a
large software development project also means people often need to solve complex problems
requiring substantial cognitive involvement. At some point, one may be struggling too
much with a problem. It often helps to go work on something else and let the subconscious mind
work on the problem for a little while. In this case, switching to another task may reduce
"struggle time". This observation was noted in [19, 20] as a way to increase the productivity of
complex creative work.
There are also other benefits of limited multitasking. For example, the research in [16]
suggests that if one is working on a routine task, "multitasking breaks the boredom", reducing
the inclination to procrastinate and providing a stimulating environment. The same applies to
meetings and other collaboration and coordination activities. For instance, design meetings that
help developers understand the emerging "big picture" and peer reviews for quality and
knowledge sharing are necessary for any project. A lack of such communication can cause
quality to degrade, rework to increase, and the whole workflow to slow down.
Cross-project interruptions, however, always have a negative impact on productivity.
When plans and schedules are developed, they often do not explicitly account for cross-project
multitasking overhead. Even when multitasking overhead is estimated and taken into account,
practitioners usually use a rule of thumb rather than formal methods, which may cause significant
effort underestimation.
1.2 Terms and definitions
This section describes the terms and definitions used in this research.
Table 1. Terms and definitions

Multitasking: An activity of performing multiple tasks during a certain period of time [16].

Cross-project work interruption: An event of switching between activities of several different projects.

Cross-project multitasking overhead: Effort spent on switching between tasks of different projects.

Multitasking effort multiplier (MEM): A coefficient that indicates the relative increase in effort caused by switching between different projects. It is a measure of the productivity of work for individuals or for the whole project. For example, MEM = 1.2 indicates a 20% increase in effort in multitasked settings (working on several projects). If there is no multitasking (or a certain type of multitasking), then MEM = 1.0.

Reimmersion time: A time lag necessary for switching from one work context to another and restoring the 'flow' mode of operation [14].

Work log: Hours that each developer spent working on projects, reported for each task on a daily or weekly basis.
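The MEM definition above can be made concrete with a minimal sketch (Python, with hypothetical numbers): MEM is simply the ratio of effort observed in a multitasked setting to the baseline effort the same work would take without cross-project switching.

```python
def multitasking_effort_multiplier(multitasked_effort_hours, baseline_effort_hours):
    """MEM: relative increase in effort caused by cross-project switching.
    A value of 1.0 means no multitasking overhead."""
    return multitasked_effort_hours / baseline_effort_hours

# Hypothetical observation: work that would take 100 hours without
# cross-project switching took 120 hours in a multitasked setting.
mem = multitasking_effort_multiplier(120.0, 100.0)
print(mem)  # 1.2, i.e. a 20% increase in effort
```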
1.3 The problems

In environments where software developers are shared across multiple projects, effort and
schedule can be underestimated for the following reasons:

- Existing cost and schedule estimation models do not explicitly account for cross-project
multitasking and cross-project work interruptions.
- When work is planned and estimated by experts, multitasking overhead is not
accounted for.

In organizations where several projects are done by one team of software
developers, the lack of formal models and methods to account for cross-project multitasking
overhead may cause effort underestimation.
1.4 Research questions and hypotheses

This research explores the following research questions and tests the following
hypotheses:

Q1. What is the quantitative effect of cross-project multitasking overhead on
development effort and quality?

o H1.a The number of cross-project interruptions is (not) linearly proportional to
the number of projects.
o H1.b G. Weinberg's heuristic is (not) applicable for cross-project
multitasking overhead estimation in software development teams.
o H1.c The number of cross-project interruptions is (not) linearly proportional to
the number of reopened tasks.

Q2. How can the COCOMO II model be improved to account for cross-project
multitasking overhead?

o H2.a The multitasking effort multiplier (MEM) can be automatically evaluated
based on work log observations.
o H2.b Using the multitasking effort multiplier in the locally calibrated COCOMO
II model can improve the prediction accuracy of COCOMO II.
1.5 Intended research contributions

This research is intended to provide the following contributions:

- A quantified effect of cross-project multitasking on overall project effort and on
developers' productivity.
- A model for cross-project multitasking overhead evaluation based on work logs and
schedules.
- A model that can be used for evaluating teams' productivity in multitasked
environments.
- A COCOMO II calibration method that accounts for multitasking overhead using the
multitasking effort multiplier.
- Examined associations between cross-project multitasking overhead, the
number of parallel projects, and the number of cross-project interruptions.
- Examined associations between cross-project work interruptions and
quality of work.
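As a sketch of the calibration idea, the following shows one way a MEM could enter the COCOMO II effort equation, as an additional effort multiplier. The constants A and B are the published COCOMO II.2000 values, but the scale-factor ratings, effort multipliers, and MEM value below are hypothetical, and this is not necessarily the exact formulation developed in this dissertation.

```python
import math

A = 2.94   # COCOMO II.2000 multiplicative constant
B = 0.91   # COCOMO II.2000 exponent base

def cocomo2_effort(ksloc, scale_factors, effort_multipliers, mem=1.0):
    """Person-months: PM = A * Size^E * prod(EM_i) * MEM,
    where E = B + 0.01 * sum(scale factors)."""
    exponent = B + 0.01 * sum(scale_factors)
    return A * ksloc ** exponent * math.prod(effort_multipliers) * mem

# Hypothetical 50 KSLOC project with nominal scale-factor ratings.
nominal_sf = [3.72, 3.04, 4.24, 3.29, 4.68]
base = cocomo2_effort(50, nominal_sf, [1.0])
adjusted = cocomo2_effort(50, nominal_sf, [1.0], mem=1.2)
print(round(adjusted / base, 2))  # 1.2: MEM scales the estimate directly
```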
Chapter 2
Background and related work
2.1 Productivity and multitasking overhead in work environments
Multitasking is a term usually used to describe the activity of performing multiple tasks
during a certain period of time. [16] defines it as the "engagement in individual and discrete
tasks that are performed in succession". It is implied that there is some time spent switching
between tasks.

When people attempt to switch from one task to another, there is a time cost associated
with this "mental switching". Researchers have conducted task-switching experiments to
quantify the cost of switching between several activities [15, 16], also evaluating how different
types of tasks affect the cost of switching.
In [21], task switching is described as a switch that shifts the cognitive processing system
from one configuration to another. Allport and Wylie define the time required to switch between
and among tasks as "reaction time switching costs". Switching from one task to another requires
a certain amount of time to cognitively "switch gears" because different parts of the brain are
required for each activity. This also involves switching attention and focus.
In [22], Naveh-Benjamin, Craik, Perretta, and Tonev studied divided attention.
Their findings were consistent with Delbridge's [15]: they established that distractions
affect task performance and impair the learning and retention of information.
In general, the cost depends on the type of tasks performed simultaneously (complexity of
the task, task switching intervals, etc.); the combinations range from texting while driving to
walking while chewing.
One of the first attempts to evaluate how the number of interactions and attention
switching affects the productivity of employees in an organizational environment was made in
[14]. DeMarco and Lister studied multitasking and attention switching from a different
perspective. To measure the effect of attention switching, they measured the number of
uninterrupted hours and body-present hours, and defined an E-factor:

E-factor = (uninterrupted hours) / (body-present hours)

In some cases, they recorded E-factors ranging from 0.10 to 0.38 in their experiments.
This can serve as an approximation of the effect of attention switching on productivity.
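The E-factor computation can be sketched directly from its definition (the weekly numbers below are hypothetical):

```python
def e_factor(uninterrupted_hours, body_present_hours):
    """DeMarco and Lister's E-factor: the fraction of time physically at
    work that was spent in uninterrupted, flow-state work."""
    if body_present_hours <= 0:
        raise ValueError("body-present hours must be positive")
    return uninterrupted_hours / body_present_hours

# Hypothetical week: 12 uninterrupted hours out of 40 hours at the desk.
print(e_factor(12, 40))  # 0.3, within the 0.10-0.38 range reported above
```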
To measure the lasting effect of interruptions, DeMarco and Lister introduced the concept
of "reimmersion time" [14]. They note: "If the average incoming phone call takes five
minutes and your reimmersion period is fifteen minutes, the total cost of that call in flow time
(work time) lost is twenty minutes. A dozen phone calls use up a half day."
DeMarco and Lister also observe that software developers spend thirty percent of their
time working alone, fifty percent working with one other person, and twenty percent working
with two or more others (group work). Thus, 70% of the time developers interact with someone
else, and these interactions can involve multiple tasks. Weinberg [13] also suggests the following
heuristic when estimating the effects of multitasking overhead:
Figure 1. Multitasking overhead evaluated by Weinberg’s heuristic.
Meyer also observed that "even brief mental blocks created by shifting between tasks can
cost as much as 40 percent of someone's productive time" [23, 24].
Interruptions in the work environment may cause a significant productivity decline. In
order to find ways to improve productivity, TRW conducted a software productivity project in
1981 [25, 26]. Surveys from that research recorded a 39% productivity improvement after
moving to private offices (which means fewer interruptions). Private office space was cited as
one of the top three contributing factors for productivity improvement.
[Bar chart: % of effort spent on multitasking (0-80%) vs. number of projects (1-5), per Weinberg's heuristic.]
The implications of multitasking can be more far-reaching when work is interconnected
(and it usually is), because multitasking may then cause bottlenecks in the overall workflow.
Although there are many sources of interruptions in the work environment, the scope of
this research is process-induced or scheduled interruptions caused by the engineering process and
organizational structure. More specifically, the research focuses on cross-project multitasking.
[Diagram: work interruptions sit among the factors affecting work productivity and work quality – corporate culture, office environment, personality, personal process, management, psychology, and work scheduling/technical processes; cross-project multitasking is the portion in scope.]
Figure 2. Interruption in the work environment and scope of the research project.
An empirical study of work fragmentation in software evolution tasks [27] observed, at
large scale, the work fragmentation of software developers and its effect on productivity.
Several thousand work sessions were observed via Mylyn and Eclipse activity tracking.
Work fragmentation correlated with lower observed productivity at both the macro and micro
levels, and longer interruptions strengthened the effect. Similar observations were reported in
[28-32].
2.2 Work interruptions, reimmersion time
Interrupted tasks are a daily reality for professional software developers. Developers
often have to interrupt their tasks for various reasons: an unexpected request from colleagues, a
blocked task, a scheduled meeting, etc. It has been observed that when work is resumed after an
interruption, developers need more time to perform the interrupted tasks [33].
In order to resume an incomplete task, developers often must recall their previous
working state. Details of the working state include goals, plans, priorities, and relevant artifacts.
Knowledge of the working state involves many temporary details which are often easy
to forget. Developing software is not a linear activity; unlike reading a book or watching a
movie, it is not enough to recap the last few paragraphs of text to resume the 'flow' mode of work.
Therefore, work interruptions that require switching work context (e.g. cross-project work
interruptions) can be detrimental to work productivity [34, 35].
A task resumed after an interruption requires a person to recollect their thoughts. Altmann
and Trafton define this time as the resumption lag [36], which is conceptually similar to the
reimmersion time [14] introduced by DeMarco and Lister. In this research we treat the
reimmersion time and the resumption lag as synonyms.
In the experimental studies of Parnin and DeLine, resumption lag was measured as the
time between a subject being told to resume a task and the first physical response made, such as
a mouse click [28, 37]. Parnin and DeLine used Mylyn and Eclipse activity tracking tools to
measure resumption lag in their observations of software developers.
Studies of software companies have evaluated values of reimmersion time or resumption
lag, most of them with similar ranges of observed values. The following table summarizes the
results.
Table 2. Reimmersion time in different studies.
Average reimmersion time / resumption lag | Study
1 hour | Estimate by DeMarco and Lister [14]
15 minutes – 2 hours | DeMarco and Lister [14]
23 minutes | Gonzalez and Harris [38]
15-25 minutes | Van Solingen et al [39]
20-30 minutes | Parnin and Rugaber [28]
35-45 minutes | Self-evaluated in CSCI577 class projects (via surveys, Appendix A)
Parnin and DeLine also provided a distribution of the reimmersion times in their
observations.
Figure 3. Reimmersion time distribution.
[Histogram: frequency (0-0.5) of reimmersion times of 5, 10, 15, 30, 45, 60, 120, and 360 minutes, from Parnin and Rugaber's study.]
2.3 Effort and schedule estimation models
Software project managers often have to develop cost and schedule estimates for a
required system. Usually, cost/effort estimates are developed using expert judgment and/or
cost/effort estimation models such as COCOMO II [40]. Dr. Boehm [40] identifies the
following types of effort estimation models:
Algorithmic (also known as parametric)
Expert Judgment
Analogy
Parkinson
Price-to-Win
Top-Down
Bottom-Up
Each model type has its own advantages and disadvantages. For example, parametric
models are predictable and do not require experts, analogous projects, or experience. Parametric
models are practically useful, but they are hard to develop because they require a significant
number of data points. In addition to COCOMO II, the guidebook [41] identifies the following
parametric models for cost and schedule estimation:
SEER for Software
Sage
REVIC
SLIM
None of the models above directly accounts for cross-project multitasking. Local
calibration of these models for a specific organization can help, if the number of parallel projects
remains the same over a long period of time.
In this research, the COCOMO II was chosen because it allows using the whole family of
COCOMO models developed for different project types. This research attempts to establish a
functional dependency between relative effort increase and the number of concurrent projects,
which should provide a model framework for multitasking overhead estimation that can
incorporate any model from the COCOMO family. The author uses the COCOMO II as a proof
of concept; there should be no obstacle to calibrating and extending any other model from the
COCOMO family in a way that takes cross-project multitasking overhead into account.
2.4 Simulation approaches overview
Software process simulation modeling is widely used to address a variety of issues.
Kellner [42] developed the first state-based simulation applied to software engineering processes.
Abdel-Hamid and Madnick [43] introduced a wider use of System Dynamics simulation
(continuous simulation) to represent software development processes at the project level. A more
detailed study of System Dynamics was done by Madachy in [44]. Agent-based and event-based
simulation are also used for software and systems engineering process simulation [45].
Madachy [44] identifies several common types of modeling in system and software
engineering:
Parametric models (e.g. COCOMO II)
System dynamics models
Event based simulation
Agent based simulation
Each type of simulation model describes the system under study at a different level of
detail. Figure 4 maps the modeling approaches to different levels of abstraction.
In order to observe detailed changes in behavior and performance, agent-based and
event-based simulation models were chosen. Being able to model organizational structures,
behavior strategies, and work flows is necessary for exploring the impacts of various scheduling
and rescheduling mechanisms, and for studying the application of lean concepts in system and
software engineering processes. This level of model detail does not come free: it makes the
model more complex and requires more input information about organizations and their
processes.
[Diagram: simulation approaches on an abstraction scale – parametric models and system dynamics (mainly continuous models) at the macro/strategic level (high abstraction, fewer details); discrete event simulation (DES) at the meso/tactical level (middle abstraction, medium details); agent based simulation (ABS) at the micro/operational level (low abstraction, more details); ABS and DES are mainly discrete models.]
Figure 4. Approaches in Simulation on Abstraction Level Scale (adapted from [46]).
Chapter 3
Research methodology
3.1 Overview of research design
A mixed methods approach (combining qualitative and quantitative methods) has been
selected for this research, because there is little quantitative information available about the
multitasking overhead of multitasked development teams. Existing research has studied
multitasking in individuals, but few studies evaluate the effects of multitasking on a team or a
whole organization.
There are two parts to this research:
1. evaluation of the multitasking overhead and examination of the relationships between
the Multitasking overhead Effort Multiplier (MEM), the number of cross-project work
interruptions, the number of projects running in parallel, and work quality (see the causal
graphical model);
2. calibration of the COCOMO II model using MEM.
The purpose of the first part is to evaluate the impact of cross-project interruptions on the
productivity of software developers, the overall project effort, and work quality.
We measure the multitasking overhead as the Multitasking Effort Multiplier (MEM),
which we use as a productivity metric. When we evaluate MEM, we view it as a function of
some multitasking measure (e.g. the number of parallel projects, the number of cross-project
interruptions). MEM shows the relative increase in effort (for an individual developer or for the
whole project). The MEM evaluation model is described in the next chapter. We compute MEM
for the available data set and test hypotheses H1a,b,c.
The second part of the research calibrates the COCOMO II model using MEM. If the
model calibrated with MEM works better than a local calibration of the COCOMO II, then the
MEM evaluation process predicts multitasking overhead properly. Otherwise, multitasking
overhead is not a significant predictor of effort estimation errors.
The study is based on observations of software development projects. Primarily, the
research uses work log, schedule, and source code observations collected from two sources:
class projects from the USC CSCI577 software engineering class,
industry projects (details about data collection from industry are provided in the next
section).
Multitasking overhead is not explicitly reported in work logs; it is embedded as a portion
of the effort that developers spend on tasks from different projects. For management it is
important to understand the cost of having one team of developers work on several projects at
the same time. The work log analysis, which evaluates multitasking overhead (the portion of the
effort spent on cross-project switching), is a way to make multitasking overhead explicit.
Knowing the multitasking overhead should allow management to better understand the impacts
of their decisions.
A work log analysis algorithm determines the multitasking overhead for each instance of
a work log. Then, simulation is used to develop alternative scenarios of work execution for the
observed/recorded projects' work logs. The result of each simulation is a work log for an
alternative scenario.
Simulation results are analyzed by the work log analysis algorithm to determine
cross-project multitasking overhead in the same way as directly observed work logs and
schedules.
A local calibration of the COCOMO II model with MEM provides a tool that accounts for
cross-project multitasking. The model is calibrated using:
effort data from work logs,
SLOC from the repository,
MEM evaluated for each project.
The figures below provide an overview of the methodology. Figure 5 shows the causal
graphical model, which relates measurements and metrics. Each relationship is either known or
suggested by one of the hypotheses.
Table 3. Metrics and measurements.
Name | Source | Scope (project/team or individual level) | Type of metric (observation or evaluation)
Number of projects per week per developer | Work log: we count all projects a developer was involved in throughout the week | Individual | Observation
Number of cross-project interruptions per week per developer | Work log: we count interruptions between tasks from different projects throughout the week | Individual or project | Observation
Effort (person-hours, person-months) | Effort was reported in work logs in JIRA | Project | Observation
SLOC | KSLOC added and modified were counted from several snapshots of the SVN code repository using UCC | Project | Observation
MEM | Calculated using the number of work interruptions per week, the weekly effort, and the reimmersion time | Individual or project | Evaluation
Other effort multipliers for the COCOMO II | Evaluated using project descriptions | Project | Evaluation
Reimmersion time | Student surveys, existing research on interruptions | Individual | Evaluation
[Causal graphical model: the number of projects (observation, individual level) drives the amount of work interruptions (H1.a, individual level); interruptions together with reimmersion time (evaluation) determine effort (H1.b, individual and team level) and the multitasking overhead metric MEM (observation and evaluation, individual and team level); interruptions also affect quality, measured by reopened tasks or grade deductions (H1.c, team level). Solid links denote implied causality or functional relationships; dashed links denote hypothetical associations or causality tested by hypotheses H1.a-c.]
Figure 5. Causal Graphical Model.
Assumptions:
Interruptions cause an effort increase due to reimmersion time.
Interruptions increase MEM.
MEM depends on interruptions, reimmersion time, and effort as follows:
MEM = E / (E − I × R), where I is the number of cross-project interruptions, R is the
reimmersion time, and E is the total effort over the observed period.
[Flow diagram: collected work logs and schedules from industry and students' projects, plus an extended data set of alternative work-execution scenarios generated by simulation, feed two analyses. For RQ1 (impact of multitasking on effort and quality), cross-project interruptions are counted for each data point, MEM and a quality metric are computed, and hypotheses H1a,b,c are tested. For RQ2 (COCOMO II calibration), MEM is computed for each data point, the model is calibrated, hypotheses H2a,b are tested, and model prediction accuracies are evaluated.]
Figure 6. Research methodology overview.
Figure 7. Structure of each data point.
[Diagram: each data point is a group of concurrent project schedules with shared resources. Each data point consists of:
- work log (effort reported over time by each resource)
- schedule
- SLOC added and changed
- number of projects in the group (parallel projects)
- number of resources]
To perform the work log analysis we used a DES framework tool developed by the
research team for software and systems engineering process simulation (more information about
the DES framework is provided in Appendix B).
[Diagram: the collected data comprises three groups of concurrent projects with known work logs and schedules – 3 industry projects (2010-2011), 6 industry projects (2011-2012), and 10 CSCI577 student projects (2016-2017) – alongside other parallel projects whose existence is known but whose schedules and work logs are not. Simulation of alternative work-execution scenarios based on the work logs of the actual projects extends this to ~120 project groups of various configurations (1-6 projects in each group).]
Figure 8. Detailed structure of source data.
3.2 Data collection and analysis
The data has been collected from two sources: 10 software engineering class projects and
9 industry projects.
In order to evaluate the impact of cross-project multitasking in class projects, we
observed 10 software projects for three months. All projects were developed as part of a graduate
level software engineering class at the University of Southern California. Clients of these
projects represent various non-profit organizations, entrepreneurs and private companies. Clients
23
joined teams of students of software engineering class to work on their projects. Most of the
projects were scoped to fit into a single semester of the class. Working on real life projects and
reporting project progress is part of the class curriculum. It is aimed to help students to acquire
practical skills while applying the knowledge gained in the class.
In total, we collected work logs and weekly progress reports of 68 students for one
semester of work. All students had degrees in computer science related fields and experience in
software development.
Daily work logs were part of the work process, which allowed us to collect data on
development effort and its distribution across projects and individuals without interfering with
the work flow. Project tasks were assigned to developers via the project tracking system used in
the class. Every student reported their daily and weekly progress (time spent) on each task they
performed that day. In addition to the work logs, students also provided weekly information
about work interruptions and self-evaluated reimmersion time. They were instructed how to
evaluate their reimmersion time and how to use the project tracking system.
Some of the students also took one or two other classes and had no outside jobs;
therefore, they had to distribute their time between the following three types of activities:
1. a project in the software engineering class,
2. group projects/tasks/assignments in classes other than the software engineering class,
3. individual assignments in the software engineering class and all other classes.
These three types of activities have very different contexts of work and independent
deadlines. We viewed switching between these three types of activities as a cross-project
interruption.
In this study, we collected detailed information about projects in the software engineering
class via the project tracking system (Atlassian JIRA [47]). The project tracking system tracked
task statuses. A task's status, reported by developers, shows whether it is completed, in progress,
or in the backlog. Task labels also allowed us to distinguish different types of engineering
activities (requirements engineering, coding, testing, documentation, etc.).
We collected information about the impact on the class project from the other two types
of activities via weekly progress reports, where students provided self-evaluated reimmersion
time and types of activities they were involved with throughout the week.
For the purpose of consistent terminology in this paper, we will refer to the three types of
activities listed above as projects (activities with different contexts of work). We will also count
switching between them as cross-project switching.
In order to analyze the impact of interruptions on effort, we selected a subset of 29
full-time students who worked in similar conditions:
they only took two classes (one of them being the software engineering class) and had
no outside jobs,
at least 25% of their time in the class project was dedicated to development activities
(programming, bug fixing, etc.).
The following data has been collected in CSCI577 class projects:
Work logs in JIRA – hours were reported weekly for each task in each project.
Schedules in MS Project format were updated every 2 weeks as part of bi-weekly
project report.
SLOC added and changed for each project every 2 weeks as part of bi-weekly project
report.
Only the data from the development/construction phase was used for this research effort.
Information about cross-project multitasking was collected directly from work logs and
from weekly surveys (CSCI577 class projects). When students were working on their class
projects, they multitasked among other classes, team projects in those classes, and industry work
(usually distance-learning students). All these parallel activities were counted as parallel
projects. For the purpose of this research, students reported the number of work interruptions on
CSCI577 projects along with the total number of other classes/projects they worked on every
week. This information allowed the multitasking analysis algorithm to evaluate the multitasking
overhead.
In order to evaluate the impact of cross-project multitasking in industry projects, we
collected work logs of 81 software developers from 9 software projects over two years. This data
set was partially used in [35]. As described in [35]: "All software developers worked in one
company. The company had a matrix organizational structure, which allowed resource sharing
across multiple projects. The company's primary domain of work is the development of
large-scale distributed systems for electrical energy consumption monitoring in large cities. The
company was a private software development organization, which worked on software products
for smart grid solutions."
Daily work logs were part of the organization's work process, which allowed us to collect
effort data and its distribution across projects and individuals without interfering with their
workflow. Project tasks were assigned to developers via the project tracking system used in the
company. Every developer reported their daily progress (time spent) on each task they worked
on that day. Depending on schedule and urgent requests, developers could work on tasks from
different projects in one day. The project tracking system also tracked task statuses. A task's
status, reported by developers, shows whether it is completed, in progress, or in the backlog. In
this study, we analyzed only the work logs of software engineers, so the tasks reported in work
logs were mostly development tasks (architecting, designing, coding, bug fixing, testing,
documenting the source code, etc.).
The following data has been collected from industry projects:
Work logs in JIRA – hours reported daily for each task in each project
Schedules updated every 2-3 days
SLOC added and changed for each project every week
Project lengths varied from 9 to 13 months.
Chapter 4
Model framework
4.1 Reimmersion time evaluation
Reimmersion time is the amount of time necessary to switch from one work context to
another and restore the 'flow' mode of operation [14].
Figure 9. Reimmersion time when work is interrupted.
Reimmersion time can be evaluated differently, and for each individual work interruption
it can differ. In this research we use an average reimmersion time for cross-project work
interruptions, which can be evaluated with a constant value (e.g. 5 minutes, 0.5 hour, 1 hour,
etc.). According to the research conducted by Parnin [28], the average reimmersion time for an
individual is always within the range of 5 to 30 minutes. Although it may vary across
individuals, it usually remains close to constant for a given individual if the observation interval
is long enough.
[Diagram: a resource multitasking between tasks T1 and T2. Without multitasking, T1 is not interrupted and completes in its nominal time; with multitasking, each interruption of T1 adds reimmersion time – extra effort to complete the task after the interruption – so the interrupted task takes extra time to complete compared to the uninterrupted task.]
Figure 10. Reimmersion time.
For industry projects we have used constant values within the range of 5 minutes to 60
minutes. The following table shows the three different constant values of reimmersion time that
have been used to analyze data for industry projects.
Table 4. Three values of reimmersion time.
Pessimistic | 1 hour | Estimate by DeMarco and Lister [14].
Nominal | 22 minutes | DeMarco and Lister [14]: "it takes from fifteen minutes to several hours before the flow state is locked in." Gonzalez and Harris [38] evaluate it as 23 minutes on average. Parnin [28] evaluates it as 22 minutes. Van Solingen et al [39] evaluate it as a range of 15 to 25 minutes.
Optimistic | 5 minutes | Personal observations of industry projects confirm that switching between different DBs and recompiling projects took at least 5-6 minutes.
The nominal value was taken from Gonzalez and Harris [38]. The optimistic value is
based on personal observation of industry projects (switching between different source code
repositories and DB servers took at least 5-6 minutes, so cross-project interruptions could not be
faster). These values are just parameters of the work log analysis algorithm and serve as an
example.
For the CSCI577 class project analysis we used students' self-evaluated reimmersion
times (from the weekly surveys, Appendix A).
4.2 Work log analysis algorithm and MEM evaluation
The work log analysis algorithm is presented below. For industry projects, cross-project
interruptions are deduced from work log observations. For CSCI577 class projects, interruptions
are provided directly from weekly surveys as well as from work logs.
[Flow chart of the work log analysis algorithm: from work log observations of multitasked teams, (1) identify cross-project interruptions, (2) determine the reimmersion time for each cross-project work interruption, (3) sum up the cost/effort of the multitasking overhead of all interruptions, yielding the multitasking overhead for a team working on a group of projects.]
Figure 11. Work log analysis algorithm.
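The first steps of the algorithm can be sketched in code. The work-log record format (chronologically ordered (developer, project) pairs) and the function names are illustrative assumptions, not the dissertation's implementation:

```python
from collections import defaultdict

def count_cross_project_interruptions(entries):
    """Step 1: count cross-project switches per developer.
    `entries` is a chronologically ordered list of (developer, project) pairs;
    a cross-project interruption is counted whenever a developer's next logged
    task belongs to a different project than their previous one."""
    last_project = {}
    interruptions = defaultdict(int)
    for developer, project in entries:
        if developer in last_project and last_project[developer] != project:
            interruptions[developer] += 1
        last_project[developer] = project
    return dict(interruptions)

def multitasking_overhead(num_interruptions, reimmersion_hours):
    """Steps 2-3: total overhead effort = interruptions x reimmersion time."""
    return num_interruptions * reimmersion_hours

log = [("dev1", "P1"), ("dev1", "P2"), ("dev1", "P2"), ("dev1", "P1")]
print(count_cross_project_interruptions(log))  # {'dev1': 2}
```

With the nominal 22-minute reimmersion time, those two switches would cost `multitasking_overhead(2, 22 / 60)`, about 0.73 person-hours.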
The algorithm allows us to compute the multitasking overhead effort multiplier (MEM)
as follows:
MEM = E / (E − I × R)
where I is the number of cross-project interruptions over some period of time t, R is the
reimmersion time, and E is the total effort over the period t. As we can see, MEM is a function
of the ratio I/E, parameterized by the reimmersion time R. If E/t is fixed (e.g. a fixed 40
person-hours per week per developer), then MEM is only a function of the average number of
interruptions per period, I/t.
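A minimal sketch of the MEM computation, treating MEM as total effort divided by the effort net of the switching overhead I × R (an interpretation consistent with the Weinberg MEM values in Table 5); the function name is ours:

```python
def mem(num_interruptions, reimmersion_hours, effort_hours):
    """Multitasking Effort Multiplier: total effort divided by the effort
    remaining after subtracting the switching overhead I * R."""
    overhead = num_interruptions * reimmersion_hours
    if overhead >= effort_hours:
        raise ValueError("overhead exceeds total effort; check the inputs")
    return effort_hours / (effort_hours - overhead)

# 10 interruptions per week, 22-minute nominal reimmersion time, 40 h/week.
print(round(mem(10, 22 / 60, 40.0), 3))  # 1.101
```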
Originally, Weinberg's heuristic was formulated as follows: if you work on two projects,
you can only commit up to 40% of your time to each project; if you work on three projects, you
can only commit up to 20% of your time to each project; if you work on four projects, you
commit less than 10% of your time to each project; and if you work on five projects, you commit
less than 5% of your time to each project. This formulation allows us to express the heuristic as a
multitasking effort multiplier (see the next table and figure).
Table 5. Multitasking overhead evaluated by Weinberg's heuristic as MEM.
Number of parallel projects (P) | Total % of effort on multitasking | % of effort spent on each project | MEM
1 | 0 | 100 | 1
2 | 20 | 40 | 1.25
3 | 40 | 20 | 1.65
4 | 60 | 10 | 2.5
5 | 75 | 5 | 4
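Table 5 can be encoded as a simple lookup; the function name and usage are illustrative, not part of the dissertation's tooling:

```python
# Weinberg's heuristic (Table 5) as a lookup from the number of parallel
# projects to MEM; each value is roughly 1 / (productive share of effort).
WEINBERG_MEM = {1: 1.0, 2: 1.25, 3: 1.65, 4: 2.5, 5: 4.0}

def effort_with_multitasking(base_effort_hours, projects):
    """Scale a single-project effort estimate by Weinberg's MEM."""
    return base_effort_hours * WEINBERG_MEM[projects]

print(effort_with_multitasking(100.0, 4))  # 250.0
```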
Figure 12. Multitasking overhead evaluated by Weinberg’s heuristic as MEM.
Trend line shows exponential approximation of MEM value.
4.3 Simulation overview (alternative work log configuration generation)
To get enough data points with various organizational configurations (e.g. with 2, 3, 4, or
5 projects running in parallel), a simulation has been used to generate alternative work execution
scenarios in which certain projects and their influence are removed from the actually recorded
schedule. The following diagram shows the simulation steps.
[Flow chart of the rescheduling algorithm: from work log observations of a multitasked team over a group of projects, (1) identify cross-project context switching, (2) identify tasks that must be removed from the work log, (3) compute the multitasking overhead for the tasks identified above, (4) perform a left shift of the schedule removing the multitasking overhead, producing a new work log.]
Figure 13. Simulation: a new work log generation algorithm.
The following figure shows an example of how a new work log is generated by removing
one project from the group. Along with the removed project, rescheduling removes the estimated
impact of interruptions caused by the removed project (the yellow portion of the effort is
removed).
[Diagram: the collected work log (direct observation) interleaves task T1 of project P1 with task T2 of project P2; in the generated/simulated work log, P2's tasks are removed and T1 is shifted left, with the switching overhead removed.]
Figure 14. Simulation example.
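The rescheduling step can be sketched as follows; the record format is an illustrative assumption, and the sketch simplifies by giving back exactly one reimmersion period for each task that resumed right after an interruption by the removed project:

```python
def remove_project(entries, removed, reimmersion_hours):
    """Generate an alternative work-log scenario with one project removed.
    `entries` is a chronological list of (developer, project, hours) records.
    Tasks of the removed project are dropped, and a task that resumed work
    right after an interruption by the removed project gives back one
    reimmersion period of overhead (the 'left shift')."""
    kept = []
    last = {}
    for dev, proj, hours in entries:
        if proj == removed:
            last[dev] = proj
            continue  # drop the removed project's own tasks
        if last.get(dev) == removed:
            # the switch back from the removed project disappears,
            # so its reimmersion overhead is subtracted
            hours = max(0.0, hours - reimmersion_hours)
        kept.append((dev, proj, hours))
        last[dev] = proj
    return kept

log = [("d1", "P1", 4.0), ("d1", "P2", 2.0), ("d1", "P1", 4.0)]
print(remove_project(log, "P2", 0.5))  # [('d1', 'P1', 4.0), ('d1', 'P1', 3.5)]
```

A fuller implementation would also re-derive the remaining switches (a developer may still change projects across the gap); the DES framework handles such cases in the actual study.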
4.4 COCOMO II calibration and extension
The COCOMO II adaptation for a multitasking environment is done in three steps:
Step 0. Identify the COCOMO II parameters for a specific organization.
Step 1. Locally calibrate the COCOMO II model using effort and SLOC for project
groups of size 1 (only one project, no cross-project multitasking overhead). In this research we
only calibrate the multiplicative constant A.
Step 2. Introduce a new effort multiplier in the model – driven by the number of parallel
projects – and calibrate A:
PM = A × Size^(B + 0.01 × ∑ SF_j) × ∏ EM_i × MEM
Find A using linear regression over the data points, where R is the reimmersion time.
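The local calibration of A can be sketched as a least-squares fit in log space. The data-point format, the nominal exponent value, and the function names are assumptions for illustration, not the dissertation's code:

```python
import math

# Nominal COCOMO II exponent (B + 0.01 * sum of scale factors at nominal
# ratings); treated here as a fixed assumption while only A is calibrated.
NOMINAL_E = 1.0997

def calibrate_A(data, exponent=NOMINAL_E):
    """Locally calibrate the multiplicative constant A.
    `data` holds (actual_pm, ksloc, em_product, mem) tuples. In log space
    log(PM) = log(A) + E*log(Size) + log(prod EM_i) + log(MEM), so the
    least-squares estimate of log(A) is the mean residual."""
    residuals = [
        math.log(pm) - exponent * math.log(size) - math.log(em) - math.log(m)
        for pm, size, em, m in data
    ]
    return math.exp(sum(residuals) / len(residuals))

# Synthetic check: data generated with A = 2.94 is recovered.
data = [(2.94 * s ** NOMINAL_E * 1.2, s, 1.2, 1.0) for s in (10.0, 50.0, 120.0)]
print(round(calibrate_A(data), 2))  # 2.94
```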
As a result, we get a locally calibrated COCOMO II extension for a specific organization
or team, which accounts for multitasking overhead.
4.5 DES framework - simulation tool overview
In order to analyze potentially complex schedules, event sequences, and behavior
characteristics, we employed a discrete-event simulation approach, which gives total control
over the analyzed behavior.
The model describes engineering processes in an SoS as a discrete sequence of
timeframes. All the systems engineering activities in this model are represented as a set of work
items (WIs), or tasks, grouped by aggregation nodes such as system requirements. Together the
WIs and aggregation nodes form a WI network. A WI network also captures precedence
relationships between WIs. The way the WI network evolves is defined by the event scenario,
input schedule, and other input parameters such as the scheduling algorithm and team resources.
The simulation model covers the following aspects:
Organizational model – organizational structure of the SoS
Work Item (WI) model – describes work that is done by agents (e.g. organizations)
Behavioral model describes:
o WI queue management algorithms (work scheduling, resources allocation,
etc.)
o WI network evolution (how it changes over time)
o Technical and managerial processes described via agents' activities.
WIs and organizational structure define agents' structure and relationships. Work flow in
the SoS is a dynamic property that emerges in the simulation. These three aspects of the
simulation were originally introduced in the DATASEM project [48].
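A minimal sketch of the discrete-event idea is shown below. This is not the actual DES framework or DATASEM implementation: work items with hypothetical durations start as soon as all their precedence dependencies have finished, completion events are processed from a priority queue, and team resources are assumed unlimited.

```python
import heapq

def simulate(work_items, dependencies):
    """work_items: id -> duration; dependencies: id -> set of prerequisite ids.
    Returns id -> finish time. The event queue holds (finish_time, id)
    completion events; an item starts when its last prerequisite finishes."""
    finished, queue = {}, []
    pending = {wi: set(deps) for wi, deps in dependencies.items()}
    for wi, deps in pending.items():
        if not deps:  # no prerequisites: starts at time 0
            heapq.heappush(queue, (work_items[wi], wi))
    while queue:
        time, wi = heapq.heappop(queue)
        finished[wi] = time
        for other, deps in pending.items():
            if wi in deps:
                deps.discard(wi)
                if not deps and other not in finished:
                    heapq.heappush(queue, (time + work_items[other], other))
    return finished

durations = {"A": 3, "B": 2, "C": 4}
deps = {"A": set(), "B": {"A"}, "C": {"A"}}
finish = simulate(durations, deps)  # A finishes at 3, B at 5, C at 7
```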
Chapter 5
Results evaluation and analysis
5.2 Projects and source data overview
This section summarizes the analysis of work logs and schedules of industry projects for the years 2010-2012 and CSCI577 students' projects for the academic year starting fall 2016. Table 6 presents an overview of these two groups of projects.
Table 6. Analyzed project groups.

                              Industry projects             CSCI577 projects
                              2010-2011      2011-2012      fall 2016 - spring 2017
Number of software
developers in a team          73             81             10
Project group size (the
number of concurrent          3 major and    6 major and    2-3
projects)                     3 background   3 background
Added and changed SLOC        94K (Java      243K (Java     11K
                              backend)       backend)
Total effort for the whole
project (developers only)     295 person-    756 person-    1216 person-hours
                              months         months
Schedule                      36 weeks       38 weeks       11 weeks
5.3 Cross-project work interruptions
To answer the first research question and test the first hypothesis, we used the linear correlation between subject means (the average number of projects each developer was involved in and the number of cross-project interruptions of each developer) [49, 50]. The following results are based on work log observations in two groups of projects: CSCI577 class projects and industry projects.
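The correlation measure used throughout this chapter can be sketched as a plain least-squares R², equivalent to the squared Pearson correlation of the two samples of subject means. The data below is illustrative, not the study's data.

```python
def r_squared(xs, ys):
    """R^2 of a simple linear fit: the squared Pearson correlation
    between the two samples (e.g. the subject means)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov * cov / (var_x * var_y)

# A perfectly linear relationship gives R^2 = 1.
perfect = r_squared([1, 2, 3], [2, 4, 6])
```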
5.3.1 The association between the average number of cross-project
work interruptions and the average number of projects
The average number of cross-project work interruptions per week was computed over the whole period of observation. CSCI577 class projects were observed over a period of 11 weeks, and industry projects were observed over a period of two years (36 and 38 weeks, respectively). For each week the average number of cross-project task interruptions was computed using the algorithm described in Chapter 3.
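A simplified sketch of the counting idea is shown below (the full algorithm is described in Chapter 3; the log format here is hypothetical): scanning a developer's chronologically ordered task-switch events, each switch to a task from a different project counts as one cross-project interruption.

```python
def count_cross_project_interruptions(log):
    """log: chronologically ordered (timestamp, project_id) task-switch
    events for one developer within one week. A switch to a different
    project counts as one cross-project interruption; switches between
    tasks of the same project do not."""
    interruptions = 0
    previous_project = None
    for _, project in log:
        if previous_project is not None and project != previous_project:
            interruptions += 1
        previous_project = project
    return interruptions

week = [(1, "P1"), (2, "P1"), (3, "P2"), (4, "P1"), (5, "P1"), (6, "P3")]
# Three cross-project switches: P1->P2, P2->P1, P1->P3.
```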
Figure 15 shows the average number of parallel projects and the average number of
interruptions each developer experienced every week in the CSCI577 class. The total number of
data points is 29. Although we collected work logs of 68 students, we chose to analyze a subset
of 29 full-time students who worked in similar conditions:
they only took two classes (one of them was the software engineering class) and had no
outside jobs,
at least 25% of their time in the class project was dedicated to development activities
(programming, bug fixing, etc.).
Figure 15. The average number of interruptions vs. the average number of projects per week per
developer in CSCI577 class projects.
Table 7. Correlation between the average number of interruptions vs. the average number of
projects per week per developer in CSCI577 class projects.
R²   0.61110
S    2.418
p    0.0003
n    29
Some of the students also took one or two other classes and had no outside jobs;
therefore, they had to distribute their time between the following three types of activities:
a project in the software engineering class,
group projects/tasks/assignments in classes other than the software engineering class,
individual assignments in the software engineering class and all other classes.
Figure 16 shows the average number of parallel projects and the average number of interruptions each developer experienced every week in industry projects. There are 154 data points, each representing a developer. We collected work log observations for two years. In the first year, we had 3 projects running in parallel, and in the second year we had 6 projects running in parallel. Due to personnel turnover, we considered these two data sets as independent groups of developers. The following charts show both of them together.
Figure 16. The average number of interruptions vs. the average number of projects per week per
developer in industry projects.
Table 8. Correlation between the average number of interruptions vs. the average number of
projects per week per developer industry projects
R²   0.4563
S    2.198
p    <0.005
n    154
In both samples R² is relatively small, so it is not possible to conclude the presence of a linear correlation. The null hypothesis is also rejected in both cases.
In addition to the tests above, we also studied the correlation in log space by taking the natural logarithm of the response and the predictor. The next two figures summarize the results in log space. Some data points with no multitasking (with the weekly average number of projects = 0) were excluded.
Figure 17. The average number of interruptions vs. the average number of projects per week per
developer in CSCI577 class projects (in log space).
Table 9. Correlation between the average number of interruptions vs. the average number of
projects per week per developer in CSCI577 class projects (in log space).
R²   0.785
S    0.617
p    0.0001
n    21
Figure 18. The average number of interruptions vs. the average number of projects per week per
developer in industry projects (in log space).
Table 10. Correlation between the average number of interruptions vs. the average number of
projects per week per developer in industry projects (in log space).
R²   0.48
S    379
p    <0.005
n    142
We can see that in log space R² is higher, which suggests that the relationship between the predictor and the response is better described by a polynomial (power-law) equation of the form

y = B · x^k

where B is some constant.
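A fit of this form can be sketched as an ordinary least-squares regression of ln y on ln x: ln y = ln B + k·ln x. The data below is illustrative, not the study's data; on exact power-law data the fit recovers B and k.

```python
import math

def fit_power_law(xs, ys):
    """Fit y = B * x^k by linear least squares in log space:
    ln y = ln B + k * ln x. Returns (B, k)."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    k = (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
         / sum((a - mx) ** 2 for a in lx))
    b = math.exp(my - k * mx)
    return b, k

# Exact power-law data y = 2 * x^1.5 recovers B = 2, k = 1.5.
b, k = fit_power_law([1, 2, 3, 4], [2 * x ** 1.5 for x in [1, 2, 3, 4]])
```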
5.3.2 The number of cross-project work interruptions and the
number of projects per week
The following group of charts shows the distribution of work interruptions and the number of projects involved for each week and for each developer. Compared to the previous section, the data is not averaged over the whole period of observations. Each data point here is the number of interruptions and the number of projects for some developer in some week.
Figure 19. The number of cross project interruptions vs. the number of projects per week per
developer in CSCI577 class projects.
Figure 20. The number of cross project interruptions vs. the number of projects per week per
developer in industry projects.
In total, we were able to obtain 5506 data points from industry work logs and 345 data points from students' work logs.
5.4 Cross-project multitasking effort multiplier
In this section, we show how MEM correlates with the number of parallel projects developers were working on. MEM was computed individually for each developer for each week. Averages were computed over the whole period of observation: in the case of student projects it is a period of 11 weeks, and in the case of industry projects it is a period of 36-38 weeks. Results are based on work log observations and students' weekly surveys.
The next figure shows the average number of parallel projects and the MEM of each developer in the CSCI577 class.
Figure 21. MEM vs. the average number of projects per week per developer in CSCI577 class
projects.
Table 11. Correlation between MEM and the average number of projects per week per developer
in CSCI577 class projects.
R²   0.11
S    0.2
p    0.008
n    29
The next figure shows the average number of parallel projects and the MEM each developer experienced in industry projects.
Figure 22. MEM vs. the average number of projects per week per developer in industry projects.
Table 12. Correlation between MEM and the average number of projects per week per developer in industry projects.
R²   0.45
S    0.01
p    <0.005
n    154
In both industry and CSCI577 projects R² is small, which allows us to conclude that there is no linear correlation between the predictor and the response. The null hypothesis is rejected for CSCI577 class projects.
In addition to the tests above, we also studied the correlation in log space by taking the natural logarithm of the response and the predictor. The next two figures summarize the results in log space. Some data points with no multitasking (with the weekly average number of projects = 0) were excluded.
Figure 23. MEM vs. the average number of projects per week per developer in CSCI577 class
projects in log space.
Table 13. Correlation between MEM and the average number of projects per week per developer
in CSCI577 class projects in log space.
R²   0.1
S    0.2
p    0.02
n    21
The next figure shows the average number of parallel projects and the MEM each developer experienced in industry projects.
Figure 24. MEM vs. the average number of projects per week per developer in industry projects in log space.
Table 14. Correlation between MEM and the average number of projects per week per developer in industry projects in log space.
R²   0.3
S    0.02
p    <0.05
n    142
We can see that in both cases there is no significant evidence of a linear correlation between the average number of projects per week and the cross-project MEM. This suggests that the average number of cross-project interruptions and MEM may not depend on the number of projects each developer was involved in throughout the week. In other words, it is a common case that developers working on a small number of projects experience the same number of work interruptions as developers working on a larger number of projects.
5.5 Comparison with G.Weinberg’s heuristic
The next figure shows the average number of parallel projects and the MEM of each developer in the CSCI577 class compared with G. Weinberg's heuristic predictions.
Figure 25. MEM vs. the average number of projects per week per developer in CSCI577 class
projects vs. G.Weinberg’s heuristic.
Table 15. Model fit of G.Weinberg’s heuristic and observations of CSCI577 class projects
                       MEM    Multitasking overhead (effort)
R²                     0.1    0.16
p                      0.1    0.03
CSCI577 sample size    29
The next figure shows the average number of parallel projects and the MEM of each developer in industry projects compared with G. Weinberg's heuristic predictions.
Figure 26. MEM vs. the average number of projects per week per developer in industry projects
vs. G.Weinberg’s heuristic.
Table 16. Model fit of G.Weinberg’s heuristic and observations of industry projects
               MEM     Multitasking overhead (effort)
R²             0.36    0.44
p              <0.05   <0.05
Sample size    154
G. Weinberg's heuristic has a weak model fit in both industry projects (MEM R² < 0.36) and students' projects (MEM R² < 0.01). From the figures below, we can see that G. Weinberg's heuristic overestimates the average MEM. The next group of figures plots all data points (not averages) against G. Weinberg's heuristic.
Figure 27. Average % of effort spent on cross-project interruptions vs. G.Weinberg’s heuristic
prediction in class projects.
Figure 28. % of effort spent on cross-project interruptions vs. G.Weinberg’s heuristic prediction
in class projects.
In this study, we evaluated the impact of cross-project multitasking on software projects by analyzing work logs. The evaluation showed that among all 9 projects at least 14% of effort was spent on context switching between tasks from different projects. Developers who were involved in more projects tend to have more cross-project work interruptions. However, the linear correlation between the number of projects each resource is working on in one week and the number of interruptions is weak.
We compared multitasking overhead results with G. Weinberg's heuristic. Weinberg's heuristic predicts a larger multitasking overhead. Since we only evaluated the lower bound of the cross-project multitasking overhead, the overall multitasking overhead, including multitasking within each project/task and across different projects, should be higher.
5.6 The impact of cross-project multitasking on quality of work
We wanted to see if there was any impact of cross-project multitasking on quality of
work.
CSCI577 projects were evaluated during architecture review meetings and by instructors and clients throughout the semester. The impact on the quality of work was measured in terms of grade points deducted. We had to remove one project from the data set because of incomplete data on grade deduction. Therefore, we only used 9 projects in the analysis.
The next figure shows grade deduction versus the average number of interruptions per week per developer. R² = 0.44 is relatively low and the p-value is slightly more than 0.05. This indicates that we need more data points to infer a positive correlation between the average number of cross-project work interruptions and grade deduction. We also reject the null hypothesis.
Figure 29. Grade deduction vs. the average number of cross-project interruptions.
Table 17. Grade deduction vs. the average number of cross-project interruptions.
R²   0.44
S    2.25
p    0.051
n    9
For industry projects we used a different quality metric: the number of reopened tasks. Reopened tasks in JIRA indicate rework. We could not use this metric for CSCI577 projects because the protocol for using JIRA was different; in class projects it was allowed to create new tasks instead of reopening existing ones for rework.
The next figure shows the average number of reopened tasks versus the average number of interruptions per week per developer.
Figure 30. The average number of reopened tasks vs. the average number of cross-project
interruptions.
Table 18. The average number of reopened tasks vs. the average number of cross-project
interruptions.
R²   0.71
S    1.01
p    0.004
n    9
R² = 0.71 is high and the p-value = 0.004, which allows us to conclude the existence of a linear correlation between the average number of reopened tasks and the average number of cross-project interruptions.
5.7 MEM evaluation for projects
For each project we calculated MEM as follows:

MEM_t = (E_t + I_t × R) / E_t = 1 + (I_t × R) / E_t

where t is the period of observation (e.g. a week, a month, the total project length), I_t is the total number of cross-project interruptions of all tasks of that project during t, E_t is the effort of the project during t, and R is the reimmersion time. If t is a period of one week, I_t is the weekly average number of interruptions and E_t is the weekly effort (for industry projects the nominal weekly effort for a developer is 40 person-hours). There is no direct measure to estimate the reimmersion time R from work logs. There are several ways we can evaluate R:
Conduct direct experiment on participating developers
Ask developers to self-evaluate their reimmersion time
Use prior research results on reimmersion time and resumption lag to evaluate
average values of R
For industry projects the reimmersion time is an estimated average value from prior research [14, 36, 37]. We used the average R = 22 minutes for the evaluations below. It should be considered a lower bound for MEM; the actual MEM could be even higher.
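With this MEM definition, a weekly value can be computed directly from the interruption count. In the sketch below, R = 22 minutes and the 40 person-hour week are the values used in this section, while the 12 interruptions per week are only an illustrative input.

```python
def weekly_mem(interruptions, reimmersion_minutes=22, weekly_effort_hours=40):
    """MEM_t = 1 + (I_t * R) / E_t for a one-week observation period:
    one plus the fraction of weekly effort lost to reimmersion."""
    overhead_hours = interruptions * reimmersion_minutes / 60.0
    return 1.0 + overhead_hours / weekly_effort_hours

mem = weekly_mem(12)  # 12 cross-project interruptions in a 40-hour week
```

With 12 interruptions the overhead is 12 × 22 = 264 minutes, or 4.4 of 40 hours, giving MEM = 1.11 — in line with the industry values in Table 19.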
For CSCI577 class projects we used the reimmersion time reported by students, and then for each team calculated the effort-weighted sum of individual MEMs. CSCI577 class projects have higher MEMs because of that. Additionally, we also computed MEM using R = 22 minutes. The tables below show the results.
Table 19. MEM in industry projects
Project id    MEM (R = 22 minutes)
1 1.173
2 1.115
3 1.112
4 1.167
5 1.161
6 1.123
7 1.159
8 1.269
9 1.157
Table 20. MEM in CSCI577 class projects
Project id    MEM (self-evaluated R)    MEM (R = 22 minutes)
1 1.213966807 1.229553189
2 1.450926771 1.080351114
3 1.489315652 1.170568562
4 2.538734812 1.063207025
5 1.288377166 1.305574336
6 1.253473943 1.192461937
7 2.396829062 1.117615753
8 2.544677798 1.221196482
9 1.384029443 1.087185129
10 2.013966807 1.229553189
5.7.1 Comparison of self-evaluated reimmersion time in CSCI577
class projects to prior studies
We compared our observations of the reimmersion time with the distributions found in the study by Parnin and Rugaber [28].
Figure 31. Reimmersion time distribution.
Table 21. Two sample Kolmogorov–Smirnov test
D
stat
0.223
D
crit
0.229751
CSCI577 sample size 65
Parnin and Rugaber‘s sample size 444
D_stat < D_crit, therefore we cannot reject the null hypothesis at the level of 0.005. This finding suggests that the reimmersion times from both samples belong to the same distribution.
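The two-sample test can be sketched in pure Python. The c(alpha) = 1.73 coefficient for alpha = 0.005 is the standard large-sample approximation; with the section's sample sizes of 65 and 444 it reproduces D_crit ≈ 0.2298.

```python
import math

def ks_2samp_stat(sample1, sample2):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum distance
    between the two empirical CDFs."""
    n1, n2 = len(sample1), len(sample2)
    d = 0.0
    for x in set(sample1) | set(sample2):
        f1 = sum(1 for v in sample1 if v <= x) / n1
        f2 = sum(1 for v in sample2 if v <= x) / n2
        d = max(d, abs(f1 - f2))
    return d

def ks_critical(n1, n2, c_alpha=1.73):
    """Large-sample critical value D_crit = c(alpha) * sqrt((n1+n2)/(n1*n2));
    c(alpha) = 1.73 corresponds to alpha = 0.005."""
    return c_alpha * math.sqrt((n1 + n2) / (n1 * n2))

# With n = 65 and m = 444, D_crit is about 0.2298, so the observed
# D_stat = 0.223 falls below it and the null hypothesis is not rejected.
```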
The next figure compares reimmersion times from different studies.
Figure 32. Reimmersion time in different studies.
Table 22. Reimmersion time in different studies.
Study/research                      Average reimmersion    Average reimmersion
                                    time (max), minutes    time (min), minutes
DeMarco and Lister [14]             120                    15
Gonzalez and Harris [38]            23                     23
Van Solingen et al [39]             25                     15
Parnin and Rugaber [28],
Czerwinski and Horvitz [51]         30                     20
Self-evaluated in CSCI577
class projects (via surveys)        45                     35
For the purpose of this research, the nominal value of reimmersion time is the average of the minimum values among all studies, which is 22 minutes. This allows us not to be biased towards overestimating the multitasking impact on productivity. In other words, we do not want to overestimate the significance of the multitasked environment.
5.8 COCOMO II calibration
All projects had their COCOMO II effort multipliers evaluated (see Appendix D).
CSCI577 class projects were evaluated by students themselves (it was a class assignment).
Industry projects were evaluated by involved experts and participating developers.
We compared the predictions of the COCOMO II, the locally calibrated COCOMO II, and the locally calibrated COCOMO II with MEM for the projects. The next figure shows how actual project effort was predicted by each of the models. The Y-axis is the prediction, the X-axis is the actual value, and the diagonal dashed line is a hypothetical model with perfect prediction accuracy. The other trend lines show how our models (COCOMO II, locally calibrated COCOMO II, and locally calibrated COCOMO II with MEM) deviate from the perfect prediction. In other words, the closer our model is to the diagonal dashed line, the better it is.
Figure 33. Model evaluation: COCOMO II, locally calibrated COCOMO II, and locally calibrated COCOMO II with MEM. Prediction errors for industry projects. The dashed line shows the "perfect" prediction line.
Table 23. The correlation between COCOMO II, locally calibrated COCOMO II and locally calibrated COCOMO II with MEM vs. perfect prediction (actual effort)

                    COCOMO II   Local calibration   Local calibration of
                                of COCOMO II        COCOMO II with MEM
R²                  0.33        0.33                0.44
p                   0.02        0.02                0.03
n                   9           9                   9
A (multiplicative
constant)           2.94        1.57                1.34
We can see that the locally calibrated COCOMO II with MEM gives a higher R², which means it is closer to the dashed line (more accurate) than the other two models. This can also be seen from visual observation of the figure above.
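The local calibration of the multiplicative constant can be sketched as a least-squares fit through the origin: with f_i denoting a project's nominal factor (Size^E times the product of effort multipliers, times MEM where applicable), A = Σ f_i·PM_i / Σ f_i². The effort and factor values below are hypothetical; 1.34 is used only to show that the fit recovers a known constant.

```python
def calibrate_a(actual_efforts, nominal_factors):
    """Least-squares fit of A in PM_i = A * f_i through the origin,
    where f_i is the model's nominal factor for project i."""
    num = sum(f * pm for f, pm in zip(nominal_factors, actual_efforts))
    den = sum(f * f for f in nominal_factors)
    return num / den

# If every project satisfies PM = 1.34 * f exactly, calibration recovers 1.34.
a = calibrate_a([13.4, 26.8, 67.0], [10, 20, 50])
```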
We do not have enough projects to measure the prediction accuracies of the models on the same data set (e.g. we only have 9 industry projects). That is why we used simulation to create additional data points from available schedules. Section 4.3 discussed the simulation details. From a schedule with 6 projects running in parallel, the simulation can create 2^7 = 128 variations of alternative schedules and their MEMs. We randomly selected 64 schedules (data points) and used them as a training set for computing the multiplicative constant in the locally calibrated COCOMO II and the locally calibrated COCOMO II with MEM. The other 64 data points formed a validation data set. We repeated the process 16 times. Results are summarized in the table below.
Table 24. Prediction accuracies of COCOMO II, locally calibrated COCOMO II and locally calibrated COCOMO II with MEM on a simulated data set (from industry projects)

                COCOMO II   Local calibration   Local calibration of
                            of COCOMO II        COCOMO II with MEM
PRED(.20)       32%         56%                 63%
PRED(.25)       37%         58%                 75%
PRED(.30)       38%         61%                 81%
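The generation of alternative schedules by removing project subsets can be sketched as follows. The project identifiers are hypothetical; seven independently removable projects give 2^7 = 128 schedule variants, which are then randomly split into training and validation halves.

```python
import itertools
import random

projects = ["P1", "P2", "P3", "P4", "P5", "P6", "P7"]  # hypothetical ids

# Every subset of removable projects yields one alternative schedule:
# 2^7 = 128 variants in total.
variants = []
for r in range(len(projects) + 1):
    variants.extend(itertools.combinations(projects, r))

# Randomly split the variants into a training and a validation half.
random.seed(7)  # fixed seed only so the sketch is reproducible
random.shuffle(variants)
training, validation = variants[:64], variants[64:]
```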
We can see that a local calibration of the COCOMO II with MEM produces more accurate predictions than a local calibration of the COCOMO II without MEM. This allows us to conclude that MEM predicts multitasking overhead properly. Multitasking overhead is a significant predictor of effort estimation errors, and it can be evaluated using the weekly average number of cross-project work interruptions and the reimmersion time.
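The PRED(l) accuracy measure used in the table can be sketched as follows: it is the fraction of projects whose magnitude of relative error falls within the level l. The effort values below are illustrative.

```python
def pred(actual, predicted, level):
    """PRED(l): fraction of projects whose magnitude of relative error
    |predicted - actual| / actual is at most level l."""
    within = sum(
        1 for a, p in zip(actual, predicted)
        if abs(p - a) / a <= level
    )
    return within / len(actual)

actual = [100, 200, 300, 400]
predicted = [110, 260, 290, 520]
# Relative errors: 0.10, 0.30, 0.033, 0.30 -> PRED(.25) = 2/4
```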
5.9 Hypotheses summary
H1.a. The number of cross-project interruptions is (not) linearly proportional to the number of projects.
Tested negative
For the given groups of industry projects and CSCI577 projects, the number of cross-project interruptions is not linearly proportional to the number of projects. The linear correlation is relatively weak (R² is between 0.46 and 0.61 in all tests).
H1.b. G. Weinberg's heuristic is (not) applicable for cross-project multitasking overhead estimation in software development teams.
Tested negative
For the given groups of industry projects and CSCI577 projects, cross-project multitasking overhead cannot be evaluated with G. Weinberg's heuristic because it overestimates the effort for this type of work interruption.
H1.c. The number of cross-project interruptions is (not) linearly proportional to the number of
reopened tasks.
Tested positive
For a given group of industry projects cross-project interruptions are linearly correlated
with the number of reopened JIRA tickets.
H2.a. The multitasking effort multiplier (MEM) can be automatically evaluated based on work
log observations.
Tested positive
The work log analysis algorithm was implemented as part of the DES simulation framework and was used to compute the results presented above.
H2.b. Using the multitasking effort multiplier (MEM) in the locally calibrated COCOMO II
model can improve prediction accuracies of the COCOMO II.
Tested positive
The local calibration of the COCOMO II with MEM increases prediction accuracy of the
model.
[Causal graphical model summarizing the results: the number of projects, the amount of work, and the reimmersion time feed into interruptions and multitasking overhead (metric: MEM), which in turn affect effort and quality (metric: reopened tasks or grade deduction). Edges are annotated with the tested hypotheses – H1.a (weak correlation), H1.b (no correlation), H1.c (positive correlation) – and nodes are marked as observations or evaluations at the individual and team levels.]
Figure 34. Summary of RQ1.
5.10 Threats to validity
One main threat to the validity of this research is that the period of the reimmersion time
does not necessarily capture the time needed to restore work context. In a controlled study, a
researcher would be able to control for whether a programmer was resuming an incomplete task
or starting a new task. In this study, we used the average minimum estimation of the reimmersion
time. In the worst case, the values found in this study serve as a lower bound for multitasking
overhead.
Self-tracking of the reimmersion time and hours devoted to projects may not be accurate.
However, these values do not affect our conclusions about the correlation between the number of
projects and the number of interruptions.
Counting only cross-project interruptions might underestimate the overall multitasking
overhead caused by internal (within a project) and self-inflicted interruptions. This, however,
does not affect our conclusions about cross-project interruptions, which is the scope of this
research.
Using a relatively small number of projects and simulation may not evaluate the models' accuracy properly. This is still ongoing research, with data being collected from the CSCI577 class. Future observations should confirm or disprove the conclusions of this research.
The academic environment in which graduate students of the CSCI577 class worked is different from that of developers in industry, and some cross-project interruptions may be missing from the work logs. To limit the impact of deadlines in other classes, students in the selected sample were enrolled in the same classes (e.g. students who took 3 classes were all enrolled in the software engineering class, the algorithms class, and the AI class).
Students may inaccurately report effort in JIRA. To address this threat, students' effort reports in JIRA were graded every week. Additionally, weekly reminders about the reports were sent out.
Simulation of alternative scenarios of work execution may reschedule work inadequately.
To prevent inadequate work rescheduling by the simulation, precedence relationships between
tasks were taken from plans (MS Project plans).
Chapter 6
Conclusion and future work
6.1 Conclusion
In this study, we evaluated the impact of cross-project work interruptions on developers' productivity by analyzing work logs and evaluating the multitasking effort multiplier. We analyzed work logs of 81 software developers working on 6 projects for one year and another 73 developers working on 3 parallel projects for almost one year. We also analyzed 10 projects from the graduate-level software engineering class (CSCI577).
Our findings contribute new knowledge about how multitasking in a multiproject environment affects developers' effort and work quality. The evaluation showed that among all 9 industry projects at least 11% of effort was spent on context switching between tasks from different projects. Analysis of projects from the CSCI577 class shows that developers who work on 2 or 3 projects spend on average 17% of their effort on context switching between tasks from different projects.
Developers who were involved in more projects tend to have more cross-project work
interruptions. However, the linear correlation between the number of projects each resource is
working on in one week and the number of interruptions is relatively weak.
As this study demonstrates, the number of interruptions can be used to evaluate the impact of multitasking on effort. We introduced a metric for this impact – the multitasking effort multiplier (MEM). We also successfully demonstrated that using MEM in a local calibration of the COCOMO II increases prediction accuracy for projects in environments where resources experience cross-project multitasking.
Summary:
Tools and models developed:
A model that evaluates MEM using work logs.
A hybrid agent-based & discrete-event simulation model, developed and independently implemented in two software tools (more details about the tools are provided in Appendix B):
DES framework (old name – KSS simulator)
DATASEM – a Simulation Suite for SoSE Management Research
Published conference papers:
ICSSP'17: Impact of context switching and work interruptions on software development efficiency.
CSER'17: Evaluation of cross-project multitasking in software projects.
INCOSE Symposium 2016: What does it mean to be Lean in SoSE environment?
IEEE SoSE'2016: DATASEM: A Simulation Suite for SoSE Management Research.
IEEE SoSE'2015: Modeling an Organizational View of the SoS Towards Managing its Evolution.
CSER'15: Simulation of Kanban-based scheduling for systems of systems: initial results.
6.2 Future work
The approach for counting work interruptions and evaluating their impact on effort can be applied to other types of multitasking. To do so, we need to develop methods for measuring the number of work interruptions caused by each type of multitasking.
Further accuracy evaluation of models with MEM (any COCOMO family model with MEM) should be done on a bigger sample of projects.
The work log analysis tools we used in this research can be integrated with project tracking systems such as Atlassian JIRA to provide real-time information about work interruptions and their impact on productivity (e.g. to calculate MEM).
Further research should be done on how to schedule and organize work to reduce the negative effects of work interruptions on productivity. Reducing the number of context switches can be encouraged in personal development processes as well as in whole organizations. Reducing reimmersion time using tools that maintain work context is another coping mechanism. Reducing the total number of interruptions and reducing the impact of each interruption are two major strategies that should be further explored.
While trying to measure the impact of work interruptions, we can use different measurements. The number of work interruptions can be measured in different ways. In this research, we used work log observation, which is a high-level observation. Another possible measure is tracking pointer movements and keyboard strokes.
Finally, to make any conclusions about how different events (e.g. meetings, task
switching) affect work productivity, causal relationships between observed measures and metrics
should be explored. Graphical causal models and causal discovery methods can be used to
further explore relationships among the measurements.
Bibliography
1. Appelbaum, S.H., Marchionni, A. and Fernandez, A., 2008. The multi-tasking paradox:
Perceptions, problems and strategies. Management Decision, 46(9), pp.1313-1325.
2. Ford, R.C. and Randolph, W.A., 1992. Cross-functional structures: A review and
integration of matrix organization and project management. Journal of management,
18(2), pp.267-294.
3. Galbraith, J.R., 1971. Matrix organization designs: How to combine functional and project
forms. Business Horizons, 14(1), pp.29-40.
4. Brykczynski, B. and Stutz, R.D., 2006. Software engineering project management. John
Wiley & Sons.
5. NDIA-National Defense Industrial Association, 2010. Top Systems Engineering Issues in
US Defense Industry. Systems Engineering Division Task Group Report,
http://www.ndia.org/Divisions/Divisions/SystemsEngineering/Documents/Studies/Top%20SE%20Issues%202010.
6. Turner, R., Shull, F., Boehm, B., Carrigy, A., Clarke, L., Componation, P., Dagli, C.,
Lane, J.A., Layman, L., Miller, A. and O'Brien, S., 2009. Evaluation of Systems
Engineering Methods, Processes and Tools on Department of Defense and Intelligence
Community Programs-Phase 2 (No. SERC-2009-TR-002). Systems Engineering
Research Center, Hoboken, NJ.
7. Turner, R. and Lane, J.A., 2013. Goal-Question-Kanban: applying lean concepts to
coordinate multi-level systems engineering in large enterprises. Procedia Computer
Science, 16, pp.512-521.
8. Dingsøyr, T., Nerur, S., Balijepally, V. and Moe, N.B., 2012. A decade of agile
methodologies: Towards explaining agile software development. Journal of Systems and
Software, 85(6), pp.1213-1221.
9. Boehm, B. and Turner, R., 2003. Balancing Agility and Discipline: A Guide for the
Perplexed, Portable Documents. Addison-Wesley Professional.
10. Larman, C. and Vodde, B., 2008. Scaling lean & agile development: thinking and
organizational tools for large-scale Scrum. Pearson Education.
11. Poppendieck, M. and Poppendieck, T., 2007. Implementing lean software development:
from concept to cash. Pearson Education.
12. Tregubov, A. and Lane, J.A., 2016, July. What does it mean to be Lean in SoSE
environment? In INCOSE International Symposium (Vol. 26, No. 1, pp. 1347-1358).
13. Weinberg, G.M., 1993. Quality software management. Dorset House, New York.
14. DeMarco, T. and Lister, T., 2013. Peopleware: productive projects and teams. Addison-
Wesley.
15. Delbridge, K.A., 2000. Individual differences in multi-tasking ability: Exploring a
nomological network. Unpublished Doctoral Dissertation, University of Michigan.
16. Dzubak, C.M., 2008. Multitasking: The good, the bad, and the unknown. The Journal of
the Association for the Tutoring Profession, 1(2), pp.1-12
17. Jett, Q.R. and George, J.M., 2003. Work interrupted: A closer look at the role of
interruptions in organizational life. Academy of Management Review, 28(3), pp.494-507.
18. American Psychological Association, 2006. Multitasking: switching costs. URL:
http://www.apa.org/research/action/multitask.aspx.
19. Leonard-Barton, D. and Swap, W.C., 1999. When sparks fly: Igniting creativity in
groups. Harvard Business Press.
20. Jett, Q.R. and George, J.M., 2003. Work interrupted: A closer look at the role of
interruptions in organizational life. Academy of Management Review, 28(3), pp.494-507.
21. Wylie, G. and Allport, A., 2000. Task switching and the measurement of "switch costs".
Psychological research, 63(3-4), pp.212-233.
22. Naveh-Benjamin, M., Craik, F.I., Perretta, J.G. and Tonev, S.T., 2000. The effects of
divided attention on encoding and retrieval processes: The resiliency of retrieval
processes. The Quarterly Journal of Experimental Psychology: Section A, 53(3), pp.609-
625.
23. Meyer, D. E. & Kieras, D. E., 1997. A computational theory of executive cognitive
processes and multiple-task performance: Part 2. Accounts of psychological refractory-
period phenomena. Psychological Review, 104, 749-791.
24. Ben-Shakhar, G. and Sheffer, L., 2001. The relationship between the ability to divide
attention and standard measures of general cognitive abilities. Intelligence, 29(4), pp.293-
306.
25. Boehm, B.W., Penedo, M.H., Stuckle, E.D., Williams, R.D. and Pyster, A.B., 1984. A
software development environment for improving productivity. Computer.
26. Humphrey, W.S., 2000. The Personal Software Process (PSP).
27. Sanchez, H., Robbes, R. and Gonzalez, V.M., 2015, March. An empirical study of work
fragmentation in software evolution tasks. In Software Analysis, Evolution and
Reengineering (SANER), 2015 IEEE 22nd International Conference on (pp. 251-260).
IEEE.
28. Parnin, C. and Rugaber, S., 2011. Resumption strategies for interrupted programming
tasks. Software Quality Journal, 19(1), pp.5-34.
29. Meyer, A.N., Fritz, T., Murphy, G.C. and Zimmermann, T., 2014, November. Software
developers' perceptions of productivity. In Proceedings of the 22nd ACM SIGSOFT
International Symposium on Foundations of Software Engineering (pp. 19-29). ACM.
30. Vasilescu, B., Blincoe, K., Xuan, Q., Casalnuovo, C., Damian, D., Devanbu, P. and
Filkov, V., 2016, May. The sky is not the limit: multitasking across GitHub projects. In
Proceedings of the 38th International Conference on Software Engineering (pp. 994-
1005). ACM.
31. Salvucci, D.D., Taatgen, N.A. and Borst, J.P., 2009, April. Toward a unified theory of the
multitasking continuum: from concurrent performance to task switching, interruption,
and resumption. In Proceedings of the SIGCHI conference on human factors in
computing systems (pp. 1819-1828). ACM.
32. Katidioti, I., Borst, J.P. and Taatgen, N.A., 2014. What happens when we switch tasks:
Pupil dilation in multitasking. Journal of experimental psychology: applied, 20(4), p.380.
33. Parnin, C. and DeLine, R., 2010, April. Evaluating cues for resuming interrupted
programming tasks. In Proceedings of the SIGCHI Conference on Human Factors in
Computing Systems (pp. 93-102). ACM.
34. Tregubov, A., Boehm, B., Rodchenko, N. and Lane, J.A., 2017, July. Impact of task
switching and work interruptions on software development processes. In Proceedings of
the 2017 International Conference on Software and System Process (pp. 134-138). ACM.
35. Tregubov, A., Lane, J.A. and Boehm, B., 2017, March. Evaluation of cross-project
multitasking in software projects. In Proceedings of the 2017 Conference on Systems
Engineering Research.
36. Altmann, E.M. and Trafton, J.G., 2004. Task interruption: Resumption lag and the role of
cues. Michigan State Univ. East Lansing Dept. of Psychology.
37. Parnin, C. and DeLine, R., 2010, April. Evaluating cues for resuming interrupted
programming tasks. In Proceedings of the SIGCHI Conference on Human Factors in
Computing Systems (pp. 93-102). ACM.
38. Mark, G., Gonzalez, V.M. and Harris, J., 2005, April. No task left behind?: examining
the nature of fragmented work. In Proceedings of the SIGCHI conference on Human
factors in computing systems (pp. 321-330). ACM.
39. Van Solingen, R., Berghout, E. and van Latum, F., 1998. Interrupts: just a minute never
is. IEEE software, 15(5), pp.97-103.
40. Boehm, B.W., 1981. Software engineering economics (Vol. 197). Englewood Cliffs (NJ):
Prentice-Hall.
41. Software Development Cost Estimating Guidebook, 2010.
http://www.stsc.hill.af.mil/consulting/sw_estimation/softwareguidebook2010.pdf
42. Kellner, M.I., 1991, October. Software process modeling support for management
planning and control. In Software Process, 1991. Proceedings. First International
Conference on the (pp. 8-28). IEEE.
43. Abdel-Hamid, T. and Madnick, S.E., 1991. Software project dynamics: an integrated
approach. Prentice-Hall, Inc.
44. Madachy, R.J., 1994. A software project dynamics model for process cost, schedule and
risk assessment. PhD Dissertation, Department of Industrial and Systems Engineering,
University of Southern California, Los Angeles, California.
45. Tregubov, A. and Lane, J.A., 2015. Simulation of Kanban-based Scheduling for Systems
of Systems: Initial Results. Procedia Computer Science, 44, pp.224-233.
46. Borshchev, A. and Filippov, A., 2004, July. From system dynamics and discrete event to
practical agent based modeling: reasons, techniques, tools. In Proceedings of the 22nd
international conference of the system dynamics society (Vol. 22).
47. Hira, A., Tregubov, A., Alsarra, S., Sharma, S. and Boehm, B., 2017, September.
Comparing JIRA and Bugzilla for Software Project Tracking and Data Collection. In
Proceedings of 28th Annual IEEE Software Technology Conference.
48. Turner, R., Yilmaz, L., Smith, J., Li, D., Chada, S., Smith, A. and Tregubov, A., 2015,
May. Modeling an organizational view of the SoS towards managing its evolution. In
System of Systems Engineering Conference (SoSE), 2015 10th (pp. 480-485). IEEE.
49. Bland, J.M. and Altman, D.G., 1995. Calculating correlation coefficients with
repeated observations: Part 2 - correlation between subjects. BMJ, 310(6980), p.633.
50. Bland, J.M. and Altman, D.G., 1995. Statistics notes: Calculating correlation
coefficients with repeated observations: Part 1 - correlation within subjects. BMJ,
310(6977), p.446.
51. Czerwinski, M., Horvitz, E. and Wilhite, S., 2004, April. A diary study of task switching
and interruptions. In Proceedings of the SIGCHI conference on Human factors in
computing systems (pp. 175-182). ACM.
Appendix A
Multitasking survey
The following survey questions were used to collect self-evaluated reimmersion time and
the average number of cross-project work interruptions in CSCI577 class projects.
Figure 35. Multitasking survey.
Appendix B
DES framework simulation tool
B.1. Simulation model overview
The simulation model allows us to describe systems engineering and software engineering
projects. The simulation model defines three aspects:
organizational model – structure and types of interactions between teams and resources;
governance model – process models, scheduling, and planning strategies;
work items network model – the actual work that creates the work flow: capabilities,
requirements, and tasks.
Users can define the parameters of each project that is modeled.
The simulation model allows different scheduling and management principles to be
expressed as different governance model configurations. These principles can be added to the
governance model separately or all together to perform different types of what-if analysis.
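The three aspects can be thought of as independent configuration inputs to an experiment. The sketch below illustrates this separation with hypothetical configuration objects; the class and field names are illustrative and do not reflect the DES framework's actual code:

```python
from dataclasses import dataclass, field

# Hypothetical configuration objects mirroring the three model aspects;
# names are illustrative, not the DES framework's actual API.
@dataclass
class OrganizationalModel:
    teams: dict                    # team name -> list of resource skills

@dataclass
class GovernanceModel:
    scheduling: str                # e.g. "kanban" or "scrum"
    wip_limits: dict = field(default_factory=dict)

@dataclass
class WorkItemsNetwork:
    capabilities: list             # capabilities broken into requirements and tasks

@dataclass
class Experiment:
    organization: OrganizationalModel
    governance: GovernanceModel
    work: WorkItemsNetwork

# Varying one aspect (here, the governance model) while holding the others
# fixed is what enables the what-if analyses described below.
exp = Experiment(
    OrganizationalModel(teams={"team-a": ["dev", "dev", "qa"]}),
    GovernanceModel(scheduling="kanban", wip_limits={"team-a": 3}),
    WorkItemsNetwork(capabilities=["cap-1", "cap-2"]),
)
print(exp.governance.scheduling)  # -> kanban
```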
B.2. Types of analysis
Simulation models allow setting up different governance strategies (e.g., scheduling
techniques), organizational structures, and work flow parameters. Each parameter can be viewed
as an independent variable.
Typically, the following performance indicators are used to describe and compare
simulation results (i.e., the raw output data):
schedule,
effort spent over time,
value delivered over time,
number of completed capabilities,
number of work items in progress over time (for each queue),
size of backlogs.
By running simulations with different configurations, we can infer the following outputs:
context switching overhead,
bottlenecks – overloaded teams/queues,
blocked work – the number of WIs that are blocked by unresolved dependencies that are not
being resolved,
overloaded specialties – resources with certain skill that are always in demand,
transition readiness over sprints/delivery cycles – how many capabilities are completed
by the end of each sprint.
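Context-switching overhead, for example, can be inferred by comparing two otherwise identical simulation runs, one with cross-project multitasking enabled and one without. A minimal sketch (the function and the effort figures are illustrative):

```python
def context_switching_overhead(effort_multitasking, effort_baseline):
    """Relative effort overhead inferred from two simulation runs that
    differ only in whether cross-project multitasking is enabled."""
    return effort_multitasking / effort_baseline - 1.0

# e.g. 1150 person-hours with multitasking vs. 1000 without -> 15% overhead
print(round(context_switching_overhead(1150.0, 1000.0), 2))  # -> 0.15
```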
B.3. Architecture
The DES framework was originally developed as a Kanban-based scheduling system
simulator [45] in 2013. The DES framework is a Java web application (with a web UI). It uses a
custom discrete event simulation engine (figure 36).
[Figure 36 shows the DES framework architecture: an experiment builder UI creates new
experiments from templates (Java code) and stores them in an experiment DB (a collection of
.ser or .xml files of runnable simulation scenarios); the simulation engine runs experiments
using governance model algorithms (Java code) and writes simulation results to results storage
(.ser, .xls, .txt files), which the result visualization UI reads. Each DES experiment is an
independent, self-contained web application with its own simulation engine and result
visualization UI.]
Figure 36. DES framework architecture.
B.4. Development team
Over the course of the DES framework development from 2013 to 2016, the following
students contributed to the project:
Lucas Rena
Felipe Laskoski
Lucas Monteiro
Thais Mombach
Nikita Vlassenko
Bowei Zhang
Huiyuan Ren
Mitesh Jain
The development was led by the author (Alexey Tregubov). Mentoring and guidance of
the project was provided by Jo Ann Lane and Rich Turner.
Appendix C
Source data
C.1. Cross-project work interruptions in CSCI577 class projects
Table 25. Data summary for CSCI577 class projects. Sample of students not working in industry.
# | Id | Average number of projects per week | Average effort spent on interruptions (person-hours) | Average MEM | Average number of interruptions per week
1 4 2.00 18.81 1.89 7.50
2 6 1.00 0.00 1.00 0.00
3 8 1.33 0.13 1.00 0.58
4 10 1.17 0.12 1.00 0.25
5 11 1.00 0.00 1.00 0.00
6 17 2.08 19.58 1.96 6.25
7 22 2.33 6.85 1.21 6.04
8 27 1.42 2.05 1.05 1.17
9 29 2.50 3.61 1.10 16.67
10 30 1.92 4.43 1.12 2.52
11 31 2.00 0.74 1.02 3.42
12 33 1.75 21.54 2.17 6.88
13 37 1.00 0.00 1.00 0.00
14 38 1.08 0.10 1.00 0.21
15 40 1.92 10.39 1.35 9.17
16 41 1.92 0.50 1.01 3.75
17 43 1.00 0.00 1.00 0.00
18 44 2.67 0.72 1.02 5.42
19 45 2.00 0.55 1.01 4.13
20 46 1.58 0.41 1.01 0.74
21 47 1.83 2.64 1.07 4.17
22 51 2.08 0.23 1.01 1.75
23 59 1.08 0.25 1.01 0.83
24 62 1.08 0.10 1.002 0.21
25 63 1.00 0.00 1.00 0.00
26 64 1.00 0.00 1.00 0.00
27 66 1.00 0.00 1.00 0.00
28 67 2.00 1.52 1.04 3.96
29 68 1.00 0.00 1.00 0.00
C.2. Cross-project work interruptions in industry projects
Table 26. Data summary for industry projects.
Group | User id | Average number of projects per week | Average MEM | Average number of interruptions per week
1 1 1.00 1.00 0.00
1 2 2.00 1.04 4.82
1 3 2.00 1.08 9.66
1 4 3.00 1.06 7.53
1 5 2.00 1.04 4.71
1 6 2.00 1.04 5.03
1 7 4.00 1.08 9.79
1 8 4.00 1.08 9.58
1 9 4.00 1.09 10.74
1 10 2.68 1.04 5.39
1 11 2.00 1.04 4.92
1 12 2.58 1.04 5.50
1 13 2.00 1.08 9.82
1 14 1.00 1.00 0.00
1 15 2.45 1.04 5.45
1 16 2.00 1.08 9.74
1 17 3.00 1.05 6.76
1 18 2.37 1.04 4.87
1 19 2.34 1.04 5.29
1 20 2.29 1.04 4.95
1 21 2.92 1.04 5.21
1 22 2.00 1.04 5.00
1 23 2.50 1.04 5.03
1 24 2.47 1.04 5.61
1 25 2.42 1.04 5.61
1 26 2.92 1.04 5.68
1 27 1.00 1.00 0.00
1 28 1.71 1.01 1.76
1 29 2.00 1.04 5.26
1 30 3.58 1.07 8.39
1 31 2.71 1.04 5.24
1 32 2.34 1.04 5.13
1 33 2.00 1.04 5.05
1 34 2.79 1.04 5.50
1 35 2.76 1.04 5.13
1 36 2.34 1.04 4.84
1 37 3.45 1.05 6.03
1 38 3.00 1.04 5.58
1 39 1.00 1.00 0.00
1 40 2.21 1.01 1.40
1 41 3.00 1.12 14.50
1 42 4.00 1.17 19.32
1 43 2.00 1.04 5.29
1 44 2.00 1.04 5.29
1 45 2.29 1.04 5.37
1 46 2.42 1.09 10.89
1 47 2.37 1.04 5.32
1 48 2.29 1.03 4.34
1 49 2.45 1.04 5.71
1 50 5.00 1.09 11.47
1 51 1.00 1.00 0.00
1 52 2.34 1.04 5.26
1 53 2.29 1.04 5.74
1 54 3.00 1.06 7.53
1 55 2.45 1.04 5.47
1 56 2.00 1.04 4.92
1 57 4.32 1.09 10.97
1 58 3.95 1.07 9.11
1 59 3.00 1.06 7.24
1 60 2.42 1.04 4.89
1 61 2.92 1.05 6.05
1 62 2.89 1.04 5.66
1 63 2.00 1.04 5.13
1 64 2.68 1.04 5.42
1 65 2.39 1.04 5.39
1 66 1.00 1.00 0.00
1 67 2.50 1.01 1.20
1 68 2.37 1.04 5.55
1 69 3.26 1.06 7.42
1 70 2.53 1.04 5.29
1 71 2.47 1.04 5.21
1 72 3.39 1.06 7.18
1 73 4.47 1.09 11.03
1 74 6.00 1.13 14.92
1 75 3.39 1.06 7.50
1 76 3.53 1.05 6.87
1 77 3.00 1.06 8.05
1 78 2.37 1.04 5.53
1 79 2.37 1.04 5.58
1 80 2.45 1.04 5.66
1 81 2.45 1.08 10.05
2 1 1.00 1.00 0.00
2 2 1.43 1.03 4.31
2 3 1.30 1.06 7.85
2 4 2.39 1.06 7.50
2 5 1.51 1.03 4.46
2 6 1.29 1.03 4.06
2 7 1.68 1.07 8.20
2 8 1.80 1.07 8.29
2 9 1.85 1.08 9.44
2 10 2.14 1.04 5.38
2 11 1.49 1.04 4.59
2 12 1.83 1.04 4.89
2 13 1.29 1.06 7.93
2 14 1.00 1.00 0.00
2 15 1.86 1.04 5.16
2 16 1.58 1.08 9.60
2 17 2.04 1.05 5.76
2 18 1.86 1.04 4.77
2 19 1.58 1.03 4.46
2 20 1.56 1.03 4.20
2 21 2.20 1.04 4.91
2 22 1.39 1.03 4.35
2 23 1.81 1.04 4.56
2 24 1.76 1.04 4.99
2 25 1.63 1.04 4.72
2 26 2.34 1.04 5.68
2 27 1.00 1.00 0.00
2 28 1.31 1.01 1.68
2 29 1.53 1.04 5.04
2 30 1.58 1.05 6.82
2 31 2.01 1.04 4.86
2 32 1.65 1.04 4.51
2 33 1.57 1.04 4.97
2 34 1.97 1.04 4.86
2 35 2.02 1.04 4.68
2 36 1.51 1.03 3.89
2 37 2.09 1.04 5.62
2 38 2.31 1.04 5.38
2 39 1.00 1.00 0.00
2 40 1.48 1.01 1.17
2 41 2.23 1.07 9.29
2 42 1.70 1.07 8.43
2 43 1.31 1.03 4.32
2 44 1.43 1.04 4.71
2 45 1.74 1.04 5.09
2 46 1.78 1.07 9.21
2 47 1.60 1.03 4.50
2 48 1.67 1.03 3.96
2 49 1.87 1.04 5.46
2 50 1.80 1.08 9.94
2 51 1.00 1.00 0.00
2 52 1.66 1.04 4.65
2 53 1.74 1.04 5.45
2 54 2.23 1.06 6.98
2 55 1.68 1.04 4.69
2 56 1.55 1.04 4.77
2 57 2.32 1.08 9.82
2 58 2.38 1.07 9.07
2 59 2.09 1.05 6.32
2 60 1.63 1.03 4.11
2 61 2.05 1.04 5.32
2 62 2.12 1.04 5.19
2 63 1.44 1.04 4.60
2 64 1.96 1.04 4.94
2 65 1.68 1.04 4.74
2 66 1.00 1.00 0.00
2 67 1.68 1.01 1.01
2 68 1.74 1.04 5.09
2 69 2.05 1.05 6.86
2 70 1.80 1.04 4.71
2 71 1.95 1.04 5.14
2 72 2.35 1.06 7.11
2 73 1.85 1.08 9.69
C.3. Cross-project work interruptions and grade deduction in
CSCI577 projects
Table 27. Cross-project work interruptions and grade deduction in CSCI577 projects.
Project id | Number of interruptions | Average number of interruptions per week per developer | MEM | Self-evaluated MEM (R = 22 minutes) | Grade deduction
1 738 8.785714286 1.21396680 1.229553189 44.955
2 294 3.584127764 1.45092677 1.080351114 19.81
3 576 6.857142857 1.48931565 1.170568562 36.46
4 235 2.797619048 2.53873481 1.063207025 30.24
5 925 11.01428571 1.28837716 1.305574336 32.89
6 638 7.595238095 1.25347394 1.192461937 28.55
7 457 7.858654914 2.39682906 1.117615753 N/A
8 416 4.952380952 2.54467779 1.221196482 33.975
9 716 8.523809524 1.38402944 1.087185129 38.23
10 317 3.773809524 2.01396680 1.229553189 30.76
C.5. Cross-project work interruptions and reopened tasks in
industry projects
Table 28. Cross-project work interruptions and reopened tasks in industry projects.
Project | Number of interruptions | Average number of interruptions per week per developer | MEM | Number of reopened tickets
1 738 8.785714286 3.013967 4.769111975
2 294 3.539720323 1.450927 1.387756571
3 576 6.857142857 4.489316 4.557047866
4 235.2 2.884237411 2.538735 1.424832708
5 925.2 11.01428571 1.288377 5.870204959
6 638 7.595238095 1.253474 4.147297059
7 416.4 4.957142857 2.396829 5.227430909
8 716.4 8.528571429 3.544678 5.962616851
9 316.8 3.771428571 1.384029 2.746694036
Appendix D
Effort multipliers for COCOMO II
D.1. Effort multipliers in industry projects
The sum of the scale factors of all projects was 18.97. Effort multipliers are shown below.
Table 29. Effort multipliers in industry projects.
Project id | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
RELY | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
DATA | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
CPLX | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
RUSE | 1.07 | 1.07 | 1 | 1 | 1 | 1 | 1.07 | 1.07 | 1
DOCU | 1 | 1 | 0.91 | 0.91 | 0.91 | 1 | 1 | 1 | 0.91
TIME | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
STOR | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
PVOL | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
ACAP | 1.19 | 1 | 1.19 | 1 | 1 | 1 | 1.19 | 1.19 | 1.19
PCAP | 1 | 1.15 | 1.15 | 1.15 | 1 | 1 | 1.15 | 1.15 | 1.15
PCON | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
APEX | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
PLEX | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
LTEX | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
TOOL | 1 | 1 | 1 | 1 | 1 | 1 | 1.09 | 1.09 | 1.09
SITE | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
SCED | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
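A sketch of how the MEM plugs into the COCOMO II effort equation, using the reported sum of scale factors (18.97), the standard COCOMO II.2000 constants A = 2.94 and B = 0.91, and the project 1 multipliers from Table 29; the 10 KSLOC size and the MEM value are illustrative, not calibrated results:

```python
from functools import reduce
from operator import mul

# COCOMO II.2000 post-architecture constants and the reported sum of
# scale factors; the MEM term is the extension proposed in this research.
A, B = 2.94, 0.91
SUM_SF = 18.97

def effort_pm(ksloc, effort_multipliers, mem=1.0):
    """Effort in person-months: A * Size^E * prod(EM) * MEM,
    where E = B + 0.01 * (sum of scale factors)."""
    e = B + 0.01 * SUM_SF
    return A * ksloc ** e * reduce(mul, effort_multipliers, 1.0) * mem

# Project 1 from Table 29: RUSE = 1.07, ACAP = 1.19, all others 1.0;
# size (10 KSLOC) and MEM = 1.21 are illustrative inputs.
baseline = effort_pm(10.0, [1.07, 1.19])
with_mem = effort_pm(10.0, [1.07, 1.19], mem=1.21)
print(round(with_mem / baseline, 2))  # -> 1.21 (MEM scales effort linearly)
```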
Abstract
This research assesses the effect of cross-project multitasking on the productivity of software development teams working in environments where resources are shared across several projects. Cross-project multitasking is typical for matrix organizational structures and facilitates the horizontal cross-project flow of skills and information. Depending on how heavily people multitask, it may introduce an excessive number of cross-project interruptions in the work flow, affecting the productivity of software developers. Cross-project interruptions are considered harmful to the productivity of information and knowledge workers because they may require time for switching between independent work contexts. If an environment with cross-project multitasking is not taken into account, it can significantly influence cost and schedule estimates.

This research investigates cross-project multitasking and its quantitative effect on software development effort and quality. It introduces a cross-project multitasking evaluation model, which improves cost estimation in environments with cross-project multitasking. The model calculates a multitasking effort multiplier, which can be evaluated based on work log observations, and incorporates it into the base COCOMO II® model to achieve better accuracy.