An Intelligent Tutoring System’s Approach for Negotiation
Training
by
Emmanuel Johnson
A Dissertation Presented to the
FACULTY OF THE GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(Computer Science)
December 2021
Copyright 2021 Emmanuel Johnson
To my mother, Romel Acquah, for encouraging me to pursue my love and always
supporting my interests. To my family for being a constant source of love and
comfort.
Acknowledgements
I would like to acknowledge the financial support of the National Science Foundation through the Graduate Research Fellowship and Grants BCS-1419621 and 1822876, and the Army Research Office under Cooperative Agreement Number W911NF-20-2-0053.
I would like to thank my advisor, Dr. Jonathan Gratch, for his guidance, patience, and mentorship. His mentorship has helped me grow as a researcher and a thinker. I would also like to thank my committee members: Dr. Sven Koenig, Dr. Peter Kim, Dr. Gale Lucas, and Dr. John Slaughter for all of their assistance and guidance.
I would also like to thank the many staff, researchers, and students who made ICT a home, especially the members of the emotions group.
Finally, I would like to thank my partner (she doesn’t like the word girlfriend), Danielle Thomas, for her unconditional love and support, and her family for making Southern California a home away from home.
Table of Contents
Dedication ii
Acknowledgements iii
List of Tables vi
List of Figures vii
Abstract ix
Chapter 1: Introduction 1
1.1 Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1.1 Advancing a pedagogical agent that allows students to practice negotiating 4
1.1.2 Innovating methods to diagnose student faults from their negotiation trace . 4
1.1.3 Innovating methods to provide personalized feedback . . . . . . . . . . . . 5
1.1.4 Demonstrating learning effectiveness in a Salary Negotiation Mini-course . 6
1.1.5 Demonstrating the broader impact of methods developed . . . . . . . . . . 6
Chapter 2: Background 7
2.1 Intelligent Tutoring Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2 Traditional Methods for Teaching Negotiation . . . . . . . . . . . . . . . . . . . . 9
2.2.1 Negotiation Formalization and Task . . . . . . . . . . . . . . . . . . . . . 9
2.2.1.1 Multi-Issue Bargaining Task . . . . . . . . . . . . . . . . . . . . 10
2.3 Using Technology to Teach Negotiation . . . . . . . . . . . . . . . . . . . . . . . 11
2.4 Experiential Learning and the Importance of Feedback . . . . . . . . . . . . . . . 13
Chapter 3: Quantifying Principles of Good Negotiation 14
3.1 Principles of Good Negotiations . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.2 Quantifying Negotiation Principles . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.2.1 Avoiding Early Commitment . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.2.2 Making Efficient Concessions . . . . . . . . . . . . . . . . . . . . . . . . 16
3.2.3 Inducing Opponent Concessions . . . . . . . . . . . . . . . . . . . . . . . 18
3.3 Metric Validation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.4 Validating Negotiation Metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.4.1 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.5 Summary of Contribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Chapter 4: Providing Automated Feedback Based on Negotiation Principles 22
4.1 Automated Feedback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
4.2 Study One . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
4.2.1 Study Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
4.2.2 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
4.3 Study Two . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
4.3.1 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
4.4 Summary of Contribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Chapter 5: Value Creating Metrics Using Opponent Modeling 34
5.1 Opponent Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
5.1.1 Offers Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
5.1.2 Offers and Preference Statements . . . . . . . . . . . . . . . . . . . . . . 36
5.2 Translating Models into Insight . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
5.3 Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
5.3.1 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
5.3.1.1 Understanding of Negotiator’s Transparency . . . . . . . . . . . 41
5.3.1.2 Investigative skills of a Negotiator . . . . . . . . . . . . . . . . 42
5.4 Summary of Contribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Chapter 6: Bringing it all together 45
6.1 Learning Management System and Course Design . . . . . . . . . . . . . . . . . 46
6.1.1 Course Content . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
6.1.1.1 Video Lectures on Value Claiming and Value Creating . . . . . . 47
6.1.2 Learning Management System . . . . . . . . . . . . . . . . . . . . . . . . 47
6.2 Enhancements to Negotiation Agent . . . . . . . . . . . . . . . . . . . . . . . . . 48
6.2.1 Research Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
6.2.2 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
6.3 Summary of Contribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Chapter 7: Broader Impact: Negotiation Agent for Social Good 61
7.1 Disparity in Negotiation Process and Input . . . . . . . . . . . . . . . . . . . . . . 61
7.2 Current Approach to Understanding Negotiation Processes . . . . . . . . . . . . . 62
7.2.1 Using Automated Agents to Understand Negotiation Processes . . . . . . . 63
7.3 Gender Trigger and Goals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
7.4 Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
7.4.1 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
7.5 Summary of Contribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Chapter 8: Conclusion 76
8.1 Future Direction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
References 79
List of Tables
2.1 Negotiator A Payoff . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2 Negotiator B Payoff . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3.1 Example Negotiation Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.2 The relationship between negotiation metrics and lottery tickets within the CRA
Corpus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
4.1 Agent Payoff . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
4.2 Participant Payoff . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
4.3 Agent and Participant’s Payoff Matrix Negotiation 1 . . . . . . . . . . . . . . . . . 30
4.4 Agent and Participant’s Payoff Matrix Negotiation 2 . . . . . . . . . . . . . . . . . 30
6.1 Agent and Participant’s Payoff Matrix . . . . . . . . . . . . . . . . . . . . . . . . 53
List of Figures
3.1 Conflict Resolution Agent engaged in a negotiation with participant #362 . . . . . 19
4.1 Default IAGO Agent Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
4.2 Personalized vs Generic Feedback . . . . . . . . . . . . . . . . . . . . . . . . . . 31
5.1 Pareto Frontier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
5.2 Accuracy of Model by Agent Type . . . . . . . . . . . . . . . . . . . . . . . . . . 41
5.3 Accuracy of Agent’s preference across Groups . . . . . . . . . . . . . . . . . . . . 43
6.1 Screenshot of video lectures by Dr. Peter Kim . . . . . . . . . . . . . . . . . . . . 47
6.2 Screenshot of Learning Management System . . . . . . . . . . . . . . . . . . . . 48
6.3 Example of Module Assignment List for Module 1 in Control Condition . . . . . . 49
6.4 Hiring Manager Agent built using IAGO. Users can exchange offers (highlighted in
red), or exchange information about preferences over issues (highlighted in yellow). 50
6.5 Relationship between video lectures watched and outcome measures . . . . . . . . 55
6.6 Percentage of lectures watched by experimental condition . . . . . . . . . . . . . . 56
6.7 Final User Points across condition . . . . . . . . . . . . . . . . . . . . . . . . . . 57
6.8 Final Joint Points across condition . . . . . . . . . . . . . . . . . . . . . . . . . . 58
6.9 Level claimed by condition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
6.10 SVI results for feeling of outcome, self, and relationship by condition . . . . . . . . 59
7.1 Job description (partial) showing female-threatening required characteristics . . . . 64
7.2 Hiring Manager Agent built using IAGO . . . . . . . . . . . . . . . . . . . . . . . 67
7.3 Example dialog involving information exchange . . . . . . . . . . . . . . . . . . . 68
7.4 Minimally acceptable salary (USD) as reported before the negotiation began. . . . 73
7.5 Final negotiated salary (USD) for those willing to negotiate versus those simply
accepting the first offer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Abstract
Research in artificial intelligence has made great strides in developing ‘cognitive tutors’ that teach a
range of technical skills. These automated tutors allow students to practice, observe their mistakes,
and provide personalized instructional feedback. Evidence shows that these methods can increase
learning above and beyond traditional classroom instruction in topics such as math, reading, and
computer science. Although such skills are crucial, students entering the modern workforce must
possess more than technical abilities. They must exhibit a range of interpersonal skills which allow them to resolve conflicts and solve problems creatively. However, both traditional curricula
and learning technologies afford students limited opportunities to learn these interpersonal skills,
particularly in STEM fields. This thesis seeks to fill this gap by developing automated learning
methods for teaching the crucial interpersonal skill of negotiation. Negotiation skills can help
workers obtain equitable compensation, gain greater control over their work responsibilities, and work effectively with teammates. Currently, students often must resort to costly resources to learn how to negotiate: self-study guides of limited value, or expensive professional training programs developed by various companies and educational institutions. Those who cannot afford such training are left with few options. However, technology is beginning to fill this gap. AI has made strides in creating agents
that can negotiate with people, and research shows students can improve their negotiation abili-
ties by practicing with such agents [1, 2]. To date, these methods allow students to practice but
little emphasis is placed on the analysis of mistakes and feedback. Yet lessons from cognitive tutors emphasize that feedback is crucial for learning [3]. To address this limitation, this thesis
contributes to advancing the science of interpersonal skill training by developing and evaluating
the effectiveness of an automated analysis of student errors and personalized feedback in a ne-
gotiation training system. This is done by innovating artificial intelligence techniques to analyze
student behavior, identify weaknesses in their understanding, and provide targeted and personalized
feedback. In Chapter 2, I provide an overview of the state of the art in teaching negotiation and its limitations. In Chapters 3 and 4, I develop metrics for assessing students’ negotiation abilities based on a set of principles and show that these metrics can be used to provide personalized feedback. In Chapter 5, I extend these metrics to include better predictors of a negotiator’s understanding of an opponent. Chapter 6 builds on the previous chapters by incorporating the opponent modeling and feedback proposed in Chapters 3, 4, and 5 into a mini negotiation course. I show how my previous work can be combined with video lectures to mimic how negotiation is taught in the classroom. Results show that these metrics are good predictors of negotiation outcomes, and participants who received personalized feedback fare better in subsequent negotiations than those who did not. Lastly, I show that these diagnostic metrics have applications outside of a
training setting. Chapter 7 illustrates how these methods can provide insight into societal issues.
Chapter 1
Introduction
Students entering the modern workforce must successfully interview, negotiate their salaries and
job responsibilities, work in teams, resolve conflicts, and solve problems creatively and collabora-
tively. Such interpersonal skills are rarely assessed or taught in the classroom, and little research
has explored the potential for automated tutoring of these foundational social skills. Social skills
are crucial across a wide range of jobs globally [4, 5]. One of the key social skills we utilize
to address conflicts and work well with others is the ability to negotiate. As the world becomes
more connected and professionals find themselves working and learning with colleagues who hold
a range of different views, the importance of negotiation becomes ever more apparent. However,
many avoid negotiations out of fear or lack of skill. This contributes to income inequality, political
gridlock, and economic inefficiencies. For example, 93% of women avoid salary negotiations and
this behavior serves as a major contributor to unequal pay for women [6]. In politics, poor negotia-
tors have less legislative influence [7]. In the military, soldiers of all ranks are increasingly involved
in negotiations and poor skills can have geopolitical consequences [8]. In the courtroom, over 90%
of cases are settled through negotiations before they ever reach trial, and systematic inequities in
negotiation ability across lawyers can undermine civil society [9, 10]. The US Academy of Sci-
ences and the World Economic Forum identify negotiation as a foundational social skill needed for
the future of work through its impact on an organization’s creativity and productivity [5, 11]. Yet,
students graduating from high school, universities, and even MBA programs are underprepared in
key interpersonal competencies [12]. As a result, individuals and organizations spend vast sums in
remedial training.
A number of options are available for training ranging from semester-long courses to online
preparation. Negotiation is typically taught through a mix of classroom lectures and experiential
learning (where students receive hands-on experience by applying their knowledge in simulated
negotiations with other students or negotiation experts). These courses are offered in professional
schools, as part of a business or law degree, and by private consulting firms. For example, as part of
a master’s degree program in business administration, students might take a semester-long course
on negotiation concepts. For those seeking a more cursory introduction, universities and consulting
firms offer intensive short courses. Evidence suggests that the experiential aspect of negotiation
training is an especially important component of training [13, 14] and is particularly effective
at enhancing student motivation and commitment [15]. Yet these simulations add considerably
to the expense and logistical constraints in teaching negotiation. In business schools, simulated
negotiations are often run by dedicated staff trained to be experts in experiential learning techniques
and within companies they are overseen by university executive-education programs or high-priced
consultants.
Beyond traditional in-class learning, a number of commercial applications provide online ne-
gotiation training. Online educational platforms such as Coursera offer video lectures, and learning
materials to teach the principles of negotiation without the experiential component. These systems
are limited for a number of reasons. For one, they do not allow students to practice their negoti-
ation skills or receive personalized feedback. Research within the autonomous agent community
has worked to improve these online training platforms. Virtual role-playing agents offer an oppor-
tunity to dramatically reduce the cost and increase access to negotiation training. However, the role
of technology in current training practice is surprisingly limited, particularly regarding the expe-
riential aspects of training. Some online negotiation courses incorporate such techniques to teach
concepts [16], but methods to “gamify” experiential learning similar to that found in intelligent
tutoring systems have proved elusive. To advance the state of the art, we must look to expand these
systems by incorporating the ability to provide personalized tutorial feedback [3].
To that end, this thesis seeks to advance the science of intelligent tutoring systems by extending
their capabilities to support the tutoring of soft skills. The focus of my approach is to build upon
the proven approach underlying so-called “cognitive tutors” [17]. In this approach, the automated
tutor leverages a model of the skill to be learned and performs model-based diagnosis over student
solutions to identify possible mistakes. The tutor includes both a model of the skill as it should ideally be executed by students and models of common mistakes. For example, in the domain of simple
arithmetic, a cognitive tutor might include a model of the steps students should follow to perform
column addition. But the tutor will also include models of common mistakes, such as forgetting
to carry a digit to the next column. If students execute the correct procedure, they receive positive
feedback. However, if students return incorrect answers, cognitive tutors can identify the specific
misconception and provide targeted and personalized feedback. My work seeks to extend this basic
idea to the realm of negotiation skills. To do that, I have innovated artificial intelligence techniques
to analyze student behavior, identify weaknesses in their understanding, and provide targeted and personalized feedback. I also show how these techniques can be used to provide insight into societal
issues.
1.1 Contributions
In achieving the goal of extending intelligent tutors to the interpersonal skill domain, my thesis
makes five contributions. The first contribution is to advance a pedagogical agent that allows students to practice negotiating. The second contribution involves innovating methods to diagnose students’ errors. Third, I innovate methods for providing effective personalized feedback. Fourth, I demonstrate learning effectiveness in a salary negotiation mini-course. Lastly, I show that these advances have impact beyond the training environment.
1.1.1 Advancing a pedagogical agent that allows students to practice negotiating
Although my thesis starts with a pre-existing negotiation framework, IAGO, I innovate several
methods to ensure its algorithms are well suited to skills learning. In particular, building on the idea of “productive failures” [18], I tailor the agent’s behavior to present students with situations in which they often make poor choices. For example, students often fail to make ambitious
opening offers or concede too quickly when their opponent pushes back. This lesson can be hard to
learn when both students in a practice exercise lack ambition. Indeed, computer agents can provide
a more effective practice partner than a human opponent as the computer can be designed to evoke
“teachable moments” that can then motivate subsequent pedagogical feedback. Examples of these
enhancements can be found in Chapters 3, 4, 5, and 6.
1.1.2 Innovating methods to diagnose student faults from their negotiation
trace
The second contribution (detailed in chapter 3) explores the principles of good negotiations and
methods for quantifying them. This results in a number of metrics that correlate with good ne-
gotiation outcomes. To do this, I identified evidence-based principles of good negotiations that
have been studied and validated through years of negotiation research [1, 19–21]. As these princi-
ples were developed using a dyadic human-human negotiation corpus, I wanted to verify that they
still worked in the human-agent scenario. From these principles, I developed a set of quantifiable
metrics that are applied to a corpus of human-agent negotiations and verified that indeed these
principles still hold for a human-agent negotiation.
After evaluating how well these methods can inform pedagogical feedback (see Chapters 3
and 4), I identified the need for additional diagnostic methods, specifically with regard to how
students use information to form a model of their opponent’s goals (what is sometimes called
“opponent modeling”). Negotiation feedback can be divided into two categories, value claiming
and value creating. Value claiming feedback focuses on enabling users to gain more value in a
negotiation. Value creation focuses on understanding an opponent so that a negotiator can reach a
fair deal that both parties can willingly accept. I show that the current metrics excel at providing value claiming feedback but fall short at teaching value creating. Chapter 5 innovates advanced methods whereby an agent can learn to better model the student and the common faults they make, such as over-weighting their opponent’s explicit words and under-weighting the information
revealed by their pattern of offers. To do this, I used an opponent model to assess the extent to
which a negotiator gathers information about an opponent and provides information about their
own preferences. My work shows that opponent models serve as a great tool for providing insight
on common value creating mistakes that novice negotiators make.
1.1.3 Innovating methods to provide personalized feedback
The third contribution (Chapter 4) is to develop and verify a personalized feedback system across different virtual agent platforms. Research shows that feedback is critical to improving learning outcomes. However, very little work has focused on the impact different types of feedback
may have on learning outcomes, or the impact of the system design on feedback. To do this, I con-
ducted two studies to evaluate the effectiveness of various kinds of feedback across different types
of virtual agents. In the first study, I compared the impact of personalized feedback on negotiation
outcomes using a “Wizard of Oz” virtual agent. Participants joined a between-subjects study
where they completed two negotiations with a virtual agent. After the first negotiation, they were
either provided personalized feedback or no feedback. This work showed that personalized feed-
back was most effective at getting negotiators to gain more value in the negotiation. In the second
study, I compared the impact of personalized feedback to generic feedback and no feedback using
an online virtual agent platform. Unlike the first study, this system was fully automated. This too
showed personalized feedback to be the most effective at improving users’ value claiming abilities.
1.1.4 Demonstrating learning effectiveness in a Salary Negotiation Mini-
course
In collaboration with an instructor of negotiations, I constructed a mini-course that integrates our
simulation approach with traditional instruction. This course was tailored towards teaching computer science undergraduates how to better negotiate their salaries. This approach brings together the
advanced feedback methods into a fully automated course of instruction. Results suggest that stu-
dents improved on several aspects of their behavior but also highlight additional opportunities for
improving these methods (See Chapter 6).
1.1.5 Demonstrating the broader impact of methods developed
Although the focus of my research has been on developing automated methods for teaching ne-
gotiation skills, the methods I have developed have important uses for other scientific
problems. Chapter 7 illustrates how the negotiation agents developed to teach salary negotiation
can contribute to psychological research on why women obtain lower offers when they negotiate
their salaries.
1. This work was done in collaboration with Dr. Peter Kim, Professor of Management and Organization in the Marshall School of Business, Dr. Gale Lucas, Research Assistant Professor in the Viterbi School of Engineering, and Dr. David DeVault, Founder, Anticipant Speech. The results are published in the following paper [22].
Chapter 2
Background
The use of artificial intelligence to enhance learning systems is not a new concept. Extensive work
has been done in the intelligent tutoring systems community. Much of this work has excluded the
teaching of social skills, but there is growing interest in using technology to teach these skills. In
this Chapter, I provide an overview of intelligent tutoring systems, highlight common approaches
for teaching negotiation, and describe the state of the art in automated negotiation training systems.
2.1 Intelligent Tutoring Systems
Intelligent tutoring systems (ITS) are computer systems that aim to help students acquire a new skill by mimicking human tutors [23, 24]. Their design and implementation vary across domains and applications; however, ITS generally have three components: a domain model, a student model, and a tutor model [24]. These three components work together to understand how a student solves problems and to determine the best way to address flaws in a student’s understanding. A number of modeling approaches have been proposed for these systems, but the two most popular are constraint-based modeling and model tracing [25, 26]. Constraint-based models are product-centric models that examine the final results to determine whether or not a student understands a concept. These models
do not care about the steps a student takes to get the final results. This approach assumes that if a
student arrived at a correct solution, then their steps must be correct. If a student arrived at the in-
correct solution, then they must not have understood the problem and thus the steps they took were
incorrect [27–29]. The model-tracing approach, on the other hand, is a process-centric approach.
This approach tries to understand the process by which the student arrives at the solution. This is
done by building models of the possible paths a student would take to reach a solution and then
comparing those paths to the actual path a student took to solve a problem [30–33]. This provides
insight into potential errors a student may have made.
Research in the intelligent tutoring systems community has shown that intelligent tutors are
effective at teaching a number of skills including math [34–36], reading [37, 38], science [39] and even computer literacy [40, 41]. For example, Cognitive Tutor, one of the most successful intelligent tutoring systems, has been used to teach algebra, geometry, and Lisp [35]. The success of this system has led to its deployment in a number of schools throughout the United States to teach math. Beyond math, Wijekumar and colleagues showed that ITS are more effective than traditional methods of teaching reading comprehension in a Language Arts class [38, 42–45].
Emerging research suggests that learning technology holds promise for assessing and teaching
a range of interpersonal skills [46–48]. Researchers have tried to extend these techniques to so-
cial skills such as public speaking [49], collaborative problem solving [46] and more specifically
negotiation [1, 50, 51]. However, the systems for teaching social skills are limited. Part of the rea-
son is that social skills are much harder to teach. Unlike math, most social skills fall within what
Aleven and colleagues call an ill-defined domain [52], which presents a challenge for intelligent tutors. Social skills lack clear assessment metrics and prescribed formulas to guarantee success. However, these skills are needed now more than ever across numerous professions [53–57]
as success does not depend merely on technical knowledge. Employees must manage interpersonal
relationships, work across cultures, and negotiate both their job offers as well as their job functions
and mobility within an organization.
2.2 Traditional Methods for Teaching Negotiation
Typically, negotiation is taught in a lecture format where students first learn various negotiation
principles. Students then apply their burgeoning knowledge of said principles by performing negotiation exercises against each other [13, 58, 59]. These exercises are intended to
help students gain experience applying these negotiation principles. As the students negotiate with
one another in these exercises, the instructor walks around the room, observing and evaluating the
students’ use of various negotiation principles. Afterwards, an instructor might initiate general
class discussions highlighting specific individuals’ successes and failures. This use of negotia-
tion exercises aligns with an experiential theory perspective, suggesting that learning occurs when
students are able to practice and then reflect upon their performance [3, 60].
An important limitation to this common method of instruction is that, although the lecture-then-
exercise format encourages student practice, little emphasis is placed on personalized feedback.
Instructors’ attention is limited, and they do not have enough bandwidth to evaluate all students’
use of the negotiation principles, especially in large classes. This is problematic as receiving con-
structive personalized feedback is integral to skill development. Furthermore, when attempting to
reflect on their negotiation skills without such personalized feedback, students might try to im-
prove their skills through observing what others did. This may lead students to imitate suboptimal
strategies. In addition to this, in-person courses can be quite expensive. Two of the most well-
known programs are the Harvard Program on Negotiation and Northwestern University’s Conflict
Resolution Institute training program. Programs such as these can range anywhere from $3000 to
$10,000 for a two- to four-week course, a sum the average American cannot afford for a course
[61].
2.2.1 Negotiation Formalization and Task
Negotiation can generally be viewed as an interaction between two or more participants
over one or more issues where parties must decide how best to influence the other’s behavior to
maximize their value in the negotiation [62, 63]. Over the years, research in economics, psychology, and a number of other fields has sought to formalize this interaction and create general abstractions of negotiation to study and teach the subject. Two of the most popular abstractions/games are the ultimatum game and the multi-issue bargaining task. In the ultimatum game, one
party has a set of resources which they must decide how best to split between themselves and their opponent. Once an offer is made, the other party must decide to either accept the offer or reject it. If they accept the offer, the resources are split according to the agreed-upon offer; if they reject it, both parties gain nothing. The other structure commonly used in research and teaching is the multi-issue bargaining task (I provide an overview
of this task below). This is the framework used in this thesis.
2.2.1.1 Multi-Issue Bargaining Task
The multi-issue bargaining task consists of two or more parties seeking a common agreement over a set of issues (e.g., gold, silver, etc.). Each issue contains a number of levels (the quantity of that issue, e.g., 3 gold bars, 4 silver bars). Each party has a unique preference ranking over the
issues, which is represented by the payoff matrix. These preferences tend to be unknown to an
opponent at the start of a negotiation. Preferences are typically modeled as additive functions.
For example, let us assume a sample negotiation contains four issues (gold, silver, copper, and
diamond). The payoff for this hypothetical negotiation is represented in Tables 2.1 and 2.2. In this
example, participants must divide and agree on the division of 4 diamonds, 3 bars of gold, 4 bars
of silver, and 4 bars of copper. Parties may make offers, ask questions, and share information in
hopes of finding an acceptable agreement.
Two concepts in the multi-issue bargaining task that are essential to each party are the payoff structure of the task and the BATNA. The payoff structure can either be integrative or dis-
tributive. When the payoff for a task is integrative, it means that participants have complementary
preferences for a set of issues and thus it is easy to find a win-win deal where both negotiators get
their desired outcome.

Table 2.1: Negotiator A Payoff

Level   Diamond   Gold   Silver   Copper
0       $0        $0     $0       $0
1       $30       $10    $20      $15
2       $60       $20    $40      $30
3       $90       $30    $60      $45
4       $120      -      $80      $60

Table 2.2: Negotiator B Payoff

Level   Diamond   Gold   Silver   Copper
0       $0        $0     $0       $0
1       $15       $40    $10      $20
2       $30       $80    $20      $40
3       $45       $120   $30      $60
4       $60       -      $40      $80

(Gold has no level 4 because only three bars of gold are available.) In the case of the sample negotiation, this would be considered integrative because negotiator A can take diamonds and silver and give negotiator B all the gold and copper to
maximize their outcome. The other type of structure is a distributive payoff where both negotiators
want the same thing. This makes it harder to find a win-win solution as negotiators must concede
some of the issues they care about. Negotiators must also be aware of their BATNA, the Best Alternative To a Negotiated Agreement. This is the value a negotiator receives should the parties be unable to come to a common agreement. All of these collectively form the basis of the multi-issue
bargaining task framework.
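As a concrete illustration of this formalization, the short sketch below represents the payoff matrices from Tables 2.1 and 2.2 as additive per-unit values and computes each party's value for the win-win split described above. The dictionary representation and the offer_value helper are illustrative assumptions, not part of any particular negotiation platform.

```python
# A minimal sketch of the multi-issue bargaining payoff structure,
# assuming additive utilities. Per-unit values mirror Tables 2.1 and 2.2.
NEGOTIATOR_A = {"diamond": 30, "gold": 10, "silver": 20, "copper": 15}
NEGOTIATOR_B = {"diamond": 15, "gold": 40, "silver": 10, "copper": 20}

# Units available to divide: 4 diamonds, 3 gold bars, 4 silver bars, 4 copper bars.
TOTAL_UNITS = {"diamond": 4, "gold": 3, "silver": 4, "copper": 4}


def offer_value(payoff, allocation):
    """Value of an allocation (issue -> units claimed) under an additive payoff."""
    return sum(payoff[issue] * units for issue, units in allocation.items())


# The integrative split described above: A takes the diamonds and silver,
# B receives all the gold and copper.
a_share = {"diamond": 4, "silver": 4, "gold": 0, "copper": 0}
b_share = {issue: TOTAL_UNITS[issue] - a_share[issue] for issue in TOTAL_UNITS}

print(offer_value(NEGOTIATOR_A, a_share))  # 200
print(offer_value(NEGOTIATOR_B, b_share))  # 200
```

Under these payoffs, the split gives each side 200 points, which is why allocating each party the issues it values most is described as a win-win outcome.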
2.3 Using Technology to Teach Negotiation
As an alternative to the classroom role-playing exercises, a number of low-cost technological solu-
tions have been proposed for introductory skills development. However, many of these platforms
lack engagement and feedback comparable to the experience of an in-person training program. The simplest method for incorporating technology into negotiation training is
to upload video lectures online where students can watch those lectures and complete associated
assignments. Instructors can use a learning management system to store the lesson plan along with
video lectures. This is often the case with platforms such as BlackBoard. To reach a large number
of students, instructors can also utilize Massive Open Online Courses (MOOCs). Examples of popular MOOC platforms include Coursera and Udacity. The benefit of this approach is that students
are able to take the course at their own pace. Some courses offer students the ability to engage in
role-playing exercises either via video chat or text-based interaction. Matz and Ebner show that there are pros and cons to this method, one of the main cons being that it lacks the engagement and feedback one might get in an actual course [64].
Beyond online video lectures, commercial platforms have been developed to teach negotiation.
One such product, IDecisionGame, is meant both to be a platform that aids teachers in managing role-playing exercises in the classroom and to provide an opportunity for students to practice [65]. Instructors are able to assign roles to different students in a training exercise as well as collect data on the students’ interactions. This platform can facilitate negotiation simulations via video call. IDecisionGame offers an online agent which enables students to practice. However, this agent’s ability to engage with participants is limited, and it often fails to interact naturally with users. Another commercial system developed for negotiation training is Trenario [66]. Trenario is
a mobile app that introduces students to basic negotiation terms and concepts. It also has built-in
avatars that allow users to practice using a number of different scenarios. However, there is cur-
rently no mechanism for debriefing/feedback.
Researchers have proposed and studied the implementation of virtual agents in simulated ne-
gotiation training. Previous research has shown that individuals can improve their negotiation
abilities by practicing with autonomous agents [1, 2, 67] without losing out on the realism that a
human negotiator provides. Negotiators’ feelings of satisfaction, cooperation, and rapport were
found to be similar whether negotiations were conducted with virtual agents or with human partners [68]. In some ways, the use of virtual agents may be more advantageous than human
partners. As a recent study shows, students felt more comfortable practicing with the computer
than another human [1]. Therefore, in addition to providing opportunities to practice (akin to
classroom role-playing exercises), automated human negotiation partners may help mitigate some
of the aforementioned issues with the anxiety that students experience during negotiations. Nev-
ertheless, previous implementations of these systems face a similar limitation as the classroom
approach. Like in traditional classrooms where negotiation is taught, students receive many oppor-
tunities to learn about the negotiation principles and to practice negotiating; however, there has not
been enough emphasis on providing personalized feedback to all students. Systems that provide
virtual partners for practicing negotiation do not currently provide personalized feedback to users.
2.4 Experiential Learning and the Importance of Feedback
Autonomous negotiation systems can be effective learning tools, as they allow negotiators to prac-
tice and provide personalized feedback. To be most effective, this feedback should be grounded in
good principles and must help students reflect on (and improve) their performance. Experiential
Learning Theory states that learning is best viewed as a process rather than an outcome. This pro-
cess is enhanced when learners are able to engage their cognitive abilities, emotions, perceptions,
and behavior [3] and can reflect upon their subsequent actions. Thus, feedback plays a valuable
role in the learning process. It informs students of their performance and also identifies areas of
improvement. One advantage of autonomous agents is that they can provide targeted feedback
based on objective measures. In a traditional classroom setting, feedback typically involves the teacher sharing with students their performance relative to others in the course and to the principles of expert negotiators. The limitations of this approach are that the feedback is ambiguous, not
personalized, and delayed in time. An automated system can provide more instantaneous feedback
that is personalized to the specific needs of the student. Most of the current research on negotiation
feedback systems has focused on providing feedback before or during the negotiation, and very
little work focuses on providing feedback at the conclusion of a negotiation.
Chapter 3
Quantifying Principles of Good Negotiation
In this Chapter, I take principles proposed in the negotiation literature to measure student success
and show how they can be quantified for an automated agent to reason about a student’s perfor-
mance in a negotiation. To do this, I survey the literature for principles of good negotiators and
show how these principles can be quantified and extracted from a student’s negotiation trace. I
then validate that success in following these principles (as measured by my metrics) leads to better
outcomes in a negotiation exercise.
3.1 Principles of Good Negotiations
To date, a set of principles that guarantee successful negotiation outcomes has not been established. However, the principles proposed in [19, 20, 68] seem to be robust enough to be used as good indicators
of negotiation success. Kelley and colleagues [19] found that good negotiators do a number of
things that correlate with positive negotiation outcomes: they avoid early commitment, make efficient concessions, induce their opponent to concede, shape their opponent’s perception of value,
and gather information in advance of the negotiation. In accomplishing these ends, negotiators
take a number of different actions.
To avoid early commitment means that a negotiator does not concede too early in the negotiation
or commit to a deal prior to understanding their opponent’s preferences. There are many ways to
avoid early commitment. One could begin with a high initial offer, and research has shown that
starting with a high initial offer leads to better outcomes [20]. They could also utilize all of the avail-
able time in the negotiation. By utilizing their time effectively, a negotiator forces their opponent
to become fatigued by the negotiation and concede. Lastly, one could avoid early commitment by
negotiating multiple issues simultaneously rather than individually. In terms of making efficient
concessions, a negotiator must understand their opponent’s preferences. This means negotiators may ask questions directly related to an opponent’s preferences. For those who employ the principle of inducing their opponent to concede, this is typically done using two methods. Negotiators either claim more over the course of the negotiation or continuously reject their opponent’s offers.
This forces an opponent to concede as they lose hope that the negotiator might concede. For those
looking to influence their opponent’s perception of value, negotiators tend to misrepresent their
preferences in hopes that their opponent will undervalue what the negotiator truly cares about. By
doing this, a negotiator may seem like they are conceding by giving up an item they care about
when in reality they are giving away issues they care very little about. Lastly, good negotiators
gather information before the negotiation by researching an opponent and gathering as much data as possible about the negotiation scenario prior to its commencement.
3.2 Quantifying Negotiation Principles
Based on the principles of expert negotiators presented in Section 3.1, a number of quantifiable
metrics were developed. These metrics address three of the five principles: avoiding early commitment, making efficient concessions, and inducing opponents to make concessions. These
three were selected because they are the most straightforward to quantify and prior research has
emphasized their predictive value in determining negotiation outcomes (e.g. [19, 20]). The metrics
are described below according to the principles they correspond to.
3.2.1 Avoiding Early Commitment
For avoiding early commitment, I computed three variables corresponding to Kelley’s sub-principles
(initial claim, agreement time, and single-issue offers) for realizing this high-level principle. Initial
claim measures the fraction of the total outcome space contained in an initial offer. To compute
this variable, I look at the negotiator’s first offer and compute the maximum utility of the outcome
space. From there, the ratio of the value of the first offer to the maximum utility is calculated. For
example, in Table 3.1, at 78 sec, the user made an initial offer. The user offers to take 3 records
and give their opponent two lamps. I compute the value of three records and report what fraction
of the total value of the outcome space is captured by the three records. This output is represented
as the initial claim. Agreement time measures how long it takes negotiators to reach an agreement.
I compute this variable by looking at the time either the agent or the user accepted an offer. We
see that the agent accepted an offer at 132 seconds, so 132 seconds would be stored as the agreement time. Agreement time has been shown to positively correlate with earnings. Another variable is
single-issue offers, which measures the percentage of offers made involving only one issue. I check each offer to see whether it contains only one issue and calculate the percentage of all offers made that are single-issue offers. This has been shown to negatively correlate with earnings [1, 19].
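To show how such metrics might be extracted automatically from a negotiation trace, the sketch below computes initial claim, agreement time, and the single-issue offer percentage over a simplified event log. The Event structure, field names, and payoff values are assumptions for illustration and do not correspond to the actual CRA or IAGO log format.

```python
# A minimal sketch (assumed event format, not the actual CRA/IAGO schema)
# of the "avoiding early commitment" metrics described above.
from dataclasses import dataclass

@dataclass
class Event:
    time: float               # seconds into the negotiation
    speaker: str              # "user" or "agent"
    action: str               # "offer", "accept", "assert", ...
    allocation: dict = None   # for offers: issue -> units the user would claim

USER_PAYOFF = {"records": 30, "lamps": 10, "paintings": 0}   # illustrative values
MAX_UTILITY = 3 * 30 + 2 * 10 + 1 * 0                        # user claims everything

def offer_value(allocation):
    return sum(USER_PAYOFF[issue] * n for issue, n in allocation.items())

def avoiding_early_commitment(trace):
    user_offers = [e for e in trace if e.speaker == "user" and e.action == "offer"]
    return {
        # fraction of the outcome space claimed by the user's first offer
        "initial_claim": offer_value(user_offers[0].allocation) / MAX_UTILITY,
        # time at which either party accepted an offer
        "agreement_time": next(e.time for e in trace if e.action == "accept"),
        # percentage of the user's offers that involve only a single issue
        "single_issue_pct": sum(
            1 for e in user_offers
            if sum(1 for n in e.allocation.values() if n > 0) == 1
        ) / len(user_offers),
    }
```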
3.2.2 Making Efficient Concessions
For making efficient concessions, I measured unasked questions and triangulations. Unasked ques-
tions are the number of questions a user could have asked, but did not, to gain more knowledge about the opponent’s
preferences. This is computed by examining each asserted statement from the opponent, deciding
what it tells us about an opponent and generating a list of possible questions one could have asked
to gain more information about their opponent. For example, in Table 3.1, the agent makes a total
of four assert statements: “L>P” which means “I like lamps more than paintings”, “L>R” asserts
“I like lamps more than records” and two “R>0” meaning “The record has value to me”. From
these statements, one can infer that the agent likes the lamps more than the records and paintings.
Table 3.1: Example Negotiation Log

Time (sec)   Speaker   Action
35           User      Assert R>0
44           Agent     Assert L>P
44           Agent     Assert L>R
49           Agent     Assert R>0
52           Agent     Question P>?
55           User      Assert P=0
78           User      Offer R3:0 L0:2 P0:0
84           Agent     Decline declineOffer
88           Agent     Assert R>0
96           Agent     Offer R1:2 L2:0 P0:1
123          User      Decline declineOffer
124          User      Offer R2:1 L1:1 P0:0
130          User      Offer R2:1 L1:1 P0:1
132          Agent     Accept acceptOffer
However, it is unclear whether the agent likes the paintings more than the records
or records more than the paintings. There are two questions that could be asked to gain clarity
on the opponent’s preferences: “What do you like the least?” or “do you like records more than
paintings?” (Could also be “do you like painting less than records?”, I treat both the same). In
this case, the variable unasked questions would be set to two. Unasked questions are negatively
correlated with earnings [1]. Triangulation quantifies the number of exploratory offers the user
makes. This counts the number of times a user made multiple offers of equal value for themselves,
yet a different value for their opponent. We see at times 124 and 130 seconds, the user makes two
offers of equal value to themselves but two different values to the agent. At 130 seconds, the user’s offer claims the same items for the user as the offer at 124 seconds, but now offers the painting, which has no value for the user, to the agent. This would constitute a triangulation.
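A rough sketch of how triangulation could be counted from the user's sequence of offers follows. It checks consecutive offer pairs and assumes additive payoffs, which is a simplification of the measure described above; the function and argument names are hypothetical.

```python
# A minimal sketch of triangulation counting: consecutive user offers of
# equal value to the user but different value to the opponent.
def count_triangulations(user_offers, user_payoff, opponent_payoff):
    def value(payoff, allocation):
        # allocation: issue -> units the user claims for themselves
        return sum(payoff[issue] * n for issue, n in allocation.items())

    count = 0
    for earlier, later in zip(user_offers, user_offers[1:]):
        same_own_value = value(user_payoff, earlier) == value(user_payoff, later)
        # With additive payoffs, the opponent's value of what remains differs
        # exactly when their value of the user's claim differs.
        different_opp_value = value(opponent_payoff, earlier) != value(opponent_payoff, later)
        if same_own_value and different_opp_value:
            count += 1
    return count
```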
3.2.3 Inducing Opponent Concessions
Lastly, for inducing others to make concessions, I compute the negative concessions and the num-
ber of rejections. Negative concessions measure the number of times a negotiator makes a con-
cession and later negates that concession by bidding with a higher utility. From Table 3.1, at time
78 seconds, the user bid for three records. They later conceded by bidding for two records and
one lamp at 124 seconds. If they were to bid again for items whose values are greater than two
records and one lamp (e.g. 3 records), that would count as a negative concession. Those who made
more negative concessions earned more in a negotiation. The number of rejections measures the
number of times a user rejects an opponent’s offer. This is computed by counting the number of
declineOffer dialogue acts. Table 3.1 shows that the agent declines one offer and the user declines
one. The number of rejections has been shown to positively correlate with earnings [19].
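The two metrics in this subsection can be computed in a similarly direct way. The sketch below is an assumed implementation over a chronological list of the user's offer utilities and the hypothetical trace format used in the earlier sketches, not the system's actual code.

```python
# A minimal sketch of the "inducing opponent concessions" metrics:
# negative concessions and number of rejections.
def negative_concessions(user_offer_values):
    """user_offer_values: the user's own utility for each offer, in order."""
    count = 0
    conceded = False
    lowest_so_far = user_offer_values[0]
    for value in user_offer_values[1:]:
        if value < lowest_so_far:            # a concession
            conceded = True
            lowest_so_far = value
        elif conceded and value > lowest_so_far:
            count += 1                       # backs off an earlier concession
    return count

def number_of_rejections(trace):
    # count the user's declineOffer dialogue acts
    return sum(1 for e in trace if e.speaker == "user" and e.action == "decline")
```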
3.3 Metric Validation
The intent of these metrics is to provide feedback. Feedback is most valuable to
students when it is visibly grounded in reality. For example, research has shown that making a
high initial offer is important to gaining a good outcome [20], but this fact only has pedagogical
value to the extent that a student can clearly see a meaningful impact of this factor on student
behavior in classroom exercises. When a student sees that their own or other students’ outcomes
are significantly impacted by their initial offer, the lesson becomes vivid and meaningful. Thus,
to test the validity of my metrics, I examined how well they predicted actual performance in a
negotiation task. I obtained a dialogue corpus from the Conflict Resolution Agent and tested
whether our method does indeed discriminate between good and bad negotiators.
Figure 3.1: Conflict Resolution Agent engaged in a negotiation with participant #362
3.4 Validating Negotiation Metrics
I obtained permission to use a large annotated corpus of students interacting with the Conflict Resolution Agent (CRA) [68] and validated our metrics on this data. CRA (see Figure 3.1) is a game-like
environment which enables researchers and educators to build a number of role playing exercises
for participants and students to engage in the multi-issue bargaining task. It is a “wizard-of-Oz”
system that allows users to communicate through natural language and non-verbal expressions.
CRA was built using the Virtual Human Toolkit [69]. The system works by allowing two “wiz-
ards” to make high-level decisions about the agent’s behavior through a web-based interface [70].
These commands are then used as inputs to a lower-level component which automatically generates the appropriate non-verbal behavior (using the Virtual Human Toolkit’s non-verbal behavior generator [71] to generate the behavior and the SmartBody character animation system [72] to implement this behavior on the agent) as well as its utterances (using NeoSpeech [69]). The agent has over 5000 distinct utterances.
The corpus (described in [1]) consists of 159 negotiations between participants and CRA.
Each participant engaged in some variant of the “Auction War” negotiation. In this negotiation,
both the participant and the agent assume the role of an antique dealer who must decide how best
to divide the collection of items in an abandoned storage locker. Participants were incentivized
to do well (they received tickets of a cash lottery based on their performance) and were given 15
minutes to reach a deal with the agent. The behavior of the agent was controlled by a human wizard
following a pre-determined interaction policy. Before negotiating, each student received a payoff
matrix which described the earnings they would receive (in terms of lottery tickets) based on how
good a deal they could negotiate. Thus, the number of lottery tickets earned serves as our objective
measure. Additionally, all participants were asked to guess how much value the agent assigned to
each item to index how well they understood their opponent’s interests.
3.4.1 Results
Table 3.2: The relationship between negotiation metrics and lottery tickets within the CRA Corpus

Negotiation Metrics              Lottery tickets
Avoid Early Commitment:
  Initial Claim                   .47**
  Agreement Time                  .59**
  Single-Issue Offers            -.16*
Make Efficient Concessions:
  Unasked Questions              -.35**
  Triangulation                   .36**
Induce Opponent Concessions:
  Number of Rejections            .53**
  Negative Concessions            .26**
Note: *p < .05, **p < .01
I examined the relationship between metrics and outcomes by conducting a Pearson’s corre-
lation between each metric and lottery tickets. The results, shown in Table 3.2, illustrate that my
metrics are indeed indicative of negotiation outcomes and are aligned with the findings of [19].
Concerning how participants avoid early commitment, users with high initial offers tend to gain
more lottery tickets (r = .47, p< .001). This shows that participants who make strong initial offers
end up getting more from the negotiation. Additionally, users with higher agreement time gained
more tickets (r = .59, p < .001). Thus, users who utilize more time in a negotiation tend to get more. Moreover, users who made fewer single-issue offers gained more lottery tickets (r = -.16, p = .05). This
means participants who made more multi-issue offers got more in a negotiation. Regarding mak-
ing efficient concessions, the more unasked questions a user had about their opponent’s preference,
the fewer lottery tickets they received (r = -.35, p < .001). This signals that those who asked more
questions about their opponent’s preferences tended to gain more in a negotiation. This could be
due to improved understanding of their opponent’s preferences, but it is inconclusive. Addition-
ally, the more triangulation offers a user makes, the more lottery tickets they received (r = .36, p
< .001); participants who made offers that were of equal value to them but had different values
to their opponent did better in the negotiation. Lastly, in terms of inducing opponent concessions,
users with a higher number of rejections also claimed more lottery tickets (r = .53, p < .001).
That is, users who are less likely to accept an opponent’s offer perform better. Furthermore, users
who made more negative concessions earned a greater number of lottery tickets (r = .26, p = .01).
Thus, negotiators who conceded but later increased their utility in offers tended to do better in the
negotiation.
3.5 Summary of Contribution
In this chapter, I have highlighted some of the key principles taught to novice negotiators. From
these principles, I have shown how a number of metrics can be quantified automatically given
a dyadic negotiation dataset. I have validated these quantitative metrics using experimental data
and shown that they correlate with negotiation outcomes in ways that underscore key negotiation
principles.
1. These results are published in the following paper [73].
Chapter 4
Providing Automated Feedback Based on Negotiation
Principles
Given a set of metrics that are predictive of negotiation outcomes, this Chapter explores the impact
of providing various types of feedback (e.g., generic and personalized feedback) based on these
metrics. I first describe how metrics are converted to feedback and then describe the two studies that validate this approach. I show that personalized feedback is most effective at helping users improve
their negotiation ability compared to generic and no feedback.
4.1 Automated Feedback
To have the greatest impact on learning outcomes, automated assessment should operate in a way
that is independent of the specific scenarios or algorithms students use to practice negotiating.
This feedback should also follow common approaches used in the classroom. Typical feedback
in a negotiation classroom can be divided into two categories: value claiming and value creating. Value claiming feedback provides insight on the negotiator’s
ability to gain more value in a negotiation. To claim more value, a negotiator must make offers
in a way that claims more of the issues they want. Value creating feedback focuses on the extent to which a negotiator gathers information about an opponent and is able to use that information to find agreements that benefit both parties. To accomplish this end, I adopt a set of general
assessments based on the metrics developed earlier. These metrics analyze very basic information
about student behavior during the negotiation (e.g., pattern of offers, messages sent, etc.). From
this, I derive outcome measures (i.e., was the final deal successful at creating and claiming value),
but also process measures that assess the extent to which students used tactics that create and claim
value.
Value Claiming: A student’s ability to claim value was assessed by measuring the individual
points they obtained in the final deal. Another process measure was used to gain insight into
why they may have failed to claim value. Specifically, the point value of the student’s initial
offer was examined. The assessed metrics are input into a decision-tree that chooses feedback
to provide to students. The feedback is based on instructor-crafted templates that include slots
filled by automatic assessments. When students achieve good outcomes or follow recommended
tactics, this is positively reinforced (e.g., “The first offer you made would have gotten you about
76% of the points. Pretty good.”) and the principle emphasized (“By claiming most of what you
want early in the negotiation, you can manage your negotiation partner’s expectations of what
they will receive."). When students fail, these failures are highlighted (e.g., "You failed to fully understand your opponent's preferences. This prevented you from making good tradeoffs") and a specific
suggestion is provided (e.g., “For example, if you realized your opponent wanted bananas the
most, a win-win solution would be giving them all bananas and taking the gold for yourself.”).
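To make the template-and-slot mechanism concrete, the sketch below shows one way value-claiming feedback of this kind could be selected and filled in. It is a minimal illustration only: the function name, thresholds, and message wording are hypothetical assumptions, not the system's actual decision rules.

```python
# Minimal sketch of slot-filled feedback selection. The thresholds and wording
# below are illustrative assumptions, not the tutoring system's actual rules.

def value_claiming_feedback(first_offer_points: float, max_points: float) -> str:
    """Pick a value-claiming message from instructor-style templates and fill its slots."""
    share = first_offer_points / max_points  # fraction of the pie claimed in the first offer
    if share >= 0.70:
        return (f"The first offer you made would have gotten you about {share:.0%} of the "
                "points. Pretty good. By claiming most of what you want early in the "
                "negotiation, you can manage your partner's expectations.")
    if share >= 0.50:
        return (f"Your first offer claimed about {share:.0%} of the points. "
                "Consider opening more ambitiously to anchor the negotiation in your favor.")
    return (f"Your first offer claimed only about {share:.0%} of the points. "
            "Weak opening offers make it much harder to claim value later.")

print(value_claiming_feedback(first_offer_points=95, max_points=125))  # ~76% -> positive message
```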
Value Creation: A student’s ability to create value was assessed by measuring the joint points
achieved in the negotiated agreement (i.e., the points obtained by both the student and the agent).
Several process measures were assessed to gain insight into why a student may have failed to create
value. For example, I assessed a student's ability to employ the tactic of logrolling by the extent to
which they made tradeoffs in their initial offer to the agent (specifically, the number of highest-
value items they claimed minus the number of lowest-value items they offered). The “inefficiency”
of the student’s final offer was measured by the offer’s distance from the Pareto frontier (the set of
deals in which neither the agent nor the participant could have done better without the other doing
worse). This is essentially a measure of how much value is left on the table.
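As an illustration of this inefficiency measure, the sketch below enumerates all possible splits of the Study 1 items (see Tables 4.1 and 4.2 later in this chapter), finds the Pareto frontier, and reports how much joint value a given final deal leaves on the table. The enumeration approach and the exact definition of "distance" used here (extra joint value available without making either side worse off) are my assumptions for illustration, not necessarily the thesis's exact computation.

```python
from itertools import product

# Per-item values from the Study 1 payoff tables (records/lamps/painting).
student_vals = {"records": 30, "lamps": 15, "painting": 5}
agent_vals   = {"records": 30, "lamps": 15, "painting": 0}
quantities   = {"records": 3,  "lamps": 2,  "painting": 1}

def utilities(alloc):
    """alloc maps item -> units the student takes; the agent receives the remainder."""
    student = sum(student_vals[i] * n for i, n in alloc.items())
    agent = sum(agent_vals[i] * (quantities[i] - n) for i, n in alloc.items())
    return student, agent

# Enumerate every complete split and keep the non-dominated (Pareto-efficient) outcomes.
deals = [dict(zip(quantities, combo))
         for combo in product(*(range(q + 1) for q in quantities.values()))]
points = [utilities(d) for d in deals]
frontier = [p for p in points
            if not any(q != p and q[0] >= p[0] and q[1] >= p[1] for q in points)]

def value_left_on_table(alloc):
    """Extra joint value available from frontier deals that leave neither side worse off."""
    s, a = utilities(alloc)
    better = [fs + fa for fs, fa in frontier if fs >= s and fa >= a]
    return max(better) - (s + a) if better else 0

# A deal that hands the agent the painting (worthless to it) wastes 5 joint points.
print(value_left_on_table({"records": 2, "lamps": 1, "painting": 0}))
```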
In order to teach value creation and value claiming, two randomized between-subjects studies were run to compare the impact of personalized feedback against practice alone (i.e., "mere practice"). In one study, students interacted with a "Wizard-of-Oz" system; in the other, with an "off-the-shelf" negotiation agent. Students did an initial negotiation, received the experimental treatment, and then performed a second negotiation to assess any improvement. In preview, Study 1 examined the basic technique for teaching value creation and claiming. I found a benefit of mere practice on both creation and claiming, and a benefit of personalization for value claiming but not value creation. Study 2 attempted to improve on the instructional materials for providing feedback by comparing personalized vs. generic feedback using a fully automated online system. Here, too, I observed a benefit of personalization. Together, the two studies validate the effectiveness of this feedback system across different agent types.
4.2 Study One
4.2.1 Study Design
Agent Design: In this study, participants completed two negotiations with the Conflict Resolution
Agent (CRA). As described in 3.4, CRA is a “Wizard-of-Oz” system that enables participants to
engage in a multi-issue bargaining task through both verbal and non-verbal communication.
Participants: A total of 63 participants (34 female and 30 male) were recruited through Craigslist. Participants were compensated $30 for their participation. Technical failures left 3 participants' data unusable; thus the analyses were conducted on data from the remaining 60 participants (30 per condition). In addition to the amount paid for participation, participants were incentivized to perform well by being entered into a $100 lottery, with entries based on how much they obtained in the negotiation.
Negotiation Task: In each of the two negotiations, participants were asked to role-play as
an antique salesperson participating in a multi-issue bargaining task. In this task, there were six
items to be negotiated in each negotiation. Participants had ten minutes to negotiate with the
agent. In the first negotiation, these items included three records, two lamps, and a painting. In the
second negotiation, the items were changed to chairs, plates, and a clock, respectively. This change
was done to prevent the participant from knowing the agent’s preferences before the negotiation.
However, these three item types were direct analogs to the original items in terms of value. For
simplicity, I will refer only to the original item types (records, lamps, and painting).
The goal of each negotiator in this task was to reach an agreement that afforded them the
highest total value. Each item had a set value to the participant and agent. For both players, the records were worth 30 points each and the lamps 15 points each. This was designed to be a distributive negotiation; thus, items were generally of equal value to both negotiators. The painting was the
only item that held a different value to the participant: it was worth 5 points to the participant but
had no value to the agent.
Table 4.1: Agent Payoff

         Records          Lamps            Painting
  Level  Value     Level  Value     Level  Value
  0      0         0      0         0      0
  1      30        1      15        1      0
  2      60        2      30
  3      90

Table 4.2: Participant Payoff

         Records          Lamps            Painting
  Level  Value     Level  Value     Level  Value
  0      0         0      0         0      0
  1      30        1      15        1      5
  2      60        2      30
  3      90
Participants could thus discover that the painting could be claimed without consequence, as
it had no value to the agent. Although all participants reached an agreement, they were told that
if they failed to reach an agreement within the 10-minute limit (or chose to walk away from the
negotiation), they would receive one of their highest-priority items as an alternative to a negotiated agreement. Thus, the negotiation outcome for the participants ranged from 30 to 125 points.
Prior to each negotiation, participants were given a description of the task, which gave the
relative value of each item. Specifically, they were told that each of the records was worth at least
twice as much as one lamp, and that the painting was the least valuable item. They were told that
they would earn tickets toward the lottery for $100 based on how many (and which) items they
acquired in the negotiation.
Experimental Manipulation: In this study, participants completed two negotiations with the
Conflict Resolution Agent (CRA). They were randomly assigned to either the feedback (exper-
imental) or control conditions: participants in the experimental condition received personalized
feedback on their first negotiation performance prior to the second negotiation beginning. How-
ever, those in the control group were told to just reflect on the negotiation for five minutes after
receiving their score following the first negotiation.
Measures: Participants were given a short quiz to verify that they understood the negotiation
task, their priorities for the different item types, and their real-world incentive to do well in the
negotiation (entries into a lottery for $100). The negotiation began once the quiz was checked and
any misunderstandings resolved. Participants were measured according to the metrics described
above: strength of initial offer, value claimed (total across negotiation and average for any given
offer), and identifying the value of the items to the agent (value creating). Additionally, the final
outcome of the negotiation was measured.
4.2.2 Results
Participants' negotiation metrics were all analyzed by performing 2 (feedback: personalized feedback versus no feedback) x 2 (time: negotiation 1 versus negotiation 2) mixed ANOVAs. First, metrics around value claiming were analyzed: strength of initial offer, and value
claimed (total across negotiation and average for any given offer). Analysis of initial offer revealed
that a significant main effect of time (F(1,47) = 9.10, p = .004) was qualified by feedback condition
(F(1, 47) = 8.26, p = .006). Participants who received personalized feedback made stronger initial
offers in the second negotiation (M = 95.42, SE = 2.91) than the first (M = 78.96, SE = 2.97; F(1,
23) = 15.07, p = .001), but there was no difference in the control condition (M = 80.60, SE = 2.91
vs. M = 80.20, SE = 2.91; F(1, 24) = 0.01, p = .91). Next, value claimed was analyzed across the
negotiation. Again, there was a main effect of time (F(1, 48) = 7.37, p = .009), which was qualified
by feedback condition (F(1, 48) = 3.92, p = .05). Those who received personalized feedback tried to claim more total value in the second negotiation (M = 476.80, SE = 39.58) than the first (M = 322.00, SE = 31.25; F(1, 24) = 8.89, p = .006), whereas there was no difference in the control group (M = 367.69, SE = 39.58 vs. M = 343.40, SE = 31.25; F(1, 24) = 0.35, p = .56). To test whether
the effect found for total value claimed could be found at any given point during the negotiation,
I also analyzed the average value claimed during each offer. This also revealed a main effect of
time (F(1, 47) = 24.79, p< .001), which again was qualified by the feedback condition (F(1, 47) =
7.14, p = .01). While only marginally more value was claimed by the control group in the second
negotiation (M = 73.37, SE = 2.00) than the first (M = 69.90, SE = 1.65; F(1, 24) = 3.38, p =
.08), participants who received feedback between negotiations made higher average claims in the
second negotiation (M = 82.42, SE = 2.05) than the first (M = 70.88, SE = 1.68; F(1, 23) = 23.81,
p < .001). For value creating, participants were marginally better at identifying the value of the items to the agent in the second negotiation (M = 6.27 object relationships found, SE = 0.49) than in the first (M = 5.24 object relationships found, SE = 0.42; F(1, 56) = 3.42, p = .07); however, there was no interaction with condition (F(1, 56) = 0.39, p = .54). Finally, the ultimate outcome (final
score) of the negotiation was then analyzed. As with the above metrics, the significant main effect
of time (F(1, 58) = 45.28, p < .001) was qualified by feedback condition (F(1, 58) = 13.47, p =
.001). While only marginally better outcomes were obtained for the control group in the second
negotiation (M = 58.50, SE = 1.92) than the first (M = 54.33, SE = 2.19; F(1, 29) = 3.92, p = .06),
participants who received personalized feedback significantly improved in the second negotiation (M = 67.33, SE = 1.92) compared to the first (M = 53.17, SE = 2.19; F(1, 29) = 67.05, p < .001). (These results are published in [50].)
4.3 Study Two
One of the limitations of the previous study is that users did not interact with a fully automated system. My next step was to create a task-agnostic feedback system that was fully automatic and to verify that these promising findings still hold. Thus, I adapted my metrics and feedback system to work with an off-the-shelf automated negotiation platform (the IAGO platform [74]).
Figure 4.1: Default IAGO Agent Interface
Agent Design: In this study, participants negotiated with an agent built using the IAGO online negotiation platform (see Figure 4.1). This platform allows researchers and developers to build a range of agents with which students can practice negotiating. IAGO is designed to support the basic tactics that expert negotiators use to create and claim value. Negotiators can exchange offers but also information (e.g., "Do you like A more than B?") and send other messages such as threats and
emotions. The platform also provides tools to customize agent behavior, including the ability to
incorporate common biases shown by negotiators (such as the fixed-pie bias). It has been used
by researchers to build human-like negotiating agents [75, 76]. To better simulate the experience
of a novice student, I adopted an existing IAGO agent that incorporates several common biases
found in novice negotiators. The agent incorporates behaviors that tend to undermine value creation. Specifically, it adopts a "fixed-pie bias" (it assumes it is fighting over how to divide a fixed-sized pie) and is not motivated to exchange information unless the student initiates the exchange (but it will use preference information if the student provides it). It also incorporates behavior that challenges the student's value claiming. Specifically, the agent employs anchoring (it makes a strong initial offer). Finally, the agent adopts a fair concession strategy. After its initial anchor, it
responds to user offers by adjusting them towards a fair split (based on whatever knowledge it has
about the student’s preferences).
Participants: 240 English-speaking participants from the United States were recruited via Amazon Mechanical Turk following standard experimental practices. To motivate their performance, participants were paid for their participation in the study and entered into a lottery to win a prize of $10. 54 participants were removed because they had previously completed Study 1 (17) or because the system crashed (37).
Negotiation Task: Participants were asked to engage in two negotiations. Each had the same mathematical structure (a 4-issue, 6-level multi-issue bargaining task) but used a different cover story and a different ordering of the issues to obscure this similarity. The tasks were framed as a negotiation between antique dealers on dividing the contents of an abandoned storage locker. The first negotiation involved splitting 5 bars of gold, 5 bars of iron, 5 shipments of spices, and 5 shipments of bananas. The second involved 5 clocks, 5 records, 5 paintings, and 5 lamps. Both the agent and the participant had distinct preferences across the items, and neither the agent nor the participant knew the other's preferences. The structure of these points ensures that parties can create value by making tradeoffs between items (e.g., in the first negotiation, the player can create value by taking all the gold and iron and offering all spices and bananas). Prior to each negotiation, participants were told how much each item was worth to them (see the agent's and participant's payoff matrices in Tables 4.3 and 4.4). In addition to the worth of the items, participants were also told they would receive only 4 points if they failed to reach an agreement.
Table 4.3: Agent and Participant’s Payoff Matrix Negotiation 1
Gold Iron Bananas Spices
Agent 1 2 3 4
Student 4 3 2 1
Table 4.4: Agent and Participant’s Payoff Matrix Negotiation 2
Gold Iron Bananas Spices
Agent 2 3 1 4
Student 3 2 4 1
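To illustrate how this point structure rewards tradeoffs, the short sketch below compares the joint value of a near-even split against the logrolling split described above (the player takes all gold and iron, the agent takes all spices and bananas), using the Negotiation 1 weights from Table 4.3 and assuming the linear weight-times-quantity scoring described in the text.

```python
# Joint value of two example splits under the Negotiation 1 weights (Table 4.3),
# assuming 5 units per item and linear weight-times-quantity scoring.
agent_w   = {"gold": 1, "iron": 2, "bananas": 3, "spices": 4}
student_w = {"gold": 4, "iron": 3, "bananas": 2, "spices": 1}
UNITS = 5

def joint_value(student_take):
    """student_take maps item -> units the student claims; the agent gets the rest."""
    student = sum(student_w[i] * n for i, n in student_take.items())
    agent = sum(agent_w[i] * (UNITS - n) for i, n in student_take.items())
    return student + agent

near_even = {i: 2 for i in student_w}                            # student takes 2, agent takes 3
logroll   = {"gold": 5, "iron": 5, "bananas": 0, "spices": 0}    # trade across issues
print(joint_value(near_even), joint_value(logroll))              # 50 vs. 70: the tradeoff grows the pie
```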
Experimental Manipulation: Participants were randomly assigned to one of three experimen-
tal conditions:
1. Personalized Feedback: Participants were provided personalized feedback on their initial
claim, an understanding of their opponent’s preferences, and the overall value of their final
claim. Prior to providing participants with feedback, the system asked participants whether they remembered their own preferences as well as their opponent's. This information was logged and used to guide the personalized feedback.
2. Generic Feedback: Participants received feedback using the same templates as with personalized feedback, but whereas the slots were filled with the student's own behavior for personalized feedback, in generic feedback they were filled with information from the same hypothetical "generic" student and described as feedback on a hypothetical negotiation. For example, participants were shown the initial offer, the final deal, and the information exchanged in a hypothetical negotiation. They were provided suggestions on how well that person did and how their results could have been improved.
3. No Feedback: Those in the no feedback condition were told the points they received but
provided no other information.
Figure 4.2: Personalized vs Generic Feedback
Figure 4.2 shows an example of personalized value-claiming feedback. This is contrasted with generic feedback (the feedback a student might receive if personalization were not available). In this example, both the personalized and generic feedback are shown for a participant whose initial offer claimed 30% of the total points. If this was indeed their initial offer, the personalized feedback would reflect that. Regardless of their actual initial offer, the generic feedback uses this single example to illustrate a poor first offer; it does not take the user's initial offer into account.
Measures: Prior to the negotiation, I gathered basic demographic information and self-reported negotiation skill level. Participants received information about their goals in the negotiation and how to use the tool. They were quizzed on this information and had to answer correctly before proceeding. During the negotiation, I automatically derived the metrics described earlier. After completing both negotiations, participants were asked an attention-check question (did they correctly remember their own preferences) and then asked to rate their performance and indicate to what extent they felt the exercise helped improve their negotiation skills. Participants were also asked to indicate which of the learned tactics they would most likely apply in real life.
4.3.1 Results
I performed a 3 x 2 mixed ANOVA to evaluate the effects of feedback on participants' performance.
For value claiming, the new instructions led to a much stronger use of value-claiming tactics in the
personalized condition. In terms of making a strong initial claim, there is a significant benefit of
practice such that participants make stronger initial claims in the second negotiation than the first
(F(1, 183) = 68.516, p < .001), and a highly significant interaction with condition (F(2, 183) =
6.737, p = .002). Those in the personalized feedback condition made higher initial offers in the
second negotiation than those in the generic or no-feedback conditions. However, the better use
of tactics did not translate into a better outcome. There is a significant benefit of practice on final
scores from the first to the second negotiation (F(1,183) = 34.457, p< .001). However, in contrast
with Study 1, though participants obtained the highest score with personalization, the interaction
with condition was no longer significant (F(2,183) = 2.118, p = .123). For value creation, a clear benefit of practice but not of personalized feedback was found. Joint points significantly improved from the first negotiation to the second (F(1,183) = 10.758, p = .001), but there was no interaction with condition (F(2,183) = 2.575, p = .079). The same pattern was observed for the number of questions asked: there was a significant main effect of time (F(1,183) = 15.41, p = .009), but no significant interaction with condition (F(2, 183) = .178, p = .837). Likewise, for logrolling, there was also a significant effect of time (F(1,183) = 38.478, p < .001) but no reliable interaction with condition (F(2,183) = .706, p = .495). Overall, participants claimed more of the items they wanted in the second negotiation than the first; however, this did not differ significantly across conditions. (These results are published in [77].)
4.4 Summary of Contribution
In this chapter, I examined how one may build a feedback system and tested the impact of personalized feedback. The results show that participants who received personalized feedback improved their outcomes in the second negotiation more than those who did not. Furthermore, personalized feedback did help students improve their use of good negotiation principles: it increased learning by helping students make more ambitious offers (in both studies) and claim more final points. Study 2 also emphasized that the behavior of the intelligent agent has a strong impact on students' behavior, and these effects, depending on the agent, may not be well aligned with the pedagogy.
Chapter 5
Value Creating Metrics Using Opponent Modeling
In this chapter, I expand upon the metrics described in Chapter 3 by using opponent modeling techniques to assess how well a negotiator understands their opponent's preferences, and whether they refrain from revealing too much about their own preferences through their offers or explicit preference statements. The results suggest that opponent models can be useful tools for understanding negotiators' value creation behaviors.
5.1 Opponent Modeling
As discovered in the two experiments described in Chapter 4, participants who received feedback improved their value claiming abilities but not their value creating. It is possible that participants either did not understand the feedback or that the feedback did not highlight correctable mistakes they were making in value creating. As such, the next step was to explore different methods for assessing
how well participants understood their opponent’s preferences. One tool used in the automated
agent community is opponent modeling. Opponent modeling is crucial for effective negotiations.
It allows negotiators to maximize joint value from a better understanding of their opponent. The
more a negotiator knows about an opponent, the better they are at finding win-win solutions. Let us
imagine a negotiator is to split a cake with an opponent. Additionally, let’s assume this negotiator
only wants the actual cake and their opponent only wants the icing. A good opponent model
would allow a negotiator to integrate an opponent’s preferences with their own to propose a win-
win deal: i.e., giving their opponent the icing and keeping the rest of the cake instead of splitting
the cake in half. Good opponent modeling ultimately leads negotiators to gain more value for
themselves. Integrative deals, such as the one described above, are said to be efficient, as they
take advantage of tradeoffs across each negotiator’s interests. Ideally, negotiators should focus on
deals that are on the Pareto frontier. These are deals in which neither party can be made better off without making the other party worse off. The cake deal would be
on the Pareto efficient frontier because there is no other way to split the cake where one person
gets more without someone losing a little of the items they value. An example of such a frontier
is graphically depicted in Figure 5.1. Deals below this frontier can always be improved for one
or both negotiators. To find the Pareto efficient deals, a negotiator needs to learn their opponent’s
interest.
Figure 5.1: Pareto Frontier
Research into automated negotiation agents has yielded effective opponent modeling methods
for inferring the preferences of an opponent. Later, I will show how to use these techniques to
lend insight into the above-mentioned errors. Several techniques have been proposed in the AI
literature. These methods differ depending on their assumptions and inputs, and whether the model
generation involves a collection of either offers, preference statement or both.
5.1.1 Offers Modeling
Most automated techniques were developed for agent-agent negotiation and attempt to learn from the pattern of an opponent's offers only ([78] provides a good overview of the current state-of-the-art). Bayesian and frequency models tend to be the most successful and widely used. Bayesian models try to understand an opponent's preferences by finding the most likely candidate from a set of possible preference profiles over all issues. They assume some prior distribution over this set and use Bayes' rule to update their beliefs given a sequence of observations. Frequency models try to learn weights that represent the relative value of each issue. These models estimate the issue weights by noting the frequency with which a particular value of an issue is offered, as in the N.A.S.H. frequency models [78], or by noting how often the amount of an issue being claimed changes, as seen in the HardHeaded model [79].
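The sketch below conveys the general intuition behind such frequency models: issues that an opponent keeps claiming heavily, and rarely concedes on, are treated as more important to them. It is a simplified illustration of the idea only, not a faithful reimplementation of the N.A.S.H. or HardHeaded agents; the update rules and function name are my own.

```python
from collections import defaultdict

def frequency_weights(opponent_offers):
    """Roughly estimate relative issue importance from an opponent's sequence of offers.

    opponent_offers: list of dicts mapping issue -> units the opponent claimed for itself.
    Issues claimed heavily, and rarely conceded on, accumulate larger weights.
    (Illustrative only; published frequency models differ in their exact update rules.)
    """
    totals = defaultdict(float)
    for offer in opponent_offers:
        for issue, amount in offer.items():
            totals[issue] += amount                       # reward consistently high claims
    for prev, curr in zip(opponent_offers, opponent_offers[1:]):
        for issue, amount in curr.items():
            if amount >= prev.get(issue, 0):
                totals[issue] += 1.0                      # bonus for refusing to concede
    norm = sum(totals.values()) or 1.0
    return {issue: weight / norm for issue, weight in totals.items()}

offers = [{"gold": 1, "spices": 5}, {"gold": 1, "spices": 5}, {"gold": 0, "spices": 4}]
print(frequency_weights(offers))  # spices should come out far heavier than gold
```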
5.1.2 Offers and Preference Statements
Unlike agent-agent negotiations, human negotiators rely heavily on explicit preference statements
[80]. Thus, research on human-human negotiation has extended opponent modeling techniques to
this domain. Previous work by Nazari [81] showed how one could take models from agent-agent negotiations and adapt them to the human-agent scenario. In that work, Nazari and colleagues compared the performance of three models based on the source of information used to generate them: one was based solely on what an opponent said (the Statement Model), another solely on their offers (the Offer Model), and a third on a combination of both (the Offer-Statement Model). This work found the Offer-Statement Model was highly accurate and outperformed models that focused solely on offers or preference statements. This suggests that, for a given negotiation, the Offer-Statement model could serve as an approximate measure of the transparency of the opponent.
5.2 Translating Models into Insight
Novice negotiators make several errors when they negotiate. They fail to communicate their preferences to their opponent, they fail to elicit their opponent's preferences, and they fail to utilize the information available to them. If we assume the Offer-Statement model is a good approximation of a negotiator's transparency, several conclusions naturally follow (these are tested empirically in the study described later in this chapter). First, if the Offer-Statement model fails to learn an accurate
model of a player, this implies errors in information exchange: If humans show low transparency,
this suggests they erred in providing useful information to their agent opponent; if the agent is
opaque, this implies the human negotiator erred by failing to elicit useful information from the
agent (although different types of opponents may also differ in their propensity to share). Second,
if the Offer-Statement model is more accurate than the human at predicting the agent’s preference,
I can infer that the human erred in utilizing the information available to them. For example, they
may have failed to attend to one of the channels of information that the Offer-Statement model
uses. Third, by analyzing differences in the three opponent models (Offer-Only, Statement-Only,
and Offer-Statement), we can further diagnose which channel the human negotiator likely ignored.
For example, if an opponent’s Offer-Only model is more accurate than the Statement-Only, this
implies that the information was mainly present in the pattern of offers.
In piloting my models on human-agent data, one difference was discovered: people tend to
exchange fewer offers and information with IAGO agents than what Nazari found in her human-
human corpus. With IAGO, negotiators exchanged on average 3.7 offers and 2.08 preference
statements, and the agent exchanged 3.11 offers and 3.43 preference statements. In the corpus from [81], humans exchanged on average 5.8 offers and 9.9 preference statements. The consequence is that the model often fails to distinguish between issues. This led me to make some small adjustments to Nazari's method. In the Offer-Only model, Nazari computes the ratio of items a
negotiator claims over the items that are allocated to an opponent. One source of information which
is ignored is the items left on the table (items not claimed by either party). In this model, I treat the
items left on the table as items the negotiator does not want. Therefore, instead of computing the
weight per issue as a ratio of items claimed over the items given to their opponent, I incorporate the
information about items left on the table. The weight for each item is updated as follows, where $l_k$ is the number of items claimed by the negotiator, $l'_k$ is the number of items given to the opponent, and $l_u$ is the number of items left unclaimed:

$$w_i = \frac{l_k}{l'_k + l_u} \qquad (5.1)$$
As prior research indicates that, in the absence of information, people tend to assume their opponent wants the same things as they do (the fixed-pie bias), ties in issue weights were broken by generating the set of all possible rankings consistent with current knowledge; this helps resolve ambiguity in the model and incorporates a more human-like bias.
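A direct implementation of the adjusted Offer-Only update in Equation 5.1 is sketched below. The variable names are mine, and the handling of an all-claimed issue (an empty denominator) is an added assumption for illustration.

```python
def offer_only_weight(claimed, given_to_opponent, left_unclaimed):
    """Equation 5.1: issue weight inferred from a single negotiator's offers.

    claimed           -- units of the issue the negotiator claimed for themselves (l_k)
    given_to_opponent -- units of the issue allocated to the opponent (l'_k)
    left_unclaimed    -- units left on the table, treated as unwanted (l_u)
    """
    denominator = given_to_opponent + left_unclaimed
    if denominator == 0:
        return float("inf")  # claiming every unit signals a top priority (added assumption)
    return claimed / denominator

# Example: 3 of 5 units claimed, 1 offered to the opponent, 1 left on the table.
print(offer_only_weight(claimed=3, given_to_opponent=1, left_unclaimed=1))  # 1.5
```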
5.3 Study
To validate my approach, I recruited a panel of participants to practice negotiating against one
of several possible automated opponents. I evaluated how well students understood their oppo-
nents and how this impacted their ability to create and claim value. I then tested if my proposed
method yields the predicted insights into student errors. All participants negotiated with the IAGO
platform.
Classifying Novice Negotiators by Error: Opponent models provide a continuous measure
of transparency, but in assessment it is often useful to discretely classify students into different
categories to guide feedback. In the remainder of the chapter, I divide the negotiators into groups
based on how well they themselves understood their agent’s preferences. I divided participants into
tertiles (three equal-sized groups) and labeled them either A-students, B-students, or C-students
based on how well they modeled their opponents. A-students can essentially be seen as experts
whereas the other groups should be targeted for feedback. In the experiments that follow, I also find
that this grouping helps to better visualize the consequences of failing to model one’s opponent.
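A small sketch of this grouping step is shown below, assuming each participant has a single rank-distance score (lower meaning a more accurate model of the opponent); the cut points and labels are simply tertile boundaries, and the function name is illustrative.

```python
import numpy as np

def label_students(rank_distances):
    """Split participants into A/B/C groups by how accurately they ranked their opponent.

    rank_distances: one rank-distance score per participant (lower = more accurate).
    Returns 'A', 'B', or 'C' per participant based on tertile cut points.
    """
    scores = np.asarray(rank_distances, dtype=float)
    low, high = np.quantile(scores, [1 / 3, 2 / 3])  # tertile boundaries
    return ["A" if s <= low else "B" if s <= high else "C" for s in scores]

print(label_students([0.05, 0.10, 0.22, 0.31, 0.40, 0.47]))  # ['A', 'A', 'B', 'B', 'C', 'C']
```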
Agent Design: Participants engaged in a single multi-issue negotiation task with an IAGO
agent. To ensure that the results were not specific to the behavior of the automated opponent,
participants negotiated against one of four possible agents. Agents varied in terms of two common
differences found amongst human negotiators. First, agents varied as to whether or not they used
anchoring [20]. Anchoring is a negotiation tactic that involves making a very strong initial offer
and has been found to help negotiators claim more value. Second, agents varied as to whether
they adopt a “fixed-pie bias” [82]. When negotiators exhibit a fixed-pie bias, they approach a
negotiation with the assumption that their opponent wants the same things as they do unless the
opponent reveals information that contradicts this assumption. Non-fixed-pie agents assume the
negotiation is maximally integrative unless the opponent reveals otherwise. Other than these two
factors, the agents followed the default “Pinocchio” agent behavior provided by the IAGO agent
platform [83]. Anchoring and bias were manipulated independently to yield four agent types.
Participants: A total of 609 English-speaking participants were recruited via Prolific, an online platform similar to Amazon Mechanical Turk that is often used for recruiting research partic-
ipants. Of the 609, 132 were removed for failing to pass the attention check questions and other
requirements, leaving 477 negotiations in the corpus. To motivate their performance, participants
were paid for their participation in the study and provided tickets into a $100 lottery proportional
to their outcome in points.
Negotiation Task: Participants engaged in a multi-issue bargaining task in which they and
the agent had to divide a number of items between them. Participants had a total of 10 minutes to negotiate. The items to be divided were as follows: 7 bars of gold, 5 bars of iron, 5 shipments of spices, and 5 shipments of bananas.
Both the agent and participant had unique preferences across the items, and neither the agent
nor the participant knew the other’s preference. The payoff metric for each negotiator is shown in
Table 1. Prior to the negotiation, participants were told how much each item was worth to them.
In addition to the worth of items, participants were also told they would receive only four points if
they failed to reach an agreement. The task allows the opportunity to create value. Agreements can
be made more efficient by trading off value between iron and bananas. The joint value of the final
deal is maximized if the participant claims all the iron and the agent claims all the bananas. Gold
and spices are fixed-pie issues. Participants can create more efficient solutions if they correctly
model their opponent’s preferences.
Measures: To assess the quality of the outcome, I measure the individual points obtained by
the participant and the joint points (i.e., the sum of individual points obtained by the participant and
the agent). Participant points are a measure of value claiming. Joint points are a measure of value
creation. Opponent modeling measures: To assess the opponent models, I collect four measures.
Following the negotiation, participants were asked to rank the priorities of their opponent to give
insight into how well they understood their opponent’s preferences. I then ran the three automatic
models (statement model, offer model, and dual model) over the IAGO logs to give an “expert”
opinion on how well the opponent could have been modeled, in principle, from the various infor-
mation channels. Each of these approaches yields a ranking over the opponent’s priorities. I then
adopt a standard approach to quantify the accuracy of these four models. A number of approaches
have been proposed for assessing the accuracy of an opponent model. Baarslag and colleagues [78] provide an overview of the state-of-the-art in evaluating opponent modeling techniques. One common measure used to assess the accuracy of an opponent model is the rank distance. Given that it is common practice to represent an agent's preferences as a rank ordering over a set of issues, I felt that it would be the best metric for measuring differences between rankings. This is done
by comparing the utility of all possible deals (W) given a rank $r_a$ and a rank $r_b$, and computing the average number of conflicts:

$$d_r(r_a, r_b) = \frac{1}{|W|^2} \sum_{w \in W,\, w' \in W} c_{<_{r_a}, <_{r_b}}(w, w') \qquad (5.2)$$
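One straightforward instantiation of Equation 5.2 is sketched below: the two issue rankings are converted into additive utilities over the outcome space, and the fraction of outcome pairs that the two rankings order in opposite ways is returned. The additive-utility assumption and the level count are mine; the thesis's implementation may differ in detail.

```python
from itertools import product

def rank_distance(rank_a, rank_b, levels_per_issue=6):
    """Fraction of outcome pairs that rankings rank_a and rank_b order in conflicting ways.

    rank_a, rank_b: dicts mapping issue -> weight implied by the ranking
    (e.g., 4 for the most important of four issues down to 1 for the least).
    """
    issues = list(rank_a)
    outcomes = list(product(range(levels_per_issue), repeat=len(issues)))
    utils_a = [sum(rank_a[i] * lvl for i, lvl in zip(issues, w)) for w in outcomes]
    utils_b = [sum(rank_b[i] * lvl for i, lvl in zip(issues, w)) for w in outcomes]
    n = len(outcomes)
    conflicts = sum(1 for x in range(n) for y in range(n)
                    if (utils_a[x] - utils_a[y]) * (utils_b[x] - utils_b[y]) < 0)
    return conflicts / n ** 2

# Perfectly opposed rankings over four issues (small outcome space keeps the demo quick).
print(rank_distance({"gold": 4, "iron": 3, "bananas": 2, "spices": 1},
                    {"gold": 1, "iron": 2, "bananas": 3, "spices": 4},
                    levels_per_issue=3))
```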
5.3.1 Results
5.3.1.1 Understanding of Negotiator’s Transparency
I claimed that opponent models could serve as an objective way to characterize how transparently
a negotiator communicates their preferences and which channel (statement vs. offers) is the most
diagnostic. Here I examine if this notion of transparency gives insight into the behavior of the
agent and student negotiators.
Agent Transparency: The different agents adopt quite different tactics, and I expected this to impact their transparency. To test this, I examined the transparency of the different automated agents based on their type (anchoring and fixed-pie bias). Figure 5.2 shows the accuracy of the
three automated models and the users’ estimate broken out by the four agent types: optimistic (i.e.,
no fixed-pie) anchoring, optimistic no-anchoring, fixed-pie anchoring, and fixed-pie no-anchoring.
This Figure also shows the result of collapsing across agent types (average agent).
Figure 5.2: Accuracy of Model by Agent Type
Overall, as expected, the most accurate inferences come from combining both information
channels (i.e., the Offer-Statement model), though the statement model also yields reasonable
accuracy. The offer-only model performed worse across most agents. Users performed uniformly poorly in their estimates. The results also show differences in which channel was most diagnostic
depending on the agent type. There were significant, sizable differences between the different
models (F(3, 1425) = 169.66, p < .001; see average agent in Figure 5.2), and the models also
varied in their differences by agent type (F(3, 1425) = 28.43, p < .001; see remainder of Figure
5.2). The differences are driven by the optimistic-anchoring agent. This can be explained by the
fact that, as this agent assumes there is a win-win solution, it leads with a strong initial offer that
incorporates tradeoffs (it claims all of what it wants most while offering the participant all of what
it wants least). Note also that the fixed-pie agents are the least transparent in regard to their offers.
Again, this can be explained by the fact that, in contrast to optimistic agents, fixed-pie agents split
issues evenly unless the participant reveals their own asymmetric preferences. Thus, the offers of
fixed-pie agents provide little information about their true preferences (just as tends to occur in
human negotiators that hold this bias).
Participant Transparency: Opponent models should be able to provide insight into how well
participants are communicating their own preference information. To test this, I examined the
transparency of the human participants by group (see Figure 5.3). There are clear differences in
transparency by group, with A-students the most transparent (F(2,474) = 11.762, p < .001). I also see that the Offer-Statement model is better at predicting the preferences of A-students, though these are less transparent than the automated agents (F(2, 474) = 156.860, p < .001). Unlike the automated agents, participants communicate more information through their pattern of offers.
5.3.1.2 Investigative skills of a Negotiator
The above analysis shows that some negotiators are easier “to read” than others. While this is
partially due to the characteristics of the negotiator, it also reflects their opponent’s skill in draw-
ing out diagnostic information through asking good preference questions and through exploring
tradeoffs in their pattern of offers. I next examined how well students could draw out diagnostic
information from their automated opponent. To do this, I examined how accurately I could infer
the automated agent’s preferences based on the skill of the human participant (i.e., A-students vs.
B-students vs. C-students). If one group is better at interrogating their opponents, this should allow
automated techniques to more accurately predict these opponent’s preferences. Figure 5.3 shows
how accurate the different opponent models were at predicting the agent's preferences, compared to the students' own estimates, broken out by group. A-students are as good as the Offer-Statement model at estimating their opponent's preferences (F(1, 316) = 1.747, p < .5). This suggests that A-students are effective at integrating both offer and preference information into their estimates. The B- and C-students performed much worse than the agent-based models. (These results are published in [84].)
Figure 5.3: Accuracy of Agent’s preference across Groups
5.4 Summary of Contribution
In this chapter, I showed that although the human-agent negotiation is different from human-human
and agent-agent negotiation, opponent modeling techniques from both domains can be used in
human-agent negotiation to assess a negotiator's value creation ability. I evaluated the performance of these models by how well they measured negotiators' ability to infer the agent's preferences across various agent types. I show that these models were good at diagnosing both the agent's and the human's preference modeling abilities. Using these models, I can also determine how much information the agent and participant are revealing about themselves versus how much they are gaining
from their opponent. This information can be used to determine if the participant is providing too
much information about themselves which could lead to exploitation from their opponent, or if the
participant is making effective use of the information the agent shared.
Chapter 6
Bringing it all together
Previous chapters have sought to prove, through a series of experiments, that individuals are able
to improve their negotiation ability by practicing with an automated agent and receiving feedback.
This feedback is more effective when personalized based on a set of metrics that are correlated with good human-agent negotiation outcomes. However, the impact of these simulations and this feedback on learning had not yet been evaluated within an online mini-course on negotiation that includes videos and other learning components (the format in which I envision my methods will ultimately be deployed). Thus, this chapter demonstrates how the present methods of using automated agents can be incorporated into an online learning course and evaluates their effectiveness. To do this, an online course was created and evaluated to understand the impact of automated methods for teaching salary negotiation. Results show that the video content did influence participant behavior. In particular, participants who received instruction and feedback were able to think more globally about the negotiation and shifted from a focus on increasing their salary towards making trade-offs across the other issues they cared about.
6.1 Learning Management System and Course Design
In this section, I describe the course content as well as the learning management system. As described in Chapter 2, most online negotiation courses are administered through a learning management system. These systems manage course flow for multi-module courses and track participants' progress and results. Given that this was a multi-module online course, I developed a learning management system to manage each participant's progress in the course and allow them to take the course at their own pace.
6.1.1 Course Content
The mini-course draws on content from a larger semester-long online negotiation course developed
by my committee member, Dr. Peter Kim of the USC Marshall School of Business. For the mini-course, I focus on two of the course modules that address the foundational concepts of value claiming and value creating. From this content, I created a short mini-course on these two key introductory topics. Whereas the original online course incorporates a lab where participants can practice these skills with each other, my course replaces this face-to-face interaction with a practice session against a pedagogical agent. At a high level, the mini-course has the following structure:
1. A brief introductory lecture on negotiation
2. Initial baseline practice on a salary negotiation exercise.
3. A lecture on value claiming followed by an assessment of their baseline performance in terms
of value claiming, and personalized pedagogical feedback on how they could improve.
4. A lecture on value creation followed by an assessment of their baseline performance in terms
of value creation and personalized pedagogical feedback on how they could improve.
5. A second simulated negotiation to assess their skill-improvement
6. A concluding lecture that summarizes what they should have learned.
(a) Value Claiming Lecture (b) Value Creating Lecture
Figure 6.1: Screenshot of video lectures by Dr. Peter Kim
6.1.1.1 Video Lectures on Value Claiming and Value Creating
All lectures are drawn from videos from the original course. The introductory video emphasizes
that participants will learn from a mixture of lectures and experiential exercises. As stated earlier in
this thesis, instruction around the topic of value claiming focuses on teaching negotiators a number
of tactics for obtaining the largest slice of the pie. Thus, the video on value claiming encourages participants to understand their Best Alternative To a Negotiated Agreement (BATNA), to establish a bottom line, to set reasonable aspirations, and to make ambitious initial offers and efficient concessions.
Value creating is the notion that negotiators can find common ground with each other and
are thus able to ”grow the pie” in a way where both the negotiators get more of what they want
without the other feeling cheated or that the deal is unfair. The video on value creation highlights
the importance of learning what one's opponent wants, making trade-offs, and focusing on interests rather than positions. In particular, because negotiators often assume their opponent wants the same things (the "fixed-pie bias"), the video focuses on overcoming this bias by exploring tradeoffs across issues.
6.1.2 Learning Management System
I developed a learning management system in order to manage users’ state, results, and feedback.
This system allows participants to complete their training at their own pace. The system enables
the management of the course progression to ensure participants complete modules and lessons in the appropriate order, as well as log their performance.

Figure 6.2: Screenshot of Learning Management System

The course was divided into four modules
with each module containing a set of assignments. The assignments were ordered and presented
based on the experimental condition a participant was assigned to in the evaluation study. Figure 6.2 provides an example of what participants see when they access the course. As one may notice, only
one module is enabled at a time. In this example, a participant must complete module 1 in order
to gain access to the following modules. Figure 6.3 provides a screenshot of how the assignments
within each module are displayed.
6.2 Enhancements to Negotiation Agent
Participants practice their newly-learned negotiation skills with the IAGO agent described earlier
in this thesis. To reiterate, IAGO is a toolkit that allows users to build web-based negotiating agents.
Figure 6.3: Example of Module Assignment List for Module 1 in Control Condition
These agents allow users to engage in a timed negotiation using a menu-driven interface for communicating offers and preferences, as well as asking questions and making statements.

I modified IAGO in a number of ways to support the mini-course's learning goals. The objective of these changes was to reinforce participants' adoption of good negotiation strategies. Thus, the agent was designed to reward participants for clearly communicating their preferences by adjusting its offers based on what the participant wanted. These changes also supported the participant making a high initial offer by adjusting the agent's threshold of acceptable value based on the participant's initial offer. Lastly, participants often fail to pay attention to the information an opponent shares. To reinforce this lesson, I made the agent communicate its preferences more clearly through its pattern of offers so that they would be apparent to the participant. First, the simulation was adapted to support a salary negotiation, and the appearance of the
agent was changed to a more professionally dressed agent with an appropriate job title and company. Figure 6.4 illustrates the new agent's look and feel.
Figure 6.4: Hiring Manager Agent built using IAGO. Users can exchange offers (highlighted in
red), or exchange information about preferences over issues (highlighted in yellow).
In addition to the superficial appearance of the domain, several changes were made to the negotiation algorithm and pedagogical analysis based on lessons learned in Chapters 4 and 5 and on the course's learning objectives. To encourage participants to communicate their preferences more, the opponent mod-
eling methods were enhanced to provide a more detailed and realistic analysis of the information
exchanged during the negotiation: I incorporated the hybrid model that attends to both explicit
preference statements and implicit information contained in the pattern of a player’s offers. Be-
sides providing a more accurate model of the participant’s revealed-preferences, this enhanced
model supports other behavioral enhancements to the agent.
This enhanced model was then used to make important improvements to how the agent makes offers as well as which offers it accepts. By default, the agent communicates its preferences through statements. However, to help participants understand the need to attend to their opponent's offers, a change was made to make the agent's offers more communicative by having the agent claim issues in the negotiation in proportion to its preferences. Previously, the agent made offers based on how much of the pie it wanted: if an offer was within its threshold of acceptable offers, the agent would make that offer. Here, I want the agent to be more communicative to enhance learning outcomes, so the agent makes offers in which it claims levels of issues in proportion to its preferences. For example, if the agent values salary more than stock, it should claim more salary than stock in an offer. To do this, I compute a list of potential offers
that are on the Pareto frontier. An offer is said to lie on the Pareto frontier if it is Pareto efficient. An offer is Pareto efficient if there is no other offer that increases one negotiator's value without hurting the other. More formally, assume there are two negotiators, A and B, each with a preference ranking, and an offer $o$, where $o_A$ and $o_B$ denote the value of $o$ to A and B. The offer $o$ is Pareto efficient if there is no outcome $o'$ such that:

$$(o'_A > o_A \wedge o'_B \geq o_B) \vee (o'_B > o_B \wedge o'_A \geq o_A) \qquad (6.1)$$
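A compact check of this condition, used here to filter a candidate list down to the Pareto frontier, is sketched below; the value functions and candidate list are placeholders, and this is only one way the test in Equation 6.1 might be coded.

```python
def is_pareto_efficient(offer, candidates, value_a, value_b):
    """Equation 6.1: True if no candidate is strictly better for one side and
    at least as good for the other. value_a/value_b score an offer for each negotiator."""
    oa, ob = value_a(offer), value_b(offer)
    return not any((value_a(alt) > oa and value_b(alt) >= ob) or
                   (value_b(alt) > ob and value_a(alt) >= oa)
                   for alt in candidates)

def pareto_frontier(candidates, value_a, value_b):
    """Keep only the Pareto-efficient candidate offers."""
    return [o for o in candidates if is_pareto_efficient(o, candidates, value_a, value_b)]

# Example with offers encoded as (units to A, units to B) of a single good worth 1 point per unit.
offers = [(3, 1), (2, 2), (1, 1)]
print(pareto_frontier(offers, value_a=lambda o: o[0], value_b=lambda o: o[1]))  # [(3, 1), (2, 2)]
```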
Given a set of potential offers $O$, the agent picks the offer that is above its threshold of acceptable value and that claims levels of items strongly correlated with the agent's preference ranking. Thus, the offer $o_f$ the agent selects is expressed as:

$$o_f = \arg\min_{o_i \in O} \; \text{getOffer}(o_i, \text{threshold}) \qquad (6.2)$$

where getOffer is the threshold minus $v_A$, the value of the offer to the agent, divided by $t_P$, the total potential point value of an offer:

$$\text{getOffer}(o_i, \text{threshold}) = \text{threshold} - \frac{v_A}{t_P} \qquad (6.3)$$
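Read together, Equations 6.2 and 6.3 might be implemented roughly as below. The filter keeping only offers worth at least the threshold to the agent is my reading of "above the threshold of acceptable value"; note that, among such offers, minimizing getOffer simply favors the offer most valuable to the agent, with the proportional-claiming behavior coming from restricting the candidate list to the Pareto-efficient offers built earlier.

```python
def get_offer_score(value_to_agent, total_points, threshold):
    """Equation 6.3: threshold minus the agent's share of the total pie."""
    return threshold - value_to_agent / total_points

def select_offer(pareto_offers, value_to_agent, total_points, threshold):
    """Equation 6.2: among Pareto-efficient offers worth at least the threshold to
    the agent, pick the one that minimizes getOffer. value_to_agent scores an offer."""
    acceptable = [o for o in pareto_offers
                  if value_to_agent(o) / total_points >= threshold]
    return min(acceptable, key=lambda o: get_offer_score(value_to_agent(o),
                                                         total_points, threshold))

# e.g., offers scored 0-100 for the agent, with a 60% acceptance threshold:
print(select_offer([70, 65, 80], value_to_agent=lambda v: v, total_points=100, threshold=0.6))  # 80
```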
Lastly, the IAGO agent was updated to respond to the participant's initial offer. One of the lessons covered in the value claiming video was to make a high initial offer, as that has been shown to be strongly correlated with negotiation outcomes. To reinforce this lesson, the agent should respond to the participant's first offer: if the participant makes a high initial offer, the agent should reduce its threshold of acceptable offers, and if the participant makes a weak initial offer, the agent should increase its threshold. Thus, the agent was designed to adjust its threshold of acceptable offers based on the first offer made by the user. The agent begins the negotiation wanting to accept 60% of the total value of the issues being negotiated and adjusts its threshold based on the value of the participant's first offer, within a fixed range (between 40 and 80 percent of the pie). Thus, given a first offer $o_{1st}$ and the previous threshold, the new threshold of the agent is adjusted as follows:

$$\text{newThreshold} = \text{prevThreshold} + 0.20 \cdot \frac{v_A - v_P}{t_P} \qquad (6.4)$$

where $v_A$ is the value of the offer to the agent, $v_P$ is the value of the offer to the participant, and $t_P$ is the total value of the offer.
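The sketch below implements the threshold update as reconstructed in Equation 6.4; the exact functional form is my best reading of the formula, and the clamp to the 40-80% band comes from the range stated in the text rather than from the equation itself.

```python
def adjust_threshold(prev_threshold, value_to_agent, value_to_participant, total_points):
    """Equation 6.4 (as reconstructed): shift the agent's acceptance threshold in
    response to the participant's first offer, clamped to the 40-80% band."""
    new = prev_threshold + 0.20 * (value_to_agent - value_to_participant) / total_points
    return min(0.80, max(0.40, new))

# A participant who opens by claiming most of the pie pushes the threshold down from 60%.
print(adjust_threshold(0.60, value_to_agent=10, value_to_participant=50, total_points=60))  # ~0.467
```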
6.2.1 Research Study
To examine the effectiveness of the mini-course, I devised a two-condition between-subjects exper-
iment contrasting improvements due to instruction and personalized feedback with mere practice
(i.e., negotiating multiple times without pedagogical feedback).
Participants: A total of 150 participants (40 female) were recruited through Amazon Mechanical Turk to participate in the study. Of those 150, 49 were removed either because the system crashed during their session and their data could not be recovered, or because of duplicate entries.
Negotiation Task: Participants engaged in a 4-issue salary negotiation where they assumed
the role of an employee seeking a position at a large technology company and the agent assumed
the role of the company’s HR manager. The issues to be negotiated were salary, bonuses, stock
options, and vacation days. For each issue, participants and the agent had to agree upon one of
ten levels. For example, participants could negotiate a salary between $70,000-$160,000 in $10K
increments (see Figure 6.4). Each party received points based on the level they negotiated for each
issue (seen in Table 6.1). For example, if participants negotiated a salary of $90,000 (l=3), they
would receive 15 points. They were told the total number of points they would receive was the sum
across the four issues (i.e., a linear utility function). Neither the agent nor the participant knew the
other’s preferences. In addition to these preferences, participants were told they would receive only
six points (their BATNA) if they failed to reach an agreement within the seven minutes allotted.
Table 6.1: Agent and Participant's Payoff Matrix

         Salary       Bonuses      Stock Options   Vacation Days
Agent    5*(10 - l)   1*(10 - l)   2*(10 - l)      3*(10 - l)
Human    5*l          3*l          2*l             1*l
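As a quick worked check of Table 6.1, the sketch below computes both parties' points from the negotiated level of each issue, assuming the agent's stock-option entry follows the same multiplicative pattern as the other issues (i.e., 2*(10 - l)).

```python
def payoffs(levels):
    """Points for the agent and the participant given the agreed level (1-10) per issue.

    The agent's stock-option formula is assumed to be 2*(10 - l), matching the
    multiplicative pattern of the other issues in Table 6.1.
    """
    agent = (5 * (10 - levels["salary"]) + 1 * (10 - levels["bonus"])
             + 2 * (10 - levels["stock"]) + 3 * (10 - levels["vacation"]))
    human = (5 * levels["salary"] + 3 * levels["bonus"]
             + 2 * levels["stock"] + 1 * levels["vacation"])
    return agent, human

# A $90,000 salary corresponds to level 3 and is worth 5*3 = 15 points to the participant.
print(payoffs({"salary": 3, "bonus": 5, "stock": 5, "vacation": 5}))
```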
Experimental Manipulation: Participants were assigned to either the control condition,
which assessed the effect of mere practice with the simulation, or experimental condition, which
provided video lectures and personalized feedback. Both groups completed a total of 4 learning
modules, but in different order. All participants first watched an introductory video on negotiation
that introduced the course and the simulation environment, then completed a baseline negotia-
tion to assess their initial performance. Those in the control condition went on to immediately
complete a second simulated negotiation back-to-back, to assess the amount of improvement that
could be expected simply from repeated practice with the simulation exercise. (They also had the
opportunity to view the lessons and feedback after completing both simulations.) Those in the
experimental condition received instruction interspersed with practice. After the baseline simula-
tion, the second module introduced concepts around value claiming. In this module, participants
watched a value claiming lecture and received feedback on how well their baseline negotiation
followed value claiming principles. The third module contained a video lesson on value creating
as well as feedback on how well participants followed those principles in their baseline simulation.
Finally, participants in the experimental condition watched a conclusion video and completed a
second negotiation to compare any learning gains with the control group.
Measures: Prior to the negotiation, I gathered demographic information and self-reported negotiation skill assessments. To assess the quality of the outcome, I measure the individual points obtained by the participant and the joint points (i.e., the sum of the individual points obtained by the participant and the agent). Participant points are a measure of value claiming; joint points are a measure of value creation. Lastly, I assess how well participants grasped the principles covered in the video lectures by examining their aspirations and the trade-offs they made in the negotiation. After the second negotiation, I assessed their feelings about the negotiation outcome and their performance using the Subjective Value Inventory [85]. Specifically, I examined their feelings about the negotiation outcome, their feelings about themselves, and their feelings towards the relationship. To verify that participants actually watched the video lectures, I measured the amount of time participants spent in the lecture section. This serves as a manipulation check.
6.2.2 Results
Manipulation check: One concern with this design is that participants might skip over the video
lectures, essentially rendering the experimental condition equivalent to the control condition. To
assess this possibility, I analyze the extent to which participants watched the video lectures. For
this analysis, I average the amount of time spent watching the value claiming and value creating
lectures. Next, I divided participants into five bins based on the percentage of the video lectures watched. The first bin contained participants who watched less than 25% of both videos, the second contained those who watched 25% to 50%, the third those who watched 50% to 75%, and the fourth those who watched 75% to 100%. The fifth bin contained participants who watched the entire video or spent more than the allocated time on the video (which could mean they rewatched parts of it). This revealed an almost bimodal distribution in which participants either watched nearly all of the video or barely any of it. The histogram in Figure 6.6 shows the percentage of the lectures participants watched, broken into these bins (25, 50, 75, 100, 100+). Given this bimodal distribution, I wanted to understand the impact of watching the video lectures on participants' performance, measured by the
outcome metrics. Based on the emphasis of each lecture, I expected participants who watched more of the value claiming lecture to earn more user points in the negotiation, and those who watched more of the value creating lecture to gain more joint points. To test this, participants in the experimental condition were divided into two groups based on how much of each video they watched: participants in the first group watched less than 50% of the video lectures, and participants in the second group watched 50% or more. A one-way ANOVA was conducted to examine the differences between these groups. There was a statistically significant difference in user points across groups (F(1,49) = 5.023, p = .030), indicating that those who watched more of the value claiming lecture did claim more user points in the negotiation. Analyzing joint points across groups also revealed a statistically significant difference (F(1,49) = 4.097, p = .048). These results provide support that the video lectures were indeed impacting participants' behaviors. These results are highlighted in Figure 6.5.
Figure 6.5: Relationship between video lectures watched and outcome measures
This analysis also revealed that a portion of the participants (12 participants) in the experimen-
tal condition watched less than 25% of the lectures and thus were never exposed to the manipula-
tion. These were removed from subsequent analysis.
Figure 6.6: Percentage of lectures watched by experimental condition
Negotiated outcome: For the remaining participants, user points and joint points were ex-
amined with a 2 (condition: control vs. experimental) x 2 (time: negotiation 1 vs. negotiation 2) repeated measures ANOVA to understand the impact of pedagogy and mere practice on participant outcomes. With regard to the participant’s individual outcome (user points), there was a main effect of practice (F(1,87) = 9.586, p = .003), but there was no interaction effect (F(1,99) = .009, p = .924) (see Figure 6.7). This means that in both conditions participants are performing relatively
the same.
Analyzing joint points revealed results similar to user points: there is a main effect of time (F(1,87) = 3.639, p = .059), but there is no interaction with condition (F(1,87) = 1.037, p = .311).
Figure 6.7: Final User Points across condition
This shows that across both the control and experimental condition, the participants claimed more
joint value in the second negotiation compared to the first (see Figure 6.8).
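A hedged sketch of how this analysis could be reproduced: the thesis describes a repeated measures ANOVA with condition varying between subjects and time within subjects, i.e., a mixed design. The pingouin library below is one possible tool, not necessarily the one used, and all data-frame and column names are assumed.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: two rows per participant (negotiation 1 and 2).
# Assumed columns: pid, condition ("control"/"experimental"),
# time ("N1"/"N2"), user_points, joint_points.
df = pd.read_csv("negotiation_outcomes_long.csv")

# 2 (condition, between-subjects) x 2 (time, within-subjects) mixed-design ANOVA,
# run once for each outcome measure.
for dv in ["user_points", "joint_points"]:
    aov = pg.mixed_anova(data=df, dv=dv, within="time",
                         subject="pid", between="condition")
    print(dv)
    print(aov[["Source", "F", "p-unc"]])
```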
Making tradeoffs: To test the effects of training, I assessed whether or not participants were
making tradeoffs. To do this, I conducted a 2 (condition: control vs. experimental) x 2 (time: negotiation 1 vs. negotiation 2) x 4 (issue: salary vs. bonuses vs. stocks vs. vacation) repeated measures ANOVA. This analysis revealed a significant main effect of issues (F(1,261) = 8.717, p < .001)
and time (F(1,87) = 7.179, p = .009). There is an interaction between issue, time and condition
(F(3,261) = 3.180, p = .025). The results are shown in Figure 6.9 as the differences in levels ob-
tained from N1 to N2. This result shows a clear shift to thinking more globally in the type of
deal participants found in the experimental condition. Participants in the experimental condition
are making trade-offs by claiming more stock at the expense of salary. However, in the control
condition, participants are claiming more salary and similar levels of other issues. This supports
my previous assumption that participants in the experimental group are learning to make tradeoffs between issues, whereas those in the control group are simply trying to get more salary.
Figure 6.8: Final Joint Points across condition
Figure 6.9: Level claimed by condition
Subjective Value of Negotiation: In assessing participants’ feelings towards the negotiation, independent t-tests were conducted. I examined participants’ feelings about the negotiation outcome, their performance, and their relationship with the agent. In terms of participants’ feelings about the negotiation outcome, there is no significant effect of condition, t(84) = -1.378, p = .305, although those in the experimental condition (M = 5.162, SD = .739) scored higher than the control condition (M = 4.92, SD = .866). Analysis of how they felt about their own performance revealed no significant effect of condition, t(84) = -1.725, p = .846. However, participants in the experimental condition (M = 5.02, SD = .883) did score higher than the control condition (M = 4.67, SD = .959).
These two results indicate that participants in both the control and experimental groups felt the
same regarding their final outcomes and their feelings about their performance. Lastly, I examined their feelings about the relationship with the agent. This revealed a significant effect of condition, t(83) = -.398, p = .003. Participants in the experimental condition (M = 5.711, SD = .984) rated their feelings about the relationship with the agent higher than the control condition (M = 5.596, SD
= 1.542). The results are depicted in Figure 6.10.
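The between-condition comparisons on the SVI subscales could be computed along the following lines; again, this is a sketch with assumed column names rather than the actual analysis code.

```python
import pandas as pd
from scipy import stats

# Hypothetical per-participant SVI subscale scores; column names are assumed.
df = pd.read_csv("svi_scores.csv")
ctrl = df[df["condition"] == "control"]
expt = df[df["condition"] == "experimental"]

# Independent-samples t-tests comparing conditions on each subscale.
for scale in ["svi_outcome", "svi_self", "svi_relationship"]:
    t, p = stats.ttest_ind(ctrl[scale], expt[scale], nan_policy="omit")
    print(f"{scale}: t = {t:.3f}, p = {p:.3f}")
```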
Figure 6.10: SVI results for feeling of outcome, self and relationship by condition
6.3 Summary of Contribution
In this chapter, I argue that the methods for providing personalized feedback proposed earlier
in this thesis coupled with enhancements to IAGO can be utilized to develop a mini negotiation
training course. This is evident in that the system is effective in helping participants improve their negotiation tactics. The results show that there is a correlation between participants watching the video lectures and their performance on the outcome measures: participants who watched more of the value claiming lecture obtained more user points in the negotiation, and those who watched more of the value creating lecture gained more joint points. However, the experimental condition did not perform significantly better than the control condition. Participants who watched the training videos and received personalized feedback did do better at making trade-offs between the various issues of the negotiation and set aspirations across a broader set of issues. Although this did not lead to significantly better outcomes, this could be due to the particular issues on which participants in the experimental condition are making tradeoffs. Participants in the experimental group are making tradeoffs
between salary and stock options. Given that salary is worth 5 points and stock options, 2, this
would naturally lower the value of the experimental group’s total points. One explanation as to
why participants are asking for more stock options even though they are worth less is that they may
be bringing outside bias into the negotiation. Rather than focusing on the preferences we provide
them, they may be bringing their natural bias into the role-playing exercise. This is one of the
problems that can occur in a role-playing exercise highlighted by Alexander and LeBaron [86].
These results suggest that utilizing automated agents as a teaching tool can be effectively coupled
with traditional video lectures to help participants improve their negotiation abilities.
Chapter 7
Broader Impact: Negotiation Agent for Social Good
The methods developed thus far have been used primarily in the context of training. So far, I have shown how the metrics developed in Chapter 3 and the opponent modeling techniques presented in Chapter 5 can provide insights into the behavior of a negotiator. In this chapter, we explore how more sophisticated agents using opponent modeling can be utilized to provide insight into societal problems. We focus on understanding the differences in negotiation abilities between men and women when certain threats are presented. Insights like these are also crucial in determining how training can best be leveraged to have the largest impact on students.
7.1 Disparity in Negotiation Process and Input
Starting salaries have a strong impact on career earnings and even small initial differences can
compound into substantial differences over time. Yet many people accept the first salary they are
offered. A 2016 survey by Glassdoor (one of the world’s largest recruiting sites) found that 60%
of new hires failed to negotiate [87] and a 2019 Swedish national survey found equivalent numbers
[88]. This failure to negotiate has consequences: candidates that engage in negotiations achieve
higher salaries, particularly when they engage in open discussions of tradeoffs across different
elements of their compensation package [89]. These disparities are concerning as negotiation suc-
cess varies systematically by discipline and demographics. For example, a recent Academy of Sci-
ences report suggests that students in science, technology, engineering and mathematics (STEM)
are especially underprepared to negotiate successfully [p. 2, 5] and numerous studies document
women’s inability to obtain equitable salaries [88, 90, 91]. Understanding the reasons behind these
disparities can inform interventions for addressing structural inequities across our society and max-
imizing the inclusion of underrepresented groups in CS. Research on negotiation disparity focuses
on either negotiation inputs or processes. Inputs are information and dispositions that might influ-
ence candidates in advance of a negotiation. For example, text analysis of job postings shows they
use language that can put women on the defensive [92]. Women may also bring different goals
to the negotiation, such as greater willingness to accept a low salary in exchange for job security
or flexibility [93]. Processes, in contrast, refer to actions parties initiate during a negotiation. For
example, women may face more aggressive opening offers and greater use of deception by their
counterparts [94]. In this chapter, we explore the potential for intelligent agents to yield insights
into the relationship between negotiation inputs and processes.
7.2 Current Approach to Understanding Negotiation Processes
Existing research into negotiation processes has adopted one of two experimental methods (dyadic
or scripted), each with its own methodological limitations. We argue that intelligent agents offer
a third methodological approach that compensates for the weaknesses of existing methods. In the
dyadic approach, two participants negotiate with each other (one playing the role of a hiring man-
ager and the other playing the role of the prospective employee). Dyadic studies have found, for
example, that women negotiate lower salaries than men when job qualifications are described in
stereotypically male ways [95, 96]. The advantage of the dyadic approach is that it allows for a rich and
natural give-and-take between participants (analogous to what they would face in a real salary ne-
gotiation) and facilitates the study of emergent processes such as information exchange and value
creation [80]. The disadvantage is the difficulty in attributing causality to poor outcomes. Do
women perform badly because they are poor negotiators or do they perform poorly because their
partners treat them differently than men [94]? To address causality, the scripted approach replaces
the hiring manager with a completely deterministic computer program that makes an identical se-
quence of statements and concessions, regardless of the participant’s gender or behavior [e.g., 97].
This ensures the hiring manager is truly blind to the participant’s gender and other characteristics
(thus any differences can be safely attributed to the employee). However, this increase in experi-
mental control comes at a great cost. One of the hallmarks of strong negotiators is the ability to
mutually adapt to one’s partner: to understand their opponent’s interests, communicate their own,
and guide the negotiation towards win-win tradeoffs [80, 89]. Such interactive processes and value
creation are precluded by deterministic scripts.
7.2.1 Using Automated Agents to Understand Negotiation Processes
Intelligent negotiation agents offer a way to merge the strengths of these two methodological ap-
proaches while avoiding their chief limitations. Research on automated negotiation agents has
yielded interactive systems that implement the processes that underlie successful negotiations [98].
For example, agents are able to form accurate models of their opponent’s goals, discover opportu-
nities for tradeoffs, and propose efficient solutions [99]. A recent focus has been to create agents
that can negotiate with human users [2, 75, 100] and even model human-like psychological pro-
cesses. For example, algorithms can simulate or even exploit common negotiation biases such as
the fixed-pie bias [101] – a tendency to assume your opponent holds the same preferences as you
– or the anchoring bias [20] – a tendency to be influenced by your opponent’s opening offer [82].
In this chapter, we argue that human-like agents realize the strengths of the previous two approaches while overcoming their limitations. Like the dyadic approach, they support the study of interactive processes. Like the scripted approach, they allow for strong experimental control over the factors that might shape outcomes, yet they also enable a level of dynamism that allows the agent to adapt its responses to the user’s actions without succumbing to human bias.
We use intelligent agents to explore how negotiation inputs – “gender triggers” [102] and gen-
der differences in negotiation goals [93] – impact the negotiation behavior of CS students. Gender
triggers are descriptions about a potential job that appear to trigger gender-divergent behaviors.
These include terms that activate gender stereotypes [95] or suggest hostility towards women in the
workplace (such as stories or reviews suggesting gender discrimination is rampant). We bring two
novel contributions to this problem. To our knowledge, this is the first work examining gender-
bias in negotiations with agent-based technology. It is perhaps the first work to examine salary
negotiation processes in the context of undergraduate CS majors (prior literature has focused al-
most entirely on business students). This adds to the nascent literature on using virtual agents as
psychological tools [103, 104] but also highlights the advances needed in the field of autonomous
agents to fully realize the potential of such methods. The next section describes the idea of gender
triggers and how they can be manipulated. We then describe the agent-based technology used in
this study, before presenting the experiment and discussion.
7.3 Gender Trigger and Goals
Figure 7.1: Job description (partial) showing female-threatening required characteristics
Research on gender in negotiation often accepts the premise that women exhibit poorer salary
negotiation skills, but some research attempts to qualify this tendency. Rather than claiming that
women are less skilled in general, this research highlights specific situations where negotiation behavior
diverges. Bowles and colleagues introduced the term “gender triggers” for situational factors that
elicit gender-related differences in how people negotiate [105]. The most commonly studied trigger
is stereotype threat [104]. This is the phenomenon that women’s performance suffers when a job is described as requiring stereotypically male traits (e.g., requires “strong bias for achievement and confidence when facing risks”) but the gender gap can be reversed when the job is described as requiring stereotypically female traits (e.g., requires “good listening skills and good intuition in understanding others” [106]). Figure 7.1 illustrates a common way to manipulate stereotype
threat, which we adopt in the present study. Following Shantz [107] and Bem’s Sex Role Inventory
[108], we manipulate stereotype threat by adding language to the description of the job participants
are negotiating. Jobs included a “Required Characteristics” section containing five statements
crafted to be either female-threatening or female-supportive. Female-threatening job descriptions
included statements that candidates should “be the technical authority in your team”, “drive your
team toward perfection” and show “confidence working on large applications” whereas female-
supportive jobs stated that candidates should “be an asset to your team”, “encourage your team to
achieve excellence” and show “readiness to collaborate on large applications.”
Most research on stereotype threat has utilized business students, where ideal candidates are
described as confident, aggressive and self-interested, but stereotypes may play out differently in
CS, where teamwork is often emphasized. Thus, we also explore a second potential gender trigger:
gender-based hostility in the workplace [102]. Prospective applicants often discover a company’s
culture through reviews at job recruiting sites such as Glassdoor. To manipulate the perceived
hostility or openness of a company, we used actual quotes from Glassdoor about real technology
companies’ work environments. For example, female-threatening jobs included reviews stating “sexism is prevalent” and “women have a tough time getting ahead.” For female-supportive jobs, we
modified these lines to reverse the impressions: “sexism is not an issue”, and “women have oppor-
tunities to get ahead.”
Although gender triggers comprise one input to a negotiation, salary disparity could simply
reflect different goals of female versus male negotiators. Some research argues that women’s skills
are judged unfairly by being compared with metrics developed by largely-male negotiation re-
searchers. Instead, women may simply be trying to achieve other sources of value besides salary.
For example, as a group, women tend to be more risk-averse [93] and make greater use of vacation
time [109]. Unfortunately, most laboratory studies do not provide an opportunity to examine nego-
tiators’ goals. A typical study might use a standard job negotiation exercise, such as New Recruit
[110]. In this exercise, each side is provided a number of issues (salary, vacation, etc.) and a fixed
payoff table specifying the points they can earn based on what they negotiate for each issue.
To examine gender differences in negotiator goals, we adapt the standard New Recruit case
to include compensation elements that men and women have been shown to value differently in
prior studies. Specifically, stock options and bonuses are seen as riskier issues to which women
would presumably assign less value, whereas vacation days should be more valued by women, at
least according to this prior research. Rather than assigning fixed value to these issues, however,
we allow participants to assign their own priorities to these issues to assess if these hypothesized
differences are truly present in CS students.
7.4 Study
We performed an IRB-approved experiment to assess the potential of negotiation agents to give
insight into the behavior of CS undergraduates in salary negotiations. Specifically, we examined
(1) students’ willingness and skill at negotiating, (2) if gender triggers explained differences in
negotiation processes and outcomes, and (3) if these differences could be explained by gender-
specific preferences (e.g., are women risk-averse).
Agent Design: To simulate how a recruiter might approach this task, we use IAGO (see Figure 7.2).
IAGO supports online negotiation with human participants. Participants exchange offers through
sliders. Using menus, participants can assert they prefer stocks over salary, or ask the hiring
manager about their relative preferences. Participants can express an emotional attitude towards
the negotiation through emojis, or send canned statements such as “I expect to be compensated
better” or “We should consider each other’s interests.”
Figure 7.2: Hiring Manager Agent built using IAGO
For this experiment, we adapt the default Pinocchio agent provided with the IAGO framework
(see [83] for details). The agent attempts to model how people commonly negotiate, including two
tendencies that make it hard for negotiators to find win-win solutions. First, the agent incorporates
a “fixed-pie” bias (it assumes a zero-sum orientation to the negotiation). Specifically, it tries to
make offers on the Pareto frontier by estimating the human participant’s preferences from their
Figure 7.3: Example dialog involving information exchange
preference assertions (e.g., see Figure 7.3), but this estimate is biased using a fixed-pie prior (i.e.,
absent information to the contrary, the agent assumes the participant ranks issues the same way as
the agent). Second, the agent is reluctant to exchange preference information. The agent responds
truthfully to the participant’s preference requests but does not actively volunteer this information
unless the participant shares first, in which case, it reciprocates each preference revealed. Together,
these properties mean that students will fail to find a win-win solution unless they explore the
recruiter’s underlying preferences.
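The following is an illustrative sketch, not IAGO’s actual implementation, of the fixed-pie prior just described: the estimate of the participant’s issue weights starts as a copy of the agent’s own weights (here the task weights given later in this section), and that estimate is revised only when the participant explicitly asserts a preference.

```python
# Agent's own issue weights (from the study task described below).
agent_weights = {"salary": 5, "vacation": 3, "bonus": 2, "stock": 1}

# Fixed-pie prior: absent other information, assume the participant values
# the issues exactly as the agent does.
estimated_user_weights = dict(agent_weights)

def update_on_assertion(preferred: str, over: str) -> None:
    """Revise the estimate after a user statement like 'I prefer stock to salary'."""
    if estimated_user_weights[preferred] <= estimated_user_weights[over]:
        # Swap the two estimated weights so the model respects the assertion.
        estimated_user_weights[preferred], estimated_user_weights[over] = (
            estimated_user_weights[over],
            estimated_user_weights[preferred],
        )

update_on_assertion("stock", "salary")
print(estimated_user_weights)  # stock is now estimated above salary
```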
We altered IAGO’s text messages to better match the language used in actual salary negoti-
ations. We made one minor adaptation to offer behavior to better align the agent with human
performance. Pinocchio aims to be fair by default, always making offers that match the estimated
Nash Bargaining Solution [111]. We adjusted this offer behavior to incorporate the aforementioned
anchoring bias into the system. As in typical salary negotiations, the agent makes the first offer
and tries to induce an anchoring bias in the participant by making a strong initial offer (1,1,1,1).
However, the agent was also made susceptible to anchoring. Whereas Pinocchio always attempts
to offer 50% of the pie, the revised agent initially tries to claim 60% of the pie but this shifts to as
low as 40% depending on the strength of the opponent’s initial offer.
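A sketch of this adjusted offer target follows. The linear 60%-to-40% mapping is an assumed functional form used purely for illustration; the agent’s actual adjustment rule is not spelled out here.

```python
def agent_target_share(user_first_offer_share: float) -> float:
    """Map the fraction of total value the user claimed in their opening offer
    (0.0 = claimed nothing, 1.0 = claimed everything) to the share of the pie
    the agent will try to claim, clamped to the 40%-60% range."""
    target = 0.6 - 0.2 * user_first_offer_share
    return max(0.4, min(0.6, target))

print(agent_target_share(0.0))  # 0.6: a passive opener leaves the agent aiming for 60%
print(agent_target_share(1.0))  # 0.4: an aggressive opener pushes the agent down to 40%
```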
Participants: A panel of 440 U.S. undergraduate CS students (308 male) was recruited for
this experiment through Qualtrics’ panel services. Two declined to report their gender and were
excluded from analyses. The strong male-skew (70%) matches the gender imbalance in CS [112].
They were paid for their participation and, further, performance was incentivized with entries into
five $25 USD lotteries, one entry for each point earned in the negotiation.
Negotiation Task: The prospective employee (the participant) and a hiring manager (the agent) negotiate over salary, bonus, stock options and vacation. Each issue has 10 discrete levels (see Figure 7.2). Players bid by specifying a level for each issue. There are 10^4 possible negotiated
agreements. Following New Recruit, participants are provided an explicit payoff function that
defines the value of deals as a linear utility function based on the weight they assign to each issue
in isolation.
Participants were told to imagine they were offered a job and must negotiate their final package.
They were provided a cover letter and background material based on actual job descriptions (see
Figure 7.1). They received background research summarizing the typical salary range for this
type of job (and gender triggers that vary with experimental condition). Background materials
specified the expected range for this type of job (including historical data stating that salaries
range from 70120K). Each participant was asked to express their own priorities over the issues
in the negotiation. They assigned weights to issues by distributing 11 discrete points across the four
issues. For example, a participant might assign nine points to salary and two points to vacation if
they strongly preferred salary over stocks and bonuses. The agent always held the same preferences
(as would be expected with any given company). The agent assigned 5 to salary, 3 to vacation, 2
to bonuses and 1 to stock. Participants had ten minutes to complete the negotiation. If they failed
to reach an agreement, each side received six points.
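A minimal sketch of this payoff scheme is given below. It assumes each player’s score is the weighted sum of the issue levels they receive and that whatever the participant does not claim on an issue goes to the agent; the example participant weights and variable names are illustrative, and the exact scoring details of the deployed task may differ.

```python
from typing import Dict

# Agent's fixed weights from the task description.
AGENT_WEIGHTS = {"salary": 5, "vacation": 3, "bonus": 2, "stock": 1}

def deal_value(levels_received: Dict[str, int], weights: Dict[str, int]) -> int:
    """Linear utility: weighted sum of the levels a player receives."""
    return sum(levels_received[issue] * weights[issue] for issue in weights)

# Example participant weights (participants distribute 11 points themselves).
user_weights = {"salary": 6, "vacation": 2, "bonus": 2, "stock": 1}

# Example agreement: the participant claims these levels out of 10 per issue,
# and the agent receives the remainder.
user_levels = {"salary": 6, "vacation": 5, "bonus": 4, "stock": 3}
agent_levels = {issue: 10 - lvl for issue, lvl in user_levels.items()}

print("user points: ", deal_value(user_levels, user_weights))
print("agent points:", deal_value(agent_levels, AGENT_WEIGHTS))
```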
Experimental Manipulation: We employed a 2 (Gender: male vs. female) × 2 (Threat:
female-threatening vs. female-supportive) × 2 (Manipulation type: Required characteristic vs.
Glassdoor comments) between-subjects design. After consenting and completing the demographic
questions, participants were told to imagine that they were offered a job as a software engineer at
a fictitious tech company. Everyone read the same job description, but additional language was
added to manipulate threat in a female-supportive or female-threatening manner. As described
above, threat was manipulated either by a section describing “Required Characteristics” or by a
section that included Glassdoor Reviews.
Measures: To measure the effectiveness of the threat manipulation, participants next completed
a manipulation check by rating the masculinity of the job using 3 items (“...reflects masculine characteristics,” “men would have an easier time...,” “there would be challenges...as a woman”) on a scale from 1 (strongly disagree) to 7 (strongly agree). To rule out whether the threat manipulation impacted the perceived job quality, they also rated the status of the job using 3 items (“...is a high-status position,” “...impressive on a resume,” “...people...consider this an attractive opportunity”) on a scale from 1 (strongly disagree) to 7 (strongly agree). Before negotiating, participants
reported the minimum salary that they would be willing to accept (minimally acceptable salary).
To measure what issues participants valued, they were asked to prioritize the relative importance
of salary, bonus, stock options, and vacation time by assigning points to each issue (out of eleven
total). Participants then engaged in the negotiation with the virtual agent. The interface allowed
the user to: accept or reject the agent’s offer(s), make offers themselves, as well as send prepro-
grammed responses and emojis (happy, sad, angry, surprised, neutral). Counts of each of these
responses were tracked and served as dependent variables. We focus on: accepting the agent’s offer
(vs. making offer(s) themselves) and the number of times each emoji was used. In addition to
the pattern of offers, the final outcome for participants and the agent, as well as joint points, were
derived. Two measures were derived from joint points. Maximum possible integrative potential
was calculated by finding the maximum joint points of any deal that could be reached between a
given participant and the agent (using the participant’s stated points); Realized integrative poten-
tial was measured by the joint points obtained compared to the maximum joint points possible.
Finally, individual differences in motivation were also measured. Specifically, participants rated
themselves on 4 items for each type of motivation using a scale from 1 (not at all) to 7 (to a great
extent). For achievement motivation, they rated to what extent they would 1) get their work tasks
done, and 2) get them done well, 3) get a lot of work accomplished, and 4) finish their work in this
job. For communal motivation, they rated to what extent they would 1) get along with coworkers,
2) be a team player, 3) be liked by coworkers, and 4) build relationships in this job.
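The two joint-point measures defined above can be computed by brute force over the 10^4 possible level assignments. The sketch below reuses the earlier assumption that the participant claims a level on each issue and the agent receives the remainder; function and variable names are illustrative, not the thesis code.

```python
from itertools import product
from typing import Dict

ISSUES = ["salary", "bonus", "stock", "vacation"]
LEVELS = range(1, 11)  # 10 discrete levels per issue
AGENT_WEIGHTS = {"salary": 5, "vacation": 3, "bonus": 2, "stock": 1}

def max_integrative_potential(user_weights: Dict[str, int]) -> int:
    """Maximum joint points over every possible deal for this pair of payoffs."""
    best = 0
    for levels in product(LEVELS, repeat=len(ISSUES)):
        user = sum(lvl * user_weights[i] for i, lvl in zip(ISSUES, levels))
        agent = sum((10 - lvl) * AGENT_WEIGHTS[i] for i, lvl in zip(ISSUES, levels))
        best = max(best, user + agent)
    return best

def realized_integrative_potential(joint_points: int,
                                   user_weights: Dict[str, int]) -> float:
    """Joint points actually obtained, as a fraction of the maximum possible."""
    return joint_points / max_integrative_potential(user_weights)

example_weights = {"salary": 6, "vacation": 2, "bonus": 2, "stock": 1}
print(realized_integrative_potential(70, example_weights))
```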
7.4.1 Results
Manipulation Check: The manipulation was successful; participants in the female-threatening
condition reported that the position was significantly more masculine (M = 4.89, SE = 0.12) than
those in the female-supportive condition (M = 4.21, SE = 0.11; F(1, 430) = 18.66, p < .001).
The manipulation type also affected the perception of masculinity such that participants in the
Glassdoor comments condition reported that the position was significantly more masculine (M
= 4.72, SE = 0.11) than those in the Required Characteristics condition (M = 4.38, SE = 0.11;
F(1, 430) = 4.59, p = .03). Moreover, there was a significant interaction such that these effects
were driven by the female-threatening Glassdoor comments condition (M = 5.27, SE = 0.16; F(1,
430) = 6.70, p = .01), which was perceived as the most masculine, followed by the female-threatening Characteristics condition (M = 4.52, SE = 0.17); ratings in the female-supportive conditions were
lower and comparable to each other (M = 4.17, SE = 0.16 vs M = 4.24, SE = 0.15, respectively).
Although it was only a trend (F(1, 430) = 2.60, p = .11), the effect of the manipulation was slightly
stronger for female participants (M = 4.05 vs. M = 4.99) than male participants (M = 4.37 vs. M
= 4.79). All other effects failed to approach significance (Fs < 1.3, ps > .26). The gender-trigger
manipulation did not impact the perceived desirability of the job. The job was uniformly seen as
having high status (M = 5.90, SD = 1.00, on a scale from 1 to 7). All effects for perceived status
failed to reach significance (Fs < 2.9, ps > .09). In summary, we successfully manipulated threat,
but for CS students, this was most effectively achieved through the Glassdoor reviews rather than
Required Characteristics. As intended, this manipulation did not make jobs seem more or less
desirable. As we did see a main effect of threat regardless of how it was conveyed, the remainder
of the section will focus on the impact of gender and threat (ignoring if the threat was conveyed
through Glassdoor comments or Required Characteristics).
Minimally Acceptable Salary: Participants’ minimum acceptable salary fell near the bottom
of the stated salary range for this job (which was $70K). There was a borderline significant inter-
action (F(1, 414) = 3.81, p = .052) showing that women’s bottom line was influenced by threat. As
can be seen in Figure 7.4, while men did not significantly differ across threat conditions (t(418)
= -0.88, p = .38), women marginally raised their bottom line when the job was described in a
female-supportive way (t(418) = 1.87, p = .06). All other effects failed to reach significance (Fs
< 1.6, ps > .21). In summary, consistent with prior research on business students, women were
particularly impacted by gender triggers. When job advertisements contained female-threatening
language, women lowered their salary minimums compared to when jobs were described in a
female-supportive way, whereas men were unaffected.
Willingness to Negotiate: If women have lower aspirations, they may feel less motivation to
negotiate and simply accept the first offer. To examine this, we created a dichotomous variable
distinguishing participants that accepted the agent’s first offer versus those who made at least
one offer to the agent. Overall, 43% did not negotiate. However, willingness to negotiate was
unaffected by gender or threat (χ²s < 0.58, ps > .44); e.g., women were not less likely to counter
the first offer (54%) compared to men (56%). Motivations did influence willingness to negotiate.
Negotiators with higher achievement motivation were significantly more likely to make at least one
offer (vs. none at all; B = .23, Wald(1) = 5.10, p = .02), although communal motivations had no
effect (B = .02, Wald(1) = 0.05, p = .82). In summary, there was no evidence that gender triggers
led women to be more reluctant to negotiate their salary.
Final Negotiated Salary: To examine the outcome of the negotiation, we excluded 13 partic-
ipants that failed to reach agreement in the negotiation (neither men nor women were more likely
Figure 7.4: Minimally acceptable salary (USD) as reported before the negotiation began.
to fail to reach an agreement). We find that the final outcome was not significantly influenced by
gender or threat (Fs < 1.01, ps > .31). Final salary was strongly impacted by willingness to ne-
gotiate: participants who made at least one offer did significantly better in terms of overall points
(M = 25.13, SE = 0.79) than those who just accepted the agent’s first offer (M = 11.29, SE =
0.25; t(419) = -14.76, p < .001) and final salary (see Figure 7.5): participants who made at least
one offer received a significantly higher salary (M = $92,946, SE = 1106.44) than those who just
accepted the agent’s first offer (M = $80,000, SE = 0.00; t(419) = -10.11, p < .001). Note, regard-
less of willingness to negotiate, the vast majority (85.5%) achieved a salary that exceeded their
minimally acceptable salary. There was no significant difference, based on gender or threat, in the
proportion of participants that settled for a salary below their stated bottom line. In summary, in
contrast to prior work (e.g., [88]), there were no gender differences in the salary attained, nor an
impact of gender triggers. Rather, salary was determined by participants’ willingness to counter
the hiring manager’s initial offer (which was also unaffected by gender or threat). Those that did
negotiate obtained $13,000 USD more per year (a difference roughly comparable with what has
been previously reported).
Figure 7.5: Final negotiated salary (USD) for those willing to negotiate versus those simply ac-
cepting the first offer
Integrative Potential: Although we did not see differences in outcome by gender or gender
triggers, it is possible that differences could emerge when we control for the goals people bring to
the negotiation. If men and women systematically weight issues differently, they might face quite
different integrative potential. Participants expressed a wide range of priorities and this led to a
wide distribution of possible joint outcomes, and priorities differed by gender. Women valued stock
options significantly less (M = 1.70, SE = 0.12) than men (M = 1.99, SE = 0.08; F(1, 430) = 4.55,
p = .03). This is consistent with the existing literature which suggests that women are less tolerant
of risk. All other effects failed to reach significance (Fs < 0.8, ps > .13). Gender-differences in
preferences did not, however, alter the distribution of points that participants had the potential to
receive. To examine this, we again excluded the participants that failed to reach agreement in the
negotiation (n = 13), and then examined the effect of condition on maximum integrative potential.
Men and women showed no significant differences. All other effects failed to reach significance (Fs
< 1.8, ps > .18). In summary, CS undergraduates brought varying preferences to the negotiation.
Women valued stocks less than men, but this gender difference did not substantially shift the space
of possible deals that could be reached given the agent’s preferences.
7.5 Summary of Contribution
In conclusion, this chapter highlights the potential for agent-based technology to yield insights
into an important societal problem, namely gender-bias in salary negotiations. We used a gender-
blind automated negotiation agent to examine the salary negotiation behavior of CS undergraduates,
particularly with regard to how gender bias might shape outcomes. Our approach provides little
support for the idea that women’s negotiation ability is undermined by gender threat (claims that
have been made and have been widely-accepted in the gender and negotiation literature). This lack
of support, furthermore, can be observed despite our ability to manipulate both gender and threat
quite effectively. This insight supports our ultimate goal of illustrating the value of using agent-based technology to provide insights into social issues.
Chapter 8
Conclusion
The integration of approaches from Intelligent Tutoring Systems and automated negotiation agents provides an interesting test bed for studying how humans interact with each other in strategic situations. Combining these two technologies can be a powerful tool for studying as well as teaching negotiation. Although previous work shows that mere practice with a virtual human helps students improve their negotiation skills, truly personalized feedback further enhances the learning experience.
In Chapter 3 of this thesis, I showed that principles correlated with good negotiation outcomes in human-human negotiation are also indicative of good outcomes in human-agent negotiation. Later, in Chapter 4, I illustrated how these principles can be used to provide feedback, as well as which type of feedback is most effective. Through a number of studies, I highlighted the importance
of personalized feedback. Although individuals can improve their negotiation ability through prac-
tice, the most effective method for learning to negotiate is allowing students to practice and receive
personalized feedback. The ability for a student to reflect on their learning is important, and we
can leverage techniques from the intelligent tutoring systems community to build feedback systems.
Not only are these feedback systems effective in an experimental environment, but they are also effective in the context of a mini-course when personalized feedback is combined with video lectures and
experiential learning exercises.
8.1 Future Direction
As the world becomes a more globally connected community, the need for individuals who can understand and work well across cultures becomes even more prevalent. Negotiation and a number of other soft skills will become even more important. As more jobs become automated and human workers shift toward higher-order reasoning tasks, workers will require more of the skill set this thesis has attempted to teach. Thus, more work on how to teach other social skills is necessary. The main contribution of this thesis is to show how one can leverage research and principles for teaching soft skills to build an effective automated learning and feedback system. One potential future direction is to examine how well these methods transfer to training other soft skills. The general technique for building these types of systems is skill agnostic, although
the principles and metrics would need to be adapted to a new domain. In addition to this, the
autonomy and interaction of the system could be improved to provide a more realistic interaction.
The two agents used in this thesis have a number of limitations which would need to be ad-
dressed to make this approach scalable and more realistic. Although the CRA agent exhibits rich nonverbal behavior and a high-fidelity embodiment, it is semi-autonomous (CRA’s behavior is controlled by a human “Wizard”). This allows the human to give the illusion that the system is fully automated, creating a far richer experience, but it must be addressed for the approach to be a viable online training platform. The second system, IAGO, is a fully automated agent that allows users to interact with it through a menu-driven interface. This is not very conducive to how humans naturally communicate. Thus, to ensure the knowledge gained through this system is more transferable and scalable, one future line of work is to explore ways of combining CRA-style human-like agents with IAGO into a fully automated online training course and to test the effectiveness of such a system. These enhancements would provide a much richer and more natural learning experience. They would also allow for the teaching of higher-level negotiation concepts as well as other soft skills.
Beyond the behavior of the agents, another line of future work would be to explore the effect
of nonlinear opponent modeling on diagnosing students. Currently this work assumes both the
agent and students have linear preference models. However, that assumption may sometimes be false: negotiators may not have a linear preference model in certain cases. Thus, to make the training more aligned with how negotiation occurs in the wild, nonlinear opponent modeling shows promise for more accurate diagnosis. Very little, if any, work has explored nonlinear
models in human-agent negotiation and this offers a rich space for future work.
References
1. Gratch, J., DeVault, D. & Lucas, G. The Benefits of Virtual Humans for Teaching Negotiation
in Proceedings of the 16th International Conference on Intelligent Virtual Agents (IVA),
2016 (Springer, Los Angeles, CA, 2016).
2. Lin, R., Oshrat, Y . & Kraus, S. Investigating the benefits of automated negotiations in en-
hancing people’s negotiation skills in Proceedings of The 8th International Conference on
Autonomous Agents and Multiagent Systems-Volume 1 (2009), 345–352.
3. Kolb, A. Y . & Kolb, D. A. in Encyclopedia of the Sciences of Learning 1215–1219 (Springer,
2012).
4. Bedwell, W. L., Fiore, S. M. & Salas, E. Developing the future workforce: An approach for
integrating interpersonal skills into the MBA classroom. Academy of Management Learning
& Education 13, 171–186 (2014).
5. National Academies of Sciences, E., Medicine, et al. Promising practices for strengthening
the regional STEM workforce development ecosystem (National Academies Press, 2016).
6. Babcock, L. & Laschever, S. Women don’t ask: Negotiation and the gender divide (Princeton
University Press, 2009).
7. Hall, R. L. Measuring legislative influence. Legislative Studies Quarterly, 205–231 (1992).
8. Wunderle, W. How to negotiate in the Middle East. Military Review (2007).
9. Eisenberg, T. & Lanvers, C. What is the settlement rate and why should we care? Journal
of Empirical Legal Studies 6, 111–146 (2009).
10. Samborn, H. V . Vanishing Trial, The. ABAJ 88, 24 (2002).
11. Forum, W. E. The future of jobs: Employment, skills and workforce strategy for the fourth
industrial revolution in Global Challenge Insight Report, World Economic Forum, Geneva
(2016).
12. Rubin, R. S. & Dierdorff, E. C. How relevant is the MBA? Assessing the alignment of re-
quired curricula and required managerial competencies. Academy of Management Learning
& Education 8, 208–224 (2009).
13. Movius, H. The effectiveness of negotiation training. Negotiation Journal 24, 509–531
(2008).
14. Bredemeier, M. E. & Greenblat, C. S. The Educational Effectiveness of Simulation Games.
Simulation & Games 12, 307–332. doi:10.1177/104687818101200304 (1981).
15. Henry, E., Fowlie, J. & Gordon, M. Using Games and Simulations in the Classroom: A
Practical Guide for Teachers (Taylor & Francis, 2013).
16. Kapp, K. M. The gamification of learning and instruction fieldbook: Ideas into practice
(John Wiley & Sons, 2013).
17. Aleven, V . A. & Koedinger, K. R. An effective metacognitive strategy: Learning by do-
ing and explaining with a computer-based cognitive tutor. Cognitive science 26, 147–179
(2002).
18. Kapur, M. Productive failure. Cognition and instruction 26, 379–424 (2008).
19. Kelley, H. H. A classroom study of the dilemmas in interpersonal negotiations. Strategic
interaction and conflict 49, 73 (1966).
20. Galinsky, A. D. & Mussweiler, T. First offers as anchors: the role of perspective-taking and
negotiator focus. Journal of personality and social psychology 81, 657 (2001).
21. Zuckerman, I., Rosenfeld, A., Kraus, S. & Segal-Halevi, E. Towards automated negotiation
agents that use chat interfaces in The sixth international workshop on agent-based complex
automated negotiations (ACAN) (2013).
22. Johnson, E., Gratch, J., Boberg, J., DeVault, D., Kim, P. & Lucas, G. Using Intelligent
Agents to Examine Gender in Negotiations in Proceedings of the 21th ACM International
Conference on Intelligent Virtual Agents (2021), 90–97.
23. Nwana, H. S. Intelligent tutoring systems: an overview. Artificial Intelligence Review 4,
251–277 (1990).
24. Nkambou, R., Mizoguchi, R. & Bourdeau, J. Advances in intelligent tutoring systems (Springer
Science & Business Media, 2010).
25. Kodaganallur, V ., Weitz, R. R. & Rosenthal, D. A comparison of model-tracing and constraint-
based intelligent tutoring paradigms. International Journal of Artificial Intelligence in Edu-
cation 15, 117–144 (2005).
26. Mitrovic, A., Koedinger, K. R. & Martin, B. A comparative analysis of cognitive tutoring
and constraint-based modeling in International Conference on User Modeling (2003), 313–
322.
27. Mitrovic, A. & Ohlsson, S. Evaluation of a constraint-based tutor for a database language
(1999).
28. Mitrovic, A., Mayo, M., Suraweera, P. & Martin, B. Constraint-based tutors: a success story
in International Conference on Industrial, Engineering and Other Applications of Applied
Intelligent Systems (2001), 931–940.
29. Mitrovic, A. An intelligent SQL tutor on the web. International Journal of Artificial Intelli-
gence in Education 13, 173–197 (2003).
30. Heffernan, N. T., Koedinger, K. R. & Razzaq, L. Expanding the model-tracing architec-
ture: A 3rd generation intelligent tutor for algebra symbolization. International Journal of
Artificial Intelligence in Education 18, 153–178 (2008).
31. Blessing, S. B., Gilbert, S. B., Ourada, S. & Ritter, S. Authoring model-tracing cognitive
tutors. International Journal of Artificial Intelligence in Education 19, 189–210 (2009).
32. Aleven, V ., McLaren, B., Sewall, J. & Koedinger, K. R. Example-tracing tutors: A new
paradigm for intelligent tutoring systems (2008).
33. Huang, Y ., Aleven, V ., McLaughlin, E. & Koedinger, K. A general multi-method approach
to design-loop adaptivity in intelligent tutoring systems in International Conference on Ar-
tificial Intelligence in Education (2020), 124–129.
34. Koedinger, K. R. & Corbett, A. in The Cambridge Handbook of the Learning Sciences (ed
Sawyer, R. K.) 61–78 (Cambridge University Press, 2005). doi:10.1017/CBO9780511816833.
006.
35. Koedinger, K. R., Corbett, A., et al. Cognitive tutors: Technology bringing learning sciences
to the classroom (na, 2006).
36. Koedinger, K. R., Anderson, J. R., Hadley, W. H. & Mark, M. A. Intelligent tutoring goes to
school in the big city. International Journal of Artificial Intelligence in Education (IJAIED)
8, 30–43 (1997).
37. Mills-Tettey, G. A., Mostow, J., Dias, M. B., Sweet, T. M., Belousov, S. M., Dias, M. F.,
et al. Improving child literacy in Africa: experiments with an automated reading tutor in In-
formation and Communication Technologies and Development (ICTD), 2009 International
Conference on (2009), 129–138.
38. Wijekumar, K., Meyer, B. & Spielvogel, J. Web-based intelligent tutoring to improve read-
ing comprehension in elementary and middle schools: Design, research, and preliminary
findings in E-Learn: World Conference on E-Learning in Corporate, Government, Health-
care, and Higher Education (2005), 3206–3211.
39. Graesser, A. C., VanLehn, K., Rose, C. P., Jordan, P. W. & Harter, D. Intelligent Tutoring
Systems with Conversational Dialogue. AI Magazine 22, 39. doi:10.1609/aimag.v22i4.
1591 (2001).
40. Olney, A., Bakhtiari, D., Greenberg, D. & Graesser, A. C. Assessing Computer Literacy of
Adults with Low Literacy Skills. in EDM (2017).
41. Guo, P. J. Codeopticon: Real-time, one-to-many human tutoring for computer programming
in Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technol-
ogy (2015), 599–608.
42. Wijekumar, K. K., Meyer, B. J. & Lei, P. Large-scale randomized controlled trial with 4th
graders using intelligent tutoring of the structure strategy to improve nonfiction reading
comprehension. Educational Technology Research and Development 60, 987–1013 (2012).
43. Wijekumar, K. K., Meyer, B. J. & Lei, P. High-fidelity implementation of web-based intelli-
gent tutoring system improves fourth and fifth graders content area reading comprehension.
Computers & Education 68, 366–379 (2013).
44. Wijekumar, K., Meyer, B. J., Lei, P.-W., Lin, Y .-C., Johnson, L. A., Spielvogel, J. A., et al.
Multisite randomized controlled trial examining intelligent tutoring of structure strategy for
fifth-grade readers. Journal of Research on Educational Effectiveness 7, 331–357 (2014).
45. Wijekumar, K., Meyer, B. J., Lei, P., Cheng, W., Ji, X. & Joshi, R. Evidence of an intelligent
tutoring system as a mindtool to promote strategic memory of expository texts and compre-
hension with children in grades 4 and 5. Journal of Educational Computing Research 55,
1022–1048 (2017).
46. Graesser, A. C., Fiore, S. M., Greiff, S., Andrews-Todd, J., Foltz, P. W. & Hesse, F. W. Ad-
vancing the science of collaborative problem solving. Psychological Science in the Public
Interest 19, 59–92 (2018).
47. Fenstermacher, K., Olympia, D. & Sheridan, S. M. Effectiveness of a computer-facilitated
interactive social skills training program for boys with attention deficit hyperactivity disor-
der. School Psychology Quarterly 21, 197 (2006).
48. Häkkinen, P., Järvelä, S., Mäkitalo-Siegl, K., Ahonen, A., Näykki, P. & Valtonen, T. Prepar-
ing teacher-students for twenty-first-century learning practices (PREP 21): a framework for
enhancing collaborative problem-solving and strategic learning skills. Teachers and Teach-
ing 23, 25–41 (2017).
49. Chollet, M., Sratou, G., Shapiro, A., Morency, L.-P. & Scherer, S. An interactive virtual
audience platform for public speaking training in Proceedings of the 2014 international
conference on Autonomous agents and multi-agent systems (2014), 1657–1658.
50. Monahan, S., Johnson, E., Lucas, G., Finch, J. & Gratch, J. Autonomous Agent that Provides
Automated Feedback Improves Negotiation Skills in International Conference on Artificial
Intelligence in Education (2018), 225–229.
51. Kim, J. M., Hill Jr, R. W., Durlach, P. J., Lane, H. C., Forbell, E., Core, M., et al. BiLAT:
A game-based environment for practicing negotiation in a cultural context. International
Journal of Artificial Intelligence in Education 19, 289–308 (2009).
52. Aleven, V ., Ashley, K., Lynch, C. & Pinkwart, N. Intelligent Tutoring Systems for Ill-Defined
Domains: Assessment and Feedback in Ill-Defined Domains. in The 9th international Con-
ference on Intelligent Tutoring Systems (2008), 23–27.
53. McDermott, P. J. & Hulse, D. Interpersonal skills training in police academy curriculum.
FBI L. Enforcement Bull. 81, 16 (2012).
54. Murphy, D., Slovak, P., Thieme, A., Jackson, D., Olivier, P. & Fitzpatrick, G. Developing
technology to enhance learning interpersonal skills in counsellor education. British Journal
of Guidance & Counselling 47, 328–341. doi:10.1080/03069885.2017.1377337 (2019).
55. Bach, S. & Grant, A. Communication and interpersonal skills in nursing (Learning Matters,
2015).
56. Schmid Mast, M., Kleinlogel, E. P., Tur, B. & Bachmann, M. The future of interpersonal
skills development: Immersive virtual reality training with virtual humans. Human Resource
Development Quarterly 29, 125–141 (2018).
57. Frey, C. B. & Osborne, M. A. The future of employment: How susceptible are jobs to
computerisation? Technological forecasting and social change 114, 254–280 (2017).
58. Loewenstein, J. & Thompson, L. The challenge of learning. Negotiation journal 16, 399–
408 (2000).
59. Fortgang, R. S. Taking stock: An analysis of negotiation pedagogy across four professional
fields. Negotiation Journal 16, 325–338 (2000).
60. Ericsson, K. A., Krampe, R. T. & Tesch-Römer, C. The role of deliberate practice in the
acquisition of expert performance. Psychological review 100, 363 (1993).
61. Bank, F. R. Report on the economic well-being of US households in 2016 2017.
62. Thompson, L. Negotiation behavior and outcomes: Empirical evidence and theoretical is-
sues. Psychological bulletin 108, 515 (1990).
63. Rubin, J. Z. & Brown, B. R. The social psychology of bargaining and negotiation (Elsevier,
2013).
64. Matz, D. & Ebner, N. Using role-play in online negotiation teaching (2010).
65. Khan, M. A. & Baldini, G. M. An Experimental Study of a Software Based Teaching
Methodology in an Undergraduate Negotiation Course. Journal of Education and Learn-
ing 9, 75–88 (2020).
66. Dinnar, S., Dede, C., Johnson, E., Straub, C. & Korjus, K. Artificial intelligence and
technology in teaching negotiation. Negotiation Journal 37, 65–82 (2021).
67. Core, M., Traum, D., Lane, H. C., Swartout, W., Gratch, J., Van Lent, M., et al. Teaching
Negotiation Skills Through Practice and Reflection with Virtual Humans. Simulation 82,
685–701. doi:10.1177/0037549706075542 (2006).
68. Gratch, J., DeVault, D., Lucas, G. M. & Marsella, S. in Intelligent Virtual Agents: 15th
International Conference, IVA 2015, Delft, The Netherlands, August 26-28, 2015, Proceed-
ings (eds Brinkman, W.-P., Broekens, J. & Heylen, D.) 201–215 (Springer International
Publishing, Cham, 2015). doi:10.1007/978-3-319-21996-7_21.
69. Hartholt, A., Traum, D., Marsella, S. C., Shapiro, A., Stratou, G., Leuski, A., et al. All
together now in International Workshop on Intelligent Virtual Agents (2013), 368–381.
70. DeVault, D., Mell, J. & Gratch, J. Toward natural turn-taking in a virtual human negotiation
agent in 2015 AAAI Spring Symposium Series (2015).
71. Lee, J. & Marsella, S. Nonverbal behavior generator for embodied conversational agents in
International Workshop on Intelligent Virtual Agents (2006), 243–255.
72. Thiebaux, M., Marsella, S., Marshall, A. N. & Kallmann, M. Smartbody: Behavior real-
ization for embodied conversational agents in Proceedings of the 7th international joint
conference on Autonomous agents and multiagent systems-Volume 1 (2008), 151–158.
73. Johnson, E., DeVault, D. & Gratch, J. Towards An Autonomous Agent that Provides Auto-
mated Feedback on Students’ Negotiation Skills in Proceedings of the 16th Conference on
Autonomous Agents and MultiAgent Systems (2017), 410–418.
74. Mell, J. & Gratch, J. IAGO: Interactive Arbitration Guide Online in Proceedings of the 2016
International Conference on Autonomous Agents & Multiagent Systems (2016), 1510–1512.
75. Mell, J., Gratch, J., Baarslag, T., Aydoğan, R. & Jonker, C. M. Results of the First Annual
Human-Agent League of the Automated Negotiating Agents Competition in Proceedings
of the 18th International Conference on Intelligent Virtual Agents (ACM, Sydney, NSW,
Australia, 2018), 23–28. doi:10.1145/3267851.3267907.
76. Roediger, S. The effect of suspicion on emotional influence tactics in virtual human negoti-
ation 2018.
77. Johnson, E., Lucas, G., Kim, P. & Gratch, J. Intelligent tutoring system for negotiation
skills training in International Conference on Artificial Intelligence in Education (2019),
122–127.
78. Baarslag, T., Hendrikx, M., Hindriks, K. & Jonker, C. Predicting the performance of op-
ponent models in automated negotiation in Proceedings of the 2013 IEEE/WIC/ACM In-
ternational Joint Conferences on Web Intelligence (WI) and Intelligent Agent Technologies
(IAT)-Volume 02 (2013), 59–66.
79. Ito, T., Zhang, M., Robu, V . & Matsuo, T. Complex automated negotiations: Theories, mod-
els, and software competitions (Springer, 2013).
80. Thompson, L. L. Information exchange in negotiation. Journal of Experimental Social Psy-
chology 27, 161–179 (1991).
81. Nazari, Z., Lucas, G. M. & Gratch, J. Opponent modeling for virtual human negotiators in
International Conference on Intelligent Virtual Agents (2015), 39–49.
82. Nazari, Z., Lucas, G. & Gratch, J. Fixed-pie Lie in Action in International Conference on
Intelligent Virtual Agents (2017), 287–300.
83. Mell, J. & Gratch, J. Grumpy & Pinocchio: Answering Human-Agent Negotiation Ques-
tions Through Realistic Agent Design in Proceedings of the 16th Conference on Autonomous
Agents and MultiAgent Systems (International Foundation for Autonomous Agents and Mul-
tiagent Systems, São Paulo, Brazil, 2017), 401–409.
84. Johnson, E., Roediger, S., Lucas, G. & Gratch, J. Assessing common errors students make
when negotiating in Proceedings of the 19th ACM International Conference on Intelligent
Virtual Agents (2019), 30–37.
85. Curhan, J. R., Elfenbein, H. A. & Eisenkraft, N. The objective value of subjective value: A
multi-round negotiation study. Journal of Applied Social Psychology 40, 690–709 (2010).
86. Alexander, N. & LeBaron, M. Death of the role-play. Hamline J. Pub. L. & Pol’y 31, 459
(2009).
87. Team, G. 3 in 5 Employees Did Not Negotiate Salary 2016.
88. Säve-Söderbergh, J. Gender gaps in salary negotiations: Salary requests and starting salaries
in the field. Journal of Economic Behavior & Organization 161, 35–51 (2019).
89. Marks, M. & Harold, C. Who asks and who receives in salary negotiation. Journal of Orga-
nizational Behavior 32, 371–394 (2011).
90. Gray, K., Neville, A., Kaji, A. H., Wolfe, M., Calhoun, K., Amersi, F., et al. Career goals,
salary expectations, and salary negotiation among male and female general surgery resi-
dents. JAMA surgery 154, 1023–1029 (2019).
91. Leibbrandt, A. & List, J. A. Do women avoid salary negotiations? Evidence from a large-
scale natural field experiment. Management Science 61, 2016–2024 (2015).
92. Tang, S., Zhang, X., Cryan, J., Metzger, M. J., Zheng, H. & Zhao, B. Y . Gender bias in the
job market: A longitudinal analysis. Proceedings of the ACM on Human-Computer Interac-
tion 1, 1–19 (2017).
93. Croson, R. & Gneezy, U. Gender differences in preferences. Journal of Economic literature
47, 448–74 (2009).
94. Amanatullah, E. T. & Tinsley, C. H. Punishing female negotiators for asserting too much. . .
or not enough: Exploring why advocacy moderates backlash against assertive female nego-
tiators. Organizational Behavior and Human Decision Processes 120, 110–122 (2013).
95. Kray, L. J., Thompson, L. & Galinsky, A. Battle of the sexes: gender stereotype confirmation
and reactance in negotiations. Journal of personality and social psychology 80, 942 (2001).
96. Tellhed, U. & Björklund, F. Stereotype threat in salary negotiations is mediated by reserva-
tion salary. Scandinavian Journal of Psychology 52, 185–195 (2011).
97. Amanatullah, E. T. & Morris, M. W. Negotiating gender roles: Gender differences in as-
sertive negotiating are mediated by women’s fear of backlash and attenuated when negoti-
ating on behalf of others. Journal of personality and social psychology 98, 256 (2010).
98. Jennings, N. R., Faratin, P., Lomuscio, A. R., Parsons, S., Sierra, C. & Wooldridge, M.
Automated negotiation: prospects, methods and challenges. International Journal of Group
Decision and Negotiation 10, 199–215 (2001).
99. Baarslag, T., Hendrikx, M., Hindriks, K. & Jonker, C. Measuring the performance of online
opponent models in automated bilateral negotiation in Australasian Joint Conference on
Artificial Intelligence (2012), 1–14.
100. Rosenfeld, A., Zuckerman, I., Segal-Halevi, E., Drein, O. & Kraus, S. NegoChat: a chat-
based negotiation agent in Proceedings of the 2014 international conference on Autonomous
agents and multi-agent systems (2014), 525–532.
101. Harinck, F., De Dreu, C. K. & Van Vianen, A. E. The impact of conflict issues on fixed-
pie perceptions, problem solving, and integrative outcomes in negotiation. Organizational
behavior and human decision processes 81, 329–358 (2000).
84
102. Bowles, H. R., Babcock, L. & Lai, L. Social incentives for gender differences in the propen-
sity to initiate negotiations: Sometimes it does hurt to ask. Organizational Behavior and
human decision Processes 103, 84–103 (2007).
103. Blascovich, J., Loomis, J., Beall, A. C., Swinth, K. R., Hoyt, C. L. & Bailenson, J. N.
Immersive virtual environment technology as a methodological tool for social psychology.
Psychological inquiry 13, 103–124 (2002).
104. De Melo, C. M., Carnevale, P. J. & Gratch, J. Using virtual confederates to research inter-
group bias and conflict in Academy of Management Proceedings 2014 (2014), 11226.
105. Bowles, H. R., Babcock, L. & McGinn, K. L. Constraints and triggers: situational mechanics
of gender in negotiation. Journal of personality and social psychology 89, 951 (2005).
106. Kray, L. J., Galinsky, A. D. & Thompson, L. Reversing the gender gap in negotiations:
An exploration of stereotype regeneration. Organizational behavior and human decision
processes 87, 386–409 (2002).
107. Shantz, A. The effect of stereotype threat on the interview performance of women. Advanc-
ing Women in Leadership Journal 32, 92–106 (2012).
108. Bem, S. L. Bem sex role inventory. Journal of personality and social psychology (1981).
109. Maume, D. J. Gender differences in taking vacation time. Work and Occupations 33, 161–
190 (2006).
110. Neale, M. A. New recruit. Teaching materials for negotiations and decision making. Evanston,
IL: Northwestern University, Dispute Resolution Research Center (1997).
111. Fujita, K., Ito, T. & Klein, M. Approximately fair and secure protocols for multiple interde-
pendent issues negotiation. in AAMAS (2) (2009), 1287–1288.
112. Camp, T. Women in computer sciences: Reversing the trend. Syllabus Magazine, 24–26
(2001).
85
Abstract
Research in artificial intelligence has made great strides in developing 'cognitive tutors' that teach a range of technical skills. These automated tutors allow students to practice and observe their mistakes, and they provide personalized instructional feedback. Evidence shows that these methods can increase learning above and beyond traditional classroom instruction in topics such as math, reading, and computer science. Although such skills are crucial, students entering the modern workforce must possess more than technical abilities. They must exhibit a range of interpersonal skills that allow them to resolve conflicts and solve problems creatively. However, both traditional curricula and learning technologies afford students limited opportunities to learn these interpersonal skills, particularly in STEM fields. This thesis seeks to fill this gap by developing automated learning methods for teaching the crucial interpersonal skill of negotiation. Negotiation skills can help workers obtain equitable compensation, gain greater control over their work responsibilities, and work effectively with teammates. Currently, students must often turn to costly resources to learn how to negotiate: self-study guides of limited value or expensive professional training programs developed by companies and educational institutions. Those who cannot afford such training are often left without options. However, technology is beginning to fill this gap. AI has made strides in creating agents that can negotiate with people, and research shows that students can improve their negotiation abilities by practicing with such agents. To date, these methods allow students to practice, but they place little emphasis on analyzing mistakes and providing feedback. Yet lessons from cognitive tutors emphasize that feedback is crucial for learning. To address this limitation, this thesis advances the science of interpersonal skill training by developing and evaluating automated analysis of student errors and personalized feedback in a negotiation training system. This is done by innovating artificial intelligence techniques to analyze student behavior, identify weaknesses in their understanding, and provide targeted, personalized feedback. In Chapter 2, I provide an overview of the state of the art in teaching negotiation and its limitations. In Chapters 3 and 4, I develop metrics for assessing students' negotiation abilities based on a set of principles and show that these metrics can be used to provide personalized feedback. In Chapter 5, I extend these metrics to include better predictors of a negotiator's understanding of an opponent. Chapter 6 builds on the previous chapters by incorporating the opponent modeling and feedback proposed in Chapters 3, 4, and 5 into a mini negotiation course, showing how this work can be combined with video lectures to mimic how negotiation is taught in the classroom. Results show that these metrics are good predictors of negotiation outcomes and that participants who received personalized feedback fare better in subsequent negotiations than those who did not. Lastly, I show that these diagnostic metrics have applications beyond a training setting: Chapter 7 illustrates how these methods can provide insight into societal issues.