A Framework for Research in Human-Agent Negotiation
by
Johnathan Todd Mell
A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
COMPUTER SCIENCE
May 2020
© 2020 Johnathan T. Mell
Acknowledgments
This work was supported by grants funded by the National Science Foundation, the Air Force
Office of Scientific Research, and the U.S. Army. Any opinion, content, or information presented does not
necessarily reflect the position or the policy of the United States Government, and no official endorsement
should be inferred.
I wish to thank a number of people who were instrumental in reviewing and critiquing this work.
In particular, I would like to thank my advisor, Dr. Jonathan Gratch, for his mentorship, as well as the other
members of my proposal and review committees: Dr. Nathanael Fast, Dr. Sven Koenig, Dr. Milind Tambe,
Dr. Peter Kim, and Dr. Paul Rosenbloom. Specific gratitude is also given to my many co-authors, software
testers, lab members, and research participants.
Additionally, I would like to thank Dr. Gale Lucas, whose insights and mentorship were
instrumental throughout my Ph.D. Finally, I would like to thank my family, including my parents, Dennis
and Ellen Mell, my grandmother, Jean Roosman, and my late grandfather, William Roosman, for the
continuous personal support.
TABLE OF CONTENTS
ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF FIGURES
ABSTRACT
CHAPTER 1. INTRODUCTION
1.1 MOTIVATION
1.2 STATE-OF-THE-ART OVERVIEW
1.3 KEY CONTRIBUTIONS
1.3.1 ADVANCEMENT OF COMPUTATIONAL MODELS
1.3.2 PLATFORM DEVELOPMENT
1.3.3 SELECTED PERSONAL PUBLICATIONS
1.3.4 DISSERTATION OUTLINE
CHAPTER 2. RELATED WORK
2.1 THE IMPORTANCE OF NEGOTIATION
2.2 THE IMPORTANCE OF HUMAN-AWARE NEGOTIATING AGENTS
2.2.1 HUMAN-INFORMED AGENTS VS. HUMAN-STUDYING AGENTS
2.2.2 HELPFUL HUMAN TECHNIQUES
2.2.3 STUDYING AND GUIDING HUMAN BEHAVIOR
2.3 THE REQUIREMENTS FOR HUMAN-AWARE NEGOTIATING AGENTS
2.4 NEGOTIATION FORMALIZATIONS
2.4.1 THE ULTIMATUM GAME
2.4.2 THE MULTI-ISSUE BARGAINING TASK
2.5 TRADITIONAL AI APPROACHES TO NEGOTIATING AGENTS
2.5.1 AGENT-AGENT NEGOTIATION
2.5.2 HUMAN-AGENT NEGOTIATION
2.5.3 EXISTING HUMAN-AGENT ATTEMPTS
CHAPTER 3. THE IAGO NEGOTIATION PLATFORM
3.1 FROM THE GROUND UP: BUILDING HUMAN-LIKE NEGOTIATING AGENTS
3.2 DESIGN PRINCIPLES
CHAPTER 4. MODELS OF HUMAN BEHAVIOR AND THEIR IMPLEMENTATIONS
4.1 OFFER STRATEGIES
4.1.1 COMPETITIVE VS. CONSENSUS-BUILDING AGENTS (CONCESSION CURVES)
4.1.2 ANCHORING AND FRAMING
4.2 OPPONENT MODELING
4.2.1 PREFERENCE MODELING AND BATNA
4.2.2 PERSONALITY
4.2.3 NORMS AND BIASES
4.2.4 MODELS FOR TEACHING AND TRAINING
4.3 COMMUNICATION AND EMOTIONS
4.3.1 INDIVIDUAL EMOTIONS
4.3.2 WITHHOLDING INFORMATION AND LYING
4.4 REPUTATION & INDIVIDUAL RELATIONSHIPS
4.4.1 TEMPORALLY-AWARE (FAVOR) AGENTS
CHAPTER 5. BASIC NEGOTIATING AGENTS AND THEIR PROBLEMS
5.1 FIRST-LAYER AGENTS: GRUMPY & PINOCCHIO
5.2 THE PATH TO SOCIALLY-AWARE AGENTS
5.2.1 FAVOR EXCHANGE IN NEGOTIATION: A MULTI-ISSUE ULTIMATUM CASE STUDY
CHAPTER 6. ADVANCED SOCIALLY-AWARE AGENTS
6.1 STUDY #1: EXPLORATORY LONG-TERM INTERACTION AGENTS (ANAC)
6.1.1 STUDY #1A: ANAC 2018 RESULTS OF REPEATED NEGOTIATION
6.1.2 STUDY #1B: ANAC 2019 RESULTS OF REPEATED NEGOTIATION WITH CUSTOM INTERFACES
6.2 STUDY #2: ADVANCED INTERACTION AGENTS USING FAVORS
6.2.1 STUDY #2: DESIGN
6.2.2 STUDY #2: IMPLEMENTATION AND ANALYSIS
6.2.3 STUDY #2: RESULTS AND DISCUSSION
CHAPTER 7. DISCUSSION AND RESEARCH IMPLICATIONS
7.1 SUMMARY
7.2 RESULT IMPLICATIONS AND FUTURE DIRECTIONS
REFERENCES
APPENDIX A RULES OF THE AUTOMATED NEGOTIATING AGENTS COMPETITION (ANAC) 2018
APPENDIX B CONTRIBUTIONS SUMMARY
APPENDIX C IAGO PLATFORM TECHNICAL OVERVIEW
APPENDIX D PROPOSAL SATISFACTION OVERVIEW
List of Tables
TABLE 1: SAMPLE UTILITY TABLE, 2-ISSUE NEGOTIATION
TABLE 2: CURRENT IAGO AGENTS, WITH BROAD CHARACTERISTICS
TABLE 3: PARETO OPTIMALITY OVER TIME FOR A FAVOR-SEEKING AGENT
TABLE 4: AGENT TYPES IN EXPERIMENTAL CONDITIONS, ULTIMATUM GAME STUDY
TABLE 5: AGENT BEHAVIOR, STUDY #2
TABLE 6: TOTAL AGENT AVERAGE POINTS, STUDY #2
List of Figures
FIGURE 1: FRAMEWORK FOR AGENT-AGENT NEGOTIATION
FIGURE 2: FRAMEWORK FOR HUMAN-LIKE NEGOTIATING AGENT
FIGURE 3: LAYER FRAMEWORK FOR HUMAN-AWARE NEGOTIATING AGENT STRATEGY
FIGURE 4: IAGO INTERFACE
FIGURE 5: THE SPACE OF NEGOTIATION OPTIONS IN A SIMPLE, TWO-NEGOTIATION REPEATED INTERACTION
FIGURE 6: WEB-DEPLOYMENT OF COLORED TRAILS FRAMEWORK
FIGURE 7: COST OF BETRAYAL AS MEASURED BY ACCEPTANCE RATE IN ROUND 3
FIGURE 8: ALL AGENT PERFORMANCE ACROSS ALL THREE NEGOTIATIONS INDIVIDUALLY, ANAC 2018
FIGURE 9: AGENT PERFORMANCE OVER TIME, STUDY ANAC 2019
Abstract
Increasingly, automated agents are interacting with humans in highly social settings. Many of
these interactions can be characterized as negotiation tasks. There has been broad research on negotiation
techniques between humans (e.g., in the business literature), as well as a great deal of work on creating
optimal agents that negotiate with each other. However, the creation of effective socially-aware agents
requires fundamental research on human-agent negotiation. Furthermore, this line of inquiry requires highly
customizable, fully-interactive systems that are capable of enabling and implementing human-agent
interaction. Previous attempts that rely on hypothetical situations or one-shot studies are insufficient for
capturing truly social behavior.
This dissertation showcases my invention and development of the Interactive Arbitration Guide
Online (IAGO) platform, which enables rigorous human-agent research. IAGO has been designed from
the ground up to embody core principles gleaned from the rich body of research on how people actually
negotiate. I demonstrate several examples of how IAGO has already yielded fundamental contributions
towards our understanding of human-agent negotiation. I also demonstrate how IAGO has contributed to
a community of practice by allowing researchers across the world to easily develop and investigate novel
algorithms. Finally, I discuss future plans to use this framework to explore how humans and machines can
establish enduring and profitable relationships through repeated negotiations.
Chapter 1. Introduction
1.1 Motivation
Human: Hello, how can I help you?
Google: Hi, I’m calling to book a woman’s haircut for a client. I’m looking for something on May 3rd.
H: Sure, give me one second…
G: Mm-hmm
H: Sure, what time are you looking for around?
G: At 12pm
H: We do not have 12pm available. The closest we have to that is a 1:15
G: Do you have anything between 10am and, uh, 12pm?
H: Depending on what service she would like…what service is she looking for?
G: Just a women’s haircut for now.
H: Ok. We have a 10 o’clock
G: 10am is fine
The above (purportedly) real-world dialog is a conversation between a fully autonomous system,
Google Assistant, and a human receptionist at a hair salon [83]. Ignoring the technological wizardry
required to support natural language interaction, this conversation seems simple and mundane. Yet it
highlights the pervasive nature of negotiation in human social interactions. Each party is
motivated to reach an agreement. Yet they have different preferences and constraints, and can only resolve
these differences through information exchange and compromise. This is the essence of all negotiation.
And while the state-of-the-art as demonstrated by this slick trade-show example is impressive, there
are concerns and limitations. These automated negotiators are constrained to a fairly strict script which
causes them to function more like information databases than nuanced strategists. With no context or
memory, they are likely to lack generalizability, and are rarely customized to their users (not to mention
the dangers of end-to-end learning in such systems, which renders debugging higher-level behavioral
hiccups problematic).
Unfortunately for these neophyte conversational agents, human negotiation, in its ecologically
wild, true form, requires a more robust approach. Human negotiation represents a dynamic and nuanced
procedure where value can be created as well as traded. It is characterized by flexible rules, non-binding
agreements, norms, emotions, and so-called cheap talk (a variety of unenforceable dialogue acts, from
threats and promises to conversational patter; see [23]), to name only a few of its most salient
characteristics. And yet, as humans begin to rely increasingly on automated agents to represent them, and
as they demand fully-natural partners in many social (and market-driven) transactions, the need for human-
aware negotiation agents has increased. From automated cars to personal smartphone assistants, these
agents must be dynamic, generalizable, and (increasingly) explainable in their behaviors.
Research on negotiation has been conducted within two large but rarely interacting communities—
computational science and psychology/behavioral economics. The former is concerned with finding
“optimal” solutions to negotiation problems (at scale), while the latter is primarily concerned with
understanding and influencing human behavior. And while interactions have increased in recent years,
there still exists a gap—designing computational agents that are capable of interacting with humans in a
social and human-like way—and the contributions of this thesis are focused on reducing this gap.
There has been a great deal of excellent work on developing optimal negotiation agents (see the
bodies of work in multi-agent systems, in such publications as [4], [21], [37], [58], [65], or [74]). Much of
this work has a background in game theoretic concepts, and is designed to optimize for agent success and
efficiency. A variety of problems have been considered (e.g., complex utility functions, opponent modeling,
and repeated interactions), and agents have substantially advanced the state-of-the-art in recent years. A
and repeated interactions), and agents have advanced substantially in the state-of-the-art in recent years. A
major facilitator of this research has been the creation of a community of scholarship through publicly-
available frameworks (e.g., the GENIUS platform [58] allows researchers to easily create negotiation
agents, distribute algorithms, and compare results with a repository of existing agents) and challenge
competitions (e.g., the Automated Negotiating Agents Competition) where agent vs. agent negotiations are run
at major AI conferences.
In parallel, there is a strong community of research focused on characterizing how humans
negotiate with each other, and quantifying the myriad effects that influence it (e.g., common human biases,
effective persuasion techniques, and the impact of personality; see [10], [12], [24], [28], [27], [31], [46],
[104], and [110]). This work largely proceeds through
empirical studies, and these studies tend to either optimize for ecological validity (and are thus either
observational or allow fairly unconstrained interactions between human participants) or for control (and
thus make use of highly-scripted human or computer confederates). Both approaches have drawbacks.
The development of social negotiating agents fits into this gap, and provides clear benefits to both
communities. Designing advanced artificial agents that are capable of acting as humans do (or acting in a
way that accounts for human behavior) allows for specific improvements. First, agents that accurately
model human behavior could help people transcend common biases (or exploit these biases to “win”).
Second, these agents can act as realistic partners, thus enabling practice and training for those wishing to
learn negotiation. Third, these agents mark an advancement in experimental control when studying human-
agent interactions. Specifically, these agents can be finely controlled, allowing experimental manipulation
of numerous negotiation sub-tasks (user-modeling, decision processes, emotional communication, to name
a few). Furthermore, use of realistic negotiation agents eliminates the need for fallible (inconsistent) human
confederates in behavioral experiments. Finally, agents enable previously impossible experiments that allow
analysis of interactive experiences, rather than pre-scripted dialogues.
These proposed automated agents that are capable of realistic negotiation with humans would be
uniquely poised to find jointly positive value in situations where human and agent goals are partially aligned
(but still distinct). Specifically, while existing agents can beat humans in a number of zero-sum or hyper-
constrained economic scenarios, these scenarios bear little resemblance to the richer, more nuanced domain
of human negotiation. Indeed, humans often outperform “rational” algorithms in these richer settings [41].
Human-aware agents should be able to emulate the best elements of human negotiating strategies, finding
ways to cooperate over time and discover stable equilibria that have hitherto been limited to human
negotiation.
However, research on these nascent agents is severely hampered by a lack of existing frameworks
for rapidly and consistently developing said agents. Negotiation remains a complex problem that is difficult
to study in isolation without either trained human confederates in detailed study setups, or with highly
evolved agents that are capable of simulating a reasonable human-like partner. Negotiation itself is an ideal
challenge problem for artificial intelligence research, as it requires significant advancement in a variety of
subfields and skills. These include cognitive skills and strategies, natural language, and non-verbal
behaviors and embodiment [48].
Unfortunately, current research platforms that can negotiate with humans are lacking in several
dimensions. Notably, prior efforts do not often simulate the free-form nature of human negotiation, lack
emotional signaling, and do not provide the ability for agents (or their human counterparts) to use
advanced strategies such as lying and threatening to terminate the negotiation. Nor do these existing
frameworks reason about and diffuse the common biases that pervade human decision-making. Finally,
while some research has explored human-agent negotiation, it has fundamentally focused on either a
very narrow slice of the problem (e.g., natural language chatbots [100]) or has presented hypothetical
situations rather than real in vivo negotiation interactions [112].
In this dissertation, I present my efforts to advance fundamental research in human-agent
negotiation but, more importantly, to establish a community of practice around this problem through the
creation of a shared research infrastructure and challenge competitions. I showcase the Interactive
Arbitration Guide Online (IAGO), an extensible framework for advancing research in realistic human-agent
negotiation. I motivate the need for this research framework and reify the algorithmic and engineering
contributions of such a system. I then discuss several fundamental research findings my framework has
fostered (relevant to computer science, psychology, and business interests), and also illustrate how this
framework has promoted both research and applications across a community of researchers.
1.2 State-of-the-Art Overview
Negotiation is a complex human social task that requires a diverse set of skills: from strategic
planning to rhetorical argument. And while structured negotiation is still a relatively uncommon activity,
most people engage in some degree of small negotiation quite often—from deciding what to eat for dinner
with a group of friends, to negotiating a job offer, to making a customer service request. Regardless of the
domain, negotiation plays a critical role in human interaction. Research shows that people who are highly
skilled negotiators tend to receive better salaries [45], and negotiation training pedagogy has long been the
domain of prestigious business prep courses [14].
Perhaps surprisingly, humans also prove quite reliable at discovering joint value in negotiation,
often finding mutually beneficial solutions, even in classical game theoretic scenarios (such as the
Prisoner’s Dilemma). And while reliably “getting to yes” [40] often requires years of training, even novice
negotiators are capable of employing heuristics, discerning norms, and engaging in so-called “cheap talk”
to create joint value.
As current technological tools continue to evolve into ever-more sophisticated artificial agents,
however, humans find themselves relying on increasingly human-aware agents to interact with the world
around them. In some cases, these agents are high-fidelity simulacra of humans themselves, designed to
act with all the foibles of their mortal antecedents; in others, these agents merely need to understand and
anticipate human negotiating behavior. Negotiating agents can act as representatives to humans,
automating online bidding [2], providing the moral core to autonomous vehicles [13], and even negotiating
the time of appointments for their users [83]. Negotiating agents are increasingly finding themselves in
situations where their goals are partially aligned with, but not identical to, those of their human partners. These agents,
for example, have been used to negotiate with humans in order to find optimal power-saving schedules in
smart grids [109], or to plan fuel-efficient routes to complete a delivery task [52].
Beyond their roles in assisting humans in partially-aligned negotiations, human-aware agents are
capable of serving in other capacities. As direct competitors, artificial agent negotiators are potentially
capable of outperforming humans in negotiation, as they can understand user strategies, recall information
about past negotiations with perfect clarity, and adapt to tailor their strategies to the situation. Agent
negotiators might also be capable of providing individualized feedback for humans learning negotiation,
and a number of systems have been proposed for teaching negotiation [59][60][68]. Agents also make for
nigh-perfect confederates in behavioral studies that merely try to understand human negotiation behavior:
the agents are tireless, consistent, and largely customizable to answer specific research questions. And
finally, human-aware negotiating agents may be able to enhance humans’ already-impressive ability to
create joint value in integrative situations.
But, compared to the current cutting-edge of negotiating agents, the best human negotiators are far
more capable of exploiting so-called “cheap talk” in negotiation. They can use unenforceable patter to
influence deals, read social signals to model their opponents, and tailor their efforts to each opponent based
on their personal history, even without the benefit of computer-enabled processing power. Beyond this,
many human negotiators can establish relationships, in which they must consider not only the current
negotiation, but how its outcome will affect future interactions. Experienced human negotiators can
understand cultural information, parse natural language, and use rhetorical techniques to succeed in
negotiation. This experience differs radically from automated negotiating agents that are not designed
specifically to negotiate with humans—these agents tend to seek rational, game-theoretically optimal
outcomes.
However, even agents designed to act as humans do (or at least to account for human quirks and
emotions) often take simple approaches in constrained domains. Existing work has resulted in the creation
of agents capable of following dominant strategies or other “best practices”, such as effectively employing
anger or strategic information exchange [III]. Yet, existing automated agents are still lacking in several
areas. Most critically, many automated agents are used in scenarios where they interact with a human
partner over a very limited timeframe (often, one-shot games) (see VII, [96]). Due to this, our current
generation of automated negotiating agents lacks the ability to thrive in the broader social landscape of
repeated negotiation, where trust and reputation effects are paramount.
Unfortunately, the research and design of human-like negotiating agents is an arduous process, and
current efforts have centered around bespoke solutions that are custom-fit for individual research studies,
or lack key features that are critical to simulating realistic human negotiation. To truly research
negotiation—a complex social procedure necessitating numerous AI processes—comprehensive solutions
and research platforms are required.
Advancement in research has occurred with the development of such negotiating platforms.
Probably the most notable example is GENIUS [58], which has proved pivotal in driving research in agent-
agent negotiation by providing a clear framework for the design of negotiating agents—as well as by
encouraging research competitions in the domain. As discussed in Chapter 2.5, however, GENIUS is
insufficient for simulating human negotiation. Other efforts have yielded incremental advances in
understanding human interaction—the Colored Trails platform [44] simulates a complex multi-issue
bargaining task (see Chapter 2.4.2), and the NegoChat chat-bot [100] responds to natural language in a
negotiation setting. These examples, while helpful for examining specific questions, lack customizability,
and do not contain all the features that a human-human or human-agent negotiation interaction would
require.
1.3 Key Contributions
This dissertation presents my work in advancing the understanding and modeling of human-agent
negotiation, while simultaneously enabling a community of scholarship around this research domain. This
includes the creation of computational models that capture fundamental processes underlying human
negotiation, the construction of agents that incorporate these models, and careful empirical studies that
validate these models’ fidelity. Critically, however, this work includes explicit efforts to establish a
community of research practice. This includes creating a novel and immense framework that substantially
simplifies the challenge of creating and validating negotiation agents, freely sharing this framework and
our validation data, and sponsoring challenge competitions at major AI conferences to grow interest in the
area. In short, I argue that the IAGO platform has enabled novel research in human-agent negotiation,
and that through it, I have contributed novel human behavioral insights and agent design advancements.
These key contributions are summarized:
• (Chapter 4) Advancement of computational models of human negotiation processes, specifically:
o Modeling the use and impact of deceptive communication
o Modeling common human biases in how people infer their opponent’s interests
o Modeling anchoring and framing effects
o Modeling the use of emotional expressions to influence concession-making
o Modeling the impact of prior negotiation experience on the choice of influence tactics
o Modeling the priming of certain emotions/states (such as suspicion)
o Modeling the use of favors and ledgers to foster long-term relationships
o The use of these models to support automated tutoring of negotiation skills
• (Chapter 3) Development of a web-based research platform for human-agent negotiation research
(IAGO), which has enabled:
o (Chapter 7) Establishment of a community of research, rooted in the keystone of the
Automated Negotiating Agents Competition (ANAC), which has driven exploratory
research that is further formalized herein
o (Chapter 5) Empirical results that demonstrate real-world human-human cognitive effects
in a human-agent context
o (Chapter 6) Advancement of the state-of-the-art in developing and implementing AI
negotiating agents that interact with humans (based on computational models developed,
per point 1, above)
Further detail and publication information can be found in Appendix B. The remainder of this
section details these contributions, broken out by sub-topic, with references to my own published work
(cited via Roman numeral) as well as others’ (cited by endnote).
1.3.1 Advancement of Computational Models
In order to develop agents that are capable of understanding humans and reacting to them in a
variety of ways, computational models of actual human behavior must be constructed. My work (and that
of others) achieves this by modeling these human negotiation processes. In particular, this work describes:
modeling the use and impact of deceptive communication (see Chapters 4.3.2, 4.4, and VI), modeling
common human biases in how people infer their opponent’s interests (see Chapters 4.2.1, 4.2.3), modeling framing
and anchoring techniques (see Chapter 4.1.2), modeling emotional influence tactics (see Chapter 4.3, and
III), modeling the impact of prior negotiation experience on tactic choice (see Chapter 4.4, and VIII,X),
modeling priming emotions such as suspicion (see Chapter 4.1.1, and X), and modeling the use of favor
exchange to foster long-term relationships (see Chapter 4.4 and IV, [76]), and the use of models for
providing teaching feedback in automated tutoring (see Chapter 4.2.4, and [60]).
1.3.2 Platform Development
The IAGO platform is a freely distributed and easily customizable framework for creating human-
like negotiating agents and validating their behavior through online crowdsourcing. The framework
provides default models of the fundamental processes posited in theories of human negotiation (e.g.,
information exchange, opponent modeling, value creation, and value claiming). Moreover, it allows
researchers to posit and test novel methods and novel combinations of these algorithms. To date, IAGO has
enabled the creation of over 50 agents from over a dozen different universities. These agents have tested
these techniques over more than 2,500 negotiations with human users (not all datasets have been publicly
released). IAGO is described in detail in
Chapter 3, and itself has been described in published work; its initial debut was a Best Demonstration
Finalist at AAMAS [I].
1.3.2.1 Research Community Establishment
By creating an effective research platform, I have enabled myself and others to build intelligent,
interactive agents that can model the standard psychological paradigms surrounding negotiation at multiple
levels. Various researchers have used IAGO to develop insights about human behavior [66][99]. IAGO
has been the platform used for the Human-Agent League of the Automated Negotiating Agents Competition
(ANAC) for four years, and dozens of unique agents have been developed by universities around the world
to take part in the competition, which has been included in the program at the International Conference on
Autonomous Agents and Multiagent Systems (AAMAS) as well as the International Joint Conference on
Artificial Intelligence (IJCAI), where it has resided for the last two years.
1.3.2.2 Empirical Human Research Results
One general contribution of my work is to adapt well-studied psychological strategies and show
that they work in the context of interactive negotiation agents. My work on favors and ledgers is an example
of this (see Chapter 6.2), as are the results on the impact of competitiveness [II], as is work that highlights
the importance of information-exchange [III]. Furthermore, insofar as prior psychological work has tended
to focus on fixed, scripted scenarios, the IAGO platform has allowed an increase in ecological validity by
enabling truly interactive (but still experimentally controlled) experiences. Indeed, by moving to a more
interactive approach while providing fine-tuned “knobs” for adjusting the behavior of agents in consistent
ways, I show these policies work in fully interactive contexts, while maintaining an unprecedented level of
experimental control.
To this point, IAGO has been used for a number of studies that provide information about human
behavior in negotiation, especially the development of behavior and relationships over time through
repeated negotiation. The foundation for IAGO has been very extensively laid by a number of publications
relating to the use of human-like agents in various scenarios, from simple economic games in a repeated
context [IV], to cross-cultural studies [V], to investigations into agents that can understand individual
differences [VI]. (Only the most relevant papers are cited here; a full list of the author’s publications,
including over a dozen papers appearing at AAMAS, IJCAI, AAAI, ACII, and IVA, can be found in
Appendix B.)
1.3.2.3 Advancement of Effective Agents
Through partnership with ANAC, I have been able to further research in several areas. A number
of agents have been developed using IAGO that represent significant improvements over the state-of-the-
art. Simple, “one-shot” agents are shown to be robust competitors against humans [III], and the results of
the first annual Human-Agent League of ANAC [VII] provided a number of unique agents that were shown
to influence human success against and rapport with the agents. In particular, we have shown that agents
that use a variety of manipulative techniques heavily influence users’ opinions on the use of those
techniques in the future [VIII]. It is this idea (agents that reason over time and statefully about individuals)
that is the subject of current work, and this idea drove the design of every annual Human-Agent League of
ANAC [VIII] from the second onward. Furthermore, results from IAGO agents have been used in
conjunction with machine learning techniques to improve feature selection and outcome prediction for
negotiation scenarios (see XI), showing their applicability to mainstream techniques. These publications
show that agents affect humans in predictable ways, and support the assertion that agents are effective tools
to influence humans. Together, these agent advancements and designs inform the future of IAGO as the
premier platform for human-agent negotiation research.
1.3.3 Selected Personal Publications
I. Mell, J., Gratch, J. (2016) “IAGO: Interactive Arbitration Guide Online”, In Proceedings of the
2016 International Conference on Autonomous Agents and Multiagent Systems. International
Foundation for Autonomous Agents and Multiagent Systems. Best Demonstration Nominee.
II. Mell, J., Gratch, J., Lucas, G. (2018) "The Effectiveness of Competitive Agent Strategy in
Human-Agent Negotiation." Orally Presented at the 2018 American Psychological Association’s
Technology, Mind, and Society conference.
III. Mell, J., Gratch, J. (2017) “Grumpy & Pinocchio: Answering Human-Agent Negotiation
Questions through Realistic Agent Design”. In Proceedings of the 2017 International Conference
on Autonomous Agents and Multiagent Systems. International Foundation for Autonomous
Agents and Multiagent Systems.
IV. Mell, J., Lucas, G., Gratch, J. (2015) “An Effective Conversation Tactic for Creating Value over
Repeated Negotiations.” In Proceedings of the 2015 International Conference on Autonomous
Agents and Multiagent Systems (pp. 1567-1576). International Foundation for Autonomous
Agents and Multiagent Systems.
V. Mell, J., Lucas, G., Gratch, J., Rosenfeld, A. (2015) “Saying YES! The Cross-cultural
Complexities of Favors and Trust in Human-Agent Negotiation”, In Proceedings of the 2015
International Conference on Affective Computing and Intelligent Interaction, Xi'an, China.
VI. Mell, J., et al. (2018) “Towards a Repeated Negotiating Agent that Treats People Individually:
Cooperation, Social Value Orientation, & Machiavellianism”, Intelligent Virtual Agents.
VII. Mell, J., et al. (2018) “The Results of the First Annual Human Agent League of the Automated
Negotiating Agents Competition”, Intelligent Virtual Agents.
VIII. Mell, J., Lucas, G., Gratch, J. (2018) “Welcome to the Real World: How Agent Strategy
Increases Human Willingness to Deceive”, In Proceedings of the 2018 International Conference
on Autonomous Agents and Multiagent Systems. International Foundation for Autonomous
Agents and Multiagent Systems. Best Paper Nominee.
IX. Mell, J., Gratch, J., Aydogan, R., Baarslag, T., and Jonker, C.M. (2019) “The Likeability-Success
Trade Off: Results of the 2nd Annual Human-Agent Automated Negotiating Agents
Competition”, In Proceedings of the 8th International Conference on Affective Computing &
Intelligent Interaction.
X. Mell, J., Lucas, G., Gratch, J. (2020 expected) “The Role of Experience in Negotiation”, Journal
of Artificial Intelligence Research. Fast-tracked, under resubmission review 2020.
XI. Mell, J., Beissinger, M., Gratch, J. (2020 expected) “An Expert-Model & Machine Learning
Hybrid Approach to Predicting Human-Agent Negotiation Outcomes in Varied Data”, Under
review, invited submission, Journal of Multimodal User Interfaces.
1.3.4 Dissertation Outline
The remainder of this document will present the case for IAGO, and for human-aware negotiating
agents in general. The remainder of this chapter provides additional motivation for how agents can benefit
from utilizing human-like techniques, and how these agents can provide insight into human behavior.
Chapter 2 defines negotiation as it is addressed in this document and provides information on related work.
Chapter 3 provides a high-level technical overview of IAGO itself. Chapter 4
describes models of human behavior that have been advanced (and how they are implemented within
IAGO). Chapter 5 and Chapter 6 both provide overviews of my contributions to the empirical scholarship
of this topic, starting with the antecedent studies that demonstrated the need for IAGO in the former, then
proceeding into the development of more advanced agents using IAGO in the latter. Finally, Chapter 7
discusses the future of the IAGO platform, and the areas of potential future long-term (multi-year) research
for those interested in the subfield.
Chapter 2. Related Work
2.1 The Importance of Negotiation
Negotiation, if broadly defined, entails all social interactions in which goals and interests collide
[95]. It is the subject of numerous glossy-covered tomes that purport to ensure future success in business
[40][108], and has been shown to be predictive of future success in a number of fields [45][89]. But even
in the less grandiose and more narrowly defined area of multi-issue bargaining, negotiation remains a key
component of human interaction. Whether negotiating for a better salary or for world peace, negotiation
requires a number of cognitive processes in humans. Good negotiators are capable of understanding their
opponent’s goals and are capable of employing various strategies to ensure that their own goals are met.
However important negotiation may be for business coaches and psychological modelers, it is as
important (if not more) for agent designers and computer scientists. Whether positive or not, an increasing
amount of humanity’s socialization, economy, and information exchange now takes place online.
Advertising/tech giants (Facebook/Google/Apple/Microsoft/Amazon) wish to negotiate deals with their
users for their personal data and negotiate with third parties to sell it. Politicians have long sought to
understand human desires and needs, especially when their mutable positions on talking points involve an
en banc negotiation for voting margins. Since it is untenable (or at least impractical) to send competent
human negotiators to speak with each individual in these cases, agents that are capable of understanding
human positions and behaviors are increasingly critical.
Eschewing practical considerations, negotiation as a field represents a nigh-perfect microcosm of
human behavior, ripe for study. Negotiation involves social interaction, future-thinking strategy,
argumentation, natural language understanding/generation, emotion processing, coping strategies,
optimization and search, and a host of other research foci (these foci are not unique to computer
science—behavioral economists and psychologists are perhaps equally focused on these issues). Some in
the research community have even
gone so far as to say that “Autonomous systems that are capable of negotiating on our behalf are among
society’s key technological challenges for the near future” [6].
Negotiation itself, as well as automated agents’ role within it, represents an immense open area of
research. Negotiation remains an area of social interaction in which humans are generally very, very
effective. As such, there are many lessons that agent designers can glean from mindful analysis of human
behavior. And research [97] indicates that agents themselves may not yet be at the point where they can be
used interchangeably with humans even if they demonstrate agency or affect (the author does not know of
any successful efforts to create “Turing Test” negotiation agents, although see [15]). As such, human-agent
negotiation represents a unique field in which the lessons of human-human and agent-agent research are
informative yet insufficient. Human-agent negotiation, regardless of its final deliverables, will need to be
well supported by empirical data as well as practical design decisions—a lofty goal we examine in the
remainder of this chapter.
2.2 The Importance of Human-Aware Negotiating Agents
Agents have already been used as useful tools in negotiation—from acting as mediators to
manifesting as full-fledged conversational partners [34][55]. Humans, however, seem frustratingly skilled
at several techniques that agents have yet to emulate—e.g., humans are very capable of developing rapport
and trust with their human partners. Humans, unsurprisingly, are not infallible however, often approaching
negotiation with detrimental biases that, without training, can often cause them to miss out on joint value
[27]. One goal of developing human-aware negotiating agents, therefore, is to allow them to profit from
the full range of useful human techniques, while limiting the drawbacks of humanity. These informed
agents might additional prove to be powerful tools for teaching negotiation skills; however, only if they
possess an accurate model of human behavior will they perform well both as partners and teachers. The
work described here demonstrates the need for a platform on which the current horizon of virtual negotiating
agents can be expanded to utilize human-aware strategies, creating truly human-aware negotiating agents.
Further, agents developed using this framework can be cognizant of the social factors influencing
negotiation, including reputation effects and the implications of long-term repeated relationships.
2.2.1 Human-informed Agents vs. Human-studying Agents
The ideal human-aware negotiating agent could be used for a twofold purpose: 1) to allow us to study
human behavior in a (relatively) ecologically valid manner, and 2) to adopt helpful human-like techniques
to make them better partners for humans in the long run. Automated agents are already widely used in both
these contexts, from mediated negotiation tasks [54] to elaborate training systems [63]. However, designing
virtual negotiating agents with human-like features is a multidisciplinary task, involving not only classical
problems of classification and machine learning, but also myriad topics in psychology, cognitive science,
business, graphics, animation, and natural language processing.
Regardless, at the core of many applications of human-aware negotiating agents is a necessity to
have a clear behavioral model that can be informed by the various input channels and produce believable
output behaviors on a virtual character. While there is no strict requirement that the behavior that human-
aware negotiating agents demonstrate be completely identical to that of real humans, systems that are
informed by data collected from behavioral studies on humans have been shown to be both believable and
effective [50]. These effective agents can adopt human-like tactics.
2.2.2 Helpful Human Techniques
Specifically, many human negotiating techniques are capable of providing efficient signaling in
negotiation that enhances joint value (and/or claimed value). Human techniques of emotional display (in
this case, anger) are often used to demonstrate a strong bargaining position [112]. Humans are also capable
of quickly establishing metrics that can be used to judge negotiating history with a particular individual (or
agent). For example, humans can use norms of fairness to judge if they have been treated well in the past,
and use this to predict future behavior [24][39]. This allows humans to quickly establish trust and grow
joint value [106]. Of course, to adequately judge fairness, exchange of information regarding preferences
is required, which in turn improves the model of the user preferences—even if the preferences are uncertain,
or the opponent is suspected of dishonesty. Preference information is also capable of dispelling incorrect
biases, such as those stemming from a fixed-pie assumption (the often-incorrect belief that negotiation is a
zero-sum game). The concepts of user modeling, misinformation, and reputation, trust, and relationships
are explored in Chapter 3.
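As a toy illustration of such a fairness metric (my own sketch; the representation and numbers are assumptions, not a model from the studies described later), an agent or human might score past treatment as its share of realized joint value:

# Toy fairness metric over a shared negotiation history.
# Purely illustrative; names and values are assumptions.
def fairness_score(history):
    """history: list of (my_points, partner_points) from past negotiations."""
    mine = sum(m for m, _ in history)
    total = mine + sum(p for _, p in history)
    return mine / total if total else 0.5  # 0.5 = perfectly even treatment

past = [(8, 8), (12, 4)]     # one even deal, one deal lopsided in my favor
print(fairness_score(past))  # 0.625: a partner may judge this history unfair

A score drifting far from 0.5 is a simple signal that the relationship is unbalanced, the kind of judgment the favor-and-ledger models discussed in Chapter 4.4 make explicit.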
2.2.3 Studying and Guiding Human Behavior
Much as empirical work observing humans and agents negotiating together can improve the current
generation of agents, these improved agents can be used to conduct ever-more elaborate user studies
[32][64]. Thus, the validation of human-aware agents and their deployment as “virtual confederates” in
human studies creates a positive feedback loop—agents are improved in their technique, and can be used
as more effective confederates. Furthermore, these virtual confederates ease empirical studies by being
cheap, consistent, and customizable in a way human confederates rarely are.
Beyond simply acting as near-perfect confederates, human-aware negotiating agents may also be
used as training agents that provide feedback on various tasks, including negotiation and conflict resolution.
As these agents evolve from simply being tools to being full-fledged partners, a body of research is needed
on how to best develop these human-aware agent partners. These human-aware agents will be vital in
teaching negotiation, where having a consistent partner and/or teacher whose reactions are fully controllable
greatly helps training neophyte negotiators.
Human-agent negotiation research should go further than simple replacement of a human with an
agent: there is ample evidence that the framing of partners as virtual agent or human has a strong effect in
various economic games and negotiations [28]. Furthermore, there is relatively little work regarding the
effectiveness of negotiating agents as virtual coaches, although some work has been done on providing
simulations of crowds as a dynamic feedback mechanism during public speaking [9]. Virtual humans are
often shown to have less affect than a similar virtual avatar that is controlled by a human [12][67], but this
seeming limitation may actually be beneficial, as it allows them to provide feedback in a more direct and
clear manner without risking potential social consequences. These examples underline the main point:
understanding how agents could act as virtual teachers or coaches requires intimate knowledge of social
intelligence, and necessitates the development of more socially intelligent machines in general.
There have been numerous studies that have attempted to examine human behavior in the presence
of virtual agents. However, many of these studies lack true interactivity [31][112], instead opting to allow
participants to merely state “what they would do” in a given situation, and then proceeding according to a
given script. While this kind of research has its place (and certainly makes it easier to identify outcomes in
hypothesis testing), it is fundamentally asocial. As such, a truly interactive solution is needed that allows
humans to negotiate with advanced agents in the most natural way possible. This is one key point that drove
the development of IAGO: an interactive research platform for studying negotiating agents in real situations
with humans.
2.3 The Requirements for Human-Aware Negotiating Agents
Having established the importance of human-aware negotiating agents, it is now important to
emphasize why current approaches to their design are lacking. Succinctly, most agent research is woefully
misaligned with how people actually negotiate in the real world—current negotiating agents do not use
risky moves, engage in cheap talk, have social goals, or utilize emotional signals. Most approaches to
negotiation agent design have focused on this kind agent-agent interaction— and make strong limits on the
type of information that can be exchanged between parties, resulting in oversimplification of the negotiation
task. For algorithms that can negotiate with people in a human-like manner, these limits need to be removed.
Human-aware negotiating agents must incorporate more complex forms of signaling, such as emotional
reactions to offers [29] or natural-language dialog [100] and sometimes may involve sophisticated virtual
embodiment [35]. Although some research has sought to provide a foundation for using these richer
communication channels (e.g., [28] provides a framework for emotional signaling), most of this research
has focused on short-term interactions and single negotiations.
Given the importance of robust agents that display social emotions, as well as the necessity of
testing them with real humans, I posit that any proposed human-aware negotiating platform should adhere
to the following design goals:
• Human-informed – while human-aware agents need not act just as humans do, their behavior
should be informed by actual behavioral studies using humans—increasingly, work is being
done that benefits from this approach [56]. This behavior may be emulative of humans (what
do humans do?) or simply analyzed by its effectiveness in interacting with humans (how can
the system best train/educate/entertain humans?).
• Generalizable to a domain – making a human-aware agent that is completely generalizable to
all situations is overambitious; however, making generic agents within a domain (negotiating
agents, conversational agents, question/answer agents) is a key goal.
• Easily customizable – being able to quickly adjust dialogue, goals, and even personality should
be a priority; in-depth knowledge about the system or psychology should not be required.
• Relationship-focused – while not all systems require agents to have memory from session to
session, social-emotional agents will benefit from remembering the results of past encounters
in a tractable fashion.
This final point is the focus of my current work, which aims to provide a framework for designing
relationship-aware and truly social agents.
This more realistic form of negotiation is realized by the IAGO programming platform for creating
virtual negotiators, which allows experiment designers to create agents exhibiting a variety of human-like
abilities: argumentation, emotional displays, preference statements and elicitations, and partial offers.
IAGO primarily deals with the multi-issue bargaining task, and its degenerate forms. An overview of the
negotiation domain and its considerations is provided in the remainder of Chapter 2. We discuss uses for this
platform in designing effective studies with emotive virtual agents, as well as the design of the relationship-
aware agents themselves in later chapters.
2.4 Negotiation Formalizations
While negotiations can, at their core, be characterized in game theoretic terms, in the wild, these
constructs do not always capture the full picture of negotiation. The multi-issue bargaining task (especially
as we characterize it here) combines several features that make it a difficult problem to examine—a full
treatment of which can be found in the following sections. Succinctly, however, it is a non-fully-observable
game, done in real-time with no formal round structures. Lying is allowed, and even the results of the game
are not viewable with certainty even after the game has concluded (it is hard to say who “won” the
negotiation). The remainder of this chapter will provide a concise overview of some of the more salient
items that can affect a negotiation outcome, to illustrate the scope of what human-aware negotiating agents
should be prepared to handle, as well as present previous work that has attempted to solve this problem.
2.4.1 The Ultimatum Game
The Ultimatum Game is an oft-used task in social economic contexts, and can be viewed as a
degenerate form of the more true-to-life Multi-Issue Bargaining Task, which we describe below. However,
the Ultimatum Game is a helpful tool for initial research, since it allows the design of studies that are
unburdened by branching behavior that can make analysis difficult. In the classic Ultimatum Game, one
party in a negotiation is given a certain amount of resources (points or dollars are often used). That party
must then decide how many of the resources to offer to his or her opponent. If the receiving party accepts
the resources offered, then the game concludes with the resources split as proposed. If, however, the
receiving party refuses the resources split, both parties receive nothing.
This game (in its single-interaction form) has a dominant strategy—the proposer should always
offer the minimum non-zero amount to the receiver, and the receiver should always accept. Rejection by
the receiver is irrational, since it results in zero points for both sides. Like most game theoretic constructs,
however, the Ultimatum Game rarely works out in practice as the theory might suggest. When humans
play the Ultimatum Game, they will very often refuse splits that are viewed as “unfair” [87] (an incredibly
counterintuitive result to no one but economists; see https://www.smbc-comics.com/comic/2014-10-09). In human-
agent research, the Ultimatum Game is a helpful tool because it allows analysis of how various agent
behaviors may be able to shift this response (for example, by expressing certain emotions prior to making
the decision, or by varying the perceived agency of the agent [97][112]).
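To make the round structure concrete, the following Python sketch (purely illustrative; the function name and the 30% fairness threshold are my assumptions, not values drawn from this dissertation or from [87]) simulates a single Ultimatum Game round against a fairness-sensitive responder:

# Illustrative single round of the Ultimatum Game.
# The 0.30 threshold is a hypothetical stand-in for the human
# tendency to reject "unfair" splits; it is not an empirical value.
def ultimatum_round(pool, offer_to_receiver, fairness_threshold=0.30):
    """Return (proposer_payoff, receiver_payoff) for one round."""
    if offer_to_receiver / pool >= fairness_threshold:
        # Offer accepted: resources are split as proposed.
        return pool - offer_to_receiver, offer_to_receiver
    # Offer rejected: both parties receive nothing.
    return 0, 0

print(ultimatum_round(10, 1))  # (0, 0): the "dominant" minimal offer fails
print(ultimatum_round(10, 4))  # (6, 4): a fairer split is accepted

Against such a responder, the game-theoretically dominant minimal offer earns nothing, which is exactly the gap between theory and observed human play described above.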
2.4.2 The Multi-Issue Bargaining Task
The multi-issue bargaining task, in contrast to the Ultimatum Game, allows for a variety of
additional actions, and has no such dominant strategy as the single-shot Ultimatum Game does. (It is worth
noting that the indefinitely repeated Ultimatum Game also lacks a dominant strategy in game theory, due to
the potential to signal future behavior through action in the current round; a number of strategies that take
advantage of this fact are well-represented in behavioral game theory, “tit-for-tat” being one of the most
useful [3].) Indeed, the multi-
issue bargaining task substantially complicates the problem—it involves negotiating over multiple issues,
each with different (non-observable to the opponent) preference weights. There may be external power
considerations, such as the presence of alternative deals, and negotiating takes place over a number of
rounds (or a time limit).
Regardless, in the formalization used in this work, the multi-issue bargaining task makes several
simplifying assumptions that allow algorithms (and human participants) to efficiently reason about task-
tradeoffs while retaining the core elements of real-world negotiations. Offers are typically formalized as an
allocation (or level) on each of a number of distinct issues. For example, we might represent a negotiation
over a number of fruits, of two types (apples and bananas) with several levels (Side A gets 1 banana, Side
B gets 3 bananas, e.g.). See Table 1. Each party assigns some utility to a deal (often formalized as a linear
combination of weights associated with each issue allocation; the linearity of the combination is not
required, but is a common simplification). Often this utility function is unknown to
the other party and must be discovered either by communication or through the exchange of offers. Often
there are incentives to misrepresent hidden information (e.g., lying about preferences or making and
breaking promises), so that trust becomes a significant facilitator or obstacle to efficient solutions.
There are of course logical and structural antecedents to negotiation that can inform the outcome
heavily. For instance, one side may assign a much higher utility to all the items than the other. This is a
form of structural effect, in which one side is more highly incentivized to do well (and may be able to more
effectively threaten to “walk away”). Furthermore, the items themselves possess utilities such that the
entire multi-issue bargaining task is distributive or integrative (further, there are degrees of integrativeness).
Fully distributive games are sometimes referred to as zero-sum since any gain to one side of the negotiation
is represented by a loss to the other side. (Strictly, it is not true that fully distributive games are always
zero-sum: in a game with three items, one side may value the items with the utilities {1,0,0} while the other
values them with utilities {2,0,0}; while this is distributive, both sides wanting the same item, it is not
zero-sum, since gains and losses are not exactly mirrored. Real-world negotiations are very rarely fully
distributive, much less zero-sum.) Integrative games represent at least a partial mismatch of utilities
across the two sides—one side will assign greater relative value to certain items which are worth less
relative value to the opposite side. Consider the following example:
Table 1: Sample Utility Table, 2-issue negotiation

                     Apples   Bananas
Item Quantity           4        4
Item Utility to A       3        1
Item Utility to B       1        3
In the preceding example, it is a common but non-optimal solution to evenly split all items between
the two sides. In actuality, however, both sides could do better by simply giving Side A all of the apples,
and Side B all of the bananas.
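The following sketch works through this arithmetic in Python (an illustration of the formalization only; the helper names are mine and do not correspond to IAGO code):

# Linear utilities from Table 1: 4 apples and 4 bananas to allocate.
WEIGHTS_A = {"apples": 3, "bananas": 1}
WEIGHTS_B = {"apples": 1, "bananas": 3}

def utility(weights, allocation_to_self):
    """Linear combination of issue weights and allocated quantities."""
    return sum(weights[issue] * qty for issue, qty in allocation_to_self.items())

# Even split: each side takes 2 apples and 2 bananas.
even = {"apples": 2, "bananas": 2}
print(utility(WEIGHTS_A, even), utility(WEIGHTS_B, even))   # 8 8

# Integrative split: A takes all apples, B takes all bananas.
print(utility(WEIGHTS_A, {"apples": 4, "bananas": 0}),
      utility(WEIGHTS_B, {"apples": 0, "bananas": 4}))      # 12 12

Both sides score 12 rather than 8, which is the sense in which the integrative split dominates the seemingly "fair" even split.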
2.5 Traditional AI Approaches to Negotiating Agents
2.5.1 Agent-Agent Negotiation
Agents developed for negotiation purposes are the subject of several areas of research. While
the use of agents as mediators is well-established [54], work on competitive negotiation has mostly yielded
robust models of agent-agent negotiation [37]. Indeed, several tools for developing and improving agents
in agent-agent competition have been developed, such as the GENIUS platform [58]. These agents typically
participate in games that are characterized by rapid offer exchange (sometimes hundreds of offers per
second) and strict protocols (alternating offers only, e.g.). Communication of preferences is often
disallowed, or at best, restricted to certain phases of the negotiation. Emotional exchange is rarely used.
While these models are useful for many applications, they are poor comparisons to human negotiation. In
human negotiation, other factors like trust, rapport, and emotion are impactful. Different tactics and
different models are required for agents that are adequate negotiation partners for humans or (better yet)
can teach humans negotiation [20].
Agent-agent negotiation is summarized in the framework outlined in Figure 1. Basic agent-agent negotiation requires maintaining a private state in which the agent knows its own preferences and BATNA (see Chapter 4.2.1). Furthermore, agents can engage in opponent modeling by examining the series of offers they receive from their opponent. Agents must also implement some kind of decision module in which they decide which offers to make and which incoming offers to accept. To an extent, agents can also engage in opponent influencing by attempting to signal their own preferences (truthfully or not).
Figure 1: Framework for Agent-Agent Negotiation
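To make this framework concrete, the following minimal sketch shows how the four components might map onto an agent’s structure. The code is illustrative Java only; all names are invented for this example and do not correspond to any particular platform’s API.

import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the Figure 1 framework (illustrative; not any platform's actual API).
public class BasicNegotiationAgent {
    // Private state: the agent's own per-item utilities and its BATNA.
    private final Map<String, Integer> myUtilities;
    private final double batna;
    // Opponent model: estimated opponent interest in each item, refined from observed offers.
    private final Map<String, Double> opponentInterest = new HashMap<>();

    public BasicNegotiationAgent(Map<String, Integer> myUtilities, double batna) {
        this.myUtilities = myUtilities;
        this.batna = batna;
    }

    // Opponent modeling: items the opponent keeps claiming are assumed valuable to them.
    public void observeOffer(Map<String, Integer> opponentShare) {
        opponentShare.forEach((item, qty) ->
                opponentInterest.merge(item, (double) qty, Double::sum));
    }

    // Decision making: accept any incoming offer whose value to us beats our BATNA.
    public boolean shouldAccept(Map<String, Integer> myShare) {
        double value = myShare.entrySet().stream()
                .mapToDouble(e -> myUtilities.getOrDefault(e.getKey(), 0) * e.getValue())
                .sum();
        return value >= batna;
    }

    // Opponent influencing: signal a preference (truthfully or not).
    public String signalPreference(String item) {
        return "I really value " + item + ".";
    }
}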
Several platforms have attempted to facilitate this simple form of negotiation for various purposes, alongside numerous attempts to model the information contained within the negotiation itself (including the human and agent mental states). The GENIUS platform [58] allows a variety of multi-issue bargaining tasks to be completed rapidly between two agents. GENIUS represents a bold step in agent-agent negotiation research: it provides both algorithmic solutions to endemic problems (e.g., through its suggested library functions) and a platform on which others can spur further research. IAGO seeks to replicate this feat in the human-agent domain.
GENIUS has been used in negotiation competitions held at AAMAS and elsewhere for several years (see information on ANAC: http://web.tuat.ac.jp/~katfuji/ANAC2015/). The platform provides a number of helpful features for examining agent-agent negotiation, including simulation, data visualization and analytics, and easy customization of negotiation scenarios and formats. For example, the current version of GENIUS supports repeated negotiations and a “stacked alternating offers protocol” for facilitating three-party negotiations. However, as part of its optimization for these types of agent-agent negotiations, GENIUS omits several key features of human-agent negotiation, such as partial offers, complex preference spaces, non-final offers, and favors.
2.5.2 Human-Agent Negotiation
Human-agent negotiation involves several additional features not captured by Figure 1. An updated model, with the changes highlighted, is displayed in Figure 2 (adapted from [85]). Notably, human-agent negotiation involves reasoning about the individual, on a longer timescale (where relationships become relevant), and over richer channels. To summarize and extend the model from the previous section, any participant (human or agent) must be able to:
1. Understand their own state and make reasonable decisions based on it (maintain a private
state).
2. Adequately model their opponent’s state and possibly take action to improve their model
(opponent modeling).
3. Make decisions about incoming and outgoing offers, messages, and emotional display
(decision making).
4. Convey information about their own private state to their strategic, long-term benefit
(opponent influencing).
Human-aware negotiation additionally requires agents that:
5. Reason about and work towards certain joint outcomes, such as rapport-building (blue, #1).
6. Employ advanced communication on additional channels, such as emotion or natural language, and update their decision models accordingly (green, #2).
7. Comprehend individual differences, norms, and personalities (orange, #3).
Figure 2: Framework for Human-Like Negotiating Agent
In particular, negotiator relationships are recognized as being as important as the quality of offers exchanged. Thus, trust and deception are key tools in an effective negotiator’s arsenal [90]. Further, emotions can be used to affect the outcome of negotiations, particularly through the use of threats or anger [112]. Existing human-unaware agents are unable to benefit from these tools. Already, however, this is beginning to change: Van Kleef’s results have been replicated in the human-agent context, with agents using an emotional channel to express anger and claim value in negotiations [29]. The use of trust-building and social interactions in human-agent systems has also been explored [19]. While agents have been developed specifically for human-agent interaction, these efforts tend to focus on a particular aspect of negotiation, e.g., natural language [101], or negotiations without explicit preference exchange [56].
Some work has focused on designing agents that are capable of working in complex situations (such as repeated interactions and multiple negotiations). Often these agents are designed to perform well in theory against humans, but still place strict limits on the kinds of communication that can be conducted. Work by Littman & Stone describes agents that can learn optimal strategies in repeated games (but limits this to offer exchange in bimatrix games) [69], while Crandall et al. designed agents that use language to achieve good results in multiple negotiations [21][22]. This line of work indicates that effective agents will need tools to reason (and persuade) over time.
In sum, the next generation of virtual agents needs both to reason effectively about strategy, as current agents do, and to embody the strategies popular in human negotiation. These agents need to model opponent preferences, log information about players over time, and utilize emotion effectively, both to perform well as negotiation partners and, perhaps eventually, as effective trainers for humans seeking negotiation skills.
2.5.3 Existing Human-Agent Attempts
Although human-agent platforms have heretofore been lacking, there has been no shortage of research attempting to create viable human-aware negotiating agents. Work by Rosenfeld, Kraus, et al. has attempted to address the human-agent negotiation problem by creating agents of their own. The NegoChat agent [100] and its successor, NegoChat-A [101], have attempted to address some of these concerns in a different domain, allowing purely free natural-language interaction with a chat agent as part of a negotiation. NegoChat implements several features that are present in advanced negotiating agents: namely, it allows for partial agreements and makes allowances for bounded rationality, while still attempting to model the urgency and importance of its opponent’s preferences. NegoChat is fully text-based, however, and does not provide additional channels for expressing emotions, nor does it attempt to reason about multiple issues at a time. Further, NegoChat involves a time-discount factor on utilities (although the authors note such a factor is not required). As such, NegoChat simplifies some of the most human-like features of negotiation. It does not allow for emotions to be exchanged, and its interactions are one-shot, leaving no opportunity for relationships to develop between agents and humans (thereby foiling attempts to accrue value over time).
Other types of studies have focused on complex tasks such as the prisoner’s dilemma, impunity
game, and a modified version of the multi-issue bargaining task [30][31][33]. These studies are designed
primarily to examine human behavior in negotiation with agents, often showing similar (yet slightly
distinct) results to earlier human-human negotiation studies [112]. Often, these studies involve responding
to a fully-deterministic agent (which always makes the same series of offers) in a turn-based manner. In
other cases, the behavior of the user is “programmed” by fully specifying the responses to all hypothetical
offers by the agent, then watching the negotiation play out. While these types of studies have offered insight
into human behavior, they lack the ecological validity of human negotiation since they follow rigid turn-
based and deterministic formats.
In IAGO, we create a platform on which future negotiating agents (such as the one implemented by NegoChat) can be built, for use in negotiation competitions as well as independent research efforts. IAGO supports a wide variety of features that have been shown to be critical to realistic human-agent communication. These include partial offers, preference elicitation statements, a selected set of natural language argumentation phrases, and an expressive, embodied virtual human agent. The interface also features a full conversational history, the ability to support non-linear utility structures, and a customizable graphic interface. IAGO exposes a simple API that allows negotiation scenarios and agents to be customized with a single template Java class.
Footnote 14: The current set of phrases is based on a curated list of dialogue acts pertaining to norms, fairness, and other important communications, adapted from similar work [57][86]. The phrases are entirely customizable to meet the research need.
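As a rough illustration of this design, an agent in such an API can reduce to a single class that overrides a handful of event callbacks. The sketch below is hypothetical: the class and method names are invented here for exposition and are not IAGO’s actual identifiers.

// Hypothetical sketch of an event-driven agent template (names invented; consult the
// IAGO documentation for the real API).
public abstract class EventDrivenAgentSketch {
    // Called when the human proposes or counters an allocation of items.
    public abstract void onOffer(int[][] proposedAllocation);

    // Called when the human sends a chat message (e.g., a preference statement).
    public abstract void onMessage(String text);

    // Called when the human presses one of the emotion buttons.
    public abstract void onEmotion(String expression);

    // The agent's periodic opportunity to act: send offers, messages, or expressions.
    public abstract void takeTurn(long millisElapsed);
}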
Chapter 3. The IAGO Negotiation Platform
3.1 From the Ground Up: Building Human-Like Negotiating Agents
While the model in Figure 2 emphasizes the importance of the states of the parties (with each
party holding bicameral concepts of their own state and their opponent’s), it does not indicate a functional
method for designing agents that utilize this information in a coherent strategy. We therefore propose the
layer framework shown in Figure 3 to fulfill this role and map out the maze of agent strategies.
Specifically, the layer framework lays out three classes of agents, each of which builds on the last to create
a more nuanced and effective human-aware negotiating agent.
1. The first layer of agents follows “best practices” in negotiation, taking tried-and-true strategies that have been described in the human negotiation literature. These include strategic information exchange, the use of anger as a manipulative tactic, and hard anchoring of offers. These agents make simplifying assumptions about the time horizon of negotiations (believing them to be “one-shot”) and engage only in rudimentary opponent modeling (e.g., reacting only to explicit statements of preferences).
2. The second layer, “repeated-interaction agents”, involves the temporal dimension. These
are the first agents that do not necessarily use all the tools they have developed so far, if
those tools are deemed likely to substantially damage the user’s long-term impression of
the agent. These agents try to maximize their outcome over time, coming out on top in
repeated negotiation. In short, they add stateful memory.
3. The third and final layer brings individual personalization to agents. These agents will be able to adapt their user models to the behavior of individuals, and account for differences in personality (and might attempt to discern lies). While the long-term agents from the preceding layer may be able to reason about things like the cost of betrayal, individually-optimized agents will know that the cost is amplified against certain types of tough opponents (e.g., those high in the “Machiavellian” personality trait) [81]. Designing these agents may involve “strategy selection” among agents from the inner two layers.
Figure 3: Layer Framework for Human-Aware Negotiating Agent Strategy
This framework has guided the development of the various features that IAGO provides to agent
and experimental developers and researchers. So far, various studies have been conducted that create agents
that vary extensively within the first layer. Recent submissions to this year’s ANAC have also begun
utilizing elements of the second layer (some agents have begun to exchange favors with human users). The
ongoing work proposed herein will continue to refine second-layer agents and extend these agents to the
third layer, demonstrating advanced human-like strategies such as favor exchange that are contingent on
individual behavior of one’s partner.
Figure 4: IAGO Interface
The implementation and importance of these features are discussed in the remainder of Chapter 3, which gives a detailed description of IAGO’s ability to support these layers. Furthermore, the features needed to encourage the development of more agents within the third layer are discussed.
3.2 Design Principles
The IAGO negotiation platform is designed to serve as a research tool for studying human-agent negotiation, with several features. First, it is designed to implement the more complex multi-issue bargaining tasks that are now the cutting edge in agent-agent negotiation research. Second, it is designed to be lightweight and easy to distribute to agent designers and users alike. Finally, it is designed to implement a number of human-like negotiation features: these include fully asynchronous interaction, the presence of emotional channels, agent embodiment, explicit preference exchange, various types of lying (including BATNA lies; see Chapter 4.2.1), and the ability to threaten to depart the negotiation. Many previous automated negotiation platforms are fine-tuned to attempt to achieve Pareto optimality (or peak efficiency) between two perfectly rational agents. These platforms therefore do not lend themselves readily to teaching negotiation to human students, nor do they adequately represent a research space for understanding negotiation in the wild.
As has been illustrated in Chapter 2.1, human-human negotiation has been studied extensively and
found to contain myriad signals beyond simple offer exchanges, from emotional display, to favor-and-
ledger-related patter, to complex natural language dialogue [93]. Within the agent community, simulating
these behaviors is a first step toward developing realistic agents, as well as teaching valuable negotiation
skills.
IAGO is a platform upon which human-aware negotiating agents can be adequately developed. Previous platform development efforts have spurred interest and novel research by allowing agent-agent negotiation competitions to be conducted (predominantly at AAMAS and IJCAI). However, recent interest in human-agent negotiation requires a new platform that contains the features necessary for the new domain. Such a platform must also meet the practical research goals of being both readily deployable and easy to extend to the particular needs of a study or simulation. IAGO addresses these points. Specifically, IAGO was designed with the following principles in mind:
1) Must support current web-standards and require little to no installation of complex
support software on a user’s machine.
2) Must deploy a well-defined API that allows both agent designers and negotiation
game designers to easily create and specify behaviors for the purposes of competition/research.
3) Must support currently unexamined aspects of human-human negotiation in a
human-agent context. Specifically, this must include partial offers, visual representation of
emotional signals, and relative preference elicitation/revelation.
More recently, this mandate has been expanded to dovetail further with the model of agent design proposed in Figure 3. IAGO also:
4) Must support temporally-intelligent agents through stateful knowledge of past
negotiations/relationships.
IAGO provides a number of features that make these goals technically feasible. It is web-based and designed to be easily deployable. It uses current standards in web-based design, exposes a well-documented API, and provides logging features. It is primarily event-based in design, which allows for intuitive programming of agents that react to common negotiation proceedings. More technical information can be found in Appendix C. The following sections in this chapter describe the core cognitive functions that are facilitated by IAGO, and how they fit into the model of human-aware agents discussed in Figure 3. IAGO has already led to the development of dozens of agents; some of these agents and their core cognitive characteristics are summarized in Table 2. These characteristics are discussed in the sections that make up the remainder of this chapter.
Table 2: Current IAGO Agents, with Broad Characteristics
Chapter 4. Models of Human Behavior and their Implementations
Research has long existed on modeling human behavior in social situations (including negotiation). These models of various components can be thought of as sub-problems in negotiation, ranging from offer strategies, to communication interchanges, to opponent modeling, to relationship understanding. IAGO as a platform was designed to allow each of these problems to be studied, and advances in the models of these human behaviors to be easily realized. This chapter describes some of the common models relevant to negotiation, the research contributions to those models, and their implementation strategies within the IAGO research framework.
4.1 Offer Strategies
4.1.1 Competitive vs. Consensus-Building Agents (Concession Curves)
One of the key features in initially creating a negotiating agent is creating a viable model of offers
and implementing an internal protocol for the exchange of those offers. The agents must be able to decide
which offers are beneficial to themselves by reasoning against their own internal state. Furthermore, they
must be able to craft offers which are both self-beneficial as well as likely to be accepted by their opponents
(given what the agent reasons about the opponent’s internal state). While this is therefore a broad topic that
encompasses several areas of the diagram shown in Figure 2, it predominately deals with the decision-
making section and opponent preference modeling. Given the rich literature on offer-exchange, Pareto
optimality, and negotiation in general, agents that (only) implement basic offer exchange are normally
classified as Layer 1 agents (Figure 3).
IAGO agents that have been developed can be broadly classified into two primary behaviors: competitive versus consensus-building. These agents follow either a negatively-sloped concession curve or a positively-sloped “assignment” curve, respectively. Competitive agents attempt to drive the negotiation, sending offers to the human participant whenever possible, and starting with vastly unfair offers in which the majority of items are assigned to the agent. Slowly, as humans reject these offers and otherwise compete against the agent, the agent relaxes its aspirations and sends progressively more favorable offers. Examples of agents that utilize this strategy are RedQueen and Cena.
Cooperative agents are, expectedly, different from their competitive counterparts. Firstly, cooperative agents largely do not take initiative, instead responding to user-driven questions and offers with communications and counter-offers of their own. Indeed, the first agent developed for IAGO, Pinocchio, never sends offers of its own at all, only accepting or counter-offering when the user takes action (for this reason, it makes an excellent baseline agent in the ANAC competitions). These cooperative agents maintain a model of their opponent’s preferences, and attempt to divvy up items in a fair manner. Pinocchio, for example, looks at the items that have not yet been assigned to either side and attempts to give its opponent their highest-rated item while taking its own highest-rated item for itself. Updates to Pinocchio’s user model (such as statements of user preferences) will cause it to prioritize different items. Of course, both cooperative and competitive agents are capable of utilizing well-known negotiating techniques such as anchoring, in which a strong initial offer is made to the user first (see Chapter 4.1.2). Pinocchio does not do this, but other cooperative agents do, starting with a strong offer and then settling into a consensus-building pattern after an initial refusal.
IAGO enables the design of both consensus-building and competitive agents by exposing a number of mechanisms for evaluating user preferences and offers. IAGO allows all incoming and outgoing offer events to be evaluated for their point totals (including both linear and non-linear point assignments), and exposes helper functions for enumerating the legal preference orderings based on information garnered from the user.
Footnote 16: For example, in a simple, linear three-issue negotiation, there are 6 possible permutations of user preferences: A > B > C, A > C > B, B > A > C, B > C > A, C > A > B, C > B > A. If the user expresses “I like A best”, then IAGO can reduce this set to only the first two options. Whether the agent finds the user statement credible is up to the designer.
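A minimal sketch of the pruning described in this footnote follows; the helper names are invented for illustration and are not IAGO’s actual library functions.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Illustrative sketch: enumerate preference orderings consistent with user statements.
public class PreferenceFilter {
    // All permutations of the issues, e.g., [A, B, C] yields 6 orderings.
    public static List<List<String>> allOrderings(List<String> issues) {
        List<List<String>> result = new ArrayList<>();
        permute(new ArrayList<>(issues), 0, result);
        return result;
    }

    private static void permute(List<String> items, int k, List<List<String>> out) {
        if (k == items.size()) {
            out.add(new ArrayList<>(items));
            return;
        }
        for (int i = k; i < items.size(); i++) {
            Collections.swap(items, k, i);
            permute(items, k + 1, out);
            Collections.swap(items, k, i);
        }
    }

    // Keep only orderings whose top-ranked issue matches a statement like "I like A best".
    public static List<List<String>> filterBest(List<List<String>> orderings, String best) {
        List<List<String>> kept = new ArrayList<>();
        for (List<String> ordering : orderings)
            if (ordering.get(0).equals(best)) kept.add(ordering);
        return kept;
    }
}

Applied to the footnote’s example, filterBest(allOrderings(List.of("A", "B", "C")), "A") returns exactly the two orderings that place A first.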
4.1.2 Anchoring and Framing
Among the processes important to agent offer strategies are those that incorporate framing or
anchoring—two commonly used human negotiation techniques. Framing is a well-studied phenomenon
that can manifest in different ways; commonly offers are framed as a gain (“You would save $40”) versus
a loss (“If you don’t take this, you’ll lose $40!”) (see the seminal work: [110]). Frames can also be used to
alter the perception of the opponent (e.g., framing as a human vs. a computer, a male vs. a female, or any
in-group vs. an out-group) or of the task (e.g., opportunity for negotiation vs. opportunity for conversation)
[102][28]. Modeling these processes within the context of human-agent negotiation is important, since
subtle framing differences can lead to differences in opponent strategy.
Another well-known technique to influence offer outcomes (particularly early in a negotiation) is the use of anchoring. With anchoring, an egregiously unfair offer is proposed first. While the offer itself is likely to be rejected, it has been shown that the offer will psychologically “anchor” people, causing their target acceptance to shift toward the offer [91]. Within agent contexts, anchoring has been shown to be a viable negotiating technique, and IAGO has been used to implement agents that can take advantage of this fact experimentally [99].
4.2 Opponent Modeling
While having an internally consistent model of its own preferences is necessary, negotiation-aware
agents can benefit from an accurate model of their opponents’ preferences as well. Much research in agent-
agent negotiation has focused on gleaning opponent models [5] from their offer behavior. With IAGO’s
richer channel for communication however, agents are able to reason about opponent preferences from
explicit preference statements as well. Furthermore, there are other aspects of opponent modeling that are
important: several aspects of human personality have been shown to affect how humans negotiate [81]. Humans may also hold different norms that are relevant to the negotiation. Agents that are capable of understanding their opponent well are classifiable as Layer 3 agents, although Layer 1 agents do engage in rudimentary modeling. Several agents in the ANAC competitions have included “reliability metrics” to try to model how certain they are about their opponent’s preferences; these include Elphaba and Murphy.
IAGO allows for opponent modeling by exposing all relevant aspects of user behavior to the agent
to allow it to reason about opponents according to its own internal models. While IAGO of course exposes
all offers and messages (explicitly about preferences or not) that the user makes, it also provides other
information that may be relevant. IAGO timestamps all of its events, so agents may choose to take into account the amount of delay in a user’s response when attempting to ascertain the veracity of a statement. Furthermore, IAGO also supports the importing of self-report data if desired: if the user previously recorded a response to a personality survey, for example, that information can be made available to the agent. Finally, IAGO also includes a number of API functions that allow agent designers to quickly determine whether the user has made logically inconsistent statements in the past (e.g., “I like A better than B” and “I like B best”).
4.2.1 Preference Modeling and BATNA
Modeling of an opponent’s preferences is often one of the first sub-goals of a negotiating agent.
Many agents may enter the negotiation with a pre-existing model or bias. For example, agents may assume
what most humans do—that the negotiation is distributive or “fixed-pie” [27]. They may assume that their
opponent has the same preferences over the issues that they do. Alternatively, agents may assume that the
situation is integrative. Depending on how the opponent preference model is updated over the course of
the negotiation, this initial bias may have an impact on how much integrative value is generated, and how
Pareto optimal the offers made are.
Negotiations typically involve some give and take between parties, but ultimately a point is reached where parties must decide to “take it or leave it.” In the real world, parties often have the option to walk away from a negotiation without a deal. For example, when negotiating the price of a new car, a smart negotiator already knows what price they could obtain at another car dealer. This allows them to abandon negotiations if they fail to improve upon this alternative. The concept of BATNA (Best Alternative To a Negotiated Agreement) is used to capture this notion. Many negotiation games provide each player a BATNA to represent the value they receive from walking away. In general, each player’s BATNA may be unknown to the other party and not necessarily equal (if one player has a larger BATNA, this gives them more power in the negotiation).
The concept of BATNA helps link negotiation research with the rich literature on ultimatum games. Ultimatum games involve two parties, a proposer and a responder, who decide on a split of some pie of resources (usually a sum of money). The proposer offers a split, which the responder may then choose to accept or reject. Acceptance results in the resources being split along the proposed lines, while a rejection results in both parties receiving a BATNA (which is zero in the classical formalization, but can be any percentage of the total pie). Thus, the multi-issue bargaining task can be viewed as a straightforward extension of the ultimatum game: allowing multiple pies (issues) and allowing responders to counter-propose back and forth some number of times. Alternatively, an ultimatum game is a special case of multi-issue bargaining with a single issue and only one round of propose-respond. Game theorists sometimes go so far as to argue that the addition of multiple rounds adds no generality, as “talk is cheap” and, ultimately, all negotiations collapse to an ultimatum. This argument is somewhat belied by the literature on the importance of trust and rapport-building.
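To make the link concrete, the following sketch models a single ultimatum round under the (strong) assumption of a purely self-interested responder with a known BATNA; human responders, of course, routinely reject such “rational” lowball offers.

// Illustrative ultimatum game with BATNAs (assumes a purely rational responder).
public class UltimatumSketch {
    // The responder accepts any share that beats walking away.
    public static boolean responderAccepts(double offeredShare, double responderBatna) {
        return offeredShare >= responderBatna;
    }

    // A purely rational proposer offers just enough to be accepted, keeping the rest.
    public static double rationalProposal(double pie, double responderBatna) {
        return Math.min(pie, responderBatna); // plus a small epsilon in practice
    }

    public static void main(String[] args) {
        double pie = 10.0;
        double responderBatna = 2.0; // the responder walks away below 2 points
        double offer = rationalProposal(pie, responderBatna);
        System.out.println("Proposer offers " + offer + " and keeps " + (pie - offer));
        System.out.println("Accepted: " + responderAccepts(offer, responderBatna));
    }
}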
We build on these concepts in the initial experiments in Chapter 5.2.1. We formalize negotiations as multi-issue bargaining, allowing multiple issues but, for simplicity and to reduce the cognitive burden on participants, considering only a single round of propose-respond. Thus, each negotiation can be viewed as a multi-issue ultimatum game. In that experimental domain, we utilize repeated ultimatum games as the units of negotiation, as this decision allows us to directly measure the amount of joint value discovered by the proposer. This history of ultimatum games can be used to establish concepts like trust and favors, as in games with multiple rounds of propose-respond. But maintaining an accurate ledger is more critical, as each negotiation is self-contained and all proposals are final. By measuring the amount of joint value generated both over time and within a single negotiation, it is possible to analyze both Pareto efficiency and Pareto efficiency over time in the same domain.
4.2.2 Personality
Previous work has examined two measures of individual differences that are specifically relevant to the negotiation domain, and to creating Layer 3 agents. These two identified personality variables are Social Value Orientation (SVO) and Machiavellianism; each can influence public goods decisions [118].
These measures have been shown to affect outcomes in human negotiation, but there is little agreement on
how they affect behavior over time in negotiation, and even less on their impact for mixed human-agent
systems [105][113]. As such, examining their effects in repeated-form human-agent negotiation is largely
unexplored.
SVO [114] measures general “pro-self” vs “pro-social” tendencies. Earlier studies of people’s
social value orientations [70] have demonstrated that, when faced with social dilemmas in which actions
that are most personally beneficial conflict with actions that are most beneficial to a larger group or
community, many people will decide to pursue communal interests at the expense of their own. Thus,
concerns with their own individual outcomes do not appear to uniformly eclipse people’s considerations of
their broader connections to and responsibilities toward others [17].
Studies [82][114] have consistently confirmed that, compared to those who identify themselves as
individualistic, pro-social individuals are more likely to: (a) follow norms of social responsibility that
dictate cooperatively sacrificing their own potential gains to improve communal interests in both simulated
[8] and real-world [117] social dilemmas [1], and (b) follow norms of equality or fairness that dictate
actively seeking to equalize their own and others’ outcomes, even when they could easily keep more
benefits for themselves [111][113][116].
The second measure, Machiavellianism, quantifies strategic, manipulative, and goal-seeking
behavior. Like SVO, Machiavellianism is particularly relevant to the negotiation domain as it has also been
found to relate to strategies and behaviors employed in negotiation [119]. Specifically, Machiavellians tend
to have similarities with high-narcissism and high-psychopathy subjects [94]. A quintessential
Machiavellian might be considered overly rational, detached, or cold, and tend not to be influenced by
40
emotional arguments as easily as those scoring lower on the measure [38]. Further, Machiavellians often
hedge their answers when asked direct questions and prefer to obfuscate their desires and goals [119].
Because of these traits, Machiavellians can often gain advantage in certain negotiation and game
theoretic tasks, as they tend to frustrate opponents’ efforts to recognize beneficial offers. These strategies
have been studied primarily in short-term interactions, however, and may eventually lose their usefulness
in repeated negotiation. Therefore, Machiavellianism is a particularly important measure to examine while
studying repeated interactions [119]. Negotiators high in Machiavellianism have also been found to use
different language than those lower in Machiavellianism. This language is important when favors are
unenforceable, and must be relayed purely through discussion [25].
In summary, SVO is a helpful measure for discriminating pro-social from pro-self tendencies—
whether a person thinks of others or only of him or herself. Machiavellianism, by contrast, is a way of
distinguishing how people think of others—in a manipulative and exploitative way, or in a more altruistic
one. While IAGO does not directly attempt to measure SVO or Machiavellianism automatically from user
behavior, there are a number of attempts that have been somewhat successful in doing so [86]. Furthermore,
IAGO does support the integration of data from self-report surveys. Should an experiment be designed
such that SVO or Machiavellianism measures are asked of the user as self-report, that information could be
used by agents in IAGO to respond better to individual users.
4.2.3 Norms and Biases
Although personality is important in influencing negotiation outcomes by itself, personality is also
relevant through its ability to inform or activate certain societal norms. While traditional game theory would
indicate that negotiators are purely self-interested, the research on SVO indicates that, at least among some
individuals, concerns for outcome equality are also relevant. Previous work identifies three broad
categories of norms relevant to negotiation: equality, equity, and need [71]. While individuals may enter a
negotiation with certain norms, these norms are also subject to discussion and manipulation. For example, a negotiator who is concerned with equality (“We should split everything equally”) could have need-based norms activated (“These items are worth very little, so I need more of them to equal your outcome”). Both of these norms are ostensibly “fair”, but arrive at that conclusion through very different mechanisms.
Furthermore, equity as a norm is very important for the exchange of favors, since it often allows
negotiators to argue for an opposite outcome than what would be dictated by a need-based norm. For
example, an equity-based argument based on Table 1 might have side A claim “I should get all the apples
since they’re worth so much to me”. In the case of repeated negotiations, this equity claim can be “paid
back” in the future, allowing both sides to reach a form of fairness (and Pareto Optimality over time, in
certain situations). This topic is discussed in detail in Chapter 5.2.1.
While the literature discusses fairness at length, the inconsistent casual use of the word has led to
some confusion. Furthermore, as demonstrated by the favor exchange example, any adherence to norms of
fairness might be done purely out of self-interest [92]. Nevertheless, effects of varied norm activation have
been shown, especially in cross-cultural contexts [76][119]. Beyond the norms discussed above, a great many other norms are explored in the Subjective Value Inventory (SVI), as described in [119]. These include important concepts such as choosing certain actions in order to “save face” in front of an opponent. In addition to norms, individuals also often hold biases, such as the fixed-pie bias described in Chapter 4.2.1 or any of dozens of others found in the literature [119].
4.2.4 Models for Teaching and Training
Teaching “soft” skills like negotiation has been of interest for some time, with various agent and
non-agent (human) approaches all attempted in an effort to improve student learning. Approaches range
from imparting textbook techniques [40] to fully integrated learning systems that incorporate advanced
knowledge of student learning behaviors [60][61][68]. In the case of the latter, having strong models of opponent and human behavior is a necessary antecedent: agents must be able to reason about human
behavior in some fashion before giving comment upon it in a teaching and training context. As such, the
IAGO system is valuable as an extensible framework: data that is used in the creation of IAGO agents as
computational opponents is also useful in the construction of computational tutors. Furthermore,
individualized feedback is available to participants due to cleanly annotated and extensive data. Numerous
publications indicate the need for such data to train and teach, some of which have either inspired or already
taken advantage of the human models advanced by my work [20][46][59]. In particular, Johnson et al. have shown how to use the opponent models that IAGO provides to diagnose errors in how people form mental models of their opponent’s interests. Building on the ideas of cognitive tutoring (where a system models “good” behavior and detects departures from it), these efforts aim to capture what a reasonable person should have been able to infer from within a negotiation, and identify departures from this as opportunities for teaching [61][62].
4.3 Communication and Emotions
While agent-agent negotiation has long focused on the opponent modeling and offer construction aspects of negotiation, human-human negotiation has emphasized communication and emotion use as well.
When open dialogue is allowed in a negotiation protocol, parties are free to discuss whatever they wish,
from directly enumerating their preferences to talking about items that may be seemingly unrelated to the
negotiation (this type of patter has been shown to increase rapport) [84]. Negotiating agents that use basic
communication to express preferences may be classified as Layer 1 (best practices agents), but more
advanced versions that use communication and emotional manipulation to establish stateful or individually-
targeted representations of their opponents may be classified as Layer 2 or Layer 3 agents, respectively.
These agents may communicate a number of items that are shown as part of the opponent
modeling/influence sections of Figure 2, and their goals are often to affect the joint outcomes of trust and
rapport. Of course, agents are also able to mislead their opponents through lying, withholding information,
or aggressive behavior—in these cases the agents are willing to sacrifice their good reputations in exchange
for disrupting their opponents’ models of their own goals.
IAGO facilitates communication across these richer channels first and foremost by allowing them within its protocols (many previous agent-based platforms do not include these channels at all). IAGO exposes two events that are used to send text-based messages and emotions. IAGO’s base source also
includes four different agent avatars (with two genders), each of which is capable of expressing up to eight
different expressions (these include classic expressions such as surprise and anger, as well as less common
expressions such as non-Duchenne smiles [36]). Human users can express their emotions using a set of
emotion buttons (e.g. a “smiley” face), and agents can respond to these expressions as appropriate—for
example by intentional “mirroring” [47].
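As a small illustration, an intentional mirroring policy can be written as a simple mapping from the user’s expression to the agent’s response; the expression names here are invented for the example.

// Illustrative emotion "mirroring" policy (expression names are hypothetical).
public class MirroringPolicy {
    public String respondTo(String userExpression) {
        switch (userExpression) {
            case "happy":     return "happy";     // mirror positive affect to build rapport
            case "sad":       return "sad";       // show sympathy
            case "surprised": return "surprised"; // acknowledge the reaction
            case "angry":     return "neutral";   // de-escalate rather than mirror anger
            default:          return "neutral";
        }
    }
}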
4.3.1 Individual Emotions
Though there is considerable debate regarding the number and classification of emotions, emotions are accepted as having a profound influence upon human behavior. In addition to allowing predictions to be
made concerning the intentions of intelligent actors, understanding and appropriately modeling emotions
allows more realistic computational agents to be built that more closely mimic actual human behavior.
Furthermore, appraisal theory may allow inferences to be made regarding general mood, as well as specific
emotional state.
Many attempts have been made to adequately capture the predictive power of emotions using
computational models. Appraisal theories are predominately used in computational systems due to their
structure, and their proliferation and importance has been well documented [73]. These models use
intrapersonal appraisal variables such as desirability or likelihood of an event to an agent to determine
prototypical emotions such as surprise or sadness. EMA, a process model of appraisal dynamics, does just
that, analyzing situations in real-time and outputting several features of interest, including discrete emotion
as well as dimensional mood [72]. Other models may be driven by other concerns, such as the artistic focus
of the Em architecture [98].
But in negotiation, much research has focused on the usage of emotions to manipulate human
behavior, and to compel opponents to concede value. Specifically, past work has focused on the use of
anger in negotiation. Anger has been shown to allow its employers to claim value from their opponents in
both human and human-agent contexts [74][112]. One area in which IAGO presents a new avenue for
research is exploring how well these strategies hold up under repeated interactions. The focus of ANAC 2018 was on showing the limitations of overly aggressive strategies when humans and agents were forced to negotiate more than once. Several IAGO agents use positive emotion (Pinocchio and Merlin, e.g.), while others use negative emotion (Cena and RedQueen, e.g.).
4.3.2 Withholding Information and Lying
Since negotiation is not a fully-observable game, it provides ample opportunity to strategically
handle privileged information. In the multi-issue bargaining task, each side in the negotiation is unaware
of a number of aspects of their opponents’ preferences—the relative ordering of preferences is unknown,
as well as their magnitudes (e.g., is one item worth vastly more than the others?). Negotiators also cannot know their opponents’ BATNAs; without this knowledge, both sides may need to treat threats to walk away from the negotiation entirely as credible. Previous work examines withholding information and lying
separately—while they are related strategies, there is evidence [80] that many humans find the former less
ethically objectionable than the latter. However, both can be exceedingly useful strategies.
Several of the agents developed for IAGO strategically withhold information in an attempt to
balance the amount of information revealed by both sides—Merlin and Rumple are examples. Knowing
which items an opponent values may allow agents to discover integrative potential and thus grow the pie.
However, this information can also be used to perpetrate various techniques that claim an unfair share of
that value, as demonstrated by the LyingAgent [49]. The LyingAgent specifically attempts a Fixed-Pie Lie, in which it claims its own preferences are identical to its opponent’s. This then allows it to “concede” items that the opponent erroneously believes are valuable to the agent, and thus claim a large share of the seemingly useless items in recompense.
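The mechanics of this lie are simple enough to sketch; the code below is an illustration of the tactic as described above, not the LyingAgent’s actual implementation.

import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the Fixed-Pie Lie (not the LyingAgent's actual code).
public class FixedPieLieSketch {
    // Step 1: claim preferences identical to the opponent's.
    public static String claimSharedPreferences(String opponentFavorite) {
        return "What a coincidence: " + opponentFavorite + " is my favorite issue too!";
    }

    // Step 2: "concede" the opponent's favorite item, which the liar does not actually
    // value, and keep everything else as seemingly fair recompense.
    public static Map<String, Integer> opponentShare(Map<String, Integer> quantities,
                                                     String opponentFavorite) {
        Map<String, Integer> share = new HashMap<>();
        for (Map.Entry<String, Integer> e : quantities.entrySet())
            share.put(e.getKey(), e.getKey().equals(opponentFavorite) ? e.getValue() : 0);
        return share; // the liar keeps every item not granted here
    }
}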
There are other lies that are strategically useful in negotiation. Firstly, a negotiator can lie about its BATNA, implying that the outcome of the negotiation is relatively unimportant to it, and thus signaling that it would see no value in conceding much. Several agents that attempt this lie are in development, and users of IAGO can communicate information about their own BATNA (and thus can choose to lie about it as well). The figure showing the IAGO interface displays this feature in the bottom-right corner (Figure 4). Alternatively, users can lie about the magnitude of the value of the items in the negotiation (claiming each one is worth a large or small number of points in total). This type of lie is particularly important in cases where the norms of the opponent are known (see Chapter 4.2.3), and is a feature currently under development for IAGO.
4.4 Reputation & Individual Relationships
Reputation is a critical component of negotiation. Information about reputation not only suggests how an opponent may conduct themselves before negotiations begin; savvy negotiators can also manipulate their own reputation to achieve an advantage. If a negotiator is seen as trustworthy before a negotiation begins, then the opponent is less likely to mistrust potentially valuable information provided about preferences, for example. A negotiator that is preparing to face off against an opponent known to be very tough may be more guarded, but may also come in ready to concede in order to preempt a long and vicious fight.
But reputation is more than just an a priori baseline on which to base initial strategies in the absence
of real information. Since reputation is assumed to be dynamic (what you do will change your reputation)
and semi-public (people will know your reputation), negotiators must select strategies that will not only
lead them to success in a single negotiation, but will also lead to the desired reputation.
Agents that can maintain an internal state of their own reputation while taking into account
opponents’ reputations will allow Layer 2 agents to be developed that can robustly deal with temporal
considerations. IAGO facilitates the development of these kinds of agents by providing session-long user
states that allow agents to be designed that can recall information from previous interactions. Future
improvements will also incorporate personalized agent databases to be stored for more longitudinal
examinations of reputation behavior.
As a case study in this phenomenon, we examine work by de Melo et al. [31]. In this study, human participants were instructed to provide information about how they would negotiate in the future, by providing instructions to an agent that served as their representative. Participants who acted through a representative construed the problem on a higher (relationship and norms) level, and selected fairer behavior for their agent. This indicates an increased concern with concepts of reputation.
Still, the dynamics of this are not fully understood. In my previous work [79], I showed that while
this initial consideration for fairness may exist, experience leads to increasingly manipulative behavior, as
participants begin endorsing techniques like lying and negative emotion use. However, since this study did
not make explicit the expectation of repeated future interactions with the same partner, it is unclear how
relationships between the human and various agents would develop over time.
Reputation is of course a fuzzy concept—humans do not have a concrete formalism (or at least, not
a simple one) that quantifies the positivity and strength of their relationships with others. There are some
ways to make model-based estimations of relationships, however. Social distance provides one way of
considering this, as people who are further away psychologically from the current problem may be treated
less fairly [107]. People may also treat agents and non-agents differently, due to out-group effects [42],
and may treat friends differently than strangers [103]. Rapport is also a critical component of relationships
[51].
4.4.1 Temporally-Aware (Favor) Agents
Still, humans manage to maintain a concept of relationships, especially with regards to tasks with
clear outcomes, such as negotiation. For example, in previous work [78], we identified a particular strategy
that is effective in human negotiation called “favors and ledgers” (which we discuss at length in Chapter
5.2.1). This strategy allows negotiators to accept unfair outcomes in the short term with the expectation that
these favors will be repaid over time, thus unlocking greater shared value (“growing the pie”). Similar work
has been conducted, focusing on trust [43] and social dependencies [53]. It was this initial concept of favors
in relationships that led to an examination of a simple but very illuminating relationship over the course of
repeated Ultimatum Games, and the eventual development of IAGO.
Within IAGO, favor exchange is explicitly supported through a subtype of message events. Both human players and agents are capable of expressing the key events required to update and maintain a ledger of favors. Specifically, both sides can request favors, accept or reject these requests, and explicitly claim to return favors. All of these are non-binding convenience communications; humans and agents both maintain their actual ledgers internally. In particular, this means that claiming to return a favor is untied to the truth of the actual return (something often better judged through the “deeds” of the actual offers received).
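A minimal internal ledger of this kind might look like the following sketch (illustrative only; IAGO agents are free to implement their own bookkeeping).

// Illustrative internal favor ledger; claims of repayment are cheap talk, so the
// ledger should be updated from deeds (actual offer quality), not words.
public class FavorLedger {
    private int balance = 0; // positive: the opponent owes us; negative: we owe them

    public void weGrantedFavor()  { balance++; } // we accepted a worse deal for them
    public void weReceivedFavor() { balance--; } // they accepted a worse deal for us

    public boolean opponentOwesUs() { return balance > 0; }
    public int balance()            { return balance; }
}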
Chapter 5. Basic Negotiating Agents and their Problems
Why create socially-aware agents? How does studying advanced socially-aware agents in an interactive way push forward research? Largely, there are two research camps that benefit from this kind of work. The first includes behavioral scholars (e.g., Van Kleef, De Dreu, or Carnevale, to name a few) who seek to answer questions about how humans act (often around agents). But their work is sometimes limited by the simplicity of their agents (if agents are present at all) and thus lacks some ecological validity. It also stops short of answering interesting questions about how people solve the multi-armed bandit problem that is repeated negotiation. These are non-trivial and non-obvious problems, a fact that bears repeating.
The second research camp is more interested in the design of effective agents that serve some purpose beyond acting as research aids (this could include, e.g., the work of Milind Tambe or Peter Stone). Much of this research can answer these questions as well: studies such as those conducted in this chapter and Chapter 6 show both how people act and how to design agents that could exploit this fact. However, designing agents that act like humans is not always good; agents that use favors are likely effective, but agents that keep human-like fixed-pie biases are not. My work provides clear guidance on these issues (or at least a framework for obtaining clear guidance). Some of these behaviors may be effective but potentially unethical, which is an interesting side question related to some current work, but not directly to this thesis.
My work satisfies both goals by providing novel, useful agents that answer the question “are these sorts of agents effective in ways that current agents are not?” If that question appears trite, a narrower question that this work also answers is “how might an agent act optimally in repeated negotiations with humans?”
To this end, the studies below (which focus on favors and ledgers behavior) are meant to show that
answering the repeated negotiation problem is “different” than answering the single negotiation problem.
While many manipulations (e.g., emotional displays) could be used to prove this point, examining favors
and ledgers provides better insight into the mechanisms behind why people might act the way they do. This
exploration of human negotiating techniques in agent-contexts therefore extends and complements work
done earlier in my Ph.D., regarding the effectiveness of techniques such as favors and ledgers in simplified
negotiating tasks such as the repeated Ultimatum game, but now in a broader set of games.
Creating basic socially-aware agents requires a platform like IAGO. This section explores my
contributions to human-agent negotiation, from the design of effective agents that negotiate with humans,
to novel contributions that explain the underpinnings of human behavior in negotiation and negotiation-like
situations. The first section of this chapter explores the novel agents that have been created so far as part of my research. The remaining section starts with a review of some of the basic agents created in a limited domain, and shows how this work can be a stepping stone for fully-realized Layer 3 agents within IAGO. We expose the limitations of the existing research frameworks, while still presenting novel results in human behavior (particularly in regard to growing value over time and the individual idiosyncrasies of humans). The remainder of the chapter focuses on how IAGO has furthered exploration in these and other research areas.
5.1 First-Layer Agents: Grumpy & Pinocchio
The first agents that were developed were designed to implement the core tenets of Layer 1 agents.
Specifically, these agents implemented three Policies that allowed them to react and respond to users along
three channels of communication—Emotions, Messages, and Offers. Whenever the human negotiator used
IAGO to emote, send messages, or create allocations of goods, the agents would respond accordingly.
These nascent agents maintained an internal state of their own preferences over the issues. They
also engaged in rudimentary opponent modeling, determining their understanding of their opponent’s preferences. These preferences were initialized in an optimistic fashion: agents with no prior
knowledge assumed the task was integrative, and assumed therefore that their human opponent’s
preferences were always opposed to their own. This proved to be a key insight in growing value for these
early agents, since most humans did not make the same assumption. The agent was therefore able to guide
its opponent to mutually beneficial solutions when the reality of the negotiation fit its initial assumptions.
Even with this initial set of assumptions, however, the opponent model could be updated. If the
user provided information about their own preferences, the agents would adapt their user model to this new
information. Furthermore, if the user provided conflicting information (such as logical impossibilities like
A > B, B > C, C > A), then the agents would prompt the user for more information. These agents did not,
however, attempt to model users from their behavior (beyond explicit statements), nor did they have any
ability to distinguish truth from lies on the human’s part.
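Detecting such logical impossibilities amounts to a cycle check over the stated pairwise preferences, as in the following illustrative sketch (not the agents’ actual code).

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative consistency check: statements like A > B, B > C, C > A form a cycle.
public class ConsistencyChecker {
    // edges.get(x) holds the issues that x was stated to be preferred over
    private final Map<String, Set<String>> edges = new HashMap<>();

    public void addStatement(String preferred, String over) {
        edges.computeIfAbsent(preferred, k -> new HashSet<>()).add(over);
    }

    // True if the stated preferences contain a cycle (no consistent ordering exists).
    public boolean isInconsistent() {
        Set<String> finished = new HashSet<>();
        for (String node : edges.keySet())
            if (!finished.contains(node) && hasCycle(node, new HashSet<>(), finished))
                return true;
        return false;
    }

    private boolean hasCycle(String node, Set<String> path, Set<String> finished) {
        if (path.contains(node)) return true;      // revisited a node on the current path
        if (finished.contains(node)) return false; // already fully explored
        path.add(node);
        for (String next : edges.getOrDefault(node, Set.of()))
            if (hasCycle(next, path, finished)) return true;
        path.remove(node);
        finished.add(node);
        return false;
    }
}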
The first agents varied according to several features, but the most salient was their use of emotion.
The agents, “Grumpy” and “Pinocchio”, always used the same offer strategies, but the former only used
angry expressions, while the latter used happy and sad expressions (depending on, for example, the quality
of an incoming offer). These two agents also had a number of natural language responses to various
scenarios (which were scripted), but differed radically in the tone and language they used. “Grumpy”,
expectedly, used gruff language: “This offer was even worse than the last one!”, while “Pinocchio” was far
more diplomatic: “I appreciate the effort, but I liked your last offer better!”.
All these early agents were highly fair, attempting only to split items one-to-one with their human opponents. While the agents did attempt to give humans their favorite items (based on the current opponent model), this strategy often resulted in the agents scoring very well regardless (they often beat their human opponents). None of this first generation of agents used competitive bargaining techniques like anchoring.
In general, Grumpy and Pinocchio proved effective as first agents, and indeed were the subject of IAGO’s debut paper [74]. However, these agents remain solidly within the first layer of proposed agents. They do not reason over time, nor do they have individual memories of their users. These and other limitations led to the extension of IAGO and a number of additional proposals, detailed in the rest of this chapter.
5.2 The Path to Socially-Aware Agents
To create agents that are more human-aware and useful than our first generation, we must first look back at some initial work. Our current models of human-agent interaction from IAGO are largely based
on one-shot, dyadic interaction. While single, one-off negotiation still presents many challenges, it also
makes many simplifications to truly human-like negotiation. Humans that develop relationships in
negotiation often do so over time, and choose partners based on past interactions. To this end, we propose
to develop agents that are specifically intended to maximize long-term value. Agents that build trust over
time may be expected to succeed in the long-term over more short-sighted and greedy strategies. Indeed,
overuse of tactics like misrepresentation or hardline offers may come at a cost to reputation.
This topic of Layer 2, temporally-aware agents had been previously explored in our earlier work—
however, the domain was limited, and these agents lacked the full use of channels that IAGO provides.
Still, there are takeaways from this research, which is summarized in the following case study.
5.2.1 Favor Exchange in Negotiation: A Multi-Issue Ultimatum Case Study
One of the key areas for expanding research within human-agent negotiation is temporally-aware
agents. This is most clearly illustrated by examining the additional value that can be claimed across multiple
integrative negotiations. The following case study shows the benefit of reasoning across multiple
negotiations in a repeated Ultimatum Game scenario, and sets the stage for future work with favors using
IAGO in ANAC 2018.
Within a given negotiation, the division of resources between competing sides can be represented graphically by the set of points representing the utility that each participant receives from a given distribution. Each point for which no other point yields strictly greater utility to both parties is considered to be Pareto optimal (lying on the Pareto frontier). Formally, given a set S of points representing the joint utility of a deal, the set of Pareto optimal points P is defined as:
P = { p ∈ S | ∄ q ∈ S : (px < qx ∧ py < qy) }
Thus, points falling below the curve generated by these points are considered sub-optimal (or “inefficient”), as such a point could be improved for one player without harming the other.
Unfortunately, when repeated negotiations are allowed to occur, simply combining Pareto optimal
solutions in each individual negotiation can be arbitrarily inefficient over time. This is clearest when the
Pareto frontier is convex (Figure 5). In this case, the “fair solution” (an even split, illustrated as deal “A”
in Figure 5), while efficient for that negotiation, will lead to a solution for the combined negotiation that
is well below the Pareto optimal one. Conversely, choices B1 and B2 are individually efficient but unlikely to occur, as each would be seen as violating the norm of fairness; together, however, they combine to form a Pareto efficient solution over time. Formally, we can define this in the two-negotiation solution as:

P2 = { p1 + p2 | p1 ∈ S1, p2 ∈ S2, ∄ q1 ∈ S1, q2 ∈ S2 : (p1_x + p2_x < q1_x + q2_x ∧ p1_y + p2_y < q1_y + q2_y) }
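Building on the previous sketch (and reusing its paretoOptimal routine), the combined two-negotiation frontier P2 can be computed by summing every pair of deals and filtering out dominated sums; this is again an illustrative sketch rather than the actual analysis code.

    import java.util.ArrayList;
    import java.util.List;

    /** Sketch of the combined two-negotiation frontier P2: sum every pair
        of deals from S1 and S2, then keep only non-dominated sums. */
    public class CombinedFrontier {
        public static List<double[]> combined(List<double[]> s1, List<double[]> s2) {
            List<double[]> sums = new ArrayList<>();
            for (double[] p1 : s1)
                for (double[] p2 : s2)
                    sums.add(new double[]{p1[0] + p2[0], p1[1] + p2[1]});
            // Reuse the single-negotiation routine from the previous sketch
            return ParetoFrontier.paretoOptimal(sums);
        }
    }

On a convex frontier, the sum of two “fair” per-game deals (A + A in Figure 5) will generally be dominated by an asymmetric pair such as B1 + B2, which is exactly the inefficiency described above.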
Repeated negotiations over time allow the notion of “efficiency” to change. Favor exchange with ledgers is one form of social interaction that allows parties to discover and achieve such efficient solutions, by recognizing the implications of this change.
Figure 5: The space of negotiation options in a simple, two-negotiation repeated interaction
Returning to the example of apples and bananas from Table 1, if one side recognizes that there will be a surplus of apples in the future, that side may agree to forego a “fair” split of the bananas today, with the
promise of receiving a similar favor in some future negotiation. This technique of “banking” joint value
can be very effective in negotiations, and help establish mutually beneficial relationships between
negotiation partners. Even in situations where payoffs are uncertain (one side doesn’t know the other prefers
bananas), exchanging favors is still a viable strategy. Malicious manipulation of favor returns, in which one
party claims to incur a favor by accepting a poor deal when in fact it was a good deal for them, is also
possible.
In this work, our agents signaled intention using both behavior and language selection. To realize
the multi-issue, multiple-negotiation domain that we explored, we used the Colored Trails testing
framework. Colored Trails is a negotiation testbed for analyzing the strategies of participants, and has been
used in several types of games, including revelation games [44]. Our design involved a version of the
interface that was deployable via the web and customized to allow our agent to engage in multi-issue
bargaining games.
In Colored Trails, both players start with a set of different-colored “chips”. By expending a chip, a player can move one space on the board of a matching color, with the intent of moving toward a goal location. In our version, the closer a player gets to the goal, the more points they receive. The set of spaces that can be reached with the current set of chips is highlighted green on the board at certain stages of the game, which limits the incidence of players choosing suboptimal routes (Figure 6).
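This reachability highlight can be computed with a simple search over (position, remaining chips) states. The sketch below assumes an illustrative board representation (adjacency lists and per-square colors), not Colored Trails’ actual data structures.

    import java.util.*;

    /** Sketch: squares reachable from `start` given the player's chips.
        adj[i] lists neighbors of square i; color[i] is its color. */
    public class Reachable {
        public static Set<Integer> reachable(int start, int[][] adj, char[] color,
                                             Map<Character, Integer> chips) {
            Set<Integer> out = new HashSet<>();
            dfs(start, adj, color, chips, out, new HashSet<>());
            return out;
        }

        private static void dfs(int pos, int[][] adj, char[] color,
                                Map<Character, Integer> chips,
                                Set<Integer> out, Set<String> seen) {
            out.add(pos);
            // Memoize visited (position, chip-multiset) states; TreeMap gives
            // a canonical string form for the chip counts
            if (!seen.add(pos + ":" + new TreeMap<>(chips))) return;
            for (int next : adj[pos]) {
                int n = chips.getOrDefault(color[next], 0);
                if (n > 0) {                       // spend a chip matching the square's color
                    chips.put(color[next], n - 1);
                    dfs(next, adj, color, chips, out, seen);
                    chips.put(color[next], n);     // backtrack
                }
            }
        }
    }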
Figure 6: Web-deployment of Colored Trails framework
For an agent to be successful, it should endeavor to allow parties to discover as much integrative
potential as possible. By maximizing this joint value, there is a greater amount that can be distributed. For
pedagogical or teaching agents, this may be the end goal, as instructing human negotiators to discover such
value may be sufficient. In competitive or optimizing contexts, however, this strategy is also beneficial so long as it results in some additional portion of the larger value being assigned to the agent by the other party. What is less clear is what drives the discovery of this joint value.
We hypothesized that cueing participants to look for joint value across negotiations by signaling a willingness to engage in favors through simple chat messages might be sufficient to generate discovery of joint value. (Cultural effects may also affect the rate of favor exchange: members of collectivist cultures may discover joint value without the need for cueing; to mitigate this risk, we examined only US participants.) While we were primarily interested in showing that this could be an effective strategy for finding previously unattainable Pareto efficiency over time, we also examined whether cueing participants with behavior or action may yield greater ability to discover joint value even within a single negotiation. However, it is possible that while this may result in a greater joint value, it would also result in a greater
share of that value being allocated to the player, with no benefit to the agent. Furthermore, a complex
interaction may exist between signaling through action and signaling through language; a mismatch may
be considered a betrayal from the human player’s point of view and result in a much smaller share being
allocated to the agent than would otherwise be attainable. Culture, especially collectivist culture, is known to affect negotiation results and may interact in complex ways with cueing favor exchange. To simplify our system, the design was optimized for U.S. participants, and subjects were chosen accordingly.
To effectively measure these issues, a multi-issue game consisting of five negotiations was
designed in the Colored Trails framework. The first four negotiations consist of multi-issue ultimatum games in which the agent acts as the proposer. These negotiations are set up to create two separate
instances of integrative potential over time (Table 3). The player, acting as the responder, has the option to
accept the offer, or to reject it and receive their BATNA. Since the BATNA of the player is known, the
agent is capable of providing two broad classes of offers. “Poor” offers would result in a value for the player
that is less than his/her BATNA, while “Good” offers would result in a value for the player that is more than
his/her BATNA. In a single negotiation, accepting offers less than one’s BATNA is an irrational decision.
However, if a participant wants to signal acceptance of a favor, it may be helpful to accept an offer below
one’s BATNA in order to hopefully signal reciprocal behavior in the future.
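A minimal sketch of this offer classification follows; the names and point types are illustrative assumptions rather than the study’s actual implementation.

    /** Sketch: classify an offer relative to the responder's BATNA.
        Accepting a "Poor" offer is irrational in a single negotiation,
        but can signal the granting of a favor across repeated rounds. */
    public class OfferClass {
        public static String classify(int valueToResponder, int batna) {
            return valueToResponder > batna ? "Good" : "Poor";
        }
    }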
Table 3: Pareto optimality over time for a favor-seeking agent
Round 1 | Favor opportunity | Integrative potential possible
Round 2 | Return-favor opportunity |
Round 3 | Favor opportunity | Integrative potential possible
Round 4 | Return-favor opportunity |
Round 5 | User-proposed offer | Joint value discovered?
Broadly speaking, the agent could choose to offer either a poor or a good offer, and could also
choose to frame it as a demand for a favor or not. After some combination of poor or good offers in the first
four rounds, the player was given a chance to respond in the final round by crafting an offer. By measuring
both the total value discovered in the final round as well as the balance of allocation between the player and
the agent in that round, we were able to directly measure the integrative potential discovered, and measure
the benefit of cooperation/cost of betrayal.
The first four negotiations serve to establish a ledger. Depending on condition (favors returned vs.
favors never returned), the ledger may be even or uneven. The favor language regulates how salient this
ledger is made. In the final negotiation, with the user as the proposer, we see if establishing this relationship
allows the user and agent to discover more efficient solutions within a single negotiation. This result would
parallel more of the traditional, single-game unit of analysis that has been performed in prior literature. This
motivated our 2 (favors returned vs. favors never returned) by 2 (favor language vs. generic language)
design (Table 4).
Table 4: Agent types in experimental conditions, Ultimatum Game Study
                 | Favors returned | Favors never returned
Favor framing    | Favor-seeking   | Betraying
No favor framing | Cooperative     | Competitive
The results of the study are summarized in detail in [78], but the implication is clear: failing to follow through with previously promised favors leads to a substantial cost of betrayal for agents that ignore this temporal implication (Figure 7). (These results are specific to American participants, where cross-cultural effects can be held relatively constant; these cross-cultural effects are highly significant, and a treatment of this effect is the subject of another of our papers [77].)
Figure 7: Cost of Betrayal as Measured by Acceptance Rate in Round 3
This study presented some of the first quantifiable evidence that certain strategies, while effective
in one-shot negotiations, backfire horribly in repeated interactions. We later showed that this cost of
betrayal is mirrored by a benefit of cooperation, but on a different timescale. Specifically, while betrayal
was detected and punished early, cooperation was rewarded only after additional negotiation
interactions [81]. This work also revealed the importance of individual personality differences. Both the
temporal results and the individual difference results are key to the design of Layer 2 and Layer 3 agents,
respectively.
And yet, while these results presented evidence that relationships are, of course, important in
negotiation, they are hampered by the same problems of agent-agent negotiation—namely, that the
Ultimatum game is a very degenerate case of more feature-rich multi-issue bargaining negotiation.
Unfortunately, no platform existed for the creation of agents that negotiate with humans in this fashion. Nor
was the scholarly community equipped to create such agents with existing tools. As such, in furtherance
of the goal of creating human-aware negotiating agents, a platform had to be developed.
Following this study, two additional studies were conducted, examining the details of altering the
temporal length of the negotiation, and examining cross-cultural effects. Please refer to the individual references [77][81] for more detail. In both cases, however, similar Ultimatum-style games
were used due to limitations of the existing platforms. To properly examine human-agent negotiation in an
ecologically valid setting, development of IAGO was required.
Chapter 6. Advanced Socially-Aware Agents
My own work using IAGO has so far focused on developing agents along two primary axes. In the first, we design agents that adopt a specific strategy already in use (successfully) by human
negotiators. For example, in some of the first published work on IAGO, four agents were developed that
replicated human negotiation strategies of strategic information withholding and of positive/negative use
of emotional manipulation [74]. A number of other variants of these agents have been developed that
modify or extend upon these behaviors. For example, the RedQueen and Cheshire agents were developed
to allow for agents that followed a negative-sloped concession curve and utilized offer anchoring, in order
to claim value in negotiation [79]. Additional minor improvements have allowed agents to reason more
intelligently about user preferences, lie about their own preferences, and avoid repeating offers that the user
has already considered. External efforts using the IAGO platform have also resulted in advancements,
creating adaptive agents that employ user modeling techniques [65], or agents that are capable of lying [49].
The other main axis for IAGO studies is designing agents that either accomplish a human-driven
purpose, or that provide insights about human behavior in negotiation. Prior work [79][80] has shown that
the use of certain agent strategies can affect the instructions that humans give to their representatives in
much the same way that real-world negotiation experience can. Other work has indicated that virtual agents
may be helpful for realizing known teaching strategies for promoting negotiation skills [60].
Recall that the goal of this work is to enable human-agent negotiation research that stretches the
bounds of what is currently practical to achieve with current tools. To that end, I conducted a series of
studies to develop and evaluate social negotiating agents in a human-agent context. These studies have
showcased the full capabilities of the IAGO platform as a tool for negotiation research. In the first study,
we proposed repeated multi-issue bargaining as a challenge problem for the Human-Agent League of
ANAC in both 2018 and 2019. The second study builds on the ideas found within the first exploratory ANAC work by designing agents that are specifically interactive over time and with individuals (through
the mechanism of favors and ledgers).
6.1 Study #1: Exploratory Long-Term Interaction Agents (ANAC)
The first studies (Study #1a and #1b) were somewhat exploratory and community-driven. A challenge was issued for the 2nd and 3rd annual human-agent leagues of ANAC. In these challenges (see Appendix A), a series of three back-to-back negotiations was conducted using IAGO. Each negotiation
involved a partially integrative 4-item scenario. The three negotiations were identical in structure (although
the order of the issues and the names and descriptions of the items were shuffled, so as to obscure this fact
from the human participants). Agents were developed by the community of scholars involved in ANAC in
order to succeed in this challenge. A variety of tactics were used, and we performed post-hoc analysis of
these strategies in a manner similar to the work in [75]. This work serves as a springboard for our own development of repeated negotiating agents, and provides us with clues as to the effective tactics that may be used in creating robust, relationship-aware agents.
6.1.1 Study #1a: ANAC 2018 Results of Repeated Negotiation
Much like the First Annual Competition of ANAC, this second competition featured an array of
participant-submitted agents competing against humans in multi-issue negotiation. However, in contrast to
a single, 10-minute, multi-issue negotiation, participants engaged in three, 7-minute multi-issue
negotiations. While this by no means encapsulates the entirety of the human-agent negotiation space, it
does allow us to narrow in on a set of questions about the dynamics of likeability and success over multiple
negotiations, rather than in a single snapshot. The main results of this study were previously reported in
[76], but are summarized here.
Results from the 2017 competition had indicated that there was a tradeoff between scoring well and
being well-liked—this competition was designed to craft a measure of score that took into account
likeability. If such a tradeoff truly exists, then pursuing short-sighted strategies that increase points in the
first negotiation but come at a severe cost to likeability may result in fewer points overall.
Within repeated negotiations, the idea of favor exchange (or “logrolling”) has been explored. Since agents are evaluated on their ability to win over several games, losing a single game may be a viable strategy for building likeability and thus winning more in the long run. Indeed, one of the submitted agents used this strategy to score the highest of any agent in the competition. Agent score was the total of the agent’s points summed over all three negotiations; it considered only the points earned by the agent (and did not consider the human score).
For the purposes of the competition, all agents’ scores were calculated, as well as their “likeability
rating”. Likeability was determined by a series of self-reported 7-point Likert questions after negotiation:
• How satisfied were you with the final agreement?
• How much do you like your opponent?
• Would you negotiate with this opponent again?
Likeability was previously used in the ANAC 2017 results and found to have high reliability.
Likeability varied substantially across the submitted agents.
Due to the large differences in agent score across games for certain agents, we decided to examine these differences as a structural feature of the interaction. We examined the maximum point spread between negotiations and performed regression analysis to examine correlations with “winning” (as measured by the agent point lead). We found a significant, positive correlation such that agents that have larger differences between their scores in different negotiations tend to score better overall (t = 3.211, N=240, p
= 0.002). This effect is largely driven by the winning agent, which had a strategy that was designed to
maximize these differences by exchanging favors and “logrolling” across multiple negotiations. This result
can be easily visualized in Figure 8.
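For clarity, the predictor in this regression, the maximum point spread, can be computed per agent as the largest difference between its scores in any two of its negotiations; the sketch below uses illustrative names.

    /** Sketch: the maximum spread between an agent's per-negotiation
        scores, used as the regression predictor described above. */
    public class Spread {
        public static int maxPointSpread(int[] negotiationScores) {
            int min = Integer.MAX_VALUE, max = Integer.MIN_VALUE;
            for (int s : negotiationScores) {
                min = Math.min(min, s);
                max = Math.max(max, s);
            }
            return max - min;   // largest difference between any two scores
        }
    }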
The winning agent, “Equalist”, engages in cross-game logrolling, and it manages to seek largely distributive but unfair solutions in each individual game. This does not appear to come at a cost to likeability,
and Equalist is able to exchange favors across games to ensure it still comes out ahead (particularly in
Negotiation 2). Equalist does end up with more than its fair share, but this may be indicative of human
behavior with regard to favors: people are able to exchange favors but have trouble keeping track of the exact magnitude owed. In this way, Equalist does somewhat poorly in Negotiation 1 (when it gives a favor) but
does extremely well in Negotiation 2 (when the favor comes due). Previous work does indicate that people
are perfectly capable of understanding reciprocity, but we posit that they may, in human-agent scenarios,
fall back on heuristics that do not fully capture the complexities of favor exchange. Indeed, such subtleties
are often the subject of advanced negotiation training courses. Of course, we emphasize that Equalist does
not succeed against every user; systems and studies examining individual differences remain highly
valuable.
In general, these results further expand the picture of human-agent negotiating behavior, and
provide more insight into the tradeoffs between winning and likeability, which leads us to the design of Study #2.
Figure 8: All Agent Performance across All Three Negotiations Individually, ANAC 2018
6.1.2 Study #1b: ANAC 2019 Results of Repeated Negotiation with Custom Interfaces
While the results of the third annual league of ANAC provide several insights (discussed further in a currently-under-review publication), we focus on only a narrow result for the purposes of this discussion. In this year’s competition, the three back-to-back
negotiations were again conducted. In contrast to previous years, the negotiations were not identical in
structure. Instead, while there were integrative opportunities to “grow the pie” within each negotiation,
there was a larger, cross-negotiation possibility to find integrative potential between negotiations #1 and
#3. This higher performance opportunity is shown in Figure 9, where agents generally have more points
in negotiation #3 due to structural differences.
Regardless of this structural difference, we did find a variety of performance differences across the
submitted agents. In particular, we had two standout agents in terms of performance: agents “Dona” and
“Draft”. Both of these agents took unique approaches to the challenges of negotiation by guiding the “meta-rules” of the negotiation. The agents either customized the interface (e.g., the Dona agent
instructed the user to answer questions using the emoji buttons) or enforced strict protocols for the humans
to follow (e.g., the Draft agent required human users to describe their preferences in a set order). The
success of these agents speaks to the importance of protocols in negotiations—and further, that automated
agents can often set these rules and have them be followed by their human counterparts. This fact is used
to design the favor-seeking behavior of our own agents in the subsequent two studies.
Figure 9: Agent Performance Over Time, Study ANAC 2019
6.2 Study #2: Advanced Interaction Agents using Favors
We have demonstrated that the “Best Practices” type of agents can perform well in single-
negotiation scenarios. Further, we have demonstrated rudimentary “relationship”-aware agents in
simplistic, Ultimatum Game scenarios. By developing agents that apply fixed negotiating rules against all opponents, we can develop reasonably effective agents. However, their surface success
belies a more extensive problem—most of our best agents are competitive and aggressive. While this isn’t
a problem in one-off interactions, it is reasonable to assume that repeated interaction with these sorts of
agents may frustrate users to the point that interactions will no longer be successful. Indeed, data from
past studies indicate that human players become frustrated quickly with the kind of hardline tactics our
agents (successfully) employ.
Indeed, the results of Study #1a indicate that there is a tradeoff between agent success and
likeability—but that this tradeoff can perhaps be mitigated or avoided via clever use of cross-game logrolling and favor exchange with ledgers. Study #1b further emphasizes that small differences in agent protocols
(such as underlining the possible use of favors) can have a large effect on human behavior.
As such, since there has been some success with “best practices” agents in the single-negotiation
scenario, it seems reasonable to attempt to extend these agents to multiple interactions. The goal of this
study (Study #2) is to explore different tactics with agents that can effectively make the leap to Layer 2/3—
where agents can negotiate effectively over long-term interactions, with adaptive individualistic behavior.
We focus on agents that attempt to promise and return favors, refrain from promising favors at all, or
promise favors and then betray. We detail the factorial design of this study in the next section.
6.2.1 Study #2: Design
Study #2 is a 3-cell experimental design, with an additional non-matched 4th cell as part of its design. The study involves an agent with three variants of favor-exchanging behavior: the agent will either ask for and return favors (favor-reciprocating), ask for favors but only weakly return them (limited), or ask for but not return favors (betraying). The final agent makes no explicit calls for favors (no promise), and answers
a separate research question (see below). Agents try to secure the most points in a series of three back-to-
back 7-minute negotiations. This generally mirrors the structure of the ANAC challenge from Study #1/the
2018 competition, but the three rounds are not identical in structure. Rather, there are structural differences that promote the effective use of favor exchange for both sides. The first negotiation features a structure that has high-value items for the agent, but low-value items for the human. The second negotiation contains a reversed structure (high-value for the human, low-value for the agent). The final
negotiation is structurally equal for both sides, with the items generally worth few points. This structure
serves two goals: it more closely mirrors the structure of the previous Ultimatum-Game Favor Study, and
also may induce people to more readily accept the favor proposition in negotiation 1 since there is an
obvious structural basis to do so.
All agents pursue an aggressive strategy in Negotiation 1, but the favor-reciprocating, limited, and
betraying agents justify their behavior by claiming that a favor will be paid back later. The no-promise
agent does not make any claims as to its future behavior and does not use favors. In Negotiation 2, the
agent negotiates aggressively if it is a betraying or no-promise agent, negotiates on nearly-fair terms if it is a limited agent, and gives ground if it is a favor-reciprocating agent and a favor is owed from Negotiation 1.
In the final negotiation, all agents pursue a fair, consensus-building strategy. In this way, we are able to
examine the results of the history of their behavior leading up to Negotiation 3, while their behavior within
Negotiation 3 remains identical.
This study marks a departure from the identical structure of Study #1a, and also features agents that
explicitly attempt betrayal as well as sincere descriptions of their future behavior. It is hypothesized, based
on prior work in simpler domains, that there will be both a benefit of cooperation for the favor-reciprocating agents in the third negotiation, as well as a cost of betrayal for the betraying agents. However, limited agents may or may not be viewed as returning the favor adequately in Negotiation 2, and therefore their performance is as yet unclear. It is further hypothesized that the three agents that request favors in
Negotiation 1 will result in higher acceptance rates than the agent that does not (the no-favor agent). The
results of this study will confirm which strategies, in general, are the most effective ones for agents to
employ when negotiating over an extended period of time. The behavior of the agents is summarized in
Table 5.
Table 5: Agent Behavior, Study #2
Agent type | Negotiation 1 | Negotiation 2 | Negotiation 3
Structure supports? | Agent side | Human side | Neither side
Favor-reciprocating agents (“Jiminy”) | Favor request | Return large favor opportunity | Fair behavior (grants large favors)
Betraying agents (“Gothel”) | Favor request | No favor behavior | No favor behavior
Limited agents (“Ursula”) | Favor request | Return small favor opportunity | Fair behavior (grants small favors)
No-promise agents (“Gaston”) | No favor behavior | No favor behavior | No favor behavior
6.2.2 Study #2: Implementation and Analysis
This study was conducted on Amazon’s Mechanical Turk (MTurk) service, with N=163 subjects
recruited. Best practices were followed, including tutorials, attention check questions, recruitment criteria
(high worker rating), and allotment of lottery entry for a cash payment for high performance across the
negotiation. After filtering for attention check failure and user absence (repeated timeouts), we retained
N=111. All study procedures were approved by the USC Institutional Review Board for ethical compliance.
Implementation of the favor behavior includes a new set of dialog options within IAGO to discuss
favor requests and returns. In particular, all favor-utilizing agents always open Negotiation 1 with a favor
request. If that request is accepted verbally, it leads to an actual favorable offer. If the offer is then accepted, the agent’s ledger is updated accordingly. Betraying agents ignore their own ledger (so they never attempt
to return favors). Favor-reciprocating agents and limited agents both try to pay back any incurred favors
but do so in different magnitudes (the reciprocating agents offer an entire issue of items for free, while the
limited agents offer a single item for free). Both reciprocating agents and limited agents will grant users
favors if asked, but only if they are not already owed a favor (and not in Negotiation 1, where the structure
favors the agents).
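A hedged sketch of this ledger bookkeeping is given below; the class and method names are illustrative assumptions, not IAGO’s actual API.

    /** Illustrative sketch of the favor ledger described above
        (names are assumptions, not IAGO's actual API). */
    class FavorLedger {
        private int owedToHuman = 0;            // favors the agent owes the human

        void favorGranted() { owedToHuman++; }  // human accepted a favorable-to-agent deal
        void favorRepaid()  { owedToHuman--; }

        /** Non-betraying agents repay when a favor is owed;
            betraying agents simply never consult the ledger. */
        boolean shouldRepay(boolean betraying) {
            return !betraying && owedToHuman > 0;
        }
    }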
While we believe Study #2 provides a great leap forward in our understanding of human-like
negotiating agents, it does have some limitations. The agents try to take into account any preferences that
users may state, and also must assess whether or not the users actually respond positively to the favor
requests in Negotiation 1. As such, analysis of the results of Study #2 could require subdividing the
experimental cells further, based on user behavior (whether they accepted the favor or not). This problem
quickly becomes non-Markovian in scope (how do we account for all of a user’s past behavior?), but we can limit the number of signals that the agent responds to between repeated negotiations, which simplifies the structure. We pursue two separate approaches to this analysis: user self-report (“I did the agent a favor
in the last round”) as well as explicit FAVOR_ACCEPT events that were triggered by IAGO interface
buttons.
Additionally, interactive agents must be consistent for results to be meaningful. It is problematic
for a favor-reciprocating agent to return a favor in Negotiation 2 if it was not granted one in Negotiation 1 (for
example). The designed agents are therefore capable of altering their strategy to the individual situation,
as both favor-reciprocating and limited agents will only return favors if they owe the human a favor, OR if
directly asked when their ledger is neutral. Although this essentially splits the analysis for agents that were engaging in favors vs. those that did not, we collapse across this distinction in the pursuit of interactive behavior. (Post-hoc analysis of “favor” agents, i.e., those that actually consistently returned a favor they owed, vs. “generic” agents, i.e., those that did not, showed non-significant differences in average points: M = 31.05 points vs. 31.67 points, p = .683, n.s. This simplification therefore seems justifiable.)
Agents are fundamentally dynamic and adaptive in their design. For example, all agents have an
internalized conception of “fairness” based on their internal mental model of the opponent. These agents
look for a moderately sized positive margin over their opponents in all deals. However, adverse events
(such as offer rejections) will reduce this margin over time.
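As a rough sketch (with illustrative constants rather than the agents’ actual parameters), this adaptive margin might be maintained as follows:

    /** Sketch of the adaptive fairness margin: the agent targets a modest
        lead over its opponent model and concedes after adverse events.
        Constants are illustrative assumptions. */
    class FairnessMargin {
        private double margin = 0.15;     // initial target lead (fraction of total value)

        void onOfferRejected() {          // adverse event: concede slightly
            margin = Math.max(0.0, margin - 0.05);
        }

        double targetAgentShare() { return 0.5 + margin; }
    }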
This study addresses three separate but important research questions. Study #2 is designed to show a) whether favors are an effective tactic, b) whether the “magnitude” of the favor returned is important, and c) whether negotiation history affects human behavior even when current agent behavior is the same.
6.2.3 Study #2: Results and Discussion
The first analysis we perform compares the total scores of the favor agents against the single agent that did not use favor language. Through univariate analysis of variance, we found that the favor-utilizing agents (which include the favor-returning, betraying, and limited agents; M = 123.4) performed significantly better than the no-favor-utilizing agent, “Gaston” (M = 116.6, p = 0.041, N=111).
This is in line with our first hypothesis as well as prior work (see Chapter 5.2).
We performed post-hoc analysis to determine the agents driving this effect. The average points
can be seen in Table 6. After using Tukey’s HSD to perform multiple comparison correction, we find that
the effect is driven by the difference between the betraying and no-favor agents (p = 0.045). Breaking this
down by negotiation, we find that the primary difference between favor-utilizing and non-favor-utilizing
agents is driven by the results in Negotiation 1. Due to the behaviors of the agents in asking for favors as
well as the structure of that negotiation favoring the agent, this result clearly indicates that merely asking
for favors is an effective technique.
Table 6: Total Agent Average Points, Study #2
Agent | Average Agent Points (total)
Favor-Returning (Jiminy) | 123.6
Betraying (Gothel) | 128.0
Limited (Ursula) | 121.5
No Favor (Gaston) | 116.6
While the differences among the three favor-granting agents do not reach traditional levels of
significance, the betraying agent, “Gothel”, does trend toward being the highest-scoring agent. In previous
studies, such as in Chapter 5, there was a clear cost of betrayal that appeared early in the set of negotiations.
Here, this difference does not appear, and, if anything, appears to be reversed. One possible explanation
for this difference is the relative complexity of the task—an IAGO-driven full multi-issue bargaining task
is far removed from repeated ultimatum games. While the favor results indicate that people are indeed capable of perceiving that favors are being asked for, participants may not be able to grasp that they are being outmaneuvered by the betraying agent. Our second research question is therefore somewhat
inconclusive, as the magnitude of the favor returned does not show significant differences.
To examine our final research question, we turn to analysis of rounds in which agents have identical behavior but differ according to their historical behavior. This is accomplished by comparing the betraying (Gothel) and no-favor (Gaston) agents. One-way analysis of these two agents in Negotiation 2 indicates a significant difference (t = 2.281, p = 0.029, equal variances not assumed). This result indicates that, indeed, negotiation history is critically important in reaching
conclusions about socially-aware agent behavior and design.
These results seem to indicate that favor-exchange behavior is certainly perceivable by human users
in IAGO repeated multi-issue bargaining tasks. The usefulness of favors is demonstrated, although the
cost of failing to return them is still unclear. Part of this mitigated cost of betrayal may be due to the complexity
of the task. We performed additional analysis comparing the two measures of favor acceptance in
Negotiation 1: people who explicitly responded positively to favors vs. participants who self-reported “doing the agent a favor”. Respondents to the self-report question believed they were giving a favor far more often than they actually did so using the interface. Notably, some of these self-reports came from participants facing Gaston, the no-favor agent that never explicitly asked for favors! Therefore, even non-favor agents are still perceived as being given favors.
This result reinforces the idea that human perceptions of favors are highly mutable. Results from
Study #1b indicated that setting a protocol can have a heavy effect on user behavior. IAGO allows these
agent protocol-setting behaviors (such as favors and ledgers) to be implemented and evaluated for efficacy.
These results are based on actual interactive data, and are thus arguably more ecologically valid than other
methods.
Chapter 7. Discussion and Research Implications
7.1 Summary
The implications of the development of socially-aware negotiation agents are broadly relevant.
Negotiation represents a key challenge problem for multiple areas of computer science while simultaneously serving as a “metaphor” for a great deal of human social behavior. The IAGO platform and
the studies conducted therein allow the furtherance of models of social intelligence in artificial agents whose
primary purpose is to interact with humans. As humans grow increasingly reliant on agent interaction, this
modeling is key to developing realistic and useful agents for training, teaching, and competition.
This research has used negotiation as a testbed for developing these higher-complexity agents and
models; negotiation is a complex and highly social task—successful human negotiators maintain
sophisticated models of their opponent, discover joint value over time, and respond to and generate
emotional signals that inform (or manipulate) their opponent. Automated agents that attempt this task must
be able to reason about these components as well, utilizing advanced techniques that draw from appraisal
theory and theory of mind research, attending to user signals (including emotion detection), and efficiently
searching for effective strategies to use within a given negotiation. Artificial agents must also understand
human-agent relationships, and may keep detailed records of past interactions with specific users and their
individual differences, creating an AI problem that is non-Markovian in scope. The applications of my
research extend to broader spaces within human-computer interaction, but negotiation remains a core problem
for understanding larger questions in human-computer relationship development.
The development of the IAGO platform opens up previously inaccessible avenues in artificial
intelligence and human-computer interaction research. By having an easily deployable and largely realistic
simulation environment for human and machine interaction in complex social tasks, we are able to research
AI-human interaction in as close to “real-world” conditions as possible.
My research represents an exciting opportunity for a future community of research. Currently, my
research answers questions about human behavior that social psychology cannot adequately answer. Even
the best-designed “traditional” user study relies on fallible human actors or confederates, or is woefully
non-interactive. To truly answer the question of how a human treats an agent, computer programs with
algorithmic social intelligence are needed, and computer science is the only discipline that can deliver these
proposed entities.
Furthermore, to design agents that are capable of interacting with humans and are capable of solving
the problems that a networked world presents, computational methods must be employed. Even the most
robust and accepted social psychological theory must be translated and adapted in algorithmically feasible
ways in order to be used in the computer agents of the future. For that, this research presents an ambitious
leap forward for the future of computer agents that can be realistically used to interact with the increasingly
technologically entangled human of tomorrow.
7.2 Result Implications and Future Directions
The construction of ever-more robust agents (capable of both temporal reasoning and individual optimization) opens up myriad opportunities for research that benefits both the social sciences and computer
science. Agents that can adequately handle repeated negotiation and other temporal aspects of negotiation
represent a great step forward. The results of the ANAC competitions (now entering their fourth year) have
provided numerous insights into agent design. The competition continues to attract new researchers and designs. In this work, we examined two such competitions (Studies #1a and #1b)—these competitions formalized concepts such as the likeability-success tradeoff and the importance of negotiation protocol.
Beyond merely encouraging community involvement, this work has also described the construction
of actual prototype agents based on these insights. These agents are capable of temporal awareness and can
also benefit from individualization, as shown from the results of Study #2.
The work that has been presented provides ample insight on how human-like negotiating agents
may be developed using the IAGO platform, and how the platform itself contributes to the community of
scholarship in human-agent negotiation. Furthermore, it illustrates several areas in which further research
may be conducted that will further elucidate human behavior, and agent design that emulates it. While this work has moved beyond traditional “best practices” agents in favor of individually-aware and temporally-aware agents that can be developed using IAGO, there are limitations that can be examined in future work.
First, while the agents described in this work represent a substantial improvement over existing
agents (which reason only in temporal snapshots), they still make simplifications regarding temporal
dynamics. The work on individualized agents [81] indicates that these agents realize costs of betrayal and
benefits of cooperation on different timescales. While the results of Study #2 address temporal dynamics over lengthened timescales relative to previous research, they are still in the realm of an interaction that
lasts less than an hour in total. Further work can expand the timescales examined, with fully longitudinal
studies that involve humans returning to interact with agents over days or weeks. These studies will allow
us to isolate short-term effects (such as mood) from longer-term, stickier aspects (such as personality).
The IAGO platform is well-suited for this kind of research, since users can be identified by unique
accounts, and their behavior examined over time. This will allow agents to adopt more robust non-
Markovian models of behavior, and develop more accurate models of individual users. These kinds of
agents would likely be instrumental for a number of applications, including market research, conjoint
analysis, and understanding the dynamics of relationships between agents and humans over the long-term.
Secondly, this work has illustrated the evolution of work on negotiation between agents and humans over the course of two broad types of games: ultimatum games and multi-issue bargaining tasks.
However, there are various other established games and scenarios that are helpful in many research tasks. Some are simpler, such as the Dictator Game or the Investment Game, but there are ever-more complex games as well that can examine other aspects of human behavior.
In particular, to be truly “community-aware”, an agent needs to be customizable to the needs of its
user. While many of the agents developed so far using IAGO are intended to compete with humans, IAGO
also has the potential to develop agents whose primary purpose is to represent humans’ interests. IAGO is
currently being developed to allow a new class of interaction in which agents negotiate with each other, but
the agents can be “programmed” by the user to adopt different strategies. This allows examination of
human behavior when agents act as representatives for their users. In the future, IAGO will be expanded
to account for a number of additional scenarios, and will feature an expanded game library.
Finally, this work has focused exclusively on dyadic negotiations. While the area of dyadic interactions over time was hitherto largely unexplored, there are a number of additional directions that
can be explored from this springboard. Interactions with multiple partners have been explored previously
in the agent-agent negotiation literature (see the Agent-Agent League of ANAC 2018 for a concrete
example). Seeing how humans negotiate with multiple agent partners is a viable avenue of future research.
This is of particular interest to the field of supply-chain management, in which automated agents are being
developed to simulate the role of a “middleman” that negotiates with a buyer and seller simultaneously.
Beyond multiple parties, there is still a great deal of work to be examined in dyadic interactions
which exist within a dedicated community. In the studies described so far, humans (or their agent
representative) are restricted to interacting with a single, predetermined agent. As such, the agent’s
behavior is critical since future interactions will necessarily include the same human-agent pair. However,
if the human is given a choice of negotiating partner between one of several agents, there can be
“community-effects” in which the human must weigh the risk of pairing with a known agent of some known
quality versus an unknown agent. This bears some similarity to Multi-Armed Bandit problems in structure,
but is vastly more complex, since each agent’s behavior is not determined by a random variable, but by an
entire history of interaction with various users.
This last area is of particular interest for future research. By developing a community of agents,
we can further unpack reputation effects, as well as develop dynamic agents that can evolve to fill a
particular niche in a community based on the interactions with human users over time. For example, while
our results with guaranteed repeated negotiations may indicate that a behavior (e.g., lying) predicts reduced success, this may no longer be dominantly true in communities of agents. While the majority of agents
may choose not to lie, the social benefit of short-term gain may be high enough (in communities with certain
parameters) that a small group of agents that lie may develop. These dynamics, and the initial conditions
that lead to them, are a broad area of future research that can be facilitated by IAGO (and can draw from a
vast literature in evolutionary simulation).
In general, I believe that the IAGO platform provides a number of advancements that allow the
burgeoning field of human-agent negotiation to be conducted with robustness and ease. Much of this future
work will allow a deeper model of human behavior to be constructed, which in turn will facilitate ever-
more human-aware agents for negotiation. Agents that negotiate with humans can benefit from these
advancements in repeated interaction, and will become ever-more realistic and comprehensive when they
become truly social.
References
[1] Alto, K.M., McCullough, K.M. and Levant, R.F., 2016. Who is on Craigslist? A novel approach to participant recruitment
for masculinities scholarship.
[2] Anthony, P. and Jennings, N.R., 2003. Developing a bidding agent for multiple heterogeneous auctions. ACM Transactions
on Internet Technology (TOIT), 3(3), pp.185-217.
[3] Axelrod, R., & Hamilton, W. D. 1981. The evolution of cooperation. Science, 211(4489), 1390-1396.
[4] Baarslag, T. and Gerding, E.H., 2015. Optimal incremental preference elicitation during negotiation.
[5] Baarslag, T., Hendrikx, M.J., Hindriks, K.V. and Jonker, C.M., 2016. Learning about the opponent in automated bilateral
negotiation: a comprehensive survey of opponent modeling techniques. Autonomous Agents and Multi-Agent
Systems, 30(5), pp.849-898.
[6] Baarslag, T., Kaisers, M., Gerding, E., Jonker, C.M. and Gratch, J., 2017. When will negotiation agents be able to represent
us? The challenges and opportunities for autonomous negotiators.
[7] Balliet, D., 2009. Communication and cooperation in social dilemmas: a meta-analytic review. Ration. Soc. 54, 39–57.
[8] Balliet, D., Parks, C. and Joireman, J., 2009. Social value orientation and cooperation in social dilemmas: A meta-
analysis. Group Processes & Intergroup Relations, 12(4), pp.533-547.
[9] Batrinca, L., Stratou, G., Shapiro, A., Morency, L.-P., and Scherer, S., 2013. Cicero—towards a multimodal virtual
audience platform for public speaking training. In Intelligent Virtual Agents. Springer, 116-128.
[10] Bazerman, M. H., & Neale, M. A. 1993. Negotiating rationally. Simon and Schuster.
[11] Berman, J.J., Murphy-Berman, V. and Singh, P., 1985. Cross-cultural similarities and differences in perceptions of
fairness. Journal of Cross-Cultural Psychology, 16(1), pp.55-67.
[12] Blascovich, J., Loomis, J., Beall, A. C., Swinth, K. R., Hoyt, C. L., & Bailenson, J. N., 2002. Immersive virtual
environment technology as a methodological tool for social psychology. Psychological Inquiry, 13(2), 103-124.
[13] Bonnefon, J.F., Shariff, A. and Rahwan, I., 2016. The social dilemma of autonomous vehicles. Science, 352(6293),
pp.1573-1576.
[14] Bordone, R.C., 2000. Teaching interpersonal skills for negotiation and for life. Negotiation Journal, 16(4), pp.377-385.
[15] Bosse, T. and Jonker, C.M., 2005, July. Human vs. computer behavior in multi-issue negotiation. In Rational, Robust, and
Secure Negotiation Mechanisms in Multi-Agent Systems, 2005(pp. 11-24). IEEE.
[16] Breazeal, C., 2003. Toward sociable robots. Robot. Auton. Syst. 42, 167–175.
[17] Caporael, L.R., Dawes, R.M., Orbell, J.M. and Van de Kragt, A.J., 1989. Selfishness examined: Cooperation in the absence
of egoistic incentives. Behavioral and Brain Sciences, 12(4), pp.683-699.
[18] Caputo, A., 2013. A literature review of cognitive biases in negotiation processes. International Journal of Conflict
Management, 24(4), pp.374-398.
[19] Castelfranchi, C. and Falcone, R., 1998, July. Principles of trust for MAS: Cognitive anatomy, social importance, and
quantification. In Multi Agent Systems, 1998. Proceedings. International Conference on (pp. 72-79). IEEE.
[20] Core, M., Traum D., Lane, H.C., Swartout., W., Gratch, J., Van Lent, M., and Marsella, S., 2006. Teaching negotiation
skills through practice and reflection with virtual humans. Simulation 82, no. 11: 685-701.
[21] Crandall, J. W., & Goodrich, M. A., 2005. Learning to teach and follow in repeated games. In AAAI workshop on
Multiagent Learning.
[22] Crandall, Jacob W., Mayada Oudah, Fatimah Ishowo-Oloko, Sherief Abdallah, Jean-François Bonnefon, Manuel Cebrian,
Azim Shariff, Michael A. Goodrich, and Iyad Rahwan, 2018. Cooperating with machines. Nature communications 9, no. 1:
233.
[23] Croson, R., Boles, T., & Murnighan, J. K., 2003. Cheap talk in bargaining experiments: lying and threats in ultimatum
games. Journal of Economic Behavior & Organization, 51(2), 143-159.
[24] Curhan, J.R., Elfenbein, H.A. and Xu, H., 2006. What do people value when they negotiate? Mapping the domain of
subjective value in negotiation. Journal of personality and social psychology, 91(3), p.493.
[25] Czibor, A., Vincze, O. and Bereczkei, T., 2014. Feelings and motives underlying Machiavellian behavioural strategies;
narrative reports in a social dilemma situation. International Journal of Psychology, 49(6), pp.519-524.
[26] Dautenhahn, K., 2007. Socially intelligent robots: dimensions of human–robot interaction. Philos. Trans. Roy. Soc. B 362,
679–704.
[27] De Dreu, C.K., Koole, S.L. and Steinel, W., 2000. Unfixing the fixed pie: a motivated information-processing approach to
integrative negotiation. Journal of personality and social psychology, 79(6), p.975.
[28] de Melo, C. M., Carnevale, P. J., Read, S. J., & Gratch, J., 2014. Reading people’s minds from emotion expressions in
interdependent decision making. Journal of personality and social psychology, 106(1), 73.
[29] de Melo, C. M., Carnevale, P., & Gratch, J., 2011, May. The effect of expression of anger and happiness in computer
agents on negotiations with humans. In The 10th International Conference on Autonomous Agents and Multiagent
Systems-Volume 3 (pp. 937-944). International Foundation for Autonomous Agents and Multiagent Systems.
[30] de Melo, C. M., Gratch, J., & Carnevale, P. J., 2015. Humans versus computers: Impact of emotion expressions on people's
decision making. IEEE Transactions on Affective Computing, 6(2), 127-136.
[31] de Melo, C. M., Marsella, S., & Gratch, J., 2016, May. Do as I say, not as I do: Challenges in delegating decisions to
automated agents. In Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems(pp.
949-956). International Foundation for Autonomous Agents and Multiagent Systems.
[32] de Melo, C.M., Carnevale, P. and Gratch, J., 2012, September. The effect of virtual agents’ emotion displays and appraisals
on people’s decision making in negotiation. In International Conference on Intelligent Virtual Agents (pp. 53-66).
Springer, Berlin, Heidelberg.
[33] Dehghani, M., Carnevale, P. J., & Gratch, J., 2014. Interpersonal effects of expressed anger and sorrow in morally charged
negotiation. Judgment & Decision Making, 9(2).
[34] DeVault, D., Artstein, R., Benn, G., Dey, T., Fast, E., Gainer, A., Georgila, K., Gratch, J., Hartholt, A., Lhommet, M. and
Lucas, G., 2014, May. SimSensei Kiosk: A virtual human interviewer for healthcare decision support. In Proceedings of
the 2014 international conference on Autonomous agents and multi-agent systems (pp. 1061-1068). International
Foundation for Autonomous Agents and Multiagent Systems.
[35] DeVault, D., Mell, J., and Gratch, J.. 2015. Toward Natural Turn-Taking in a Virtual Human Negotiation Agent. In AAAI
Spring Symposium on Turn-taking and Coordination in Human-Machine Interaction. AAAI Press, Stanford, CA.
[36] Ekman, P., Davidson, R.J. and Friesen, W.V., 1990. The Duchenne smile: Emotional expression and brain physiology:
II. Journal of personality and social psychology, 58(2), p.342.
[37] Faratin, P., Sierra, C., and Jennings, N.R., 1998. Negotiation decision functions for autonomous agents. In Robotics and
Autonomous Systems 24, no. 3-4: 159-182.
[38] Fehr, B., Samsom, D. and Paulhus, D.L., 1992. The construct of Machiavellianism. Twenty years later, CD Spielberger, JN
Butcher (red.), Advances in personality assessment (t. 9, s. 77-116).
[39] Fehr, E. and Schmidt, K.M., 2006. The economics of fairness, reciprocity and altruism–experimental evidence and new
theories. Handbook of the economics of giving, altruism and reciprocity, 1, pp.615-691.
[40] Fisher, R., Ury, W. and Patton, B., 1981. Getting to Yes: Negotiating Agreement Without Giving In.
[41] Filzmoser, M. 2010. Automated vs. human negotiation. International Journal of Artificial Intelligence, 4(10), 64-77.
[42] Fox, J., Ahn, S.J., Janssen, J.H., Yeykelis, L., Segovia, K.Y. and Bailenson, J.N., 2015. Avatars versus agents: a meta-
analysis quantifying the effect of agency on social influence. Human–Computer Interaction, 30(5), pp.401-432.
[43] Fulmer, C.A. and Gelfand, M.J., 2013. How do I trust thee? Dynamic trust patterns and their individual and social
contextual determinants. In Models for intercultural collaboration and negotiation (pp. 97-131). Springer, Dordrecht.
[44] Gal, Y.A., Grosz, B.J., Kraus, S., Pfeffer, A. and Shieber, S., 2005, July. Colored trails: a formalism for investigating
decision-making in strategic environments. In Proceedings of the 2005 IJCAI workshop on reasoning, representation, and
learning in computer games (pp. 25-30).
[45] Gerhart, B. and Rynes, S., 1991. Determinants and consequences of salary negotiations by male and female MBA
graduates. Journal of Applied Psychology, 76(2), p.256.
[46] Gillespie, J. J., Thompson, L. L., Loewenstein, J., & Gentner, D. 1999. Lessons from analogical reasoning in the teaching
of negotiation. Negotiation Journal, 15(4), 363-371.
[47] Gonsior, B., Sosnowski, S., Mayer, C., Blume, J., Radig, B., Wollherr, D. and Kühnlenz, K., 2011. Improving aspects of
empathy subjective performance for HRI through mirroring emotions. In Proc. IEEE Intern. Symposium on Robot and
Human Interactive Communication, RO-MAN 2011, Atlanta, USA.
[48] Gratch, J., DeVault, D., Lucas, G. M., & Marsella, S. 2015, August. Negotiation as a challenge problem for virtual humans.
In International Conference on Intelligent Virtual Agents (pp. 201-215). Springer, Cham.
[49] Gratch, J., Nazari, Z. and Johnson, E., 2016, May. The Misrepresentation Game: How to win at negotiation while seeming
like a nice guy. In Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems (pp.
728-737). International Foundation for Autonomous Agents and Multiagent Systems.
[50] Gratch, J., Rickel, J., André, E., Cassell, J., Petajan, E., & Badler, N., 2002. Creating interactive virtual humans: Some
assembly required. University of Southern California, Institute for Creative Technologies, Marina del Rey, CA.
[51] Gratch, J., Wang, N., Gerten, J., Fast, E. and Duffy, R., 2007, September. Creating rapport with virtual agents.
In International Workshop on Intelligent Virtual Agents (pp. 125-138). Springer, Berlin, Heidelberg.
[52] Grosz, B. J., & Kraus, S., 1999. The evolution of SharedPlans. In Foundations of rational agency (pp. 227-262). Springer,
Dordrecht.
[53] Grosz, B.J., Kraus, S., Talman, S., Stossel, B. and Havlin, M., 2004, July. The influence of social dependencies on decision-
making: Initial investigations with a new game. In Proceedings of the Third International Joint Conference on Autonomous
Agents and Multiagent Systems-Volume 2 (pp. 782-789). IEEE Computer Society.
[54] Guttman, R. H., & Maes, P., 1999. Agent-mediated integrative negotiation for retail electronic commerce. In Agent
Mediated Electronic Commerce (pp. 70-90). Springer Berlin Heidelberg.
[55] Guttman, R.H. and Maes, P., 1998, May. Agent-mediated integrative negotiation for retail electronic commerce.
In International Workshop on Agent-Mediated Electronic Trading(pp. 70-90). Springer, Berlin, Heidelberg.
[56] Haim, G., An, B. and Kraus, S., 2017. Human–computer negotiation in a three player market setting. Artificial
Intelligence, 246, pp.34-52.
[57] Hilty, J. A., & Carnevale, P. J., 1993. Black-hat/white-hat strategy in bilateral negotiation. Organizational Behavior and
Human Decision Processes, 55(3), 444-469.
[58] Hindriks, K., Jonker, C.M., Kraus, S., Lin, R. and Tykhonov, D., 2009, May. Genius: negotiation environment for
heterogeneous agents. In Proceedings of the 8th International Conference on Autonomous Agents and Multiagent Systems-
Volume 2 (pp. 1397-1398). International Foundation for Autonomous Agents and Multiagent Systems.
[59] Hindriks, K.V. and Jonker, C.M., 2008, December. Creating human-machine synergy in negotiation support systems:
Towards the pocket negotiator. In Proceedings of the 1st International Working Conference on Human Factors and
Computational Models in Negotiation (pp. 47-54). ACM.
[60] Johnson, E., Gratch, J. and DeVault, D., 2017, May. Towards An Autonomous Agent that Provides Automated Feedback
on Students' Negotiation Skills. In Proceedings of the 16th Conference on Autonomous Agents and MultiAgent Systems(pp.
410-418). International Foundation for Autonomous Agents and Multiagent Systems.
[61] Johnson, E., Lucas, G., Kim, P., & Gratch, J. 2019, June. Intelligent Tutoring System for Negotiation Skills Training.
In International Conference on Artificial Intelligence in Education (pp. 122-127). Springer, Cham.
[62] Johnson, E., Roediger, S., Lucas, G., & Gratch, J. 2019, July. Assessing Common Errors Students Make When
Negotiating. In Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents (pp. 30-37).
[63] Kenny, P., Hartholt, A., Gratch, J., Swartout, W., Traum, D., Marsella, S., & Piepol, D., 2007, November. Building
interactive virtual humans for training environments. In Proceedings of I/ITSEC (Vol. 174).
[64] Khooshabeh, P., McCall, C., Gandhe, S., Gratch, J. and Blascovich, J., 2011, May. Does it matter if a computer jokes.
In CHI'11 Extended Abstracts on Human Factors in Computing Systems (pp. 77-86). ACM.
[65] Koley, G. and Rao, S. 2018, Adaptive Human-Agent Multi-Issue Bilateral Negotiation Using the Thomas-Kilmann Conflict
Mode Instrument.
[66] Lee, M., Lucas, G., Mell, J., Johnson, E., & Gratch, J. 2019. What's on Your Virtual Mind? Mind Perception in Human-
Agent Negotiations. In Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents (pp. 38-45).
[67] Lim, S., & Reeves, B., 2010. Computer agents versus avatars: Responses to interactive game characters controlled by a
computer or other player. International Journal of Human-Computer Studies, 68(1), 57-68.
[68] Lin, R., Oshrat, Y. and Kraus, S., 2009, May. Investigating the benefits of automated negotiations in enhancing people's
negotiation skills. In Proceedings of The 8th International Conference on Autonomous Agents and Multiagent Systems-
Volume 1 (pp. 345-352). International Foundation for Autonomous Agents and Multiagent Systems.
[69] Littman, M. L., & Stone, P., 2001. Leading best-response strategies in repeated games. In In Seventeenth Annual
International Joint Conference on Artificial Intelligence Workshop on Economic Agents, Models, and Mechanisms.
[70] MacCrimmon, K.R. and Messick, D.M., 1976. A framework for social motives. Behavioral Science, 21(2), pp.86-100.
[71] Mannix, E.A., Neale, M.A. and Northcraft, G.B., 1995. Equity, equality, or need? The effects of organizational culture on
the allocation of benefits and burdens. Organizational Behavior and Human Decision Processes, 63(3), pp.276-286.
81
[72] Marsella, S. C., & Gratch, J., 2009. EMA: A process model of appraisal dynamics. Cognitive Systems Research, 10(1), 70-
90.
[73] Marsella, S., Gratch, J., & Petta, P., 2010. Computational models of emotion. A Blueprint for Affective Computing-A
sourcebook and manual, 21-46.
[74] Mell, J. and Gratch, J., 2017, May. Grumpy & Pinocchio: Answering Human-Agent Negotiation Questions through
Realistic Agent Design. In Proceedings of the 16th Conference on Autonomous Agents and Multiagent Systems(pp. 401-
409). International Foundation for Autonomous Agents and Multiagent Systems.
[75] Mell, J., et al, 2018. The Results of the First Annual Human Agent League of the Automated Negotiating Agents
Competition, Intelligent Virtual Agents.
[76] Mell, J., Gratch, J., Aydogan, R., Baarslag, T., and Jonker, C.M. 2019. "The Likeability-Success Trade Off: Results of the
2nd Annual Human-Agent Automated Negotiating Agents Competition", In Proceedings of the 8th International
Conference on Affective Computing & Intelligent Interaction.
[77] Mell, J., Lucas, G., Gratch, J. and Rosenfeld, A., 2015, September. Saying YES! The cross-cultural complexities of favors
and trust in human-agent negotiation. In Affective Computing and Intelligent Interaction (ACII), 2015 International
Conference on (pp. 194-200). IEEE.
[78] Mell, J., Lucas, G., Gratch, J., 2015. An Effective Conversation Tactic for Creating Value over Repeated Negotiations.
In Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems (pp. 1567-1576).
International Foundation for Autonomous Agents and Multiagent Systems.
[79] Mell, J., Lucas, G., Gratch, J., 2018. Welcome to the Real World: How Agent Strategy Increases Human Willingness to
Deceive, In Proceedings of the 2018 International Conference on Autonomous Agents and Multiagent Systems.
International Foundation for Autonomous Agents and Multiagent Systems.
[80] Mell, J., Lucas, G., Gratch, J., 2019. The Role of Experience in Negotiation, Journal of Artificial Intelligence Research.
Fast-tracked, under review.
[81] Mell, J., Lucas, G., Mozgai, S., Boberg, J., Artstein, R. and Gratch, J., 2018, November. Towards a Repeated Negotiating
Agent that Treats People Individually: Cooperation, Social Value Orientation, & Machiavellianism. In Proceedings of the
18th International Conference on Intelligent Virtual Agents (pp. 125-132). ACM.
[82] Messick, D.M. and McClintock, C.G., 1968. Motivational bases of choice in experimental games. Journal of experimental
social psychology, 4(1), pp.1-25.
[83] Metz, Rachel, 2018. Google demos Duplex, its AI that sounds exactly like a weird, nice human. Intelligent Machines.
Downloaded from https://www.technologyreview.com/s/611539/google-demos-duplex-its-ai-that-sounds-exactly-like-a-
very-weird-nice-human/
82
[84] Nadler, J., 2004. Rapport in legal negotiation: How small talk can facilitate e-mail dealmaking. Harv. Negot. L. Rev., 9,
p.223.
[85] Nazari, Z. 2016. Automated Negotiating with Humans. USC Proposal.
[86] Nazari, Z., Lucas, G. and Gratch, J., 2015, September. Multimodal approach for automatic recognition of
machiavellianism. In Affective Computing and Intelligent Interaction (ACII), 2015 International Conference on (pp. 215-
221). IEEE.
[87] Nowak, M. A., Page, K. M., & Sigmund, K., 2000. Fairness versus reason in the ultimatum game. Science, 289(5485),
1773-1775.
[88] O'connor, K. M., Arnold, J. A., & Burris, E. R., 2005. Negotiators' bargaining histories and their effects on future
negotiation performance. Journal of Applied Psychology, 90(2), 350.
[89] O'shea, Patrick Gavan, and David F. Bush., 2002. Negotiation for starting salary: Antecedents and outcomes among recent
college graduates. Journal of Business and Psychology 16.3: 365-382.
[90] Olekalns, M. and Smith, P.L., 2009. Mutually dependent: Power, trust, affect and the use of deception in
negotiation. Journal of Business Ethics, 85(3), pp.347-365.
[91] Orr, D., & Guthrie, C. 2005. Anchoring, information, expertise, and negotiation: New insights from meta-analysis. Ohio St.
J. on Disp. Resol., 21, 597.
[92] Parkinson, B., 1996. Emotions are social. British journal of psychology, 87(4), 663-683.
[93] Patton, B., 2005. Negotiation. The handbook of dispute resolution, pp.279-303.
[94] Paulhus, D.L. and Williams, K.M., 2002. The dark triad of personality: Narcissism, Machiavellianism, and
psychopathy. Journal of research in personality, 36(6), pp.556-563.
[95] Provis, C., 2004. Negotiation, persuasion and argument. Argumentation, 18(1), pp.95-112.
[96] Raeesy, Z., Brzostwoski, J. and Kowalczyk, R., 2007, November. Towards a fuzzy-based model for human-like multi-
agent negotiation. In Proceedings of the 2007 IEEE/WIC/ACM International Conference on Intelligent Agent
Technology (pp. 515-519). IEEE Computer Society.
[97] Reeves, B., & Nass, C. I., 1996. The media equation: How people treat computers, television, and new media like real
people and places. Cambridge university press.
[98] Reilly, W. S., 1996. Believable Social and Emotional Agents (No. CMU-CS-96-138). Doctoral thesis, Carnegie-Mellon
University, Pittsburg, PA, Department of Computer Science.
[99] Roediger, S. 2018. The effect of suspicion on emotional influence tactics in virtual human negotiation (Master's thesis,
University of Twente).
83
[100] Rosenfeld, A., Zuckerman, I., Segal-Halevi, E., Drein, O., & Kraus, S., 2014, May. NegoChat: a chat-based negotiation
agent. In Proceedings of the 2014 international conference on Autonomous agents and multi-agent systems (pp. 525-532).
International Foundation for Autonomous Agents and Multiagent Systems.
[101] Rosenfeld, A., Zuckerman, I., Segal-Halevi, E., Drein, O., & Kraus, S., 2016. NegoChat-A: a chat-based negotiation agent
with bounded rationality. Autonomous Agents and Multi-Agent Systems, 30(1), 60-81.
[102] Small, D. A., Gelfand, M., Babcock, L., & Gettman, H. 2007. Who goes to the bargaining table? The influence of gender
and framing on the initiation of negotiation. Journal of personality and social psychology, 93(4), 600.
[103] Stinson, L. and Ickes, W., 1992. Empathic accuracy in the interactions of male friends versus male strangers. Journal of
personality and social psychology, 62(5), p.787.
[104] Syna Desivilya, H., & Yagil, D. 2005, January. The role of emotions in conflict management: The case of work teams.
In IACM 17th Annual Conference Paper.
[105] Thompson, L. and Hastie, R., 1990. Social perception in negotiation. Organizational behavior and human decision
processes, 47(1), pp.98-123.
[106] Thompson, L.L., 1991. Information exchange in negotiation. Journal of Experimental Social Psychology, 27(2), pp.161-
179.
[107] Trope, Y. and Liberman, N., 2010. Construal-level theory of psychological distance. Psychological review, 117(2), p.440.
[108] Trump, D.J. and Schwartz, T., 2009. Trump: The art of the deal. Ballantine Books.
[109] Truong, N.C., Baarslag, T., Ramchurn, G. and Tran-Thanh, L., 2016. Interactive scheduling of appliance usage in the
home.
[110] Tversky, A., & Kahneman, D. 1981. The framing of decisions and the psychology of choice. science, 211(4481), 453-458.
[111] Van Dijk, E., De Cremer, D. and Handgraaf, M.J., 2004. Social value orientations and the strategic use of fairness in
ultimatum bargaining. Journal of experimental social psychology, 40(6), pp.697-707.
[112] Van Kleef, G. A., De Dreu, C. K., & Manstead, A. S., 2004. The interpersonal effects of anger and happiness in
negotiations. Journal of personality and social psychology, 86(1), 57.
[113] Van Lange, P.A., 1999. The pursuit of joint outcomes and equality in outcomes: An integrative model of social value
orientation. Journal of personality and social psychology, 77(2), p.337.
[114] Van Lange, P.A., De Bruin, E., Otten, W. and Joireman, J.A., 1997. Development of prosocial, individualistic, and
competitive orientations: theory and preliminary evidence. Journal of personality and social psychology, 73(4), p.733.
[115] Van Lange, P.A., De Cremer, D., Van Dijk, E. and Van Vugt, M., 2007. Self-interest and beyond. Social psychology:
Handbook of basic principles, pp.540-561.
84
[116] Van Lange, P.A.M., De Dreu, C.K.W., Hewstone, M. and Stoebe, W., 2001. Social interaction: Cooperation and
competition. Introduction to Social Psychology (Vol. 3), pp.341-370.
[117] Van Vugt, M., Van Lange, P.A. and Meertens, R.M., 1996. Commuting by car or public transportation? A social dilemma
analysis of travel mode judgements. European Journal of Social Psychology, 26(3), pp.373-395.
[118] Weber, J.M., Kopelman, S. and Messick, D.M., 2004. A conceptual review of decision making in social dilemmas:
Applying a logic of appropriateness. Personality and Social Psychology Review, 8(3), pp.281-307.
[119] Wilson, D.S., Near, D. and Miller, R.R., 1996. Machiavellianism: a synthesis of the evolutionary and psychological
literatures. Psychological bulletin, 119(2), p.285.
85
Appendix A Rules of the Automated Negotiating Agents Competition (ANAC) 2018
2nd Annual Human-Agent Negotiation track, ANAC 2018
Motivation:
The Human-Agent Negotiation (HAN) competition is conducted to further explore the strategies,
nuances, and difficulties in creating realistic and efficient agents whose primary purpose is to
negotiate with humans. Previous work on human-agent negotiation has revealed the importance
of several features not commonly present in agent-agent negotiation, including retractable and
partial offers, emotion exchange, preference elicitation strategies, favors and ledgers behavior,
and myriad other topics. To understand these features and better create agents that use them, this
competition is designed to be a showcase for the newest work in the negotiating agent
community.
Please note the submission deadline: Tuesday, May 21st, 2018 (to the IAGO website).
We encourage you to submit early to test your code compilation.
Notification of finalists: Friday, June 1st, 2018.
Competition special session: Friday, July 13th – Thursday, July 19th @ IJCAI
Summary:
The HAN competition requires each author or group of authors to submit an agent that will be
tested in competition against human subjects in a study run through the University of Southern
California. Based on the performance of the agent, we will determine which agent strategies are
most effective. The subject pool will be taken from the standard populace available on
Amazon’s Mechanical Turk (MTurk) service, with normal filtration done for participants who
are ineligible (see “Subject Selection”, below).
All agents must be compliant with the IAGO (Interactive Arbitration Guide Online) framework
and API, which will allow standardization of the agents and efficient running of subjects on
MTurk. The most up-to-date version of IAGO is required, and will be available for download in
February. The version of IAGO used in the 1st Annual HAN is available for download now, for
researchers interested in getting a head-start.
Agents will all be run on the same set of multi-issue bargaining tasks, examples of which are
included below (“Domain Example”). Agents will be allowed to communicate on several
channels, including a set of natural language utterances that have been pre-selected and curated
by the ANAC committee. Other channels include the exchange of offers through visual cues and
natural language, preference statements, and emotional displays.
2018 Challenge:
This year’s challenge will focus on the idea of repeated, multi-issue negotiations. Human
participants will compete against each submitted agent in three back-to-back negotiations. In
each negotiation, the agent and the human participant will have the same preference and utility
structure*, although these preferences will be unknown to the opposing side at the beginning of
the three negotiations. In this way, agents that do a good job of learning their opponent's preferences
will likely outperform agents that do not.
More fundamentally, this approach allows us to capture which agent strategies successfully
account for human behavior. While an aggressive strategy in the first negotiation may prove
effective, it could have such a backfire effect by the last negotiation that it is not the right choice
overall. This year’s challenge will provide insight into these and more choices when designing
agents whose primary purpose is to negotiate with humans over time.
*Note: The preference structure will remain the same, but the exact preferences may not remain
constant. See “Additional Rules”.
IAGO API:
IAGO is a platform developed by Mell and Gratch at the University of Southern California. It
serves as a testbed for Human-Agent negotiation specifically. IAGO is a web-based servlet
hosting system that provides data collection and recording services, a human-usable HTML5 UI,
and an API for designing human-like agents.
A full documentation of IAGO is available from the download site, available at
http://people.ict.usc.edu/~mell/IAGO. A brief summary is included here.
All agents may use the API to send and receive Events. Events are interpreted by the UI in
preset ways that allow a human user to interpret an agent’s intentions. Human users also
generate Events that are passed to the agent developer to interpret as desired. Example Events
include:
SEND_MESSAGE – sends a natural language utterance to be displayed on the chat log. Agents
may send any language they wish, while human participants are restricted to sending from a
preset list of utterances.
SEND_OFFER – sends an encoded offer for the multi-issue bargaining task wherein all items are
assigned to either the human player, the agent, or an “undecided” section of the offer table. Also
sends a pre-coded, descriptive message when sent from the agent to the human player.
SEND_EXPRESSION – sends an emoticon (either Happy, Angry, Surprised, or Sad) to the chat
log, and also briefly shows the corresponding emotion on the visual avatar of the agent.
GAME_START – indicates the beginning of a new game (used to notify the agent that a new
round is starting).
All Events may be sent with a delay, to allow chaining of related events (for example, an agent
designer could send a message, then wait 2 seconds, then follow up with an offer and an
expression simultaneously). Flood protection will prevent messages from being sent too
frequently.
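To make the delayed-Event pattern concrete, the following is a minimal, self-contained sketch. It is illustrative only: the class name (EventChainSketch), the EventType enum, and the sendEvent helper are assumptions for exposition, and java.util.Timer merely stands in for the delay field that IAGO attaches to Events; consult the IAGO documentation for the actual API.

// A minimal sketch of chaining Events with delays. The Event names mirror
// the channels described above, but this enum and the Timer-based
// scheduling are illustrative assumptions, not the IAGO API itself.
import java.util.Timer;
import java.util.TimerTask;

public class EventChainSketch {

    enum EventType { GAME_START, SEND_MESSAGE, SEND_OFFER, SEND_EXPRESSION }

    private static final Timer TIMER = new Timer(true); // daemon scheduler

    // Dispatch an event after the given delay, mimicking IAGO's delay field.
    static void sendEvent(EventType type, String payload, long delayMs) {
        TIMER.schedule(new TimerTask() {
            @Override public void run() {
                System.out.println(type + ": " + payload);
            }
        }, delayMs);
    }

    public static void main(String[] args) throws InterruptedException {
        // Greet immediately, then two seconds later follow up with an offer
        // and an expression sent simultaneously (the chaining described above).
        sendEvent(EventType.SEND_MESSAGE, "Hi! Let's find a deal that works for us both.", 0);
        sendEvent(EventType.SEND_OFFER, "agent: 3 Oil; human: 3 Foodstuffs; rest undecided", 2000);
        sendEvent(EventType.SEND_EXPRESSION, "HAPPY", 2000);
        Thread.sleep(2500); // keep the daemon timer alive long enough to fire
    }
}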
Further detail may be found in the IAGO documentation.
Subject Selection and Data Treatment:
Competition subject participants will be selected from the MTurk subject pool. Subjects will be
adults in the US (18 years or older), and will assert that they are permanent residents of the US
(this will be verified with IP address tracking). Restriction to the US will be done to reduce
cross-cultural effects. Each agent will be tested against 25 participants. Participants will not be
re-used or matched against more than one agent.
Because MTurk participants will be US-restricted and natural language statements are used in
the utterance set of the competition, participants will also be asked to affirm that their first
language is English.
Basic demographic information of subjects will be collected, and the subjects may be asked a set
of verification questions/attention checks to ensure they comprehend and are engaged in the
negotiation. Subjects who fail these questions will be removed from the competition and the
resulting data set. If a subject is removed due to failing an attention check, an additional subject
will be run against that agent (to ensure a 25-person subject count). Subjects whose data is not
captured due to agent malfunctions will not be rerun (see “Testing”, below).
The data set collected by the competition organizers may be released to the organizing
committee, and all agent developers/researchers may request access to the data after the
organizing committee releases it. All submitted source code may be released and/or reused by
the organizing committee. Researchers not wishing to release source code should contact the
organizers directly.
Competition Winners and Evaluation:
A set of prizes will be awarded to the winners of the competition according to the highest score
achieved by the agent. The winner will be the researcher whose agent has achieved the highest
score at the end of the bargaining time. Non-significant differences will be tie-broken by the
highest score. All differences, including differences between the control agent and submitted
agents, will be reported.
Note that since there will be a series of negotiations, aggressive strategies may backfire.
Note: The 2018 Challenge does not have a Likeability Prize. Please see “2018 Challenge”
for how likeability indirectly affects outcome.
We maintain the opportunity to examine other categories for “bonus” prizes.
Testing:
As in the 2017 competition, we will provide automated compilation testing for all submissions.
We will also provide a guide for manual runtime testing with internal subjects (so that you may
test your own agents before submitting).
Note: Agents that experience malfunctions during runtime will have incomplete data excised,
and additional subjects will not be re-run.
Domain Example:
We present here an example domain. A domain similar to this will be used in the actual
competition.
This negotiation is a multi-issue bargaining task, which means both the agent and the human
participant will negotiate over the same set of items. Items may have differing values to each
side. A “full offer” means that all items are assigned to either the agent or the human participant.
A “partial offer” means that some items remain on the table and undecided. No offer is
considered binding until both players accept the same full offer.
A negotiation will only end when such a full offer is accepted, or the 8-minute time limit for the
negotiation has expired. Human participants will have a warning shown when there is only 1
minute remaining. Agents will have continuous access to the current negotiation time, accurate
within approximately 5 seconds. In the case that time expires with no full offer, each player will
take points equal to their respective Best Alternative To Negotiated Agreement (BATNA).
Note that the IAGO API allows agent designers to read the natural language descriptions of the
issues at runtime (e.g., “Issue1” can be understood to be something like “Lumber” or “Luxury
Cars”). However, agents will make use of domain-agnostic calls.
The following example challenge is a simple multi-issue bargaining task over resources between
two countries. There will be four distinct resources, with five items in each category. The items
will have images and descriptions identifying them as either “Oil”, “Iron”, “Foodstuffs” or
“Lumber”. The human player is assigned a value of 4 points to each Oil, 3 points to each Iron, 2
points to each Lumber, and 1 point to each Foodstuff. The agent player is assigned a value of 4
points to each Foodstuff, 3 points to each Lumber, 2 points to each Iron, and 1 point to each Oil.
Each player’s BATNA is equal to 4, i.e., the value of a single one of their highest item.
In the second negotiation, the values are swapped, but the structure is identical. The human
player is assigned a value of 2 points to each Oil, 1 point to each Iron, 4 points to each Lumber,
and 3 points to each Foodstuff. The agent player is assigned a value of 2 points to each
Foodstuff, 1 point to each Lumber, 4 points to each Iron, and 3 points to each Oil. The third
negotiation follows a similar pattern.
Note that in both domains, the human’s point values and BATNA will NOT be revealed to the
agent designers prior to the competition.
Natural Language Utterances:
Please see the IAGO website for the most up-to-date version of the following utterances. At the
time of this writing, these represent the complete list of utterances the human player may send to
the agent:
It is important that we both are happy with an agreement.
I gave a little here; you give a little next time.
We should try to split things evenly.
We should each get our most valuable item.
Accept this or there will be consequences.
Your offer sucks.
This is the last offer. Take it or leave it.
This is the very best offer possible.
I can’t go any lower than this.
We should try harder to find a deal that benefits us both.
There’s hardly any time left to negotiate!
Additional Rules:
Competition participants will be given a test scenario to practice with their agents. However, to
prevent hard-coding preference data into agents, a different set of utilities will be used for the
actual competition.
There will be no fewer than 3 distinct issues, and no more than 5. Each issue will have fewer
than 20 items.
Issue utilities will adhere to the following rule:
∑_{i=1}^{k} Agent_utility(i) * (num_levels(i) – 1) = ∑_{i=1}^{k} Human_utility(i) * (num_levels(i) – 1)
where k is the total number of issues.
Succinctly, this relationship means that the total for each side would be the same if that side
obtained every item.
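As a worked check, take the example domain above and read num_levels(i) – 1 as the number of items in issue i (here, 5 items for each of the k = 4 issues):

∑_{i=1}^{4} Agent_utility(i) * 5 = (4 + 3 + 2 + 1) * 5 = 50 = (1 + 2 + 3 + 4) * 5 = ∑_{i=1}^{4} Human_utility(i) * 5

The swapped values of the second and third negotiations preserve this 50-point total on each side, which is exactly the cross-negotiation constraint formalized next.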
Additionally, the total points and structure of utilities will not change between negotiations for
either side. Formally:
∑_{i=1}^{k} Utility_nego1(i) * (num_levels(i) – 1) = ∑_{i=1}^{k} Utility_nego2(i) * (num_levels(i) – 1) = ∑_{i=1}^{k} Utility_nego3(i) * (num_levels(i) – 1)
Researchers are highly encouraged to use any technique by which an agent can successfully
store information within the three negotiations for a given participant. This includes methods
by which the agent learns preferences in one negotiation and then passes that information back
to itself in future negotiations. IAGO’s agents persist across all three rounds, so data may be
stored within your agent Class, assuming it respects GAME_START events (a minimal sketch
follows). However, the intent of this competition is not to learn an entire domain; therefore,
data may not be stored across participants: all 25 participants are to be treated as fresh
instances against which the same agent will be run.
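The following sketch illustrates this persistence pattern. The class, field, and hook names are illustrative assumptions for exposition, not IAGO API specifics: learned information lives in instance fields, and only round-local state is reset when a GAME_START event arrives.

// Illustrative sketch of within-participant persistence. The agent object
// survives all three rounds, so instance fields carry forward; only
// per-round state is reset on GAME_START. Names are assumptions, not
// IAGO API specifics.
import java.util.HashMap;
import java.util.Map;

public class PersistentAgentSketch {

    // Survives across all three negotiations with the same participant:
    private final Map<String, Double> estimatedOpponentWeights = new HashMap<>();

    // Per-round state, rebuilt at the start of each negotiation:
    private int offersMadeThisRound;

    // Hypothetical hook for a GAME_START event.
    public void onGameStart() {
        offersMadeThisRound = 0; // reset round-local state...
        // ...but deliberately keep estimatedOpponentWeights, so information
        // learned in negotiation 1 can inform behavior in negotiations 2 and 3.
    }

    // Hypothetical hook for a preference statement from the human; blends
    // new evidence into the running estimate for that issue.
    public void onPreferenceStatement(String issue, double weight) {
        estimatedOpponentWeights.merge(issue, weight, (old, w) -> (old + w) / 2);
    }
}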
Note: Participation in this competition is done in good spirit and for the furtherance of
academic knowledge. Attempts to circumvent the rules described herein or as they are
described by the ANAC organizers will not qualify for prizes.
References:
Mell, J., Gratch, J. (2016) "IAGO: Interactive Arbitration Guide Online", In Proceedings of the
2016 International Conference on Autonomous Agents and Multiagent Systems. International
Foundation for Autonomous Agents and Multiagent Systems.
Mell, J., Gratch, J. (2017) "Grumpy & Pinocchio: Answering Human-Agent Negotiation
Questions through Realistic Agent Design", In Proceedings of the 2017 International Conference
on Autonomous Agents and Multiagent Systems. International Foundation for Autonomous
Agents and Multiagent Systems.
Appendix B Contributions Summary
Johnathan Mell
Ph.D. Candidate Summary
As of March 2020
Research Contributions, in summation:
• Development of web-based research platform for human-agent negotiation research
(IAGO)
• Advancement of the state-of-the-art in AI negotiating agents that interact with humans.
• Validation of the IAGO platform through empirical human studies that demonstrate real-
world human-human cognitive effects in a human-agent context, particularly:
o Use of “favors and ledgers” exchange strategy
o Use of anger and negative emotion to extract concessions
o Use of hard-bargaining techniques including offer anchoring
• Establishment of a community of research, rooted in the keystone of the Automated
Negotiating Agents Competition (ANAC), which has driven exploratory research that is
then further formalized herein. In particular, four years of service on the program
committee of ANAC, an eleven-year-running international negotiating-agent design event
that has been featured at AAMAS and IJCAI.
Invited Talks:
• Socially-Aware Negotiating Agents
Invited Colloquium Speaker, University of Central Florida (2020)
• Socially-Aware Negotiating Agents for Interdisciplinary CS Research
Invited Colloquium Speaker, Northwestern University (2020)
• Technology, Emotion, and Negotiation
Invited Lecture, Washington University in St. Louis (2019, 2020)
• Human-Agent Negotiation: Challenges and Case Studies
Invited Speaker, International Workshop on Automated Negotiation 2018, at IJCAI
• Human-Like Agents for Repeated, Social Negotiation
Invited Speaker, International Workshop on Conflict Resolution in Decision Making
2017, at IJCAI
Publications:
• Mell, J., Lucas, G., Mozgai, S., and Gratch, J. (2020 est) “The Effects of Experience on
Deception in Human-Agent Negotiation”, Under review, fast-tracked, Journal of
Artificial Intelligence Research.
• Mell, J., Beissinger, M., Gratch, J. (2020 est) “An Expert-Model & Machine Learning
Hybrid Approach to Predicting Human-Agent Negotiation Outcomes in Varied Data”,
Under review, invited submission, Journal of Multimodal User Interfaces.
• Aydoğan, R., Baarslag, T., Fujita, K., Mell, J., Gratch, J., De Jonge, D., Mohammad,
Y., Nakadai, S., Morinaga, S., Osawa, H., Aranha, H., and Jonker, C.M. (2020).
“Research Challenges for the Automated Negotiating Agents Competition (ANAC)
2019”, Accepted, under revision, Agreement Technologies Conference 2020.
• Mell, J., Gratch, J., Aydogan, R., Baarslag, T., and Jonker, C.M. (2019) "The Likeability-
Success Trade Off: Results of the 2nd Annual Human-Agent Automated Negotiating
Agents Competition", In Proceedings of the 8th International Conference on Affective
Computing & Intelligent Interaction.
• Lee, M., Lucas, G., Mell, J., Johnson, E., and Gratch, J., (2019) “Exploring the Mind of
Virtual Agents through Negotiations”, In Proceedings of the 2019 International
Conference on Intelligent Virtual Agents.
• Mell, J., Beissinger, M., Gratch, J. (2019) “An Expert-Model & Machine Learning
Hybrid Approach to Predicting Human-Agent Negotiation Outcomes”, In Proceedings of
the 2019 International Conference on Intelligent Virtual Agents.
• Mell, J., Gratch, J., Baarslag, T., Aydoğan, R., Jonker, C. (2018) "Results of the First
Annual Human-Agent League of the Automated Negotiating Agents Competition", In
Proceedings of the 2018 International Conference on Intelligent Virtual Agents.
• Mell, J., Lucas, G., Gratch, J. (2018) "Welcome to the Real World: How Agent Strategy
Increases Human Willingness to Deceive", In Proceedings of the 2018 International
Conference on Autonomous Agents and Multiagent Systems. International Foundation
for Autonomous Agents and Multiagent Systems.
o Finalist, Best Paper, Socially Interactive Agents Track
• Lucas, G., Krämer, N., Peters, C., Taesch, L., Mell, J., Gratch, J. (2018) "Effects of
Perceived Agency and Message Tone in Responding to a Virtual Personal Trainer", In
Proceedings of the 2018 International Conference on Intelligent Virtual Agents.
• Mell, J., Lucas, G., Mozgai, S., Boberg, J., Artstein, R., Gratch, J. (2018) "Towards a
Repeated Negotiating Agent that Treats People Individually: Cooperation, Social Value
Orientation, & Machiavellianism", In Proceedings of the 2018 International Conference
on Intelligent Virtual Agents.
• Mell, J., Gratch, J., Lucas, G. (2018) "The Effectiveness of Competitive Agent Strategy
in Human-Agent Negotiation." Orally Presented at the 2018 American Psychological
Association’s Technology, Mind, and Society conference.
• Mell, J., & Gratch, J. (2017). “Grumpy & Pinocchio: Answering Human-Agent
Negotiation Questions through Realistic Agent Design”. In Proceedings of the 16th
Conference on Autonomous Agents and Multiagent Systems (pp. 401-409). International
Foundation for Autonomous Agents and Multiagent Systems.
• Mell, J., Lucas, G. and Gratch, J. (2017). “Prestige Questions, Online Agents, and
Gender-Driven Differences in Disclosure”. In International Conference on Intelligent
Virtual Agents (pp. 273-282). Springer, Cham.
• Mell, J., & Gratch, J. (2016). “IAGO: Interactive Arbitration Guide Online.”
In Proceedings of the 2016 International Conference on Autonomous Agents &
Multiagent Systems (pp. 1510-1512). International Foundation for Autonomous Agents
and Multiagent Systems.
o Finalist, Best Demonstration Paper
• Mell, J., Lucas, G., Gratch, J., Rosenfeld, A. (2015). "Saying YES! The Cross-cultural
Complexities of Favors and Trust in Human-Agent Negotiation", In Proceedings of the
2015 International Conference on Affective Computing and Intelligent Interaction, 2015,
Xi'an, China.
• Mell, J., Lucas, G., Gratch, J. (2015). "An Effective Conversation Tactic for Creating
Value over Repeated Negotiations." In Proceedings of the 2015 International Conference
on Autonomous Agents and Multiagent Systems (pp. 1567-1576). International
Foundation for Autonomous Agents and Multiagent Systems.
• DeVault, D., Mell, J., Gratch, J. (2015). "Toward natural turn-taking in a virtual human
negotiation agent." In 2015 AAAI Spring Symposium Series.
Papers in preparation or under review are available upon request.
Appendix C IAGO Platform Technical Overview
Web-Based Interface
IAGO is built to be easily deployed, so human-agent negotiations may be conducted wherever
study requirements demand. As IAGO is completely web-based, remote subject platforms such as
Amazon’s Mechanical Turk may be used alongside traditional methods. No installation is required
for IAGO to function; the platform is delivered through HTML5, and JavaScript provides web socket
communication to the Java-based back-end. Because of these features, IAGO can be
used on a wide variety of common web browsers with little to no visual distortion. IAGO also supports
SSL encryption within browsers.
IAGO is also, notably, fully asynchronous (and multithreaded). Humans and agents may each
take any action at any time, and these actions are reflected in real time through the
interface. IAGO is capable of passing information to and from external programs and third-party web
calls (e.g., survey software such as Qualtrics).
Easy-to-use API
IAGO allows the rapid development of agents by simplifying communication to and from
agents through a set of pre-specified Events. A core set of Events is listed below:
1. SEND_EXPRESSION – allows for an agent to send or receive an expression from
a preset list defined by the game structure. These include prototypical expressions such as Anger,
Surprise, Happiness, and Sadness.
2. SEND_MESSAGE – allows for an agent to send or receive a string to be displayed
in the chat log. The game can expose the preset list of legal messages that the human can use,
simplifying the assumptions if natural language is not the focus of the competition or study.
• Notably, SEND_MESSAGE events are also subclassed, which allows data to be
automatically annotated for easy analysis. It also allows agents to be designed in
more context-agnostic ways. These events include but aren’t limited to:
• BATNA_REQUEST
• PREFERENCE_REQUEST_INFO
• FAVOR_RETURN
• Etc.
3. SEND_OFFER – the core of any negotiation, this allows an agent to send or
receive an offer (partial or full) that represents some ordering of the issues at hand. An offer will
move the items in the offer grid to the desired positions, while a message affects the chat log.
4. TIME – this allows agents to easily record periods of idleness from their partner,
waking after a specified amount of time to take another action (such as reiterating a message, or
relenting on a previous offer); a brief sketch of this pattern appears below.
There are additional Events, as well as more complex versions of Events, which are not
discussed here. For example, the SEND_MESSAGE Event allows for encoding of preferences or
BATNA in an easy-to-decode manner for agents. Release versions of IAGO include open-source
samples of agents, as well as full documentation of the API.
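As a brief sketch of the TIME pattern in item 4 (all names here are illustrative assumptions rather than actual IAGO API calls), an agent can track the partner's most recent action and re-engage once an idle threshold is crossed:

// Illustrative sketch of idle detection via TIME events. Track the last
// human action and, when a TIME event shows the partner has idled past a
// threshold, re-engage (e.g., reiterate a message or soften an offer).
// Class and hook names are assumptions, not IAGO API calls.
public class IdleWatcherSketch {

    private static final long IDLE_THRESHOLD_MS = 30_000;
    private long lastHumanActionMs = 0;

    // Hypothetical hook for any event generated by the human.
    public void onHumanEvent(long gameTimeMs) {
        lastHumanActionMs = gameTimeMs;
    }

    // Hypothetical hook for a periodic TIME event.
    public void onTimeEvent(long gameTimeMs) {
        if (gameTimeMs - lastHumanActionMs > IDLE_THRESHOLD_MS) {
            System.out.println("SEND_MESSAGE: Still there? Here is another idea...");
            lastHumanActionMs = gameTimeMs; // avoid re-prompting on every tick
        }
    }
}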
Logging Features and History
One critical feature of this system is its ability to provide large amounts of data about the history
of the negotiation. This is important both for the users of the system, who will often wish to verify offer
or message history, and also for researchers, who require a more detailed description of the logs of a
negotiation.
IAGO logs all events along with a timestamp and user identification number, and exports much
of its data to files/databases (depending on configuration). On the visual side, the full offer history is
always available to the user through a scrolling window, as are user preferences and past emotional
displays.
Note: For access to development software/documentation, please see https://myiago.com
Appendix D Proposal Satisfaction Overview
The original dissertation proposal approved in Winter 2018 listed several goals to complete the
dissertation. These proposal items are listed here, and the relevant dissertation sections are
linked.
• Conduct an exploratory study with data sourced from community submissions that
examines repeated negotiations in order to analyze a variety of initial approaches to Layer 2
& 3 agents.
o This study was conducted with the results of the ANAC 2018 challenge, and is
detailed in Chapter 6.1.
• Above and beyond these requirements, we also include summary results of the ANAC
2019 challenge, which emphasizes the effect of social behavior of agents on results.
o This study was conducted with the results of the ANAC 2019 challenge, and is
detailed in Chapter 6.1.
• Examine the importance of favors in the fully interactive environment of IAGO by
creating agents that use a mix of favor behaviors, optimizing for Layer 2 temporally-
aware agents.
o This study was designed and conducted internally, and is detailed in Chapter 6.2.
• Above and beyond these requirements, we integrate aspects of adaptive agents in IAGO
that can vary their behavior to align with human behavior, optimizing for Layer 3
individually-aware agents.
o Elements of this concept were integrated into Study #2.
Abstract

Increasingly, automated agents are interacting with humans in highly social interactions. Many of these interactions can be characterized as negotiation tasks. There has been broad research in negotiation techniques between humans (in business literatures, e.g.), as well as a great deal of work in creating optimal agents that negotiate with each other. However, the creation of effective socially-aware agents requires fundamental basic research on human-agent negotiation. Furthermore, this line of enquiry requires highly customizable, fully-interactive systems that are capable of enabling and implementing human-agent interaction. Previous attempts that rely on hypothetical situations or one-shot studies are insufficient in capturing truly social behavior.

This dissertation showcases my invention and development of the Interactive Arbitration Guide Online (IAGO) platform, which enables rigorous human-agent research. IAGO has been designed from the ground up to embody core principles gleaned from the rich body of research on how people actually negotiate. I demonstrate several examples of how IAGO has already yielded fundamental contributions towards our understanding of human-agent negotiation. I also demonstrate how IAGO has contributed to a community of practice by allowing researchers across the world to easily develop and investigate novel algorithms. Finally, I discuss future plans to use this framework to explore how humans and machines can establish enduring and profitable relationships through repeated negotiations.