THESPIAN: A DECISION-THEORETIC FRAMEWORK FOR INTERACTIVE
NARRATIVES
by
Mei Si
A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(COMPUTER SCIENCE)
May 2010
Copyright 2010 Mei Si
Dedication
This dissertation is dedicated to my parents.
Acknowledgements
It is an honor for me to thank the many people who have helped, encouraged and inspired
me in my thesis work.
Foremost, I would like to express my sincere gratitude to my advisor Stacy Marsella.
With his patience, enthusiasm, and broad knowledge, he has provided a tremendous
amount of encouragement, inspiration and sound advice through my Ph.D. study. This
thesis would not have been possible without his help and support over the years.
I would like to thank the rest of my thesis committee: Jonathan Gratch, Milind Tambe
and Lynn Miller, for their insightful comments and hard questions.
I would also like to thank Mark Riedl, Peter Vorderer and Bill Swartout for participating
in my qualifying exam committee and providing me with guidance.
Lewis Johnson deserves considerable thanks for his advice and support over the years.
I have greatly benefited from the friendship and association with many people dur-
ing my studies at the University of Southern California. I would like to thank David
Pynadath for fruitful discussions, advice, and encouragement. I am indebted to Jamison
Moore, who has helped me with editing this thesis. I am grateful to my colleagues and
friends at the Information Sciences Institute and the Institute for Creative Technologies
of the University of Southern California, for their encouragement, inspiration and good
company. This includes Brent Lance, Jina Lee, Teresa Dey, Erika Barragan-Nunez, Ning
Wang, Louis-Philippe Morency, Hannes Vilhjálmsson, Erin Shaw, Catherine LaBore and
Shumin Wu. Thank you all for making my Ph.D. study fun and enjoyable.
Last but not least, I would like to thank my parents, who gave birth to me,
raised me, educated me and encouraged me to pursue a meaningful life.
Table of Contents
Dedication ii
Acknowledgements iii
List Of Tables ix
List Of Figures xi
List Of Algorithms xiii
Abstract xiv
Chapter 1 Introduction 1
1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Statement of Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.1 Desiderata of Interactive Narrative . . . . . . . . . . . . . . . . . . 4
1.2.1.1 Coherence of Narrative . . . . . . . . . . . . . . . . . . . 4
1.2.1.2 Cognitive or Affective Effects . . . . . . . . . . . . . . . . 6
1.2.2 Desiderata of Authoring Framework . . . . . . . . . . . . . . . . . 6
1.2.2.1 Create Rich Characters . . . . . . . . . . . . . . . . . . . 7
1.2.2.2 Balance the Design of Characters and the Design of Events 8
1.2.2.3 Exert Directorial Control . . . . . . . . . . . . . . . . . . 10
1.2.2.4 Facilitate Authoring . . . . . . . . . . . . . . . . . . . . . 11
1.3 Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.4 Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.5 Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.6 Road Map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Chapter 2 Overview of the Thespian Framework 17
2.1 Two-layer Runtime System . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.2 Authoring Procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.3 Example Domains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.3.1 Tactical Language Training System (TLTS) . . . . . . . . . . . . . 21
2.3.1.1 TLTS Story I . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.3.1.2 TLTS Story II . . . . . . . . . . . . . . . . . . . . . . . . 23
2.3.2 A Grimms' Fairy Tale – Little Red Riding Hood . . . . . . . . . . 24
2.3.3 A Grimms' Fairy Tale – The Fisherman and His Wife . . . . . . . 24
Chapter 3 Related Work 26
3.1 Create Coherent Narrative and Balance the Design of Characters and Events 27
3.1.1 The State of the Art . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.1.2 In the Thespian Framework . . . . . . . . . . . . . . . . . . . . . . 28
3.2 Exert Directorial Control . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.2.1 The State of the Art . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.2.2 In the Thespian Framework . . . . . . . . . . . . . . . . . . . . . . 31
3.3 Facilitate Authoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.3.1 The State of the Art . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.3.2 In the Thespian Framework . . . . . . . . . . . . . . . . . . . . . . 33
3.4 Model Social Normative Behaviors . . . . . . . . . . . . . . . . . . . . . . 34
3.4.1 The State of the Art . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.4.2 In the Thespian Framework . . . . . . . . . . . . . . . . . . . . . . 35
3.5 Model Emotion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.5.1 The State of the Art . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.5.2 In the Thespian Framework . . . . . . . . . . . . . . . . . . . . . . 37
3.6 Create the Experience of “Presence” . . . . . . . . . . . . . . . . . . . 39
Chapter 4 Character Level 42
4.1 Agent Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
4.1.1 State . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.1.2 Actions/Dialogue Acts . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.1.3 Dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.1.4 Goals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.1.5 Beliefs (Theory of Mind). . . . . . . . . . . . . . . . . . . . . . . . 44
4.1.6 Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.1.7 Belief Revision and Decision-making Processes . . . . . . . . . . . 46
4.1.7.1 Belief Revision . . . . . . . . . . . . . . . . . . . . . . . . 47
4.1.7.2 Decision-Making Process . . . . . . . . . . . . . . . . . . 48
4.2 Model Social Normative Behaviors . . . . . . . . . . . . . . . . . . . . . . 50
4.2.1 Adjacency Pairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
4.2.2 Turn Taking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.2.3 Conversational Flow . . . . . . . . . . . . . . . . . . . . . . . . . . 53
4.2.4 Affinity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
4.2.5 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
4.2.5.1 Example I . . . . . . . . . . . . . . . . . . . . . . . . . . 56
4.2.5.2 Example II . . . . . . . . . . . . . . . . . . . . . . . . . . 58
4.3 Model Emotion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
4.3.1 Appraisal Process . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
4.3.2 Appraisal Dimensions . . . . . . . . . . . . . . . . . . . . . . . . . 61
4.3.2.1 Motivational Relevance & Motivational Congruence or Incongruence . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
4.3.2.2 Accountability . . . . . . . . . . . . . . . . . . . . . . . . 62
4.3.2.3 Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
4.3.2.4 Novelty . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
4.3.3 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
4.3.3.1 Example I: Small Talk . . . . . . . . . . . . . . . . . . . . 67
4.3.3.2 Example II: Firing-squad . . . . . . . . . . . . . . . . . . 68
Chapter 5 Directorial Control 70
5.1 Design Challenge and Approach . . . . . . . . . . . . . . . . . . . . . . . . 71
5.2 Directorial Goals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
5.3 Director Agent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
5.3.1 Overall Workflow . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
5.3.2 Fit Characters to Objectives . . . . . . . . . . . . . . . . . . . . . 79
5.3.3 Adjust Characters' Beliefs . . . . . . . . . . . . . . . . . . . . . . . 81
5.4 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Chapter 6 Authoring Procedures 85
6.1 Fit Characters to Story Paths . . . . . . . . . . . . . . . . . . . . . . . . . 86
6.2 Simulate Potential Users for Testing Interactive Narrative . . . . . . . . . 90
6.2.1 Model Potential Users . . . . . . . . . . . . . . . . . . . . . . . . . 91
6.2.2 Simulate Interactions with Potential Users . . . . . . . . . . . . . . 91
6.2.3 Filter Generated Story Paths . . . . . . . . . . . . . . . . . . . . . 94
6.2.4 Authoring Example . . . . . . . . . . . . . . . . . . . . . . . . . . 96
6.3 Reuse Authored Materials . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
6.3.1 Example I: Create a New Character from Existing Characters . . . 100
6.3.2 Example II: Move a Character to a New Story . . . . . . . . . . . 100
6.3.3 Example III: Move a Character to a New Story with Some of the
Character's Goals Dropped . . . . . . . . . . . . . . . . . . . . . . 101
Chapter 7 Evaluation 102
7.1 Importance of Well-motivated Characters . . . . . . . . . . . . . . . . . . 103
7.1.1 Experimental Design . . . . . . . . . . . . . . . . . . . . . . . . . . 104
7.1.2 Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
7.1.3 Materials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
7.1.4 Hypotheses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
7.1.5 Results and Discussion . . . . . . . . . . . . . . . . . . . . . . . . . 110
7.1.5.1 Subjects' Experiences of the Story . . . . . . . . . . . . . 111
7.1.5.2 Subjects' Comprehension of the Story . . . . . . . . . . . 112
7.2 Effectiveness of Directorial Control . . . . . . . . . . . . . . . . . . . . . 118
7.2.1 Simulate the User . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
7.2.2 Comparison between With and Without Director Agent . . . . . . 119
7.2.2.1 Experimental Design . . . . . . . . . . . . . . . . . . . . 119
7.2.2.2 Results and Discussion . . . . . . . . . . . . . . . . . . . 122
7.2.3 Varying Directorial Goals . . . . . . . . . . . . . . . . . . . . . . . 127
7.2.3.1 Variation I . . . . . . . . . . . . . . . . . . . . . . . . . . 127
7.2.3.2 Variation II . . . . . . . . . . . . . . . . . . . . . . . . . . 128
Chapter 8 Open Challenges 129
8.1 Computational Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . 129
8.2 Future Directions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
8.2.1 Model Emotion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
8.2.2 Model Actions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
8.2.3 Model Characters' Motivations . . . . . . . . . . . . . . . . . . . . 132
8.2.4 Model Decision-Making . . . . . . . . . . . . . . . . . . . . . . . . 133
8.2.5 Directorial Control . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
Chapter 9 Conclusion 135
References 138
Appendix
Materials for Evaluation I . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
List Of Tables
2.1 Sample Dialogue in TLTS Story I . . . . . . . . . . . . . . . . . . . . . . . 23
4.1 Adjacency Pairs and Corresponding Obligations. . . . . . . . . . . . . . . 51
4.2 Small Talk Between Two Persons . . . . . . . . . . . . . . . . . . . . . . . 68
5.1 Syntax for Specifying Directorial Goals . . . . . . . . . . . . . . . . . . . . 74
5.2 Directorial Goals Example I . . . . . . . . . . . . . . . . . . . . . . . . . . 75
5.3 Directorial Goals Example II . . . . . . . . . . . . . . . . . . . . . . . . . 75
5.4 Objectives if Directorial Goals are Violated . . . . . . . . . . . . . . . . . 78
6.1 An Example of the User Agent's Goals . . . . . . . . . . . . . . . . . . . . 92
6.2 Results of Simulating Users with Alternative Mental Models . . . . . . . . 97
6.3 Results of Simulating Prototypical Users . . . . . . . . . . . . . . . . . . . 97
7.1 Conditions in Evaluation I . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
7.2 Directorial Goals I . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
7.3 Directorial Goals II . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
7.4 Number of Subjects in Each Condition . . . . . . . . . . . . . . . . . . . . 110
7.5 Subjects' Choices of the Wolf's Experience in Evaluation I . . . . . . . . . 111
7.6 Directorial Goals for Evaluation II . . . . . . . . . . . . . . . . . . . . . . 119
7.7 Conditions in Evaluation II . . . . . . . . . . . . . . . . . . . . . . . . . . 121
7.8 Directorial Goals Variation I . . . . . . . . . . . . . . . . . . . . . . . . . 127
7.9 Directorial Goals Variation II . . . . . . . . . . . . . . . . . . . . . . . . . 128
List Of Figures
1.1 Two-layer System for Interactive Narrative . . . . . . . . . . . . . . . . . 12
2.1 Tactical Language Training System . . . . . . . . . . . . . . . . . . . . . . 23
4.1 Theory of Mind . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
4.2 Belief Revision and Decision-making Processes . . . . . . . . . . . . . . . 47
4.3 Red's Lookahead Process . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
4.4 Accountability Reasoning . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
6.1 Thespian's Authoring Cycle . . . . . . . . . . . . . . . . . . . . . . . . . . 86
7.1 Curve I . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
7.2 Curve II . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
7.3 Subjects' Answers to Question 1 . . . . . . . . . . . . . . . . . . . . . . . 113
7.4 Subjects' Answers to Question 3 . . . . . . . . . . . . . . . . . . . . . . . 113
7.5 Subjects' Answers to Question 7 . . . . . . . . . . . . . . . . . . . . . . . 114
7.6 Subjects' Answers to Question 2 . . . . . . . . . . . . . . . . . . . . . . . 114
7.7 Subjects' Answers to Question 5 . . . . . . . . . . . . . . . . . . . . . . . 115
7.8 Subjects' Answers to Question 4 . . . . . . . . . . . . . . . . . . . . . . . 116
7.9 Subjects' Answers to Question 6 . . . . . . . . . . . . . . . . . . . . . . . 117
7.10 Subjects' Answers to Question 8 . . . . . . . . . . . . . . . . . . . . . . . 117
7.11 Success Rates of Directorial Control . . . . . . . . . . . . . . . . . . . . . 123
7.12 Delay in Achieving Directorial Goals without Director Agent . . . . . . . 124
7.13 Delay in Achieving Directorial Goals with Director Agent . . . . . . . . . 124
7.14 Success Rates When the Wolf has Closer Social Distance with Others . . 125
List of Algorithms
1 Dynamics for complete adjacency pair norm . . . . . . . . . . . . 52
2 Dynamics for initiate adjacency pair norm . . . . . . . . . . . . . 53
3 Dynamics for keep turn norm . . . . . . . . . . . . . . . . . . . . . . 53
4 Dynamics for conversational flow norm . . . . . . . . . . . . . . . 54
5 Motivational Relevance & Motivational Congruence . . . . . . 62
6 Is Coerced(actor, pact) . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
7 Is Coercer(p coercer, actor, p coercer pact, actor pact) . . . . . . . . 64
8 Control(preUtility) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
9 Directorial Control() . . . . . . . . . . . . . . . . . . . . . . . . . . 77
10 Adjust Config(objectives) . . . . . . . . . . . . . . . . . . . . . . . . . 79
11 Fit To Objectives(objectives, history, desired paths) . . . . . . . . . 80
12 Find Explanation(beliefChange) . . . . . . . . . . . . . . . . . . . . . 82
13 Fit Sequence(S₀, char name, seq, fixedgoals) . . . . . . . . . . . . . 87
14 Generate All Paths(S₀, user, fixedgoals, remainingSteps, existPath) 92
15 Generate All Paths*(S₀, user, remainingSteps, existPath, n, m) . 120
Abstract
With the rapid development of computer technology, a new form of media – interactive
narrative – has received increasing attention. Interactive narrative allows the user to par-
ticipate in a dynamically unfolding story, by playing a character or by exerting directorial
control. By allowing the user to interact, interactive narrative provides a richer and poten-
tially more engaging experience than traditional narrative. Moreover, because different
choices of the user lead to different paths through the story, the author of interactive
narrative can tailor the experience for the user or user groups.
The design of interactive narrative faces many challenges. The central challenge comes
from the integration of interactivity into the narrative. Instead of presenting one well-
crafted static story, the author has to design the characters' behaviors along many paths
through the story in response to possible user interactions. The amount of work can
easily overwhelm an author.
In this thesis, I present a multi-agent approach to modeling and simulating interactive
narrative, implemented as the Thespian framework. Thespian utilizes a two-layer runtime
system to drive the characters' interactions with the user. At the base is a multi-agent
system comprised of goal-oriented autonomous agents that realize the characters in the
story. Above this layer is a proactive director agent that continuously monitors the
progress of the story and directs the characters toward the author's plot design goals. In
addition to the two-layer runtime system, Thespian contains offline authoring procedures
to facilitate the author in configuring the characters.
The evaluation of the Thespian framework has been performed at different levels. Var-
ious components within Thespian have been individually evaluated or validated. In ad-
dition, Thespian's generality in practice for authoring a range of stories has been demon-
strated through its many applications in different domains.
Chapter 1
Introduction
Narrative is a central part of the human experience. Its power to shape people's minds
and affect people's behavior has been recognized throughout recorded history. For exam-
ple, Plato stated in Republic that “they [nurses and mothers] will shape their children's
souls with stories much more than they will shape their bodies by handling them.” [80]
Aristotle, in Poetics, pointed out that the origin of narrative can be traced back to im-
itation, which is “lying deep in our nature” [3]. A person “through imitation learns his
earliest lessons; and no less universal is the pleasure felt in things imitated.” Contem-
porary researchers and philosophers have also argued for the importance of narrative in
understanding ourselves and societies. Alasdair MacIntyre states that “there is no way to
give us an understanding of any society, including our own, except through the stock of
stories which constitute its initial dramatic resources.” [57] Bruno Bettelheim argued that
fairy tales are the crucial means by which children learn about the world [11]. Finally,
movies, television, plays and novels have become inseparable parts of people's lives.
Despite the power of traditional narrative to entertain and teach, the viewer does not
have any direct impact or control on how the story unfolds. The relation of the audience
or reader to the narrative is a passive one, observing the story through a window provided
by the author.
With the rapid development of computer technology, a new form of media – interactive
narrative – has received increasing attention [64, 77, 101, 118, 86, 48, 87, 21, 2, 58, 56, 119,
38, 79, 17]. Interactive narrative allows people to participate actively in a dynamically
unfolding story, by playing a character or by exerting directorial control over events in
the story. The user's choices affect the unfolding of the story.
The support for user interactivity distinguishes interactive narrative from other nar-
rative forms. By allowing the user to interact, the experience is richer and poten-
tially more engaging. Moreover, interactivity and the user's experience of agency can
promote intrinsic motivation in learning [78], and support learning in context and re-
play [68, 69, 31, 4, 129, 6]. Because the di®erent choices of the user result in di®erent
stories, the author of interactive narratives can also tailor the experience for individ-
ual users or user groups, based on the choices the users make in the story. Therefore,
interactive narrative can potentially be a more effective medium than traditional narrative.
On the other hand, user interaction dramatically complicates the design process.
Instead of presenting one well-crafted story, the author has to design the characters'
behaviors along many paths through the story in response to possible user interactions.
The amount of work can easily overwhelm a human author.
1.1 Motivation
While the crafting of traditional narratives usually does not require facilitation from
a sophisticated authoring framework, the creation of interactive narratives does. As
an example, the following is cited from an ad posted by a game company in 2007 to
recruit writers for its RPG games:
“BioWare games require a great deal of writing. Storylines, world building, characters,
journals – and about a bazillion lines of dialogue. What makes my job harder is not only
does all of this writing have to be high quality - something not always demanded in our
industry - it can only be done by writers who understand the complexities of interactive
fiction. Take the average screenwriter who doesn't play RPGs, place him in front of the
writing tool from Neverwinter Nights and you'll get a linear story with a complete lack
of Player agency and no interesting decisions. Just trying to explain the concept behind
writing without a protagonist to someone who has never even been a dungeon master can
be like showing card tricks to a dog.”
Unlike traditional narrative, in which only a single story line is crafted by the writer
and presented to the reader or viewer, interactive narrative allows the user to interact with
the characters in the story. Because of the aim of supporting rich interactivity, crafting
an interactive narrative experience is extremely difficult for even well-trained writers.
interaction can lead to many alternative paths through the story. Authoring enough
contingencies to create a richly interactive environment for an engaging experience is
often intractable to human authors [88]. Consider a brief interactive story consisting of 5
rounds of interaction between a user and virtual characters; if at each turn the user has
10 reasonable moves, there are 10⁵ possible user action sequences or alternative paths
through the story. Within each story path, the author typically wants the characters
to act “within character”, i.e. consistent with the characters' motivations. Further, the
author often wants to achieve certain consistencies across the different story paths. For
example, the author may want to create related story structures or dramatic moments
for all the users. In reality, it is not feasible to manually try out each possible story path,
let alone design them all. As a result, authors often have to sacrifice
interactivity for the control of the story and the characters. The goal of supporting rich
user interactivity in interactive narratives thus raises the need for an authoring framework
that can help automate the design process.
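To make the scale concrete, a purely illustrative sketch of the path count follows; the branching factor and round count are simply the figures assumed in the example above, not properties of any particular system.

    # Purely illustrative: the size of the authoring problem under uniform
    # branching, using the figures from the example above (10 reasonable
    # user moves per turn, 5 rounds of interaction).
    def count_story_paths(moves_per_turn: int, rounds: int) -> int:
        """Number of distinct user action sequences through the story."""
        return moves_per_turn ** rounds

    print(count_story_paths(10, 5))   # 100000 alternative paths
    print(count_story_paths(10, 10))  # 10000000000 -- far beyond hand-authoring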
1.2 Statement of Problem
This section analyzes the desiderata of interactive narratives and in turn the desiderata
of an authoring framework for interactive narratives.
1.2.1 Desiderata of Interactive Narrative
There are two basic desiderata of interactive narratives, the coherence of narrative and
being able to create the author's desired cognitive or a®ective e®ects on the user.
1.2.1.1 Coherence of Narrative
The basic requirement for any media that involves narrative is the coherence of the
narrative. In order for the author to create their intended effects, the user has to be able
to understand their experience, i.e. what happened and why it happened. The coherence
of narrative refers to the sequence of events in the story having meaningful connections in
both temporal and causal ways [75, 15]. The coherence of narrative has been shown as a
crucial factor for ensuring people understand their narrative experiences [127, 19, 71, 73].
In the context of social interaction, a story being coherent usually requires the char-
acters in it to be well-motivated and socially aware. The behaviors of such characters are
similar to those of humans, and thus allow the user to understand them in a similar way
to how he/she may understand other people in the real world.
The perils of not achieving coherence in narratives have been recognized as far back
as Aristotle. For example, “deus ex machina” refers to the device used in the production
of ancient Greek tragedies, especially in the works of Euripides. A crane would lower an
actor playing a god on the stage, who would take some omnipotent action that would
move the plot in the direction desired by the dramatist. In the play Medea, deus ex
machina is used to move Medea to safety after she has committed murder, and in the
play Alcestis, Alcestis is brought back to life. In Poetics, Aristotle criticized the use of
this device, and argued that the characters ought to act based on their types and the
resolution of plots must arise internally:
“In the characters too, exactly as in the structure of incidents, [the writer] ought
always to seek what is either necessary or probable, so that it is either necessary or probable
that a person of such-and-such a sort say or do things of the same sort, and it is either
necessary or probable that this [event] happen after that one.”
1.2.1.2 Cognitive or Affective Effects
Coherent narrative by itself does not necessarily lead to a dramatic or inspiring experi-
ence, because it merely requires the causal and temporal relationship among events to
be understandable to the reader/user. The author of interactive narratives often wants
to create cognitive or affective effects in the user through the interactive experience. For
example, many interactive narratives for pedagogy are designed to help the user practice
social and cognitive skills, such as social problem solving [77], negotiation [122] and coor-
dination skills [118] when the user is in certain affective states, e.g. highly stressed. More
generally, interactive narratives have often depended on triggering the user's emotional
responses for keeping the user engaged and setting the environment for the user to ex-
perience pedagogy or entertainment. For example, in FearNot! [77], which is targeted at
helping the learner deal with school bullying, empathic responses for the victim are trig-
gered by letting the user talk to a child character who is the victim of a school bully. In
both Façade [66] and Mimesis [87], the systems have a major goal of creating a dramatic
tension arc, which is typically characterized as a slow increase in tension followed by a
rapid release. This desideratum is not unique to interactive narrative. Narrative forms
in general seek to create cognitive or affective effects. Tan, for example, describes films
as “emotion machines” [120].
1.2.2 Desiderata of Authoring Framework
The basic design challenge for an authoring framework is how to facilitate the creation of
an interactive narrative experience that is both coherent and achieves the author's desired
cognitive or affective effects.
Interactive narrative is closely related to traditional non-interactive forms of narra-
tive. Narrative has two parts: story and discourse. Story is the content of the narrative,
including the chain of events, the characters and the settings of the environments. Dis-
course is the means by which the content is communicated to the reader or viewer [23]. In
interactive narratives, discourse management is often replaced by first-person experience
because the user has control over the character's behaviors and can only see and feel
as the character in the story. Therefore, the author's effort has largely focused on the
design of the story. Any effects the author intends to reach are created by the design of
the characters and the design of the events. I discuss in turn the challenges faced by an
authoring framework for facilitating the authors in designing characters and events.
1.2.2.1 Create Rich Characters
In discussing traditional narratives, Egri has suggested a central, key role for rich, fully
fleshed out characters in narrative design [27]. He argued that such characters are critical
not only to narrative but also as a key aspect of the process of creating narrative – that rich
characters achieve autonomy in the writer's mind and can thereby serve as inspiration to
the author.
The richness of character development is evidenced in the work of Shakespeare. The
play Othello, for example, gives us a sense of the richness of character that an author may
seek. In this play, Iago hates Othello and seeks his downfall. He hatches a plan to plant
evidence that will lead Othello to the false inference that his wife has cheated on him.
Iago believes that this false inference will lead Othello to kill his wife and consequently
destroy himself. Here the richness of character can be observed. The characters have
beliefs about others, including how others think – they possess what in psychology and
philosophy is called a “Theory of Mind” [126]. They have motivations and emotions.
Finally, they understand the social structures and roles of which they are a part, such as
marriage and spouse, the social norms associated with those roles and the consequences
of violating them.
Similarly, it is desirable to have such rich characters interacting with the user in
interactive narratives. These capacities in characters – being well-motivated, having a Theory
of Mind, understanding social norms and having emotion – are not only important for creating
a dramatic effect, but also necessary for the basic desideratum of coherent narrative.
They enable the characters to behave in a human-like way, and thus allow the user to understand
them in a similar way as how he/she understands people in the real world. To facilitate
the author in designing interactive narratives, an authoring framework should provide
support for building these capacities into the characters.
1.2.2.2 Balance the Design of Characters and the Design of Events
As suggested even in the earliest writing on traditional narrative, stories are a balance
of character and plot. Often, one is emphasized more than the other. Much of the
work in interactive narrative has taken an approach that emphasizes plots more (see
Section 3.1 for more details). This plot-centric view also can be seen in traditional
views of narrative that also tended to emphasize plot. For example, Aristotilian view of
tragedy is sometimes described in terms of standardized plots: the hero, his downfall and
sometimes the perseverance leading to recovery. A more extreme example is Vladimir
Propp's study on the structural patterns of traditional Russian fairy tales [81]. The stories
often have strict patterns and the characters in the stories do not necessarily possess their
own motivations and personalities; they sometimes only exist for supporting the plots.
However, overemphasizing the design of plots will result in having over-simplified, or
broken (inconsistent) characters. Over-simplified characters reduce the story's engaging
power. More importantly, broken characters can break the coherence of narrative. Con-
versely, even with very rich characters, plot is still important. Plot events are not just
physical events, such as the murder of Othello's wife, but are tied to the characters, their
internal beliefs and emotions [27], such as the dawn of suspicion in Othello. If all the
consideration is given to crafting characters, the overall story may lose its structure and
become a trivial story in which nothing happens [23].
In general, a balance has to be reached between the design of characters and the
design of events. In fact, their designs serve as constraints on each other – the characters'
actions create the events in the story. Therefore, the characters need to be designed to
support the plot design of the story and the plots should be designed to fit the characters.
The author of interactive narratives, just like traditional writers, may have a preference
for concentrating on the design of the characters or the events. The designs of both
components need to be satisfactory for creating a good interactive narrative. Therefore,
the authoring framework should help ensure a good design of the other component, or at
least, inform the author of potential violations.
1.2.2.3 Exert Directorial Control
The support of user interactivity distinguishes interactive narrative from other narrative
forms. Despite the benefit of allowing the author to create a rich, engaging and user-
centered experience, user interaction dramatically complicates the design process. It not
only creates many paths through the story, but also is often fundamentally in conflict
with the control of story development and therefore the creation of cognitive or affective
effects.
As the author cedes partial control of the story to the user, it is much harder to control
the development of the story so that it achieves the author's desired effects [70]. There is
no reason to expect that the user is a good author and will make choices that lead to good,
or the author's desired, narrative experiences. Indeed, there are reasons to suspect that
good narrative choices would be avoided by the user because the author and the user may
have different goals. For example, at the heart of traditional linear narrative and drama are
the conflicts and tensions imposed on the characters and their resolution over the course
of the drama. These events are imposed on the reader or viewer who is manipulated
by them. However, the user in an interactive narrative has control over how the story
unfolds and may actively seek to avoid such con°ict and tension.
To control the development of the story in the face of user interaction, automated
directorial control is often needed. Directorial control continuously fine-tunes the virtual
characters during the interaction so that they behave both consistently with their motivations
and in a way that leads to the author's desired effects. For example, in the Little Red
Riding Hood story, if the wolf { played by the user { keeps roaming around and does
not talk to other characters, directorial control may seek to speed up the progress of the
story by having Red start a conversation with the wolf.
Directorial control enables the author to gain better control over the interaction and
therefore better predictability of the user's experience. Moreover, it allows the author to
use the same model of the story world for creating different interactive experiences (see
Sections 5.2 and 7.2 for examples).
1.2.2.4 Facilitate Authoring
In addition to the design challenges which follow from the perspective of creating the
user's experience, easing the authoring process is a major concern. The design of interactive
narrative, just as that of traditional non-interactive narrative, is a process of artistic cre-
ation. Ideally the author should be kept free from onerous design tasks, and be allowed
to concentrate on more creative processes.
The time-consuming and complex hand authoring process can be a serious design
bottleneck. A challenge for the designer of the authoring framework is how to hide the
technical details in the authoring process and make the authoring process available to
even non-technical authors. Further, interactive narrative systems automatically generate
the virtual characters' behaviors during the interaction based on information provided by
the author. With any generative system, the facilitation of its design involves evaluation
processes – to test whether the interactive narrative system truly works to the author's
design expectations. This process is especially important when the interactive narrative
system is designed for pedagogical purposes because inappropriate designs may create
negative, undesired teaching e®ects.
Figure 1.1: Two-layer System for Interactive Narrative
1.3 Approach
In this thesis, I present a two-layer approach to simulating interactive narratives. This
approach seeks to ensure the coherence of narrative, and supports both creating rich charac-
ters and managing the development of the story during the interaction to realize the
author's plot design goals. This approach is implemented using a multi-agent framework
– Thespian [101, 102, 103, 104, 100, 105, 106].
Egri has strongly argued for the importance of characters in traditional narratives [27].
His view of narrative – of rich, well-motivated, autonomous characters as a creative spark
to the author, nevertheless constrained by the author's goals for the plot – serves as
inspiration to the approach taken in this work. Specifically, Thespian utilizes autonomous
agents for well-motivated and socially aware characters, and multi-agent coordination to
realize story plots.
At the base is a multi-agent system comprised of goal-oriented autonomous agents
that realize the characters of the story. A key aspect of this layer is the richness of the
agent design that provides motivations, emotions, Theory of Mind and social norms (see
Chapter 4 for details). The agents in this layer autonomously interact with each other
and the character controlled by the user, thereby generating the story.
Above this layer is a director agent that proactively directs the characters for realizing
the author's plot design, which can be seen as group goals for the multi-agent system.
A key aspect of this layer is that the director agent has access to models of the agents
and the user. It uses these models to assess whether plot goals are achieved as well
as redirect the characters. Another key aspect is that the director agent takes a least
commitment approach to coordinating the agent-characters. During the interaction, the
director agent maintains a space of character configurations consistent with the characters'
prior behavior. Each of these configurations is equally valid in the sense that they will
all drive the character to act in the same way up to the current point of the interaction.
When the director agent foresees a violation of the author's plot design goals, it constrains
that space so that the rest of the configurations will drive the agent to act in a way that
eliminates the violation. Thus, from the user's perspective the characters are always
well-motivated and the user can interact freely with them.
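A minimal sketch of this least-commitment loop may help fix the idea. All names and data structures below are hypothetical simplifications, not Thespian's actual API; the point is only the division of labor between the two layers.

    # Hypothetical sketch of the two-layer runtime. A character carries a
    # space of goal-weight configurations, all consistent with its behavior
    # so far; the director prunes that space, never rewrites history.
    def expected_reward(action, goal_weights):
        """Toy reward model: weighted sum of an action's effects on each goal."""
        return sum(goal_weights.get(goal, 0.0) * value
                   for goal, value in action["effects"].items())

    def best_action(character, goal_weights):
        return max(character["actions"],
                   key=lambda a: expected_reward(a, goal_weights))

    def step(character, plot_goals, history):
        # Director layer: keep only configurations whose projected behavior
        # satisfies every plot design goal (least-commitment pruning).
        viable = [w for w in character["configurations"]
                  if all(goal(best_action(character, w), history)
                         for goal in plot_goals)]
        if viable:
            character["configurations"] = viable
        # Character layer: act autonomously under a surviving configuration;
        # from the user's perspective the character stays well-motivated.
        action = best_action(character, character["configurations"][0])
        history.append((character["name"], action["name"]))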
In addition to the two-layer runtime system, Thespian contains authoring processes
to facilitate the author in designing interactive narratives. These procedures allow the
author to configure the characters in a similar way as writing traditional linear narratives.
They can also simulate a variety of users interacting with the interactive narrative system
and test whether the system's responses are consistent with the author's expectations.
An overview of the Thespian framework is provided in Chapter 2.
1.4 Evaluation
The evaluation of this work has been performed at different levels. First, Thespian's
generality in practice for authoring a range of stories has been demonstrated. Section 2.3
describes some of the domains. Secondly, empirical evaluations have been conducted on
the importance of modeling well-motivated characters and the effectiveness of Thespian's
approach to directing the characters toward the author's plot design (see Chapter 7
for details). Thirdly, various components within the Thespian framework for modeling
socially aware characters have been validated individually (see Sections 4.2 and 4.3 for
details). Finally, in terms of reducing authoring effort, Thespian's user modeling and
simulation procedures are demonstrated and evaluated in Section 6.2.4, and examples of
reusing authored story worlds and characters are given in Section 6.3.
1.5 Contributions
In this work, I identified the challenges of authoring interactive narrative and argued for
a character-centric approach with directorial control for addressing the challenges. The
specific contributions of this research are as follows:
• Identified the challenges of authoring interactive narrative
• Proposed, implemented and evaluated a two-layer multi-agent approach to inter-
active narrative design, which utilizes autonomous agents for well-motivated and
socially aware characters, and multi-agent coordination to realize story plots.
• Implemented the two-layer approach using a multi-agent framework – Thespian.
More specifically,
– Proposed a novel character design that uses a “Theory of Mind”. This design,
* Allows characters to balance multiple goals, such as personal goals, peda-
gogical goals and notions of social norms.
* Allows characters to consider other characters' (including the user's) responses
when deciding their own behaviors.
– Proposed, implemented and validated domain-independent models of social
normative behaviors for decision-theoretic goal-based agents.
– Proposed, implemented and validated domain-independent models of emotion
for decision-theoretic goal-based agents.
– Proposed, implemented and demonstrated user modeling based on decision-
theoretic goal-based agents.
– Proposed a novel authoring interface that is similar to writing traditional nar-
rative for configuring the characters' motivations.
– Proposed, implemented, and evaluated a novel approach to exerting directorial
control with explicit considerations of the user model and the coherence of
narrative.
– Proposed, implemented, and evaluated a novel user simulation procedure that
can simulate different types of users interacting with the interactive narrative
system and test whether the system reaches the author's expectations.
• Evaluated the importance of modeling well-motivated characters in interactive nar-
rative.
• Demonstrated the approach's ability to model a range of stories, including multi-
scene training scenarios and Aesop fables.
1.6 Road Map
The rest of this thesis is structured as follows: Chapter 2 provides an overview
of the Thespian framework and describes the example domains to which Thespian has
been applied. Chapter 3 summarizes related work in comparison to Thespian's approach
to modeling and simulating interactive narratives. Chapter 4 presents the first layer of
Thespian: a multi-agent system for modeling well-motivated and socially aware charac-
ters. Chapter 5 presents the second layer of Thespian: its directorial control procedures.
Chapter 6 presents Thespian's authoring procedures. Chapter 7 evaluates the impor-
tance of the core assumption in Thespian's approach to simulating interactive narrative –
maintaining the coherence of narrative – and the effectiveness of Thespian's directorial
control procedures. Chapter 8 discusses the limitations and future directions of this work.
Finally, Chapter 9 concludes the whole thesis.
Chapter 2
Overview of the Thespian Framework
Thespian is a multi-agent framework for authoring and simulating interactive narratives.
As stated in Section 1.3, Thespian models and simulates interactive narrative using a two-
layer approach. Autonomous agents are used for modeling well-motivated and socially
aware characters. They respond to the user based on both their motivations and the status
of the interaction. During the interaction with the user, these agents are coordinated by
the director agent for realizing the author's plot design goals. In addition, Thespian
provides various authoring procedures to facilitate the authors in designing interactive
narratives.
This chapter provides an overview of both Thespian's runtime system and authoring
procedures, and discusses how they address the design challenges of interactive narratives.
This chapter also includes descriptions of Thespian's example domains.
2.1 Two-layer Runtime System
The first layer of Thespian – the multi-agent system – is based on PsychSim [65, 83], a
multi-agent framework for social simulation. In PsychSim, each agent is modeled based
on Partially Observable Markov Decision Problems (POMDPs) [113]. PsychSim provides
a variety of algorithms for belief update and decision-making.
Thespian adopts PsychSim's basic agent architecture for modeling the characters (see
Chapter 4 for more details). Decision-theoretic goal-based agents are used for modeling
the characters in the story, with each character's motivations encoded as the agent's goals.
Each agent has multiple and potentially competing goals, e.g. keeping safe vs. keeping
others safe, that can have different relative importance or preferences. The agents have
recursive beliefs about self and others, e.g. my belief about your belief about my goals,
which forms a Theory of Mind. This Theory of Mind capacity allows the agents to form
expectations about other agents' actions when making their own decisions. Each agent
chooses the action with the highest expected reward as its next action.
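This decision procedure can be sketched as a recursive lookahead. The sketch below is a simplification for illustration only (fixed lookahead depth, a single modeled partner, dictionary-based agent models) and does not reproduce PsychSim's actual algorithms.

    # Illustrative sketch of decision-theoretic lookahead with a Theory of
    # Mind. Each agent model carries toy "transition" and "reward" functions
    # and a belief about one other agent (itself an agent model), so an agent
    # can mentally simulate the other's response before committing to an act.
    def choose_action(agent, state, depth):
        """Return the action with the highest expected reward after lookahead."""
        return max(agent["actions"],
                   key=lambda a: lookahead_value(agent, state, a, depth))

    def lookahead_value(agent, state, action, depth):
        next_state = agent["transition"](state, action)
        other = agent.get("belief_about_other")
        if depth == 0 or other is None:
            return agent["reward"](next_state)
        # Theory of Mind: predict the other character's reaction by running
        # the same decision procedure on this agent's mental model of it.
        reaction = choose_action(other, next_state, depth - 1)
        next_state = other["transition"](next_state, reaction)
        return agent["reward"](next_state)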
Thespian agents extend PsychSim agents by modeling social normative behaviors [103]
and emotion [100]. By default, Thespian agents act following norms, unless they have
other more pressing goals.
The user is also modeled using a Thespian agent based in part on the role the user is
playing. In modeling the user, not only are the goals of the user's character considered,
but also the goals associated with game play (see Section 6.2.1 for more details). This
model allows other agents to form mental models about the user the same way they do
about other characters, and the director agent to reason about the user's beliefs and
experience.
Thespian's second layer provides a director agent which functions as a drama man-
ager. It guides the characters' interactions with the user, based on the author's directorial
control goals, which are expressed as partial order or temporal constraints on the char-
acters' (including the user's) beliefs and actions. During the interaction, the director agent
proactively estimates the future developments of the story and fine-tunes the characters
if necessary to achieve the goals.
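For illustration, directorial goals of this kind could be encoded as predicates over the story history. The encoding below is hypothetical; Thespian's actual goal syntax is the subject of Table 5.1.

    # Hypothetical encoding of directorial goals as constraints on the story
    # history (a list of event names); Thespian's actual goal syntax is
    # given in Table 5.1. Each constraint returns False once it is violated.
    def before(event_a, event_b):
        """Partial-order constraint: if both events occur, event_a comes first."""
        def check(history):
            if event_a in history and event_b in history:
                return history.index(event_a) < history.index(event_b)
            return True  # not yet decided, so not yet violated
        return check

    def by_turn(event, max_turn):
        """Temporal constraint: the event must occur within max_turn steps."""
        def check(history):
            return event in history[:max_turn] or len(history) < max_turn
        return check

    # e.g., for Little Red Riding Hood: Red meets the wolf before Granny is
    # eaten, and the encounter happens within the first 15 turns.
    goals = [before("red-meets-wolf", "wolf-eats-granny"),
             by_turn("red-meets-wolf", 15)]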
In order to create coherent narratives, the characters' motivations need to be inter-
pretable by the user. Thespian facilitates the user in understanding the characters in
two ways. First, the ability of goal-based agents to decide their actions based on both
the status of the interaction and their goals makes Thespian agents react to the user
and behave with consistent motivations. Secondly, Thespian agents possess a Theory of
Mind and models of emotion and social normative behaviors. These capabilities allow
the agents to behave in a socially aware and life-like manner.
For creating desired cognitive or affective effects on the user, the author needs to
be able to create rich characters, exert directorial control and balance the design of the
characters and the design of the events. By integrating the capabilities listed above into a
unified decision-theoretic framework, Thespian allows the author to create rich characters.
For example, Thespian allows the author to model how Theory of Mind and social norms
play a role in the characters' decision-making process and how they affect the characters'
emotions. The director agent is used for realizing directorial control over events. By
default, it gives priority to maintaining consistent character motivations, so that it can
prevent the user from interacting with broken characters. However, to give the author
more flexibility in creating stories, the balance can be adjusted to favor event design
(see Section 5.3.2 for more details).
2.2 Authoring Procedures
The Thespian framework is designed with an explicit intention to be easy to use by
traditional writers and non-technical authors.
Designing an interactive narrative involves building a story world, setting the ini-
tial configurations of the characters, e.g. their states, goals and beliefs, and designing
directorial control goals.
Thespian supports reuse of story world elements and characters by separating the
effort of designing a story world, which contains models of actions, from the effort of
configuring the virtual characters and setting the directorial goals of the story (see Sec-
tion 6.3 for details). This support for reusing authored materials saves the author effort
for creating new interactive narratives. More importantly, it allows non-technical au-
thors to design interactive narratives by reusing and tweaking story world elements and
characters from other existing stories.
After a story world is built, the author needs to configure the characters in it. Thes-
pian provides an automated fitting procedure which can tune the characters' motivations
according to their roles in example story paths (see Section 6.1 for details). The char-
acters will use the motivations “learned” from the paths to guide their behaviors, acting
consistently with their motivations when the user's choice deviates from preset paths.
This approach allows the author to configure the characters' motivations in a way that is
similar to how narratives are typically created, and provides a good baseline for creating
experiences that are consistent with the author's plot design goals.
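As a rough illustration of what fitting can mean, the sketch below searches a grid of candidate goal weights for one under which every scripted action in the example paths is the character's own best choice. This is a naive stand-in: Thespian's actual Fit Sequence procedure (Algorithm 13) is more sophisticated than plain enumeration, and the helper best_action stands in for the character's decision procedure.

    # Naive, hypothetical stand-in for character fitting: find goal weights
    # under which each scripted action in the example story paths would be
    # the character's own best choice. best_action(state, weights) is an
    # assumed helper implementing the character's decision procedure.
    import itertools

    def fits(weights, path, best_action):
        return all(best_action(state, weights) == scripted_action
                   for state, scripted_action in path)

    def fit_character(candidates, example_paths, best_action):
        for weights in candidates:
            if all(fits(weights, path, best_action) for path in example_paths):
                return weights
        return None  # no configuration reproduces all example paths

    # Candidate weightings over two motivations, stepped by 0.25:
    grid = [dict(zip(("keep_self_safe", "keep_others_safe"), combo))
            for combo in itertools.product((0.0, 0.25, 0.5, 0.75, 1.0),
                                           repeat=2)]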
With any generative system, automated testing is desirable. Thespian contains auto-
mated evaluation procedures for testing interactive narratives (see Section 6.2 for details).
Thespian models the user explicitly. The automated evaluation procedures systematically
simulate different styles of user interactions and give the author feedback on whether the
author's desired effects are achieved when the interactive narrative system faces a variety
of users. The author can then refine the design of the characters, the directorial goals
and the story world, and test the interactive narrative system again. Such thorough
evaluation is impossible without facilitation from automated procedures.
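A schematic of such a test harness follows; the names are hypothetical, and run_story and the expectation predicates stand in for the actual procedures presented in Section 6.2.

    # Hypothetical test harness: simulate a range of user goal profiles
    # against the story and report which runs violate the author's
    # expectations.
    def test_with_simulated_users(run_story, user_profiles, expectations):
        failures = []
        for profile in user_profiles:
            story_path = run_story(user_goals=profile)
            for expectation in expectations:
                if not expectation(story_path):
                    failures.append((profile, expectation.__name__))
        return failures

    # e.g., profiles varying how exploratory versus plot-driven the user is:
    user_profiles = [{"explore": 0.8, "advance_plot": 0.2},
                     {"explore": 0.2, "advance_plot": 0.8}]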
2.3 Example Domains
Thespian has been applied to authoring dozens of virtual characters in more than thirty
interactive narratives. The first interactive narrative to incorporate Thespian is the Mis-
sion Practice Environment of the Tactical Language Training System (TLTS) [47], which
is aimed at providing rapid language and culture training. Thespian has also been used
to model fables such as Little Red Riding Hood and the Fisherman and His Wife. This
section briefly describes the domains to which Thespian has been applied.
2.3.1 Tactical Language Training System (TLTS)
The Tactical Language Training System is a large-scale (six to twelve scenes each for
three languages) project funded by the US military for rapid language and culture training.
Thespian was used together with Unreal 2003 to author and simulate the interactive
narratives for the system's Mission Practice Environment. The system has been used by
thousands of military personnel. This project tests/demonstrates Thespian's generality
for authoring multiple stories and multiple characters in stories. In addition, this project
serves as a test bed for Thespian's model of social normative behaviors (see Section 4.2
for details).
The user takes the role of a male army sergeant (Sergeant Smith) who is assigned to
conduct a civil affairs mission in a foreign (e.g., Pashto, Iraqi) town. The human user
navigates in the virtual world using mouse and keyboard. The user can interact with
the virtual characters using spoken language and gestures. An automated speech recog-
nizer identifies the utterance, and the mission manager, which is a component outside of
Thespian, converts it into a dialogue act representation that Thespian takes as input.
Output from Thespian consists of similar dialogue acts that instruct virtual character
bodies to speak and behave.
The story in TLTS consists of multiple scenes. A typical scene contains two or three
main characters and up to six supporting characters. The main characters usually have
10 to 20 different actions including their dialogue acts, and the supporting characters
typically have fewer than 5 actions. Two example scenes are provided below, which
are used in later sections as the example domain for demonstrating some of Thespian's
models.
2.3.1.1 TLTS Story I
The story begins in a village café. The user's aim in the scene is to find the senior official
in the town to discuss providing aid to the locals. There are a variety of actions he can
perform including moving around the town, greeting people, introducing himself, asking
questions, using gestures etc. The user interacts with a range of characters in the scene,
Figure 2.1: Tactical Language Training System
Table 2.1: Sample Dialogue in TLTS Story I
Speaker Addressee Utterance
User Old man Who is the most important official in this town?
Young man User Slow down! Who are you?
User Young man We are Americans.
Young man User CIA?
User Young man No, sir, we are from the American Army, Special
Forces.
most notably an old man and a young man. These two locals have different personalities.
The old man is cooperative. The young man worries more about the safety of the town,
and may accuse the sergeant of being a CIA agent if the user does not establish trust.
Table 2.1 shows an excerpt from this story.
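The dialogue act representation itself is not detailed in this chapter; as a hypothetical illustration, the first exchange in Table 2.1 might be encoded as structured acts along the following lines. The field names are assumptions for illustration, not the system's actual schema.

    # Hypothetical encoding of the first two rows of Table 2.1 as dialogue
    # acts; field names are illustrative, not TLTS's actual representation.
    user_act = {
        "speaker": "user",
        "addressee": "old_man",
        "act_type": "ask",                     # e.g. greet, ask, inform, accuse
        "content": "most_important_official",  # semantic content, not raw speech
    }

    npc_act = {
        "speaker": "young_man",
        "addressee": "user",
        "act_type": "ask",
        "content": "who_are_you",
    }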
2.3.1.2 TLTS Story II
This story begins as the user arrives outside of a Pashto village. Some children are playing
nearby and come over to talk to the user as he arrives. The user's aim in the scene is
to establish initial rapport with people in the village through talking to their kids in a
friendly manner. The user's possible actions include moving around the town, greeting
people, introducing himself, asking questions, using gestures etc. The children possess
different personalities. Some are shy and some are very curious about the American
soldier.
2.3.2 A Grimms' Fairy Tale – Little Red Riding Hood
This interactive narrative is developed as a test bed for multiple components in Thespian,
including its director agent (see Chapter 5 for details), its user model and its approaches
for modeling the user as a way to test the system (see Section 6.2 for details).
The story starts as Little Red Riding Hood (Red) and the wolf meet each other on
the outskirts of a wood while Red is on her way to Granny's house. The wolf has a mind
to eat Red, but it dares not because there are some woodcutters close by; otherwise, the
wolf will eat Red. Moreover, if the wolf hears about where Granny lives from Red or
someone else, it will also go eat Granny. Meanwhile, the hunter is searching the wood for
the wolf. Once the wolf is killed, people who were eaten by it can escape.
This story is built as a text-based interactive narrative. The user can take any role
in the story and interact with the characters using the keyboard. The interaction happens
at the dialogue act level. Each of the characters has around 10 to 20 actions.
2.3.3 A Grimms' Fairy Tale – The Fisherman and His Wife
This interactive narrative is modeled for the purpose of testing/demonstrating the reusabil-
ity of authored content across stories (see Section 6.3 for details).
This fable has three characters, the fisherman, his wife, and an enchanted fish. The
story begins with the fisherman catching the fish and letting it go since it is enchanted,
without asking for anything in return. However, when his greedy wife hears about the
fish, she keeps on asking him to ask for more and more from the fish. Eventually, the fish
runs out of gratitude to the fisherman. The story ends when the wife wishes to be God
and the fish takes back all that was given.
Similar to the Little Red Riding Hood story, this story is also built as a text-based
interactive narrative. The user can take any role in the story and interact with the char-
acters using the keyboard. The interaction happens at the dialogue act level. The characters
have around 5 to 10 actions.
Chapter 3
Related Work
This chapter reviews authoring frameworks for interactive narratives and compares Thes-
pian with those works. There is a body of work on building authoring frameworks for
interactive narratives. Research in many fields is relevant to this work, including emotion,
decision-making, social norms, immersive experience or the experience of “presence”, etc.
In discussing related work, this review compares how different interactive narrative
frameworks address the basic desiderata for interactive narrative: how they create co-
herent narrative in the face of user interaction, how they achieve the author's desired
cognitive or affective effects, and how they facilitate the author in designing interactive
narratives. In addition, because social norms and emotion are crucial to social interaction,
I will also review how social norms and emotion are modeled in different frameworks. The
characters' decision-making processes are usually embedded in the frameworks' designs
of characters and events, and therefore, are not reviewed separately. Finally, since one of
the ultimate goals for most virtual environments is to make the user feel present in the
virtual world, I will discuss the experience of presence in interactive narratives.
3.1 Create Coherent Narrative and Balance the Design of Characters and Events
3.1.1 The State of the Art
To support rich user interactivity and prevent the author from spending extensive programming effort on hand-tailoring the narrative experience, various automated authoring frameworks for interactive narratives have been proposed. Most of the frameworks adopt either a plot-centric approach or a character-centric approach to modeling and simulating interactive narratives.
The plot-centric view is evidenced in early works on narrative design, such as in Poetics [3]. Plot-centric designs emphasize the design of the events in the story. Many frameworks adopt plot-centric designs, such as Façade [66], I-storytelling [21], Carmen's Bright IDEAS [64], Mimesis [87, 131], The Thing Growing [2], Improv [79], DEFACTO [97], OZ [125, 48], and Interactive Drama Architecture [58, 59, 61]. In Façade [66], the story is organized around hand-authored dramatic beats, realized as their pre-conditions, post-conditions, and brief patterns of interaction between characters. Coherent narratives within each beat are designed by the author while creating the beats. The author also designs the pre-conditions and post-conditions of the beats so that the beats can only be concatenated in ways that preserve the coherence of narrative. In Mimesis [87, 131], a planning-based interactive narrative authoring and simulation system, there are explicit causal links among events in plans, and the emergence of coherent narrative becomes an AI-planning problem. I-storytelling [21] uses planning over a hierarchical task network (HTN) to realize interactive narratives. In Lamstein's [50] and Nelson's [72] interactive narrative systems, the story is organized around events designed by the author, with pre- and post-conditions. A drama manager is used to project into the future for possible developments of the story, evaluate the quality of possible story paths based on an author-specified evaluation function, and reconfigure the story world to achieve the best quality in the story.
In contrast, a contemporary view on character and action, as espoused by Lajos Egri [27], suggests that plot unfolds based on the characters, that characters can essentially "plot their own story". Consistent with this view, character-centric approaches for interactive narrative emphasize the design of individually plausible characters. FearNot! [77] uses planning-based techniques to simulate virtual characters. It has explicit representations of characters' personalities and motivations, which affect the individual character's plan construction process. In MRE [118] and SASO [122], there is an extensive dialogue management subsystem in each character that incorporates explicit rules for dialogues. The agents have plans governing the coherence of their behaviors, which take their personalities into account.
3.1.2 In the Thespian Framework
Thespian's two-layer runtime system is capable of both creating rich characters and managing the events of the story during the interaction. Decision-theoretic goal-based agents are used for controlling characters [102]. The coherence of narratives is ensured because characters have consistent goals (motivations). For managing the development of the story during the interaction, Thespian uses a director agent to proactively direct the characters toward the author's plot design goals [105, 106]. When there is a conflict between the design of characters and the design of events, Thespian gives priority to the design of characters unless the author has indicated otherwise, so that the characters' behaviors are interpretable to the user and the user is more likely to have a coherent narrative experience.
Most contemporary interactive narrative frameworks provide systematic support for either the design of characters or the design of events in the story, but rarely both. Most character-centric approaches to interactive narrative can be viewed as having only the first layer of Thespian's runtime system. Therefore, it is hard for them to effectively control the development of the story in the face of user interactions. On the other hand, in systems that adopt plot-centric approaches, a sophisticated character model is often missing. As a result, the systems cannot reason about the characters' motivations while generating their behaviors. Because the author does not have direct control over either the character design or the plot design of the story, to ensure both designs are satisfactory, the author has to either sacrifice the richness of interaction or spend extensive effort defining the characters' behaviors.
3.2 Exert Directorial Control
3.2.1 The State of the Art
For achieving the author's desired pedagogical or dramatic effects during the interaction, directorial control is often necessary. Directorial control is also often called drama management. It dynamically reshapes the story and the characters in the story as the user interacts with the interactive narrative system, in order to achieve effects/goals specified by the author.
Various approaches for directorial control have been proposed. In search-based approaches, the drama manager operates over a set of plot points with pre- and post-conditions [10, 125, 48, 50, 72]. Based on author-specified evaluation functions, the drama manager reconfigures the story world to achieve the desired or best quality in the story. Mimesis takes a different approach, whereby the system may prevent the user's action from being effective [87]. When the user's action deviates from the pre-computed story plan, the system either replans or makes the user's action have no effect on story development. In Interactive Drama Architecture [60], stories are planned over SOAR-based agents. A drama manager is used to bring the story back on track if its development deviates from the ideal story path laid out by the author. Façade [66] utilizes a beat-based drama management system. Based on a desired global plot arc, such as the Aristotelian tension arc, the drama manager chooses the next beat that is suitable to the context and whose dramatic value best matches the arc.
Most existing works on directorial control are built within plot-centric approaches for interactive narratives. For systems that adopt a character-centric approach, the designers either do not intend to control the overall structure of the story, e.g. in Emergent Narrative [49], or simply hope that an interesting or pedagogically significant story will emerge from the user's interaction with the characters. However, there is no guarantee that the story will develop in a way desired by the author.
Most existing interactive narrative frameworks do not model the user explicitly. This restricts the effectiveness of their directorial control in several ways. Directorial controls are often applied based on rules predefined by the author for a "standard user", and therefore cannot adapt to individuals who may react to the events differently. Further, the coherence of narrative, which requires the events in the story to be meaningfully connected in both temporal and causal ways, is crucial for ensuring that people can understand their experience. A key aspect of creating coherent narratives is that the characters' behaviors must be interpretable to the user. In interactive narratives, it is hard to avoid generating unnatural character behaviors when interacting with the user without a model of the user's beliefs and experiences.
Finally, many approaches for directorial control are reactive [125, 48, 50, 72, 87, 66] rather than proactive. They react to the user's actions, but do not proactively predict the user's actions and take interventions ahead of time. Arguably, a proactive approach can be more effective in achieving directorial control. However, to adopt a proactive approach, a model of the user is necessary to predict what the user will do.
3.2.2 In the Thespian Framework
Thespian's directorial control is proactive and tightly tied to the model of the characters and the model of the user [105]. Its director agent projects into the future to detect potential violations of the author's desired effects (directorial goals), which are expressed as partial order or temporal constraints on key events in the story. An event can be either an action from a character or a belief/state change of a character, including the user. Once a potential violation is detected, the director agent explores alternative methods for tweaking the characters' beliefs and behaviors to reach the directorial goals. Unlike most existing work, the evaluation of the achievement of directorial goals is based on the model of the user.
3.3 Facilitate Authoring
3.3.1 The State of the Art
Although a range of generative approaches have been proposed to craft character and story for authoring interactive narratives, the authoring process has often turned out to be an undertaking that requires extensive programming effort. Thus, the author is often required to be a programmer. For example, in the MRE system [118], linear scripts were analyzed to inform the design of a team task model that motivated the agent-character's behavior and dialogue with the user. Skilled programmers crafted this model by hand. In Façade [66], it was up to the programmers to design the beats and determine the user's influence on their progressions. The system took two experienced programmers five years to build. The time-consuming and complex authoring process can be a serious design bottleneck and exclude non-technical authors.
More recently, graphical user interfaces have been built to facilitate authoring [109, 25, 91, 108]. However, most work along this direction strives to provide a user-friendly interface for the author to input information needed for the authoring frameworks' internal models of interactive narratives, which may be complex and non-intuitive, e.g. a complex hierarchical plan, rather than to automate the process of creating such internal models or to maximize the reusability of authored content. Because the authors need to directly construct the internal model, existing frameworks often make crafting interactive narratives very different from writing standard stories. Further, most existing frameworks do not provide explicit approaches for predicting/evaluating how the author's model of the story will react to user interaction, especially when facing a variety of users. Therefore, the authors are often required to anticipate and reason about contingencies in the story to account for user interactivity, which is often beyond the human author's capacity.
3.3.2 In the Thespian Framework
Thespian adopts the graphical user interface from PsychSim for defining the agents. In addition, Thespian provides various automated approaches to help authors, especially traditional writers and non-technical authors, configure virtual characters and control the development of the story.
Thespian allows the author to configure virtual characters in a similar way to writing traditional narratives [102]. Thespian's fitting procedure automatically extracts constraints on characters' motivations from story path examples and determines whether consistent goal preferences of agents can be inferred. To control the development of the story for reaching the author's desired cognitive or affective effects, the author can specify plot design goals using temporal constraints or partial order constraints on key events.
Thespian separates the effort of designing a story world, which contains models of how actions affect the state of the world, such as Thespian's model of social normative behaviors, from the effort of configuring the characters and designing directorial control goals. Thus, Thespian allows reuse of code: the same actions can often be used in different story worlds, and non-technical authors can design interactive narratives by tweaking the configurations of the virtual characters and the directorial control goals in a story world defined by someone else.
Moreover, to facilitate authoring means not only to reduce the authoring effort, but also to make the authoring process more productive and more creative. Thespian's authoring framework encourages the author to think at different levels of abstraction and from different perspectives when designing interactive narratives. It can work as a collaborator with the author by suggesting designs and critiquing the author's designs (see [107] for further discussion).
3.4 Model Social Normative Behaviors
3.4.1 The State of the Art
Much as they do in human-human interaction, norm-following behaviors can facilitate and constrain user interactions in natural, lifelike ways that ideally do not seem restrictive. In general, social norms are commonly believed or accepted rules in social interaction. These rules serve as a guide for human behavior, and as the basis for people's beliefs and expectations about others. Without them, communication can break down easily. Though norms are commonly followed, the tendency to follow norms is regulated by other factors, such as more pressing personal goals. There is a considerable body of work on social norms, and norms in conversations in particular, including formalization of norms and obligations [14], how norms emerge, spread and get enforced in a society [20], levels of cooperation in social communications [1], discourse obligations in dialogue [121], maxims in cooperative conversations [37], etc.
In interactive narratives, norm-following/violating behavior is often not explicitly modeled. Rather, it is modeled conjointly with characters' other behaviors. For example, in Façade [66], norms are encoded in the design of the beats and the beat selection process, i.e. the pre- and post-conditions of the beats. In I-storytelling [22], characters' behaviors, including norm-following behaviors, are modeled using HTN plans. In MRE [118] and SASO [122], the dialogue management subsystem incorporates explicit rules for normative behaviors, specifically conversational norms. The priorities of these rules are adjusted by agent authors to fit the characters' profiles.
3.4.2 In the Thespian Framework
Unlike most interactive narrative frameworks, Thespian explicitly models norms in face-to-face communication using a domain-independent model built within a decision-theoretic context. Thespian agents have explicit motivations to follow norms, and this tendency is mediated by the agents' other goals [103] and their situation. Thus, the agents can reason about their decision-making and norm-following behaviors using a unified framework. This allows the author to create rich character behaviors in a principled way. For example, using Thespian the author can model two agents, one with goals for following norms and one without. When the agents are in a hurry, they will behave in the same way: both ignore norm-related goals. However, they will behave differently in a different context.
3.5 Model Emotion
3.5.1 The State of the Art
Emotion is a key aspect of human social interactions, and therefore needs to be taken into account in human-agent interactions, especially if the agents themselves must act human-like. Computational models of emotion used in agents have often been based on appraisal theories [28, 67, 123, 76, 29, 85, 5, 36], a class of leading psychological theories of emotion. Appraisal theories argue that a person's subjective assessment of their relationship to the environment, the person-environment relation, determines the person's emotional responses [89, 115, 76, 116, 93, 34, 114, 53, 54]. This assessment occurs along several dimensions, called appraisal variables or checks, including motivational congruence, novelty, control, etc. Emotion is decided by the combination of results from these checks. For example, an event that is incongruent with the person's motivations and is caused by others may lead to an anger response; on the other hand, if the event is caused by the person himself/herself, the person may feel guilt or regret [89]. So if an employee believes that he/she is unfairly evaluated by the supervisor, an angry response is likely to be elicited; if the employee instead believes that he/she received a negative evaluation because of his/her own fault, he/she is more likely to feel regret.
Similar to how social normative behaviors are modeled in virtual characters, many interactive narrative frameworks do not have an explicit model of the characters' emotions. Here I will review computational models of emotion in both interactive narrative frameworks and other systems that model virtual characters. In FLAME, El Nasr et al. [28] use domain-independent fuzzy logic rules to simulate appraisal. In WILL [67], the agent's concerns and relevance are evaluated as the discrepancies between the agent's desired state and the current state. The Cathexis model [123] uses a threshold model to simulate basic variables related to emotion, which are called "sensors". The OCC model of appraisal [76] has inspired many computational systems. Elliott's [29] Affective Reasoner uses a set of domain-specific rules to appraise events based on the OCC theory. Both EM [85] and FearNot! [5] deployed the OCC model of emotion over plan-based agents.
Thespian's approach to modeling emotion is inspired by the work on EMA [36]. EMA follows Smith and Lazarus' theoretical model of appraisal [116]. Smith and Lazarus described two types of appraisal: primary appraisal and secondary appraisal. Primary appraisal concerns whether and how the event is relevant to the person. Secondary appraisal evaluates the person's potential for coping with the event. The result of the evaluation will be taken into account by the next primary appraisal process, and thus forms an appraisal-coping-reappraisal cycle in the agent's cognitive/emotion generation process. EMA [36] defines appraisal processes as operations over a unified plan-based representation, termed a causal interpretation, of the agent's goals and how events impact those goals. Cognitive processes maintain the causal interpretation, and appraisal processes leverage this unified representation to generate appraisal.
3.5.2 In the Thespian Framework
Thespian's model of emotion is also based on Smith and Lazarus' appraisal theory [116]. Thespian models five key appraisal dimensions: motivational relevance, motivational congruence, accountability, control and novelty [100]. Upon observing a new event, an action performed by an agent or the human user, each agent appraises the situation along these dimensions based on its beliefs and past expectations, and during the process of deciding what to do next, the agent's coping potential is automatically reevaluated. The agent also forms new beliefs and expectations. The updated beliefs and expectations are used in the agent's next appraisal process.
A key distinction between Thespian's model of appraisal and other computational models, including EMA, is Thespian's modeling of Theory of Mind, which is a key factor in human social interaction [126], and the role Theory of Mind plays in decision-making and belief revision. Agents in Thespian possess beliefs about other agents that constitute a fully specified, quantitative model of the other agents' beliefs, policies and goals. In other words, the agents have a Theory of Mind capability with which they can simulate others. Thespian's representation of agents' subjective beliefs about each other enables the model to reason about social emotions. In effect, agents can reason about another agent's cognitive and emotional processes from both the other agent's and their own perspectives. For instance, Agent A can use its beliefs about Agent B to evaluate the motivational relevance and novelty of an event to Agent B. The result may be totally different from the appraisal performed from its own perspective. This allows the author to create richer characters.
In addition, in EMA the construction of the person-environment relation representation by cognitive processes is treated as distinct from appraisal, and appraisal is reduced to simple and fast pattern matching over the representation. Thespian seeks to go further in coupling cognition and appraisal by detailing how the cognitive processes themselves need to realize appraisal as part of decision-making and belief update, and therefore argues for appraisal as an integral part of cognition.
Finally, Thespian explicitly models the depth of reasoning in agents, and is therefore capable of simulating how the depth of reasoning affects the agents' appraisals. An example is given in Section 4.3.3.
3.6 Create the Experience of "Presence"
Having the user feel as if they are present in the virtual world is often pursued as an implicit design goal in interactive narratives. Many factors contribute to the experience of presence, the sense of "as if being there" [39, 99, 111, 117]. In this section, I will briefly review these factors and discuss how they are related to interactive narrative design.
In order for the user to feel present in a virtual environment, the content of the virtual environment needs to be meaningful [42]. In particular, the content should be predictable [110, 40], consistent [110] and plausible [9, 128] to the user. For virtual environments that contain narratives, the quality of the narrative also affects the user's sense of presence [44]. Further, the user should be able to form a mental model of the virtual world [95, 40, 94, 12]. The user should know his/her own possible actions and be able to anticipate the results of his/her actions as well as other characters' and objects' actions and movements [132].
Secondly, interactivity and the sense of agency also affect the experience of presence. Biocca [13] points out that interactivity fosters social presence, the experience of interacting with real humans or intelligent characters.
Thirdly, to create a sense of presence, the virtual environment needs to be able to shift the user's attention from the real world to the virtual world [95, 128, 110]. The more attention the user devotes to the virtual world, the higher the sense of presence experienced [128]. In addition, Slater [18] demonstrated that the more often a "break" happens, i.e. the user's attention switches from the virtual world to the real world, the less presence the user experiences. Various studies have shown that people's affective states affect their attention [52, 96, 74, 51, 16, 26, 130]. For example, both extremely high and low arousal levels can draw the user's attention away from the virtual world [24].
Fourthly, the quality of the physical simulation plays an important role in inducing the experience of presence. In general, the more inclusive, extensive, surrounding and vivid the virtual environment is, the higher the presence [112, 62]. On the other hand, the nature of the skills involved in the simulation decides the importance of creating a physically vivid environment. For virtual environments aimed at training motor skills or basic cognitive skills, such as memory of objects and visual discrimination, high simulation realism is necessary. On the other hand, for simulations that are aimed at training complex cognitive abilities, such as coping skills and family and social relationships, high visual/acoustic fidelity is not as important as creating interactions between characters. Hoffman argues that in this case an "ecologically valid" [41] simulation of the real world is enough. Similarly, Baños et al.'s study [8] demonstrates that high display fidelity becomes less important in virtual environments that can invoke emotional responses than in virtual environments that do not involve emotion.
Finally, there are many other user characteristics that may affect the user's experience of presence in a virtual environment but cannot be manipulated by the designer of the virtual environment, such as the user's prior experience of using virtual environments [33, 46, 7], and the user's mental health conditions [43].
Many of these factors overlap with the basic desiderata of interactive narratives as laid out in Section 1.2, which were used to inform the design of Thespian. For example, coherent narratives, plausible stories and supporting interactivity are basic design goals for interactive narratives. In addition, efforts to increase the believability of characters, such as building characters with personalities and emotions and establishing rapport with the user, are all highly relevant to, if not directly contributing to, the user's experience of presence in the virtual world. As a result, though "presence" is rarely targeted directly as a design goal, the overlap between interactive narrative desiderata and the factors that influence presence suggests that users will experience presence in interactive narratives.
Chapter 4
Character Level
To model characters that are well-motivated and socially aware, Thespian uses decision-theoretic goal-based agents to control each character, with the character's personality and motivations encoded as the agent's goals [102]. Each agent can have multiple and potentially competing goals, e.g. keeping safe vs. keeping others safe, with different relative importance. Thespian agents have a Theory of Mind, together with models of social normative behaviors [103] and emotion [100]. These capacities make the characters behave with consistent motivations and be socially aware.
The agents are modeled within the PsychSim multi-agent framework [65, 83], but with algorithmic extensions that provide abilities to model emotion and social norms.
4.1 Agent Architecture
In Thespian, each character in the story is modeled as a POMDP-based [113] agent. Each agent is composed of its state, actions, dynamics, goals, beliefs, and policy. Objects in the story, such as a cake or a house, can be represented as special Thespian agents that have only state. This allows characters to reason about the state of an object in the same way as that of a character.
4.1.1 State
State contains information about an agent's current status in the world. An agent's state is defined by a set of state features, such as the name and age of the character, and the relations between that character and other characters, e.g. affinity. Values of state features are represented as real numbers bounded in the range [-1, 1].
4.1.2 Actions/Dialogue Acts
Each agent has a set of actions to choose from during the interaction. Minimally, the definition of an action includes an actor, the agent who performs the action, and an action type. For example, the agent that models the wolf in the Little Red Riding Hood story can have an action "wolf-run". The definition may also include an object, which is the target of the action, such as "wolf-greet-Red". Thespian does not differentiate between physical actions and dialogue acts; they are represented and reasoned about by the agents in the same way.
4.1.3 Dynamics
Dynamics define how actions affect agents' states. For example, it can be specified that small talk among a group of agents will increase their affinity with each other by 0.1. Section 4.2 contains more examples of defining dynamics. The effects of actions can also be defined with probabilities. For example, the author may specify that when the hunter tries to kill the wolf, the wolf will die only 60% of the time.
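As a minimal illustration, and reusing the illustrative Agent sketch above, dynamics can be written as functions from a pre-action state to a post-action state. The two examples from this section might look as follows; the function names are hypothetical, and the probabilistic effect is sampled here only for brevity, where the actual framework would represent it as a distribution over successor states.

import random

def small_talk_dynamics(state):
    # Small talk raises affinity by 0.1, clipped to the [-1, 1] feature range.
    state["affinity"] = min(1.0, state["affinity"] + 0.1)
    return state

def hunter_kills_wolf_dynamics(state):
    # Probabilistic effect: the wolf dies only 60% of the time.
    if random.random() < 0.6:
        state["wolf-alive"] = -1.0
    return state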
4.1.4 Goals
In Thespian, a character's motivations and personality profile are modeled as a set of goals and their relative importance (weights). Goals are expressed as a reward function over the various state features an agent seeks to maximize or minimize. The state features can belong to both the agent's own state and other agents' states (including the user's). For example, a character can have goals of both maximizing its own safety and another character's safety, with the first goal being ten times more important than the second one. The initial values of safety can be any value between -1.0 and 1.0. Once a value reaches 1.0, future events cannot increase it any more. However, the goals can still be hurt, i.e. their values can be decreased by future events. Similarly, once a value reaches -1.0, future events cannot decrease it further.
Thespian models the characters' motivations and personalities as being persistent through the interaction. A character's intention at a given moment is the trade-off among its multiple goals based on its beliefs.
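Under the same illustrative assumptions, the reward function an agent maximizes is a weighted sum over goal-relevant state features; the safety example from this section could be sketched as:

def reward(goals, state):
    # goals maps a state feature (possibly another character's, e.g.
    # "other-safety") to its relative importance; state maps features
    # to current values in [-1, 1].
    return sum(weight * state[feature] for feature, weight in goals.items())

# Own safety ten times as important as the other character's safety:
goals = {"self-safety": 10.0, "other-safety": 1.0}
state = {"self-safety": 0.5, "other-safety": -0.2}
print(reward(goals, state))  # 10 * 0.5 + 1 * (-0.2) = 4.8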
4.1.5 Beliefs (Theory of Mind)
Thespian agents have a Theory of Mind that allows them to form mental models about other agents. The agent's subjective view of the world includes its beliefs about itself and other agents and their subjective views of the world, a form of recursive agent modeling [35]. An agent's subjective view (mental model) of itself or another agent includes every component of that agent, i.e. its state, beliefs, policy, etc. For example, Figure 4.1 shows the wolf character's beliefs.
Figure 4.1: Theory of Mind
Each agent has a mental model of self and one or more mental models of other agents. The agent's belief about another agent is a probability distribution over alternative mental models. For example, in the Little Red Riding Hood story, Red can have two mental models of the wolf: one in which the wolf does not have a goal of eating people, and one in which it does. Initially, Red may believe that there is a 90% chance that the first mental model is true and a 10% chance that the second mental model is true. This probability distribution can change because of events that happen in the story, such as Red seeing or hearing about the wolf eating people (see Section 4.1.7.1 for details).
Within each mental model, an agent's belief about its own or another agent's state is represented as a set of real values, the values of state features, with probability distributions. The probability distribution of the possible values of state features indicates the character's beliefs about these values. For example, a character's belief about another character could be: {money = 0.6, power = 0.8} with probability 90%, and {money = 0.9, power = 0.1} with probability 10% (this example includes only two state features for the simplicity of demonstration).
When reasoning about the utilities of actions, the expected values of state features are normally used. The expected value of a state feature is calculated as follows: assuming an agent's mental model about another agent's state contains $n$ possibilities, $value_i$ is the value of a state feature within the $i$th possibility, and $P(i)$ is the probability associated with that possibility, the expected value of this state feature is $\sum_{i=1}^{n} value_i \cdot P(i)$. For the simplicity of demonstration, the expected value is often used in this thesis when referring to the value of a state feature.
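A direct transcription of this expectation, assuming the same illustrative belief representation as the example above (a list of (probability, feature-values) mental models whose probabilities sum to 1):

def expected_value(mental_models, feature):
    # mental_models: list of (probability, {feature: value}) pairs.
    return sum(p * values[feature] for p, values in mental_models)

belief = [(0.9, {"money": 0.6, "power": 0.8}),
          (0.1, {"money": 0.9, "power": 0.1})]
print(expected_value(belief, "money"))  # 0.9 * 0.6 + 0.1 * 0.9 = 0.63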
4.1.6 Policy
By default, all agents use a bounded lookahead policy to decide their choice of actions. The agents project into the future to evaluate the effect of each candidate action, and choose the one with the highest expected utility (see Section 4.1.7.2 for details). The author can also manually design policies for the agents. However, the default lookahead policy ensures that the agent will make its decisions based on both its goals and the status of the world.
4.1.7 Belief Revision and Decision-making Processes
Upon observation of an event, each agent updates its beliefs based on the observation and its expectations, and then decides its next action. The decision-making process also generates new expectations, which will be used in the agent's next belief revision process. Figure 4.2 illustrates this process.
Figure 4.2: Belief Revision and Decision-making Processes
4.1.7.1 Belief Revision
Upon observation of an action, an agent updates its beliefs in two ways. First, within each mental model the agent has about others, action dynamics are applied and the agent's beliefs about other agents' states are updated. For example, if Red sees the wolf eating Granny, Red will believe that Granny is eaten.
Secondly, the agent's beliefs about the probabilities of alternative mental models are also updated. Each observation serves as evidence for the plausibility of alternative mental models, i.e. how consistent the observation is with the predictions from the mental models. Using this information, the probabilities of the mental models are updated based on Bayes' Theorem (see [45] for details). For example, seeing the wolf eating Granny will reduce the probability associated with Red's mental model of the wolf being a friendly character.
4.1.7.2 Decision-Making Process
In Thespian, all agents use a bounded lookahead policy by default. When an agent has multiple mental models about other agents, by default it uses the most probable mental models to predict other agents' future actions, though the expected states/utilities of all alternative mental models are calculated for the purpose of belief revision.
Each agent has a set of candidate actions to choose from when making decisions. When an agent selects its next action, it projects into the future to evaluate the effect of each option on the states and beliefs of other entities in the story. The agent considers not just the immediate effect, but also the expected responses of other characters and, in turn, the effects of those responses, and its reaction to those responses, and so on. The agent evaluates the overall effect with respect to its goals and then chooses the action that has the highest expected reward. During the interaction, the agents do not necessarily need to do the lookahead reasoning online; rather, they can use compiled policies which are pre-computed offline [82].
Figure 4.3: Red's Lookahead Process
Figure 4.3 lays out the expected states/utilities being calculated when a character performs a one-step lookahead with the belief that other characters will also perform a one-step lookahead. The actions in bold squares are the actions with the highest expected utilities among all options from the actor's perspective. This lookahead takes place in Red's belief space.
For example, when Red decides her next action after being stopped by the wolf on her way to Granny's house, the following reasoning happens in her "mind", using her beliefs about the wolf and herself. For each of her action options, e.g. talking to the wolf or walking away, she anticipates how the action directly affects each character's state and utility. Next, Red considers the long-term reward/punishment. For example, it may be fun to talk to the wolf for a little while (positive immediate reward), but this will delay Granny from getting the cake (long-term punishment). To account for long-term effects, she needs to predict other agents' responses to her potential actions. For each of her possible actions, Red simulates the wolf's lookahead process. Similarly, for each of the wolf's possible action choices, Red calculates the immediate expected states/utilities of both herself and the wolf. Next, Red simulates the wolf anticipating her responses. The lookahead process simulates only bounded rationality: the recursive reasoning stops when the maximum number of steps for forward projection is reached. For example, if the number of lookahead steps is set to one, the wolf will pick the action with the highest expected utility after simulating one step of Red's response rather than several rounds of interaction. Similarly, based on the wolf's potential responses in the next step, Red calculates the utilities of her action options, i.e. the sum of the reward/punishment of the current and all future steps, and chooses the one with the highest expected utility. Theoretically, each agent can perform lookahead for a large enough number of steps until there is no gain for itself and other agents. For performance reasons, lookahead is usually limited to a finite horizon that the author determines to be sufficiently realistic for the story without incurring too much computational overhead.
4.2 Model Social Normative Behaviors
Thespian models norms in face-to-face communications [103]. This computational model of norms consists of goals that motivate characters to behave in socially appropriate ways, state features that keep track of the status of the conversation, and dynamics functions for updating the state features. Characters (Thespian agents) are given explicit goals of following norms in addition to their other goals. Thus, the characters can reason about the effects of following or violating norms and of achieving or sacrificing their other goals using a unified decision-theoretic approach.
Currently, Thespian models three conversational norms: making relevant responses, following natural turn-taking patterns, and having appropriate conversational flow emerge. Throughout this section, I will use the second story from the Tactical Language Training System as an example to illustrate how norms are modeled in the agents.
4.2.1 Adjacency Pairs
Adjacency pairs [92], such as greet and greet back, or enquiry and inform, are very common in conversations. They are performed by two speakers and follow a fixed pattern. An obligation-based approach is used to motivate the agents to act in the same way. The character that performs the first part of an adjacency pair creates an obligation for the addressee to perform the second part. By performing the second part, the second speaker satisfies the obligation. Obligations are represented by the agents' state features. Table 4.1 provides examples of adjacency pairs and their corresponding obligations. For example, if Sgt. Smith opens the conversation by greeting a child, then the child has an obligation to greet back, and the value of its corresponding state feature for the obligation is set to 1.0. Once the obligation is satisfied, i.e. the child greets back, the value goes back to its default level of 0.0, indicating the child does not have such an obligation.

Table 4.1: Adjacency Pairs and Corresponding Obligations

Speaker 1            Speaker 2          Obligation
Greet                Greet back         Greet back to speaker 1
Bye                  Bye                Say "Bye" to speaker 1
Thanks               You are welcome    Say "You are welcome" to speaker 1
Offer X              Accept/Reject X    Either accept or reject X to speaker 1
Request X            Accept/Reject X    Either accept or reject X to speaker 1
Enquiry about X      Inform about X     Inform speaker 1 about X
Inform information   Acknowledgment     Acknowledgment to speaker 1
On the other hand, after creating an obligation for the addressee, the first speaker needs to stop talking to give the addressee a chance to respond. To motivate characters to do so, an obligation of waiting for responses is created by the first speaker for itself. This obligation is satisfied after getting a response from another character.
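To make the bookkeeping concrete, here is a minimal sketch of how obligation state features might be created and satisfied for the greet/greet-back pair; the feature names and helper are hypothetical, not Thespian's actual representation:

def apply_dialogue_act(states, actor, addressee, act_type):
    # states: per-agent dict of state features. An obligation's feature is
    # 1.0 while outstanding and returns to the default 0.0 when satisfied.
    if act_type == "greet":
        states[addressee]["obligation-greet-back"] = 1.0
        states[actor]["obligation-wait-for-response"] = 1.0
    elif act_type == "greet-back":
        states[actor]["obligation-greet-back"] = 0.0
        states[addressee]["obligation-wait-for-response"] = 0.0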
To enforce adjacency pairs, the characters have a goal of maximizing their state feature complete adjacency pair norm. If an agent's action satisfies one of its obligations, the value of this state feature increases. If an agent's action intends to satisfy an obligation which does not exist, e.g. saying "you are welcome" without being thanked first, the value of this state feature decreases. (Note that the communicative intent of a dialogue act is explicit in Thespian at this level, so it is always feasible to tell whether a character is trying to create or satisfy an obligation.)
Algorithm 1 Dynamics for complete adjacency pair norm

if self == dialogue_act.actor then
    if dialogue_act intends to satisfy an obligation then
        if the agent has such an obligation then
            return original_value + 0.1
        else
            return original_value - 0.5
return original_value
4.2.2 Turn Taking
In addition to motivating characters to complete adjacency pairs, the characters need to be motivated to exhibit natural turn-taking behaviors, such as not interrupting others' conversations. Sacks et al. summarized three basic rules for turn-taking behaviors in multi-party conversations [90]:
1. If a party is addressed in the last turn, this party and no one else must speak next.
2. If the current speaker does not select the next speaker, any other speaker may take
the next turn.
3. If no one else takes the next turn, the current speaker may take the next turn.
To model the first rule, the characters are given a goal of maximizing their state feature initiate adjacency pair norm. A character's degree of achieving this goal is reduced if it starts a new topic, i.e. creates new obligations for others to respond to, when someone in the conversation still has unsatisfied obligations. Hence, in this case only those characters that have unsatisfied obligations will seize the turn to talk.
Algorithm 2 Dynamics for initiate adjacency pair norm

if self == dialogue_act.actor then
    if dialogue_act does not intend to satisfy an obligation then
        for each character in conversation do
            if character has unsatisfied obligations then
                return original_value - 0.1
return original_value
In face-to-face conversations, people take turns talking, and none of the participants dominates the conversation (the opposite case is a lecture instead of a conversation). Thespian agents have a goal of maximizing their state feature keep turn norm. If a character keeps talking after reaching the maximum number (currently set to 2) of dialogue acts it can say within a conversational turn, the value of this state feature decreases. The counter of dialogue acts resets to zero after another character starts speaking. This goal prevents the characters from taking consecutive turns, and therefore gives the characters who were not talking in the previous turn priority to speak. This models Sacks' second and third rules.
Algorithm 3 Dynamics for keep turn norm

if self == dialogue_act.actor then
    if self.sentences_in_current_turn > 2 then
        return original_value - 0.1
return original_value
4.2.3 Conversational Flow
Conversations normally have an opening section, a body and a closing section [92]. The state feature conversation status is used to keep track of what a character thinks the current status of the conversation is. Initially the value of conversation status is "not opened". Once a character starts talking to another, the value changes to "opened". After the conversation finishes, the value of conversation status is changed back to "not opened".
To motivate the characters to follow the appropriate conversational flow, Thespian agents have a goal of maximizing their state feature conversational flow norm. If a character opens a conversation without proper greetings or ends a conversation without closing phrases, the value of this state feature is reduced.
Algorithm 4 Dynamics for conversational flow norm

if self == dialogue_act.actor then
    if self.conversation == 'not opened' then
        if dialogue_act.type != 'initiate greeting' then
            return original_value - 0.1
    else if dialogue_act.type == 'end conversation' then
        if characters have not said bye to each other then
            return original_value - 0.1
return original_value
4.2.4 Affinity
Most social interactions require the affinity among the characters involved to be within a certain range in order to take place. Some social interactions require closer affinities than others. For example, greeting, saying "thanks", and small talk can happen between almost any two characters, while asking private or potentially sensitive questions, e.g. "who is the leader of the town?", requires closer affinity.
In scenarios in which affinity plays a key role in deciding people's behaviors, the dynamics of initiate adjacency pair norm need to be revised to take affinity into account. More specifically, if satisfying an obligation requires closer affinity between the two characters than currently exists, ignoring the obligation will result in much less punishment than if the affinity between the two characters is appropriate. The augmented rule allows characters to ignore unreasonable requests, such as an enquiry about personal information from a stranger. Further, because the characters have models of each other, the enquirer will know his/her enquiry is unreasonable and may be ignored.
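As a sketch of this augmented rule, the punishment for leaving an obligation unsatisfied could be scaled by whether the required affinity is actually present; the threshold logic and values here are illustrative assumptions:

def ignore_obligation_penalty(current_affinity, required_affinity):
    # Ignoring an obligation that presupposes more affinity than exists
    # (an unreasonable request) is punished far less than ignoring a
    # reasonable one.
    if current_affinity < required_affinity:
        return -0.01  # unreasonable request: negligible punishment
    return -0.1       # reasonable request: full norm penalty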
Affinity is affected by many factors. Currently, three types of factors are usually considered. First, affinity is affected by whether the characters act following norms. In Thespian, characters are closer to each other after having successful social interactions; if a character often violates norms, its affinity with other characters will decrease. Secondly, affinity is also affected by the attitude associated with an action. If the action is performed in an impolite manner, it will decrease the affinity between the speaker and the addressee. Finally, the main effect of many types of dialogue acts is to change affinity. For example, compliments and small talk, when not violating norms, can always increase affinity between two characters. Some other actions, such as accusations, once performed, will usually reduce the affinity between the two characters.
4.2.5 Examples
In this section, I will use one of the scenes from the Pashto version of the Tactical Language Training System to illustrate the working of Thespian's conversational norms model. I will also discuss and illustrate how different personalities and styles of behavior can be created by varying the importance of norm-related goals. To demonstrate the effect of varying goal weights on agents' behaviors, Sgt. Smith is controlled by an agent in the examples.
The abstract of the story is described in Section 2.3.1.2. More specifically, there are four main characters in this story: three children, Hamed, Xaled, and Kamela, and Sgt. Smith. The action options for the characters are: greeting each other, asking each other questions, answering questions, saying good-bye to each other, having small talk, and introducing information about oneself to others. The last action can increase the affinity between Sgt. Smith and the children and does not create any obligations for a reply.
Each of these four characters has the goals of following norms. In addition, Sgt. Smith wants to increase his affinity with the children so that he can make a good impression on the locals. He also wants to find out the children's names and the names of the adults close by. The children, on the other hand, are curious about what Sgt. Smith's nationality is and how much Pashto he understands. In addition, the children will not feel comfortable telling their parents' names if their affinity with Sgt. Smith is low; however, they will answer other questions without considering affinity. The goals of information collection will be fully achieved once the character gets the piece of information.
4.2.5.1 Example I
Example I is a sample dialogue in which all the characters obey norms at all times, i.e. the norm-related goals are their most important goals.

Example I:
1. Sgt. Smith to Kids: Hello!
2. Xaled to Sgt. Smith: Hello!
3. Hamed to Sgt. Smith: Hello!
4. Kamela to Sgt. Smith: Hello!
5. Sgt. Smith to Xaled: What is your name?
6. Xaled to Sgt. Smith: My name is Xaled.
7. Xaled to Sgt. Smith: What is your name?
8. Sgt. Smith to Xaled: My name is Mike.
9. Sgt. Smith to Xaled: How are you?
10. Xaled to Sgt. Smith: I am fine.
11. Xaled to Sgt. Smith: Are you an American?
12. Sgt. Smith to Xaled: Yes, I am an American.
13. Sgt. Smith to Xaled: I am learning Pashto.
...

In line 1 of Example I, Sgt. Smith chooses to greet the children first because doing any other action would result in opening the conversation inappropriately (hurting his goal of maximizing his conversational flow norm). Then Sgt. Smith chooses to stop talking to maximize his initiate adjacency pair norm: the action he just performed has created obligations for the children to reply, as well as an obligation for him to wait for the replies. Each child greets back in his/her turn because of their goals of maximizing complete adjacency pair norm. Xaled and Hamed stop talking after greeting because they know Kamela has not greeted back yet; if they created obligations for others, their initiate adjacency pair norm goals would be hurt. In line 7, Xaled has satisfied his obligation and knows that no one in the conversation has obligations. Xaled is then free to ask Sgt. Smith questions to satisfy his goal of curiosity. Lines 6-7, 8-9, 10-11, and 12-13 demonstrate the effect of letting the agents have goals of maximizing keep turn norm. In lines 12-13, even though introducing himself more would further increase affinity, Sgt. Smith chooses to follow norms by not holding the turn too long. Lines 8-13 also show the effect of affinity. Sgt. Smith does not ask for the names of the children's parents directly, but chooses to talk about other topics to increase his affinity with the children first.
4.2.5.2 Example II
Varying the weights of different norm-related goals gives the author a large space for creating different characters. The following are some ideas about how different personalities and styles of interaction can be modeled: a character that seems either rude or in a hurry can be modeled as not regarding conversational flow norm as an important goal; a character who does not like to respond to others can be modeled as having a low weight on complete adjacency pair norm; a talkative character can be modeled by giving a low weight on keep turn norm; a character who likes to interrupt other people's conversations can be modeled by having a low weight on initiate adjacency pair norm.
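A minimal sketch of how such personality profiles might be expressed as goal-weight dictionaries (the specific weight values are illustrative assumptions):

# Each profile maps a norm-related goal to its relative importance; the
# decision-theoretic agent trades these off against its other goals.
polite = {"conversational_flow_norm": 1.0, "complete_adjacency_pair_norm": 1.0,
          "keep_turn_norm": 1.0, "initiate_adjacency_pair_norm": 1.0}
rude_or_hurried = dict(polite, conversational_flow_norm=0.1)   # skips greetings/closings
unresponsive = dict(polite, complete_adjacency_pair_norm=0.1)  # ignores obligations
talkative = dict(polite, keep_turn_norm=0.1)                   # holds the turn too long
interrupting = dict(polite, initiate_adjacency_pair_norm=0.1)  # talks over others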
Example II demonstrates how the goals of following norms interact with other goals to decide a character's behavior. Because the agents are decision-theoretic, they can make trade-offs among their multiple goals. In this example, Sgt. Smith weights his information-gathering goals as being very important. As a consequence, Sgt. Smith does not respect the proper conversational flow and ignores his obligation to answer questions. All his actions are aimed at obtaining the information.
Example II:
1. Sgt. Smith to Xaled: What is your name?
2. Xaled to Sgt. Smith: My name is Xaled.
3. Xaled to Sgt. Smith: What is your name?
4. Sgt. Smith to Xaled: What is the name of this town?
...
4.3 Model Emotion
Cognitive appraisal theory, in which people evaluate their environment in relation to their goals, is a leading theory of emotion. Thespian's model of emotion is based on Smith and Lazarus' appraisal theory [116]. They described two types of appraisal: primary appraisal and secondary appraisal. Primary appraisal evaluates the significance of the event, i.e. whether it is irrelevant, benign-positive, or stressful. Secondary appraisal is invoked when the event is appraised as stressful. It evaluates the person's potential for coping. The result of the evaluation will be taken into account by the person's following primary appraisal process, and thus forms an appraisal-coping-reappraisal cycle in people's cognitive/emotion generation process.
Thespian's model of appraisal includes five key appraisal dimensions: motivational relevance, motivational congruence, accountability, control and novelty [100]. I adopted Smith and Lazarus's definitions for modeling motivational relevance, motivational congruence and accountability. The model of control is roughly equivalent to Smith and Lazarus's definition of problem-focused coping potential, though it is closer to the concept of control in Scherer's theory [93]. Finally, novelty is not an appraisal dimension in Smith and Lazarus' theory because they refer to the response resulting from a novel stimulus as an affective response rather than an emotional response. The evaluation of novelty is useful for driving virtual characters' non-verbal behaviors and is therefore included in this model. Leventhal and Scherer's definition of predictability-based novelty [93, 55] is used to inform the computational model.
This section first presents the appraisal processes, i.e. when appraisal happens and where the related information comes from, and then presents algorithms for evaluating each appraisal dimension.
4.3.1 Appraisal Process
Smith and Lazarus describe appraisal as a continuous process: people constantly reevaluate their situations, which forms an "appraisal-coping-reappraisal" cycle [116]. In Thespian, upon observing a new event, an action performed by an agent or the human user, each agent appraises the situation and updates its beliefs. The calculation of motivational relevance, motivational congruence, novelty and accountability depends only on the agent's beliefs about other agents' and its own utilities in the current and previous steps, and therefore can always be derived immediately (see Section 4.3.2 for details). Depending on the extent of reasoning the agent performed in the former steps, the agent may or may not have information immediately available regarding its control of the situation. However, when the agent makes its next decision, its control is automatically evaluated. These evaluations in turn will affect the agent's emotion. In fact, the agent may reevaluate along every appraisal dimension as it obtains more updated information about other characters. In the examples given in this section, I report appraisals produced using information gathered in the agent's previous lookahead process. However, appraisal can be based on either the expectations formed in previous steps or the lookahead process being performed at the current step. The agent can also express both emotional responses in sequence.
Further, Thespian agents have mental models of other agents; they not only can have emotional responses to the environment but also can form expectations of other agents' emotions. During decision-making, the lookahead process calculates the agent's expectations of the resulting states and utilities of other agents' possible actions. This information is kept in the agent's memory. To simulate the expectation of another agent's emotional responses, the observing agent's beliefs about the other agent are used for deriving appraisal dimensions. For instance, agent A can use its beliefs about agent B to evaluate the motivational relevance and novelty of an event to agent B. When evaluating appraisal dimensions relating to self, the agent uses its beliefs about itself. When the observing agent has multiple mental models about another agent, it uses the mental model with the highest probability to simulate appraisal.
4.3.2 Appraisal Dimensions
This section provides pseudocode for evaluating the five appraisal dimensions, motivational relevance, motivational congruence or incongruence, accountability, control and novelty, using the states/utilities calculated during the agent's decision-making process.
4.3.2.1 Motivational Relevance & Motivational Congruence or Incongruence
Motivational relevance evaluates the extent to which an encounter touches upon personal goals, and motivational congruence or incongruence measures the extent to which the encounter thwarts or facilitates those personal goals [116].
These appraisal dimensions are modeled as a product of the agent's utility calculations, which are integral to the agent's decision-theoretic reasoning. The ratio and the direction of the relative utility change are used to model these two appraisal dimensions. The rationale behind this algorithm is that the same amount of utility change will result in different subjective experiences depending on the agent's current status. For instance, if eating a person increases the wolf's utility by 10, it will be 10 times more relevant and motivationally congruent if the wolf's original utility is 1 (very hungry) compared to an original utility of 10 (less hungry). Algorithm 5 gives the pseudocode for evaluating motivational relevance and motivational congruence or incongruence. When the calculated value of Motivational Congruence is negative, the event is motivationally incongruent to the agent, to the extent of abs(Motivational Congruence).

Algorithm 5 Motivational Relevance & Motivational Congruence

# preUtility: utility before the event happens
# curUtility: utility after the event happens

Motivational_Relevance ← abs((curUtility - preUtility) / preUtility)

Motivational_Congruence ← (curUtility - preUtility) / abs(preUtility)
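A direct Python transcription of Algorithm 5, with the hungry-wolf example from this section worked through (assuming, for illustration, utilities that are positive before the event):

def appraise(pre_utility, cur_utility):
    relevance = abs((cur_utility - pre_utility) / pre_utility)
    congruence = (cur_utility - pre_utility) / abs(pre_utility)
    return relevance, congruence

print(appraise(1.0, 11.0))   # very hungry wolf: (10.0, 10.0)
print(appraise(10.0, 20.0))  # less hungry wolf: (1.0, 1.0)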
4.3.2.2 Accountability
Accountability characterizes which person deserves credit or blame for a given event [116]. Various theories have been proposed for assigning blame/credit, e.g. [124, 98]. The reasoning usually considers factors such as who directly caused the event, whether the person foresaw the result, and whether the person intended the outcome or was coerced.

Figure 4.4: Accountability Reasoning

Figure 4.4 illustrates the algorithm for determining accountability. It first looks at the agent which directly causes the harm/benefit, and judges whether this agent is the one who should be fully responsible. The function Is_Coerced is called to determine whether the agent was coerced. If the agent was not coerced, it is judged as being fully responsible for the result and the reasoning stops there. Otherwise, the algorithm finds the coercers of the agent, and tests whether they were coerced by someone else in turn. The algorithm traces a limited number of steps back into the history to find all the responsible agents. During this process, the algorithm assumes that the agent which is doing the appraisal expects others to foresee the effects of their actions. (This assumption is made to simplify the calculation process. It is true for most physical actions, i.e. a person can safely assume others can foresee the consequences of their physical actions as much as he/she can. It may not hold when the action affects the characters' social relationships.)
Algorithm 6 determines whether an agent is coerced, and Algorithm 7 checks whether one agent coerced another. Coercion is determined using a quantitative model: if, other than the agent's current choice, all other options would lead to a drop in its utility, i.e. the agent would be punished if it chose any other action, the agent is judged as being coerced. The special case is when all of the agent's action options would decrease its utility, i.e. the agent would be punished no matter what it chooses to do; in that case the agent is not considered to be coerced.

³ This assumption is made to simplify the calculation process. It is true for most physical actions, i.e. a person can safely assume others can foresee the consequences of their physical actions as much as he/she can. This assumption may not hold when the action affects the characters' social relationships.
Algorithm 6 Is_Coerced(actor, pact)
1: # actor: the agent being judged
2: # pact: the action performed by actor
3: # preUtility: actor's utility before doing pact
4: # S: actor's state before doing pact
5:
6: for each action in actor.actionOptions do
7:   if action != pact then
8:     # if there exists another action which does not hurt actor's own utility
9:     if Reward(action, S) ≥ preUtility then
10:      return False
11:
12: if Reward(pact, S) < preUtility then
13:   return False  # every option is punished: not coercion
14: return True
15:
16: # Reward(action, state): the reward of doing action when the agent is in the state state
Algorithm 7 Is_Coercer(p_coercer, actor, p_coercer_pact, actor_pact)
1: # p_coercer: the potential coercer
2: # p_coercer_pact: the action performed by p_coercer
3: # actor_pact: the action performed by actor
4:
5: for action in p_coercer.actionOptions do
6:   if action != p_coercer_pact then
7:     Simulate action
8:     # what would actor's choice be if p_coercer did action instead
9:     new_act ← Lookahead()
10:    if Is_Coerced(actor, new_act) == False then
11:      return True
12: return False
To decide who coerced an agent, each agent that acted immediately before the onset of the event is treated as a potential coercer. A potential coercer is judged to be an actual coercer if it could have avoided the other agent being coerced by making a different choice. This process is shown in Algorithm 7. In turn, a coercer may be coerced by someone else; the accountability reasoning (Figure 4.4) traces a limited number of steps back to find all the agents that are responsible.
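The two coercion tests can be rendered compactly in Python. The sketch below is an illustrative reading of Algorithms 6 and 7; reward and still_coerced are assumed helpers standing in for the agent's reward lookup and for replaying the story under a counterfactual choice.

def is_coerced(action_options, pact, state, pre_utility, reward):
    """Sketch of Algorithm 6. reward(action, state) is an assumed helper
    returning the actor's expected reward for that action in that state."""
    for action in action_options:
        if action != pact and reward(action, state) >= pre_utility:
            return False  # a harmless alternative existed: not coerced
    if reward(pact, state) < pre_utility:
        return False      # every option is punished: not coerced either
    return True

def is_coercer(coercer_options, coercer_pact, still_coerced):
    """Sketch of Algorithm 7. still_coerced(action) is an assumed callback
    reporting whether the actor would still be coerced had the potential
    coercer chosen `action` instead of coercer_pact."""
    return any(not still_coerced(action)
               for action in coercer_options if action != coercer_pact)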
4.3.2.3 Control
The appraisal of control evaluates the extent to which an event or its outcome can be influenced or controlled by people [93]. It captures not only the individual's own ability to control the situation but also the potential for seeking instrumental social support from other people. For estimating an agent's level of control, its alternative mental models about others are considered.
Algorithm 8 Control(preUtility)
1: # preUtility: the agent's utility before falling into the unfavorable situation
2:
3: control ← 0
4: for each m1 in mental models about agent1 do
5:   for each m2 in mental models about agent2 do
6:     # project limited steps into the future using this set of mental models
7:     Lookahead(m1, m2)
8:     if curUtility ≥ preUtility then
9:       control ← control + P(m1) * P(m2)
10: return control
11:
12: # P(m): the probability associated with the mental model m
Algorithm 8 gives the pseudocode for evaluating control. The algorithm first looks at whether there is a favorable outcome within each combination of mental models about self and the other agents, and then weighs that outcome by the probability that the mental models are correct, and therefore that the predicted event will actually happen in the future. For example, assume Granny has two mental models about the wolf: a 60% chance that the wolf will die after the hunter shoots at it, and a 40% chance that it will not. Granny also has two mental models regarding the hunter's location: a 50% chance that the hunter is close by and therefore has a chance to rescue her, and a 50% chance that the hunter is far away. After Granny is eaten by the wolf, the only event that can help her is the wolf being killed by the hunter. Therefore, her control is 60% × 50% = 30%. Note that when using information generated in past reasoning processes for evaluating control, those reasoning processes, though they happened in the past, must contain information regarding the current moment and the future; otherwise, control cannot be evaluated. The pseudocode in Algorithm 8 is written for a three-agent interaction scenario; it is straightforward to modify it for scenarios in which more or fewer agents are involved.
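The probability-weighted test in Algorithm 8 generalizes naturally to any number of other agents. The sketch below uses illustrative names, with lookahead as an assumed callback returning the projected utility under one combination of mental models, and reproduces Granny's 30% figure:

from itertools import product

def control(mental_model_sets, pre_utility, lookahead):
    """Sketch of Algorithm 8 generalized to any number of other agents.
    mental_model_sets: one list of (model, probability) pairs per agent.
    lookahead(models) is an assumed callback projecting the future under
    that model combination and returning the agent's resulting utility."""
    total = 0.0
    for combo in product(*mental_model_sets):
        models = [m for m, _ in combo]
        prob = 1.0
        for _, p in combo:
            prob *= p
        if lookahead(models) >= pre_utility:
            total += prob
    return total

# Granny's example: only the (wolf dies, hunter close) combination can
# rescue her, so control = 0.6 * 0.5 = 0.3.
wolf_models = [("wolf dies", 0.6), ("wolf survives", 0.4)]
hunter_models = [("hunter close", 0.5), ("hunter far", 0.5)]
rescued = lambda ms: 1 if ms == ["wolf dies", "hunter close"] else -1
print(control([wolf_models, hunter_models], 0, rescued))  # 0.3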
4.3.2.4 Novelty
Novelty indicates whether the event is expected given the agent's past beliefs [93]. In Thespian, novelty appraisal is treated as a byproduct of the agent's belief maintenance. Specifically, in a multi-agent context the novelty of another agent's behavior can be viewed as the opposite side of the observing agent's beliefs about the other agent's motivational consistency, i.e. the more consistent the event is with the observing agent's beliefs, the less novel it is. In Thespian, novelty is defined as 1 − consistency, where consistency is calculated as in [45]:
Consistency(a_j) = e^{rank(a_j)} / Σ_j e^{rank(a_j)}        (4.1)
Novelty is calculated based on the most probable mental model that the observing agent has about the actor of the event. The algorithm first ranks the utilities of the actor's alternative actions in reversed order (rank(a_j)). The higher an action's utility rank compared to the other alternatives, the more consistent, and therefore the less novel, performing it appears. For example, if from Red's perspective the wolf performed the action with the second-highest utility among all five alternatives (rank 3 with ranks running 0 to 4), the novelty Red experiences is calculated as 1 − e³/Σ_{j=0}^{4} e^j ≈ 0.77.
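In code, the novelty computation reduces to one minus a softmax over the reversed utility ranks; the sketch below is illustrative (the function name is mine):

import math

def novelty(rank, all_ranks):
    """Sketch of novelty = 1 - Consistency (Equation 4.1): consistency is
    a softmax over the reversed utility ranks of the actor's options."""
    consistency = math.exp(rank) / sum(math.exp(r) for r in all_ranks)
    return 1 - consistency

# The wolf's action has the second-highest utility of five alternatives
# (rank 3 when ranks run 0..4), giving novelty of about 0.77.
print(round(novelty(3, range(5)), 2))  # 0.77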
4.3.3 Examples
This section provides two additional examples of modeling appraisal in social interactions. Example I shows how appraisal reflects the agent's depth of reasoning in decision-making, and therefore demonstrates the benefit of modeling emotion and decision-making in a unified decision-theoretic framework. Example II provides a complex situation for accountability reasoning and shows that the result of Thespian's model is consistent with another validated computational model of social attribution.
4.3.3.1 Example I: Small Talk
In this example, two persons (A and B) take turns talking to each other. Both of them have goals to be talkative and to obey social norms. In fact, the norm-following behavior itself is an incentive to them: they are rewarded when their actions are consistent with social norms. Table 4.2 lists the two persons' appraisals of motivational relevance regarding each other's actions. The results of other appraisal dimensions are not included as they are less interesting in this scenario.
Table 4.2: Small Talk Between Two Persons
Step  Action                  Perspective  Lookahead Steps  Motivational Relevance
1     A greets B              B            1                0
                              B            2                100
2     B greets A              A            1                0
                              A            2                0.99
3     A asks B a question     B            1                0
                              B            2                0.99
4     B answers the question  A            1                0
                              A            2                0.49
5     A asks B a question     B            1                0
                              B            2                0.49
6     B answers the question  A            1                0
                              A            2                0.33
Table 4.2 provides a comparison of appraisal results when the person's decision-making process has different depths. It can be observed that a person appraises the other person's initiatives as irrelevant when he/she does shallow reasoning (lookahead steps = 1). In this case, even though the person has predicted the other person's action, the action does not bring him/her immediate reward. Once the person reasons one step further, he/she finds out that by opening up a topic the other person provides him/her a chance to engage in further conversation and respond with a norm-following action. The person will then appraise the other person's actions as relevant. Further, the motivational relevance of the same event decreases over time. Without modeling emotion and decision-making in a tightly integrated manner as in Thespian, these effects would not naturally emerge.
4.3.3.2 Example II: Firing-squad
I implemented this scenario from [63] to illustrate accountability reasoning in which agents are coerced and have only partial responsibility. The scenario goes like this:

"In a firing-squad, the commander orders the marksmen to shoot a prisoner. The marksmen refuse the order. The commander insists that the marksmen shoot. They shoot at the prisoner and he dies."
The commander is modeled as an agent with an explicit goal of killing the prisoner, and the marksmen are modeled as not having any particular goals related to the prisoner but being punished if they do not follow the commander's order. Using Thespian's appraisal model, from the prisoner's perspective, the marksmen hold responsibility for his/her death because they are the persons who directly perform the action. Further, the prisoner simulates the decision-making process of the marksmen and finds out that the marksmen are coerced, because their utilities would be hurt if they chose any action other than shooting. The commander acts right before the marksmen in the scenario and therefore is identified as a potential coercer of the marksmen. Using Algorithm 7, the prisoner can see that if the commander had chosen a different action, the marksmen would not have been coerced to shoot. Assuming the prisoner does not find a coercer for the commander, he/she will now decide that the commander holds full responsibility for his/her death. This result is consistent with the prediction from Mao's model of social attribution and the data collected from human subjects to validate that model [63].
Chapter 5
Directorial Control
Interactive narratives are designed to support and encourage user interactions. On one hand, the author wants to create a rich interactive environment and provide the user the sense of being in control. On the other hand, the author often also wants to enforce certain consistency across users' experiences. For example, assume that in the Little Red Riding Hood story the user plays the role of the wolf. The author may not want the wolf to die until the wolf finds out where Granny lives. However, in character-centric designs, the author does not have direct control over the development of the story. The author can set the virtual characters' goals and initial conditions to influence how they act in the story, but this usually does not provide enough control. For example, it is hard to ensure that the user will not run into the hunter before he/she finds out where Granny lives just by setting the initial conditions or goals of the hunter. To achieve such an effect, directorial control is needed: the characters need to be adaptively coordinated during the interaction.
This chapter discusses the challenges of applying directorial control and presents Thespian's approach [105]. The success of directorial control depends on both the algorithms used by the director agent and the design of the story world. Section 7.2 presents the results of evaluating the effectiveness of the director agent in the Little Red Riding Hood domain.
5.1 Design Challenge and Approach
The function of the director agent is to monitor the progress of the story and adjust the virtual characters' behaviors if necessary to achieve the directorial goals. Thespian agents are goal-based agents; their behaviors are decided by their states, beliefs and goals. In situations where multiple actions are of equal utility to a character and one of those actions is preferred for achieving the directorial goals, the director agent can directly request the character to choose that action. In general, however, adjustments to an agent's behavior require adjustments to its goals or beliefs/states. Therefore, the basic challenge for directorial control is how to ensure that the characters have consistent motivations throughout the interaction while modifying their goals and beliefs/states. Further, the characters' motivations should also be consistent with the author's portrayals of the characters in the story paths used for configuring (fitting) the agents (see Section 6.1 for more details).

Thespian utilizes a specialized agent, a director agent, to realize directorial control. To address the design challenge, the basic assumption is that the user does not have a precise mental model of the characters, because the user's observations in the story
do not support such precision. Typically a range of configurations of a character can be used to explain the character's behaviors. Therefore, modifying a character's goals and beliefs does not necessarily lead to the user experiencing broken character motivations. The boundary of the range, i.e. how precise the user's mental model of the character is, is determined by the user's observations of its prior behaviors and the user's prior beliefs about the character.

The director agent determines whether changes to a character's goals will result in a broken character using an algorithm based on Thespian's fitting procedure. The fitting procedure tests whether consistent character motivations can be inferred from one or more story paths (see Section 6.1 for details). If the answer is yes, the agent's goal weights are tuned according to the story paths. In this case, the agent's motivations are configured so that the actions indicated in the story paths have higher utility than the agent's other action options in those situations. Thus, the agent will choose those actions if the user's actions follow the path. Usually the result of fitting an agent to a set of story paths is a space of possible goal settings, each of which can motivate the agent to act as the author specified in the story paths. From the perspective of applying directorial control, as long as the modifications to the characters' motivations fall within the space resulting from fitting the characters' past behaviors, the user will not experience inconsistency in the characters' motivations.
Similarly, to tweak a character's belief/state, the director agent needs to ensure that the alternative belief/state is reasonable to the user. The user has expectations about the characters' states/beliefs, which are modeled using the user agent. When an agent's belief/state needs to be changed, the director agent seeks to find a reasonable explanation for the alternative belief/state: an event that can lead the character to have the alternative belief/state and that, though not observed by the user, could possibly have happened in the past. For example, assume the user plays the role of the wolf, and when he/she first met Red, Red had a cake. The user agent's belief about whether Red has the cake will not change unless the user sees Red giving the cake to someone. However, when the user meets Red again, the director agent can either let Red have the cake or not, depending on the needs of directorial control, because there is a reasonable explanation for why the cake disappears: the user can assume that Red has given the cake to Granny.

To apply directorial control, the author needs to specify a set of directorial goals: regardless of the user's actions, certain constraints should always be satisfied or certain conditions should always be met during the interaction. The rest of this chapter presents the syntax for specifying directorial goals, the director agent's algorithms, and step-by-step examples of applying directorial control.
5.2 Directorial Goals
Directorial goals are used by the author to indicate how he/she wants the story to progress, such as when an action should happen, or when a character should change its belief about another character. Thespian supports directorial goals expressed as a combination of temporal and partial order constraints on the characters' actions and beliefs (including the user's). Table 5.1 lists the syntax for specifying directorial goals.
Table 5.1: Syntax for Specifying Directorial Goals
orders = [event1, event2]              event2 should happen after event1
earlierThan = [event, step]            event should happen before step steps of interaction
laterThan = [event, step]              event should happen after step steps of interaction
earlierThan2 = [event1, event2, step]  event2 should happen within step steps after event1 happened
laterThan2 = [event1, event2, step]    event2 should happen after step steps after event1 happened
NoObjIfLater = [event, step]           if there is a constraint that requires event to happen, and event has not happened after step steps of interaction, the constraint is no longer valid
The events in the syntax can be either an action, e.g. "wolf-eat-Granny", or a character's belief, e.g. "wolf: wolf's hunger = 0" (the wolf believes that the value of the wolf's state feature hunger is 0). "anybody" can be used in defining actions in directorial goals. It indicates that the corresponding field of the action can be filled with any character, e.g. "anybody-kill-wolf".

Currently, six different types of goals are supported. orders specify partial order constraints on the events in the story. earlierThan and laterThan specify temporal constraints on the events, i.e. how early or how late an event should happen. earlierThan2 and laterThan2 combine partial order constraints and temporal constraints. Finally, NoObjIfLater is a special type of temporal constraint; it augments the time span of other constraints.
Table 5.2: Directorial Goals Example I
orders = [[wolf-eat-Granny, anybody-kill-wolf],
          [wolf-eat-anybody, wolf: wolf's hunger = 0]]
earlierThan = [[wolf-eat-Red, 50], [wolf-eat-Granny, 80]]
earlierThan2 = [[wolf-eat-Granny, anybody-kill-wolf, 30]]
laterThan2 = [[wolf-eat-Red, wolf-eat-Granny, 10]]
NoObjIfLater = [[wolf-eat-Red, 60]]
Table 5.3: Directorial Goals Example II
orders = [[wolf-eat-Granny, anybody-kill-wolf]]
earlierThan = [[anybody-talkAboutGranny-wolf, 20],
               [wolf-eat-Red, 50],
               [wolf-eat-Granny, 80]]
laterThan2 = [[anybody-talkAboutGranny-wolf, wolf-eat-Granny, 50]]
The author can combine any number of goals defined using this syntax to specify his/her design of the story. For example, Tables 5.2 and 5.3 are two sets of goals that can be applied to the same story. Ideally, when the first set of goals is applied, events in the story will happen at a more even pace than when the second set is applied. Thus, different user experiences can be created using the same design of the characters but different directorial goals.
5.3 Director Agent
Unlike other agents, the director agent is not mapped to an on-screen character. The director agent also has accurate beliefs about all other agents, including their beliefs about each other. In contrast, for modeling narratives it is often necessary for characters to have incorrect beliefs about each other.
The director agent has a model of the user, which assumes that the user identifies with the character and adopts the character's goals to a certain degree, but may also have self-centric goals such as exploring the environment. Directorial control is performed based on the director agent's expectations about the user's experience and choices.
5.3.1 Overall Workflow
When the director agent is used, it takes over the other agents' decision-making processes, decides the best moves for the story, and causes the other agents to perform the corresponding actions. The director agent works by projecting into the future using its beliefs about the user and the other characters, and testing whether the future development of the story is consistent with the author's directorial goals. If the director agent can foresee a violation, it will try to tweak the virtual characters' configurations and behaviors to prevent the violation from happening. This process happens in the director agent's simulation of the future; it allows the director agent to adjust the virtual characters' configurations ahead of time to prevent the violation from happening.

Algorithm 9 contains the pseudocode for the overall workflow of directorial control. The director agent maintains a list of objectives that it will try to reach as soon as possible. Each objective indicates the desirability of an event, such as "hunter-kill-wolf is desirable" or "Red: Red's dead = 1 is undesirable". Initially, this list is empty. Each time the user performs an action, the director agent simulates lookaheadSteps steps of interaction into the future (line 8) and examines whether the future development of the story is consistent with the author's directorial goals. The director agent adds the corresponding objectives to the list if it foresees violations (line 9).
Algorithm 9 Directorial_Control()
1: objectives ← [ ]
2: bestOption ← [ ]
3: minViolation ← ∞
4: futureSteps ← [ ]
5: for each i in Range(Num_of_Tests) do
6:   if objectives != [ ] then
7:     Adjust_Config(objectives)
8:   futureSteps ← Lookahead(lookaheadSteps)
9:   objectives ← Test_Violation(futureSteps)
10:  if Length(objectives) < minViolation then
11:    bestOption ← futureSteps
12:    minViolation ← Length(objectives)
13: return bestOption
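Rendered in Python, the outer loop of Algorithm 9 looks roughly as follows; adjust_config, lookahead and test_violation are assumed callbacks standing in for the functions described in this chapter, and the early exit on zero violations reflects the stopping rule discussed later in this section.

import math

def directorial_control(num_tests, lookahead_steps,
                        adjust_config, lookahead, test_violation):
    """Illustrative sketch of Algorithm 9's outer loop."""
    objectives, best_option, min_violation = [], [], math.inf
    for _ in range(num_tests):
        if objectives:
            adjust_config(objectives)        # tweak goals/beliefs first
        future_steps = lookahead(lookahead_steps)
        objectives = test_violation(future_steps)
        if len(objectives) < min_violation:  # keep the best plan so far
            best_option = future_steps
            min_violation = len(objectives)
        if min_violation == 0:               # satisfactory solution found
            break
    return best_option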
Table 5.4 lists the objectives that will be created in function Test_Violation when each type of directorial goal is violated. In general, if a partial order constraint is expected to be violated, the director agent tries to prevent the latter event from happening; if a temporal constraint is expected to be violated, the director agent tries to arrange for the event to happen or not happen according to the constraint. For example, given the directorial goals described in Table 5.2, if the wolf has not eaten Red by the 50th step of the interaction, a violation happens, and the objective "wolf-eat-Red is desirable" is added. As the last step of Test_Violation, the NoObjIfLater goals are applied: if there is an objective for event to happen, and step steps of interaction have already passed, the objective is removed.
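A simplified version of this violation test can be sketched as follows; the goal encoding and function names are assumptions for illustration, and only the orders and earlierThan goal types are handled.

def test_violation(future_steps, goals, cur_step):
    """Illustrative sketch of Test_Violation for two goal types.
    future_steps: the projected story path as a list of event strings."""
    objectives = []
    for event1, event2 in goals.get("orders", []):
        # partial order violated: try to prevent the latter event
        if event2 in future_steps:
            if event1 not in future_steps[:future_steps.index(event2)]:
                objectives.append((event2, "undesirable"))
    for event, step in goals.get("earlierThan", []):
        # deadline would pass without the event: arrange for it to happen
        if event not in future_steps and cur_step + len(future_steps) >= step:
            objectives.append((event, "desirable"))
    return objectives

goals = {"orders": [("wolf-eat-Granny", "anybody-kill-wolf")],
         "earlierThan": [("wolf-eat-Red", 50)]}
print(test_violation(["anybody-kill-wolf"], goals, 49))
# [('anybody-kill-wolf', 'undesirable'), ('wolf-eat-Red', 'desirable')]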
When the list of objectives is not empty (line 6 in Algorithm 9), the director agent tries to tweak the characters' configurations to reach the objectives. Function Adjust_Config (Algorithm 10) tweaks the characters' goals and beliefs¹ to induce actions, or to prevent actions from happening, as indicated in the objectives.

¹ Thespian agents make decisions based on their beliefs, which reflect how the state of the world changes. However, tweaking the agents' states alone will not have any effect on their behaviors.
Table 5.4: Objectives if Directorial Goals are Violated
Violated Goal                          Desirable Actions  Undesirable Actions
orders = [event1, event2]                                 event2
laterThan = [event, step]                                 event
earlierThan = [event, step]            event
earlierThan2 = [event1, event2, step]  event2
laterThan2 = [event1, event2, step]                       event2
Preference is given to adjusting the characters' goals for achieving the objectives, because an additional step of searching for reasonable explanations for the new beliefs is needed if the director agent wants to adjust the characters' beliefs. Function Adjust_Config first tries to fit the characters' goals to achieve the objectives. If none of the objectives can be reached this way, it tries to tweak the characters' beliefs (lines 3-6) and then fit the characters' goals again.

As part of the process for adjusting the characters' beliefs, whether a belief change can reasonably happen is tested, and a way to make the belief change happen is proposed. The same process is used for reaching objectives that specify constraints on the characters' beliefs/states. The details of this process are given in Algorithm 12 and the related discussion. The rest of this section, however, is organized around how to achieve objectives regarding the characters' actions.
A special case for Function Adjust_Config occurs when the objectives involve the user's actions (either desirable or undesirable); the director agent must then affect the user's decisions by changing only his/her beliefs.

After making all the adjustments, the director agent again tests whether the directorial goals will be violated in future interactions using lookahead projection (lines 8-9 in Algorithm 9). This iterative process stops when a satisfactory solution has been found or the maximum number of attempts has been reached, in which case the virtual characters will act according to the futureSteps with minimal violations of the directorial goals.
Algorithm 10 Adjust_Config(objectives)
1: fitting_result ← Fit_To_Objectives(objectives)
2: if fitting_result == False then
3:   beliefChanges ← Find_Suggestions(objectives)
4:   for each beliefChange in beliefChanges do
5:     if Find_Explanation(beliefChange) then
6:       Apply_Belief_Changes(beliefChange)
7:       Fit_To_Objectives(objectives)
8:
9: # Find_Suggestions(objectives): returns suggested belief changes for reaching the objectives
5.3.2 Fit Characters to Objectives
Function Fit_To_Objectives (Algorithm 11) fits the agents' goals to the objectives. For each objective that contains a desirable action, the function tests whether it is possible for the action to happen. For each objective that contains an undesirable action, the function tests whether it is possible for the actor to do something else, possibly nothing. The function also tests whether doing an action is consistent with the author's design of the character by considering the story paths designed by the author for configuring the characters.

Function Fit_Sequence* in Algorithm 11 is a slightly modified version of Thespian's fitting procedure. Here, it is acceptable for the utility of the desired action to equal the utilities of other actions. In this case, even though the agent is not guaranteed to choose
Algorithm 11 Fit_To_Objectives(objectives, history, desired_paths)
1: # history: interaction history
2: # desired_paths: paths designed by the author for configuring the characters
3:
4: success ← False
5: for each objective in objectives do
6:   actor ← objective.action.actor
7:   S0 ← actor.initState
8:   if objective.desirable == True then
9:     paths ← desired_paths + history.append(objective.action)
10:    return Fit_Sequence*(S0, actor, paths, actor.fixedgoals)
11:  else
12:    for each action in actor.actionOptions do
13:      if action != objective.action then
14:        paths ← desired_paths + history.append(action)
15:        if Fit_Sequence*(S0, actor, paths, actor.fixedgoals) then
16:          success ← True
17: return success
18:
19: # Fit_Sequence*(S0, actor, paths, fixedgoals): returns whether actor can be fitted to paths without changing the weights of its goals included in fixedgoals (see Algorithm 13 for details)
the action, which is not ideal for configuring the agent, the action is still a reasonable choice given the character's goals.

For characters that do not have distinct personalities/motivations in the story, the constraints on utility can be further relaxed to accommodate more action options for the director agent. In the extreme case, the director agent can directly order the characters to perform actions without testing whether the actions are consistent with the characters' motivations. In this case, priority is given to plot design over character design. The author can specify a different balance between plot design and character design by adjusting the constraints.
5.3.3 Adjust Characters' Beliefs
When Fit_To_Objectives alone cannot achieve any objectives, the director agent tries to tweak the characters' states or beliefs² (lines 2-3 in Algorithm 10).
"Suggest" is a heuristic function provided by PsychSim. It suggests changes to an agent's configuration so that the agent will prefer the author's desired choice of action over its original choice. "Suggest" achieves this functionality in almost the opposite way from fitting. In fitting, the relative goal weights of the agent are adjusted, and the agent's beliefs and state are untouched. The "Suggest" function suggests changes to the agent's beliefs without affecting its goals. For example, to make the wolf not eat Granny, fitting may result in the wolf character having a low goal weight on not being starved, while the "Suggest" function may give the following solution: make the wolf believe that it is not hungry (instead of being hungry). The author then needs to arrange an event that can create the belief change to happen in the story before the wolf needs to make its decision about eating Granny. Further, "Suggest" is a heuristic function: the suggestions make the achievement of objectives possible, but do not guarantee it. To test the effect of applying the belief changes, one needs to either simulate lookahead projections of the characters or try to fit the characters.
Function Find_Suggestions in Function Adjust_Config (Algorithm 10) calls PsychSim's "Suggest" function for each objective and for each character, and merges the results into one list, beliefChanges. Because beliefChanges is a collection of the belief changes suggested for achieving all the objectives, there may be conflicts among them.

² Thespian agents make decisions based on their beliefs, which reflect how the state of the world changes. However, tweaking the agents' states alone will not have any effect on their behaviors.
Algorithm 12 Find_Explanation(beliefChange)
1: # history: interaction history
2: # desired_paths: list of story paths designed by the author for configuring the characters
3:
4: for each character in story do
5:   if character != user then
6:     S0 ← character.initState
7:     for each action in character.actionOptions do
8:       paths ← desired_paths + history.append(action)
9:       if Fit_Sequence*(S0, character, paths, character.fixedgoals) then
10:        if Dynamics(action) == beliefChange then
11:          return True
12: return False
For example, Red may be suggested both to have and not to have the cake for achieving different objectives. In addition, certain belief changes cannot happen because of special constraints in the story. For example, the director agent cannot move Granny's house next to the wolf to help the wolf find the house. In function Apply_Belief_Changes, by default, the suggested belief change is applied. The author can also supply domain-specific knowledge about which belief changes are infeasible, as well as heuristics for alternating the belief changes that cannot coexist among the director agent's different rounds of tweaking the characters and testing.

In Function Adjust_Config, whether the belief changes are reasonable needs to be tested before they can be applied. Function Find_Explanation (Algorithm 12) looks for an action that could reasonably happen at the moment or in the past and that can explain the belief change. If such an action exists, even though the user does not see the action happening, the user may assume that it happened when he/she was not there. Therefore, changes in the characters' beliefs will not seem sudden or unnatural to the user (see Example II in Section 5.4 for an example). Of course, the belief change cannot be caused
by a user's action, because the user knows what he/she has done in the past. Further, if the director agent wants the user's belief to change, it needs to arrange for the event that causes the belief change to actually happen. For example, for the user to believe that the hunter is close by, the director agent needs to let the hunter appear at the user's location. In general, all the suggested belief changes need to be tested by Find_Explanation. The author can make exemptions by specifying state features whose values are not important, for example, the locations of characters whom the user cannot see.
5.4 Examples
This section provides two step-by-step examples of applying directorial control. Example I starts with a potential violation of directorial goals being detected: the wolf will eat Red before Red gives the cake to Granny. The corresponding objective is then added to the list: the director agent "wants" the wolf to choose an action other than eating Red. Since the wolf is played by the user, the director agent skips fitting the wolf's motivations for achieving the objective, and directly tries to change the characters' beliefs. It finds that if the wolf believes that the hunter is close by, the wolf will choose a different action than eating Red. Assuming the belief change has happened, the director agent tests for potential goal violations again with lookahead projection. This time the director agent expects no violations. It then proceeds by applying the belief change, in this case by physically relocating the hunter, and ordering the characters other than the wolf to act according to its last lookahead projection. After the user responds, the director agent will start another round of directorial control, beginning with detecting potential goal violations.
Example I
Lookahead projection: "wolf-eat-Red", "Red-doNothing", "hunter-walk" ...
Detect goal violation: order: ["Red-giveCake-Granny", "wolf-eat-Red"]
Add objective: ["wolf-eat-Red", "undesirable"]
Adjust beliefs: hunter's location = wolf's location
Simulate user's lookahead: "wolf-run"
Request characters to act: "hunter-walkTowards-wolf"
Example II
Lookahead projection: "wolf-walk", "Red-walk", "hunter-walk" ...
Detect goal violation: earlierThan2: ["wolf-eat-Granny", "Red-kill-wolf", 30]
Add objective: [("Red-kill-wolf", "desirable")]
Fit characters to the objectives: failed to fit Red
Adjust beliefs: Red's location = wolf's location; Red's power > wolf's power
Find explanation: "hunter-giveGun-Red" → Red's power > wolf's power
Fit characters to the objectives: succeeded, "Red-kill-wolf"
Request characters to act: "Red-walkTowards-wolf"
The basic procedure in Example II is the same as in Example I. In this example, the director agent starts by trying to make Red kill the wolf. It fails to fit Red's motivations. It then proceeds to adjust the characters' beliefs. It finds that when Red is next to the wolf and believes that she is stronger than the wolf, Red can be fitted to do the action. The director agent has also tested whether the belief changes are reasonable. Find_Explanation returns that in order for Red to believe that she is stronger than the wolf, the hunter should have given Red a gun, and this is a reasonable action for the hunter. The director agent therefore decides that the belief changes to Red are feasible.

In both of these examples, the director agent is able to achieve the directorial goals. In general, directorial control does not always succeed. The success of directorial control depends on both the algorithms used by the director agent and the model of the story. An evaluation of the effectiveness of directorial control is included in Section 7.2.
Chapter 6
Authoring Procedures
In Thespian, the authoring process happens in an iterative fashion, as illustrated in Figure 6.1. To design an interactive narrative, the author needs to define the story world (the characters' actions and action dynamics), set the initial configurations of the characters, e.g. their states, goals and beliefs, and set up the goals for directorial control. Manually setting these parameters, testing the interactive narrative, and adjusting the parameters to ensure that the characters behave appropriately during the interaction is an extremely time-consuming process.

To aid the author in designing interactive narratives, Thespian provides automatic procedures both for configuring the agents and for testing whether the interactive narrative (the characters and the director agent) behaves consistently with the author's design when facing a variety of users. These authoring procedures are aimed at automating the authoring process and hiding the technical difficulties from the author.
Thespian also supports reuse of authored materials (story world elements and characters) from existing interactive narratives. Being able to reuse story world elements can significantly cut development effort when building interactive narratives. It especially benefits non-technical authors, because they can use elements from existing story worlds (if applicable) instead of writing their own. Moreover, by plugging characters into new stories, the author can creatively explore possible developments of the stories.

Figure 6.1: Thespian's Authoring Cycle

This chapter presents Thespian's authoring procedures and how authored materials can be reused in new stories.
6.1 Fit Characters to Story Paths
To start designing the characters, the author needs to set the characters' motivations based on their roles in the story. To help automate this process, Thespian provides a fitting procedure that allows the author to configure the characters' goals by providing alternative linear paths of the story (sequences of actions). The fitting procedure automatically extracts the constraints on the characters' goals from the story paths and determines whether consistent goal preferences of the agents can be inferred [82, 102], i.e. whether there is a goal preference setting that can drive the characters to act as specified in the story paths. If the result is yes, the agent's goal weights are tuned according to the story paths. Otherwise, fitting fails. If fitting succeeds, the agents' autonomous behaviors will follow the story paths when the user's behavior is consistent with what the author crafted for the paths. When the story deviates from the story paths, the characters will use the same motivations they "learned" from the paths to drive their behaviors.

Before running the fitting procedure, the author needs to set the initial conditions, including the goal weights, for all of the characters. These initial values will be used to set the initial beliefs the characters have about each other. The goal weights do not need to be accurate, since the fitting process will automatically adjust them.
Algorithm 13 Fit_Sequence(S0, char_name, seq, fixedgoals)
1: # S0: initial state of char_name
2: # char_name: character whose role is to be fitted
3: # seq: time sequence of actions (the hand-crafted path)
4: # fixedgoals: goals whose weights should not be changed in this process
5: # C: constraints on goal weights
6:
7: state ← S0
8: C ← [ ]
9: for each A in seq do
10:  # update state
11:  state ← state × Dynamics(A)
12:  if A.actor == char_name then
13:    # adding constraints
14:    for each action in char_name.actionOptions do
15:      new_C ← Reward(A, state) ≥ Reward(action, state)
16:      C.Append(new_C)
17: return Adjust_Goal_Weights(char_name, C, fixedgoals)
18:
19: # Dynamics(action): action dynamics
20: # Reward(action, state): the reward of doing action when the agent is in the state state
21: # Adjust_Goal_Weights(char_name, constraints, fixedgoals): returns whether the character's goal weights, except the weights of the fixedgoals, can be adjusted so that all the constraints are satisfied
In fitting, Thespian proceeds iteratively for each story path, fitting the goals of one agent at a time and holding all other agents' goals fixed. Specifically, for each story path and each character, Algorithm 13 is invoked to fit the character's motivations so that it prefers the actions as crafted in the story path over its other options. This function takes a parameter called fixedgoals, which specifies goals whose weights should not be changed in fitting. For example, being alive should always be an important goal for Red. The algorithm proceeds down the sequence of actions in the story path (line 9). If the character is the actor of the current action (line 12), the fitting process simulates the agent's lookahead process and calculates constraints on the goal weights to ensure that the desired action receives the highest utility among all candidate actions (lines 14-16). Thus each action in the story path effectively eliminates any goal weight values that would cause the agent to choose some other action instead.
For example, assume the wolf in the Little Red Riding Hood story has only two goals: staying alive and avoiding starvation. The wolf is currently hungry and alive, so its level of not starving equals 0 and its level of being alive equals 1. If it eats Red, its level of not starving will be increased to 0.5. But the woodcutter is close by, and will kill the wolf if he sees Red being eaten. So eating Red will drop the value of the wolf's being alive to 0. In this case, to make the wolf prefer eating Red over doing nothing, Inequality 6.1 needs to be true. Inequality 6.1 specifies a constraint on the relative weights of the two goals. This constraint, when satisfied, ensures that the utility of eating Red is higher than the utility of doing nothing in the character's current context.
0.5 × W_{not starving} + 0.0 × W_{being alive} > 0.0 × W_{not starving} + 1 × W_{being alive}        (6.1)
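As a quick numeric check of Inequality 6.1 (the utilities are taken from the wolf example above; the function itself is only illustrative):

def prefers_eating_red(w_not_starving, w_being_alive):
    # Eating Red yields (not starving = 0.5, being alive = 0.0);
    # doing nothing yields (not starving = 0.0, being alive = 1.0).
    eat_red = 0.5 * w_not_starving + 0.0 * w_being_alive
    do_nothing = 0.0 * w_not_starving + 1.0 * w_being_alive
    return eat_red > do_nothing

print(prefers_eating_red(0.8, 0.2))  # True:  0.40 > 0.20
print(prefers_eating_red(0.5, 0.5))  # False: 0.25 < 0.50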
At the end of the fitting process, the constraints resulting from fitting each path are merged into one common constraint set. Typically, there are multiple candidate goal weight values that are consistent with the preferred story paths. Any one of these solutions guarantees that the characters will follow the paths designed by the author. Thespian can pick one of these solutions according to its own heuristic, which is to choose goal weights as close to the original ones as possible. It also gives the author the option of manually selecting goal weights from the constrained set.
A character's goal weights after fitting are usually different from the initial values set by the author. This difference can lead to discrepancies between a character's actual personality and another character's mental model of it. The author can synchronize the models by repeating the fitting step with the agents' beliefs set to the actual personality. However, characters do not necessarily need exact knowledge about other characters, or about themselves, to exhibit the author's desired behaviors. In fact, it can be dramatically interesting when characters do not have accurate models of each other. For example, in the Little Red Riding Hood story, it is Red's incorrect belief that the wolf has no goal of harming people that leads to the later dramatic events of the story: Red tells the wolf where Granny lives and the wolf goes to eat Granny.
6.2 Simulate Potential Users for Testing Interactive Narrative
When designing a new interactive narrative, the author needs to repeatedly test the interactive narrative system and refine his/her designs. However, because user interaction creates a huge number of paths through the story, performing such tests manually is impossible in most cases. To help the author examine whether the interactive narrative system acts according to his/her expectations when facing a variety of users, Thespian provides automated evaluation procedures which can systematically simulate potential users' interactions with the interactive narrative system, filter the simulated interaction histories, and prompt the author to verify the virtual characters' behaviors in interactions that seem problematic [104]. The author's feedback is then incorporated into the next iteration of the authoring process.

Thespian has an explicit model of the user for simulating user interactions. This section describes user modeling in Thespian and presents Thespian's automated approaches for simulating interactions with potential users. The basic user simulation procedure can generate all the story paths that may be encountered by a well-motivated user, i.e. a user who has consistent motivations during the interaction. This process may be time-consuming. The basic approach can also be configured to simulate only "prototypical" users. The prototypes can either be designed by the author or be derived by discretizing the user agent's possible goal space.
6.2.1 Model Potential Users
Thespian models the user as a decision-theoretic, goal-driven agent similar to the other characters in the story. Users' different interaction styles are reflected in their different priorities over the set of goals they have and their different beliefs about other characters.

Similar to how other characters are designed, the user agent's initial values of state features, action dynamics and beliefs are based on the character being modeled. Most of this information is either common-sense knowledge, e.g. social norms, or conveyed to the user before the interaction through a background story, e.g. action dynamics, beliefs about self and other characters. Therefore, this part of the model is fixed for simulating all types of users unless the author purposely leaves some information uncertain to the user, e.g. tells the user that there is a 50% chance that the wolf is a good character and will not eat people.
The user's goals are normally not limited to the role he/she takes in the story. The user may have various personal goals and interaction styles, which can also be cast as goals in Thespian agents. In particular, exploring the environment and talking with the virtual characters are two common personal goals of users regardless of the story. In Thespian, the user is modeled as having these two goals by default. Table 6.1 lists the user agent's goals assuming the user plays Red in the Little Red Riding Hood story.
Table 6.1: An Example of the User Agent's Goals
Category                             Example Goals
Goals of the Character in the Story  keep self alive
                                     give cake to Granny
                                     ...
Personal Goals                       explore the environment
                                     converse

6.2.2 Simulate Interactions with Potential Users

The basic user simulation procedure generates all the story paths that can be encountered by a well-motivated user. This procedure works in a stepwise fashion. At each turn of the user, it finds all of the user actions that are well-motivated at the moment. Next, for each of those actions, the other characters' responses are simulated. When it is the user's turn to act again, the same process is repeated until the length of the interaction for this simulation is reached.
Algorithm 14 Generate_All_Paths(S0, user, fixedgoals, remainingSteps, existPath)
1: # S0: initial state of user
2: # fixedgoals: goals whose weights should not be changed in this process
3: # remainingSteps: steps left to simulate
4: # existPath: events that have already happened
5:
6: for each action in user.actionOptions do
7:   pathnew ← existPath.append(action)
8:   res ← Fit_Sequence*(S0, user, pathnew, fixedgoals)
9:   # if this is a possible user action
10:  if res == True then
11:    # simulate other characters' responses to the user's action
12:    other_characters_action ← Lookahead()
13:    pathnew ← pathnew.append(other_characters_action)
14:    remainingSteps_new ← remainingSteps − 1
15:    if remainingSteps_new > 0 then
16:      Generate_All_Paths(S0, user, fixedgoals, remainingSteps_new, pathnew)
17:    else
18:      Output_to_File(pathnew)
Algorithm 14 shows the pseudocode for this process. When the function is called to generate story paths starting from the very beginning of the story, existPath is empty. If the author wants to simulate possible story paths starting from the middle of the story, existPath should contain all the actions that have already happened by that time.
Note that unlike the fitting procedure used for tuning the characters' motivations, in which actions with utility equal to that of other actions are regarded as not well-motivated, here those actions are treated as well-motivated. This is because this procedure tries to simulate all actions the user could possibly choose instead of ensuring that certain actions will be picked by an agent. In fact, the author can make this simulation process more tolerant of inaccurate user modeling by passing an epsilon value to the fitting procedure to relax the Adjust_Goal_Weights function in Fit_Sequence*. This way, actions are treated as well-motivated even if their utilities are slightly lower than the maximum utility the agent might achieve via some other action.
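The relaxed test amounts to an epsilon tolerance on utilities. A minimal sketch, with the function name and utility lookup assumed for illustration:

def well_motivated(action, options, utility, epsilon=0.0):
    """An action counts as well-motivated if its utility is within
    epsilon of the best utility over all action options."""
    best = max(utility(a) for a in options)
    return utility(action) >= best - epsilon

utilities = {"walk": 1.0, "greet": 0.95, "eat-Red": 0.2}
print(well_motivated("greet", utilities, utilities.get, epsilon=0.1))    # True
print(well_motivated("eat-Red", utilities, utilities.get, epsilon=0.1))  # False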
To simulate potential users with different goals and different mental models about other characters, the author needs to supply the users' alternative mental models about others. To do so, the author can either design the alternative mental models by hand or derive them using automated means, e.g. [84]. Thespian will then set the user agent's beliefs according to each of the models (as shown in [84], the number of possible models in a story environment is limited), and call the Generate_All_Paths procedure to generate all story paths that can be encountered by users with that mental model. Finally, the story paths generated by simulating users with different mental models are merged.

Simulating the full range of possible user interactions can be time-consuming. It is often sufficient to test a system's behaviors with prototypical users. The prototypes can be designed by the author. Alternatively, Thespian can model prototypical users by systematically varying the user agent's preference over its goals. The author can specify
his/her granularity preference, which defines how fine the discretization is. For each set of potential goals of the user, the Generate_All_Paths procedure can be used to generate all the possible story paths the user may encounter when having those goals. While calling this process, the fixedgoals parameter needs to be set to include all of the agent's goals. This way the Fit_Sequence* function will only test whether the action is motivated by the agent's current goals, and will not try to adjust the agent's goal weights.
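The discretization of the goal space into prototypes can be sketched as follows; the granularity and the normalization scheme are illustrative assumptions, not Thespian's exact procedure.

from itertools import product

def goal_weight_prototypes(goal_names, granularity=3):
    """Discretize each goal weight into `granularity` levels and normalize
    each nonzero combination into a preference profile."""
    levels = [i / (granularity - 1) for i in range(granularity)]
    for combo in product(levels, repeat=len(goal_names)):
        total = sum(combo)
        if total > 0:
            yield dict(zip(goal_names, (w / total for w in combo)))

protos = list(goal_weight_prototypes(["keep self alive", "explore"]))
print(len(protos))  # 8 nonzero weight combinations at granularity 3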
6.2.3 Filter Generated Story Paths
The number of story paths resulting from simulating potential users is huge. To select story paths to present to the author, Thespian needs to be able to automatically judge which story paths are likely to be problematic and therefore deserve the author's attention.

Thespian can use both its default heuristics and author-specified criteria for filtering story paths. Currently, Thespian provides two default heuristics. The first one picks story paths in which the virtual characters repeat the same behaviors more than a certain percentage of the time, e.g. 75%. The second heuristic selects story paths in which the virtual characters repeat the same behavior continuously more than a certain number of times, e.g. 3 times. These two heuristics are designed based on the observation that it is usually not a good interactive experience if the virtual characters always respond to the user with the same actions. Both heuristics are configurable by the author. The author can specify the names of the characters to be watched (by default all the virtual characters' behaviors are included in the analysis), the thresholds for reporting, e.g.
instead of 75% the threshold can be lowered to 60% for the first heuristic, and the actions to be watched, e.g. paths will be selected only if the wolf repeats "do nothing" all the time. In addition to the default heuristics, the author can specify the achievement (or lack of achievement) of a set of directorial goals as additional filtering criteria, for example, pulling out story paths in which the wolf eats Granny and is not killed by the end, or pulling out story paths in which Red believes that her affinity with the wolf is close at the finale. The paths selected by different criteria may overlap with each other. As the last step, the filtering procedure merges the paths selected by all the criteria and presents the distinct paths to the author.
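The two default heuristics can be sketched as a single path filter; the thresholds and the path representation (a list of "actor-action" strings) are assumptions made for illustration.

from collections import Counter

def flag_path(path, watched="wolf", ratio=0.75, max_run=3):
    """Flag a story path if the watched character repeats one action more
    than `ratio` of the time, or more than `max_run` times in a row."""
    acts = [a for a in path if a.startswith(watched + "-")]
    if not acts:
        return False
    if max(Counter(acts).values()) / len(acts) > ratio:
        return True
    run = longest = 1
    for prev, cur in zip(acts, acts[1:]):
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)
    return longest > max_run

print(flag_path(["wolf-doNothing", "Red-walk"] * 4))  # True: 100% repeats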
To complete the authoring process, the author needs to review the selected paths and make modifications to the design of the interactive narrative if the paths differ from his/her expectations. For example, in one of the paths generated in the examples given in Section 6.2.4, the hunter arrives at the place where Red and the wolf are. The hunter would have killed the wolf, but Red keeps on talking to him. While the hunter is entrapped in the conversation with Red, the wolf runs away. To correct this, the author can design a new story path in which the hunter kills the wolf in that situation. Alternatively, the author can change the initial settings of the characters, the directorial goals, or the design of the story world. The author can make multiple changes to correct all the problems he/she can observe in the paths. Next, the newly corrected story paths, together with the originally designed story paths, are passed to the fitting procedure to reconfigure the virtual characters, and the potential users' interactions need to be simulated again. The authoring process ends when the simulated users' "experiences" are consistent with the author's design.
6.2.4 Authoring Example
This section provides a complete example of authoring an interactive narrative. The story of Little Red Riding Hood is used as the example domain. The authoring process consists of three steps.

Step 1: Tune characters' goals to story paths specified by the author. The following story path is passed to the Fit_Sequence function.

Red greets the wolf. → The wolf greets her back. → Red tells the wolf that she is on her way to visit Granny, and Granny's location. → The wolf says bye to Red. → Red says bye to the wolf. → Both Red and the wolf walk away. → Red and the wolf meet outside of Granny's house. → The wolf eats Granny. → The hunter arrives. → The hunter kills the wolf. → Granny escapes from the wolf.
Step 2: Simulate potential users' behaviors. Two simulations are conducted. The first one simulates users with alternative mental models of other characters. In this simulation, the user is assumed to play the Red character. Users with two different mental models of the wolf are simulated: the wolf has a goal of eating people vs. the wolf has no such goal. The second simulation simulates prototypical users. Two prototypes are simulated: the user has a dominant goal of giving the cake to Granny, and the user has a dominant goal of not giving the cake to Granny (this is the case where the user does not adopt the character's goals). When simulating each of these prototypes, the user agent's other goal weights are set to the results of fitting in Step 1.
Table 6.2: Results of Simulating Users with Alternative Mental Models
Mental model      Possible paths  Selected by heuristics  Selected by plot point
the wolf is good  2456            181                     72
the wolf is evil  1837            95                      56
Table 6.3: Results of Simulating Prototypical Users
Mental model      Want Granny to have cake  Possible paths  Selected by heuristics  Selected by plot point
the wolf is good  Yes                       1688            98                      68
the wolf is good  Opposite                  2318            175                     72
the wolf is evil  Yes                       1513            86                      56
the wolf is evil  Opposite                  1729            92                      56
Step 3: Filter the generated story paths and present them to the author. When filtering the paths, the wolf is selected as the only character to be examined, because in this story all the other characters are expected to have repeated actions, e.g. the hunter moves around searching for the wolf most of the time. In addition to the default filtering heuristics, the achievement of the following plot point is also used as a filtering criterion: the wolf eats Granny and is not killed by the end of the story. Summaries of the results are given in Tables 6.2 and 6.3.

As shown in Tables 6.2 and 6.3, in this domain simulating a 5-round interaction with a well-motivated user results in thousands of possible story paths, and the filtering procedure trims off 90% of the paths before presenting the remainder to the author for review. Altering the filtering criteria will affect the number of paths the author needs to check. The author can balance between having thorough knowledge and control over
the possible interactions and spending reasonable time on debugging by adjusting what is included in the filter criteria.

In Section 7.2, additional examples of user simulation and filtering are presented. Though the main reason for conducting those experiments is to evaluate the effectiveness of the director agent, the evaluation processes also demonstrate how an author can set the filtering criteria to gain knowledge about the performance of interactive narrative systems. In those examples, the filtering criteria are the achievement of the author's directorial goals.
6.3 Reuse Authored Materials
Thespian supports reuse of characters and story world elements in different stories [102]. It separates the effort of designing a story world from the effort of configuring the characters and setting the directorial goals of the story. A story world contains story-specific knowledge, including action dynamics that define how characters' actions affect the world and the characters' states and beliefs. A character's personality/motivation is expressed as its goals and their relative importance. It is independent of the story world and the possible actions the character can carry out. Therefore, existing characters can easily be plugged into other stories to play a similar role, or be combined into a richer character. Similarly, components of story environments, i.e. action dynamics, can be shared among multiple stories.

The main issue in reusing a character in a new story is that not all of the character's goals may be relevant in the new story. If the new story does not have actions related
to a goal, satisfying it is impossible. The author can address this issue in several ways.
The author can simply drop the goals that are not relevant in the new story, or replace
them with similar ones that are relevant. Alternatively, the author can include the nec-
essary actions and dynamics functions in the new story for supporting the goals. These
operations may or may not keep the character's personality intact, depending on which
goals are dropped or replaced, and what actions are added to the story. Sections 6.3.2
and 6.3.3 give two examples of moving characters to new stories.
To create characters with richer capabilities and personalities, the author can combine
multiple characters into one single character. To do so, the author can simply fit
an agent over multiple stories. The agent will learn different aspects of its "personality"
from its roles in the different stories. When its roles in different stories share common
goals, the agent will try to learn a motivation/personality that is consistent with all of
the roles. If the agent cannot find such a motivation, i.e. when fitting fails, the characters
conflict with each other and cannot be directly combined. In this case, the author
can either try to combine fewer characters or specify a priority for which character's
motivation/personality dominates the shared goals.
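The following sketch illustrates the idea of fitting one agent over multiple stories, using a brute-force search over coarse goal-weight vectors and representing each role's behavioral requirements as a predicate. Thespian's actual fitting operates over its agent models; everything below is an illustrative stand-in.

# Sketch: search for one goal-weight vector consistent with every role.
import itertools

GOALS = ["safety", "follow-norms", "help-others"]

def candidate_weights(steps=5):
    """Enumerate coarse weight vectors over the simplex."""
    for combo in itertools.product(range(steps + 1), repeat=len(GOALS)):
        if sum(combo) == steps:
            yield dict(zip(GOALS, (c / steps for c in combo)))

def fit_over_roles(role_constraints):
    """Return a weight vector consistent with all roles, or None if the
    roles conflict and the characters cannot be directly combined."""
    for w in candidate_weights():
        if all(ok(w) for ok in role_constraints):
            return w
    return None

# Role A wants safety to dominate; role B wants norms to matter at least some.
role_a = lambda w: w["safety"] > w["follow-norms"]
role_b = lambda w: w["follow-norms"] >= 0.2
print(fit_over_roles([role_a, role_b]))   # a consistent personality exists
print(fit_over_roles([role_a, lambda w: w["follow-norms"] > w["safety"]]))  # None: conflict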
The second way of combining characters is to directly put goals from different characters
together as the new character's goals. This requires that the original characters not
have shared goals. This method leaves the relative goal weights of the original characters
unchanged. While combining the goals, the author can also control which original
character's personality will dominate. Different from the first method, the priority is applied
over the full range of the goals, rather than just the shared goals.
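A minimal sketch of this second method follows, assuming each character is a dictionary of goal weights; the dominance parameter is an illustrative way to scale one character's influence over the full range of goals.

# Sketch of direct goal merging with a dominance factor; illustrative only.

def merge_characters(primary, secondary, dominance=1.0):
    """primary/secondary: dicts of goal -> weight with disjoint keys.
    dominance < 1 shrinks the secondary character's influence."""
    if set(primary) & set(secondary):
        raise ValueError("direct merging requires disjoint goal sets")
    merged = dict(primary)
    merged.update({g: w * dominance for g, w in secondary.items()})
    total = sum(merged.values())
    return {g: w / total for g, w in merged.items()}  # renormalize

fish = {"stay-alive": 0.7, "be-released": 0.3}
norms = {"follow-norms": 1.0}
print(merge_characters(fish, norms, dominance=0.25))  # fish personality dominates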
Following are three examples of reusing authored materials. The two stories described
in Sections 2.3.3 and 2.3.1.1 are used as example domains.
6.3.1 Example I: Create a New Character from Existing Characters
This example explores combining two non-conflicting characters by directly combining
their goals and action dynamics. The Fish story, as described in Section 2.3.3, has high-level
dialogue with little in the way of normal social interaction, such as greeting and
thanking. In contrast, the dialogue in the TLTS stories (described in Section 2.3.1.1) has
these social interactions. To equip the fish character with basic social skills, social-norm
related goals, dynamics functions, and state features are imported from the TLTS story.
The norm-related goals will drive the fish to initiate these social behaviors and respond
properly to other characters' social contact.
To create different behaviors, the fish character's old goals and the newly added goals
are weighted with different relative importance. An overly polite fish is created when the
norm-related goals are given high importance. In this case, the fish, which was caught
by the fisherman, greets and introduces itself before asking the fisherman to release it,
even though being out of water can endanger its life. Even after being released, the fish
politely thanks and bids goodbye to the fisherman. With another set of weights, in which
the new goals receive lower weight, the fish asks to be released first.
6.3.2 Example II: Move a Character to a New Story
Using the same method as described in the last example, all the characters in the Fish
story are made capable of basic social interactions. Then the old man character in the
TLTS story is imported into the Fish story to take the role of the fish. The result of
this operation is that the fish satisfies all requests from the fisherman, which is consistent
with the old man's personality of being cooperative.
6.3.3 Example III: Move a Character to a New Story with Some of the
Character's Goals Dropped
In this example, the fisherman character is imported into the TLTS story to take the
role of the young man. The fisherman's goals that are not supported in the TLTS story
are dropped, including the goal of pleasing his wife. The resulting young man does not
question the sergeant's identity even when the sergeant is not completely polite, because
the fisherman does not have the goal of keeping the town safe.
Chapter 7
Evaluation
The evaluation of the Thespian framework has been conducted at multiple levels. To
test its generality across applications, Thespian has been applied to authoring nearly a hundred
virtual characters in more than thirty interactive narratives. In particular, the Tactical
Language Training Project is a large-scale project designed for rapid language and culture
training. Thespian was used to author interactive narratives for the user to practice
foreign language and culture skills. Thespian has also been used to model fables such as
Little Red Riding Hood and the Fisherman and his Wife. Section 2.3 provides details on
some of Thespian's example domains.
Various components for modeling socially aware characters and facilitating authoring
have been individually evaluated. Thespian contains domain-independent models of
social norms and emotion. These models have been individually validated with examples
and against other validated models (see Sections 4.2 and 4.3 for details). The effectiveness
of simulating the user as a way to test an interactive narrative is shown in Section 6.2.4.
In this chapter, two empirical studies are included to evaluate the basis of the Thespian
framework. As stated in Section 1.3, Thespian's basic assumption for interactive narrative
is that the design of the characters and the design of the events can be treated as distinct
processes that constrain each other. The characters' behaviors must consistently portray
the characters' motivations, and adjustments to those behaviors by the director
agent must maintain that consistency. Otherwise, the user may not be
able to understand his/her experiences, and therefore the author's plot design goals may
not be achieved either.
These two evaluations are designed to address the importance of the basic assumption
and to investigate, given the priority of consistent character motivations, how often the
director agent can successfully create the author's desired effects when facing a variety of
users.
7.1 Importance of Well-motivated Characters
Thespian's basic approach to interactive narrative is to use well-motivated and socially
aware characters as the basis for generating the interaction, and to constrain the characters'
behaviors with the plot design of the story. The underlying assumption of this
design is that the characters must be interpretable to the user for the story to be experienced
as designed. Inconsistency in the characters' motivations can confuse the user
and therefore affect the user's expectations and interpretations of other events. As a result,
the users' experiences become less controlled. Following this assumption, Thespian
by default gives priority to maintaining consistent character motivations when there
is a conflict between the design of the characters and the design of the events.
Table 7.1: Conditions in Evaluation I
Directorial Goal I Directorial Goal II
Consistent Motivations Condition I Condition II
Inconsistent Motivations Condition III Condition IV
To evaluate the importance of this assumption, an empirical study was undertaken to
test whether inconsistency in a main character's motivations affects the user's experience
and understanding of the story, given that everything else in the story, e.g. the directorial
goals and other characters' behaviors, is the same.
7.1.1 Experimental Design
The study utilized a 2 by 2 between-group design. The Little Red Riding Hood story
was used as the example domain, and the user played the role of the wolf. Two factors
were varied: what the author wants the user to experience (the directorial goals), and
whether all characters have consistent motivations during the interaction. Each of
the independent variables had two variations. The virtual characters' motivations were
either kept consistent or allowed to be inconsistent during the interaction. The two sets
of directorial goals listed in Tables 7.2 and 7.3 were used for defining the target dramatic
effects. These two sets of goals were designed to create two different types of stories, as
illustrated in Figures 7.1 and 7.2 respectively. The first set of directorial goals will ideally
create stories in which the user's dramatic experience contains multiple exciting moments,
and the climax is reached at the end of the story. The second set of directorial goals tries
to restrict the user's experience of dramatic moments to two: a smaller spike at the
beginning and the climax at the end of the story.
Table 7.2: Directorial Goals I
orders = [[wolf-eat-Granny, anybody-kill-wolf],
          [Red-giveCake-Granny, wolf-eat-Red],
          [Red-giveCake-Granny, wolf-eat-Granny]]
earlierThan = [60: [anybody-talkAboutGranny-wolf], 90: [wolf-eat-Red], 120: [wolf-eat-Granny]]
earlierThan2 = [(wolf-eat-Granny, 30, [anybody-kill-wolf])]
NoObjIfLater = [95: [wolf-eat-Granny]]
laterThan = [wolf-eat-Granny: 90, wolf-eat-Red: 60]
laterThan2 = [(wolf-eat-Red, 10, wolf-eat-Granny)]
Table 7.3: Directorial Goals II
orders = [[wolf-eat-Granny, anybody-kill-wolf],
          [Red-giveCake-Granny, wolf-eat-Red],
          [Red-giveCake-Granny, wolf-eat-Granny]]
earlierThan = [30: [anybody-talkAboutGranny-wolf], 120: [wolf-eat-Red], 120: [wolf-eat-Granny]]
earlierThan2 = [(wolf-eat-Granny, 30, [anybody-kill-wolf])]
laterThan = [wolf-eat-Granny: 90, wolf-eat-Red: 90]
laterThan2 = [(anybody-talkAboutGranny-wolf, 50, wolf-eat-Granny)]
Figure 7.1: Curve I
Figure 7.2: Curve II
The dependent variables are the type of story experienced by the subjects and
their understanding of the characters' motivations and relationships. To simplify the
data collection process, the subjects did not interact directly with the interactive narrative
system. Instead, they watched animated interaction histories and were instructed
to imagine that they were the user. The subjects' answers about what they think the
user's experience is are used as an estimate of a real user's experience. Alternatively,
the subjects could have directly interacted with the interactive narrative system. However, in
that case a subject's data would only be valid when the interaction was consistent with
the directorial goals, and directorial control does not always succeed; in some cases the
success rate can be quite low (see [106] for details).
The data were collected through a post-test survey. χ² tests are used for examining
whether the subjects' answers in different conditions are statistically different.
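For readers unfamiliar with the test, the following minimal example computes a χ² test of independence on a 2x2 contingency table using SciPy (which is not part of Thespian); the counts are hypothetical and are not data from this study.

# Minimal chi-square test of independence on hypothetical counts.
from scipy.stats import chi2_contingency

observed = [[40, 20],   # Condition A: chose Curve I, chose Curve II
            [25, 35]]   # Condition B: chose Curve I, chose Curve II
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.3f}, p = {p:.3f}, dof = {dof}")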
7.1.2 Procedure
This study was conducted online. Subjects between the ages of 18 and 40 were recruited
via the Internet. The recruiting ad was posted on Craigslist.com and other similar advertising
websites. The users were informed that one winner would be automatically selected
from every 40 subjects. Each winner was awarded a $50 gift card.
The whole study takes around 15 minutes to complete. Subjects were randomly assigned
to one of the four conditions. In all the conditions, the subjects first read a
background story, then read/watched an animated story, and finally filled out a questionnaire.
The subjects were instructed to imagine, while reading/watching the story, that
they were the user playing the wolf character. The background story was the same
for all the conditions, and the animated story varied by condition.
7.1.3 Materials
The background story provides the subjects with basic information about the characters'
motivations and abilities, such as why the wolf wants to eat Granny and who is capable of
killing the wolf. It also informs the subjects that this is a twisted version of the Little
Red Riding Hood story, so that the subjects can infer that the characters' motivations
and relationships may differ from those in the original story. To help the subjects
identify with the wolf character, the background story portrays the wolf as a good
character who wants to protect the woods, and the humans as intruders into
the woods.
To prepare the stories in which all the characters' motivations are consistent, the
experimenter interacted with the system, simulating typical users. Thespian's director
agent was given the goals listed in Tables 7.2 and 7.3, respectively, to generate the
stories needed for Condition I and Condition II. One representative interaction history,
in which directorial control succeeds, was selected for each condition.
Outputs from Thespian are text-based stories composed of dialogue acts. To make
them more natural for the user to read, a Java applet was developed to automatically convert
the dialogue acts to surface sentences, show the story, and illustrate it with pictures,
i.e. show each piece of text with its corresponding picture.
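The following sketch illustrates the kind of template-based conversion the applet performed, in Python rather than Java; the dialogue-act names and templates are illustrative, not the actual applet's rules.

# Sketch of dialogue-act-to-sentence conversion via manual templates.

TEMPLATES = {
    "greet":            "{actor} says hello to {target}.",
    "enquiry-location": "{actor} asks {target} where Granny lives.",
    "inform-location":  "{actor} tells {target} where Granny lives.",
}

def realize(act):
    """act: (actor, act_type, target) -> an English sentence."""
    actor, act_type, target = act
    template = TEMPLATES.get(act_type, "{actor} does '" + act_type + "' to {target}.")
    return template.format(actor=actor.capitalize(), target=target.capitalize())

story = [("wolf", "greet", "red"), ("wolf", "enquiry-location", "red"),
         ("red", "inform-location", "wolf")]
for act in story:
    print(realize(act))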
The stories used in the inconsistent character motivations conditions (Conditions III
& IV) are replications of the corresponding stories used in the consistent character motivations
conditions, with the actor of one key event replaced by another character whose
motivation is broken by conducting the action. More specifically, in this experiment the
event of Red telling the wolf about Granny's location is replaced with the hunter telling
the wolf the same information. The same Java applet is used to show the stories in the
inconsistent character motivations conditions.
The post-test questionnaire contains two parts. In the first part, the subjects were
asked to choose from the two curves (Figures 7.1 and 7.2) the one that better describes
the protagonist's (the wolf's) dramatic experience during the story.
In the second part, the subjects need to fill out a short survey consisting of eight
questions based on what they have been shown so far: the background story and the
animated story. These questions are designed to collect information on the subjects'
comprehension of the story: their beliefs about the motivations of the characters, the
relationships among the characters, and the characters' predictions about other characters'
behaviors, which inform the subjects' understanding of the characters' relationships. Each
question is a statement, such as "The hunter will kill the wolf whenever he gets a chance."
The subjects need to indicate whether the statement is true, false, or they cannot decide
based on what they know.
7.1.4 Hypotheses
The main hypothesis of this study is that the users' experiences and comprehension of
the story are affected by the consistency of the characters' motivations. More specifically,
it is hypothesized that regardless of the intended design of the story, in the inconsistent
conditions (Conditions III & IV) more subjects will choose the two-spike curve (Figure 7.2)
to describe the wolf's experience. This is because when the virtual characters'
motivations are inconsistent, the event that reveals the inconsistency becomes a significant
event of the story. Together with the final scene, in which a character dies, this
leads to two major dramatic moments in the story.
This hypothesis is based on the assumption that the subjects will notice the inconsistency
in the hunter character's motivations. However, it is not uncommon for the
audience to be tolerant of broken characters [48]. Therefore, the post-test survey tests
whether the inconsistency is detected by the subjects.
The survey contains eight questions, which can be divided into two groups. The
baseline questions (questions 2, 4, 5, 6, 8) ask about the relationships between the wolf,
the woodcutter, Red and Granny. Their answers should be the same across all conditions.
The rest of the questions (questions 1, 3, 7) are the "experimental" questions. They ask
about the relationships between the wolf, the hunter and Granny. In the inconsistent
conditions (Conditions III & IV), the hunter voluntarily provides Granny's location to
Table 7.4: Number of Subjects in Each Condition
Condition I Condition II Condition III Condition IV
Number of Subjects 76 69 67 74
the wolf. If the subjects paid attention to this unusual event, they are likely to feel
confused about the characters' relationships, and to believe that the hunter has a wrong
expectation about the wolf or that the hunter has a bad relationship with Granny. Therefore,
the subjects' answers to the "experimental" questions indicate whether the inconsistency
in the hunter's motivations is noticed, which would affect the prediction of the main
hypothesis.
Finally, no hypothesis is made about how the design of the story (the directorial goal)
affects the subjects' understanding of the characters' motivations and relationships.
7.1.5 Results and Discussion
The data for this study were collected from 286 internet users in the United States over a
4-week period. The number of subjects assigned to each condition is listed in Table 7.4.
The results of this evaluation confirm the hypotheses. The subjects did notice the
inconsistency in the hunter's motivations. Further, when the characters' motivations are
inconsistent, more subjects chose the two-spike curve (Figure 7.2) to describe the wolf's
experience, regardless of the design of the story. In addition, the subjects' answers to the
baseline questions in the post-test survey reveal an unexpected but interesting trend: the
subjects in the inconsistent character motivations conditions understand some aspects
of the story that are not related to the inconsistency better than the subjects in the
consistent character motivations conditions. The details of the results and the discussion
are presented below.
7.1.5.1 Subjects' Experiences of the Story
Table 7.5 summarizes the subjects' choices of dramatic experience in each condition. It
can be observed that inconsistent character motivations lead to more subjects choosing
the two-spike curve (Figure 7.2), regardless of the design of the story. The subjects'
choices in Condition I and Condition III are significantly different (χ² = 4.445, p = .04).
Similarly, their choices in Condition II and Condition IV are significantly different (χ² = 10.285, p = .00).
Table 7.5: Subjects' Choices of the Wolf's Experience in Evaluation I
Condition Choose Curve I Choose Curve II
I 66% 34%
II 55% 45%
III 54% 46%
IV 36% 64%
The results in Table 7.5 also show that when the virtual characters' motivations
are kept consistent during the interaction, the first set of directorial goals realized the
author's design of the story better than the second set of directorial goals. In Condition
I, significantly more subjects chose Figure 7.1 (χ² = 7.579, p = .01, compared to a
50%-50% distribution). In Condition II, the subjects' choices were rather random (χ² =
0.71, p = .40, compared to a 50%-50% distribution). Both sets of directorial goals
were designed to create the corresponding dramatic experiences. Multiple reasons may
contribute to this result. One possibility is that the second set of goals is simply not
well designed for creating that experience. It is also possible that because the curve in
Figure 7.1 represents a common view of story structure (the Aristotelian tension curve),
people may tend to believe or expect that most stories have that structure.
7.1.5.2 Subjects' Comprehension of the Story
I will first report results from the experimental questions, and then from the baseline
questions. In reporting the results, the data from the two consistent character motivations
conditions (Conditions I & II) are merged into one group, and the data from the
two inconsistent character motivations conditions (Conditions III & IV) are merged into
another group.
Experimental Questions
The results of the "experimental" questions are exactly as expected. In the inconsistent
character motivations conditions, the hunter not only did not kill the wolf, but also
informed the wolf of Granny's location. As a result, more subjects were confused about
the relationships between the hunter, the wolf and Granny.
Question 1: "The hunter will kill the wolf whenever he gets a chance."
In the inconsistent conditions, more subjects chose "cannot decide", and fewer subjects
chose true. The difference is significant (χ² = 371.47, p = .00).
Question 3: "The hunter and Granny don't get along well."
In the inconsistent conditions, more subjects chose true, and fewer subjects chose false
or "cannot decide". The difference is significant (χ² = 18.37, p = .00).
Figure 7.3: Subjects' Answers to Question 1
Figure 7.4: Subjects' Answers to Question 3
Question 7: "The hunter knows the wolf will always eat people whenever it gets a chance."
In the inconsistent conditions, more subjects chose "cannot decide", and fewer subjects
chose true. The difference is significant (χ² = 26.05, p = .00).
Figure 7.5: Subjects' Answers to Question 7
Baseline Questions
It has been hypothesized that the subjects' answers to the baseline questions are the
same regardless of the experimental conditions. This hypothesis is confirmed by the
subjects' answers to two questions: questions 2 and 5.
Question 2: "The wolf and the woodcutter are friends."
Overall, the subjects' choices in the consistent conditions do not differ from those in
the inconsistent conditions (χ² = 0.05, p = .98). Most of the subjects chose "cannot decide".
Figure 7.6: Subjects' Answers to Question 2
Question 5: "Red didn't expect the wolf to eat people."
Overall, the subjects' choices in the consistent conditions do not differ from those in
the inconsistent conditions (χ² = 3.17, p = .21). The majority of the subjects chose true.
Figure 7.7: Subjects' Answers to Question 5
For questions 4, 6 and 8, the subjects' answers in the inconsistent conditions are in fact
more consistent with Thespian's model of the story. These results could simply be an
artifact. It is also possible that a more deliberate decision-making process was involved
in the inconsistent conditions because the subjects felt confused [32, 30]. None of these
questions directly asks about information provided in the background story or the
animated story. To answer them, the subjects need to make inferences based
on the information they know. A more deliberate decision-making process can help
the subjects understand the story better and therefore make more correct choices.
Following are the details of the results.
Question 4: "Red doesn't like Granny. She went to visit Granny just because her mum asked her to. She would rather Granny die."
In this story, Red is not modeled as disliking Granny. In the inconsistent conditions,
more subjects chose false and fewer subjects chose "cannot decide". The difference is
significant (χ² = 10.36, p = .01). In addition to the two possible explanations listed
above, the fact that Red told the wolf where Granny lives in the consistent conditions
may also contribute to this effect.
Figure 7.8: Subjects' Answers to Question 4
Question 6: "The woodcutter didn't expect the wolf to eat people."
In the inconsistent conditions, more subjects chose true, and fewer subjects chose false
or "cannot decide". The difference is significant (χ² = 17.83, p = .00).
Question 8: "The wolf didn't eat Red in the first place because the woodcutter is close by."
In the inconsistent conditions, more subjects chose true, and fewer subjects chose false
or "cannot decide". The difference is significant (χ² = 6.63, p = .04).
Figure 7.9: Subjects' Answers to Question 6
Figure 7.10: Subjects' Answers to Question 8
In this study, a simple example is used to demonstrate how broken characters can hurt
the achievement of the author's plot design goals. The broken character is easy to fix in
this case: the author can simply add a special rule to the directorial goals to prevent
the hunter from telling the wolf Granny's location. However, detecting all such broken
cases is a real problem. It is impossible for the author to follow each path and check
whether the characters behave appropriately in it. Further, as the author adds more
behavior rules, they may start to conflict with each other. Therefore, the authoring
framework is better designed if it contains character models that can automatically
reason about the characters' motivations while generating their behaviors.
7.2 Effectiveness of Directorial Control
This section provides an evaluation of the effectiveness of Thespian's directorial control
[106]. The evaluation contains two parts. The first part compares the success rates
of achieving a set of directorial goals with and without the director agent under various
conditions. The second part evaluates the director agent with two additional variations of
directorial goals and discusses the factors that may lead to failure in directorial control.
The Little Red Riding Hood story is used as the example domain for this evaluation. The
user plays the role of the wolf. To systematically generate the user's behaviors, the
user was simulated using a Thespian agent.
7.2.1 Simulate the User
In this evaluation, each simulated interaction contains 25 rounds, i.e. both the user and
the virtual characters act 25 times in the story. Because the total number of interactions
grows exponentially with the number of choices available to the user at each step, it is
impossible to simulate all the interactions. Instead, around 200 to 450 distinct interactions
are randomly sampled from the space of possible user interactions.
Algorithm 15 shows the process of simulating the user. This algorithm is an extension
of Algorithm 14, which generates all the story paths that can be encountered by a well-motivated
user. Algorithm 15 allows for randomly sampling the user's interactions. Since
the user's decisions at earlier stages of the interaction have more impact on what will
Table 7.6: Directorial Goals for Evaluation II
orders = [[wolf-eat-Granny, anybody-kill-wolf],
          [Red-giveCake-Granny, wolf-eat-Red],
          [Red-giveCake-Granny, wolf-eat-Granny]]
earlierThan = [[wolf-enterHouse, 90], [anybody-talkAboutGranny-wolf, 30]]
happen in the story, the interactions were sampled as follows. In the first n rounds of
the interaction (line 20), the user agent randomly picked a small number (m) of actions to
simulate whenever it had multiple choices that were all consistent with its motivations (lines
21-25); after the initial n rounds, the user agent randomly chose one action that is
consistent with its motivations to proceed (line 27). The values of m and n are picked so
that the total number of interactions falls within the range [200, 450].
7.2.2 Comparison With and Without the Director Agent
The first part of the evaluation examines the effectiveness of the director agent in achieving
a set of directorial goals under different conditions. Different styles of user interaction were
simulated. The initial settings of the story were also varied.
7.2.2.1 Experimental Design
This part of the evaluation contains two studies. In both studies, the directorial
goals listed in Table 7.6 were set as the director agent's goals.
The first study has four conditions, which are listed in Table 7.7. This study varied the
user's interaction style and the initial settings of the story, and compared the percentages
of the interactions that are consistent with the directorial goals between the conditions
with and without the director agent.
Algorithm 15 Generate All Paths*(S_0, user, remainingSteps, existPath, n, m)
1: # S_0 : initial state of user
2: # user : the name of the user character
3: # remainingSteps : steps left to simulate
4: # existPath : actions that have already happened
5:
6:
7: allOptions ← []
8: curStep ← 25 - remainingSteps
9: fixedgoals ← user.goals
10:
11: for each action in user.actionOptions do
12:     path_new ← existPath.append(action)
13:     res ← Fit-Sequence*(S_0, user, path_new, fixedgoals)
14:     # if this is a possible path
15:     if res == True then
16:         allOptions ← allOptions.append(action)
17:
18: for each action in allOptions do
19:     selected ← False
20:     if curStep ≤ n then
21:         if length(allOptions) ≤ m then
22:             selected ← True
23:         else
24:             # randomly select m options
25:             selected ← random(length(allOptions), m)
26:     else
27:         selected ← random(length(allOptions), 1)
28:     if not selected then
29:         continue
30:     # simulate the user doing the action
31:     simulate(action)
32:     # simulate other characters' responses
33:     other_characters_action ← getResponse()
34:     path_new ← path_new.append(other_characters_action)
35:     remainingSteps_new ← remainingSteps - 1
36:     if remainingSteps_new > 0 then
37:         Generate All Paths*(S_0, user, remainingSteps_new, path_new, n, m)
38:     else
39:         Output to File(path_new)
Table 7.7: Conditions in Evaluation II
Talkative Non-Talkative
Hunter Close Condition I Condition II
Hunter Far Away Condition III Condition IV
Two types of user interaction styles were simulated: talkative and non-talkative. The
agent simulating a talkative user regards talking as a very important goal; only the
goals of safety and not being hungry are more important. As a result, the
agent will actively initiate conversations with other characters and try to maintain the
conversation as long as its more important goals are not impacted; that is, when it sees
no danger of being killed by the hunter or the woodcutter, and it does not have a chance
to eat Red or Granny without being caught. A non-talkative user has a lower goal weight
on talking. It responds to others, but prefers walking around to engaging
in a new conversation. It, too, considers safety and not feeling hungry its two most
important goals.
The initial settings of the story include the user's and the other characters' statuses
and beliefs, such as where they are and what they think their relationships with others
are. These settings affect how likely certain events are to happen in the story. In this study,
the hunter's initial location was varied. The location was set to be either close to where
the wolf, Red and the woodcutter were (these three characters were placed next to each
other at the beginning of the story) or far away from them. In the latter case, the wolf has
more chance to carry on a long conversation with Red or the woodcutter at the beginning
of the story.
In this study, the whole space of interactions was sampled 10 times. Each time,
200 to 450 interactions were randomly simulated.
In the first study, the directorial goals were never achieved when the director agent
was not used. Therefore, it could not be observed how the initial settings of the story
and the user's interaction style affect the achievement of the directorial goals. A second
study was conducted to demonstrate the effects of these factors. In the second study, the
wolf's initial social distance to the other characters was set to be slightly closer. This makes
the other characters more likely to tell the wolf Granny's location. This study has the same four
conditions as the first study. Similarly, the numbers of interactions simulated for each
condition are within the range [200, 450]. In the second study, instead of sampling the
space of possible interactions 10 times, only 1 sample was randomly drawn for each
condition.
7.2.2.2 Results and Discussion
The results of the two studies demonstrate the effectiveness of directorial control. In
general, without the director agent, the interaction histories were not consistent with
the directorial goals. When the director agent was used, dramatically more interaction
histories were consistent with the directorial goals. Further, these studies show that the
user's interaction style and the initial settings of the story can affect the user's experience.
However, their effects are far weaker than the impact of directorial control. The
details of the results are listed below.
Figure 7.11: Success Rates of Directorial Control
Study I
Figure 7.11 shows the percentage of the simulated paths that are consistent with the
directorial goals in each condition, averaging the results from the 10 sets of simulations.
The results from the simulations without the director agent serve as a baseline. In this
study, when the director agent was not used, what happened during the interaction was
almost never consistent with the directorial goals. With the director agent, the success
rates are within 70% to 80%, a large improvement over the baseline.
As further evidence of the effectiveness of the director agent, it can be observed that
the interactions gradually converge to the author-specified directorial goals. The results
shown in Figure 7.11 are the statistics for achieving the directorial goals without any delay
in time. For example, if a directorial goal specifies that "wolf-eat-Granny" should happen
by the 20th step of the interaction, and this action has not happened by that time,
directorial control is considered failed. However, because of the director agent's attempts
to make "wolf-eat-Granny" happen, it may take place some time after.
Figure 7.12: Delay in Achieving Directorial Goals without Director Agent
Figure 7.13: Delay in Achieving Directorial Goals with Director Agent
Figures 7.12 and 7.13 show the statistics for achieving the directorial goals within 5 steps
and 10 steps of delay respectively. It can be observed that the success rates of
directorial control increase as a longer delay is allowed. This only happens when the director
agent is used. Without the director agent, the numbers of interactions that are consistent
with the directorial goals barely differ across conditions.
Multiple reasons may account for the delay in achieving directorial goals. The most
probable one is that the user, who is simulated by a Thespian agent, did not act exactly as
the director agent expected. Thus, though the director agent had set up an environment
for the author's desired effect to happen, it had to wait for the user to take the "right"
actions. For example, the director agent expects that the wolf, played by the user agent,
will eat Red when it meets Red on the road. If the wolf walks away instead, the director
agent has to set up another occasion for them to meet.
Study II
Figure 7.14: Success Rates When the Wolf has Closer Social Distance with Others
In this study, the wolf's initial social distance to the other characters is set to be closer
than in the previous study. The results of this study demonstrate the effect of varying
the user's interaction style and the initial settings of the story on the user's experience.
Figure 7.14 shows that without the director agent, a talkative user is more likely to have
an experience consistent with the directorial goals, especially when the hunter is initially
placed far away from the user. However, this effect is far less powerful than the impact
of using the director agent.
When the director agent is not used, the user's experience is fully determined by the
initial settings of the story and by how the user interacts. The users' experiences can be
very diverse because there is a large space of parameters to be set by the author, i.e. the
initial state and beliefs of each of the characters. When setting up the story, the author
may not fully realize the long-term impact of each setting. For example, it is hard to
estimate how the hunter's initial location affects whether the wolf eats Red before Red
gives the cake to Granny. Further, the user can interact with different styles, which also
affects how the story unfolds.
The director agent can tune the user's experience toward the directorial goals set by
the author. This function, while keeping the user's control of the character (agency),
makes the users' experiences less diverse and less sensitive to the initial settings of
the story and the user's interaction style. The author can thus avoid the time-consuming
process of tweaking the initial settings of the story, greatly saving authoring effort. As
shown in Figures 7.11 and 7.14, the director agent reached similar success rates
in all conditions.
Directorial control is not guaranteed to always succeed. In the next section, an example
of failure in directorial control is provided.
Table 7.8: Directorial Goals Variation I
orders = [[wolf-eat-Granny, anybody-kill-wolf],
          [Red-giveCake-Granny, wolf-eat-Red],
          [Red-giveCake-Granny, wolf-eat-Granny]]
earlierThan = [[wolf-enterHouse, 90], [anybody-talkAboutGranny-wolf, 30]]
laterThan = [[wolf-eat-Red, 90]]
7.2.3 Varying Directorial Goals
In the previous evaluations, the same set of directorial goals was used. In this section,
two additional evaluations are performed with different variations of the directorial goals.
7.2.3.1 Variation I
This evaluation tests the effectiveness of directorial control when the director agent is
given the goals listed in Table 7.8. Compared to the goals in Table 7.6, this set of
directorial goals has an additional temporal constraint: the wolf should not eat Red until
the 90th step of the interaction.
In this study, the user's interactions were sampled 10 times with and without the
director agent. This evaluation was performed at a smaller scale: each time, around 20
paths were sampled. The user's interaction style and the initial setting of the hunter
were not varied. A talkative user was simulated, and the hunter was placed close to the
user's initial location.
The results of this evaluation show that without the director agent, only 17% of the
interactions are consistent with the directorial goals on average. With the director agent,
81% of the interactions satisfy the directorial goals.
Table 7.9: Directorial Goals Variation II
orders = [[wolf-eat-Granny, anybody-kill-wolf]]
earlierThan = [[wolf-enterHouse, 120], [anybody-talkAboutGranny-wolf, 60]]
laterThan = [[wolf-eat-Red, 60]]
laterThan2 = [(wolf-eat-Red, wolf-enterHouse, 10)]
7.2.3.2 Variation II
The procedure for this evaluation is similar to that used in the previous one. The directorial
goals listed in Table 7.9 were applied. This set of goals defines a slightly different style
of story, in which the events happen at a more even pace.
In this evaluation, directorial control failed: only a few simulated paths are consistent
with the directorial goals. On closer examination, it was observed that most of the paths
failed to satisfy the following temporal constraint: "laterThan2 = [(wolf-eat-Red, wolf-enterHouse, 10)]".
"wolf-enterHouse" is required to happen before the 120th step. On
the other hand, there is nothing to ensure that "wolf-eat-Red" will happen before that.
In general, when directorial control fails, it is usually for one of two reasons. The
first possibility is that the directorial goals are designed with a flaw, i.e. when designing
an interactive experience, the author may not fully realize the dependencies among the
different goals. Secondly, the director agent can only foresee and plan a limited number of steps ahead.
It is possible that the director agent does not have enough time to respond after it detects
a potential violation of the directorial goals.
Chapter 8
Open Challenges
The vision of this work is to allow non-technical authors to design interactive narratives
easily. Ideally, an authoring framework will hide the technical difficulties, allowing the
author to concentrate on the creative aspects of the design process and fostering the
author's creativity. Much work remains before realizing such a vision. This chapter
discusses the computational limitations of this work and its future directions.
8.1 Computational Limitations
Thespian uses decision-theoretic goal-based agents with Theory of Mind for modeling
characters in interactive narratives. The agents use a bounded lookahead policy. When
an agent needs to decide its next action, it projects into the future to evaluate the effect
of each of its options. The agent not only needs to consider the immediate effect of
the action, but also the expected responses from other characters. To predict others'
responses, the agent uses its mental models of others to simulate their lookahead
processes. During this simulation, the agent often needs to simulate the other agents in
turn trying to predict the responses to their action options. This is a computationally
expensive process. The amount of computation grows exponentially with the number of
agents in the interaction, the number of their lookahead steps, the number of actions the
agents have, and the depth of the agents' recursive beliefs.
This expensive decision-making process constrains Thespian's ability to model social
interactions and to effectively test interactive narratives. For the characters to
make their decisions in a reasonable amount of time, the number of distinct actions each
agent has is usually limited to no more than twenty, assuming the interaction involves
only two or three characters, including the user. When there are more characters in
the interaction, each character needs to have fewer action options. Usually, each agent
performs three steps of lookahead to make a decision, and its beliefs about others
contain three levels of recursion, e.g. my beliefs about your beliefs about me.
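To give a feel for this growth, the following back-of-the-envelope sketch counts the action sequences explored during lookahead under crude simplifying assumptions: every agent branches over all of its actions at every projected step, and one such projection tree is nested per level of recursive belief. It is a rough upper bound for intuition, not Thespian's actual cost model.

# Back-of-the-envelope lookahead cost; a rough upper bound, not Thespian's model.

def projected_action_sequences(actions, steps, agents, belief_depth):
    per_lookahead = actions ** (steps * agents)   # joint action sequences per projection
    return per_lookahead ** belief_depth          # crude nesting of recursive simulations

print(projected_action_sequences(actions=20, steps=3, agents=3, belief_depth=1))  # 5.12e11
print(projected_action_sequences(actions=5, steps=3, agents=3, belief_depth=1))   # ~1.95e6: fewer actions helps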
Three levels of recursion in an agent's beliefs are usually sufficient to model social
reasoning. However, the constraints on the number of steps in the lookahead projection and
on the number of action options each agent has limit, in practice, the sophistication of the
models used, i.e. it is hard to model characters that both have a large number of action
options and project far into the future when making decisions.
This expensive decision-making process not only affects the modeling of individual
characters, but also the effectiveness of directorial control and the user simulation procedure,
because both of them need to extensively simulate the characters' decision-making
processes (including the user's). In particular, the director agent needs to project further
down the story than each individual character to effectively direct the interaction.
To deal with this complexity, the agents' policies, including the director agent's policies,
can be compiled offline (see [82] for details). In addition, the less important characters
can use a simpler policy, such as doing fewer steps of lookahead and having fewer levels of
recursive beliefs. The author can even supply rule-based policies for them. However, even
with offline compilation, the author is still not free to design characters as complex as
he/she wants, because the character model's complexity will impact the debugging phases,
when the author needs to repeatedly test the characters and refine their configurations.
8.2 Future Directions
Thespian strives to model life-like characters and manage their interactions with the user
to create the author's desired narrative effects. In addition, Thespian aims to provide
the author with a friendly authoring environment that allows control over
both the characters and the events during the interaction without heavy programming or
extensive manual tweaking/testing of the characters or the story. This section discusses
the limitations of Thespian's current approach to interactive narrative and proposes
future improvements.
8.2.1 Model Emotion
The emotions of the characters play an important role in narratives. Thespian models
characters' emotions based on appraisal theory, which defines emotion as the character's
evaluation of its person-environment relationship. However, other factors may also
affect people's emotions and affective states in general, such as their prior emotional state,
memories of past experiences, and the emotions of other people in the scenario. Future
extensions of Thespian should consider capturing some of these factors. Of course, it is
an open question how sophisticated and detailed emotion models need to be for
characters in interactive narratives.
Thespian also does not contain a domain-independent model of how emotion affects
the agent's decision-making and belief revision processes. Emotion affects how people
make their decisions and even whether they choose to believe a piece of information.
Currently, the author can encode the effects of emotion using domain-specific models. A
character's emotional state can be treated as a state feature and can therefore affect
the character's future decisions and beliefs. However, such domain-dependent models
cannot be easily generalized to different characters and different stories.
8.2.2 Model Actions
Media based on linear narrative often realize rich dialogue and behavior in their characters.
Thespian, however, models the characters' actions at a level of abstraction. In particular,
dialogue acts are used to represent the characters' speech. Thespian does not contain
a sophisticated sub-system for converting between dialogue acts and the actual sentences
the characters speak. In practice, either manually encoded rules or an external system is
used for the conversion. Moreover, Thespian does not plan the characters' animations,
and therefore has no control over how the characters say the sentences or perform the
actions. Additional modules are needed for these functions.
8.2.3 Model Characters' Motivations
Characters in stories often progress through a character arc, a progression that changes their
motivations in some way as they experience the story world. In Thespian, the characters'
goals are persistent throughout the interaction. Thespian currently does not model how a
character's motivations are affected by its experience. To change a character's motivations
during the interaction, the author needs to force the interactive narrative system to load
a different agent model, which must be predefined by the author.
8.2.4 Model Decision-Making
In Thespian, the characters are modeled as goal-based agents and use a deliberate lookahead
reasoning process to decide their actions. (The characters can use a rule-based policy
as well; however, such a policy is either computed based on the character's goals or has
to be manually designed by the author.) However, people also often make decisions
through shallower processes, such as retrieval of similar past experiences, or using only
partial information for reasoning. To address this issue, a more sophisticated model of
decision-making that incorporates multiple strategies is needed. Nevertheless, similar to
the need for a more sophisticated model of emotion, how complex this model needs to be
is an open question: the user may not be able to detect the difference when the agent is
using different strategies.
8.2.5 Directorial Control
Thespian models the story (the characters and the events) in interactive narratives.
However, an interactive narrative may contain more than the story. As mentioned earlier,
interactive narrative is closely related to traditional non-interactive forms of narrative,
and narrative has two parts: story and discourse. Though first-person experience is
most commonly used in interactive narratives, the system could also apply discourse
management to shape the user's experience.
Moreover, interactive narrative often takes place in a computer-simulated 3D world.
Therefore, factors such as sound and stage lighting also dramatically affect the user's experience.
These factors are currently not considered by Thespian. As a possible extension
to Thespian, the director agent could model and vary these factors to influence the user's
experience.
Chapter 9
Conclusion
One of the ultimate research goals of Artificial Intelligence is the design of life-like intelligent
characters that can interact with us in virtual environments. This thesis explores
interactive narrative: a virtual environment that allows people to participate actively in
a dynamically unfolding story by playing a character or by exerting directorial control
over events in the story. The user's choices affect the unfolding of the story.
The basic desiderata of interactive narrative are two-fold. Foremost, in media that
involve narratives, e.g. dramas, movies and interactive narratives, the coherence of the
narrative is a basic goal, because it serves as the basis for people to understand their
experience. On top of that, the author often also seeks to achieve certain reactions in
the users: cognitive or affective effects. To facilitate the design of interactive narrative,
an authoring framework should provide support for creating rich characters, applying
directorial control, and balancing the design of the characters with the design of the events.
Finally, the authoring framework should be easy to use and should support even non-technical
authors.
Various authoring frameworks for interactive narrative have been proposed. Most
adopt either a plot-centric or a character-centric approach to interactive narrative,
and provide automatic support for either the design of events
or the design of characters, but rarely both. Without facilitation from the authoring
framework, to ensure that both the design of the characters and the design of the events
are satisfactory, the author has to either sacrifice the richness of the interaction or spend
extensive effort, considering each possible contingency of the interaction, to define the
characters' behaviors.
In this thesis, I present a unique two-layer approach to interactive narrative design,
which utilizes autonomous agents for well-motivated and socially aware characters and,
on top of that, a director agent that coordinates the multi-agent system to realize the
author's plot design goals. This approach is implemented using a multi-agent framework,
Thespian. This approach facilitates the author in both designing the characters and
designing the events. Further, it allows the author to balance these two aspects of narrative
design. By default, Thespian gives priority to the design of the characters, to prevent breaking
the coherence of the narrative during the interaction. However, the author can adjust the
balance and make it more in favor of event design.
The Thespian framework is designed with the explicit intention of being easy to use
for traditional writers and non-technical authors. In addition to the two-layer runtime
system, Thespian provides various authoring procedures. Its authoring procedures allow
the author to configure the virtual characters in a natural way that is similar to how
narratives are typically created. Thespian also supports reuse of story world elements
and characters, and provides automated testing/evaluation procedures for interactive
narrative.
To date, Thespian has been applied to authoring dozens of virtual characters in
more than thirty interactive narratives. Its application domains include language and
culture training, and fables such as the Little Red Riding Hood story. Thespian has
been evaluated both in terms of its overall practical value as a framework for authoring
and simulating interactive narratives and on its individual components. In particular,
its core assumption for modeling socially aware characters and its approach to realizing
directorial control have been empirically evaluated.
Moving beyond the current work, the vision is to realize a design process for interactive
narrative that is intuitive, fast and fun. Thus, individuals without a technical background
will be able to easily design their own interactive narratives, share their story worlds and
characters with others, and enjoy the creative process.
References
[1] Gabriella Airenti, Bruno G. Bara, and Marco Colombetti. Conversation and Behavior Games in the Pragmatics of Dialogue. In Cognitive Science, volume 17(2), pages 197-256, 1993.
[2] Josephine Anstey, Dave Pape, and Dan Sandin. The Thing Growing: Autonomous Characters in Virtual Reality Interactive Fiction. In IEEE Virtual Reality 2000, pages 18-22, New Brunswick, NJ, March 1999.
[3] Aristotle. Poetics. 350 B.C.E.
[4] Robert K. Atkinson and Alexander Renkl. Interactive Example-Based Learning Environments: Using Interactive Elements to Encourage Effective Processing of Worked Examples. Educational Psychology Review, 19:375-386, 2007.
[5] Ruth Aylett, João Dias, and Ana Paiva. An affectively-driven planner for synthetic characters. In ICAPS, 2006.
[6] Roger Azevedo. Computer Environments as Metacognitive Tools for Enhancing Learning. Educational Psychologist, 40:193-197, 2005.
[7] Jeremy N. Bailenson, Rosanna E. Guadagno, Eyal Aharoni, Aleksandar Dimov, Andrew C. Beall, and Jim Blascovich. Comparing Behavioral and Self-Report Measures of Embodied Agents' Social Presence in Immersive Virtual Environments. In Proceedings of the 7th Annual International Workshop on PRESENCE, Valencia, Spain, 2004.
[8] R. M. Baños, C. Botella, M. Alcañiz, V. Liaño, B. Guerrero, and B. Rey. Immersion and emotion: their impact on the sense of presence. CyberPsychology & Behavior, 7(6), 2004.
[9] R. M. Baños, C. Botella, A. Garcia-Palacios, H. Villa, C. Perpina, and M. Alcañiz. Presence and Reality Judgment in Virtual Environments: A Unitary Construct? CyberPsychology & Behavior, 3:327-335, 2000.
[10] Joseph Bates. Virtual Reality, Art, and Entertainment. Presence: Teleoperators and Virtual Environments, 2(1):133-138, 1992.
[11] Bruno Bettelheim. The Uses of Enchantment: The Meaning and Importance of Fairy Tales. Knopf, New York, 1976.
[12] Frank Biocca. The Cyborg's Dilemma: Progressive Embodiment in Virtual Environments. Journal of Computer-Mediated Communication, 3(2), 1997.
[13] Frank Biocca, Chad Harms, and Judee K. Burgoon. Toward a More Robust Theory and Measure of Social Presence: Review and Suggested Criteria. Presence: Teleoperators and Virtual Environments, 12(5):456-480, 2003.
[14] Guido Boella and Leendert van der Torre. Obligations as Social Constructs. In Proceedings of the Italian Conference on Artificial Intelligence (AI*IA'03), pages 27-38, 2003.
[15] David Bordwell and Kristin Thompson. Film Art: An Introduction. New York: McGraw-Hill, 5th edition, 1997.
[16] M. M. Bradley, D. Sabatinelli, P. J. Lang, J. R. Fitzsimmons, W. King, and P. Desai. Activation of the visual cortex in motivated attention. Behav Neurosci, 117:369-380, 2003.
[17] Norbert Braun. Storytelling in Collaborative Augmented Reality Environments. In WSCG, pages 33-40, 2003.
[18] Andrea Brogni, Mel Slater, and Anthony Steed. More Breaks Less Presence. In Presence 2003: The 6th Annual International Workshop on Presence, 2003.
[19] J. Bruner. Acts of Meaning. Cambridge, Mass.: Harvard University Press, 1990.
[20] Cristiano Castelfranchi. Commitments: From Individual Intentions to Groups and Organizations. In ICMAS, pages 41-48, 1995.
[21] Marc Cavazza, Fred Charles, and Steven J. Mead. Agents' Interaction in Virtual Storytelling. In Proceedings of the International WorkShop on Intelligent Virtual Agents, pages 156-170, 2001.
[22] Marc Cavazza, Fred Charles, and Steven J. Mead. Emergent Situations in Interactive Storytelling. In Proceedings of ACM Symposium on Applied Computing (ACM-SAC), March 2002.
[23] Seymour Chatman. Story and Discourse: Narrative Structure in Fiction and Film. Cornell University Press, 1980.
[24] Mihaly Csikszentmihalyi. Finding Flow: The Psychology of Engagement with Everyday Life. BasicBooks, New York, 1st edition, 1997.
[25] S. Donikian and J. Portugal. Writing Interactive Fiction Scenarii with DraMachina. In Proceedings of TIDSE, 2004.
[26] J. A. Easterbrook. The effect of emotion in cue utilization and the organization of behavior. Psychological Review, 66:183-201, 1959.
[27] Lajos Egri. The Art of Dramatic Writing: Its Basis in the Creative Interpretation of Human Motives. Simon & Schuster, 2004.
[28] Magy S. El-Nasr, John Yen, and Thomas R. Ioerger. FLAME: Fuzzy Logic Adaptive Model of Emotions. Autonomous Agents and Multi-Agent Systems, 3(3):219-257, 2000.
[29] Clark Elliott. The Affective Reasoner: A Process Model of Emotions in a Multi-Agent System. PhD thesis, Northwestern University Institute for the Learning Sciences, 1992.
[30] Kimberly D. Elsbach and Pamela S. Barr. The effects of mood on individual's use of structured decision protocols. Organization Science, 10:181-198, 1999.
[31] Chris Evans and Nicola J. Gibbons. The interactivity effect in multimedia learning. Computers & Education, 49(4):1147-1160, 2007.
[32] J. P. Forgas. Mood and Judgment: The Affect Infusion Model (AIM). Psychological Bulletin, 117:39-66, 1995.
[33] Jonathan Freeman, S. E. Avons, Don E. Pearson, and Wijnand A. IJsselsteijn. Effects of Sensory Information and Prior Experience on Direct Subjective Ratings of Presence. Presence: Teleoperators and Virtual Environments, 8:1-13, 1999.
[34] Nico H. Frijda. The Emotions. Cambridge University Press, 1987.
[35] Piotr Gmytrasiewicz and Edmund Durfee. A Rigorous, Operational Formalization of Recursive Modeling. In ICMAS, pages 125-132, 1995.
[36] Jonathan Gratch and Stacy Marsella. A Domain-independent Framework for Modeling Emotion. Cognitive Systems Research, 5(4):269-306, 2004.
[37] H. Paul Grice. Logic and Conversation. In Peter Cole and Jerry Morgan, editors, Syntax and Semantics: Vol. 3: Speech Acts, volume 3, pages 64-75. Academic Press, 1975.
[38] Barbara Hayes-Roth and Robert van Gent. Story-Making with Improvisational Puppets. In the First International Conference on Autonomous Agents, Marina del Rey, CA, 1997.
[39] Carrie Heeter. Being There: The Subjective Experience of Presence. Presence: Teleoperators and Virtual Environments, 1(2):262-271, 1992.
[40] R. M. Held and N. I. Durlach. Telepresence. Presence: Teleoperators and Virtual Environments, 1:109-112, 1992.
[41] Hunter G. Hoffman, David R. Patterson, Jeff Magula, Gretchen J. Carrougher, and Karen Zeltzer. Water-friendly virtual reality pain control during wound care. Clinical Psychology, 60:189-195, 2004.
[42] Hunter G. Hoffman, Jerrold D. Prothero, Maxwell J. Wells, and Joris Groen. Virtual chess: The role of meaning in the sensation of presence. International Journal of Human-Computer Interaction, 10:251-263, 1998.
[43] Milton P. Huang and Norman E. Alessi. Mental Health Implications for Presence. CyberPsychology & Behavior, 2:15-18, 1999.
[44] Wijnand IJsselsteijn and Giuseppe Riva. Being There: The experience of presence in mediated environments. In G. Riva, F. Davide, and W.A. IJsselsteijn, editors, Being There: Concepts, Effects and Measurements of User Presence in Synthetic Environments, chapter 1. Ios Press, Amsterdam, The Netherlands, 2003.
[45] Jonathan Y. Ito, David V. Pynadath, and Stacy C. Marsella. A Decision-Theoretic Approach to Evaluating Posterior Probabilities of Mental Models. In AAAI-07 Workshop on Plan, Activity, and Intent Recognition, 2007.
[46] Anne Jelfs and Denise Whitelock. Presence and the Role of Activity Theory in Understanding: How Students Learn in Virtual Learning Environments. In CT '01: Proceedings of the 4th International Conference on Cognitive Technology, pages 123-129, London, UK, 2001. Springer-Verlag.
[47] W. Lewis Johnson, Carole Beal, Anna Fowles-Winkler, Ursula Lauper, Stacy C. Marsella, Shrikanth Narayanan, Dimitra Papachristou, and Hannes Vilhjálmsson. Tactical Language Training System: An Interim Report. In 7th International Conference on Intelligent Tutoring Systems, pages 336-345, 2004.
[48] Margaret T. Kelso, Peter W. Weyhrauch, and Joseph Bates. Dramatic Presence. Presence: Teleoperators and Virtual Environments, 2:1-15, 1993.
[49] Michael Kriegel, Ruth Aylett, João Dias, and Ana Paiva. An Authoring Tool for an Emergent Narrative Storytelling System. In AAAI Fall Symposium on Intelligent Narrative Technologies, 2007.
[50] Ari Lamstein and Michael Mateas. Search-Based Drama Management. In AAAI Workshop Series: Challenges in Game Artificial Intelligence, 2004.
[51] Peter J. Lang, Margaret M. Bradley, and Bruce N. Cuthbert. Motivated Attention: Affect, Activation, and Action. In P. J. Lang, R. F. Simons, and M. T. Balaban, editors, Attention and orienting: sensory and motivational processes, pages 97-135, Hillsdale, NJ, 1997. Lawrence Erlbaum.
[52] Peter J. Lang, Margaret M. Bradley, Jeffrey R. Fitzsimmons, Bruce N. Cuthbert, James D. Scott, Bradley Moulder, and Vijay Nangia. Emotional arousal and activation of the visual cortex: an fMRI analysis. Psychophysiology, 35:199-210, 1998.
[53] Richard S. Lazarus. Emotion & Adaptation. Oxford University Press, New York, 1991.
[54] Nicole Lazzaro. Why We Play Games: Four Keys to More Emotion Without Story. In Game Developers Conference, 2004.
[55] Howard Leventhal and Klaus R. Scherer. The Relationship of Emotion to Cognition: A Functional Approach to a Semantic Controversy. Cognition and Emotion, 1:3-28, 1987.
141
[56] Sandy Louchart and Ruth Aylett. The Emergent Narrative theoretical investiga-
tion. Inthe 2004 Conference on Narrative and Interactive Learning Environments.,
2004.
[57] Alasdair MacIntyre. After Virtue: A Study in Moral Theory. University of Notre
Dame Press, Notre Dame, 2nd edition, 1984.
[58] Brian Magerko. A Proposal for an Interactive Drama Architecture. In Working
Notes of the AAAI Spring Symposium on Arti¯cial Intelligence and Interactive
Entertainment, 2002.
[59] Brian Magerko and John E. Laird. Building an Interactive Drama Architecture.
In 1st International Conference on Technologies for Interactive Digital Storytelling
and Entertainment (TIDSE)., 2003.
[60] Brian Magerko and John E. Laird. Mediating the Tension between Plot and In-
teraction. In AAAI Workshop Series: Challenges in Game Arti¯cial Intelligence.,
pages 108{112, 2004.
[61] Brian Magerko, John E. Laird, Mazin Assanie, Alex Kerfoot, and Devvan Stokes.
AI Characters and Directors for Interactive Computer Games. In the 16th Innova-
tive Applications of AI Conference., 2004.
[62] Fabrizia MANTOVANI and Gianluca CASTELNUOVO. Sense of Presence in Vir-
tual Training: Enhancing Skills Acquisition and Transfer of Knowledge through
LearningExperienceinVirtualEnvironments. InG.Riva,F.Davide,andW.A.IJs-
selsteijn, editors, Being There: Concepts, E®ects and Measurements of User Pres-
ence in Synthetic Environments, chapter 11. Ios Press, Amsterdam, The Nether-
lands, 2003.
[63] Wenji Mao and Jonathan Gratch. Social Causality and Responsibility: Modeling
and Evaluation. In IVA, Kos, Greece, 2005.
[64] Stacy C. Marsella, W. Lewis Johnson, and Catherine Labore. Interactive Pedagog-
ical Drama for Health Interventions. In AIED, pages 341{348, 2003.
[65] Stacy C. Marsella, David V. Pynadath, and Steven J. Read. PsychSim: Agent-
based modeling of social interactions and in°uence. In Proceedings of the Interna-
tional Conference on Cognitive Modeling, pages 243{248, 2004.
[66] Michael Mateas and Andrew Stern. Integrating Plot, Character and Natural Lan-
guageProcessingintheInteractiveDramaFa» cade. Inthe International Conference
on Technologies for Interactive Digital Storytelling and Entertainment, 2003.
[67] David Mo®at and Nico H. Frijda. Where there's a WILL there's an agent. In
ECAI-94 Workshop on Agent Theories, Architectures, and Languages, Amsterdam,
The Netherlands, 1995.
142
[68] Roxana Moreno. Multimedia learning with animated pedagogical agents. In
R. Mayer, editor, Handbook of multimedia learning., page 507524. Cambridge Uni-
versity Press, New York, 2005.
[69] Roxana Moreno, Richard E. Mayer, Hiller A. Spires, and James C. Lester. The
Case for Social Agency in Computer-based Teaching: Do Students Learn More
Deeply When They Interact with Animated Pedagogical Agents? Cognition and
Instruction., 19:177{213, 2001.
[70] Jane H. Murray. Hamlet on the Holodeck: The Future of narrative in Cyberspace.
MIT Press, Cambridge, 1997.
[71] Katherine Nelson. Narratives from the Crib. Cambridge, Mass:Harvard University
Press., 1989.
[72] Mark J. Nelson, David L. Roberts, Charles L. Isbell, and Michael Mateas. Re-
inforcement Learning for Declarative OptimizationBased Drama Management. In
AAMAS, 2006.
[73] Elinor Ochs and Lisa Capps. Living Narrative: Creating Lives in Everyday Story-
telling. Cambridge, MA, Harvard University Press, 2001.
[74] Arne
Ä
Ohman, Anders Flykt, and Esteves Esteves. Emotion Drives Attention:
Detecting the Snake in the Grass. Journal of experimental psychology. General,
130:446{478, 2001.
[75] Susana Onega and Jos¶ e Angel Garcia Landa. Narratology: An Introduction. Long-
man, London and New York, 1996.
[76] Andrew Ortony, Gerald L. Clore, and Allan Collins. The Cognitive Structure of
Emotions. Cambridge. UK: Cambridge University Press, 1998.
[77] AnaPaiva, J~ oaoDias, DanielSobral, RuthAylett, PollySobreperez, SarahWoods,
and Carsten Zoll. Caring for Agents and Agents that Care: Building Empathic
Relations with synthetic agents. In AAMAS, pages 194{201, 2004.
[78] Sanghoon Park and Kioh Kim. The use of pedagogical agent as a tool to improve
learning interest: Based on the distinction between individual interest and situa-
tional interest. In K. McFerrin, R. Weber, R. Carlsen, and D. A. Willis, editors,
Society for Information Technology and Teacher Education International Confer-
ence, pages 2777{2781. Chesapeake, VA: AACE, 2008.
[79] Ken Perlin and Athomas Goldberg. Improv: A System for Scripting Interactive
Actors in Virtual Worlds. In the 23rd Annual Conference on Computer Graphics,
pages 205{216, New Orleans LA, 1996.
[80] Plato. Republic. 377 B.C.E.
[81] Vladimir Propp. Morphology of the Folktale. University of Texas Press., 1968.
143
[82] David V. Pynadath and Stacy C. Marsella. Fitting and Compilation of Multiagent
Models through Piecewise Linear Functions. In AAMAS, pages 1197{1204, 2004.
[83] David V. Pynadath and Stacy C. Marsella. PsychSim: Modeling Theory of Mind
with Decision-Theoretic Agents. In IJCAI, pages 1181{1186, 2005.
[84] David V. Pynadath and Stacy C. Marsella. Minimal mental models. In AAAI.,
2007.
[85] W. Scott Reilly and Joseph Bates. Building Emotional Agents. Technical Report
CMU-CS-92-143, Carnegie Mellon University, 1992.
[86] Mark O. Riedl and Stern. Andrew. Believable Agents and Intelligent Scenario
Direction for Social and Cultural Leadership Training. In the 15th Conference on
Behavior Representation in Modeling and Simulation, Baltimore, Maryland, 2006.
[87] MarkO.Riedl,C.J.Saretto,andR.MichaelYoung. ManagingInteractionBetween
Users and Agents in a Multi-agent Storytelling Environment. In AAMAS, pages
741{748, 2003.
[88] Mark O. Riedl and R. Michael Young. From Linear Story Generation to Branching
Story Graphs. IEEE Computer Graphics and Applications., 26(3), 2006.
[89] Ira J. Roseman. Cognitive Determinants of Emotion: A Structural Theory. Review
of Personality and Social Psychology, 2:11{36, 1984.
[90] Harvey Sacks, Emmanuel A. Scheglo®, and Gail Je®erson. A Simplest Systematics
fortheOrganizationofTurn-TakingforConversation. Language, 50:696{735, 1974.
[91] Sebastian Sauer, Kerstin Osswald, Xavier Wielemans, and Matthias Stifter. U-
Create: Creative Authoring Tools for Edutainment Applications. In Proceedings of
TIDSE, 2005.
[92] Emanuel A. Scheglo® and Harvey Sacks. Opening Up Closings. Semiotica, 7:289{
327, 1973.
[93] Klaus R. Scherer. Appraisal Considered as a Process of Multilevel Sequencial
Checking. In K. Scherer, A. Schorr, and T. Johnstone, editors, Appraisal Processes
in Emotion: Theory, Methods. Oxford University Press., Oxford, 2001.
[94] David W. Schloerb. A Quantitative Measure of Telepresence. PPresence: Teleop-
erators and Virtual Environments, 4:64{80, 1995.
[95] T.Schubert,F.Friedman,andH.Regenbrecht. TheExperienceofPresence: Factor
Analytic Insights. Presence: Teleoperators and Virtual Environments, 10:266{281,
2001.
[96] HaraldT.Schupp,MarkusJunghÄ ofer,AlmutI.Weike,andAlfonsO.Hamm. EMO-
TIONAL FACILITATION OF SENSORY PROCESSING IN THE VISUAL COR-
TEX. Psychological Science, 14:7{13, 2003.
144
[97] Nikitas M. Sgouros. Dynamic Generation, Management and Resolution of Interac-
tive Plots. Arti¯cial Intelligence, 107(1):29{62, 1999.
[98] Kelly G. Shaver. The Attribution Theory of Blame: Causality, Responsibility and
Blameworthiness. Springer-Verlag, 1985.
[99] Thomas B. Sheridan. Musings on telepresence and virtual presence. Presence:
Teleoperators and Virtual Environments, 1:120{126, 1992.
[100] Mei Si, Stacy C. Marsella, and David V. Pynadath. Modeling Appraisal in Theory
of Mind Reasoning. JAAMAS., 20010.
[101] Mei Si, Stacy C. Marsella, and David V. Pynadath. THESPIAN: An Architecture
for Interactive Pedagogical Drama. In AIED, pages 595{602, 2005.
[102] Mei Si, Stacy C. Marsella, and David V. Pynadath. Thespian: Using Multi-Agent
Fitting to Craft Interactive Drama. In AAMAS, pages 21{28, 2005.
[103] Mei Si, Stacy C. Marsella, and David V. Pynadath. Thespian: Modeling Socially
Normative Behavior in a Decision-Theoretic Framework. In IVA, pages 369{382,
2006.
[104] Mei Si, Stacy C. Marsella, and David V. Pynadath. Proactive Authoring for Inter-
active Drama: An Author's Assistant. In IVA, pages 225{237, 2007.
[105] Mei Si, Stacy C. Marsella, and David V. Pynadath. Directorial Control in a
Decision-Theoretic Framework for Interactive Narrative. In Proceedings of Inter-
national Conference on Interactive Digital Storytelling, pages 221{233, Guimares,
Portugal, 2009.
[106] Mei Si, Stacy C. Marsella, and David V. Pynadath. Evaluating Directorial Control
in a Character-Centric Interactive Narrative Framework. In AAMAS, Toronto,
Canada, 2010.
[107] Mei Si, Stacy C. Marsella, and Mark O. Riedl. Interactive Drama Authoring with
Plot and Character: An Intelligent System that Fosters Creativity. In the AAAI
Spring Symposium on Creative Intelligent Systems, Palo Alto, California, 2008.
[108] Barry G. Silverman, Michael Johns, Ransom Weaver, and Josh Mosley. Game-
play, Interactive Drama, and Training: Authoring Edutainment Stories for Online
Players (AESOP). Presence: Teleoperators and Virtual Environments, 16:65{83,
2007.
[109] JamesSkorupski,LakshmiJayapalan,SheenaMarquez,andMichaelMateas. Wide
Ruled: AFriendlyInterfacetoAuthor-GoalBasedStoryGeneration. InProceedings
of the 4th International Conference on Virtual Storytelling (ICVS 2007), 2007.
[110] Mel Slater and Martin Usoh. Representation systems, perceptual position, and
presence in immersive virtual environments. Presence: Teleoperators and Virtual
Environments, 2:221{233, 1993.
145
[111] Mel Slater, Martin Usoh, and Anthony Steed. Depth of presence in virtual envi-
ronments. Presence: Teleoperators and Virtual Environments, 3:130{144, 1994.
[112] Mel Slater and Sylvia Wilbur. A Framework for Immersive Virtual Environments
(FIVE): Speculations on the Role of Presence in Virtual Environments. Presence:
Teleoperators and Virtual Environments, 6:603{616, 1997.
[113] Richard D. Smallwood and Edward J. Sondik. The Optimal Control of Partially
ObservableMarkovProcessesoveraFiniteHorizon. Operations Research,21:1071{
1088, 1973.
[114] Craig A. Smith and Phoebe C. Ellsworth. Patterns of Cognitive Appraisal in Emo-
tion. Personality and Social Psychology, 48(4):813{838, 1985.
[115] CraigA.SmithandPhoebeC.Ellsworth. Patternsofappraisalandemotionrelated
to taking an exam. Personality and Social Psychology, 52:475{488, 1987.
[116] CraigA.SmithandRichardS.Lazarus. EmotionandAdaptation. InL.A.Pervin,
editor, Handbook of personality:Theory and research. Guilford, New York, 1990.
[117] Jonathan Steuer. De¯ning Virtual Reality: Dimensions Determining Telepresence.
Journal of Communication, 42(4):72{93, 1992.
[118] W.Swartout, R.Hill, J.Gratch, W.L.Johnson, C.Kyriakakis, C.LaBore, R.Lind-
heim, S. C. Marsella, D. Miraglia, B. Moore, J. Morie, J. Rickel, M. Thi¶ ubaux,
L. Tuch, R. Whitney, and J. Douglas. Toward the Holodeck: Integrating Graphics,
Sound, Character and Story. In Agents, pages 409{416, 2001.
[119] Nicolas Szilas. IDtension: a narrative engine for interactive drama. In the 1st
International Conference on Technologies for Interactive Digital Storytelling and
Entertainment, pages 14{25, March 2003.
[120] Ed S. Tan. Emotion and the Structure of Narrative Film: Film as an Emotion
Machine. Tr. by Barbara Fasting. Lawrence Erlbaum Associates, Mahwah, N.J.,
1996.
[121] DavidR.TraumandJamesF.Allen. DiscourseObligationsinDialogueProcessing.
In ACL, pages 1{8, 1994.
[122] David R. Traum, William Swartout, Stacy C. Marsella, and Jonathan Gratch.
Fight, Flight, or Negotiate: Believable Strategies for Conversing under Crisis. In
IVA, 2005.
[123] JuanD.Velasquez. ModelingEmotionsandOtherMotivationsinSyntheticAgents.
In AAAI, 1997.
[124] Bernard Weiner. The Judgment of Responsibility: A Foundation for a Theory of
Social Conduct. The Guilford Press, 1995.
[125] Peter W. Weyhrauch. Guiding Interactive Drama. PhD thesis, Carnegie Mellon
University, 1997. Technical Report CMU-CS-97-109.
146
[126] Andrew Whiten, editor. Natural Theories of Mind. Basil Blackwell, Oxford, UK,
1991.
[127] Todd Wilkens, Anthony Hughes, Barbara M. Wildemuth, and Gary Marchionini.
The Role of Narrative in Understanding Digital Video: An Exploratory Analysis.
In the Annual Meeting of the American Society for Information Science, pages
323{329, 2003.
[128] Bob J. Witmer and Michael J. Singer. Measuring presence in virtual environ-
ments: A presence questionnaire. Presence: Teleoperators and Virtual Environ-
ments, 7:225{240, 1998.
[129] AnnaWong,NadineMarcus,PaulAyres,LeeSmith,GrahamA.Cooper,FredPaas,
andJohnSweller. Instructionalanimationscanbesuperiortostaticswhenlearning
human motor skills. Computers in Human Behavior., 25(2):339{347, March 2009.
[130] Robert M. Yerkes and John D. Dodson. The Relation of Strength of Stimulus to
Rapidity of Habit Formation. Journal of Comparative Neurology and Psychology,
18:459{482, 1908.
[131] R. Michael Young, Mark O. Riedl, Mark Branly, Arnav H. Jhala, R. J. Martin, and
C. J. Saretto. An architecture for integrating plan-based behavior generation with
interactive game environments. In Journal of Game Development., 2004.
[132] Pavel Zahoric and Rick L. Jenison. Presence as Being-in-the-World. Presence:
Teleoperators and Virtual Environments, 7:78{89, 1998.
147
Appendix
Materials for Evaluation I
Background Story
An Alternative Little Red Riding Hood Story
Everybody knows the story of the Little Red Riding Hood, or at least they think
they do. It is about how a sweet little girl encountered an old murderous wolf on a nice
Sunday morning while visiting her Granny's house. Today, I present to you a different
story, a story told by the wolf:
"I don't know how this whole big bad wolf thing got started, but it's all wrong. Maybe
it's because of our diet. But that is just our nature. Many animals, including the humans,
eat other animals. The humans always put themselves in such a high moral position! In
fact, they are just intruders in these woods.
The woodcutter cut down the trees, which are home to many animals, the birds, the
squirrels, the bears... But they are lucky compared to others who have been constantly
chased by another human, the hunter. The hunter brought the meat of the animals to
Granny, an old lady who ran a restaurant right in the middle of our woods. Many people
would travel great distances to eat there. They said she made the most delicious meat
pies ... All the animals in these woods want the humans to go away!"
The wolf was dedicated to saving the woods, though it was not easy for him. This was
his plan: "I should eat Granny. Once she and the restaurant are gone, the humans will
eventually move on to other places. While searching for Granny, I should stay away from
the hunter at all times because he can easily kill me with his gun. The woodcutter also
has a gun, but he minds his own business as long as no other humans have been attacked.
So in case I cannot find Granny, I may risk talking to him, and maybe he will accidentally
let out some useful information. Of course, talking to children is the easiest and safest,
if I am lucky enough to see them walking alone in the woods."
Now you can click the "Next" button below to watch an animated story and find out
what happened to the wolf. The story is told from the wolf's perspective. While you are
reading and watching the story, try to imagine what the wolf is feeling.
Animated Stories
Condition I
The wolf searches for Granny's house carefully, but couldn't find it.
The wolf sees Red and the woodcutter.
The wolf says to Red, "Where are you going so early, Little Red Riding Hood?"
Red says to the wolf, "To Granny's." And the wolf says to himself, "Maybe I can ¯nd
out where Granny lives from her ..."
The wolf says to Red, "What are you carrying under your apron?"
Red says to the wolf, "I have to go to Granny's house now. She just lives at the end
of this road." And the wolf says to himself, "Hmm I thought I have already checked that
place before. Maybe I should take a look again."
The hunter approaches.
The wolf runs NORTH.
The wolf runs NORTH.
The wolf runs NORTH.
The wolf runs SOUTH.
The wolf searches for Granny's house carefully, but couldn't find it.
The wolf searches for Granny's house carefully, but couldn't find it.
The wolf searches for Granny's house carefully, but couldn't find it.
The wolf searches for Granny's house carefully, but couldn't find it.
The wolf searches for Granny's house carefully, but couldn't find it.
The wolf searches for Granny's house carefully, but couldn't find it.
The wolf searches for Granny's house carefully, but couldn't find it.
The wolf sees Granny's house.
The wolf knocks at the door: tap, tap.
"Who's there?"
"Your grandchild, Little Red Riding Hood." the wolf replies, impersonating Red's
voice.
Granny says, "Pull the bobbin, and the latch will go up."
The wolf pulls the bobbin, and enters the house ...
The wolf eats Granny!
The wolf leaves the house.
The-End
Condition II
The wolf searches for Granny's house carefully, but couldn't find it.
The wolf sees Red and the woodcutter.
The wolf says to Red, "Where are you going so early, Little Red Riding Hood?"
Red says to the wolf, "To Granny's." And the wolf says to himself, "Maybe I can ¯nd
out where Granny lives from her ..."
The wolf says to Red, "What are you carrying under your apron?"
Red says to the wolf, "Granny is sick and weak, and I am taking her some cake and
wine. We baked yesterday, and they should give her strength."
The woodcutter needs help moving the chopped wood.
To show how friendly he is to people, the wolf helps the woodcutter.
The hunter approaches. Red walks away.
The wolf runs NORTH.
The wolf searches for Granny's house carefully, but couldn't find it.
The wolf searches for Granny's house carefully, but couldn't find it.
The wolf sees Red.
Red gives the wolf some cake.
Red says to the wolf, "I have to go to Granny's house now. She just lives at the end
of this road." And the wolf says to himself, "Hmm I thought I have already checked that
place before. Maybe I should take a look again."
The hunter approaches.
The wolf runs NORTH.
The wolf searches for Granny's house carefully, but couldn't find it.
The wolf searches for Granny's house carefully, but couldn't find it.
The wolf searches for Granny's house carefully, but couldn't find it.
The wolf sees Granny's house! But the hunter is there.
The wolf runs SOUTH.
The wolf runs NORTH.
The wolf sees Granny's house.
The wolf knocks at the door: tap, tap.
"Who's there?"
"Your grandchild, Little Red Riding Hood." the wolf replies, impersonating Red's
voice.
Granny says, "Pull the bobbin, and the latch will go up."
The wolf pulls the bobbin, and enters the house ...
The wolf eats Granny!
The wolf leaves the house.
The wolf eats the cake Red gave him.
The-End
Condition III
The wolf searches for Granny's house carefully, but couldn't find it.
The wolf sees Red and the woodcutter.
The wolf says to Red, "Where are you going so early, Little Red Riding Hood?"
Red says to the wolf, "To Granny's." And the wolf says to himself, "Maybe I can ¯nd
out where Granny lives from her ..."
The wolf says to Red, "What are you carrying under your apron?"
The hunter approaches.
The hunter says to the wolf, "Have you seen Granny today, who lives at the end of
this road?" The wolf is scared and says to himself, "Why didn't I notice the hunter is
already so close? Now he can easily shoot me. But what he said is interesting. I thought
I had already checked that place before. Maybe I should take a look again."
The wolf runs NORTH.
The wolf runs NORTH.
The wolf runs NORTH.
The wolf runs SOUTH.
The wolf searches for Granny's house carefully, but couldn't find it.
The wolf searches for Granny's house carefully, but couldn't find it.
The wolf searches for Granny's house carefully, but couldn't find it.
The wolf searches for Granny's house carefully, but couldn't find it.
The wolf searches for Granny's house carefully, but couldn't find it.
The wolf searches for Granny's house carefully, but couldn't find it.
The wolf sees Granny's house.
The wolf knocks at the door: tap, tap.
"Who's there?"
"Your grandchild, Little Red Riding Hood." the wolf replies, impersonating Red's
voice.
Granny says, "Pull the bobbin, and the latch will go up."
The wolf pulls the bobbin, and enters the house ...
The wolf eats Granny!
The wolf leaves the house.
The-End
Condition IV
The wolf searches for Granny's house carefully, but couldn't find it.
The wolf sees Red and the woodcutter.
The wolf says to Red, "Where are you going so early, Little Red Riding Hood?"
Red says to the wolf, "To Granny's." And the wolf says to himself, "Maybe I can ¯nd
out where Granny lives from her ..."
The wolf says to Red, "What are you carrying under your apron?"
Red says to the wolf, "Granny is sick and weak, and I am taking her some cake and
wine. We baked yesterday, and they should give her strength."
The woodcutter needs help moving the chopped wood.
To show how friendly he is to people, the wolf helps the woodcutter.
The hunter approaches.
The wolf runs NORTH.
The wolf searches for Granny's house carefully, but couldn't find it.
The wolf searches for Granny's house carefully, but couldn't find it.
The wolf sees the hunter.
The hunter says to the wolf, "Have you seen Granny today, who lives at the end of
this road?" The wolf is scared and says to himself, "Why didn't I notice the hunter is
already so close? Now he can easily shoot me. But what he said is interesting. I thought
I had already checked that place before. Maybe I should take a look again."
The wolf runs NORTH.
The wolf searches for Granny's house carefully, but couldn't find it.
The wolf searches for Granny's house carefully, but couldn't find it.
The wolf searches for Granny's house carefully, but couldn't find it.
The wolf sees Granny's house! But the hunter is there.
The wolf runs SOUTH.
The wolf runs NORTH.
The wolf sees Granny's house.
The wolf knocks at the door: tap, tap.
"Who's there?"
"Your grandchild, Little Red Riding Hood." the wolf replies, impersonating Red's
voice.
Granny says, "Pull the bobbin, and the latch will go up."
The wolf pulls the bobbin, and enters the house ...
The wolf eats Granny!
The wolf leaves the house.
The-End
Abstract
With the rapid development of computer technology, a new form of media, interactive narrative, has received increasing attention. Interactive narrative allows the user to participate in a dynamically unfolding story by playing a character or by exerting directorial control. By allowing the user to interact, interactive narrative provides a richer and potentially more engaging experience than traditional narrative. Moreover, because the user's choices lead to different paths through the story, the author of an interactive narrative can tailor the experience for individual users or user groups.