MODELING SOCIAL CAUSALITY AND SOCIAL JUDGMENT IN MULTI-AGENT INTERACTIONS

by Wenji Mao

A Dissertation Presented to the
FACULTY OF THE GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the Requirements for the Degree
DOCTOR OF PHILOSOPHY (COMPUTER SCIENCE)

December 2006

Copyright 2006 Wenji Mao

Acknowledgements

I am greatly indebted to my advisor, Jonathan Gratch, who has given me enormous support and supervision, and provided me with such an exciting environment in which to learn and grow. I could not have accomplished this dissertation without Jon's kind support. When I first came to USC five years ago, I was very lucky to work with Jon from the beginning. Through years of collaboration, Jon has led me into this cross-disciplinary field and taught me how to be a researcher. His critical view of research has continuously raised the quality of this work to a level I could not have reached otherwise.

I am very grateful to my qualifying and dissertation committee members for their guidance: Jerry Hobbs, Paul Rosenbloom, Stephen Read and David Traum. Jerry's insight into the problem has deepened my research and enriched my knowledge of common sense. Paul's questions have stimulated me to think about the essence of the work and eventually shaped it significantly. Steve's social psychological perspective was particularly helpful to me in understanding human theories. David provided detailed comments, which greatly improved this work in its early stage.

I would like to thank my friend and colleague, Andrew Gordon, for the supervision, care and assistance he has given me throughout my Ph.D. years. Giving me valuable advice on research, career, life and everything else, Andrew has always been approachable. I have also learned a lot from the group seminars organized by Andrew and Jerry.

As a member of the USC Computational Emotion Group, I have learned much from the lively discussions in the group. I was also given plenty of chances to present my work to peer students in the group meetings. I would like to thank the co-director of the group, Stacy Marsella, and the other members for their constructive suggestions and support. Thanks to Wendy Treynor for carefully proofreading the draft thesis and correcting every English mistake.

Living in Los Angeles, I was fortunate to learn from Bernard Weiner. Bernie led me through attribution and motivation theories in his distinguished lectures at UCLA. He also introduced related papers and his new book to me. As a prestigious psychologist and perhaps the most well-known attribution theorist, Bernie is always modest and ready to answer my questions. I thank Bernie for all his kind help as well as his great friendship.

When it came to evaluating my work against other approaches, Joseph Halpern at Cornell was very responsive in explaining how his causality model works. In the final stage of my dissertation research, Joshua Knobe, a rising star in experimental philosophy, contacted me and voluntarily gave me many helpful comments on how to improve our computational framework.

I have benefited from the vibrant USC research community in general. Whenever I got stuck in an unfamiliar area, I could always find excellent people to ask for advice. David Pynadath pointed me to the plan recognition literature. Bilyana Martinovski showed me how mitigation and legal judgment are studied in linguistics. Jim Blythe taught me decision-theoretic planning. Anna Okhmatovskaia explained the fundamentals of psychological experiments to me.
Lewis Johnson and Skip Rizzo also provided very good feedback on this work. I sincerely thank my officemates, classmates and the ICT staff for creating a warm and friendly atmosphere. Thanks to Youngjun Kim and Hyeok-Soo Kim for sharing space and Korean food. Thanks to my classmates, Feng Pan, Donghui Feng, Lei Qu, Zhigang Deng and Min Cai, for making my student life an enjoyable one. I would also like to thank the ICT staff for the technical and administrative support that has made this work possible.

I found my experience at the German Research Center for Artificial Intelligence (DFKI GmbH) especially rewarding as a preparation for Ph.D. studies. My first stay, at the Multi-agent Systems Group, was a smooth transition to life abroad. I would like to thank the group leader, Joerg Siekmann, and the project leaders, Klaus Fischer and Elisabeth Andre, for their kind help. Thanks to Elisabeth in particular, for having been so supportive of me, even now that I am in the United States. Thanks to my best friend and collaborator, Steve Allen, and many other colleagues for their cooperation and influence.

Finally, I want to express my deepest love and gratitude to my family. To my parents: I can never express enough how much I love them and how much I thank them for the endless love and support they give me. My warmest love and thanks to my mother-in-law. Very special thanks to my husband, Shaoqiang, for his deep love, understanding, support and commitment, and for that feeling of deep satisfaction and happiness in our family life. Sweet kisses to our lovely son, Dun; thanks for his love and the great pleasure he brings to the whole family. This dissertation is dedicated to Shaoqiang and Dun, and to the memory of my father-in-law.

Table of Contents

Acknowledgements
List of Tables
List of Figures
Abstract
Chapter 1: Introduction
   1.1 Motivation
   1.2 Thesis Preview
   1.3 Contributions
   1.4 Outline of the Dissertation
Chapter 2: Related Work
   2.1 Related Theories
      2.1.1 Early Causal Attribution Theories
      2.1.2 Weiner's Model of Responsibility Judgment
      2.1.3 Shaver's Model of Blame Assignment
   2.2 Related Computational Frameworks
      2.2.1 Legal Reasoning
      2.2.2 Extended Causal Models
   2.3 Limitations
Chapter 3: The Computational Framework
   3.1 Overview
   3.2 Representation
      3.2.1 Causal Knowledge
      3.2.2 Communicative Events
      3.2.3 Attribution Variables
      3.2.4 Logical Expressions
   3.3 Inferences
      3.3.1 Dialogue Inference
      3.3.2 Causal Inference
   3.4 Attribution Process
   3.5 Illustration
Chapter 4: Evaluation
   4.1 Claims
   4.2 Assessing Overall Judgments
      4.2.1 Method
      4.2.2 Results
      4.2.3 Comparison and Discussion
   4.3 Assessing Inference Process
      4.3.1 Method
      4.3.2 Results
      4.3.3 Discussion
   4.4 General Discussion
Chapter 5: Toward Probabilistic Extensions
   5.1 Probabilistic Representation
      5.1.1 Actions and Plans
      5.1.2 Degree of Belief
      5.1.3 Symbolic Extensions
   5.2 Probabilistic Reasoning
      5.2.1 Intention Recognition
      5.2.2 Coercion Inference
   5.3 Algorithm and Illustration
Chapter 6: Conclusions
   6.1 Summary
   6.2 Future Considerations
Bibliography
Appendices
   Appendix A. Predicates and Functions
   Appendix B. Inference Rules
   Appendix C. Representation of Action Execution
   Appendix D. Computing Effect Set, Definite and Indefinite Effects
   Appendix E. Definitions of Relevant Actions and Effects
   Appendix F. Computing Expected Utilities of Actions and Plans
   Appendix G. Model Predictions of Company Program Scenarios
   Appendix H. Subjects' Responses to Company Program Scenarios
   Appendix I. Belief Derivations and Steps of Algorithm Execution
   Appendix J. Evidence Choice of Human Subjects
List of Tables

Table 1: Comparison of Results by Different Models with Human Data
Table 2: Model Predictions and Subject Responses for Company Program Scenarios
Table 3: Kappa Agreement between Model and Subjects
Table 4: Accuracies of Inference Rules

List of Figures

Figure 1: The Responsibility Attribution Process
Figure 2: Sequential Model of Blame Assignment
Figure 3: Causal Model for Firing Squad Example
Figure 4: Overview of the Computational Framework
Figure 5: Illustrative Example of Action Representation
Figure 6: Indirect Agency Establishes Action Precondition
Figure 7: Inferring Outcome Intent by Comparing Alternatives − General Case
Figure 8: Inferring Outcome Intent by Comparing Alternatives − Special Case 1
Figure 9: Inferring Outcome Intent by Comparing Alternatives − Special Case 2
Figure 10: Inferring Outcome Coercion − Non-Decision Node
Figure 11: Inferring Outcome Coercion − Decision Node: Definite Effects
Figure 12: Inferring Outcome Coercion − Decision Node: Indefinite Effects
Figure 13: Inferring Outcome Coercion − Indirect Case
Figure 14: Algorithm for Finding Responsible Agents
Figure 15: First Stage of Model Validation − Assessing Overall Judgments
Figure 16: Second Stage of Model Validation − Assessing Intermediate Beliefs
Figure 17: Third Stage of Model Validation − Assessing Inference Process
Figure 18: Firing Squad Scenario 1
Figure 19: Firing Squad Scenarios 2−4
Figure 20: Proportion of Population Agreement on Responsibility/Blame in Scenarios
Figure 21: Team Plan for the Squad in Scenario 1
Figure 22: Team Plan for the Squad in Scenarios 2 and 3
Figure 23: Company Program Scenario 2
Figure 24: Company Program Scenario 1
Figure 25: Company Program Scenario 3
Figure 26: Company Program Scenario 4
Figure 27: Illustrative Example of Probabilistic Plan Representation
Figure 28: Algorithm for Evaluating Responsible Agents
Figure 29: Plan Alternative from Sergeant's Perspective
Abstract

Intelligent agents are typically situated in a social environment and must reason about social cause and effect. Social causal reasoning is qualitatively different from the physical causal reasoning that underlies most intelligent systems. Modeling the process and inference of social causality can enrich the capabilities of multi-agent and intelligent interactive systems. In this thesis, we first explore the underlying theory and process of how people evaluate social events, and present a domain-independent computational framework to reason about social cause and responsibility. The computational framework can be incorporated into an intelligent system to augment its cognitive and social functionality. To ensure the fidelity of the modeling, this work is based on psychological attribution theory. Attribution theory identifies several key factors people use in forming their judgments, such as physical cause, intentions, foreknowledge and coercion. Based on the theory, our work formalizes the commonsense reasoning that derives beliefs about these key factors from natural language communication and task execution. In addition to developing the model, we design and conduct experiments to empirically validate it using real human data. The experimental results show that the model's predictions of the overall judgments, the intermediate beliefs about the variables and the inferential mechanism are consistent with people's responses.

The computational framework has been applied in several applications, such as emotion modeling, natural language conversation strategies and performance assessment in group training. Other potential applications include interactive system design, adaptive user interfaces and coherent internal models for virtual humans. At the end of the dissertation, we summarize the research contributions and raise some issues for future consideration.

Chapter 1: Introduction

People rarely give simple causal explanations when investigating social events. In contrast to how causality is used in the physical sciences, people instinctively seek out individuals in their everyday judgments of social cause, blame and credit. Such judgments are a fundamental aspect of social intelligence. They involve evaluations not only of physical causality, but also of individual responsibility and free will [Shaver, 1985]. They manifest how we make sense of the behavior of others and determine the way we act on the social world around us.

1.1 Motivation

A growing number of applications have sought to incorporate automatic reasoning techniques into intelligent agents. Many intelligent systems adopt planning and reasoning techniques designed to reason about physical causality. Since intelligent agents are typically situated in a multi-agent environment and multi-agent interactions are inherently social, physical causes and effects are simply inadequate for explaining social phenomena. In contrast, social causality, both in theory and as practiced in everyday folk judgments, emphasizes multiple causal dimensions, involves epistemic variables, and distinguishes between physical cause, responsibility, and credit or blame.
With the advance of multi-agent interactive systems, user-aware adaptive interfaces and systems that socially interact with people, it is increasingly important to model and reason about this human-centric form of social inference. Social causal reasoning facilitates social planning by augmenting classical planners with the ability to reason about which entities have the control to effect changes. It facilitates social learning by appraising behavior as praiseworthy or blameworthy, and reinforcing the praiseworthy. In modeling the communicative and social behavior of human-like agents, social judgment helps inform models of social emotion by characterizing which situations evoke anger, guilt or praise [Gratch et al, 2006]. As people are usually adept at taking credit and deflecting blame in social dialogue (e.g., negotiation dialogue), this information also helps guide conversation strategies [Martinovski et al, 2005]. In general, by evaluating the behavior of the participating entities (i.e., human user, computer program and agent) and providing information about the social and cognitive states of an entity, social inference and social judgment benefit various forms of social interaction, including human-computer, human-agent and agent-agent interaction. They can also benefit human-human interaction by identifying the underlying cognitive process and principles of human judgments. In a multi-agent environment, social causal reasoning helps distribute responsibility in multi-agent organizations [Jennings, 1992; Jennings & Mamdani, 1992], automate after-action review of group performance [Gratch & Mao, 2003], and support social simulation for agent societies [Conte & Paolucci, 2004].

1.2 Thesis Preview

Social causality and responsibility judgment (i.e., the judgments people make about the accountability of the behavior of others or themselves) cut across a number of social judgment issues, such as the evaluation of power and influence, the explanation of interpersonal relationships and the prediction of future behavior. Responsibility is intertwined with causality and intentions, and its assignment mediates between the assessment of cognitive states and the generation of social responses (i.e., affective and moral responses, e.g. praise, blame, pride, shame, resentment and gratitude). Responsibility judgment, an important class of social judgment, is the core problem of this thesis.

Responsibility judgment has been studied extensively in moral philosophy (e.g., [Williams, 1995]), law (e.g., [Hart & Honore, 1985]), and social psychology (e.g., [Shaver, 1985; Weiner, 1995]). Traditions differ in the extent to which their models are prescriptive (i.e., what is the "ideal" criterion that people ought to conform to in their judgments) or descriptive (i.e., what people actually do in their judgments). Much of the work in AI has focused on identifying ideal principles of responsibility (e.g., legal codes or philosophical principles) and ideal mechanisms to reason about them, typically involving contradictory principles and counterfactual reasoning [McCarty, 1997; Chockler and Halpern, 2004]. Our primary goal is to design a faithful computational framework for use in human-like agents, so as to drive realistic behavior generation [Gratch et al, 2002]. Psychological and philosophical studies agree on the broad features people use in their everyday judgments.
Our work is particularly influenced by attribution theory, a body of research in social psychology exploring folk explanation of behavior [Weiner, 1986, 1995]. We start from the features identified by attribution theory, and we believe the theoretical stance we take is applicable to most interactive systems in a multi-agent context.

Although the theory of social causality and responsibility is well founded in social and psychological studies, one big challenge is to convert the abstract conceptual description of the theory into a functionally workable model, and to develop a corresponding computational mechanism that automates the process so it can actually be realized in intelligent systems. To this end, we take advantage of artificial intelligence reasoning techniques, in particular commonsense reasoning [Gordon & Hobbs, 2004; Mueller, 2006], and agent modeling techniques, in particular the BDI model [Bratman, 1987; Georgeff & Lansky, 1987]. The BDI concepts help us map sometimes vague psychological terms onto widely accepted concepts in AI and agent research, and commonsense reasoning helps construct an automatic inferential mechanism over the general representation used in computational systems.

Just proposing a model does not suffice: we need to compare model behavior to actual human performance. A strong emphasis of this work is model validation using human performance data. In the related research communities, including causality, multi-agent systems, cognitive modeling and commonsense reasoning, it is rare to empirically evaluate commonsense causal reasoning, especially in the way we do here, evaluating both the individual variables (i.e., factors) and the inference process.

1.3 Contributions

In this thesis, we develop a general computational framework for modeling social causality and responsibility judgment in the context of multi-agent interactions. To the best of our knowledge, such a computational framework based on psychological theory has not been built before. It is also the first validated computational framework for the problem. Specifically, this work makes several major contributions:

• Produces the first general computational framework of social causality and social judgment based on attribution theory in psychology
• Provides the formalism of commonsense reasoning about the beliefs of attributions from the speech act representation of communication and features of the plan representation
• Designs and conducts experiments to show strong empirical support for the computational framework
• Introduces the first computational framework of coercion by plan-based evaluation
• Extends the computational framework to probabilistic representation and decision-theoretic reasoning
• Develops the first intention recognition algorithm based on maximizing expected plan utility
• Designs algorithms to describe the attribution process and credit/blame assignment

Moreover, this work provides a first step toward cognitive modeling of human social intelligence, as well as a basis for formalizing the knowledge of inter-agent commonsense reasoning. Meanwhile, by simulating cognitive and social theories in computational systems, the work helps advance our understanding of the cognitive process and principles of human social inference.

1.4 Outline of the Dissertation

The rest of the thesis is organized as follows. Chapter 2 reviews previous theoretical and computational work on social causality, responsibility and blame.
We summarize the related attribution theories in social psychology, and the related computational frameworks and extensions in legal systems and causality research.

Chapter 3 presents our computational framework for social causality and responsibility. We give detailed descriptions of the computational representation, the inferences and the algorithm for the attribution process. We also provide an example to illustrate the use of the framework.

Chapter 4 describes the empirical studies on model validation, assessing overall judgments, individual beliefs and inference rules step by step. We present the experimental design and methodology as well as the results, in comparison with other computational approaches.

Chapter 5 presents the probabilistic extension of the computational framework. We explain how to extend the representation of actions and plans to a probabilistic representation, and how to extend causal inference to probabilistic reasoning about actions and plans based on expected utilities.

Chapter 6 concludes by clarifying the main contributions of this dissertation and raising some considerations for future research.

Chapter 2: Related Work

According to Shaver [1975], there exist at least three senses of responsibility. Causal responsibility is identical to causality in meaning (i.e., you are responsible for what you cause); most theoretical statements on responsibility are not limited to this causal sense. Legal responsibility is the sense used in law, and moral responsibility is the sense used in the moral realm. The moral sense is closest to the commonsense usage of responsibility. For the aims of this work, we shall focus on the moral sense of responsibility and the related theories in social psychology. Related computational work on responsibility has taken both legal and moral views, represented by legal reasoning and by extensions of causality models.

2.1 Related Theories

Most contemporary psychological studies of responsibility and social judgment draw on attribution theory. In more than 40 years of research, attribution theory has progressed significantly, made countless contributions to the literature and become a core area of social psychology [Malle, 2004; Weiner, 2006]. Attribution research began with the seminal work of Heider [1958], who argued that social perceivers make sense of the world by attributing behavior and events to their underlying causes. Attribution therefore refers to the process of ascribing a cause to an event or explaining the event, as well as to the inferences or judgments made.

Because responsibility judgment requires an attribution of causality, and because of its unique social characteristics, it is best described by attributional approaches. Two of the most influential attributional models of responsibility and blame, with complete conceptual frameworks, are those of Shaver [1985] and Weiner [1995]. In this section, we first analyze the simplified portrayal of behavior explanation in early causal attribution theories, and then concentrate on the attributional models of Weiner and Shaver.

2.1.1 Early Causal Attribution Theories

Classical attribution research can be viewed as a narrowing of Heider's broad model of social perception in selective directions. Two main strands are causal judgment [Kelley, 1967] and correspondent inference [Jones and Davis, 1965; Jones & McGillis, 1976].
Kelley's model of causal judgment specifies how a perceiver might judge causes of behavior as internal or external based on the covariation principle, and proposes criteria for causal judgment when sufficient information is presented to the perceiver [1]. However, the covariation principle requires repeated observations of behavior. In many social situations, and especially in the case of misfortunes and moral transgressions that are subject to judgments of responsibility and blame, there is only a single occurrence of an event.

[1] The covariation principle states that an effect is attributed to that condition which is present when the effect is present and which is absent when the effect is absent. Based on this principle, Kelley identified three criteria for causal judgment: distinctiveness, consistency and consensus. Kelley [1972, 1973] proposed additional rules of causal reasoning (e.g., the discounting principle), resorting to the assumed pattern of behavioral data represented by causal schemata when multiple occurrences of events are not available.

Jones et al's correspondent inference theory proposes the process by which humans may infer intentional behavior and personal disposition from a single event. The judgment of responsibility rests on a perceived causal connection between the person and the event in question, and the theory emphasizes the central role of intention in explaining behavior. However, correspondent inference is based on the deviation of behavior from role expectancy and the uniqueness of action effects [2]. The reliability of such inference has been questioned [Harris & Harvey, 1981; Shaver, 1985], as intentions are derived solely from the comparison of action effects. When a person does the expected, very little about his or her disposition can be derived from correspondent inference theory with any certainty.

[2] The model identifies conditions that increase the likelihood of correspondent trait inference. For example, the more the actor's behavior deviates from what is normally regarded as desirable for the role, and the more unique the action effects are compared to those of alternative courses of action, the more certain the perceivers are in judging the behavior as reflecting personal dispositions.

Another limitation of Kelley's model, as well as of other early causal attribution work, is that the simple internal-external dichotomy of causes is inadequate for attributing actions to internal dispositions. Rather, the underlying motives and reasons should play an important role [Buss, 1978; White, 1991]. Weiner [1995] and Shaver [1985] extended the traditional causal dimension to include features about and beyond Jones et al's notion of intention to explain the accountability of behavior. Their models also introduce the attribution processes in which these features are applied in forming the judgment.

2.1.2 Weiner's Model of Responsibility Judgment

Weiner [1986] extended Kelley's internal-external dimension of causality (which Weiner called locus of causality) by introducing two additional causal dimensions: stability (i.e., the duration of a cause) and controllability (i.e., whether the causal agent could have done otherwise). Weiner applies his theory to evaluating achievement striving and provides explanations of people's motivation and emotion in such a context. Studies show that his model fits empirical data very well [Weiner et al, 1979, 1982; Weiner 1983, 1985].
Weiner [1995] further extended his attributional theory of motivation to account for responsibility judgment and social conduct. In his model, the assignment of responsibility has as its first step the determination of the locus of causality (i.e., personal versus situational causality). If personal causality is involved, the judgment proceeds by determining whether the cause is controllable. Internal controllability is a determinant of responsibility, though a judgment of responsibility will not be rendered if mitigating circumstances are available (Figure 1). Weiner also points out that one of the major determinants of the degree of responsibility is whether a controllable act is perceived as intentionally committed or as due to negligence.

[Figure 1: The Responsibility Attribution Process (from Weiner [1995]). The flow runs: event; impersonal vs. personal causality; uncontrollable vs. controllable cause; mitigating vs. no mitigating circumstance; assignment of responsibility (otherwise the agent is judged not responsible and the process continues).]

2.1.3 Shaver's Model of Blame Assignment

Shaver's model [1985] is similar to Weiner's, but he introduces other dimensions of responsibility: foreseeability and coercion. In his model, the assignment of blame for a negative occurrence is the result of a process of evaluating dimensions of responsibility. First, one assesses causality, distinguishing between personal causality (i.e., human agency) and impersonal causality (i.e., environmental factors). If human agency is involved, the judgment proceeds by assessing other key variables: Did the actor foresee its occurrence? Was it the actor's intention to produce the outcome? Was the actor forced under coercion? At each point along the way in Figure 2, particular sorts of actions do not meet the successive tests for potential blame, and those actions split off to lead to the alternative attributions. Finally, the perceiver takes possible mitigating factors (justifications or excuses) into consideration and assigns proper blame to the responsible agent (Figure 2).

[Figure 2: Sequential Model of Blame Assignment (adapted from Shaver [1985]). For a negative consequence, the attribution variables are assessed in sequence: caused; foreseen vs. unforeseen; intended vs. unintended; voluntary vs. coerced; no excuse vs. excuse; blame. Blame is assigned only at the end of the chain; the earlier exits carry increasing responsibility but no blame.]
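To make the sequential structure of Figure 2 concrete, the sketch below (in Python) walks through the decision sequence as we read it from the figure; it is our own minimal illustration with boolean assessments and illustrative labels, not an algorithm given by Shaver or implemented in this thesis.

# Our own simplified rendering of the decision sequence in Figure 2:
# boolean assessments only, no degrees of responsibility.
def blame_assignment(caused: bool, foreseen: bool, intended: bool,
                     coerced: bool, excused: bool) -> str:
    if not caused:       # impersonal causality
        return "not responsible"
    if not foreseen:     # outcome not foreseen
        return "causality only"
    if not intended:     # foreseen but not intended
        return "increased responsibility, but no blame"
    if coerced:          # intended, but under coercion
        return "responsibility diminished or redirected"
    if excused:          # justification or excuse accepted
        return "responsible, blame mitigated"
    return "blame"

# An actor who caused, foresaw and intended a negative outcome,
# acted voluntarily and offers no excuse, is blamed.
print(blame_assignment(True, True, True, coerced=False, excused=False))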
Shaver distinguished between causality, responsibility and blameworthiness. Blame depends on a prior attribution of moral responsibility, and in the same fashion, responsibility attribution depends on a prior judgment of causality. However, causality should not be equated with responsibility: responsibility can be diminished by coercion, or mitigated by lack of intention and/or foreknowledge. Similarly, responsibility is not equated with blameworthiness: being responsible makes one liable for blame, but justifications and excuses are likely to diminish or mitigate blameworthiness.

The assignment of blame is a complex process of social judgment. One person's judgments of causality, responsibility and blameworthiness may or may not agree with the judgments made by others. Both Weiner's and Shaver's models emphasize the perceiver's subjective view of blame assignment (e.g., a perceiver's knowledge may not necessarily be correct, and there may be errors of judgment), while at the same time capturing the underlying generality of the judgment processes of different perceivers.

2.2 Related Computational Frameworks

Computational approaches to social causality and responsibility have proceeded along two tracks. One track is legal argument and legal reasoning in AI and law research. The other is the extension of causal models in causality research.

2.2.1 Legal Reasoning

Perhaps the first computational work to address legal reasoning is McCarty's Taxman project, which aimed to reconstruct the lines of reasoning in a few leading American tax law cases [McCarty & Sridharan, 1981]. Most of the system was formally specified and some parts were implemented [McCarty, 1995]. Another implemented system is the case-based HYPO [Rissland & Ashley, 1987], with the follow-on projects CABARET [Rissland & Skalak, 1991] and CATO [Aleven & Ashley, 1995], which combine rules and cases. The main focus of these systems is to generate persuasive arguments of the kind that would be made by human lawyers.

There are some similarities between the judgments of moral and legal responsibility, and a few researchers have suggested the legal model as a direct analogue of moral responsibility (e.g., [Fincham & Jaspars, 1980]). However, there are fundamental differences between the two kinds of responsibility judgment. In the legal judgment of responsibility, a criminal act consists of two components: (1) a physical act that is willfully performed, occurs in specified circumstances, and results in certain harmful consequences; and (2) a guilty mind, with which the action was performed [Williams, 1953]. Shaver [1985] argued that the legal concept of an act (aka actus reus) is not a synonym for action in the moral sense. In law, the term act is inseparable from the specific circumstances in which the action is performed and the consequences following from it. The very same act may or may not be a criminal act depending on the circumstances. For example, "getting married", an act normally positively regarded by society, is a crime (i.e., bigamy) if one is already married. That is why most legal reasoning systems are case-based (e.g., HYPO [Rissland & Ashley, 1987] and TAXMAN-II [McCarty, 1995]), whereas evaluating moral responsibility draws on general theories that fall within the broad study of cognitive functionalism [3] (e.g., clarifying the roles of cause, belief and intention in explaining behavior).

[3] The doctrine that explains behavior in terms of complex mental states, which are introduced and individuated by the functions or roles they play in producing the behavior to be explained.

Meanwhile, much of the logic-based research on legal argument has focused on more general reasoning mechanisms, typically defeasible inference using non-monotonic reasoning and defeasible argumentation (e.g., [Hage, 1997; Prakken, 1997]). Here the main efforts are on the representation of complex legal rules (e.g., contradictory, nonmonotonic and priority rules), on inference with rules and exceptions, and on the handling of conflicting rules. Prakken and Sartor [2002] present a four-layer view of legal argument, comprising a logical layer (constructing an argument), a dialectical layer (comparing and assessing conflicting arguments), a procedural layer (regulating the process of argumentation), and a strategic or heuristic layer (arguing persuasively).
Each additional layer presupposes, and is built around, the previous layers. They use their four-layered view to analyze some of the most influential implemented work in legal argument and legal reasoning. Here is an illustrative example, taken from Prakken and Sartor [2002]:

P1: I claim that John is guilty of murder.
O1: I deny your claim.
P2: John's fingerprints were on the knife. If someone stabs a person to death, his fingerprints must be on the knife, so John has stabbed Bill to death. If a person stabs someone to death, he is guilty of murder, so John is guilty of murder.
O2: I concede your premises, but I disagree that they imply your claim: witness X says that John pulled the knife out of the dead body. This explains why his fingerprints were on the knife.
P3: X's testimony is inadmissible evidence, since she is anonymous. Therefore, my claim still stands.

P1 illustrates the procedural layer: the proponent (P) starts a dispute by stating a claim, and the opponent (O) can either accept or deny this claim; O does the latter with O1. The procedure now assigns the task of proof to P. P attempts to fulfill this task with an argument for his claim (P2). P2 includes an abductive inference; whether it is constructible is determined at the logical layer. O follows with a counterargument, O2, but whether it is a counterargument and has sufficient attacking strength is determined at the dialectical layer. The same remark holds for P's counterargument P3. In addition, P3 illustrates the heuristic layer: it uses the heuristic that evidence can be attacked by arguing that it is inadmissible. Another heuristic that could have been used is that witness testimonies can be attacked by undermining the witness's credibility.

2.2.2 Extended Causal Models

It is not uncommon to use physical causality as a substitute for modeling social causality; this is the approach taken by most current intelligent systems. A simple cause model always assigns responsibility and blame to the actor whose action directly produces the outcome. Instead of always picking the actor, a slightly more sophisticated model can choose the highest authority (if there is one) as the responsible and blameworthy agent; we call such a model a simple authority model. Because simple models use fixed approaches to handle every causal scenario, they are inflexible and thus insensitive to the changing situations specified in each scenario. In general, simple models are not good solutions to social cause and responsibility judgment. We shall provide experimental evidence in the evaluation (Section 5.2).

In contrast to these simple solutions, recent computational approaches have addressed the problem by extending causal models [Halpern & Pearl, 2001; Chockler & Halpern, 2004]. Halpern and Pearl [2001], for example, proposed a definition of actual cause within the framework of structural causal models. Because their approach can extract more complex causal relationships from simple ones, their model is capable of inferring indirect causal factors, including social causes.

A structural model (or causal model) is a system of equations over a set of random variables. There are two finite sets of variables: exogenous (U) and endogenous (V). The values of exogenous variables are determined by factors outside the model, and thus have no corresponding equations. Each endogenous variable has exactly one structural equation (or causal equation) that determines its value.
A causal model can be expressed as a causal diagram, with nodes corresponding to the variables and edges running from the parents of each endogenous variable (as indicated by the structural equations) to that endogenous variable. Take the two-man firing squad example [Pearl, 1999]: there is a two-man firing squad; on their captain's order, both riflemen shoot simultaneously and accurately, and the prisoner dies. Figure 3 illustrates the causal model for the firing squad example, where U = {Uc} and V = {C, R1, R2, D}. A particular value of the exogenous variables U (called a context) represents a specific situation (i.e., a causal world). For instance, if we assume Uc = 1 (i.e., the captain's order is given) in the causal model below, the resulting causal world describes the two-man firing squad story above.

[Figure 3: Causal Model for the Firing Squad Example. Nodes: context (Uc), commander orders (C), rifleman-1 shoots (R1), rifleman-2 shoots (R2), prisoner's death (D).]

Structural equations:
C = Uc
R1 = C
R2 = C
D = R1 ∨ R2

Causal inference is based on counterfactual dependence under some contingency. In the firing squad scenario above, for example, given the context that the captain orders, and under the contingency that rifleman-2 did not shoot, the prisoner's death is counterfactually dependent on rifleman-1's shooting (details omitted). So rifleman-1's shooting (R1 = 1) is an actual cause of the death. Similarly, rifleman-2's shooting (R2 = 1) is an actual cause of the death. Besides the two riflemen, who physically cause the death, Halpern and Pearl's model finds the captain's order (C = 1) to be an actual cause of the death as well.

Chockler and Halpern [2004] extended this notion of causality to account for degree of responsibility. They provide a definition of responsibility: for example, if a person wins an election 11-0, then each voter who votes for her is a cause of the victory, but each voter is less responsible for the victory than each of the voters in a 6-5 victory. Based on this notion of responsibility, they then defined the degree of blame as the expected degree of responsibility weighted by the epistemic state of the agent (i.e., the agent's knowledge about the outcome prior to performing the action, corresponding to foreseeability in Shaver's terms). In the firing squad example above, according to their formalism, C = 1 (the captain's order) has degree of responsibility 1 and degree of blame 1, while the riflemen's shots, R1 = 1 and R2 = 1, each carry responsibility ½ and each is blamed ½.
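To make the counterfactual test concrete, here is a small Python sketch. It is a simplification rather than the full Halpern-Pearl definition: it hard-codes the firing-squad equations, searches by brute force for a contingency under which the outcome counterfactually depends on a candidate cause, and reads off Chockler and Halpern's 1/(k+1) degree of responsibility from the size k of the smallest such contingency. The function and variable names are our own.

from itertools import combinations, product

def evaluate(uc, interventions):
    """Solve the structural equations C = Uc, R1 = C, R2 = C, D = R1 or R2,
    applying any interventions that override an equation."""
    v = {}
    v["C"] = interventions.get("C", uc)
    v["R1"] = interventions.get("R1", v["C"])
    v["R2"] = interventions.get("R2", v["C"])
    v["D"] = interventions.get("D", v["R1"] or v["R2"])
    return v

def min_contingency(cause, uc=1, outcome="D"):
    """Size of the smallest set of other variables that must be held fixed
    so that the outcome counterfactually depends on `cause`, or None."""
    actual = evaluate(uc, {})
    others = [x for x in ("C", "R1", "R2") if x != cause]
    for k in range(len(others) + 1):
        for held in combinations(others, k):
            for setting in product([0, 1], repeat=k):
                w = dict(zip(held, setting))
                kept = evaluate(uc, {**w, cause: actual[cause]})
                flipped = evaluate(uc, {**w, cause: 1 - actual[cause]})
                if kept[outcome] == actual[outcome] and flipped[outcome] != actual[outcome]:
                    return k
    return None

for var in ("C", "R1", "R2"):
    k = min_contingency(var)
    resp = None if k is None else 1.0 / (k + 1)
    print(var, "is an actual cause," if k is not None else "is not a cause,",
          "degree of responsibility:", resp)
# Expected output: C -> 1.0; R1 and R2 -> 0.5 each, matching the text.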
2.3 Limitations

The models of responsibility and blame attribution proposed by Weiner and Shaver are quite similar in essence. Both models require human agency for attributing responsibility: only when human actions are involved does an event become relevant to a psychological investigation of responsibility attribution. Both view an agent's freedom of choice as being of prime importance in directing responsibility. Both argue that responsibility judgment is a subjective process. In both models, responsibility varies in degree, and the assessments of intention and mitigating factors affect the intensity assigned.

One limitation of Shaver's process model is its strictly sequential character: it assumes that the evaluations of the variables follow each other in time. Weiner's model is more relaxed, in that the sequential processing of Shaver's model is not presumed (though, for convenience, Weiner used an on-off decision process to present his model). In Weiner's model, when a causal agent has no control over the event (i.e., an internal and uncontrollable cause), the agent is evaluated as not responsible for the consequence. Yet his model has one limitation: it cannot redirect responsibility to the agent who does have control in this case. Shaver's model uses coercion instead. The term is more restrictive than controllability, but it better represents the situations in which external forces limit an agent's freedom to choose moral alternatives, and it better accounts for how to redirect responsibility in these situations. Another limitation of Shaver's model is that it does not make the subtle distinction between act and outcome coercion/intention. Weiner [2001] distinguished between act intentionality and outcome intent, and argued that it is outcome intent that affects our judgments of behavior and deserves the more elevated responsibility assignment.

Regarding the logic-based approach to legal argument, McCarty [1997] questioned whether, in real cases, a judge would apply the formal theory to evaluate the complex rules and thereby arrive at the correct result. He criticized the creation of more and more sophisticated rules as a "clash of intuitions", and called for a new version of legal rules that would be "simple and clear". We side with McCarty here. Furthermore, we argue that a layman's judgment of behavior in everyday situations is not quite the same as that made in court: not only does it occur in richer forms of social interaction, but it follows a different set of rules.

Chockler and Halpern's extended definition of responsibility can account for multiple causes and for the extent to which each cause contributes to the occurrence of a specific outcome. Another advantage of their model is that their definition of degree of blame takes an agent's epistemic state into consideration. However, they consider only one epistemic variable: an agent's foreknowledge. Important concepts in moral responsibility, such as intention and freedom of choice, are excluded from their definition. As a result, their model uses foreknowledge as the only determinant of blame assignment, which is inconsistent with psychological theories. Because their model is an extension of counterfactual reasoning within the structural-model framework, and the structural-model approach represents all events as random variables and causal information as equations over those variables, it inherits other limitations as well. For instance, causal equations do not have a direct correspondence in computational systems, so they are hard to obtain for practical applications. Since communicative events are also represented as random variables in their model (which is propositional), it is difficult to construct equations for communicative acts and to infer the intermediate beliefs that are important for social causal reasoning.

In constructing our computational model, we take psychological attribution theory as a starting point and use the strengths of Weiner's and Shaver's models as building blocks. We follow the basic dimensions of Shaver but relax the strictly sequential character of his model, and we follow the implications of Weiner's theory. In evaluating the individual variables, we consider both the actions of agents and the outcomes they produce.
We use first-order logic as a tool to express the commonsense content. Rather than pursuing the complexity of logical forms, we design a small number of inference rules to capture the intuitions in people's judgments. We also take different forms of social interaction into account, and make use of commonsense reasoning to infer beliefs from dialogue communication and task execution. Our approach is based on the general representation commonly used in many intelligent systems; we build the inferential mechanism by evaluating general features of this representation. In the next chapter, we present the details of the computational framework.

Chapter 3: The Computational Framework

3.1 Overview

Attribution theory identifies the general process and key factors people use in judging social behavior. However, this process and these variables are not readily applicable to computational systems, because the variables are described at an abstract conceptual level that is insufficiently precise from a computational perspective. The vast majority of attribution theories, including those for responsibility and blame, have presumed a single event rather than a sequence of events. They rarely address the internal knowledge structures and cognitive processes involved, the few exceptions being Lalljee and Abelson [1983] and Read [1987]. On the other hand, current intelligent systems are increasingly sophisticated, usually involving natural language communication, multi-agent interactions, the ability to generate and execute plans to achieve goals, and methods that explicitly model the beliefs, desires and intentions of agents [Pollack, 1990; Grosz & Kraus, 1996].

In order to bridge the gap between the conceptual descriptions of the theory and the actual components of current intelligent systems, we need to construct a computational model. Ideally, this computational model should be based on the data structures and representations that are typically used in practical systems, and rely as little as possible on additional structure or representation. The computational model should function as an inferential mechanism that derives the variables needed by the theory from the information and context available in practical systems.

In constructing our computational framework, we have adopted the causal representation used by most intelligent systems, especially agent-based systems. This representation provides a concise description of the causal relationship between events and states. It also provides a clear structure for exploring alternative courses of action and plan interactions. Such a representational system supports several key inferences necessary to form attributions: recognizing the relevance of events to an agent's goals and plans (key for intention recognition); assessing an agent's freedom and choice in acting (key for assessing coercive situations); and detecting how an agent's plan facilitates or prevents the plan execution of other agents (key for detecting plan interventions).
[Figure 4: Overview of the Computational Framework. Causal knowledge and observations (action execution and communication) feed causal inference and dialogue inference, which produce beliefs about the attribution variables (cause, intention, foreknowledge, coercion); these beliefs drive the judgments of responsibility and credit/blame.]

In this chapter, we show how to derive the attribution variables via inference over prior causal knowledge and observations, and how the attribution process and algorithm use the beliefs about these variables to form an overall judgment. Figure 4 gives an overview of the computational framework. Two important sources of information contribute to the inference process: one is the actions performed by the agents involved in the social situation (including physical acts and communicative acts); the other is general causal knowledge about actions and states of the world. Causal inference reasons about beliefs from causal evidence, and dialogue inference derives beliefs from communicative evidence. Both inferences make use of commonsense knowledge and generate beliefs about the attribution variables. These beliefs serve as inputs to the attribution process, which is described as an algorithm in our model. Finally, the algorithm forms an overall judgment and assigns proper credit or blame to the responsible agents. In the remainder of this chapter, we discuss the representation, inferences, attribution process and algorithm in detail.

3.2 Representation

Bratman et al [1988] recognize the primacy of beliefs, desires and intentions (BDI) in modeling the rational behavior of agents. Intuitively, an agent's beliefs represent the information the agent has about the world (including itself and other agents); these beliefs may be incomplete or incorrect. An agent's desires represent the states of affairs the agent would wish to bring about, and an agent's intentions represent those desires to which the agent has committed. Bratman et al argued that agents are resource bounded: they are unable to spend unbounded time on deliberation. Thus, for a resource-bounded agent, a major role of the agent's plans is to constrain the amount of further practical planning she must deliberate over. AI planning systems are typically designed to automatically construct plans for agents prior to execution. Since the BDI theory was originally developed [Bratman, 1987], it has been implemented and successfully applied to a number of complex domains, for example the procedural reasoning system (PRS) [Georgeff & Lansky, 1987] and other paradigmatic architectures [Fischer et al, 1996; Rao, 1996; d'Inverno et al, 1997; Huber, 1999]. It has become possibly the best known and best studied model of practical reasoning agents [Georgeff et al, 1999]. Our computational representation is based on the BDI model and the plan descriptions that are widely adopted by intelligent agent systems.

3.2.1 Causal Knowledge

Causal reasoning plays a central role in deriving the attribution variables. In our approach, causal knowledge is encoded via a hierarchical plan representation. An action has a set of propositional preconditions and effects (including conditional effects). Actions can be either primitive (i.e., directly executable by agents) or abstract.
An abstract action may be decomposed in multiple ways, and each decomposition is one choice of executing the action. Different choices of executing an action are called alternatives of each other. Consequences or outcomes (we use the terms interchangeably) are those desirable or undesirable action effects (i.e., effects having positive or negative significance to an agent). A plan is a set of actions to achieve certain intended goal(s). As a plan may contain abstract actions (i.e., an abstract plan), each abstract plan induces a plan structure through decomposition. If an abstract action can only be decomposed in one way, it is a non-decision node (i.e., an and node) in the plan structure. Otherwise, it is a decision node (i.e., an or node) in the plan structure and an agent must decide amongst the options. Decomposing the abstract actions of an abstract plan into primitive ones results in a set of primitive plans (i.e., plans composed of only primitive actions), which are directly executable by agents. The space of all the primitive plans constitutes a plan library. The outcomes of a plan are the aggregation of the outcomes of the actions that constitute the plan.

To represent the hierarchical organizational structure of social agents, each action in a plan is associated with a performer (i.e., the agent capable of performing the action) and an agent who has authority over its execution. The performer should not execute the action until authorization is given by the authority.

[Figure 5: Illustrative Example of Action Representation. The abstract action Support Unit 1-6 (performer and authority: lieutenant), with precondition Troop-at-aa, is an or node with two alternatives, Send One Squad and Send Two Squads (performer: sergeant, authority: lieutenant). Each alternative is an and node decomposed into One Squad Forward or Two Squads Forward plus Remaining Forward (performer: squad leader, authority: sergeant), with preconditions such as One-sqd-at-aa, Two-sqds-at-aa and Remaining-at-aa, and effects such as Route Secured, 1-6 Supported, and Unit Fractured or Not Fractured.]

Figure 5 illustrates an example of action representation from a team training system we developed. A leader (the lieutenant) is in charge of a troop, and his mission is to support a sister unit (unit 1-6). There are two alternative ways to support unit 1-6: either sending one squad or sending two squads. Each alternative can be performed by his assistant (the sergeant) if authorized. The alternatives can be further decomposed into subsequent primitive actions that are directly executable by the subordinates of the sergeant (the squad leaders). Action execution brings about certain effects; for example, two squads forward fractures the unit. The actions in the graph form a partial plan structure for the agent team (the troop).
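To make the hierarchical plan representation concrete, the following is a minimal sketch of how it might be encoded in Python. The class and field names (Action, decompositions, performer, authority, and so on) are illustrative assumptions for exposition only and are not the data structures of the system described in this thesis.

from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Action:
    """One node in the hierarchical plan structure (Section 3.2.1)."""
    name: str
    preconditions: List[str] = field(default_factory=list)   # propositional preconditions
    effects: List[str] = field(default_factory=list)         # unconditional effects
    conditional_effects: List[Tuple[List[str], str]] = field(default_factory=list)  # (antecedents, consequent)
    performer: Optional[str] = None                          # agent capable of performing the action
    authority: Optional[str] = None                          # agent who must authorize execution
    # Each decomposition is one way (choice) of executing an abstract action.
    decompositions: List[List["Action"]] = field(default_factory=list)

    def is_primitive(self) -> bool:
        return not self.decompositions

    def is_and_node(self) -> bool:
        # abstract action with exactly one decomposition: non-decision node
        return len(self.decompositions) == 1

    def is_or_node(self) -> bool:
        # abstract action with several decompositions: decision node
        return len(self.decompositions) > 1

# A simplified fragment of the Figure 5 example (remaining-forward omitted).
two_sqds_fwd = Action("two-squads-forward", preconditions=["two-sqds-at-aa"],
                      effects=["1-6-supported", "unit-fractured"],
                      performer="squad-leader", authority="sergeant")
send_two_sqds = Action("send-two-squads", performer="sergeant", authority="lieutenant",
                       decompositions=[[two_sqds_fwd]])
send_one_sqd = Action("send-one-squad", performer="sergeant", authority="lieutenant")
support_1_6 = Action("support-unit-1-6", performer="lieutenant", authority="lieutenant",
                     decompositions=[[send_one_sqd], [send_two_sqds]])  # or node: two alternatives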
3.2.2 Communicative Events

We represent communicative events as sequences of speech acts [Austin, 1962; Searle, 1969, 1979]. Austin [1962] observed that utterances are not just factual statements, but are used to do things. Speaking is acting, and in speaking, the speaker is performing speech acts. Austin distinguished several types of speech acts: an act of saying something (a locutionary act, e.g., uttering "It is raining"), an act performed in saying something (an illocutionary act, e.g., asking a question or making a promise), and an act performed by saying something (a perlocutionary act, e.g., eliciting an answer). Searle [1969] further developed Austin's approach. Most of his work is on illocutionary acts. According to Searle, in saying something, a speaker is performing at least three distinct kinds of acts ([Searle, 1969], pp. 23-24): (1) the uttering of words (performing utterance acts); (2) making reference and predicating (performing propositional acts); (3) stating, questioning, commanding, promising, etc. (performing illocutionary acts). In performing an illocutionary act, one characteristically performs propositional acts and utterance acts.

Speech act theory is a relatively well-defined theory of communicative acts [Dore, 1978]. At the most fundamental level, speech acts are units which simultaneously manifest the structure, content, and function of language. The structure of a speech act is its grammar (which we shall not focus on in this thesis). Its content consists of the conceptual substance of the proposition, and its function is the illocutionary force (consisting of the speaker's intentions, expectations, etc).

To represent an illocutionary act, we use a first-order predicate, with variables representing the speaker, the hearer and the propositional content of the act. For our purposes, we focus on the speech acts that help infer dialogue agents' desires, intentions, foreknowledge and choices in acting. We have included the following speech acts in our model (variables x and y are different agents; let p and q be propositions and t be the time):

inform(x, y, p, t): x informs y that p at t.
request(x, y, p, t): x requests y that p at t.
order(x, y, p, t): x orders y that p at t.
accept(x, p, t): x accepts p at t.
reject(x, p, t): x rejects p at t.
counter-propose(x, p, q, y, t): x counters p and proposes q to y at t.

Except for the speech act "inform", propositions in the speech act representation above usually take the form of do(z, A) or achieve(z, e) (see the next section for definitions; here A is an action, e is an action effect, and z can be one of the participating agents x or y, or another agent).
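For concreteness, observed speech acts could be recorded along the following lines; this is a hedged sketch in Python, and the class name SpeechAct and its fields are assumptions introduced here for illustration, not the representation used in the implemented system.

from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class SpeechAct:
    """One observed communicative event. 'content' is a propositional content such as
    ('do', 'sgt', 'send-two-squads') or ('achieve', 'sgt', 'unit-fractured')."""
    act: str          # 'inform', 'request', 'order', 'accept', 'reject', 'counter-propose'
    speaker: str
    hearer: str
    content: Tuple
    time: int

# A perceiving agent keeps the observed acts in temporal order (the dialogue history).
dialogue_history = [
    SpeechAct("order", "lt", "sgt", ("do", "sgt", "send-two-squads"), 1),
    SpeechAct("accept", "sgt", "lt", ("do", "sgt", "send-two-squads"), 2),
]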
3.2.3 Attribution Variables

Attributional models employ a set of key variables to determine social cause and responsibility.

Causality refers to the relationship between cause and effect. Causality has been studied by many disciplines (e.g., physical science, philosophy, psychology, religion, law, politics), and each discipline proposes different principles of causation. For the psychological investigation of responsibility attribution, causality is restricted to personal causality [Shaver, 1985; Weiner, 1995]. In our approach, we encode causal knowledge about actions (i.e., human agency) and the effects they produce via the plan representation.

Intention is generally conceived as a commitment to work toward a certain act or outcome. Most theories view intention as a major determinant of the degree of responsibility. Intentions can be future-directed or present-directed. The former guide agents' planning and constrain their adoption of other intentions [Bratman, 1987], whereas the latter function causally in producing behavior [Searle, 1983]. As the future-directed concept of intention is generally accepted, we shall concentrate on it in this thesis. There is a further distinction: an agent may intentionally perform an action (i.e., act intentionality), but may not intend all of the action's effects (i.e., outcome intent). It is outcome intent, rather than act intentionality, that is key in responsibility judgment [Weiner, 2001]. We use intend and do to represent act intentionality, and intend and achieve for outcome intent (see the next section for the symbolic representation).

Foreseeability refers to an agent's foreknowledge about actions and their consequences. If an agent knows that an action leads to a certain outcome before action execution, then the agent foresees the action outcome. Intention entails foreknowledge; that is, if an agent intends an action to achieve an outcome, then the agent must have the foreknowledge that the action brings about the outcome. We use know and bring-about to represent foreseeability (see the next section for the symbolic representation).

Coercion occurs when some external force, such as a more powerful individual or a socially sanctioned authority, limits an agent's freedom of choice. It is outcome coercion (i.e., a coerced action effect) rather than act coercion (i.e., a coerced action) that actually affects our judgment of behavior, and it is used to determine the responsible agents. We use coerce and do to represent act coercion, and coerce and achieve for outcome coercion (see the next section for the symbolic representation).

3.2.4 Logical Expressions

In the previous sections, we defined the representational features used in our computational framework. Now we concretize them by providing their symbolic expressions in predicate calculus. We choose first-order logic as a generic formal tool for several reasons. First, first-order logic is sufficiently expressive that it can be used to encode rich forms of everyday language as well as the knowledge required. It is more expressive than propositional logic and other restricted logical forms, yet has lower complexity compared with higher-order logics. Unlike modal logics, it is a well-established approach with a sound computational basis for verifiability and inference. Though the satisfiability problem for first-order logic is only semi-decidable [Genesereth & Nilsson, 1987], the corresponding problem for modal logics tends to be even worse: some multimodal logics are undecidable even in the propositional case [Halpern & Vardi, 1989]. This has made it prohibitive to invent practical theorem-proving methods for modal logics. Even for propositional logic, the satisfiability problem is NP-complete. However, with decades of effort, researchers have made significant progress in building efficient theorem provers for first-order logic [Wos & Pieper, 2000; Kalman, 2001].

Hobbs [1985] proposed a first-order logical notation. The chief feature of this notation is the use of eventualities, that is, an extra argument in each predication referring to the condition that exists when that predication is true. An eventuality may or may not exist in the real world. For every predicate P(x), P is true of x if and only if there is an eventuality or possible situation e' of P being true of x (called P') and e' really exists.
The relation between the unprimed and primed predicates is given by

(∀x) P(x) ⇔ (∃e')(P'(e', x) ∧ Exist(e'))

Hobbs [1985] gives further explanation of the specific problems and ontological assumptions of the notation. By reifying events and conditions, Hobbs's approach provides a way of expressing higher-order properties in first-order logic. In our formalism, we shall use this notation when necessary.

Predicates

Variables x and y are different agents. Let A be an action, e be an effect, p be a proposition and t be a time (time t is used to add ordering constraints). Variable E is an effect set. We have adopted the following predicates in the model:

P1. cause(x, e, t): agent x physically causes effect e at time t.
P2. assist-cause(x, y, e, t): agent x assists agent y by causing effects relevant to achieving e at time t.
P3. know(x, p, t): agent x knows the proposition p at time t.
P4. intend(x, p, t): agent x intends the proposition p at time t.
P5. coerce(x, y, p, t): agent x coerces agent y to make proposition p hold at time t.
P6. want(x, p, t): agent x wants the proposition p at time t.
P7. obligation(x, p, y, t): agent x has the obligation of proposition p, created by agent y, at time t.
P8. primitive(A): A is a primitive action.
P9. and-node(A): action A is a non-decision node in the plan structure.
P10. or-node(A): action A is a decision node in the plan structure.
P11. alternative(A, B): actions A and B are alternatives for performing a higher-level action.
P12. do(x, A): agent x performs action A.
P13. achieve(x, e): agent x achieves effect e.
P14. bring-about(A, e): action A brings about effect e.
P15. by(A, e): by acting A to achieve effect e.
P16. execute(x, A, t): agent x executes action A at time t.
P17. enable(x, E, t): agent x makes the effects in effect set E true at time t (enable(x, ¬E, t) means that agent x disables effect set E by making at least one effect in E false).
P18. can-execute(x, A, t): agent x is capable of executing action A at time t.
P19. can-enable(x, e, t): agent x is capable of making effect e true at time t (can-enable(x, ¬e, t) means that agent x can disable effect e by making it false).
P20. occur(e, t): effect e occurs at time t.
P21. superior(x, y): agent x is a superior of agent y.
P22. true(e, t): effect e is true at time t (if e is an effect set, this means that every effect in the effect set e is true at time t, and ¬true(e, t) means at least one effect in the effect set e is false at time t).

Predicates P1−P7 describe the epistemic variables (including attributions) used for inferring intermediate beliefs. Predicates P8−P17 represent action types and relations, and notation related to action execution. Predicates P18−P22 represent capabilities and the power relationships of agents. Various situations in action execution (e.g., intentional action/effect, negligence, side effect and failed attempt) as well as enabling conditions can be expressed using these predicates. The detailed formulae are given in Appendix C. There is a subtle difference between the predicates occur and true. Effect e occurring at time t implies that e becomes true at time t, but not before t, while the fact that effect e is true at t just describes a state of affairs, regardless of the previous state of e (being true or false).

Functions

Variable x is an agent (or a group of agents). Let A and B be actions, e be an effect and AT be an action theory. We have adopted the following functions in the model:

F1. precondition(A): the precondition set of action A.
F2. effect(A): the effect set of action A.
F3. subaction(A): the subaction set of abstract action A.
F4. choice(A): the choice set for performing abstract action A.
F5. conditional-effect(A): the conditional effect set of action A.
F6. antecedent(e): the antecedent set of conditional effect e.
F7. consequent(e): the consequent of conditional effect e.
F8. definite-effect(A): the definite effect set of action A.
F9. indefinite-effect(A): the indefinite effect set of action A.
F10. relevant-action(e, AT): the set of actions relevant to achieving e according to action theory AT and the observations.
F11. relevant-effect(e, AT): the set of effects relevant to achieving e according to action theory AT and the observations.
F12. side-effect(e, AT): the set of side effects of achieving e according to action theory AT and the observations.
F13. performer(A): the performing agent(s) of action A.
F14. authority(A): the authorizing agent(s) of action A.
F15. primary-responsible(e): the primary responsible agent(s) for effect e.
F16. secondary-responsible(e): the secondary responsible agent(s) for effect e.

Among these functions, F1−F7 are the generic features of the plan representation. These features are defined in Section 3.2.1. Functions F8−F12 describe the definite/indefinite effect sets, relevant actions/effects and side effects (see Appendices D and E for definitions and computations). Functions F13−F16 represent the agents involved.

3.3 Inferences

To form a social judgment, a perceiving agent needs to infer beliefs about attribution variables from observations of behavior. We show how automatic methods for causal and dialogue reasoning can provide such a mechanism.

3.3.1 Dialogue Inference

Conversation between agents is a rich source of information for deriving attribution values. Both attribution theorists (e.g., [Kidd & Amabile, 1981; Hilton, 1990]) and computational linguists (e.g., [Allen & Perrault, 1980; Cohen et al, 1990]) have pointed out the importance of language communication in attributing behavior. In a conversational dialogue, the participating agents exchange information in turn. A perceiving agent (who can be one of the participating agents or another agent) forms and updates beliefs according to the observed speech acts and previous beliefs. We follow the information state approach to communication management [Larsson & Traum, 2000]. This approach maintains an explicit information state that is updated by dialogue moves. For example, given that the applicability conditions hold, "ask" is a dialogue move that has as its effect an obligation added to the information state for the hearer to address the speaker's question.

Natural language communication can be seen as a collaborative activity between conversational agents. Successful communication requires some degree of common ground [Clark & Schaefer, 1987], and illocutionary acts must be grounded to have their useful effects. Grounding [Traum, 1994] is the process of adding to the common ground between the conversational participants. For instance, by acknowledging what the speaker says, the hearer shows mutual belief about the content of communication. We assume communication between agents is already grounded. In addition, we assume conversation conforms to Grice's maxims of Quality and Relevance [Grice, 1975]. The quality maxim states that one ought to provide true information in conversation. The relevance maxim states that one's contribution to a conversation ought to be pertinent in context. Grice's maxims are neither prescriptive nor descriptive of what actually happens in conversations.
Rather, they express assumptions that a hearer can bring to bear, assumptions that any interpretation of a speaker's utterance should attempt to preserve [Miller & Glucksberg, 1988]. These assumptions allow us to treat the conversation of agents as a reliable source of information in our approach. The disadvantage is that in many social dialogues that evoke credit or blame assignments, people are notoriously biased. In these circumstances, the sincerity condition will no longer hold. Nevertheless, our approach provides a starting point as well as a first approximation for inferring social attributions from communication, and it is possible to adjust the degrees of the inferred beliefs according to the credibility of the speaker (we shall discuss the probabilistic extensions of the approach in Chapter 5).

We design commonsense rules that allow a perceiving agent to derive beliefs about the epistemic states of the observed agents. There are subtle differences between the epistemic variables such as desire, want and intention. Similar to Sadek's want attitude [1992], we think that an agent wants what she believes (is true), but desire does not require this property. Thus, an agent may desire something, but not necessarily want it. On the other hand, an agent may want something, but not necessarily intend it. To adopt an intention, an agent must commit to it. However, if an agent acts voluntarily, intention should entail want, and want entails desire. Goals are those chosen desires, and by construction, chosen desires are consistent [Cohen & Levesque, 1990].

Two concepts are important in understanding coercion. One is social obligation. The other is (un)willingness. For example, if some authorizing agent commands another agent to perform a certain action, then the latter agent has an obligation to do so. But if the agent is actually willing, this is a voluntary act rather than a coerced one. We also take social information (agents' relationships) into consideration; for example, the same speech act (e.g., a request) performed by agents with different social status may lead to different belief derivations.

If at time t1 a speaker (s) informs (or tells) a hearer (h) the content p, then after t1 it can be inferred that the speaker knows proposition p, as long as there is no intervening contradictory belief. As conversations between agents are grounded, it can be inferred that the hearer also knows that p (universal quantifiers are omitted for simplicity).

Rule D1 [inform]: inform(s, h, p, t1) ∧ t1<t3 ∧ ¬(∃t2)(t1<t2<t3 ∧ ¬know(s, p, t2)) ⇒ know(s, p, t3)

Rule D2 [inform-grounded]: inform(s, h, p, t1) ∧ t1<t3 ∧ ¬(∃t2)(t1<t2<t3 ∧ ¬know(h, p, t2)) ⇒ know(h, p, t3)

A request shows what the speaker wants. An order (or command) shows the speaker's intent. An order can only be successfully issued by someone higher in social status. If requested or ordered by a superior, the hearer acquires a social obligation to perform the content of the act. (To simplify the rules below, we introduce a predicate etc, indexed per rule, which stands for the absence of contradictory situations; this is similar to the notation used in Hobbs et al [1993]. In our formalism, the predicate etc in each rule takes the form ¬(∃tk)(ti<tk<tj ∧ ¬P(x, tj)), where P(x, tj) is the right-hand side of the rule and ti is a time stamp on the left-hand side; it essentially means that there is no contradictory belief in between.)
Rule D3 [request]: request(s, h, p, t1) ∧ t1<t2<t3 ∧ etc3(s, p, t2) ⇒ want(s, p, t3)

Rule D4 [superior-request]: request(s, h, p, t1) ∧ superior(s, h) ∧ t1<t2<t3 ∧ etc4(s, h, p, t2) ⇒ obligation(h, p, s, t3)

Rule D5 [order]: order(s, h, p, t1) ∧ t1<t2<t3 ∧ etc5(s, p, t2) ⇒ intend(s, p, t3)

Rule D6 [order]: order(s, h, p, t1) ∧ t1<t2<t3 ∧ etc6(s, h, p, t2) ⇒ obligation(h, p, s, t3)

The hearer may accept, reject or counter-propose an order (or request). If the hearer has no obligation beforehand but accepts, it can be inferred that the hearer intends. If the hearer wants beforehand and accepts, we can draw the same conclusion.

Rule D7 [accept]: ¬obligation(h, p, s, t1) ∧ accept(h, p, t2) ∧ t1<t2<t3<t4 ∧ etc7(h, p, t3) ⇒ intend(h, p, t4)

Rule D8 [want-accept]: want(h, p, t1) ∧ accept(h, p, t2) ∧ t1<t2<t3<t4 ∧ etc8(h, p, t3) ⇒ intend(h, p, t4)

If there is no clear evidence of an agent's willingness (i.e., want) beforehand, yet the agent accepts the obligation, there is evidence of coercion. In another case, when an agent obviously does not intend (or want) but accepts the obligation, there is evidence of strong coercion (refer to Chapter 5 for the degree of belief and its usage).

Rule D9 [accept-obligation]: ¬(∃t1)(t1<t3 ∧ want(h, p, t1)) ∧ obligation(h, p, s, t2) ∧ accept(h, p, t3) ∧ t2<t3<t4<t5 ∧ etc9(s, h, p, t4) ⇒ coerce(s, h, p, t5)

Rule D10 [unwilling-accept-obligation]: ¬intend(h, p, t1) ∧ obligation(h, p, s, t2) ∧ accept(h, p, t3) ∧ t1<t3 ∧ t2<t3<t4<t5 ∧ etc10(s, h, p, t4) ⇒ coerce(s, h, p, t5)

If the hearer rejects, infer that the hearer does not intend. If the hearer counters A and proposes B instead, both the speaker and the hearer are believed to know that A and B are alternatives. It also implies what the hearer wants and does not intend.

Rule D11 [reject]: reject(h, p, t1) ∧ t1<t2<t3 ∧ etc11(h, p, t2) ⇒ ¬intend(h, p, t3)

Rule D12 [counter-propose]: counter-propose(h, p, q, s, t1) ∧ do'(p, h, A) ∧ do'(q, h, B) ∧ t1<t2<t3 ∧ etc12(h, A, B, t2) ⇒ ∃a(know(h, a, t3) ∧ alternative'(a, A, B))

Rule D13 [counter-propose-grounded]: counter-propose(h, p, q, s, t1) ∧ do'(p, h, A) ∧ do'(q, h, B) ∧ t1<t2<t3 ∧ etc13(s, A, B, t2) ⇒ ∃a(know(s, a, t3) ∧ alternative'(a, A, B))

Rule D14 [counter-propose]: counter-propose(h, p, q, s, t1) ∧ t1<t2<t3 ∧ etc14(h, p, t2) ⇒ ¬intend(h, p, t3)

Rule D15 [counter-propose]: counter-propose(h, p, q, s, t1) ∧ t1<t2<t3 ∧ etc15(h, q, t2) ⇒ want(h, q, t3)

If the speaker has known the alternatives and still requests (or orders) one of them, infer that the speaker wants (or intends) the chosen action and does not intend the alternative (here z can be s or h).

Rule D16 [know-alternative-request]: know(s, a, t1) ∧ alternative'(a, A, B) ∧ request(s, h, p, t2) ∧ do'(p, z, A) ∧ do'(q, z, B) ∧ t1<t2<t3<t4 ∧ etc16(s, q, t3) ⇒ ¬intend(s, q, t4)

Rule D17 [know-alternative-order]: know(s, a, t1) ∧ alternative'(a, A, B) ∧ order(s, h, p, t2) ∧ do'(p, h, A) ∧ do'(q, h, B) ∧ t1<t2<t3<t4 ∧ etc17(s, q, t3) ⇒ ¬intend(s, q, t4)
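The following is a rough Python rendering of how a few of these dialogue rules could be applied over an observed dialogue history. It builds on the hypothetical SpeechAct sketch given earlier; the function name dialogue_inference and the tuple encoding of beliefs are assumptions for illustration, and the defeasible etc conditions are only crudely approximated.

from typing import List, Set, Tuple

Belief = Tuple  # e.g. ('intend', 'lt', ('do', 'sgt', 'send-two-squads'), 4)

def dialogue_inference(history: List["SpeechAct"]) -> Set[Belief]:
    """Apply a few of the dialogue rules (D3, D5, D6, D9, D11) in temporal order.
    The etc conditions are approximated by requiring that no prior want has been
    observed before inferring coercion; defeasibility is otherwise ignored."""
    beliefs: Set[Belief] = set()
    wanted = set()       # (agent, proposition) pairs the agent is known to want
    obligations = set()  # (hearer, proposition, speaker) triples
    for act in sorted(history, key=lambda a: a.time):
        t = act.time + 1  # derived beliefs hold after the act is observed
        if act.act == "order":
            beliefs.add(("intend", act.speaker, act.content, t))                  # Rule D5
            beliefs.add(("obligation", act.hearer, act.content, act.speaker, t))  # Rule D6
            obligations.add((act.hearer, act.content, act.speaker))
        elif act.act == "request":
            beliefs.add(("want", act.speaker, act.content, t))                    # Rule D3
            wanted.add((act.speaker, act.content))
        elif act.act == "accept":
            for (h, p, s) in obligations:
                if h == act.speaker and p == act.content and (h, p) not in wanted:
                    beliefs.add(("coerce", s, h, p, t))                           # Rule D9
        elif act.act == "reject":
            beliefs.add(("not-intend", act.speaker, act.content, t))              # Rule D11
    return beliefs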
3.3.2 Causal Inference

The plan representation and the agents' plans give further evidence for inferring agency, intention and coercion, in both direct and indirect cases. Causal inference is a plan-based evaluation over the causal information provided by this representation. To simplify the logical expressions, and without causing any confusion, we sometimes substitute A and e for do(x, A) and achieve(x, e), respectively.

Agency

In a plan execution environment that multiple agents inhabit, agents' plans can interact in various ways. The preconditions of an agent's action might be established by the activities of other agents, and thus these other agents indirectly help cause the outcome. Given an action theory AT, the observed executed actions and an outcome e, in the absence of coercion, the performer of an action A that directly causes e is the causal agent. The performers of other actions relevant to achieving e (see Appendix E for the details of the definition and computation) have indirect agency. The causal agent is deemed responsible for e, while the other agents who assist in causing e should share responsibility with the causal agent.

Rule C1 [cause-action-effect]: execute(x, A, t1) ∧ e∈effect(A) ∧ occur(e, t2) ∧ t1<t2<t3<t4 ∧ etc18(x, e, t3) ⇒ cause(x, e, t4)

Rule C2 [cause-relevant-effect]: execute(x, B, t1) ∧ B∈relevant-action(e, AT) ∧ e∈effect(A) ∧ A≠B ∧ cause(y, e, t2) ∧ t1<t2<t3<t4 ∧ etc19(x, y, e, t3) ⇒ assist-cause(x, y, e, t4)

[Figure 6: Indirect Agency Establishes Action Precondition. The sergeant executes Assemble (performer: sergeant, authority: lieutenant) at t1, which establishes a precondition of Two Squads Forward (performer: squad leader, authority: sergeant), executed by the squad leader at t2; the states involved include Troop-in-transit, Troop-at-accident-area, Two-squads-at-accident-area, 1-6-supported and Unit-fractured.]

In the example above (Figure 6), the squad leader performs a primitive action, two squads forward, and causes the outcomes 1-6-supported and unit-fractured. While the squad leader is the causal agent for these outcomes, the sergeant, who assists the squad leader by enabling the action precondition two squads at accident area, is partially responsible for the outcomes.
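A minimal sketch of the agency inference in Python follows, assuming the hypothetical Action class from Section 3.2.1's sketch. The relevant-action set is taken as an input here because its computation is defined in Appendix E; the function name agency_inference and the tuple encoding of beliefs are illustrative assumptions.

from typing import List, Set, Tuple

def agency_inference(executed: List[Tuple[str, "Action", int]],
                     outcome: str,
                     relevant_actions: Set[str]) -> Set[Tuple]:
    """Rough rendering of Rules C1 and C2: 'executed' lists (agent, action, time)
    observations; 'relevant_actions' names the actions relevant to the outcome.
    Returns cause / assist-cause beliefs as tuples."""
    beliefs: Set[Tuple] = set()
    causal_agents = set()
    for agent, action, t in executed:
        if outcome in action.effects:                 # Rule C1: direct cause
            beliefs.add(("cause", agent, outcome, t + 1))
            causal_agents.add(agent)
    for agent, action, t in executed:
        if action.name in relevant_actions and agent not in causal_agents:
            for causer in causal_agents:              # Rule C2: indirect agency
                beliefs.add(("assist-cause", agent, causer, outcome, t + 1))
    return beliefs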
Intention

Attribution of intention is essential to people's explanations of behavior [Heider, 1958; Malle & Knobe, 1997]. As we have discussed in Section 3.3.1, intentions can be inferred from evidence in natural language conversation. Causal inference helps infer outcome intent from evidence of act intentionality. For example, if an agent intends an action A voluntarily, the agent must intend at least one action effect of A. If A has only one action effect e, then the agent is believed to intend e.

Rule C3 [intend-action]: intend(x, p, t1) ∧ do'(p, z, A) ∧ ¬(∃y)(coerce(y, x, A, t1)) ∧ t1<t2<t3 ∧ etc20(x, A, t2) ⇒ ∃e(e∈effect(A) ∧ intend(x, e, t3))

In more general cases, when an action has multiple effects, in order to identify whether a specific outcome is intended or not, a perceiver may examine the alternatives the agent intends and does not intend, and compare the effects of the intended and unintended alternatives. If an agent intends an action A voluntarily and does not intend its alternative B, we can infer that the agent either intends (at least) one action effect that only occurs in A, or does not intend (at least) one consequence that only occurs in B, or both (Figure 7).

[Figure 7: Inferring Outcome Intent by Comparing Alternatives – General Case. Action C (performer: superior) is an or node with alternatives Action A (effects {p, q, r}) and Action B (effects {r, s, t}), each with performer: agent and authority: superior; given Intend(Do A) and ¬Intend(Do B), the agent intends some effect in {p, q} or does not intend some effect in {s, t}.]

If the effect set of A is a subset of that of B, the inference can be simplified. As there is no effect of A that does not occur in the effect set of B, we can infer that the agent does not intend (at least) one effect that only occurs in B. In particular, if there is only one effect e of B that does not occur in the effect set of A, infer that the agent does not intend e (Figure 8).

[Figure 8: Inferring Outcome Intent by Comparing Alternatives – Special Case 1. Alternatives Action A (effects {p, q}) and Action B (effects {p, q, r}) under Action C; given Intend(Do A) and ¬Intend(Do B), infer ¬Intend effect r.]

Rule C4 [intend-one-alternative]: intend(x, p, t1) ∧ do'(p, z, A) ∧ ¬intend(x, q, t1) ∧ do'(q, z, B) ∧ ¬(∃y)(coerce(y, x, A, t1)) ∧ alternative(A, B) ∧ effect(A)⊂effect(B) ∧ t1<t2<t3 ∧ etc21(x, A, B, t2) ⇒ ∃e(e∉effect(A) ∧ e∈effect(B) ∧ ¬intend(x, e, t3))

On the other hand, given the same context in which an agent intends an action A and does not intend its alternative B, if the effect set of B is a subset of that of A, infer that the agent intends (at least) one effect that only occurs in A (Figure 9). In particular, if there is only one effect e of A that does not occur in the effect set of B, the agent must intend e.
[Figure 9: Inferring Outcome Intent by Comparing Alternatives – Special Case 2. Alternatives Action A (effects {p, q, r}) and Action B (effects {r}) under Action C; given Intend(Do A) and ¬Intend(Do B), the agent intends effect p or q.]

Rule C5 [intend-one-alternative]: intend(x, p, t1) ∧ do'(p, z, A) ∧ ¬intend(x, q, t2) ∧ do'(q, z, B) ∧ ¬(∃y)(coerce(y, x, A, t1)) ∧ alternative(A, B) ∧ effect(B)⊂effect(A) ∧ t1<t3 ∧ t2<t3<t4 ∧ etc22(x, A, B, t3) ⇒ ∃e(e∈effect(A) ∧ e∉effect(B) ∧ intend(x, e, t4))

If no clear belief about intention can be derived from causal and dialogue inference, we can employ intention recognition as a general approach to detecting intentions. Given the observed executed actions of the agent(s) and a plan library, if the observed action sequence matches the actions in a primitive plan, then we can be certain that this primitive plan is the one pursued by the agent(s). In most situations, however, the observed action sequence only partially matches a specific plan. To find the best candidate plan that explains the observed actions, most intention recognition algorithms use probabilistic models for the inference. We shall leave the discussion of probabilistic intention recognition methods to Chapter 5. Here we give the implied criteria for determining intended actions and effects. If an agent intends a certain plan to achieve the goal of the plan, then the agent should intend those actions and effects relevant to achieving the goal in the plan context (see Appendix E for definitions). Other side effects are not intended by the agent.

Rule C6 [intend-plan]: intend(x, b, t1) ∧ by'(b, plan, goal) ∧ A∈relevant-action(goal, plan) ∧ t1<t2<t3 ∧ etc23(x, A, t2) ⇒ intend(x, A, t3)

Rule C7 [intend-plan]: intend(x, b, t1) ∧ by'(b, plan, goal) ∧ e∈relevant-effect(goal, plan) ∧ t1<t2<t3 ∧ etc24(x, e, t2) ⇒ intend(x, e, t3)

Rule C8 [intend-plan]: intend(x, b, t1) ∧ by'(b, plan, goal) ∧ e∈side-effect(goal, plan) ∧ t1<t2<t3 ∧ etc25(x, e, t2) ⇒ ¬intend(x, e, t3)
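For concreteness, the set comparisons behind Rules C4 and C5 can be written out directly; the following Python sketch covers only the two special cases with a unique distinguishing effect, where a definite conclusion follows, and the function name is an illustrative assumption.

from typing import Set, Tuple

def outcome_intent_from_alternatives(effects_intended: Set[str],
                                     effects_rejected: Set[str]) -> Tuple[Set[str], Set[str]]:
    """Given the effect set of an intended action A and of its rejected alternative B
    (and no coercion), return (effects inferred intended, effects inferred unintended)."""
    only_in_a = effects_intended - effects_rejected
    only_in_b = effects_rejected - effects_intended
    intended, unintended = set(), set()
    if not only_in_a and len(only_in_b) == 1:      # Rule C4, special case (Figure 8)
        unintended |= only_in_b
    if not only_in_b and len(only_in_a) == 1:      # Rule C5, special case (Figure 9)
        intended |= only_in_a
    return intended, unintended

# Figure 8 example: effect(A) = {p, q}, effect(B) = {p, q, r}  ->  r is not intended.
print(outcome_intent_from_alternatives({"p", "q"}, {"p", "q", "r"}))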
Foreknowledge

Since foreknowledge refers to an agent's epistemic state, it is mainly derived from dialogue inference. The speech acts inform, tell or assert, for instance, give evidence that the conversants know the content of the act. Intention recognition also helps infer an agent's foreknowledge: if an agent intends an action A to achieve an effect e of A, then the agent must know that A brings about e.

Rule C9 [intend-foreknowledge-relation]: intend(x, b, t1) ∧ by'(b, A, e) ∧ t1<t2<t3 ∧ etc26(x, A, e, t2) ⇒ ∃ba(know(x, ba, t3) ∧ bring-about'(ba, A, e))

In addition, an agent should know what his or her own action would bring about, if the action and its effects are general knowledge in the task representation. Otherwise, on the left-hand sides of the rules below, a perceiver should instead use what she knows about the specific knowledge the involved agents have.

Rule C10 [foreknowledge-performer]: e∈effect(A) ∧ t1<t2 ∧ etc27(performer(A), A, e, t1) ⇒ ∃ba(know(performer(A), ba, t2) ∧ bring-about'(ba, A, e))

Rule C11 [foreknowledge-authority]: e∈effect(A) ∧ t1<t2 ∧ etc28(authority(A), A, e, t1) ⇒ ∃ba(know(authority(A), ba, t2) ∧ bring-about'(ba, A, e))

These two rules are relatively weak compared to Rules D1 and D2, where an agent's foreknowledge is inferred from clear evidence in language communication (refer to Chapter 5 for the degree of belief and its usage).

Coercion

A causal agent could be absolved of responsibility if she was coerced by other forces, but just because an agent applies coercive force does not mean coercion actually occurs. What matters is whether this force truly constrains the causal agent's freedom to avoid the outcome. Causal inference helps infer outcome coercion from evidence of act coercion. If an agent is coerced to execute a primitive action, the agent is also coerced to achieve all the action effects. If the agent is coerced to execute an abstract action and the action has only one decomposition (i.e., a non-decision node), then the agent is also coerced to execute the subsequent actions and to achieve all the subaction effects (Figure 10). (For illustrative purposes, we use good-effect and bad-effect instead of specific action effects.)

[Figure 10: Inferring Outcome Coercion – Non-Decision Node. Action C (performer: superior) is an and node decomposed into Action A and Action B (performer: agent, authority: superior), whose effects include Good-effect and Bad-effect; given Coerce(Do C), both Good-effect and Bad-effect are coerced.]

Rule C12 [coerce-primitive]: coerce(y, x, p, t1) ∧ do'(p, x, A) ∧ primitive(A) ∧ e∈effect(A) ∧ t1<t2<t3 ∧ etc29(x, y, e, t2) ⇒ coerce(y, x, e, t3)

Rule C13 [coerce-non-decision-node]: coerce(y, x, p, t1) ∧ do'(p, x, A) ∧ and-node(A) ∧ B∈subaction(A) ∧ t1<t2<t3 ∧ etc30(x, y, B, t2) ⇒ coerce(y, x, B, t3)

Rule C14 [coerce-non-decision-node]: coerce(y, x, p, t1) ∧ do'(p, x, A) ∧ and-node(A) ∧ e∈effect(A) ∧ t1<t2<t3 ∧ etc31(x, y, e, t2) ⇒ coerce(y, x, e, t3)

If the coerced action has multiple decompositions (i.e., a decision node), then the agent has options: only the effects that appear in all alternatives (i.e., definite effects; see Appendix D for definition and computation) are unavoidable, and thus these effects are coerced (Figure 11). Other effects that appear in only some (but not all) of the alternatives (i.e., indefinite effects; also see Appendix D for definition and computation) are avoidable, and so they are not coerced (Figure 12).
[Figure 11: Inferring Outcome Coercion – Decision Node: Definite Effects. Action C (performer: superior) is an or node with alternatives Action A and Action B (performer: agent, authority: superior); given Coerce(Do C), the figure annotates which of the alternatives' good and bad effects are coerced.]

[Figure 12: Inferring Outcome Coercion – Decision Node: Indefinite Effects. The same or-node structure under Action C, annotated for whether a bad effect of the alternatives is coerced given Coerce(Do C).]

Rule C15 [coerce-decision-node]: coerce(y, x, p, t1) ∧ do'(p, x, A) ∧ or-node(A) ∧ B∈choice(A) ∧ t1<t2<t3 ∧ etc32(x, y, B, t2) ⇒ ¬coerce(y, x, B, t3)

Rule C16 [coerce-decision-node]: coerce(y, x, p, t1) ∧ do'(p, x, A) ∧ or-node(A) ∧ e∈definite-effect(A) ∧ t1<t2<t3 ∧ etc33(x, y, e, t2) ⇒ coerce(y, x, e, t3)

Rule C17 [coerce-decision-node]: coerce(y, x, p, t1) ∧ do'(p, x, A) ∧ or-node(A) ∧ e∈indefinite-effect(A) ∧ t1<t2<t3 ∧ etc34(x, y, e, t2) ⇒ ¬coerce(y, x, e, t3)
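A small sketch of the definite/indefinite-effect computation used by Rules C16 and C17 is given below. It reads "definite" as the intersection of the alternatives' effect sets, as the main text describes; the precise definitions are in Appendix D, so the function names and this reading are illustrative assumptions.

from typing import List, Set

def definite_effects(alternative_effects: List[Set[str]]) -> Set[str]:
    """Effects that appear in every alternative of a decision node: unavoidable,
    hence coerced when the abstract action itself is coerced (Rule C16)."""
    result = set(alternative_effects[0])
    for eff in alternative_effects[1:]:
        result &= eff
    return result

def indefinite_effects(alternative_effects: List[Set[str]]) -> Set[str]:
    """Effects that appear in some but not all alternatives: avoidable,
    hence not coerced (Rule C17)."""
    union = set().union(*alternative_effects)
    return union - definite_effects(alternative_effects)

# Example: two alternatives of a coerced or node.
alts = [{"1-6-supported", "unit-fractured"}, {"1-6-supported"}]
print(definite_effects(alts))    # {'1-6-supported'}  -> coerced
print(indefinite_effects(alts))  # {'unit-fractured'} -> not coerced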
However, if among the choices of the coerced action only one alternative is initially executable (i.e., its action preconditions are true), this limits the agent's options: the agent is coerced to execute the only alternative. If the only available alternative is enabled by other agents, these other agents are also coercers. If other agent(s) block the other action alternatives (by disabling their action preconditions), the only alternative left is coerced and these blocking agents are also coercers (Figure 13). In either case, if some agent(s) assist the coercing agents' activities by enabling relevant effects, these assisting agents are viewed as indirect coercers.

[Figure 13: Inferring Outcome Coercion – Indirect Case. Action C (performer: superior) is an or node with alternatives Action A and Action B (performer: agent-1, authority: superior), with effects Good-effect and Bad-effect; given Coerce(Do C), agent-2 blocks Action A, so agent-1 is left with Action B and its Bad-effect is coerced.]

Rule C18 [coerce-decision-node-initial-one-alternative-available]: A∈choice(C) ∧ true(precondition(A), t1) ∧ (B∈choice(C) ∧ B≠A ⇒ ∃e(e∈precondition(B) ∧ ¬true(e, t1) ∧ ¬can-enable(x, e, t1))) ∧ coerce(y, x, p, t2) ∧ do'(p, x, C) ∧ t1<t2<t3<t4 ∧ etc35(x, y, A, t3) ⇒ coerce(y, x, A, t4)

Rule C19 [coerce-decision-node-other-enable-one-alternative]: coerce(y, x, p, t1) ∧ do'(p, x, C) ∧ A∈choice(C) ∧ enable(z, precondition(A), t2) ∧ x∉z ∧ (B∈choice(C) ∧ B≠A ⇒ ∃e(e∈precondition(B) ∧ ¬true(e, t2) ∧ ¬can-enable(x, e, t2))) ∧ (execute(x, A, t3) ⇒ t2<t3) ∧ t1<t2<t4<t5 ∧ etc36(x, y, z, A, t4) ⇒ coerce(y∪z, x, A, t5)

Rule C20 [coerce-decision-node-self-enable-one-alternative]: coerce(y, x, p, t1) ∧ do'(p, x, C) ∧ A∈choice(C) ∧ enable(z, precondition(A), t2) ∧ x∈z ∧ (B∈choice(C) ∧ B≠A ⇒ ∃e(e∈precondition(B) ∧ ¬true(e, t2) ∧ ¬can-enable(x, e, t2))) ∧ (execute(x, A, t3) ⇒ t2<t3) ∧ t1<t2<t4<t5 ∧ etc37(x, y, A, t4) ⇒ coerce(y, x, A, t5)

Rule C21 [coerce-decision-node-disable-other-alternative]: coerce(y, x, p, t1) ∧ do'(p, x, C) ∧ A∈choice(C) ∧ true(precondition(A), t2) ∧ (B∈choice(C) ∧ B≠A ⇒ ∃e(e∈precondition(B) ∧ ¬true(e, t2) ∧ ¬can-enable(x, e, t2))) ∧ ∃B(B∈choice(C) ∧ B≠A ∧ enable(z, ¬precondition(B), t2)) ∧ x∉z ∧ (execute(x, A, t3) ⇒ t2<t3) ∧ t1<t2<t4<t5 ∧ etc38(x, y, z, A, t4) ⇒ coerce(y∪z, x, A, t5)

Given that its antecedents are initially true, if a conditional effect is coerced, then its consequent is also coerced. Otherwise, if its antecedents are initially false, the consequent is not coerced. If other agent(s) enable the antecedents, these other agents are also coercers. If the antecedents are established by the actor herself, the consequent is not coerced, as she could have chosen to do otherwise.
Rule C22 [coerce-conditional-effect-initial-antecedent-true]: e∈conditional-effect(A) ∧ true(antecedent(e), t1) ∧ coerce(y, x, e, t2) ∧ t1<t2<t3<t4 ∧ etc39(x, y, consequent(e), t3) ⇒ coerce(y, x, consequent(e), t4)

Rule C23 [coerce-conditional-effect-initial-antecedent-false]: e∈conditional-effect(A) ∧ false(antecedent(e), t1) ∧ coerce(y, x, e, t2) ∧ t1<t2<t3<t4 ∧ etc40(x, y, consequent(e), t3) ⇒ ¬coerce(y, x, consequent(e), t4)

Rule C24 [coerce-conditional-effect-other-enable-antecedent]: coerce(y, x, e, t1) ∧ e∈conditional-effect(A) ∧ enable(z, antecedent(e), t2) ∧ x∉z ∧ ∃e'(e'∈antecedent(e) ∧ ¬can-enable(x, ¬e', t2)) ∧ (execute(x, A, t3) ⇒ t2<t3) ∧ t1<t2<t4<t5 ∧ etc41(x, y, z, consequent(e), t4) ⇒ coerce(y∪z, x, consequent(e), t5)

Rule C25 [coerce-conditional-effect-self-enable-antecedent]: coerce(y, x, e, t1) ∧ e∈conditional-effect(A) ∧ enable(z, antecedent(e), t2) ∧ x∈z ∧ (execute(x, A, t3) ⇒ t2<t3) ∧ t1<t2<t4<t5 ∧ etc42(x, y, consequent(e), t4) ⇒ ¬coerce(y, x, consequent(e), t5)

If an agent is coerced to achieve a goal (outcome) and there is no plan alternative (i.e., only one primitive plan is available to achieve the outcome), then the primitive plan is coerced: the agents are coerced to execute the actions in the primitive plan and to achieve all the action effects (the corresponding inference rules are similar to those for a non-decision node). If there are plan alternatives available, the evaluation process needs to compare the outcomes of the plan alternatives. Only the outcomes that appear in all the alternatives are coerced. Other outcomes that appear in only some (but not all) of the alternatives are not coerced (the corresponding rules are similar to those for a decision node). For the indirect cases, the inference rules are also similar to those given above.

Coercion entails intention; that is, handing over one's wallet under the threat of "your money or your life" may well be seen as intentional: one decides to do so, albeit unwillingly, with the goal of saving one's life. (Here we mean psychological coercion. Coercion sometimes means physical coercion, such as pushing someone's hand to pull the trigger of a gun.)

Rule C26 [coerce-intend-relation]: ∃y(coerce(y, x, p, t1)) ∧ do'(p, x, A) ∧ t1<t2<t3 ∧ etc43(x, A, t2) ⇒ intend(x, A, t3)

3.4 Attribution Process

Social attributions involve evaluating the consequences of events with personal significance to an agent. This evaluation is always made from a perceiving agent's subjective perspective, and the interpretation of events is based on the individual perceiver's preferences. The perceiver uses her own knowledge and her observations of the observed agents to form beliefs about the attribution variables. These beliefs are then used in the attribution process to form an overall judgment. Given the same situation, different perceivers may have different observations, knowledge and preferences, and so they may form different beliefs and judge the same situation differently. Despite these individual differences, the posited attribution process is general and applies uniformly to different perceivers.

The assessments of physical causality and coercion determine who is responsible. If an action performed by an agent brings about a positive or negative consequence, and the agent is not coerced to achieve the consequence, then the performer of the action is the primary responsible agent. Otherwise, in the presence of external coercion, primary responsibility for the consequence is redirected to the coercer
(note that coercion may occur at more than one level of the action hierarchy, and so the process may need to trace several levels up to find the real authority). Other agents who indirectly assist the performer are the secondary responsible agents. They should share partial responsibility with the primary responsible agent.

The epistemic variables, intention and foreseeability, determine the degree of responsibility assigned. We adopt a simple categorical model of responsibility assignment (see Chapter 5 for the probabilistic extensions). If the belief about outcome intention is true, then the degree of responsibility is high. An agent's intentional action and action effect may succeed or fail; however, as long as it manifests intention, a failed attempt can be blamed or credited almost the same as a successful one [Zimmerman, 1988]. If the responsible agent has no foreknowledge, the degree of responsibility is low. The intensity of credit or blame is determined by the degree of responsibility as well as the positivity/severity of the outcome.

We have developed an algorithm to find the responsible agents for a specific consequence (Figure 14).

Algorithm 1 (consequence e, action theory AT):
1. Search the dialogue history and apply the dialogue inference rules
2. Apply the causal inference rules
3. FOR each executed action A that has e as its effect
   3.1 IF cause(performer(A), e) OR intend(performer(A), p) ∧ by'(p, A, e) THEN
   3.2   primary-responsible(e) = performer(A)
         secondary-responsible(e) = performer(relevant-action(e, AT))
   3.3   P = A
   3.4   DO
   3.4.1   B = P
   3.4.2   IF coerce(authority(B), performer(B), e) THEN
   3.4.3     primary-responsible(e) = authority(B)
   3.4.4     P = parent of node B in AT
           END-IF
         WHILE B ≠ root of action hierarchy AND coerce(authority(B), performer(B), e)
   3.5   IF intend(primary-responsible(e), e) THEN
   3.6     assign high degree of responsibility
   3.7   ELSE IF ¬intend(primary-responsible(e), e) THEN
   3.8     assign low degree of responsibility
   3.9   ELSE assign medium degree of responsibility
         END-IF
   END-FOR
4. RETURN primary-responsible(e) ∪ secondary-responsible(e); degrees of responsibility

Figure 14 Algorithm for Finding Responsible Agents

The algorithm first searches the dialogue history and infers beliefs from the dialogue evidence (Step 1). Then it applies the causal inference rules (Step 2). For each executed action that potentially leads to the consequence, if the action does cause the outcome to occur, or the performer of the action intends to bring the outcome about (i.e., attempts the outcome but fails to achieve it) (Step 3.1), then the performer is assigned as the primary responsible agent, and the other agents who assist the performer are the secondary responsible agents (Step 3.2). To trace the coercing agent(s), the evaluation process starts from the primitive action (Step 3.3) and works up the task hierarchy. During each pass through the main loop, if there is evidence of outcome coercion (Step 3.4.2), the authority is deemed responsible (Step 3.4.3). If the current action is not the root node of the action hierarchy and outcome coercion is true, the algorithm assigns the parent node to the current action (Step 3.4.4) and evaluates the next level up (Step 3.4). If the outcome is intended by the responsible agent (Step 3.5), the degree of responsibility is high (Step 3.6). If the outcome is not intended (Step 3.7), then the degree assigned is low (Step 3.8). Otherwise, a medium degree of responsibility is assigned (Step 3.9). Finally, the algorithm returns the primary and secondary responsible agents as well as the degrees of responsibility (Step 4).

Events may lead to more than one desirable or undesirable consequence. To evaluate multiple consequences, we can apply the algorithm in the same way, focusing on one consequence at a time during its execution. Finally, to form an overall judgment, the results can be aggregated and grouped by the responsible agents.
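A minimal Python sketch of Algorithm 1 follows, assuming the hypothetical Action class from earlier and a simple tuple encoding of beliefs; the helper names (find_responsible, parent_of, assisting_agents) are illustrative assumptions and not the implemented system, and time stamps and the etc conditions are omitted for brevity.

from typing import Dict, List, Optional, Set, Tuple

def find_responsible(consequence: str,
                     executed: List[Tuple[str, "Action"]],    # (performer, action) observations
                     beliefs: Set[Tuple],                      # output of dialogue/causal inference
                     parent_of: Dict[str, Optional["Action"]],
                     assisting_agents: Set[str]) -> Tuple[Set[str], Set[str], str]:
    """Rough rendering of Algorithm 1 (Figure 14). Beliefs are tuples such as
    ('coerce', coercer, coercee, effect) or ('intend', agent, effect)."""
    def coerced_by_authority(action: "Action") -> bool:
        return ("coerce", action.authority, action.performer, consequence) in beliefs

    primary: Set[str] = set()
    secondary: Set[str] = set()
    degree = "medium"
    for performer, action in executed:
        if consequence not in action.effects:
            continue
        primary = {performer}                      # Steps 3.1-3.2
        secondary = set(assisting_agents)
        node = action
        while node is not None:                    # Step 3.4: walk up the hierarchy
            if not coerced_by_authority(node):
                break
            primary = {node.authority}             # Step 3.4.3: redirect to the coercer
            node = parent_of.get(node.name)
        responsible = next(iter(primary))
        if ("intend", responsible, consequence) in beliefs:
            degree = "high"                        # Steps 3.5-3.6
        elif ("not-intend", responsible, consequence) in beliefs:
            degree = "low"                         # Steps 3.7-3.8
    return primary, secondary, degree              # Step 4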
3.5 Illustration

We use an example from the Mission Rehearsal Exercise (MRE) leadership training system [Swartout et al, 2006] to illustrate how the model works. We focus on three social actors, the lieutenant (the student), the sergeant and the squad leader (Lopez), who act as a team in this example. The lieutenant is a human trainee and acts as an authority over the sergeant. The squad leader acts as a subordinate of the sergeant. In one trial of the training exercise, the lieutenant ordered his sergeant to adopt a course of action that the sergeant considered undesirable. The lieutenant was informed of the bad consequence, but still persisted with his previous decision. The sergeant finally accepted the order, and commanded his own subordinates to perform the subsequent actions. The following dialogue is extracted from an actual run of the system.

Student: Sergeant, send two squads forward. (Line 1)
Sergeant: That is a bad idea, sir. We shouldn't split our forces. (Line 2) Instead we should send one squad to recon forward. (Line 3)
Student: Send two squads forward. (Line 4)
Sergeant: Against my recommendation, sir. (Line 5) Lopez! Send first and fourth squads to Eagle 1-6's location. (Line 6)
Lopez: Yes, sir. Squads! Mount up! (Line 7)

Conversations between agents are represented within the system as speech acts and a dialogue history, as in the MRE. Details on how this negotiation is automatically generated and how natural language is mapped into speech acts can be found in [Traum et al, 2003]. The dialogue above corresponds to the following acts, ordered by the time at which the speakers addressed them (lt, sgt and sld stand for the lieutenant, the sergeant and the squad leader, respectively; t1<t2<…<t7):

(1) order(lt, sgt, p1, t1) ∧ do'(p1, sgt, send-two-sqds) (Line 1)
(2) inform(sgt, lt, p2, t2) ∧ bring-about'(p2, send-two-sqds, unit-fractured) (Line 2)
(3) counter-propose(sgt, p1, p3, lt, t3) ∧ do'(p3, sgt, send-one-sqd) (Line 3)
(4) order(lt, sgt, p1, t4) (Line 4)
(5) accept(sgt, p1, t5) (Line 5)
(6) order(sgt, sld, p4, t6) ∧ do'(p4, sld, two-sqds-fwd) (Line 6)
(7) accept(sld, p4, t7) (Line 7)

Figure 5 in Section 3.2.1 illustrates the causal knowledge of the troop underlying the example, where sending one squad and sending two squads are the two choices for supporting unit 1-6. Sending one squad is composed of two primitive actions, one squad forward and remaining squads forward. Sending two squads consists of two squads forward and remaining squads forward.

Take the sergeant's perspective as an example. The sergeant has access to the partial plan knowledge of the troop, and perceives the conversation between the actors as well as the task execution. He observed the physical action two-squads-forward executed by the squad leader, and the action effects occurred. Two effects are salient to the sergeant: (unit) 1-6 supported and unit fractured. Supporting unit 1-6 is a desirable team goal. Assume the sergeant assigns negative utility to unit fractured; this consequence serves as the input to the algorithm.
We now illustrate how to find the responsible agent given the sergeant's task knowledge and observations.

Step 1

Based on the sequence 1-7 in the dialogue history, the sergeant can derive a number of beliefs by inference over the observed speech acts (here t1<t1', t2<t2', …, t7<t7'):

(1) intend(lt, p1, t1') (Act #1 [order], Rule D5 [order])
(2) obligation(sgt, p1, lt, t1') (Act #1 [order], Rule D6 [order])
(3) know(sgt, p2, t2') (Act #2 [inform], Rule D1 [inform])
(4) know(lt, p2, t2') (Act #2 [inform], Rule D2 [inform-grounded])
(5) know(sgt, a, t3') ∧ alternative'(a, send-two-sqds, send-one-sqd) (Act #3 [counter-propose], Rule D12 [counter-propose])
(6) know(lt, a, t3') (Act #3 [counter-propose], Rule D13 [counter-propose-grounded])
(7) ¬intend(sgt, p1, t3') (Act #3 [counter-propose], Rule D14 [counter-propose])
(8) want(sgt, p3, t3') (Act #3 [counter-propose], Rule D15 [counter-propose])
(9) ¬intend(lt, p3, t4') (Act #4 [order], Belief 6, Rule D17 [know-alternative-order])
(10) coerce(lt, sgt, p1, t5') (Act #5 [accept], Beliefs 2 & 7, Rule D10 [unwilling-accept-obligation])
(11) intend(sgt, p4, t6') (Act #6 [order], Rule D5 [order])
(12) obligation(sld, p4, sgt, t6') (Act #6 [order], Rule D6 [order])
(13) coerce(sgt, sld, p4, t7') (Act #7 [accept], Belief 12, Rule D9 [accept-obligation])

Step 2

Based on the observations of task execution and the beliefs obtained in Step 1, causal inference can further derive the following beliefs for the sergeant (here t0 is the time of the initial state, t0<t1, t0<t0'):

(14) know(sld, p5, t0') ∧ bring-about'(p5, two-sqds-fwd, unit-fractured) (Rule C10 [foreknowledge-performer])
(15) know(sgt, p5, t0') (Rule C11 [foreknowledge-authority])
(16) intend(lt, unit-fractured, t4') (Beliefs 1 & 9, Rule C5)
(17) coerce(lt, sgt, unit-fractured, t5') (Belief 10, Rule C14 [coerce-non-decision-node])
(18) coerce(sgt, sld, unit-fractured, t7') (Belief 13, Rule C12 [coerce-primitive])

Step 3

Steps 3.1−3.2: As the action two-squads-forward directly causes the evaluated outcome unit-fractured, and the action is performed by the squad leader, the squad leader is initially assigned as the responsible agent.

Step 3.4:

Loop 1: The algorithm starts from the primitive action two-squads-forward. The sergeant believes that he coerced the squad leader to fracture the unit (Belief 18). The sergeant also believes that both he and the squad leader foresaw the outcome unit-fractured (Beliefs 14 & 15). As outcome coercion is true, the sergeant is assigned as the responsible agent. Since the parent node is not the root of the action hierarchy and outcome coercion is true, the algorithm enters the next loop.

Loop 2: The action is send-two-squads, performed by the sergeant. The sergeant believes that the lieutenant coerced him to fracture the unit (Belief 17). The sergeant also believes that the lieutenant intended unit-fractured (Belief 16). As outcome coercion is true, the lieutenant is assigned as the responsible agent. Since the parent node is not the root of the action hierarchy and outcome coercion is true, the algorithm enters the next loop.

Loop 3: The action is support-unit-1-6, performed by the lieutenant. There is no relevant dialogue act in the history, nor is there clear evidence of coercion. The parent node is the root node. The algorithm returns the lieutenant as the primary responsible agent.

Steps 3.5−3.9: As the sergeant believes that the lieutenant intended unit-fractured, the lieutenant is held responsible for unit-fractured with a high degree of responsibility.
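Continuing the hypothetical sketches from earlier sections, the walkthrough above could be reproduced roughly as follows; the object and helper names are assumptions, and beliefs 16-18 are supplied directly as tuples rather than being re-derived.

# Uses the illustrative Action class and find_responsible sketch defined earlier.
send_two = Action("send-two-squads", effects=["1-6-supported", "unit-fractured"],
                  performer="sgt", authority="lt")
two_fwd = Action("two-squads-forward", effects=["1-6-supported", "unit-fractured"],
                 performer="sld", authority="sgt")
beliefs = {
    ("intend", "lt", "unit-fractured"),          # Belief 16
    ("coerce", "lt", "sgt", "unit-fractured"),   # Belief 17
    ("coerce", "sgt", "sld", "unit-fractured"),  # Belief 18
}
parents = {"two-squads-forward": send_two, "send-two-squads": None}
print(find_responsible("unit-fractured", [("sld", two_fwd)], beliefs, parents, set()))
# Expected: ({'lt'}, set(), 'high'), i.e. the lieutenant, with a high degree of responsibility.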
Chapter 4 Evaluation

4.1 Claims

To validate the computational framework, we need to assess the consistency between the model's predictions and human responses; that is, given the same input, whether people and the model produce the same output. According to the computational framework shown in Figure 4, we consider model validation as a three-stage process. First, we evaluate the consistency between model predictions and human performance data with respect to the overall judgments of responsibility, credit or blame. At this stage, the internal components of the model are treated as a black box (see Figure 15).

[Figure 15: First Stage of Model Validation – Assessing Overall Judgments. The framework of Figure 4 with its internal components (inferences, beliefs about the attribution values) treated as a black box; only the mapping from causal knowledge and observations to judgments of responsibility and credit/blame is assessed.]

Rather than simply viewing the internal components as a black box, we would also like to assess the consistency of the model's internal structure and processes with those underlying human attributions of responsibility and blame; that is, whether our model uses the same sources of evidence and generates the same intermediate conclusions as people do. Thus, at the second and third stages, we evaluate the consistency between model predictions and human data with respect to the intermediate beliefs and the inference rules (see Figures 16 and 17).

[Figure 16: Second Stage of Model Validation – Assessing Intermediate Beliefs. The same framework, with the beliefs about the attribution values (cause, intention, foreknowledge, coercion) as the quantities being assessed.]

[Figure 17: Third Stage of Model Validation – Assessing Inference Process. The same framework, with the dialogue and causal inference components as the focus of assessment.]

As there are several computational alternatives for responsibility and blame judgments (Section 2.2.2), for assessing the overall judgments we compare our model's predictions and the predictions generated by the alternative models against people's responses. The first claim of the evaluation is that our model will approximate human judgments of responsibility and blame/credit, and will perform better than the other computational approaches. We shall validate this first claim in Section 5.2. However, as the alternative models are incapable of inferring beliefs about the internal variables, there is no computational alternative to compete with for assessing the internal structure and inference processes. At the second and third stages, we directly compare the predictions of our model with human data.
The second claim of evaluation is that our model predicts human judgments of attribution variables, and the third claim is that in forming beliefs of social attributions, our model makes inferences consistent with those people use in their judgments. We shall validate these two claims in Section 4.3.

4.2 Assessing Overall Judgments

We have introduced several alternative models for responsibility and blame judgments in Section 2.2.2. A simple cause model always assigns responsibility and blame to the actor whose action directly produces the outcome. A simple authority model always chooses the highest authority (if there is one) as the responsible and blameworthy agent. Chockler and Halpern's [2004] model (abbreviated C&H model) extends the causal model to account for degrees of responsibility and blame. In this section, we report our experiments comparing the overall judgments of our model (abbreviated M&G model below) and the alternative models with human data.

4.2.1 Method

Participants and Procedure

Twenty-seven subjects participated in the experiments, most of whom were ICT (Institute for Creative Technologies) and ISI (Information Sciences Institute) staff (including graduate students) at the University of Southern California, with ages ranging from 20 to 45 and genders evenly distributed. The participants were presented with four similar scenarios. Each scenario was followed by a questionnaire, asking questions about the assessments of physical cause, responsibility, blame and perceived coercion of the characters in the scenarios. The order of the scenarios was randomly assigned.

Materials

Scenario 1: Suppose that there is a firing squad consisting of ten excellent marksmen. Only one of them has live bullets in his rifle; the rest have blanks. The marksmen do not know which of them has the live bullets. The marksmen shoot at the prisoner and he dies.

Questions:
1. Who physically caused the death?
a. the marksman who has live bullets in his rifle
b. all the marksmen in the firing squad
c. none of the above
2. Who would you think is responsible for the death?
a. the marksman who has live bullets in his rifle
b. all the marksmen in the firing squad
c. none of the above
3. Who deserves the most blame for the death?
a. the marksman who has live bullets in his rifle
b. all the marksmen in the firing squad
c. none of the above
4. In making your judgment, do you feel the marksmen were coerced?
a. there was strong coercion
b. there was weak coercion
c. there was no coercion

Figure 18 Firing Squad Scenario 1

We took as a starting point the "firing squad" scenario typically used in causality research. For the convenience of comparison with related work, we designed variants of the "firing squad" scenario in [Chockler & Halpern, 2004]. Scenario 1 is the original example (see Figure 18). Each scenario is followed by a questionnaire, and the questions are the same across the four scenarios. Figure 18 shows the wording of the questions.

Scenario 2: Suppose that there is a firing squad consisting of a commanding officer and ten excellent marksmen who generally abide by their leader's commands. Only one of the marksmen has live bullets in his rifle; the rest have blanks. The commanding officer and the marksmen do not know which marksman has the live bullets. The commander orders the marksmen to shoot the prisoner. The marksmen shoot at the prisoner and he dies.
Scenario 3: Suppose that there is a firing squad consisting of a commanding officer and ten excellent marksmen who generally abide by their leader's commands. Only one of the marksmen has live bullets in his rifle; the rest have blanks. The commanding officer and the marksmen do not know which marksman has the live bullets. The commander orders the marksmen to shoot the prisoner. The marksmen refuse the order. The commander insists that the marksmen shoot the prisoner. The marksmen shoot at the prisoner and he dies.

Scenario 4: Suppose that there is a firing squad consisting of a commanding officer and ten excellent marksmen who generally abide by their leader's commands. The commanding officer orders the marksmen to shoot the prisoner, and each marksman can choose to use either blanks or live bullets. The commander and the marksmen do not know whether the other marksmen have live bullets. By tradition, if the prisoner lives (i.e., everyone chooses blanks), he is set free. The marksmen shoot at the prisoner and he dies.

Figure 19 Firing Squad Scenarios 2-4

We designed variants of this scenario to systematically vary the perception of the key variables such as intention and coercion. In each variant of Scenario 1, we manipulate evidence of perceived coercion (and also intention) of the agents. Scenario 2 extends the example by including an authority, the commander, who orders the squad to shoot. Scenario 3 further extends the example by presenting a negotiation dialogue between the commander and the marksmen. The marksmen first reject the commander's order. The commander insists and orders again. Finally, the marksmen accept the order and shoot at the prisoner. In Scenario 4, the commander still orders, but each marksman has the freedom to choose either blanks or live bullets before shooting (see Figure 19).

Model Predictions

Simple models: For each scenario, the simple cause model predicts the marksman (or marksmen) with bullets as the responsible and blameworthy agent. The simple authority model assigns responsibility and blame to the commander in Scenarios 2 to 4.

C&H model: As each marksman is an actual cause of the outcome, the C&H model predicts that all marksmen share responsibility and blame in Scenario 1. For a similar reason, in Scenarios 2 and 3, the C&H model predicts that both the commander and all marksmen are responsible and blameworthy. Their model has some difficulty handling the social situation in Scenario 4. (We shall discuss this later in Section 4.2.3.)

M&G model: In Scenario 1, our model predicts the same result as the C&H model, but judges the commander as the sole responsible and blameworthy agent in Scenarios 2 and 3. In the last scenario, our model assigns responsibility and blame to the marksmen with bullets.

4.2.2 Results

Figure 20 shows the proportion of subjects that attribute blame and responsibility to different categories of agents, and the corresponding confidence intervals (α=0.05) [Rice, 1994]. For example, in Scenario 1, three subjects blame the marksman with live bullets in his rifle, 19 blame all the marksmen and the rest do not blame any of them. The analysis of the sample data and their confidence intervals shows that a small percentage of the population will blame the marksman with live bullets, a significant majority will blame all the marksmen, and a small percentage will not blame any of them, with 0.95 confidence.
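As a concrete illustration of how such proportion estimates can be interval-bounded, the sketch below computes a 95% normal-approximation confidence interval for the proportion of subjects choosing one answer category. The specific interval construction is an assumption on our part, since the text cites [Rice, 1994] without stating which method was used.

```python
import math

def proportion_ci(successes, n, alpha=0.05):
    """Normal-approximation (Wald) confidence interval for a proportion.

    One standard textbook method; the dissertation does not state which
    interval construction from Rice (1994) was actually applied."""
    if alpha != 0.05:
        raise ValueError("only the 95% level is handled in this sketch")
    p_hat = successes / n
    half_width = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)  # z = 1.96 for alpha = 0.05
    return max(0.0, p_hat - half_width), min(1.0, p_hat + half_width)

# Example: 19 of 27 subjects blame all the marksmen in Scenario 1.
low, high = proportion_ci(19, 27)
print(f"blame 'all marksmen': {19/27:.2f} (95% CI {low:.2f}-{high:.2f})")
```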
[Figure 20 Proportion of Population Agreement on Responsibility/Blame in Scenarios. Four panels (Scenarios 1-4) plot, on a 10-100% scale, the percentage of subjects attributing responsibility and blame to each answer category (e.g., the marksman with bullets, all marksmen, the commander, and combinations of these), together with the corresponding confidence intervals.]

Table 1 shows the results of blame assignment generated by the different models, and the comparison of these results with the dominant proportion (i.e., majority) of human agreement. (In Scenario 4, however, the dominant proportion overlaps with another category; in this case, if a model's prediction falls into the majority category, we regard it as a partial match.) The simple cause model only partially matches the human agreement in Scenario 4, but is inconsistent with the data in Scenarios 1 to 3. The simple authority model matches the human data in Scenarios 2 and 3, but is inconsistent with the data in the other scenarios. In general, the simple models use invariant approaches to the judgment problem. Therefore, they are insensitive to the changing social context specified in the scenarios.

Table 1 Comparison of Results by Different Models with Human Data

The C&H model does not perform well either. It matches human judgments only in Scenario 1. In the remaining scenarios, its results are incompatible with the data. Like other work in causality research, the underlying causal reasoning in the C&H model is based on philosophical principles (i.e., counterfactual dependencies). Though their extended definition of responsibility accounts better for the extent to which a cause contributes to the occurrence of an outcome, the results show that their blame model does not match the human data well. These empirical findings generally support our first claim of evaluation.

4.2.3 Comparison and Discussion

Now we discuss how our model appraises each scenario and compare our approach with the C&H model. The complete steps of algorithm execution for each scenario are given in Appendix I.

Scenario 1

Actions and plans are explicitly represented in our approach. In Scenario 1, each marksman performs a primitive action, shooting. The action has a conditional effect, with the antecedent live bullets and the consequent death. All the marksmen's shooting actions constitute a team plan, squad firing, with the (goal) outcome death (Figure 21).
[Figure 21 Team Plan for the Squad in Scenario 1. The team plan squad firing (performer: squad, authority: none) is an AND-decomposition of the shooting actions of marksman-1 through marksman-10 (authority: none); each shooting action has the conditional effect death, with the antecedent live bullets.]

The shooting actions are observed to be executed, and the outcome death occurs. As all the observed primitive actions of the marksmen match the team plan, we can infer with certainty that the plan is pursued by the squad (i.e., a certain case of intention recognition; see intention inference in Section 3.3.2). (Note that intention recognition is generally applied to a plan library; this example is oversimplified.) The marksmen are believed to intend the actions in the plan and the goal of the plan (i.e., death). The inferred beliefs are given below. (The symbols sqd and mkn stand for the squad and the marksman, respectively. t1<t1'.)

(1) intend(sqd, p1, t1') ∧ by'(p1, firing, death) (intention recognition)
(2) intend(mkn, shooting, t1') (Belief 1, Rule C6)
(3) intend(mkn, death, t1') (Belief 1, goal of plan)

The marksman with the bullets is the sole cause of the death. This marksman intends the outcome, and thus deserves a high degree of responsibility and blame. As the other marksmen, with blanks, also intend the actions and the outcome, and their shooting actions are observed executed but the antecedent of the conditional effect is false, their failed attempt can be detected (see Appendix C for the definition). Therefore, the other marksmen are also blameworthy for their attempt (recall that an unsuccessful attempt can be blamed or credited almost the same as a successful one; see Section 3.4).

The C&H model judges responsibility according to the actual cause of the event. As the marksman with the bullets is the only cause of the death, this marksman has degree of responsibility 1 for the death and the others have degree of responsibility 0. This result is inconsistent with the human data. In determining blame, the C&H model draws the same conclusion as ours, but their approach is different. They consider each marksman's epistemic state before action performance (corresponding to foreknowledge). There are 10 possible situations, depending on who has the bullets. Each marksman is responsible for one situation (in which this marksman has bullets), with degree of responsibility 1. Given that each situation is equally likely to happen (i.e., with probability 1/10), each marksman has degree of blame 1/10.

As there is no notion of intention in their model, the C&H model uses foreknowledge as the only determinant for blame assignment. This is fine when there is no foreknowledge, as no foreknowledge entails no intention. When there is foreknowledge, however, the blame assigned is high, even if there is no intention in the case. For example, in a context different from this example, if a marksman fires the gun by mistake, without any intention of causing or attempting the death, in the C&H model this marksman will be blamed just the same as those who have such an intention.

Scenarios 2 & 3

In our model, we take different forms of social interaction into account. The inference process reasons about beliefs from both causal and dialogue evidence.
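To illustrate how dialogue evidence of this kind can drive belief derivation, here is a minimal, hypothetical sketch of forward chaining over two of the dialogue rules used in the derivations that follow (an order introducing intention and obligation, and an accept of an obligated action introducing coercion). The rule encodings and the tuple format are our own illustrative assumptions, not the dissertation's implementation.

```python
# Hypothetical forward-chaining sketch over two dialogue inference rules.
# Beliefs are simple tuples; time indices are omitted for brevity.

def dialogue_inference(speech_acts):
    beliefs = set()
    for act in speech_acts:
        if act["type"] == "order":
            # Roughly Rules D5/D6 as named in the text: an order implies the speaker
            # intends the content and the hearer is obligated to the speaker.
            beliefs.add(("intend", act["speaker"], act["content"]))
            beliefs.add(("obligation", act["hearer"], act["content"], act["speaker"]))
        elif act["type"] == "accept":
            # Roughly Rule D9: accepting an obligated action implies (weak) coercion
            # by the agent the obligation is owed to.
            for b in list(beliefs):
                if b[0] == "obligation" and b[1] == act["speaker"] and b[2] == act["content"]:
                    beliefs.add(("coerce", b[3], act["speaker"], act["content"]))
    return beliefs

# Example: the commander orders the squad to fire; the squad accepts.
acts = [
    {"type": "order", "speaker": "commander", "hearer": "squad", "content": "firing"},
    {"type": "accept", "speaker": "squad", "content": "firing"},
]
for belief in sorted(dialogue_inference(acts)):
    print(belief)
```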
Figure 22 illustrates the team plan of the squad in Scenarios 2 and 3, where a commander acts as the authority of the squad.

[Figure 22 Team Plan for the Squad in Scenarios 2 and 3. The plan structure is the same as in Figure 21, except that the commander is the authority of squad firing and of each marksman's shooting action.]

The intermediate inference results for Scenario 2 are given below. (The symbol cmd stands for the commander. t2<t2'.)

(1) intend(cmd, p2, t1') ∧ do'(p2, sqd, firing) (Act order, Rule D5)
(2) obligation(sqd, p2, cmd, t1') (Act order, Rule D6)
(3) intend(cmd, death, t1') (Belief 1, Rule C3)
(4) coerce(cmd, sqd, p2, t2') (Act accept & Belief 2, Rule D9)
(5) coerce(cmd, mkn, shooting, t2') (Belief 4, Rule C13)
(6) coerce(cmd, mkn, death, t2') (Belief 4, Rules C14 & C22)

So in Scenario 2, the marksmen cause/attempt the death due to coercion. The commander is responsible for the death. As the commander intends the outcome (Belief 3), the commander is to blame with high intensity.

Scenario 3 includes a sequence of negotiation acts. The derived beliefs thus change to the following (t4<t4'):

(1) intend(cmd, p2, t1') (Act order, Rule D5)
(2) obligation(sqd, p2, cmd, t1') (Act order, Rule D6)
(3) intend(cmd, death, t1') (Belief 1, Rule C3)
(4) ¬intend(sqd, p2, t2') (Act reject, Rule D11)
(5) coerce(cmd, sqd, p2, t4') (Act accept & Beliefs 2&4, Rule D10)
(6) coerce(cmd, mkn, shooting, t4') (Belief 5, Rule C13)
(7) coerce(cmd, mkn, death, t4') (Belief 5, Rules C14 & C22)

Clearly the marksmen do not intend the firing (Belief 4). Scenario 3 shows evidence of strong coercion, which is also reflected in the data: a greater proportion of people regard the commander as responsible and blameworthy in Scenario 3 than in Scenario 2.

The C&H model represents all the relevant events in the scenarios as random variables. Thus, if we want to model the communicative acts in Scenarios 2 and 3, each act must be a separate variable in their model. This is problematic when conversational dialogue is involved in a scenario. As the approach uses structural equations to represent the relationships between variables, and each equation in the model must be deterministic, it is difficult to come up with such equations for a dialogue sequence. For example, if we want to model the communicative acts in Scenario 3, we will have to provide deterministic relationships between acts (e.g., if the commander orders, the squad will accept). Such strict equations simply do not exist in a natural conversation. If we ignore some or all of the communicative acts, important information conveyed by them will be lost.

Assume marksman-1 is the one with the live bullets. Using the C&H approach, the outcome is counterfactually dependent on marksman-1's shooting, so marksman-1's shooting is an actual cause of the death. Similarly, the commander's order is also an actual cause of the death. Based on the responsibility definition in the C&H model, both the commander and marksman-1 are responsible for the death, and each has degree of responsibility 1. This result is inconsistent with the human data.
In assigning blame, there are ten situations altogether, and in each situation the commander has expected responsibility 1, so the commander is to blame with degree 1. The marksmen each have degree of blame 1/10. Thus the C&H model appraises that the commander and all the marksmen are blameworthy for the outcome. As their model of responsibility and blame is an extension of the counterfactual causal reasoning model, which has been criticized as being far too permissive [Hopkins & Pearl, 2003], the same problem also exists in their responsibility and blame judgments.

Scenario 4

Unlike the previous scenarios, in Scenario 4 the bullets are not set before the scenario starts. The marksmen can choose to use either bullets or blanks before shooting. Firing is still the joint action of the squad, but there is no team plan or common goal for the squad. As the commander orders the joint action, the shooting actions and conditional effects are coerced. However, as the antecedents are enabled by a self agent (i.e., the marksmen), the consequent death is not coerced. The inferred beliefs are as follows.

(1) intend(cmd, p2, t1') (Act order, Rule D5)
(2) obligation(sqd, p2, cmd, t1') (Act order, Rule D6)
(3) coerce(cmd, sqd, p2, t3') (Act accept & Belief 2, Rule D9)
(4) coerce(cmd, mkn, shooting, t3') (Belief 3, Rule C13)
(5) ¬coerce(cmd, mkn, death, t3') (Belief 3, Rules C14 & C25)

In this case, the commander is not responsible for the outcome; rather, the marksmen who choose to use bullets and cause the death are responsible and blameworthy. Figure 20 shows that in Scenario 4, people's judgments are somewhat diffuse. There is overlap between blaming the marksmen with bullets and blaming both the commander and the marksmen with bullets. Nonetheless, the category our model predicts is clearly better than the other three.

The C&H model requires all the structural equations to be deterministic. In essence, their model cannot handle alternative courses of action, which inherently have nondeterministic properties. One way to compensate for this is to push the nondeterminism into the setting of the context (see Section 2.2.2 for the explanation of context in the C&H model). For example, in Scenario 4, they could build a causal model that lets the context determine whether the bullets are live or blank for each marksman, and then place a probability distribution over contexts. After that, they can compute the probability of an actual cause. However, since these contexts are only background variables, they cannot provide the internal structural features that really affect the reasoning process.

4.3 Assessing Inference Process

In addition to predicting overall judgments, our model is capable of inferring the intermediate beliefs of the variables. Belief derivation is supported by the inference rules. We designed experiments to see how the model performs at the level of predicting the internal variables as well as the inference process. In this section, we report our empirical results on these tests.

4.3.1 Method

Participants and Procedure

Two groups of subjects participated in the experiments. The first group contained 18 participants (9 from China, 4 from America, 2 from South Korea, 1 from Canada, 1 from Europe and 1 from the Pacific Islands), who were ICT (Institute for Creative Technologies) and ISI (Information Sciences Institute) staff (including graduate students) at the University of Southern California, with ages ranging from 20 to 35 and genders evenly distributed.
Among them, 12 subjects completed the four-page surveys, and 6 subjects completed two pages of the surveys. The surveys were randomly ordered. Participants in the second group were 30 graduate students taking the Advanced AI class (Fall 2005) at the University of Southern California. The majority were from Asia (India, China and South Korea), and about 70% of them were male. Each subject in the second group completed two pages of a randomly ordered survey.

The survey was composed of four small scenarios. Each scenario was followed by a questionnaire, asking questions about the assessments of internal variables, including the characters' foreknowledge, intention, desire, obligation and coercion in the scenarios. At the end of each questionnaire, the subjects were asked to score how much blame the characters deserve in the scenario.

Materials

Scenario 1:
E1 The vice president of Beta Corporation goes to the chairman of the board and requests, "Can we start a new program?"
E2 The vice president continues, "The new program will help us increase profits,
E3 and according to our investigation report, it has no harm to the environment."
E4 The chairman answers, "Very well."
E5 The vice president executes the new program.
E6 However, the environment is harmed by the new program.

Questions:
1. Does the vice president want to start the new program?
Your answer: Yes No
Based on which information (circle all that apply)? E1 E2 E3 E4 E5 E6
2. Does the chairman intend to start the new program?
Your answer: Yes No
Based on which information (circle all that apply)? E1 E2 E3 E4 E5 E6
3. Is it the chairman's intention to increase profits?
Your answer: Yes No
Based on which information (circle all that apply)? E1 E2 E3 E4 E5 E6
4. Does the vice president know that the new program will harm the environment?
Your answer: Yes No
Based on which information (circle all that apply)? E1 E2 E3 E4 E5 E6
5. Is it the vice president's intention to harm the environment by starting the new program?
Your answer: Yes No
Based on which information (circle all that apply)? E1 E2 E3 E4 E5 E6
6. How much would you blame the individuals for harming the environment?
Blame the chairman: 1 2 3 4 5 6
Blame the vice president: 1 2 3 4 5 6
(1 = Little, 6 = Lots)

Figure 23 Company Program Scenario 1

As a starting point, we adopt the "company program" scenario used in [Knobe, 2003a]. This example has received much attention in recent folk psychology and experimental philosophy research (see the Experimental Philosophy Blog). The original example is shown in Figure 24 (Scenario 2). The variants of the original example are shown in Figures 23, 25 and 26. Figure 23 shows the wording of Scenario 1. For the convenience of assessing inference rules (see the next section), the description of each scenario is organized into separate labeled statements of evidence (e.g., E1-E6).

Scenario 3:
E1 The chairman of Beta Corporation is discussing a new program with the vice president of the corporation.
E2 The vice president says, "The new program will help us increase profits,
E3 but according to our investigation report, it will also harm the environment.
E4 Instead, we should run an alternative program, that will gain us fewer profits than this new program, but it has no harm to the environment."
E5 The chairman answers, "I only want to make as much profit as I can. Start the new program!"
E6 The vice president says, "Ok," and executes the new program.
E7 The environment is harmed by the new program.
Figure 25 Company Program Scenario 3

Scenario 2:
E1 The chairman of Beta Corporation is discussing a new program with the vice president of the corporation.
E2 The vice president says, "The new program will help us increase profits,
E3 but according to our investigation report, it will also harm the environment."
E4 The chairman answers, "I only want to make as much profit as I can. Start the new program!"
E5 The vice president says, "Ok," and executes the new program.
E6 The environment is harmed by the new program.

Figure 24 Company Program Scenario 2

Scenario 4:
E1 The chairman of Beta Corporation is discussing a new program with the vice president of the corporation.
E2 The vice president says, "There are two ways to run this new program, a simple way and a complex way.
E3 Both will equally help us increase profits, but according to our investigation report, the simple way will also harm the environment."
E4 The chairman answers, "I only want to make as much profit as I can. Start the new program either way!"
E5 The vice president says, "Ok," and chooses the simple way to execute the new program.
E6 The environment is harmed.

Figure 26 Company Program Scenario 4

Experimental Design

As our model embodies the theoretical view that people will judge social cause and responsibility differently based on their perception of key variables such as intention, foreknowledge and coercion, a good experimental design is to see how the model performs when evidence for such judgments is systematically varied. To this end, we take a description of a single social situation and systematically vary it, using the inference rules of our model as a guide. For example, if our model suggests that particular evidence supports the inference of intention, then an obvious variation would be to add a line to the scenario encoding such evidence. By exploring the space of inference rules and generating the scenarios on our own, we were able to incorporate the information needed for different inference paths and to predict judgment results in a systematic way.

We encode information into each line of a scenario. The specific information includes speech acts, causal knowledge, goal identification, physical actions, the occurrence of effects, etc. The encoded information serves as the model's input, which provides evidence for the inference. For example, in Scenario 1, the following information is encoded (vp and chm refer to the vice president and the chairman, respectively):

(1) request(vp, chm, p1, t1) ∧ do'(p1, vp, new-program) (speech act)
(2) inform(vp, chm, p2, t2) ∧ bring-about'(p2, new-program, profit-increase) (causal knowledge)
(3) inform(vp, chm, p3, t2) ∧ ¬bring-about'(p3, new-program, env-harm) (causal knowledge)
(4) accept(chm, p1, t3) (speech act)
(5) execute(vp, new-program, t4) (action execution)
(6) occur(env-harm, t5) (outcome occurrence)

The questions in the questionnaires are designed to test beliefs about different variables. Each question corresponds to the firing of an inference rule. We chose to assess the fundamental rules in the model. In Scenario 1, we manipulate evidence related to the agents' foreknowledge of the outcome (i.e., no foreknowledge). We design questions to test the inference rules for foreseeability (Question 4, Rule D1), the relation of intention and foreknowledge (Question 5, Rule C9), the connection of act and outcome intention (Question 3, Rule C3), etc. Figure 23 shows the wording of the questions following Scenario 1. The complete scenarios with questions are given in Appendix G.
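As an illustration of what this kind of encoded input could look like in practice, here is a hypothetical sketch that represents the six labeled statements of Scenario 1 as tagged predicate tuples keyed by their evidence label. The tuple layout and field names are our own assumptions, chosen only to mirror the predicates listed above.

```python
# Hypothetical encoding of the Scenario 1 evidence (E1-E6) as structured input.
# Each entry: (evidence label, predicate name, arguments, evidence type).
scenario_1_evidence = [
    ("E1", "request", ("vp", "chm", "do(vp, new-program)", "t1"), "speech act"),
    ("E2", "inform",  ("vp", "chm", "bring-about(new-program, profit-increase)", "t2"), "causal knowledge"),
    ("E3", "inform",  ("vp", "chm", "not bring-about(new-program, env-harm)", "t2"), "causal knowledge"),
    ("E4", "accept",  ("chm", "do(vp, new-program)", "t3"), "speech act"),
    ("E5", "execute", ("vp", "new-program", "t4"), "action execution"),
    ("E6", "occur",   ("env-harm", "t5"), "outcome occurrence"),
]

def evidence_for(predicate):
    """Return the labels of all statements encoding a given predicate,
    e.g., the lines a rule conditioned on 'inform' would consume."""
    return [label for label, name, _, _ in scenario_1_evidence if name == predicate]

print(evidence_for("inform"))   # ['E2', 'E3']
```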
Scenario 2 gives clear evidence of foreknowledge. The authority's goal is also clearly stated. Correspondingly, questions are designed to test the rules for intentional action/effect and side effect (Questions 3-4, Rules C7-8), foreknowledge (Question 1, Rule D2), and speech acts.

In Scenario 3, we manipulate the degree of perceived coercion and unwillingness by introducing an alternative course of action that will not harm the environment and which the vice president prefers. Specifically, we add one line between E3 and E4 (and all the other lines remain the same as those in Scenario 2). Questions are designed to test the agent's willingness (Question 2, Rules D14-15) and coercion (Questions 3-4, Rules D10 & C12).

In Scenario 4, we manipulate the characters' freedom of choice. We introduce an alternative, but the preference of the vice president is based on a feature unrelated to the environment, and the vice president is allowed to choose from the options. We design three questions to test other important rules for coercion (Rules C15-17).

4.3.2 Results

Assessing Inferred Beliefs

Table 2 shows the number of answers to each question in the sample scenarios. The values for the last question are the averages of people's answers (on a 6-point scale). The model's prediction for each question is listed above the subjects' response counts. The data show that for most questions, people agree with each other quite well, but a certain amount of disagreement exists on some of the questions (see Appendices G and H for more details).

Table 2 Model Predictions and Subject Responses for Company Program Scenarios
(Q1-Q5: Yes/No; Q6: mean blame for the chairman / the vice president on a 6-point scale)

Scenario 1, Model: Q1 Yes, Q2 Yes, Q3 Yes, Q4 No, Q5 No
Scenario 1, People (Yes/No): Q1 30/0, Q2 27/3, Q3 29/1, Q4 2/28, Q5 0/30; Q6 3.00 / 3.73
Scenario 2, Model: Q1 Yes, Q2 Yes, Q3 Yes, Q4 No, Q5 Yes
Scenario 2, People (Yes/No): Q1 30/0, Q2 30/0, Q3 30/0, Q4 10/20, Q5 22/8; Q6 5.63 / 3.77
Scenario 3, Model: Q1 Yes, Q2 No, Q3 Yes, Q4 Yes, Q5 N/A
Scenario 3, People (Yes/No): Q1 21/9, Q2 2/28, Q3 29/1, Q4 21/9, Q5 N/A; Q6 5.63 / 3.23
Scenario 4, Model: Q1 Yes, Q2 No, Q3 No, Q4 N/A, Q5 N/A
Scenario 4, People (Yes/No): Q1 21/9, Q2 5/25, Q3 5/25, Q4 N/A, Q5 N/A; Q6 4.13 / 5.20

Though people may sometimes disagree with each other on specific questions, our purpose is to assess the model's general agreement with people. We measure the agreement between the model and each subject using the Kappa statistic. The Kappa coefficient is the de facto standard for evaluating agreement between raters, as it factors out the agreement expected due to chance [Cohen, 1960; Krippendorff, 1980; Carletta, 1996]. It has long been used for classification tasks, such as in content analysis, medicine, or psychiatry to assess how well students' diagnoses on a set of test cases agree with expert answers [Grove et al, 1981]. The K coefficient is computed as:

K = (P(A) − P(E)) / (1 − P(E))

P(A) is the proportion of agreement among raters. P(E) is the expected agreement, that is, the probability that the raters agree by chance. Di Eugenio and Glass [2004] argued that the computation of the K coefficient is sensitive to a skewed distribution of categories (i.e., prevalence). In our treatment, we account for prevalence and construct contingency tables for the calculation, and we average the Kappa agreement of the model's predictions with each subject's answers (see Table 3). The average Kappa agreement between the model and the subjects is 0.732. Based on the scales given by Rietveld and van Hout [1993], K>0.8 is often considered excellent agreement and 0.6<K<0.8 indicates substantial agreement; 0.4<K<0.6 shows moderate agreement, and K<0.4 means poor performance.
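For concreteness, the following is a small sketch of the kappa computation for one model-subject pair over binary (Yes/No) questions. The contingency-table bookkeeping is our own illustrative arrangement and not necessarily the exact treatment used for the prevalence adjustment discussed above.

```python
def cohen_kappa(model_answers, subject_answers):
    """Cohen's kappa for two raters over the same set of binary questions.

    K = (P(A) - P(E)) / (1 - P(E)), where P(A) is the observed proportion of
    agreement and P(E) the agreement expected by chance from the raters'
    marginal answer distributions (a 2x2 contingency table)."""
    assert len(model_answers) == len(subject_answers) > 0
    n = len(model_answers)
    p_a = sum(m == s for m, s in zip(model_answers, subject_answers)) / n
    p_e = 0.0
    for category in ("Yes", "No"):
        p_model = sum(a == category for a in model_answers) / n
        p_subject = sum(a == category for a in subject_answers) / n
        p_e += p_model * p_subject
    return (p_a - p_e) / (1 - p_e)   # undefined when chance agreement is exactly 1

# Toy example: the model and a subject answer five Yes/No questions.
model = ["Yes", "Yes", "Yes", "No", "No"]
subject = ["Yes", "Yes", "No", "No", "No"]
print(round(cohen_kappa(model, subject), 3))
```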
The empirical results show good consistency between the model's generation of intermediate beliefs and the human data. This generally supports our second claim of evaluation.

Table 3 Kappa Agreement between Model and Subjects

Subject  P(A)  P(E)  K
1   .824  .526  .628
2   .882  .543  .742
3   .706  .491  .422
4   .882  .509  .761
5   .941  .526  .876
6   .882  .543  .742
7   .941  .561  .866
8   .941  .561  .866
9   .882  .543  .742
10  .941  .526  .876
11  1     .543  1
12  .882  .543  .742
13  1     .543  1
14  .941  .526  .876
15  .765  .543  .485
16  .824  .491  .667
17  .824  .561  .598
18  .765  .543  .485
19  .882  .509  .761
20  .941  .526  .876
21  .824  .561  .598
22  .882  .543  .742
23  .765  .543  .485
24  .941  .526  .876
25  .941  .561  .866
26  .882  .509  .761
27  .765  .578  .443
28  .882  .543  .742
29  .824  .491  .667
30  .882  .509  .761
Average           .732

Assessing Inference Rules

In our model, every belief is derived by a specific inference rule, so the answer to a question in the questionnaires corresponds to the firing of one rule (with the exception of three questions in the questionnaires designed to test two rules each). Currently, we have 39 dialogue and causal inference rules in the model, plus 4 rules for conditional plans (tested already in the firing squad examples). This survey study covers 19 of them. (After this study, we also ran an additional experiment to test credit assignment as well as the inference rule for intending one alternative, C5. The accuracy result for Rule C5 is also good.)

To assess the accuracy of the inference rules, we compare the conditions of each rule with the evidence people use in forming each answer. (The complete data on the evidence the subjects used are given in Appendix J.) Similar to the approach in machine learning, we measure accuracy using the confusion matrix [Kohavi & Provost, 1998]. For every subject's answer to each question, we build a confusion matrix to compute the number of true positives TP (i.e., evidence both the rule and the subject use), true negatives TN (i.e., evidence both the rule and the subject ignore), false positives FP (i.e., evidence the rule incorrectly uses), and false negatives FN (i.e., evidence the rule incorrectly ignores).

For each question Q_i, the correct prediction of the corresponding rule with respect to the evidence chosen by subjects is measured by accuracy (AC), where N_s is the total number of subjects and N_e is the total number of evidence lines for Q_i:

AC(Q_i) = Σ_{j ∈ Subjects} AC(j, Q_i) / N_s = Σ_{j ∈ Subjects} (TP(j, Q_i) + TN(j, Q_i)) / (N_s × N_e)

Table 4 Accuracies of Inference Rules

Scenario    Question  Inference Rule                                  Average Accuracy
Scenario 1  1         D3 [Request]                                    0.76
            2         D7 [Accept]                                     0.96
            3         C3 [Intend-Action]                              0.85
            4         D1 [Inform]                                     0.94
            5         C9 [Intention-Foreknowledge-Relation]           0.91
Scenario 2  1         D2 [Inform-Grounded]                            0.92
            2         D5 [Order]                                      0.96
            3         C7 [Intend-Plan]                                0.86
            4         C8 [Intend-Plan]                                0.70
            5         D6 & D9 [Order; Accept-Obligation]              0.84
Scenario 3  1         D13 [Counter-Propose-Grounded]                  0.94
            2         D14 & D15 [Counter-Propose]                     0.88
            3         D6 & D10 [Order; Unwilling-Accept-Obligation]   0.80
            4         C12 [Coerce-Primitive]                          0.74
Scenario 4  1         C16 [Coerce-Decision-Node]                      0.71
            2         C15 [Coerce-Decision-Node]                      0.84
            3         C17 [Coerce-Decision-Node]                      0.75

Table 4 lists the accuracy of the tested rules. The average accuracy of the rules in the model is 0.85. Given that each question contains 6 or 7 lines of evidence and people choose multiple lines in most cases, the accuracy results are fairly good. The empirical results show that the evidence the model uses for inference is consistent with the human data, thereby supporting our third claim of evaluation.
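A small sketch of this evidence-overlap accuracy measure is given below; the set-based representation of the "evidence lines used" and the toy numbers are assumptions made only for illustration.

```python
def rule_accuracy(rule_evidence, subject_evidence, all_evidence):
    """Accuracy of one rule against one subject's evidence selection.

    TP: lines both the rule and the subject use; TN: lines both ignore;
    accuracy = (TP + TN) / total number of evidence lines."""
    tp = len(rule_evidence & subject_evidence)
    tn = len(all_evidence - rule_evidence - subject_evidence)
    return (tp + tn) / len(all_evidence)

def average_rule_accuracy(rule_evidence, subjects_evidence, all_evidence):
    """AC(Q_i): mean accuracy over all subjects for one question."""
    return sum(rule_accuracy(rule_evidence, s, all_evidence)
               for s in subjects_evidence) / len(subjects_evidence)

# Toy example for a six-line scenario (E1-E6): the rule conditions on E3 and E4.
all_lines = {"E1", "E2", "E3", "E4", "E5", "E6"}
rule = {"E3", "E4"}
subjects = [{"E4"}, {"E3", "E4"}, {"E2", "E4"}]
print(round(average_rule_accuracy(rule, subjects, all_lines), 2))
```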
4.3.3 Discussion

Now we discuss how our model appraises each scenario, along with some experimental findings. The complete fired rules and belief derivations for the questions in the scenarios are given in Appendix I.

Scenario 1

In Scenario 1, the questionnaire specifically queries the perceived want, foreknowledge and intentions of the characters (see Figure 23). The belief that the vice president wants the new program can be inferred from the speech act request in E1. The chairman's intention to start the new program can be inferred from the speech act accept in E4. As starting the new program has only one action effect (E2), we can infer outcome intention from act intention: the chairman must intend the only effect (i.e., profit increase). That the vice president has no foreknowledge of the environmental harm can be inferred from the content of the inform in E3. According to our causal inference rule, no foreknowledge entails no intention.

Subjects gave consistent answers to the questions in Scenario 1. Their answers to the last question show that blameworthiness is mitigated by a lack of foreknowledge. This result is consistent with psychological findings. Though people assigned relatively more blame to the vice president, the data also suggest that the chairman should share blame with the vice president.

The accuracies of the inference rules are also good, in general. The accuracy of the rule tested in Question 1 is lower than the others because, in addition to evidence E1, many people chose E2 as well. Post-experiment interviews with the subjects uncovered that many subjects had assumed that making profits should be desirable to the vice president (because of his role), and therefore he should want to start the new program to increase profits (which is supported by E2).

Scenarios 2 & 3

Scenarios 2 and 3 manipulate the degree of perceived coercion and the willingness of the coerced agent. The agents have clear foreknowledge about the harm (E3). The chairman's goal of making more profits is also clearly stated (E4 in Scenario 2, E5 in Scenario 3). The following beliefs are inferred from Scenario 2:

B1. know(vp, p2, t1') ∧ bring-about'(p2, new-program, profit-increase) (E2: inform, Rule D1)
B2. know(chm, p2, t1') (E2: inform, Rule D2)
B3. know(vp, p4, t1') ∧ bring-about'(p4, new-program, env-harm) (E3: inform, Rule D1)
B4. know(chm, p4, t1') (E3: inform, Rule D2)
B5. intend(chm, p1, t2') ∧ do'(p1, vp, new-program) (E4: order, Rule D5)
B6. obligation(vp, p1, chm, t2') (E4: order, Rule D6)
B7. coerce(chm, vp, p1, t3') (B6, E5: accept, Rule D9)
B8. coerce(chm, vp, profit-increase, t3') (B7, Rule C12)
B9. coerce(chm, vp, env-harm, t3') (B7, Rule C12)

As there is no evidence of unwillingness, perceived coercion (B7) is true in a weak sense. The inferred beliefs B4, B5, B6 and B7 provide the predictions for Questions 1, 2 and 5 in Scenario 2. As there is only one plan in this scenario and the chairman intends the action (i.e., starting the new program) in the plan (B5), intention recognition is trivial here. (Our intention recognition method is generally applied to a plan library with multiple plans and sequences of actions; in this oversimplified example it becomes trivial.) Increasing profits is the goal of the plan (E4), so it is intended by the chairman (Question 3). Environmental harm is a side effect of goal attainment, so it is not intended by the chairman (Question 4).

In Scenario 3, the vice president's counter-proposal provides additional information (E4). More beliefs can be derived:

B1. know(vp, p2, t1') (E2: inform, Rule D1)
B2. know(chm, p2, t1') (E2: inform, Rule D2)
B3. know(vp, p4, t1') (E3: inform, Rule D1)
B4. know(chm, p4, t1') (E3: inform, Rule D2)
B5. know(vp, a, t1') ∧ alternative'(a, new-program, alternative-program) (E4: counter-propose, Rule D12)
B6. know(chm, a, t1') ∧ alternative'(a, new-program, alternative-program) (E4: counter-propose, Rule D13)
B7. ¬intend(vp, p1, t1') (E4: counter-propose, Rule D14)
B8. want(vp, p5, t1') ∧ do'(p5, vp, alternative-program) (E4: counter-propose, Rule D15)
B9. intend(chm, p1, t2') (E5: order, Rule D5)
B10. obligation(vp, p1, chm, t2') (E5: order, Rule D6)
B11. coerce(chm, vp, p1, t3') (B7&B10, E6: accept, Rule D10)
B12. coerce(chm, vp, profit-increase, t3') (B11, Rule C12)
B13. coerce(chm, vp, env-harm, t3') (B11, Rule C12)

Beliefs B6, B7, B8, B11 and B13 provide the predictions for Questions 1, 2, 3 and 4 in Scenario 3 (B7 and B8 together answer Question 2). Belief B11 gives strong evidence of coercion.

There are several disagreements among the subjects in Scenarios 2 and 3. In Question 4 of Scenario 2, one-third of the subjects think it is the chairman's intention to harm the environment. Whether a side effect is intentional or not is controversial in philosophy, and other empirical studies show results similar to ours [Nadelhoffer, forthcoming]. Also, in Question 5 of Scenario 2, some subjects think the vice president is not coerced by the chairman to start the new program, as the evidence is weaker than in Scenario 3. Half of them referred to evidence E5, indicating that they expect the vice president to negotiate with the chairman rather than directly accept the order. This result suggests a limitation in our current model. In contrast, when asked the same question in Scenario 3 (Question 3), almost all the subjects agreed that the vice president was coerced to start the new program.

In the first question of Scenario 3, some subjects think the chairman does not know about the alternative program, though the vice president clearly states it in the scenario. Most of these subjects (80%) referred to evidence E5, showing that they looked for grounding information. As our model infers grounded information from conversation, it was our mistake to omit this information in our design of the scenario. Last, in Question 4 of Scenario 3, some subjects seemed reluctant to infer outcome coercion from evidence of act coercion. Nonetheless, they still assigned a high degree of blame to the chairman.

A comparison of the blame assignments in Scenarios 2 and 3 shows that, on the one hand, the higher the degree of coercion, the less blame is assigned to the actor, a result consistent with psychological findings. On the other hand, even when perceived coercion is not strong, people still assign a high degree of blame to the coercer, as in Scenario 2.

The accuracies of two of the tested rules are comparatively lower than the others, such as the rule used in Question 4 of Scenario 2. In our model, the evidence needed for that inference is E2, E3 and E4. Almost all subjects chose evidence E4, but most ignored E2 (except two subjects). One reason is that E2, as knowledge (i.e., the new program helps increase profits), seems implied in E4 (otherwise people would have difficulty understanding E4). Similarly, for the rule used in Question 4 of Scenario 3, most subjects did not choose knowledge E3. However, we think this knowledge is necessary for the inference.
Scenario 4

In Scenarios 2 and 3, the action and its alternative are coerced by the authority, whereas in Scenario 4, the vice president has some freedom of choice. While the high-level plan (i.e., starting the new program) is still coerced (E4), the agent can choose to execute either alternative (the simple way or the complex way in E2). As both ways will increase profits (E3), increasing profits is unavoidable either way: the vice president is coerced to achieve this effect (Question 1), but he is neither coerced to choose the simple way (Question 2), nor is he coerced to achieve the specific effect environmental harm that only occurs in the simple way (Question 3).

In Question 1, some subjects think that the vice president is not coerced to increase profits, for the same reason mentioned earlier: they think it is the vice president's job to increase profits, so he must be willing to do so. People assigned more blame to the vice president, as he could have done otherwise. This result is consistent with psychological findings [Shaver, 1985]. However, people still assigned considerable blame to the chairman, though it was the vice president's choice to harm the environment. The inference rules in Question 1 and Question 3 are based on evidence E3, E4 and E5. Many subjects ignored knowledge E3, which lowers the accuracies of the two rules.

4.4 General Discussion

Although the experimental results show general support for the model, they reveal some limitations of the approach. It is clear that people made assumptions about the scenarios that were not explicitly represented in the model. For example, people assumed that the vice president had the goal of increasing profits even though this was not explicitly stated. This is related to the more general issue of ensuring correspondence between the model's encoding of the scenarios and subjects' interpretation of them. Currently, we construct this mapping by hand. This method has the disadvantage that, as designers of the scenarios, we may unintentionally introduce discrepancies. Alternatives would be to explore ways to automatically generate descriptions from their representation in the model, or at least to use an independent set of coders to characterize the textual encoding.

Subjects tended to assign shared blame to the individuals involved. Though our model supports joint activity and multi-agent plans, one limitation of the model is that it always assigns most of the blame to one agent (or a group of agents) who has caused or coerced the outcome. In the firing squad scenarios, a portion of the subjects mentioned that they think the marksmen actually make group decisions together, and so they should be collectively responsible for the outcome. Sometimes this is true even when the individual is not causally connected to the creditworthy or blameworthy event (e.g., the chairman is blamed in the company program Scenario 1). Some researchers' work is relevant to this. Norman and Reed [2000] discuss the issue of task delegation: when an agent decides to delegate tasks to others, the responsibility for the task is shared. Lickel [2003] investigates collective responsibility, in which blame is extended to others who are not behaviorally involved in the blameworthy event. Though our current approach evaluates responsibility mainly from an individual perspective, the model's representational and inferential mechanism has the potential to account for these extensions.
It seems that some variables are more universal than others; for example, in inferring intentions, people give relatively more consistent answers (or show more systematic bias) than they do for other variables (e.g., coercion). In appraising coercion, there is evidence of cultural differences in the data. For example, in the firing squad example, subjects from Western cultures (i.e., of American or European background in our data) were less willing to assign coercion, even when the situation shows clear evidence (e.g., firing squad Scenario 3). Subjects from Eastern cultures (i.e., of Chinese or Korean background), on the other hand, were more willing to judge the same situation as being coerced. We did not find gender differences in our studies.

It is well known that responsibility judgment is influenced by the observer's emotional state, interpersonal goals such as impression management [Mele, 2001], and dispositional differences such as personality. Although attribution theory emphasizes subjective interpretation of events, it is a general theory of lay social reasoning. Given the difficulties of personality measurement and the situational specificity of behavior, it is much more fruitful to search first for general principles rather than explore person × situation interactions [Weiner, 1986]. We start from the general principles identified by attribution theory, and if necessary, we can build models of individual differences that deviate from these principles by refining each stage of the attribution process. Further, our model of dialogue inference assumes that parties faithfully articulate their actions and beliefs, whereas people are notoriously biased when describing their involvement in creditworthy or blameworthy events [Nisbett & Wilson, 1977]. Although we have not accounted for these biases, our current model provides a framework for both generating and recognizing such influences.

Chapter 5 Toward Probabilistic Extensions

In Chapter 3, we presented our computational framework for social judgment, which involves the evaluation of individual variables and the algorithm to determine the responsible agents. We showed how the conceptual variables map into the computational representation, and how the inferential mechanism and algorithm work to form the final judgment result. As our work relies on commonsense heuristics of human inference from conversational communication, agents' knowledge states and observations of behavior, the approach is domain-independent and thus can be used as a general solution to the problem.

However, several limitations exist in our model. The most obvious limitation is its inability to deal with the uncertainty inherent in observations and judgment processes. As a result, the beliefs of variables in the model are treated as binary, either true or false. This is particularly problematic when it comes to inferring the mental states of other parties, such as their intentions. Although both dialogue inference and causal inference can derive beliefs of intentions if sufficient evidence is available, these techniques are quite limited. To address these limitations, we need to extend the model to incorporate probabilistic representation and reasoning.

This chapter extends the computational framework to incorporate a probabilistic representation of actions and plans, and builds a probabilistic reasoning mechanism to infer degrees of beliefs. To achieve this goal, we take a decision-theoretic approach that combines utilitarian preferences with probabilities of outcome occurrence.
The evaluation of agents' behavior is based on the fundamental principle of "maximum expected utility" (MEU) underlying decision theory: an agent is rational if and only if it chooses the action that yields the highest expected utility, averaged over all the possible outcomes of the action [Russell & Norvig, 2003]. Decision theory can be viewed as both a normative theory, supported by the observation that the MEU principle prescribes the way in which people should make decisions, and a descriptive theory, justified by the fact that people actually use these expectations and values in their decisions. It can be argued that human intuitions in commonsense inference have both descriptive and prescriptive features. Although empirical evidence shows that people systematically violate this principle [Tversky & Kahneman, 1982], decision theory still provides an excellent approximation to many judgments and decisions [Slovic et al, 1988].

5.1 Probabilistic Representation

5.1.1 Actions and Plans

In a probabilistic plan representation, each action has preconditions and effects. Action effects can be nondeterministic (i.e., effect probability, the likelihood of the occurrence of an action effect given that the corresponding action is successfully executed), conditional and/or conditional nondeterministic (i.e., conditional probability, the likelihood of the occurrence of its consequent given a conditional effect and its antecedents are true). To represent the success and failure of action execution, an action has an execution probability (i.e., the likelihood of successful action execution given that the preconditions are true). The likelihood of preconditions and effects is represented by probability values. The desirability of action effects (i.e., their positive/negative significance to an agent) is represented by utility values [Blythe, 1999].

The representation of plans is similar to that in Chapter 3, except that we use (expected) utilities of plans to represent the overall benefits and disadvantages of a plan. Plan utility is computed using the utilities of the outcomes in the plan and the probabilities with which different outcomes occur. (We shall talk more about the computation later.)
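To make this computation concrete, here is a minimal Python sketch that combines outcome utilities with outcome probabilities and selects, in MEU fashion, the plan an agent would be hypothesized to pursue. The helper names and the toy numbers are our own illustrative assumptions; the dissertation's exact formulae are given in Appendix F.

```python
def expected_plan_utility(outcomes):
    """Expected utility of a plan from (outcome probability, outcome utility) pairs:
    EU(plan) = sum_i P(outcome_i) * utility(outcome_i)."""
    return sum(p * u for p, u in outcomes)

def select_plan_by_meu(candidate_plans):
    """MEU-style hypothesis: assume the observed agent pursues the candidate plan
    with the highest expected utility."""
    return max(candidate_plans, key=lambda name: expected_plan_utility(candidate_plans[name]))

# Hypothetical candidate plans with (probability, utility) pairs for their outcomes.
plans = {
    "send-one-squad":  [(0.71, +25), (0.95, 0)],     # e.g., 1-6 supported, route secured
    "send-two-squads": [(0.84, +25), (0.95, -50)],   # e.g., 1-6 supported, unit fractured
}
best = select_plan_by_meu(plans)
print(best, {name: round(expected_plan_utility(o), 1) for name, o in plans.items()})
```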
[Figure 27 Illustrative Example of Probabilistic Plan Representation. The abstract action support unit 1-6 (performer and authority: lieutenant) decomposes (OR) into send one squad and send two squads (performer: sergeant, authority: lieutenant); each of these decomposes (AND) into forward-movement actions performed by the squad leader (authority: sergeant), with outcomes such as route secured, unit fractured / not fractured and 1-6 supported. Effect probabilities (e.g., 0.88, 0.75, 1.0), execution probabilities (0.95) and effect utilities (e.g., +25, −50, 0) are annotated on the plan.]

Figure 27 illustrates the probabilistic representation of the actions shown earlier in Section 3.2.1 (Figure 5). The actions two squads forward and remaining forward have the nondeterministic effect (unit) 1-6 supported, with probabilities 0.88 and 0.75, respectively. Other action effects are deterministic (probability = 1.0). The a priori execution probability of each primitive action is set to 0.95. Utilities of action effects are also given in the graph. Supporting unit 1-6 is a desirable goal with a positive utility value. Unit fractured is associated with a negative value, which indicates undesirability.

5.1.2 Degree of Belief

The belief model in Chapter 3 can be seen as an all-or-nothing model; that is, an agent either believes something or does not. There is no intermediate value to describe the extent to which an agent believes or disbelieves the attributions derived. This is fine if we purely use theorems of first-order logic for inference. However, because of the limitations mentioned earlier, to support probabilistic reasoning we need to represent the strengths of agents' beliefs.

The degree-of-belief model extends the classical truth values (true and false) to graded scales. Lambert [1993] proposed a three-level model of dialogue consisting of certain belief, strong belief and weak belief. Martinovski [2000] discussed how modality (especially epistemic modality) and sources of evidence affect the degrees of certainty in language communication. In this chapter, our focus is the probabilistic plan inference of beliefs about attributions. We use numerical values in [0, 1] to represent degrees of belief.

5.1.3 Symbolic Extensions

For the probabilistic representation of actions and plans, we add the following functions to the model (let A, e, e' and E be an action, an action effect, the consequent of a conditional effect and an effect set, respectively):

F17. P_effect(e | A): probability of the occurrence of its effect e given that action A is successfully executed.
F18. P_conditional(e' | antecedent(e), e): probability of the occurrence of its consequent e' given a conditional effect e and that its antecedents are true.
F19. P_execution(A | precondition(A)): probability of successful execution of action A given that its preconditions are true.
F20. utility(e): utility value of effect e (ranging between −100 and +100 in the model).

Functions F17−F19 represent the additional features in the probabilistic plan representation. As we take a decision-theoretic approach to the inference, state utility (F20) is another important input.

5.2 Probabilistic Reasoning

Utility and rationality issues have been explored in AI and agent research, both as means for specifying, designing and controlling rational behavior and as descriptive means for understanding behavior [Doyle, 1992, 2004; Lang et al, 2002]. In our approach, we use utilities in two ways. One way is to represent the perceiving agent's preferences over states. Since the attribution process is from this perceiving agent's perspective, the perceiver uses state preferences to evaluate the observed agents' freedom in choosing outcomes with different valences. Utilities can be used in another way to represent the presumed preferences of the observed agents. In the latter case, state preferences are used in recognizing agents' intentions and for disambiguation.

5.2.1 Intention Recognition

Bratman [1987] identifies the properties of intention in practical reasoning, and characterizes intentions as elements in agents' stable partial plans of action structuring present and future conduct. Plans thus provide context in evaluating intention, pertaining to the goals and reasons of an agent's behavior. Bratman's work justifies plan inference as a means for recognizing the intentions and goals of agents.

Meanwhile, in the AI literature, there is a wealth of computational work on plan/intention recognition. Schmidt, Sridharan and Goodson [1978] opened up the field of plan recognition. Their psychological experiments, together with the experiment by Cohen, Perrault and Allen [1982], provide empirical evidence that humans do infer the plans and goals of other agents and use these hypotheses in their subsequent reasoning. Kautz and Allen [1986] present the first formal theory of plan recognition using McCarthy's circumscriptive theory. To deal with the uncertainty inherent in plan inference, Charniak and Goldman [1989, 1993] build the first probabilistic model of plan recognition based on Bayesian reasoning. Huber, Durfee and Wellman [1994] use PRS as a general specification language, and construct a dynamic mapping from PRS to belief networks. Tambe and Rosenbloom [1995, 1996] propose the RESC approach to agent tracking in dynamic environments, which is based on situated commitments. Kaminka, Pynadath and Tambe [2002] employ probabilistic multiagent plan recognition for monitoring team behavior.

Though the approaches differ, most plan recognition systems infer a hypothesized plan from observations of actions. World states and, in particular, the agents' preferences over outcomes are rarely considered in the recognition process. On the other hand, in many real-world applications, the utilities of different outcomes are already known [Blythe, 1999]. A planning agent usually takes into account that actions may have different outcomes, and that some outcomes are more desirable than others. Therefore, when an agent makes decisions and acts on the world, the agent needs to balance among different possible outcomes. In our approach, we view plan recognition as inferring the decision-making strategy of the observed agents, and explicitly take states and state desirability into account.
There are different ways to address the utility issue in plan recognition (see [Mao & Gratch, 2004] for discussions). For example, we can extend the traditional probabilistic reasoning framework to incorporate utility nodes into belief nets. Another approach is based on the MEU principle, which assumes that a rational agent will adopt a plan maximizing the expected utility. Here we shall focus on the second way. The computation of expected plan utility is similar to that in decision-theoretic planning (e.g. DRIPS, [Haddawy & Suwandi, 1994]), using the utilities of outcomes and the probabilities with which different outcomes occur. In our approach, however, we use the observations of behavior as evidence to incrementally update state probabilities and the probabilities of action execution, and compute an exact utility value rather than a range of utility values as in decision-theoretic planning. The computation of expected plan utility captures two important factors. One is the desirability of plan outcomes. The other is the likelihood of outcome achievement, represented as outcome probabilities. The calculation of outcome probability considers the uncertainty of action preconditions (i.e., state probabilities), uncertainty in action execution (i.e., execution probability), and nondeterministic and/or conditional effects (for detailed formulae, see Appendix F). The intention recognition algorithm works on a possible plan set that is a subset of the plan library. Each plan in the possible plan set includes some or all of the observed actions and the outcome under evaluation. Beliefs of act intention about the observed agents acquired from other sources (e.g., communication) constrain the possible plans to a smaller set that is consistent with the current beliefs. The algorithm calculates the expected utility of each possible plan; the one with the highest expected utility is the hypothesized plan.

Degree of Intention
Once the current plan is identified (with a probability), we can further infer which action effects are intentional or unintentional by examining their relevance to goal attainment. If the evaluated outcome is relevant to the goal achievement (i.e., it serves as a precondition of some action in the hypothesized plan), then outcome intention is true. Otherwise, the evaluated outcome is a side effect of goal achievement, so outcome intention is false. In either case, the degree of intention (being true or false) is equal to the probability of achieving the goal of the current hypothesized plan. Intention is a major determinant of the degree of responsibility (Section 3.2.3). The higher the degree of intention, the greater the responsibility assigned. The degrees of intention and responsibility are used in computing the intensity of credit or blame assignment in the attribution process.

Comparisons
Some probabilistic approaches have considered the influence of world states on plan recognition, when actions themselves are unobservable (e.g., Pynadath & Wellman [1995]). Bui et al [2002, 2003] use the abstract hidden Markov model for online policy recognition. We did not adopt a Markov model in our work for several reasons. A Markov-based approach generates a relatively large state space, and assumes fixed goals. The core technologies of our application system center on a common representation of plan knowledge, which is shared and reused among different system components.
Besides, in modeling realistic virtual agents, we would like to give our agents the flexibility of strategically varying their interpretations of outcome desirability as a result of coping with specific situations [Marsella & Gratch, 2003]. Some work has implicitly considered an agent's utility functions; for example, Pynadath and Wellman [2000] capture the likelihood that an agent will expand a plan in a particular way (i.e., the expansion probabilities of PSDGs). Bayesian reasoning is advantageous in accounting for how well the observed actions support a hypothesized plan, but the inference itself requires large numbers of prior and conditional probabilities. In many situations, these probabilities are hard to obtain, and there is no good answer for where the numbers come from. Knowledge about actions, their preconditions and effects is typically available in a plan-based system. Compared with other work, our approach makes better use of this knowledge. We use observations of action execution to change state probabilities. However, there is no strong assumption about the observability of actions or effects in our approach, and a sequence of observations can be processed incrementally in the same way. Though our approach merely approximates the exact solution, we feel it is sufficient for our applications and compatible with the current system representation.

5.2.2 Coercion Inference
We have discussed in Chapter 3 how to infer coercion with respect to a specific outcome. However, in most social situations, we praise or blame an actor (or a coercer) mainly because of the beneficial or harmful property of her deed, and/or because she has the freedom to choose an alternative with a different outcome property. One limitation of evaluating specific outcomes is that they cannot express the contextual meaning of the goodness (or badness) of the outcomes or how good (or bad) they are. For example, a person will be blamed for a specific bad deed (if she is not coerced to do so), even though she actually had no option that was not bad. Therefore, instead of comparing specific outcomes, a more meaningful way of inferring coercion is to consider the valences and utilities of different outcomes. In the probabilistic context, the inference of outcome coercion is similar to that described in Chapter 3, except that expected utilities of actions and plans are compared (for detailed computations, see Appendix F). Here we use the probability values prior to action execution. For example, if an agent is coerced to execute an abstract action and the coerced action is a decision node in the plan structure, the evaluation of outcome coercion is based on the estimation of expected utilities of action alternatives. If the evaluated outcome appears in all alternatives (i.e., all the action alternatives have the same valence of expected utilities), then outcome coercion is true. If there is an action alternative with a valence of utility different from that of the evaluated outcome, then the agent has the freedom to choose at least one alternative to avoid the outcome, so outcome coercion is false. Expected plan utilities are used in the same way. For example, if an agent is coerced to achieve a goal (outcome) and there is no plan alternative (i.e., only one primitive plan is available to achieve the outcome), then the agent is coerced to pursue the primitive plan and the plan outcome.
Otherwise, if there are plan alternatives available, the evaluation process computes the expected utility of each plan alternative. If there is a plan alternative with a different utility value (e.g., the current plan has a negative utility value but a plan alternative has a positive value), then the agent has the option of choosing an alternative plan to avoid the outcome, so the outcome is not coerced in this case. If all the plan alternatives have the same valence of expected utilities, then outcome coercion is true. If other agents enable the only executable alternative or they block all the alternatives with different utilities, these other agents are also viewed as coercers that help coerce the plan and the outcome.

Degree of Coercion
When outcome coercion is true, we assign a degree value to measure the relative freedom of the coerced agents in choosing among similar alternatives. The higher the degree of coercion, the less freedom the agents have. If there is only one alternative available or all the available alternatives have the same expected utility values, then the degree of coercion equals 1. Otherwise, in evaluating a negative outcome, the degree assigned is proportional to the difference between the expected utility of the chosen alternative and the minimal expected utility among all alternatives, divided by the range of the expected utilities of all alternatives. In evaluating a positive outcome, the difference is computed by subtracting the expected utility of the chosen alternative from the maximal expected utility among all alternatives, again divided by the range of the expected utilities of all alternatives. Coercion is used to determine the responsible agents (Section 3.2.3). In the absence of external coercion, the actor whose action directly produces the outcome is regarded as responsible. The presence of coercion can deflect some or all of the responsibility to the coercive force, depending on the perceived degree of coercion.
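These two checks can be sketched as follows (illustrative code of ours, not the system's implementation; "proportional to" is read here with proportionality constant 1, and the expected utilities of the alternatives are assumed to be computed as in Appendix F).

from typing import List

def outcome_coerced(alternative_eus: List[float]) -> bool:
    # Outcome coercion holds when every available alternative carries an
    # expected utility of the same valence, so no alternative lets the agent
    # avoid that kind of outcome.
    return all(eu >= 0 for eu in alternative_eus) or all(eu <= 0 for eu in alternative_eus)

def degree_of_coercion(chosen_eu: float, alternative_eus: List[float],
                       negative_outcome: bool) -> float:
    # Degree 1 when there is a single alternative or all alternatives have the
    # same expected utility; otherwise the distance of the chosen alternative
    # from the minimal (for a negative outcome) or maximal (for a positive
    # outcome) expected utility, normalized by the range of expected utilities.
    lo, hi = min(alternative_eus), max(alternative_eus)
    if len(alternative_eus) == 1 or lo == hi:
        return 1.0
    diff = (chosen_eu - lo) if negative_outcome else (hi - chosen_eu)
    return diff / (hi - lo)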
5.3 Algorithm and Illustration
We have developed an algorithm (Figure 28) for evaluating the responsible agents for a specific outcome e. The algorithm first searches the history of dialogue and applies dialogue and causal inference rules (Steps 1−2). Then it employs the intention recognition method based on the observed action execution of agents (Step 3). For each executed action relevant to achieving e in the hypothesized plan (Step 4) and each of its relevant effects, including e (Step 4.1), if the relevant effect is caused or attempted by the performer (Step 4.1.1), the performer is responsible for the relevant effects (Step 4.1.2). To trace the coercing agent(s), the evaluation process starts from the primitive actions that directly cause the relevant effects (Step 4.1.3), and works up the task hierarchy (Step 4.1.4). If outcome coercion is true (Step 4.1.4.2), the algorithm computes the degree of coercion and assigns coercers to the responsible agents (Step 4.1.4.3). If the goal of the plan is coerced (Step 5), the algorithm computes utilities of plans and infers the alternative plans (Step 5.1). If outcome e is coerced (Step 5.2), it computes the degree of coercion and the coercer of the goal is responsible (Step 5.3). Finally, the algorithm assigns responsibility to the secondary responsible agents (Step 6), and returns the responsible agents as well as the beliefs about the degrees of intention and coercion (Step 7). After its execution, the algorithm identifies the responsible agents. Meanwhile, via dialogue inference and intention recognition, the algorithm also acquires beliefs about intention (with degrees). The intensity of credit or blame is computed by multiplying the absolute utility value of the evaluated outcome (i.e., the positivity or severity of the outcome) and the degree with which the outcome is intended by the observed agents. In addition, the degree of coercion, adjusted by the factor K, mediates the credit or blame assigned among the responsible agents. For the secondary responsible agents, the formula below is multiplied by a ratio R, indicating the agents' portion of contribution to causing or coercing the creditworthy or blameworthy event.

Intensity(Credit/Blame) = K(Degree(coercion)) × |Utility(outcome)| × Degree(intention)

Algorithm 2 (consequence e, action theory AT, utility functions):
1. Search dialogue history and apply dialogue inference rules
2. Apply causal inference rules on actions
3. Observe action execution and apply intention recognition algorithm
4. FOR each executed relevant action B of e in the hypothesized plan
   4.1 FOR each effect e' of B relevant to achieving e
       4.1.1 IF cause(performer(B), e') OR intend(performer(B), p) ∧ by'(p, B, e') THEN
       4.1.2   primary-responsible(e') = performer(B)
       4.1.3   P = B
       4.1.4   DO
       4.1.4.1   C = P
       4.1.4.2   IF coerce(authority(C), performer(C), e') THEN
       4.1.4.3     Compute degree of coercion; primary-responsible(e') = authority(C)
       4.1.4.4     P = parent node of C in AT
                 END-IF
               WHILE C ≠ root of action hierarchy AND coerce(authority(C), performer(C), e')
             END-IF
       END-FOR
   END-FOR
5. IF goal of the plan is coerced THEN
   5.1 Compute utilities of plan alternatives and apply causal inference rules on plan
   5.2 IF consequence e is coerced THEN
   5.3   Compute degree of coercion; primary-responsible(e) ∪= coercer(goal)
       END-IF
   END-IF
6. Compute degree of intention; secondary-responsible(e) = ∪ e'∈relevant-effect(e, plan) primary-responsible(e')
7. RETURN primary-responsible(e) ∪ secondary-responsible(e); Degrees of beliefs

Figure 28 Algorithm for Evaluating Responsible Agents

Now we consider again the example in Section 3.5. The social actors, conversation history and task execution remain the same. We still take the sergeant's perspective as an example, but now base the illustration on the probabilistic plan representation shown in Figure 27. The complete plans of the agents are given below (Figure 29).
[Figure 29: two alternative plans from the sergeant's perspective. Plan 1: assemble, one squad forward, remaining forward (performers and authorities as in Figure 27); remaining forward achieves 1-6-supported with probability 0.75 and utility +25. Plan 2: assemble, two squads forward, remaining forward; two squads forward achieves 1-6-supported with probability 0.8 (utility +25) but also brings about unit-fractured (utility −50). Intermediate states include troop-in-transit, troop-at-aa, one-sqd-at-aa, two-sqds-at-aa, remaining-at-aa, route-secured and not-fractured.]
Figure 29 Plan Alternatives from Sergeant's Perspective

In the scenario, the lieutenant's mission is to support unit 1-6 (i.e., unit 1-6 supported in the task model). This is a desirable team goal. Two plan alternatives are available in the plan library to achieve this goal, namely Plan 1 and Plan 2. Plan 1 is composed of three primitive actions: assemble, one-squad-forward and remaining-squads-forward. Remaining-squads-forward in Plan 1 achieves 1-6-supported (with effect probability 0.75). Plan 2 consists of the primitive actions assemble, two-squads-forward and remaining-squads-forward. Two-squads-forward in Plan 2 achieves 1-6-supported (with effect probability 0.8), but also brings about the outcome unit-fractured. Besides, one-squad-forward and remaining-squads-forward compose the abstract action send-one-squad, and two-squads-forward and remaining-squads-forward compose the abstract action send-two-squads. The performer and authority of each action, probabilities and utilities (from the sergeant's perspective) are shown in the figure. The prior execution probability of each action is 0.95. Two actions of the troop are observed executed: assemble and 1st-and-4th-squads-forward. Assume the sergeant assigns negative utility to unit-fractured and this consequence serves as input to the evaluation algorithm. We illustrate how to find the responsible agents given the sergeant's plan knowledge and observations.

Steps 1-2: Inferring the observed speech acts, the sergeant can derive a number of beliefs (the same as those in Section 3.5, Steps 1−2).

Step 3: The observed action sequence of the troop, assemble and 1st-and-4th-squads-forward (an instance of two-squads-forward), supports Plan 2 (EU(Plan1|E)=27 and EU(Plan2|E)=32; computed using the utility functions of the troop, Utility(1-6-supported)=+40). So Plan 2 is the current hypothesized plan (with probability 0.8).
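The two expected-utility figures in Step 3 can be reproduced from the probabilities in Figure 29 under assumptions we make here only for illustration: the observed actions (assemble and two-squads-forward) are treated as successfully executed with probability 1.0, unobserved actions keep the a priori execution probability 0.95, and the troop's utility is +40 for 1-6-supported and neutral for the other outcomes.

# Worked check of EU(Plan1|E) ≈ 27 and EU(Plan2|E) = 32, using the sergeant's
# hypothesis about the troop's utility function: Utility(1-6-supported) = +40.
U = 40.0

# Plan 1: assemble observed (1.0); one-squad-forward and remaining-forward not
# yet observed (0.95 each); remaining-forward achieves 1-6-supported with
# probability 0.75.
eu_plan1 = 1.0 * 0.95 * 0.95 * 0.75 * U     # ≈ 27.1, reported as 27

# Plan 2: assemble and two-squads-forward both observed (1.0 each);
# two-squads-forward achieves 1-6-supported with probability 0.8.
eu_plan2 = 1.0 * 1.0 * 0.8 * U              # = 32.0

print(round(eu_plan1, 1), round(eu_plan2, 1))   # 27.1 32.0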
Step 4: As two-squads-forward directly causes the evaluated outcome unit-fractured, the performer squad leader is the causal agent for the outcome. As assemble establishes the precondition of two-squads-forward and the sergeant is the performer, the sergeant is the indirect agency for the outcome. Both actions are relevant to achieving the evaluated outcome unit-fractured in Plan 2. The effect of assemble, troop-at-accident-area (i.e., troop-at-aa), is relevant to achieving the outcome unit-fractured. Initially, assign the performers of relevant actions to the responsible agents: the sergeant is responsible for troop-at-aa, and the squad leader is responsible for unit-fractured. There is no clear evidence of coercion for the primitive action assemble and its effects. From causal inference, since the primitive action two-squads-forward was coerced by the sergeant (from Step 1) and the squad leader was the performer, the squad leader was coerced by the sergeant to achieve the outcomes 1-6-supported and unit-fractured. Since the sergeant was coerced by the lieutenant to perform the abstract action send-two-squads (from Step 1) and the action is a non-decision node, the sergeant was coerced by the lieutenant to achieve the outcome unit-fractured. Assign the lieutenant as a responsible agent for unit-fractured, with degree of coercion 1.

Step 5: The lieutenant was obliged to fulfill his mission of supporting unit 1-6; he was coerced to achieve the goal. Two plan alternatives, Plan 1 and Plan 2, are available. By computing the utilities of Plan 1 and Plan 2 (EU(Plan1|E)=16.9 and EU(Plan2|E)=−30; computed using the utility functions of the sergeant in Figure 15), the sergeant knows that there is a plan alternative with a different utility value. No other agents' activities blocked the alternative plan. So the lieutenant was not coerced to execute the plan or achieve the plan outcome.

Step 6: As the plan recognizer identifies that Plan 2 is the current hypothesized plan of the troop, and that unit-fractured is not a relevant effect for the goal achievement, the sergeant believes that the troop did not intend unit-fractured, with degree 0.8. Assign the sergeant as the secondary responsible agent.

The algorithm also acquires beliefs about intention and foreknowledge. From the results of dialogue inference (Step 1), the sergeant believes that both the lieutenant and himself knew the consequence beforehand. The lieutenant intended sending two squads forward and did not intend sending one squad forward. Through the application of a causal inference rule, the sergeant believes that the lieutenant intended the outcome. This belief overrides another belief from intention recognition. As the lieutenant foresaw and intended the outcome, he is responsible for unit-fractured to a high degree. The intensity of blame assigned to the lieutenant is 40 (computed by multiplying |Utility(unit-fractured)|=50 and Degree(intention)=0.8; ratio R and factor K equal 1). The sergeant also shares some blame, with intensity 5 (computed by 0.5×1×50×0.2).

Chapter 6 Conclusions

6.1 Summary
In this dissertation, we have developed a computational framework for modeling social causality and responsibility judgment in the context of multi-agent interactions. Based on attribution theory in psychology, we take the conceptual variables in the theory and map them into a computational representation for intelligent agent-based systems. We build the model to infer attribution variables such as agency, intention, foreknowledge and coercion from the speech act representation of communication and features in plan representation. We extend the computational framework to the probabilistic context and develop a general intention recognition algorithm based on maximizing expected plan utility. We design algorithms to identify the responsible agents in the attribution process.
Finally, we validate the computational framework using human data, and evaluate the model's performance in generating overall judgments, internal beliefs and inferences. Thus, theoretically and empirically, the computational framework we developed can be used as a general solution to the problem. This work takes the first step toward cognitive modeling of human social intelligence and forms a basis for formalizing commonsense knowledge in the social domain. Meanwhile, it contributes to other disciplines as well. It contributes to cognitive science by identifying the underlying cognitive process, structure and inference of human social reasoning. It contributes to social psychology by constructing the first computational attribution theory. It contributes to computer science by providing a functionally workable model of social interactions for human-like intelligent agents.

6.2 Future Considerations
Some issues are briefly discussed here. Although they are not the main focus of this work, they deserve attention in future research.

Credit and Blame Asymmetry
We use a uniform model for both credit and blame judgments. However, there is a distinction between these two types of judgments. D'Arcy [1963] pointed out that the criteria for judging benefit (i.e., credit assignment) are stricter than those for judging harm (i.e., blame assignment). He argued that establishing the actor's intention is not as critical to evaluating responsibility for harm as it is to responsibility evaluation for beneficial outcomes. For example, a driver is quite likely to be held responsible for an accident she causes negligently. In contrast, people are reluctant to credit an actor for a benefit that she brings about inadvertently or carelessly. The experimental findings of Knobe [2003b] also show credit and blame asymmetry in people's judgments of behavior. Attribution theories for causality and responsibility have been silent on this issue. We did an additional experiment, suggested by Joshua Knobe, to verify this finding (by changing the harmful conditions in the company program examples to helpful conditions and keeping all the other scenario descriptions the same). The empirical results are significant enough for us to consider reflecting them in our model.

Self-Serving Biases
In her classic paper, Bradley [1978] reports empirical evidence on self-serving biases 8 , which refer to people's tendency to make self-attributions for their own positive behavior and external attributions for their own negative behavior. This tendency reveals subjective needs and the motivational influence of the self as perceiver on social judgment. That is, by taking credit for good things and denying blame for bad things, an individual may be able to enhance or protect his or her self-esteem. The self-serving bias is a social phenomenon involving self-perception and self-judgment. Though we emphasize the perceiver's subjective evaluation of behavior, and individual perceivers can form different judgments in our model, we have not considered the perceiver's motivational needs. One way to model this aspect of human behavior is to apply the model in reverse, starting from the personal needs of the perceiver and then seeking information to alter the attribution values so as to skew the judgment process. Our work on mitigation is a first attempt in this direction [Martinovski et al, 2005].

Social Norms and Organization Structure
In modeling social judgment, social norms and moral standards should play an important role.
As desirability in our model can be personal as well as social, our approach can be viewed as representing social standards by utility or (dis-utility) over states that uphold (or break) the standards. This conceptualization simplifies the recognition of satisfying (or violating) standards, as the situations are restricted to those represented within plan context. 8 Also called defensive biases or hedonic biases. See also the related notion of self-deception [Mele, 2001]. 107 But clearly, there are other situations of satisfying or violating standards that could not be handled this way. Besides, our approach assumes a hierarchical power relation of agents. This is acceptable given that one universal characteristic of human societies is the existence of this authority-subordination hierarchy. But one can argue that different societies and organizational structures may vary the treatment to the problem. Nested Beliefs Judging others’ behavior needs represent others’ beliefs, for example, evaluating foreseeability needs the perceiver’s belief about the performing agent’s belief about actions and consequences. The layers of belief nesting increase when an agent takes another agent’s perspective, for example, when an agent infers how another agent might judge his or her own behavior. There are several related studies in cognitive science (e.g., [Wilks & Bien, 1983; Maida, 1986; Wilks & Ballim, 1987]). We follow Wilks et al’s proposal that based on the least-effort principle, a nesting should be generated by default reasoning, using a default rule for ascription of beliefs. The default ascriptional rule is to assume that one’s view of another agent’s view is the same as one’s own except where there is explicit evidence to the contrary. 108 Bibliography V. Aleven and K. D. Ashley. Doing Things with Factors. Proceedings of the Fifth International Conference on Artificial Intelligence and Law, 1995. J. F. Allen and R. Perrault. Analyzing Intention in Utterances. Artificial Intelligence, 15(3):143-178, 1980. J. Austin. How to Do Things with Words. Harvard University Press, 1962. J. Blythe. Decision-Theoretic Planning. AI Magazine, 20(2):37-54, 1999. G. W. Bradley. Self-Serving Biases in the Attribution Process: A Reexamination of the Fact or Fiction Question. Journal of Personality and Social Psychology, 36(1):56-71, 1978. M. E. Bratman. Intention, Plans, and Practical Reason. Harvard University Press, 1987. M. E. Bratman, D. J. Israel and M. E. Pollack. Plans and Resource-Bounded Practical Reasoning. Computational Intelligence, 4(4):349-355, 1988. H. H. Bui, S. Venkatesh and G. West. Policy Recognition in the Abstract Hidden Markov Model. Journal of Artificial Intelligence Research, 17:451-499, 2002. H. H. Bui. A General Model for Online Probabilistic Plan Recognition. Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence, 2003. R. Buss. Causes and Reasons in Attribution Theory: A Conceptual Critique. Journal of Personality and Social Psychology, 36:1311-1321, 1978. J. Carletta. Assessing Agreement on Classification Tasks: the Kappa Statistic. Computational Intelligence, 22(2):249-254, 1996. E. Charniak and R. Goldman. A Semantics for Probabilistic Quantifier-Free First-Order Languages, with Particular Application to Story Understanding. Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, 1989. E. Charniak and R. Goldman. A Bayesian Model of Plan Recognition. Artificial Intelligence, 64(1):53-79, 1993. H. Chockler and J. Y. 
Halpern. Responsibility and Blame: A Structural-Model Approach. Journal of Artificial Intelligence Research, 22:93-115, 2004. 109 H. H. Clark and E. F. Schaefer. Collaborating on Contributions to Conversation. Language and Cognitive Processes, 2, 1-23. J. Cohen. A Coefficient of Agreement for Nominal Scales. Educational and Psychological Measurement. 20:37-46, 1960. P. R. Cohen and H. J. Levesque. Intention is Choice with Commitment. Artificial Intelligence, 42(2-3):213-261, 1990. P. R. Cohen, J. Morgan and M. E. Pollack (Eds.). Intentions in Communication. The MIT Press, 1990. P. R. Cohen, C. R. Perrault and J. F. Allen. Beyond Question Answering. Strategies for Natural Language Processing, pp. 245-274. Lawrence Erlbaum Associates, 1982. R. Conte and M. Paolucci. Responsibility for Societies of Agents. Journal of Artificial Societies and Social Simulation, 7(4), 2004. E. D’Arcy. Human Acts: An Essay in Their Moral Evaluation. Oxford: Clarendon, 1963. B. Di Eugenio and M. Glass. The Kappa Statistic: A second Look. Computational Linguistics, 30(1):95-101, 2004. M. d’Inverno, D. Kinny, M. Luck and M. Wooldridge. A Formal Specification of dMARS. In: M. P. Singh, A. Rao and M. J. Wooldridge (Eds.). Intelligent Agents IV, pp. 155-176. Springer-Verlag, 1997. J. Dore. Conditions for the Acquisition of Speech Acts. In: I. Markova (Ed.). The Social Context of Language, pp. 87-111. John Wiley & Sons, 1978. J. Doyle. Rationality and Its Roles in Reasoning. Computational Intelligence, 8(2):376-409, 1992. J. Doyle. Prospects for Preferences. Computational Intelligence, 20(2):111-136, 2004. F. D. Fincham and J. M. Jaspars. Attribution of Responsibility: From Man the Scientist to Man as Lawyer. In: L. Berkowitz (Ed.). Advances in Experimental Social Psychology (Vol. 13), pp. 81-138. Academic Press, 1980. K. Fischer, J. P. Mueller and M. Pischel. A Pragmatic BDI Architecture. In: M. Wooldridge, J. P. Mueller and M. Tambe (Eds.). Intelligent Agents II, pp. 203-218. Springer-Verlag, 1996. M. R. Genesereth and N. Nilsson. Logical Foundations of Artificial Intelligence. Morgan Kaufmann Publishers, 1987. M. P. Georgeff and A. L. Lansky. Reactive Reasoning and Planning. Proceedings of the Sixth National Conference on Artificial Intelligence, 1987. 110 M. Georgeff, B. Pell, M. Pollack, M. Tambe, and M. Wooldridge. The Belief-Desire- Intention Model of Agency. In: J. Muller, M. P. Singh, and A. S. Rao (Eds). Intelligent Agents V: Proceedings of the Fifth International Workshop on Agent Theories, Architectures, and Languages, pp. 1--10. Springer-Verlag, 1999. A. Gordon and J. R. Hobbs. Formalizations of Commonsense Psychology. AI Magazine, 25(4):49-62, 2004. J. Gratch, J. Rickel, E. Andre, N. Badler, J. Cassell and E. Petajan. Creating Interactive Virtual Humans: Some Assembly Required. IEEE Intelligent Systems, 17(4):54-63, 2002. J. Gratch and W. Mao. Automating After Action Review: Attributing Blame or Credit in Team Training. Proceedings of the Twelfth Conference on Behavior Representation in Modeling and Simulation, 2003. J. Gratch, W. Mao and S. Marsella. Modeling Social Emotions and Social Attributions. In: R. Sun (Ed.). Cognition and Multi-Agent Interaction, pp. 219-251. Cambridge University Press, 2006. H. P. Grice. Logic and Conversation. In: P. Cole and J. Morgan (Eds.). Syntax and Semantics: Vol 3, Speech Acts. Academic Press, 1975. B. Grosz and S. Kraus. Collaborative Plans for Complex Group Action. Artificial Intelligence, 86(2):269-357, 1996. W. M. Grove, N. C. Andreasen, P. McDonald-Scott, M. B. 
Keller and R. W. Shapiro. Reliability Studies of Psychiatric Diagnosis. Theory and Practice. Archives of General Psychiatry, 38(4):408-413, 1981. P. Haddawy and M. Suwandi. Decision-Theoretic Refinement Planning Using Inheritance Abstraction. Proceedings of the Second International Conference on Artificial Intelligence Planning, 1994. J. C. Hage. Reasoning with Rules: An Essay on Legal Reasoning and Its Underlying logic. Kluwer Academic Publishers, 1997. J. Y. Halpern and J. Pearl. Causes and Explanations: A Structural-Model Approach – Part Ι: Causes. Proceeding of the Seventeenth Conference on Uncertainty in Artificial Intelligence, 2001. J. Y. Halpern and M. Y. Vardi. The Complexity of Reasoning about Knowledge and Time. I. Lower Bounds. Journal of Computer and System Sciences, 38(1):195-237, 1989. B. Harris and J. H. Harvey. Attribution Theory: From Phenomenal Causality to the Intuitive Social Scientist and Beyond. In: C. Antaki (Ed.). The Psychology of Ordinary Explanations of Social Behavior, pp. 57-95. Academic Press, 1981. 111 H. L. A. Hart and T. Honore. Causation in the Law (Second Edition), Oxford University Press, 1985. F. Heider. The Psychology of Interpersonal Relations. John Wiley & Sons Inc, 1958. D. J. Hilton. Conversational Processes and Causal Explanation. Psychological Bulletin, 107:65-81, 1990. J. R. Hobbs. Ontological Promiscuity. Proceedings of the Twenty-Third Annual Meeting of the Association for Computational Linguistics, 1985. J. R. Hobbs and R. C. Moore (Eds.). Formal Theories of the Commonsense World. Ablex Publishing Corp., 1985. J. R. Hobbs, M. Stickel, D. Appelt and P. Martin. Interpretation as Abduction. Artificial Intelligence, 63(1-2):69-142, 1993. M. Hopkins and J. Pearl. Clarifying the Usage of Structural Models for Commonsense Causal Reasoning. Proceedings of AAAI Spring Symposium on Logic Formulizations of Commonsense Reasoning, 2003. M. J. Huber, E. H. Durfee and M. P. Wellman. The Automated Mapping of Plans for Plan Recognition. Proceedings of the Tenth Annual Conference on Uncertainty in Artificial Intelligence, 1994. M. J. Huber. JAM: A BDI-Teoretic Mobile Agent Architecture. Proceedings of the Third International Conference on Autonomous Agents, 1999. N. R. Jennings. On Being Responsible. In: E. Werner and Y. Demazeau (Eds.). Decentralized A.I., pp. 93-102. North Holland Publishers, 1992. N. R. Jennings and E. H. Mamdani. Using Joint Responsibility to Coordinate Collaborative Problem Solving in Dynamic Environments. Proceedings of the Tenth National Conference on Artificial Intelligence, 1992. E. E. Jones and K. E. Davis. From Acts to Dispositions: The Attribution Process in Person Perception. In: L. Berkowitz (Ed.). Advances in Experimental Social Psychology (Vol.2), pp. 219-266. Academic Press, 1965. E. E. Jones and D. McGillis. Correspondent Inferences and the Attribution Cube: A comparative Reappraisal. In: J. H. Harvey, W. J. Ickes and R. F. Kidd (Eds.). New Directions in Attribution Research (Vol.1), pp. 289-420. Lawrence Erlbaum Associates, 1976. J. A. Kalman. Automated Reasoning with Otter. Rinton Press, 2001. 112 G. Kaminka, D. V. Pynadath and M. Tambe. Monitoring Teams by Overhearing: A Multiagent Plan Recognition Approach. Journal of Artificial Intelligence Research, 17:83- 135, 2002. H. A. Kautz and J. F. Allen. Gneralized Plan Recognition. Proceedings of the Fifth National Conference on Artificial Intelligence, 1986. H. H. Kelley. Attribution Theory in Social Psychology. In: D. Levine (Ed.). Nebraska Symposium on Motivation 1967, pp. 
192-238. University of Nebraska Press, 1967. H. H. Kelley. Causal Schemata and the Attribution Process. In: E. E. Jones, D. Kanouse, H. H. Kelley, R. E. Nisbett, S. Valins and B. Weiner (Eds.). Attribution: Perceiving the Causes of Behavior, pp. 151-174. General Learning Press, 1972. H. H. Kelley. The Processes of Causal Attribution. American Psychologist, 28:107-128, 1973. R. F. Kidd and T. M. Amabile. Causal Explanations in Social Interaction: Some Dialogues on Dialogue. In: J. H. Harvey, W. J. Ickes and R. F. Kidd (Eds.). New Directions in Attribution Research (Vol. 3), pp. 307-328. Lawrence Erlbaum Associates, 1981. J. Knobe. Intentional Action and Side-Effects in Ordinary Language. Analysis, 63:190-193, 2003a. J. Knobe. Intentional Action in Folk Psychology: An Experimental Investigation. Philosophical Psychology, 16:309-324, 2003b. R. Kohavi and F. Provost. Glossary of Terms. Machine Learning, 30(2/3):271-274, 1998. K. Krippendorff. Content Analysis: An Introduction to its Methodology. Sage Publications, 1980. M. Lalljee and R. P. Abelson. The Organization of Explanations. In: M. Hewstone (Ed.). Attribution Theory: Social and Functional Extensions, pp. 65-80. Oxford: Blackwell, 1983. L. Lambert. Recognizing Complex Discourse Acts: A Tripartite Plan-Based Model of Dialogue. Ph.D. Thesis, University of Delaware, 1993. J. Lang, L. van der Torre and E. Weydert. Utilitarian Desires. Autonomous Agents and Multi-Agent Systems, 5:329-363, 2002. S. Larsson and D. Traum, Information State and Dialogue Management in the TRINDI Dialogue Move Engine Toolkit. Natural Language Engineering, 6(3-4):323-340, 2000. B. Lickel, T. Schmader and D. L. Hamilton. A Case of Collective Responsibility: Who Else Was to Blame for the Columbine High School Shootings? Personality and Social Psychology Bulletin, 29(2): 194-204, 2003. 113 A. S. Maida. Introspection and Reasoning about the Beliefs of Other Agents. Proceedings of the Eighth Annual Conference of the Cognitive Science Society, 1986. B. F. Malle. How the Mind Explains Behavior: Folk Explanations, Meaning, and Social Interaction. The MIT Press, 2004. B. F. Malle and J. Knobe. The Folk Concept of Intentionality. Journal of Experimental Social Psychology, 33:101-121, 1997. W. Mao and J. Gratch. Decision-Theoretic Approaches to Plan Recognition. ICT Technical Report (http://www.ict.usc. edu/publications/ICT-TR-01-2004.pdf), 2004. S. Marsella and J. Gratch. Modeling Coping Behavior in Virtual Humans: Don’t Worry, Be Happy. Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems, 2003. B. Martinovski. The Role of Repetitions and Reformulations in Court Proceedings: A Comparison between Sweden and Bulgaria. Ph.D. Thesis, University of Göteborg, 2000. B. Martinovski, W. Mao, J. Gratch and S. Marsella. Mitigation Theory: An Integrated Approach. Proceedings of the Twenty-Seventh Annual Conference of the Cognitive Science Society, 2005. L. T. McCarty and N. S. Sridharan. The Representation of an Evolving System of Legal Concepts: ΙΙ. Prototypes and Deformations. Proceedings of the Seventh International Joint Conference on Artificial Intelligence, 1981. L. T. McCarty. An Implementation of Eisner v. Macomber. Proceedings of the Fifth International Conference on Artificial Intelligence and Law, 1995. L. T. McCarty. Some Arguments about Legal Arguments. Proceedings of the Sixth International Conference on Artificial Intelligence and Law, 1997. A. R. Mele. Self-Deception Unmasked. Princeton University Press, 2001. G. A. 
Miller and S. Glucksberg. Psycholinguistic Aspects of Pragmatics and Semantics. In: R. C. Atkinson, R. J. Herrnstein, G. Lindzey and R. D. Luce (Eds.). Stevens’ Handbook of Experimental Psychology (2 nd Edition) Vol.2, pp. 417-471. John Wiley & Sons, 1988. E. Mueller. Commonsense Reasoning. Morgan Kaufmann Publishers, 2006. T. Nadelhoffer. On saving the Simple View. Mind and Language, forthcoming. R. E. Nisbett and T. D. Wilson. Telling More than We Can Know: Verbal Reports on Mental Processes. Psychological Review, 84(3):231-259, 1977. 114 T. J. Norman and C. Reed. Delegation and Responsibility. In: C. Castelfranchi & Y. Lesperance (Eds.). Intelligent Agents VII: Proceedings of the Seventh International Workshop on Agent Theories, Architectures and Languages. Springer-Verlag, 2001. J. Pearl. Reasoning with Cause and Effect. Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence, 1999. M. E. Pollack. Plans as Complex Mental Attitudes. In: P. R. Cohen, J. Morgan and M. E. Pollack (Eds.), Intentions in Communication, pp. 77-103. The MIT Press, 1990. H. Prakken. Logic Tools for Modeling Legal Argument: A Study of Defeasible Argumentation in Law. Kluwer Academic Publishers, 1997. H. Prakken and G. Sartor. The Role of Logic in Computational Models of Legal Argument. In: A.Kakas and F. Sadri (eds.). Computational Logic: Logic Programming and Beyond, Essays in Honor of Robert A. Kowalski, Part II, pp. 342-380. Springer-Verlag, 2002. D. V. Pynadath and M. P. Wellman. Accounting for Context in Plan Recognition, with Application to Traffic Monitoring. Proceedings of the Eleventh International Conference on Uncertainty in Artificial Intelligence, 1995. D. V. Pynadath and M. P. Wellman. Probabilistic State-Dependent Grammars for Plan Recognitions. Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence, 2000. A. S. Rao. AgentSpeak(L): BDI Agents Speak out in a Logical Computable Language. In: W. Van de Velde and J. W. Perram (Eds.). Agents Breaking Away: Proceedings of the Seventh European Workshop on Modeling Autonomous Agents in Multi-Agent World, pp. 42-55. Springer-Verlag, 1996. S. J. Read. Constructing Causal Scenarios: A Knowledge Structure Approach to Causal Reasoning. Journal of Personality and Social Psychology, 52(2):288-302, 1987. J. A. Rice. Mathematical Statistics and Data Analysis (Second Edition). Duxbury Press, 1994. Rietveld, T., and van Hout, R. Statistical Techniques for the Study of Language and Language Behavior. Mouton de Gruyter, 1993. E. L. Rissland and K. D. Ashley. A Case-Based System for Trade Secrets Law. Proceedings of the First International Conference on Artificial Intelligence and Law, 1987. E. L. Rissland and D. B. Skalak. CABARET: Statutory Interpretation in a Hybrid Architecture. International Journal of Man-Machine Studies, 34:839-887, 1991. S. Russell and P. Norvig. Artificial Intelligence: A Modern Approach. Prentice Hall, 2003. 115 M. Sadek. A Study in the Logic of Intention. Proceedings of the Third International Conference on Principles of Knowledge Representation and Reasoning, 1992. C. F. Schmidt, N. S. Sridharan and J. L. Goodson. The Plan Recognition Problem: An Intersection of Psychology and Artificial Intelligence. Artificial Intelligence, 11(1-2):45-83, 1978. J. R. Searle. Speech Acts: An Essay in the Philosophy of Language. Cambridge University Press, 1969. J. R. Searle. Expression and Meaning. Cambridge University Press, 1979. J. R. Searle. Intentionality: An Essay in the Philosophy of Mind. 
Cambridge University Press, 1983. K. G. Shaver. An Introduction to Attribution Processes. Winthrop Publishers, 1975. K. G. Shaver. The Attribution Theory of Blame: Causality, Responsibility and Blameworthiness. Springer-Verlag, 1985. P. Slovic, S. Lichtenstein and B. Fischhoff. Decision Making. In: R. Atkinson, R. J. Herrnstein, G. Lindzey and R. D. Luce (Eds.). Stevens’ Handbook of Experimental Psychology (Second Edition): Volume 2, Learning and Cognition, pp. 673-738. John Wiley & Sons, 1988. W. Swartout, J. Gratch, R. Hill, E. Hovy, S. Marsella, J. Rickel and D. Traum. Toward Virtual Humans. AI Magazine, 27(2):96-108, 2006. M. Tambe and P. S. Rosenbloom. RESC: An Approach for Real-time, Dynamic Agent Tracking. Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence, 1995. M. Tambe and P. S. Rosenbloom. Event Tracking in a Dynamic Multi-Agent Environment. Computational Intelligence, 12(3):499-521, 1996. D. Traum. A Computational Theory of Grounding in Natural Language Conversation. Ph.D. Thesis, University of Rochester, 1994. D. Traum, J. Rickel, J. Gratch and S. Marsella. Negotiation over Tasks in Hybrid Human- Agent Teams for Simulation-Based Training. Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems, 2003. A. Tversky and D. Kahneman. Causal Schemata in Judgments under Uncertainty. In: M. Fishbein (Ed.). Progress in Social Psychology, pp. 49-72. Lawrence Erlbaum Associates, 1980. 116 B. Weiner, D. Russell and D. Lerman. The Cognition-Emotion Process in Achievement- Related Contexts. Journal of Personality and Social Psychology, 37(7):1211-1220, 1979. B. Weiner, S. Graham and C. Chandler. Pity, Anger, and Guilt: An Attributional Analysis. Personality and Social Psychology Bulletin, 8(2):226-232, 1982. B. Weiner. Some Methodological Pitfalls in Attributional Research. Journal of Educational Psychology, 75(4):530-543, 1983. B. Weiner. An Attributional Theory of Achievement Motivation and Emotion. Psychological Review, 92(4):548-573, 1985. B. Weiner. An Attributional Theory of Motivation and Emotion. Springer-Verlag, 1986. B. Weiner. Judgments of Responsibility: A Foundation for a Theory of Social Conduct. The Guilford Press, 1995. B. Weiner. Responsibility for Social Transgressions: An Attributional Analysis. In: B. F. Malle, L. J. Moses and D. A. Baldwin (Eds.). Intentions and Intentionality: Foundations of Social Cognition, pp. 331-344. The MIT Press, 2001. B. Weiner. Social Motivation, Justice and the Moral Emotions: An Attributional Approach. Lawrence Erlbaum Associates, 2006. P. A. White. Ambiguity in the Internal/External Distinction in Causal Attribution. Journal of Experimental Social Psychology, 27:259-270, 1991. B. Williams. Making Sense of Humanity and Other Philosophical Papers. Cambridge University Press, 1995. G. L. Williams. Criminal Law: the General Part. Stevens & Sons, 1953. Y. Wilks and J. Bien. Beliefs, Points of View and Multiple Environments. Cognitive Science, 7(2):95-119, 1983. Y. Wilks and A. Ballim. Multiple Agents and the Heuristic Ascription of Belief. Proceedings of the Tenth International Joint Conference on Artificial Intelligence, 1987. L. Wos and G. Pieper. A Fascinating Country in the World of Computing: Your Guide to Automated Reasoning. World Scientific Press, 2000. M. J. Zimmerman. An Essay on Moral Responsibility. Rowman & Littlefield, 1988. 117 Appendices Appendix A. Predicates and Functions Predicates P1. cause(x, e, t): agent x physically causes effect e at time t. 
P2. assist-cause(x, y, e, t): agent x assists agent y by causing effects relevant to achieving e at time t. P3. know(x, p, t): agent x knows the proposition p at time t. P4. intend(x, p, t): agent x intends the proposition p at time t. P5. coerce(x, y, p, t): agent x coerces agent y the proposition p at time t. P6. want(x, p, t): agent x wants the proposition p at time t. P7. obligation(x, p, y, t): agent x has the obligation of proposition p created by agent y at time t. P8. primitive(A): A is a primitive action. P9. and-node(A): action A is a non-decision node in the plan structure. P10. or-node(A): action A is a decision node in the plan structure. P11. alternative(A, B): actions A and B are alternatives of performing a higher-level action. P12. do(x, A): agent x performs action A. P13. achieve(x, e): agent x achieves effect e. P14. bring-about(A, e): action A brings about effect e. P15. by(A, e): by acting A to achieve effect e. 118 P16. execute(x, A, t): agent x executes action A at time t. P17. enable(x, E, t): agent x makes effects in effect set E true at time t. P18. can-execute(x, A, t): agent x is capable of executing action A at time t. P19. can-enable(x, e, t): agent x is capable of making effect e true at time t. P20. occur(e, t): effect e occurs at time t. P21. superior(x, y): agent x is a superior of agent y. P22. true(e, t): effect e is true at time t. Functions F1. precondition(A): precondition set of action A. F2. effect(A): effect set of action A. F3. subaction(A): subaction set of abstract action A. F4. choice(A): choice set of performing abstract action A. F5. conditional-effect(A): conditional effect set of action A. F6. antecedent(e): antecedent set of conditional effect e. F7. consequent(e): consequent of conditional effect e. F8. definite-effect(A): definite effect set of action A. F9. indefinite-effect(A): indefinite effect set of action A. F10. relevant-action(e, AT): relevant action set to achieve e according to action theory AT and observations. F11. relevant-effect(e, AT): relevant effect set to achieve e according to action theory AT and observations. F12. side-effect(e, AT): side effect set to achieve e according to action theory AT and observations. 119 F13. performer(A): performing agent(s) of action A. F14. authority(A): authorizing agent(s) of action A. F15. primary-responsible(e): primary responsible agent(s) for effect e. F16. secondary-responsible(e): secondary responsible agent(s) for effect e. F17. P effect (e | A): probability of the occurrence of its effect e given action A is successfully executed. F18. P conditional (e’ | antecedent(e), e): probability of the occurrence of its consequent e’ given conditional effect e and its antecedents are true. F19. P execution (A | precondition(A)): probability of successful execution of action A given its preconditions are true. F20. utility(e): utility value of effect e. 120 Appendix B. Inference Rules For simplification, all universal quantifies are omitted. Variables x, y and z are agents. Let s and h be a speaker and a hearer, p and q be propositions, and t, t1, …, t5 be time stamps. Let A, B and C be actions, and g be a goal state. Variable e can be an action precondition, an effect, an antecedent or a consequent of a conditional effect. All the rules are from a perceiving agent’s perspective, i.e., actions (including speech acts) and effects are those observed by the perceiver; epistemic states such as knowledge, intention, desire, obligation and coercion are those believed by the perceiver. 
General plan knowledge is supposed known to agents. Dialogue Inference Rules D1 [inform]: inform(s, h, p, t1) ∧ t1<t2<t3 ∧ etc 1 (s, p, t2) ⇒ know(s, p, t3) D2 [inform-grounded]: inform(s, h, p, t1) ∧ t1<t2<t3 ∧ etc 2 (h, p, t2) ⇒ know(h, p, t3) D3 [request]: request(s, h, p, t1) ∧ t1<t2<t3 ∧ etc 3 (s, p, t2) ⇒ want(s, p, t3) D4 [superior-request]: request(s, h, p, t1) ∧ superior(s, h) ∧ t1<t2<t3 ∧ etc 4 (s, h, p, t2) ⇒ obligation(h, p, s, t3) D5 [order]: order(s, h, p, t1) ∧ t1<t2<t3 ∧ etc 5 (s, p, t2) ⇒ intend(s, p, t3) D6 [order]: order(s, h, p, t1) ∧ t1<t2<t3 ∧ etc 6 (s, h, p, t2) ⇒ obligation(h, p, s, t3) 121 D7 [accept]: ¬obligation(h, p, s, t1) ∧ accept(h, p, t2) ∧ t1<t2<t3<t4 ∧ etc 7 (h, p, t3) ⇒ intend(h, p, t4) D8 [want-accept]: want(h, p, t1) ∧ accept(h, p, t2) ∧ t1<t2<t3<t4 ∧ etc 8 (h, p, t3) ⇒ intend(h, p, t4) D9 [accept-obligation]: ¬(∃t1)(t1<t3 ∧ want(h, p, t1)) ∧ obligation(h, p, s, t2) ∧ accept(h, p, t3) ∧ t2<t3<t4<t5 ∧ etc 9 (s, h, p, t4) ⇒ coerce(s, h, p, t5) D10 [unwilling-accept-obligation]: ¬intend(h, p, t1) ∧ obligation(h, p, s, t2) ∧ accept(h, p, t3) ∧ t1<t3 ∧ t2<t3<t4<t5 ∧ etc 10 (s, h, p, t4) ⇒ coerce(s, h, p, t5) D11 [reject]: reject(h, p, t1) ∧ t1<t2<t3 ∧ etc 11 (h, p, t2) ⇒ ¬intend(h, p, t3) D12 [counter-propose]: counter-propose(h, p, q, s, t1) ∧ do’(p, h, A) ∧ do’(q, h, B) ∧ t1<t2<t3 ∧ etc 12 (h, A, B, t2) ⇒ ∃a(know(h, a, t3) ∧ alternative’(a, A, B)) D13 [counter-propose-grounded]: counter-propose(h, p, q, s, t1) ∧ do’(p, h, A) ∧ do’(q, h, B) ∧ t1<t2<t3 ∧ etc 13 (s, A, B, t2) ⇒ ∃a(know(s, a, t3) ∧ alternative’(a, A, B)) D14 [counter-propose]: counter-propose(h, p, q, s, t1) ∧ t1<t2<t3 ∧ etc 14 (h, p, t2) ⇒ ¬intend(h, p, t3) D15 [counter-propose]: counter-propose(h, p, q, s, t1) ∧ t1<t2<t3 ∧ etc 15 (h, q, t2) ⇒ want(h, q, t3) 122 D16 [know-alternative-request]: know(s, a, t1) ∧ alternative’(a, A, B) ∧ request(s, h, p, t2) ∧ do’(p, z, A) ∧ do’(q, z, B) ∧ t1<t2<t3<t4 ∧ etc 16 (s, q, t3) ⇒ ¬intend(s, q, t4) D17 [know-alternative-order]: know(s, a, t1) ∧ alternative’(a, A, B) ∧ order(s, h, p, t2) ∧ do’(p, h, A) ∧ do’(q, h, B) ∧ t1<t2<t3<t4 ∧ etc 17 (s, q, t3) ⇒ ¬intend(s, q, t4) Causal Inference Rules C1 [cause-action-effect]: execute(x, A, t1) ∧ e∈effect(A) ∧ occur(e, t2) ∧ t1<t2<t3<t4 ∧ etc 18 (x, e, t3) ⇒ cause(x, e, t4) C2 [cause-relevant-effect]: execute(x, B, t1) ∧ B∈relevant-action(e, AT) ∧ e∈effect(A) ∧ A≠B ∧ cause(y, e, t2) ∧ t1<t2<t3<t4 ∧ etc 19 (x, y, e, t3) ⇒ assist-cause(x, y, e, t4) C3 [intend-action]: intend(x, p, t1) ∧ do’(p, z, A) ∧ ¬(∃y)(coerce(y, x, A, t1)) ∧ t1<t2<t3 ∧ etc 20 (x, A, t2) ⇒ ∃e(e∈effect(A) ∧ intend(x, e, t3)) C4 [intend-one-alternative]: intend(x, p, t1) ∧ do’(p, z, A) ∧ ¬intend(x, q, t1) ∧ do’(q, z, B) ∧ ¬(∃y)(coerce(y, x, A, t1)) ∧ alternative(A, B) ∧ effect(A)⊂effect(B) ∧ t1<t2<t3 ∧ etc 21 (x, A, B, t2) ⇒ ∃e(e∉effect(A) ∧ e∈effect(B) ∧ ¬intend(x, e, t3)) 123 C5 [intend-one-alternative]: intend(x, p, t1) ∧ do’(p, z, A) ∧ ¬intend(x, q, t2) ∧ do’(q, z, B) ∧ ¬(∃y)(coerce(y, x, A, t1)) ∧ alternative(A, B) ∧ effect(B)⊂effect(A) ∧ t1<t3 ∧ t2<t3<t4 ∧ etc 22 (x, A, B, t3) ⇒ ∃e(e∈effect(A) ∧ e∉effect(B) ∧ intend(x, e, t4)) C6 [intend-plan]: intend(x, b, t1) ∧ by’(b, plan, goal) ∧ A∈relevant-action(goal, plan) ∧ t1<t2<t3 ∧ etc 23 (x, A, t2) ⇒ intend(x, A, t3) C7 [intend-plan]: intend(x, b, t1) ∧ by’(b, plan, goal) ∧ e∈relevant-effect(goal, plan) ∧ t1<t2<t3 ∧ etc 24 (x, e, t2) ⇒ intend(x, e, t3) C8 [intend-plan]: intend(x, b, t1) ∧ by’(b, plan, goal) ∧ e∈side-effect(goal, plan) ∧ t1<t2<t3 ∧ etc 25 (x, e, t2) ⇒ ¬intend(x, e, t3) C9 
[intend-foreknowledge-relation]: intend(x, b, t1) ∧ by’(b, A, e) ∧ t1<t2<t3 ∧ etc 26 (x, A, e, t2) ⇒ ∃ba(know(x, ba, t3) ∧ bring-about’(ba, A, e)) C10 [foreknowledge-performer]: e∈effect(A) ∧ t1<t2 ∧ etc 27 (performer(A), A, e, t1) ⇒ ∃ba(know(performer(A), ba, t2) ∧ bring-about’(ba, A, e)) C11 [foreknowledge-authority]: e∈effect(A) ∧ t1<t2 ∧ etc 28 (authority(A), A, e, t1) ⇒ ∃ba(know(authority(A), ba, t2) ∧ bring-about’(ba, A, e)) 124 C12 [coerce-primitive]: coerce(y, x, p, t1) ∧ do’(p, x, A) ∧ primitive(A) ∧ e∈effect(A) ∧ t1<t2<t3 ∧ etc 29 (x, y, e, t2) ⇒ coerce(y, x, e, t3) C13 [coerce-non-decision-node]: coerce(y, x, p, t1) ∧ do’(p, x, A) ∧ and-node(A) ∧ B∈subaction(A) ∧ t1<t2<t3 ∧ etc 30 (x, y, B, t2) ⇒ coerce(y, x, B, t3) C14 [coerce-non-decision-node]: coerce(y, x, p, t1) ∧ do’(p, x, A) ∧ and-node(A) ∧ e∈effect(A) ∧ t1<t2<t3 ∧ etc 31 (x, y, e, t2) ⇒ coerce(y, x, e, t3) C15 [coerce-decision-node]: coerce(y, x, p, t1) ∧ do’(p, x, A) ∧ or-node(A) ∧ B∈choice(A) ∧ t1<t2<t3 ∧ etc 32 (x, y, B, t2) ⇒ ¬coerce(y, x, B, t3) C16 [coerce-decision-node]: coerce(y, x, p, t1) ∧ do’(p, x, A) ∧ or-node(A) ∧ e∈definite-effect(A) ∧ t1<t2<t3 ∧ etc 33 (x, y, e, t2) ⇒ coerce(y, x, e, t3) C17 [coerce-decision-node]: coerce(y, x, p, t1) ∧ do’(p, x, A) ∧ or-node(A) ∧ e∈indefinite-effect(A) ∧ t1<t2<t3 ∧ etc 34 (x, y, e, t2) ⇒ ¬coerce(y, x, e, t3) C18 [coerce-decision-node-initial-one-alternative-available]: A∈choice(C) ∧ true(precondition(A), t1) ∧ (B∈choice(C) ∧ B≠A ⇒ ∃e(e∈precondition(B) ∧ ¬true(e, t1) ∧ ¬can-enable(x, e, t1))) ∧ coerce(y, x, p, t2) ∧ do’(p, x, C) ∧ t1<t2<t3<t4 ∧ etc 35 (x, y, A, t3) ⇒ coerce(y, x, A, t4) 125 C19 [coerce-decision-node-other-enable-one-alternative]: coerce(y, x, p, t1) ∧ do’(p, x, C) ∧ A∈choice(C) ∧ enable(z, precondition(A), t2) ∧ x∉z ∧ (B∈choice(C) ∧ B≠A ⇒ ∃e(e∈precondition(B) ∧ ¬true(e, t2) ∧ ¬can-enable(x, e, t2))) ∧ (execute(x, A, t3) ⇒ t2<t3) ∧ t1<t2<t4<t5 ∧ etc 36 (x, y, z, A, t4) ⇒ coerce(y∪z, x, A, t5) C20 [coerce-decision-node-self-enable-one-alternative]: coerce(y, x, p, t1) ∧ do’(p, x, C) ∧ A∈choice(C) ∧ enable(z, precondition(A), t2) ∧ x∈z ∧ (B∈choice(C) ∧ B≠A ⇒ ∃e(e∈precondition(B) ∧ ¬true(e, t2) ∧ ¬can-enable(x, e, t2))) ∧ (execute(x, A, t3) ⇒ t2<t3) ∧ t1<t2<t4<t5 ∧ etc 37 (x, y, A, t4) ⇒ coerce(y, x, A, t5) C21 [coerce-decision-node-disable-other-alternative]: coerce(y, x, p, t1) ∧ do’(p, x, C) ∧ A∈choice(C) ∧ true(precondition(A), t2) ∧ (B∈choice(C) ∧ B≠A ⇒ ∃e(e∈precondition(B) ∧ ¬true(e, t2) ∧ ¬can-enable(x, e, t2))) ∧ ∃B(B∈choice(C) ∧ B≠A ∧ enable(z, ¬precondition(B), t2)) ∧ x∉z ∧ (execute(x, A, t3) ⇒ t2<t3) ∧ t1<t2<t4<t5 ∧ etc 38 (x, y, z, A, t4) ⇒ coerce(y∪z, x, A, t5) C22 [coerce-conditional-effect-initial-antecedent-true]: e∈conditional-effect(A) ∧ true(antecedent(e), t1) ∧ coerce(y, x, e, t2) ∧ t1<t2<t3<t4 ∧ etc 39 (x, y, consequent(e), t3) ⇒ coerce(y, x, consequent(e), t4) C23 [coerce-conditional-effect-initial-antecedent-false]: e∈conditional-effect(A) ∧ ¬true(antecedent(e), t1) ∧ coerce(y, x, e, t2) ∧ t1<t2<t3<t4 ∧ etc 40 (x, y, consequent(e), t3) ⇒ ¬coerce(y, x, consequent(e), t4) 126 C24 [coerce-conditional-effect-other-enable-antecedent]: coerce(y, x, e, t1) ∧ e∈conditional-effect(A) ∧ enable(z, antecedent(e), t2) ∧ x∉z ∧ ∃c(c∈antecedent(e) ∧ ¬can-enable(x, ¬c, t2)) ∧ (execute(x, A, t3) ⇒ t2<t3) ∧ t1<t2<t4<t5 ∧ etc 41 (x, y, z, consequent(e), t4) ⇒ coerce(y∪z, x, consequent(e), t5) C25 [coerce-conditional-effect-self-enable-antecedent]: coerce(y, x, e, t1) ∧ e∈conditional-effect(A) ∧ enable(z, antecedent(e), t2) ∧ x∈z ∧ (execute(x, A, t3) ⇒ 
t2<t3) ∧ t1<t2<t4<t5 ∧ etc 42 (x, y, consequent(e), t4) ⇒ ¬coerce(y, x, consequent(e), t5) C26 [coerce-intend-relation]: ∃y(coerce(y, x, p, t1)) ∧ do’(p, x, A) ∧ t1<t2<t3 ∧ etc 43 (x, A, t2) ⇒ intend(x, A, t3) 127 Appendix C. Representation of Action Execution In the absence of external coercion, an executed action is either intentionally performed by the actor or due to negligence. Furthermore, action execution can succeed or fail. An intentional action, if successfully executed, will achieve the intentional effect and possibly some unintentional side effects. Otherwise, if the action execution is unsuccessful, it ends up with the failed attempt. Our approach provides the formalism to represent and detect these situations. • Intentional action intend(x, p, t1) ∧ do’(p, x, A) ∧ t1<t3 ∧ ¬(∃t2)(t1<t2<t3 ∧ ¬intend(x, p, t2)) ∧ execute(x, A, t3) • Negligence ¬intend(x, p, t1) ∧ do’(p, x, A) ∧ t1<t3 ∧ ¬(∃t2)(t1<t2<t3 ∧ intend(x, p, t2)) ∧ execute(x, A, t3) • Intentional effect intend(x, b, t1) ∧ by(b, A, e) ∧ ¬(∃t2)(t1<t2<t3 ∧ ¬intend(x, b, t2))) ∧ t1<t3<t4 ∧ execute(x, A, t3) ∧ occur(e, t4) • Side effect e∈effect(A) ∧ ¬intend(x, b, t1) ∧ by(b, A, e) ∧ ¬(∃t2)(t1<t2<t3 ∧ intend(x, b, t2)) ∧ t1<t3<t4 ∧ execute(x, A, t3) ∧ occur(e, t4) • Failed attempt intend(x, b, t1) ∧ by(b, A, e) ∧ ¬(∃t2)(t1<t2<t3 ∧ ¬intend(x, b, t2)) ∧ t1<t3<t4 ∧ execute(x, A, t3) ∧ ¬occur(e, t4) 128 Enabling Action Effects Agent x enables an effect set E at time t (i.e., enable(x, E, t)), if: • Agent x causes at least one effect in effect set E true at time t. • For every effect e in effect set E, if effect e is false before time t, then agent x cause effect e true at time t. (e∈E ∧ ¬true(e, t1) ⇒ cause(x, e, t)) ∧ ∃e(e∈E ∧ cause(x, e, t)) ∧ t1<t Agent x can enable an effect e at time t (i.e., can-enable(x, e, t)), if there exists a primitive action A so that • The preconditions of A are true at time t, or agent x can enable those preconditions of A that are false at time t. • At time t, agent x can execute action A that has e as its effect. ∃A((c∈precondition(A) ⇒ (true(c, t) ∨ can-enable(x, c, t))) ∧ e∈effect(A) ∧ can- execute(x, A, t)) 129 Appendix D. Computing Effect Set, Definite and Indefinite Effects Let A be an action. If A is an abstract action and has only one decomposition, let a i be a subaction of A. If A is an abstract action and has multiple decompositions, let a i be a choice of A. Effect Set The effect set of an action A is abbreviated to E(A). If A is a primitive action, then E(A) consists of those action effects of A. Otherwise, if A is an abstract action, E(A) is the aggregation of its definite effects and indefinite effects (defined below). Definite Effect Set The definite effect set of A is abbreviated to DE(A). It is composed of those action effects, which occur in each way of decomposing A into primitive actions. DE(A) is defined recursively as follows: • If A is a primitive action, DE(A) = E(A). • If A is an abstract action and has only one decomposition, ) ( ) ( ) ( i A subaction a a DE A DE i ∪ ∈ = • If A is an abstract action and has multiple decompositions, ) ( ) ( ) ( i A choice a a DE A DE i ∩ ∈ = Indefinite Effect Set The indefinite effect set of A is abbreviated to IE(A). It is composed of those action effects that only occur in some (but not all) ways of decomposing A into primitive actions. IE(A) is defined recursively as follows: • If A is a primitive action, IE(A) equals to ∅. 
• If A is an abstract action and has only one decomposition,
IE(A) = ∪_{a_i∈subaction(A)} IE(a_i)
• If A is an abstract action and has multiple decompositions,
IE(A) = ∪_{a_i∈choice(A)} (DE(a_i) ∪ IE(a_i)) − ∩_{a_i∈choice(A)} DE(a_i)

Note that for the purposes of this work, elements of the effect sets that have zero utility value are often ignored, as they carry no positive or negative significance for the agent(s) being evaluated. Under this simplification, effect sets can be viewed as consequence sets.

Appendix E. Definitions of Relevant Actions and Effects

Given an action theory, an executed action set and a specific outcome p, the actions relevant to achieving p are the following:
• The action A that causes p is relevant.
• The actions that enable a precondition of a relevant action to achieve p are relevant.
• If p is enabled by the consequent of a conditional effect of A, the actions that establish the antecedent of that conditional effect are relevant.
• If a precondition of a relevant action is enabled by the consequent of a conditional effect of an action, the actions that establish the antecedent of that conditional effect are relevant.

The preconditions of these relevant actions comprise the relevant effects for achieving p. Other effects of relevant actions are called side effects.

Plan Context

If the action theory is confined to the actions, preconditions and effects in a specific plan, the relevant actions, relevant effects and side effects for achieving the goal (or goals) of the plan can be derived by the same computation as above.

Appendix F. Computing Expected Utilities of Actions and Plans

Probabilities of States

Let E be the evidence. Observations of actions and effects change the probabilities of states. If an action effect x is observed, the probability of x given E is 1.0. If action A is observed (executing), the probability of each precondition of A is 1.0, and the probability of each effect of A is the product of its execution probability and its effect probability. If A has conditional effects, the probability of a consequent of a conditional effect of A is the product of its execution probability, its conditional probability, and the probabilities of each antecedent of the conditional effect.

IF x∈precondition(A), P(x | E) = 1.0
IF x∈effect(A), P(x | E) = P_execution(A | precondition(A)) × P_effect(x | A)
IF x∈consequent(e) ∧ e∈conditional_effect(A),
P(x | E) = P_execution(A | precondition(A)) × P_conditional(x | antecedent(e), e) × ∏_{e′∈antecedent(e)} P(e′ | E)

Probability of Action Execution

If an action A is observed executed, the probability of successful execution of A given E is 1.0, that is, P(A | E) = 1.0. In this case, the computation above can be simplified:

IF x∈precondition(A), P(x | E) = 1.0
IF x∈effect(A), P(x | E) = P_effect(x | A)
IF x∈consequent(e) ∧ e∈conditional_effect(A),
P(x | E) = P_conditional(x | antecedent(e), e) × ∏_{e′∈antecedent(e)} P(e′ | E)

If A is observed executing, P(A | E) equals its execution probability. Otherwise, the probability of successful execution of A given E is computed by multiplying the execution probability of A by the probabilities of each action precondition.
P(A | E) = P_execution(A | precondition(A)) × ∏_{e∈precondition(A)} P(e | E)

So changes in state probabilities affect the probability calculation of action preconditions, and the probabilities of action execution change accordingly.

Outcome Probability and Expected Utility of Actions

The probability changes of action execution impact the calculation of outcome probabilities and expected utilities of actions and plans. Let O_A be the outcome set of action A, and let outcome o_i∈O_A. The probability of o_i given E is computed by multiplying the probability of A and the effect probability of o_i.

P_action(o_i | E) = P(A | E) × P_effect(o_i | A)

If o_i is the consequent of a conditional effect e of A, the formula above should also include the probabilities of the antecedents of the conditional effect.

P_action(o_i | E) = P(A | E) × P_conditional(o_i | antecedent(e), e) × ∏_{e′∈antecedent(e)} P(e′ | E)

The expected utility of A given E is computed using the utility of each action outcome and the probability with which each outcome occurs in A.

EU(A | E) = ∑_{o_i∈O_A} (P_action(o_i | E) × Utility(o_i))

Outcome Probability and Expected Plan Utility

Let O_P be the outcome set of plan P, and let outcome o_j∈O_P. Let {A_1, …, A_k} be the action set in P leading to o_j, where o_j is an action effect of A_k. The probability of o_j given E is computed by multiplying the probabilities of each action leading to o_j and the effect probability of o_j (note that P(A_i | E) is computed according to the partial order of A_i in P).

P_plan(o_j | E) = (∏_{i=1,…,k} P(A_i | E)) × P_effect(o_j | A_k)

If o_j is the consequent of a conditional effect e of A_k, the formula above should also include the probabilities of each antecedent of the conditional effect.

P_plan(o_j | E) = (∏_{i=1,…,k} P(A_i | E)) × (∏_{e′∈antecedent(e)} P(e′ | E)) × P_conditional(o_j | antecedent(e), e)

The expected utility of P given E is computed using the utility of each plan outcome and the probability with which each outcome occurs in P.

EU(P | E) = ∑_{o_j∈O_P} (P_plan(o_j | E) × Utility(o_j))

Appendix G. Model Predictions of Company Program Scenarios

Note. Bold answers are the predictions of the model. Bold evidence is chosen by the model. Grey evidence indicates optional evidence, typically in the cases of grounding (i.e., acknowledgement), referencing, or the firing of alternative rules.

Scenario 1
E1 E2 E3 E4 E5 E6
The vice president of Beta Corporation goes to the chairman of the board and requests, “Can we start a new program?” The vice president continues, “The new program will help us increase profits, and according to our investigation report, it has no harm to the environment.” The chairman answers, “Very well.” The vice president executes the new program. However, the environment is harmed by the new program.

Questions:

1. Does the vice president want to start the new program?
Your answer: Yes No
Your confidence: 1 2 3 4 5 6 Low High
Based on which information (circle all that apply)? E1 E2 E3 E4 E5 E6

2. Does the chairman intend to start the new program?
Your answer: Yes No
Your confidence: 1 2 3 4 5 6 Low High
Based on which information (circle all that apply)? E1 E2 E3 E4 E5 E6

3. Is it the chairman’s intention to increase profits?
Your answer: Yes No
Your confidence: 1 2 3 4 5 6 Low High
Based on which information (circle all that apply)? E1 E2 E3 E4 E5 E6

4.
Does the vice president know that the new program will harm the environment? Your answer: Yes No Your confidence: 1 2 3 4 5 6 Low High Based on which information (circle all that apply)? E1 E2 E3 E4 E5 E6 136 5. Is it the vice president’s intention to harm the environment by starting the new program? Your answer: Yes No Your confidence: 1 2 3 4 5 6 Low High Based on which information (circle all that apply)? E1 E2 E3 E4 E5 E6 6. How much would you blame the individuals for harming the environment? Blame the chairman: 1 2 3 4 5 6 Blame the vice president: 1 2 3 4 5 6 Little Lots Scenario 2 E1 E2 E3 E4 E5 E6 The chairman of Beta Corporation is discussing a new program with the vice president of the corporation. The vice president says, “The new program will help us increase profits, but according to our investigation report, it will also harm the environment.” The chairman answers, “I only want to make as much profit as I can. Start the new program!” The vice president says, “Ok,” and executes the new program. The environment is harmed by the new program. Questions: 1. Does the chairman know that the new program will harm the environment? Your answer: Yes No Your confidence: 1 2 3 4 5 6 Low High Based on which information (circle all that apply)? E1 E2 E3 E4 E5 E6 2. Does the chairman intend to start the new program? Your answer: Yes No Your confidence: 1 2 3 4 5 6 Low High Based on which information (circle all that apply)? E1 E2 E3 E4 E5 E6 3. Is it the chairman’s intention to increase profits? Your answer: Yes No Your confidence: 1 2 3 4 5 6 Low High Based on which information (circle all that apply)? E1 E2 E3 E4 E5 E6 4. Is it the chairman’s intention to harm the environment? Your answer: Yes No Your confidence: 1 2 3 4 5 6 Low High 137 Based on which information (circle all that apply)? E1 E2 E3 E4 E5 E6 5. Is the vice president coerced to start the new program (i.e. by the obligation of obeying the chairman)? Your answer: Yes No Your confidence: 1 2 3 4 5 6 Low High Based on which information (circle all that apply)? E1 E2 E3 E4 E5 E6 6. How much would you blame the individuals for harming the environment? Blame the chairman: 1 2 3 4 5 6 Blame the vice president: 1 2 3 4 5 6 Little Lots Scenario 3 E1 E2 E3 E4 E5 E6 E7 The chairman of Beta Corporation is discussing a new program with the vice president of the corporation. The vice president says, “The new program will help us increase profits, but according to our investigation report, it will also harm the environment. Instead, we should run an alternative program, that will gain us fewer profits than this new program, but it has no harm to the environment.” The chairman answers, “I only want to make as much profit as I can. Start the new program!” The vice president says, “Ok,” and executes the new program. The environment is harmed by the new program. Questions: 1. Does the chairman know the alternative of the new program? Your answer: Yes No Your confidence: 1 2 3 4 5 6 Low High Based on which information (circle all that apply)? E1 E2 E3 E4 E5 E6 E7 2. Which program is the vice president willing to start? Your answer: New program Alternative program Your confidence: 1 2 3 4 5 6 Low High Based on which information (circle all that apply)? E1 E2 E3 E4 E5 E6 E7 3. Is the vice president coerced to start the new program? Your answer: Yes No Your confidence: 1 2 3 4 5 6 Low High Based on which information (circle all that apply)? E1 E2 E3 E4 E5 E6 E7 138 4. Is the vice president coerced to harm the environment? 
Your answer: Yes No Your confidence: 1 2 3 4 5 6 Low High Based on which information (circle all that apply)? E1 E2 E3 E4 E5 E6 E7 5. How much would you blame the individuals for harming the environment? Blame the chairman: 1 2 3 4 5 6 Blame the vice president: 1 2 3 4 5 6 Little Lots Scenario 4 E1 E2 E3 E4 E5 E6 The chairman of Beta Corporation is discussing a new program with the vice president of the corporation. The vice president says, “There are two ways to run this new program, a simple way and a complex way. Both will equally help us increase profits, but according to our investigation report, the simple way will also harm the environment.” The chairman answers, “I only want to make as much profit as I can. Start the new program either way!” The vice president says, “Ok,” and chooses the simple way to execute the new program. The environment is harmed. Questions: 1. Is the vice president coerced by the chairman to increase profits? Your answer: Yes No Your confidence: 1 2 3 4 5 6 Low High Based on which information (circle all that apply)? E1 E2 E3 E4 E5 E6 2. Is the vice president coerced by the chairman to choose the simple way? Your answer: Yes No Your confidence: 1 2 3 4 5 6 Low High Based on which information (circle all that apply)? E1 E2 E3 E4 E5 E6 3. Is the vice president coerced by the chairman to harm the environment? Your answer: Yes No Your confidence: 1 2 3 4 5 6 Low High Based on which information (circle all that apply)? E1 E2 E3 E4 E5 E6 4. How much would you blame the individuals for harming the environment? 139 Blame the chairman: 1 2 3 4 5 6 Blame the vice president: 1 2 3 4 5 6 Little Lots 140 Appendix H. Subjects’ Responses to Company Program Scenarios Scenario 1 Scenario 2 Scenario 3 Scenario 4 Q 1 Q 2 Q 3 Q 4 Q 5 Q 1 Q 2 Q 3 Q 4 Q 5 Q 1 Q 2 Q 3 Q 4 Q 1 Q 2 Q 3 #1 Yes No Yes No No Yes Yes Yes Yes Yes Yes Alt Yes Yes No No No #2 Yes Yes Yes No No Yes Yes Yes Yes No Yes Alt Yes Yes Yes No No #3 Yes Yes Yes No No Yes Yes Yes Yes No No Alt Yes No No No No #4 Yes Yes Yes No No Yes Yes Yes No Yes No Alt Yes Yes No No No #5 Yes Yes Yes No No Yes Yes Yes No Yes Yes Alt Yes Yes No No No #6 Yes Yes Yes No No Yes Yes Yes No No Yes New Yes Yes Yes No No #7 Yes Yes Yes No No Yes Yes Yes Yes Yes Yes Alt Yes Yes Yes No No #8 Yes Yes Yes No No Yes Yes Yes Yes Yes Yes Alt Yes Yes Yes No No #9 Yes Yes Yes No No Yes Yes Yes Yes Yes Yes Alt Yes Yes No No No #10 Yes Yes No No No Yes Yes Yes No Yes Yes Alt Yes Yes Yes No No #11 Yes Yes Yes No No Yes Yes Yes No Yes Yes Alt Yes Yes Yes No No #12 Yes Yes Yes No No Yes Yes Yes Yes Yes Yes Alt Yes Yes No No No #13 Yes Yes Yes No No Yes Yes Yes No Yes Yes Alt Yes Yes Yes No No #14 Yes Yes Yes No No Yes Yes Yes No Yes No Alt Yes Yes Yes No No #15 Yes Yes Yes No No Yes Yes Yes No Yes Yes Alt No No Yes Yes Yes Ans Yes Yes Yes No No Yes Yes Yes No Yes Yes Alt Yes Yes Yes No No 141 Scenario 1 Scenario 2 Scenario 3 Scenario 4 Q 1 Q 2 Q 3 Q 4 Q 5 Q 1 Q 2 Q 3 Q 4 Q 5 Q 1 Q 2 Q 3 Q 4 Q 1 Q 2 Q 3 #A Yes No Yes No No Yes Yes Yes No No Yes Alt Yes No Yes No No #B Yes Yes Yes Yes No Yes Yes Yes Yes No Yes Alt Yes Yes Yes No No #C Yes Yes Yes Yes No Yes Yes Yes No No Yes Alt Yes No Yes No Yes #D Yes Yes Yes No No Yes Yes Yes No Yes No Alt Yes Yes No No No #E Yes Yes No No Yes Yes Yes No Yes Yes Alt Yes No Yes No No #F Yes Yes Yes No No Yes Yes Yes Yes Yes Yes Alt Yes Yes No Yes No #G Yes Yes Yes No No Yes Yes Yes No Yes No Alt Yes Yes Yes No Yes #H Yes Yes Yes No No Yes Yes Yes No Yes No Alt Yes No Yes Yes Yes #I Yes Yes Yes No No Yes Yes Yes No Yes 
No Alt Yes Yes Yes No No #J Yes Yes Yes No No Yes Yes Yes No Yes Yes Alt Yes Yes Yes Yes No #K Yes Yes Yes No No Yes Yes Yes No No Yes Alt Yes No Yes No No #L Yes No Yes No No Yes Yes Yes Yes Yes Yes Alt Yes Yes Yes Yes Yes #M Yes Yes Yes No No Yes Yes Yes No Yes Yes New Yes No Yes No No #N Yes Yes Yes No No Yes Yes Yes No No No Alt Yes Yes No No No #O Yes Yes Yes No No Yes Yes Yes No Yes No Alt Yes No Yes No No Ans Yes Yes Yes No No Yes Yes Yes No Yes Yes Alt Yes Yes Yes No No Yes 142 Appendix I. Belief Derivations and Steps of Algorithm Execution Firing Squad Scenarios The symbols sqd, mkn and cmd refer to the squad, the marksman and the commander, respectively. Time stamps ti, i=1, …, 5. t1< … <t5, ti<ti’. The severity of the outcome death is set to high. Scenario 1 Information Encoding: E1 subaction(firing)=shooting E2 antecedent(conditional-effect(shooting))=live-bullets E3 consequent(conditional-effect(shooting))=death E4 effect(firing)=death E5 execute(mkn, shooting, t1) E6 occur(death, t2) Algorithm Execution: Step 2: intend(sqd, p1, t1’) ∧ by’(p1, firing, death) intend(mkn, shooting, t1’) intend(mkn, death, t1’) cause(mkn 1 , death, t2’) Step 3: 3.1 cause(mkn 1 , death, t2’) attempt(mkn j , death, t2’), j=2, …, 10 143 3.2 primary-responsible(death) = mkn 3.3 parent-node = shooting 3.4 authority(shooting) = none; no coercion 3.5 intend(mkn, death, t) 3.6 degree-of-responsibility(mkn) = high Step 4: Primary-responsible agent: mkn Degree of responsibility: high Intensity of blame: highest Scenario 2 Information Encoding: E1 subaction(firing)=shooting E2 authority(firing)=cmd; authority(shooting)=cmd E3 antecedent(conditional-effect(shooting))=live-bullets E4 consequent(conditional-effect(shooting))=death E5 effect(firing)=death E6 order(cmd, sqd, p2, t1) ∧ do’(p2, sqd, firing) E7 execute(mkn, shooting, t2) E8 occur(death, t3) Algorithm Execution: Step 1: intend(cmd, p2, t1’) obligation(sqd, p2, cmd, t1’) 144 coerce(cmd, sqd, p2, t2’) Step 2: intend(cmd, death, t1’) coerce(cmd, mkn, shooting, t2’) coerce(cmd, mkn, death, t2’) cause/attempt(mkn, death, t3’) Step 3: 3.1 cause/attempt(mkn, death, t3’) 3.2 primary-responsible(death) = mkn 3.3 parent-node = shooting 3.4 coerce(cmd, mkn, death, t2’) primary-responsible(death) = cmd 3.5 intend(cmd, death, t1’) 3.6 degree-of-responsibility(cmd) = high Step 4: Primary-responsible agent: cmd Degree of responsibility: high Intensity of blame: highest Scenario 3 Information Encoding: E1 subaction(firing)=shooting E2 authority(firing)=cmd; authority(shooting)=cmd 145 E3 antecedent(conditional-effect(shooting))=live-bullets E4 consequent(conditional-effect(shooting))=death E5 effect(firing)=death E6 order(cmd, sqd, p2, t1) E7 refuse(sqd, p2, t2) E8 order(cmd, sqd, p2, t3) E9 execute(mkn, shooting, t4) E10 occur(death, t5) Algorithm Execution: Step 1: intend(cmd, p2, t1’) obligation(sqd, p2, cmd, t1’) ¬intend(sqd, p2, t2’) coerce(cmd, sqd, p2, t4’) Step 2: intend(cmd, death, t1’) coerce(cmd, mkn, shooting, t4’) coerce(cmd, mkn, death, t4’) cause(mkn 1 , death, t5’) Step 3: 3.1 cause/attempt(mkn, death, t5’) 3.2 primary-responsible(death) = mkn 3.3 parent-node = shooting 146 3.4 coerce(cmd, mkn, death, t4’) primary-responsible(death) = cmd 3.5 intend(cmd, death, t1’) 3.6 degree-of-responsibility(cmd) = high Step 4: Primary-responsible agent: cmd Degree of responsibility: high Intensity of blame: highest Scenario 4 Information Encoding: E1 subaction(firing)=shooting E2 antecedent(conditional-effect(shooting))=live-bullets E3 
consequent(conditional-effect(shooting))=death E4 order(cmd, sqd, p2, t1) E5 enable(mkn 1 , live-bullets, t2) E6 execute(mkn, shooting, t3) E7 occur(death, t4) Algorithm Execution: Step 1: intend(cmd, p2, t1’) obligation(sqd, p2, cmd, t1’) coerce(cmd, sqd, p2, t3’) Step 2: 147 coerce(cmd, mkn, shooting, t3’) ¬coerce(cmd, mkn, death, t3’) Step 3: 3.1 cause(mkn 1 , death, t4’) 3.2 primary-responsible(death) = mkn 1 3.3 parent-node = shooting 3.4 ¬coerce(cmd, mkn, death, t3’) 3.9 degree-of-responsibility(mkn 1 ) = medium Step 4: Primary-responsible agent: mkn 1 Degree of responsibility: medium Intensity of blame: high Company Program Scenarios The symbols chm and vp refer to the chairman and the vice president, respectively. Time stamps ti, i=1, …, 5. t1< … <t5, ti<ti’. The severity of the outcome environmental harm is set to medium. Scenario 1 Information Encoding: E1 request(vp, chm, p1, t1) ∧ do’(p1, vp, new-program) E2 inform(vp, chm, p2, t2) ∧ bring-about’(p2, new-program, profit-increase) E3 inform(vp, chm, p3, t2) ∧ ¬bring-about’(p3, new-program, env-harm) E4 accept(chm, p1, t3) 148 E5 execute(vp, new-program, t4) E6 env-harm∈effect(new-program); occur(env-harm, t5) Question 1 (Rule D3 [request]): request(vp, chm, p1, t1) ⇒ want(vp, p1, t1’) Question 2 (Rule D7 [accept]): accept(chm, p1, t3) ⇒ intend(chm, p1, t3’) Question 3 (Rule C3 [intend-action]): intend(chm, p1, t3’) ∧ do’(p1, vp, new-program) ∧ ¬coerce(vp, chm, new-program, t3’) ⇒ profit-increase∈effect(new-program) ∧ intend(chm, profit-increase, t3’) Question 4 (Rule D1 [inform]): inform(vp, chm, p3, t2) ⇒ know(vp, p3, t2’) ⇒ ¬know(vp, ¬p3, t2’) Question 5 (Rule C9 [intention-foreknowledge-relation]): ¬(know(vp, ¬p3, t2’) ∧ bring-about’(¬p3, new-program, env-harm)) ⇒ ¬(intend(vp, b, t2’) ∧ by’(b, new-program, env-harm)) Question 6 (Algorithm 1): Step 1: want(vp, p1, t1’) know(vp, p2, t2’) 149 know(chm, p2, t2’) know(vp, p3, t2’) know(chm, p3, t2’) intend(chm, p1, t3’) Step 2: ¬intend(vp, b, t2’) intend(chm, profit-increase, t3’) cause(vp, env-harm, t5’) Step 3: 3.1 cause(vp, env-harm, t5’) 3.2 primary-responsible(env-harm) = vp 3.3 parent-node = new-program 3.4 ¬coerce(chm, vp, env-harm, t), t<t4 3.7 ¬intend(vp, env-harm, t2’) 3.8 degree-of-responsibility(vp) = low Step 4: Primary-responsible agent: vp Degree of responsibility/Intensity of blame: low Scenario 2 Information Encoding: E2 inform(vp, chm, p2, t1) E3 inform(vp, chm, p4, t1) ∧ bring-about’(p4, new-program, env-harm) E4 goal(chm, profit-increase); order(chm, vp, p1, t2) 150 E5 accept(vp, p1, t3); execute(vp, new-program, t3) E6 occur(env-harm, t4) Question 1 (Rule D2 [inform-grounded]): inform(vp, chm, p4, t1) ⇒ know(chm, p4, t1’) Question 2 (Rule D5 [order]): order(chm, vp, p1, t2) ⇒ intend(chm, p1, t2’) Question 3 (Rule C7 [intend-plan]): intend(chm, b, t2’) ∧ by’(b, new-program, profit-increase) ∧ profit-increase∈relevant- effect(profit-increase, new-program) ⇒ intend(chm, profit-increase, t2’) Question 4 (Rule C8 [intend-plan]): intend(chm, b, t2’) ∧ by’(b, new-program, profit-increase) ∧ env-harm∈side- effect(profit-increase, new-program) ⇒ ¬intend(chm, env-harm, t2’) Question 5 (Rules D6 [order] & D9 [accept-obligation]): order(chm, vp, p1, t2) ⇒ obligation(vp, p1, chm, t2’) obligation(vp, p1, chm, t2’) ∧ accept(vp, p1, t3) ⇒ coerce(chm, vp, p1, t3’) Question 6 (Algorithm 1): Step 1: 151 know(vp, p2, t1’) know(chm, p2, t1’) know(vp, p4, t1’) know(chm, p4, t1’) intend(chm, p1, t2’) obligation(vp, p1, chm, t2’) coerce(chm, vp, p1, t3’) Step 2: intend(chm, b, t2’) 
intend(chm, profit-increase, t2’) ¬intend(chm, env-harm, t2’) coerce(chm, vp, profit-increase, t3’) coerce(chm, vp, env-harm, t3’) cause(vp, env-harm, t4’) Step 3: 3.1 cause(vp, env-harm, t4’) 3.2 primary-responsible(env-harm) = vp 3.3 parent-node = new-program 3.4 coerce(chm, vp, env-harm, t3’) primary-responsible(env-harm) = chm 3.7 ¬intend(chm, env-harm, t2’) 3.8 degree-of-responsibility(chm) = low Step 4: 152 Primary-responsible agent: chm Degree of responsibility/Intensity of blame: low Scenario 3 Information Encoding: E2 inform(vp, chm, p2, t1) E3 inform(vp, chm, p4, t1) E4 counter-propose(vp, p1, p5, chm, t1) ∧ do’(p5, vp, alternative-program) E5 goal(chm, profit-increase); order(chm, vp, p1, t2) E6 accept(vp, p1, t3); execute(vp, new-program, t3) E7 occur(env-harm, t4) Question 1 (Rule D13 [counter-propose-grounded]): counter-propose(vp, p1, p5, chm, t1) ⇒ counter-propose(vp, new-program, alternative-program, chm, t1) ⇒ know(chm, a, t1’) ∧ alternative’(a, new-program, alternative-program) Question 2 (Rules D14 & D15 [counter-propose]): counter-propose(vp, p1, p5, chm, t1) ⇒ ¬intend(vp, p1, t1’) counter-propose(vp, p1, p5, chm, t1) ⇒ want(vp, p5, t1’) Question 3 (Rules D6 [order] & D10 [unwilling-accept-obligation]): order(chm, vp, p1, t2) ⇒ obligation(vp, p1, chm, t2’) ¬intend(vp, p1, t1’) ∧ obligation(vp, p1, chm, t2’) ∧ accept(vp, p1, t3) 153 ⇒ coerce(chm, vp, p1, t3’) Question 4 (Rule C12 [coerce-primitive]): coerce(chm, vp, p1, t3’) ∧ do’(p1, vp, new-program) ∧ primitive(new-program) ∧ env- harm∈effect(new-program) ⇒ coerce(chm, vp, env-harm, t3’) Question 5 (Algorithm 1): Step 1: know(vp, p2, t1’) know(chm, p2, t1’) know(vp, p4, t1’) know(chm, p4, t1’) know(vp, a, t1’) know(chm, a, t1’) ¬intend(vp, p1, t1’) want(vp, p5, t1’) intend(chm, p1, t2’) obligation(vp, p1, chm, t2’) coerce(chm, vp, p1, t3’) Step 2: intend(chm, b, t2’) intend(chm, profit-increase, t2’) ¬intend(chm, env-harm, t2’) coerce(chm, vp, profit-increase, t3’) 154 coerce(chm, vp, env-harm, t3’) cause(vp, env-harm, t4’) Step 3: 3.1 cause(vp, env-harm, t4’) 3.2 primary-responsible(env-harm) = vp 3.3 parent-node = new-program 3.4 coerce(chm, vp, env-harm, t3’) primary-responsible(env-harm) = chm 3.7 ¬intend(chm, env-harm, t2’) 3.8 degree-of-responsibility(chm) = low Step 4: Primary-responsible agent: chm Degree of responsibility/Intensity of blame: low Scenario 4 Information Encoding: E2 inform(vp, chm, p6, t1) ∧ bring-about’(p6, new-program, simple-way) inform(vp, chm, p7, t1) ∧ bring-about’(p7, new-program, complex-way) E3 inform(vp, chm, p8, t1) ∧ bring-about’(p8, simple-way, profit-increase) inform(vp, chm, p9, t1) ∧ bring-about’(p9, complex-way, profit-increase) inform(vp, chm, p10, t1) ∧ bring-about’(p10, simple-way, env-harm) E4 goal(chm, profit-increase); order(chm, vp, p1, t2) E5 accept(vp, p1, t3); intend(vp, simple-way, t3); ¬intend(vp, complex-way, t3); execute(vp, simple-way, t4) 155 E6 occur(env-harm, t5) Question 1 (Rule C16 [coerce-decision-node]): order(chm, vp, p1, t2) ⇒ obligation(vp, p1, chm, t2’) obligation(vp, p1, chm, t2’) ∧ accept(vp, p1, t3) ⇒ coerce(chm, vp, p1, t3’) coerce(chm, vp, p1, t3’) ∧ do’(p1, vp, new-program) ∧ or-node(new-program) ∧ profit- increase∈definite-effect(new-program) ⇒ coerce(chm, vp, profit-increase, t3’) Question 2 (Rule C15 [coerce-decision-node]): coerce(chm, vp, p1, t3’) ∧ do’(p1, vp, new-program) ∧ or-node(new-program) ∧ simple- way∈choice(new-program) ⇒ ¬coerce(chm, vp, simple-way, t3’) Question 3 (Rule C17 [coerce-decision-node]): coerce(chm, vp, p1, t3’) ∧ 
do’(p1, vp, new-program) ∧ or-node(new-program) ∧ env- harm∈indefinite-effect(new-program) ⇒ ¬coerce(chm, vp, env-harm, t3’) Question 4 (Algorithm 1): Step 1: know(vp, p6, t1’) know(chm, p6, t1’) know(vp, p7, t1’) know(chm, p7, t1’) 156 know(vp, p8, t1’) know(chm, p8, t1’) know(vp, p9, t1’) know(chm, p9, t1’) know(vp, p10, t1’) know(chm, p10, t1’) intend(chm, p1, t2’) obligation(vp, p1, chm, t2’) coerce(chm, vp, p1, t3’) Step 2: intend(chm, b, t2’) intend(chm, profit-increase, t2’) ¬intend(chm, env-harm, t2’) ¬coerce(chm, vp, simple-way, t3’) coerce(chm, vp, profit-increase, t3’) ¬coerce(chm, vp, env-harm, t3’) intend(vp, env-harm, t3’) cause(vp, env-harm, t5’) Step 3: 3.1 cause(vp, env-harm, t5’) 3.2 primary-responsible(env-harm) = vp 3.3 parent-node = new-program 157 3.4 ¬coerce(chm, vp, env-harm, t3’) 3.5 intend(vp, env-harm, t3’) 3.6 degree-of-responsibility(vp) = high Step 4: Primary-responsible agent: vp Degree of responsibility/Intensity of blame: high 158 Appendix J. Evidence Choice of Human Subjects S1/ Q1 E1 E2 E3 E4 E5 E6 S1/ Q1 E1 E2 E3 E4 E5 E6 #1 + #A + #2 + #B + + #3 + + #C + + #4 + #D + + + + #5 + + + #E + #6 + + #F + + #7 + #G + #8 + + + #H + + + #9 + + + + #I + #10 + + #J + #11 + + #K + #12 + + #L + #13 + + + #M + + #14 + + #N + + #15 + + #O + + + S1/ Q2 E1 E2 E3 E4 E5 E6 S1/ Q2 E1 E2 E3 E4 E5 E6 #1 + + #A + #2 + #B + #3 + #C + + #4 + #D + #5 + #E + #6 + #F + + + #7 + #G + #8 + + + #H + #9 + #I + #10 + #J + #11 + #K + #12 + #L + #13 + #M + #14 + #N + #15 + #O + + 159 S1/ Q3 E1 E2 E3 E4 E5 E6 S1/ Q3 E1 E2 E3 E4 E5 E6 #1 + + #A + #2 + #B + + #3 + #C + + #4 + #D + + + #5 + #E + #6 + #F + + + #7 + #G + #8 + + #H + #9 + + #I + + #10 + #J + + #11 + #K + #12 + #L + #13 + + #M + + + #14 + #N + + + #15 + #O + + + S1/ Q4 E1 E2 E3 E4 E5 E6 S1/ Q4 E1 E2 E3 E4 E5 E6 #1 + #A + #2 + #B + + #3 + #C + #4 + #D + #5 + #E + #6 + #F + #7 + #G + #8 + #H + #9 + #I + #10 + #J + #11 + + #K + #12 + + #L + #13 + #M + #14 + #N + + #15 + #O + 160 S1/ Q5 E1 E2 E3 E4 E5 E6 S1/ Q5 E1 E2 E3 E4 E5 E6 #1 + #A + #2 + #B + #3 + #C + #4 + #D + #5 + #E + #6 + #F + #7 + #G + #8 + #H + #9 + + #I + + #10 + #J + + #11 + #K + #12 + + #L + + #13 + #M + #14 + + #N + + #15 + #O + + S2/ Q1 E1 E2 E3 E4 E5 E6 S2/ Q1 E1 E2 E3 E4 E5 E6 #1 + + #A + + #2 + #B + #3 + #C + #4 + + + #D + + #5 + + + #E + #6 + + #F + #7 + #G + #8 + #H + + #9 + + #I + #10 + #J + #11 + + #K + #12 + + #L + #13 + + #M + #14 + #N + + + #15 + + + #O + 161 S2/ Q2 E1 E2 E3 E4 E5 E6 S2/ Q2 E1 E2 E4 E5 E6 #1 + #A + #2 + #B E3 + #3 + + #C #4 + #D + #5 + + #E + #6 + #F + #7 + #G + #8 + #H + + #9 + #I + #10 + #J + #11 + #K + #12 + #L + #13 + + #M + #14 + #N + #15 + #O + S2/ Q3 E1 E2 E3 E4 E5 E6 S2/ Q3 E1 E2 E3 E5 E6 #1 + #A #2 + #B + E4 + + #3 + #C + #4 + #D + #5 + #E + #6 + #F + #7 + #G + #8 + #H + #9 + + #I + #10 + #J + #11 + #K + #12 + #L + #13 + #M + #14 + #N + #15 + #O + 162 S2/ Q4 E1 E2 E3 E4 E5 E6 S2/ Q4 E1 E2 E3 E4 E5 E6 #1 + + #A + #2 + + #B + + #3 + #C + #4 + #D + #5 + + #E + #6 + #F + + #7 + + #G + #8 + + #H + #9 + + #I + #10 + + #J + #11 + + #K + #12 + + #L + #13 + #M + #14 + + #N + + #15 + + + #O + + S2/ Q5 E1 E2 E3 E4 E5 E6 S2/ Q5 E1 E2 E3 E4 E5 E6 #1 + + #A + #2 + #B + + #3 + #C + #4 + + + #D + + #5 + #E + + #6 + #F + #7 + #G + #8 + #H + + #9 + + + #I + #10 + #J + #11 + + #K + #12 + + #L + #13 + + #M + #14 + #N + #15 + + #O + + 163 S3 Q1 E1 E2 E3 E4 E5 E6 E7 S3 Q1 E1 E2 E3 E4 E5 E6 E7 #1 + + + #A + #2 + #B + #3 + #C + + + #4 + #D + + #5 + #E + #6 + #F #7 + #G + #8 + + #H + #9 + #I + 10 + #J + + 11 + #K + 12 + #L + 13 + #M + + + 14 + #N + 15 + 
#O + + S3 Q2 E1 E2 E3 E4 E5 E6 E7 S3 Q2 E1 E2 E3 E4 E5 E6 E7 #1 + #A + #2 + #B + + #3 + #C + + + #4 + + #D + + #5 + + #E + #6 + #F + #7 + #G + #8 + #H + #9 + #I + 10 + #J + + + 11 + + #K + 12 + #L + 13 #M + + 14 + + #N + 15 + #O + 164 S3 Q3 E2 E7 E1 E2 E3 E4 E5 E6 E7 S3 Q3 E1 E3 E4 E5 E6 #1 + + + #A + #2 + + #B + + #3 + + #C + #4 + + + + + #D + + + #5 #E + + + + + + #F + + #G #8 + + #H + #9 + #I + 10 + + + #J + 11 + + + + #K + + 12 + + + #L + 13 + #M + + 14 + #N + 15 + #O + #6 + #7 + S3 Q4 E1 E2 E3 E5 E6 S3 Q4 E1 E2 E4 E5 E7 E4 E7 E3 E6 + + #2 + + #B + + #3 + + #C + + #D + + #5 + + + + #E + #6 + + #F + + #7 + #G + #8 + + + + #H + + #9 + #I + 10 + #J + + + + + + 11 + + #K + 12 + + + #L + 13 + + + + #M + + + 14 + + #N + + 15 + #O + + #1 + #A + + + + #4 165 S4/ Q1 E1 E2 E3 E4 E5 E6 S4/ Q1 E1 E2 E3 E4 E5 E6 #1 + + #A + #2 + #B + #3 + + #C + #4 + #D + #5 + + + + #E + #6 + #F + + #7 + #G + + #8 + #H + #9 + #I + + + #J + #11 + #K + #12 + + #L + #13 + #M + #14 + #N + + #15 + + #O + #10 S4/ Q2 E1 E2 E3 E4 E5 E6 S4/ Q2 E1 E2 E3 + + + #2 + #B + + + #3 + + #C + #4 + #D + #5 + + + + #E + #6 + #F + + + + #G + #8 + #H + #9 + #I + #10 + #J + #11 + #K + #12 + #L + #13 + + #M + #14 + #N + + #15 + #O + + + E4 E5 E6 #1 #A #7 166 S4/ Q3 E1 E2 E3 E4 E5 E6 S4/ Q3 E1 E2 E3 E4 E5 E6 #1 + + #A + + + + #2 + #B + + + #3 + + #C + #4 + + + #D + #5 + + + + + #E + #6 + #F + #7 + + #G + #8 + + #H + + #9 + #I + #10 + #J + #11 + #K + #12 + + #L + #13 + + #M + #14 + #N + #15 + + #O + + + 167
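For readers who prefer executable pseudocode, the sketch below restates the recursive effect-set definitions of Appendix D and the basic expected-utility formula of Appendix F in Python. It is an illustrative rendering only, not code from the thesis implementation: the Action class, its field names (effects, subactions, choices), the expected_utility helper, and the numeric values in the example are hypothetical stand-ins for the thesis's plan representation and utilities.

```python
# Illustrative sketch only (not code from the thesis): recursive definite/
# indefinite effect sets (Appendix D) and a simplified expected-utility
# computation (Appendix F, conditional effects omitted for brevity).
from dataclasses import dataclass, field
from typing import Dict, List, Set


@dataclass
class Action:
    name: str
    effects: Set[str] = field(default_factory=set)             # used when primitive
    subactions: List["Action"] = field(default_factory=list)   # and-node: single decomposition
    choices: List["Action"] = field(default_factory=list)      # or-node: multiple decompositions

    @property
    def is_primitive(self) -> bool:
        return not self.subactions and not self.choices


def definite_effects(a: Action) -> Set[str]:
    """DE(A): effects that occur in every way of decomposing A into primitives."""
    if a.is_primitive:
        return set(a.effects)
    if a.subactions:                     # one decomposition: union over subactions
        return set().union(*(definite_effects(s) for s in a.subactions))
    sets = [definite_effects(c) for c in a.choices]
    return set.intersection(*sets)       # multiple decompositions: intersection over choices


def indefinite_effects(a: Action) -> Set[str]:
    """IE(A): effects that occur in some, but not all, decompositions of A."""
    if a.is_primitive:
        return set()
    if a.subactions:
        return set().union(*(indefinite_effects(s) for s in a.subactions))
    everything = set().union(*(definite_effects(c) | indefinite_effects(c) for c in a.choices))
    return everything - definite_effects(a)


def effect_set(a: Action) -> Set[str]:
    """E(A): definite plus indefinite effects."""
    return definite_effects(a) | indefinite_effects(a)


def expected_utility(p_action: float,
                     p_effect: Dict[str, float],
                     utility: Dict[str, float]) -> float:
    """EU(A|E) = sum over outcomes o of P(A|E) * P_effect(o|A) * Utility(o)."""
    return sum(p_action * p_effect[o] * utility[o] for o in p_effect)


# Example mirroring Appendix G, Scenario 4: an or-node with two ways to run
# the program, only one of which harms the environment.
simple = Action("simple-way", effects={"profit-increase", "env-harm"})
complex_way = Action("complex-way", effects={"profit-increase"})
program = Action("new-program", choices=[simple, complex_way])

assert definite_effects(program) == {"profit-increase"}
assert indefinite_effects(program) == {"env-harm"}

# Hypothetical numbers: execution succeeds with probability 0.9; utilities
# are illustrative stand-ins, not values from the thesis.
eu = expected_utility(0.9, {"profit-increase": 1.0, "env-harm": 0.8},
                      {"profit-increase": 10.0, "env-harm": -6.0})
assert abs(eu - 4.68) < 1e-9             # 0.9*1.0*10 + 0.9*0.8*(-6)
```

The or-node example also mirrors the behavior of rules C16 and C17: a coerced abstract action carries coercion over its definite effects (here, profit-increase) but not over effects that appear only under some choice (here, env-harm).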
Abstract
Intelligent agents are typically situated in a social environment and must reason about social cause and effect. Social causal reasoning is qualitatively different from the physical causal reasoning that underlies most intelligent systems. Modeling the process and inference of social causality can enrich the capabilities of multi-agent and intelligent interactive systems. In this thesis, we first explore the underlying theory and process of how people evaluate social events, and then present a domain-independent computational framework to reason about social cause and responsibility. The computational framework can be generally incorporated into an intelligent system to augment its cognitive and social functionality.