CAUGHT IN THE LOOP: NARRATIVE IN THE AGE OF ARTIFICIAL INTELLIGENCE

by Jay Douglas

A Dissertation Presented to the FACULTY OF THE GRADUATE SCHOOL, UNIVERSITY OF SOUTHERN CALIFORNIA, In Partial Fulfillment of the Requirements for the Degree DOCTOR OF PHILOSOPHY (CRITICAL STUDIES)

May 2005

Copyright 2005 Jay Douglas

UMI Number: DP22288. All rights reserved.

INFORMATION TO ALL USERS: The quality of this reproduction is dependent upon the quality of the copy submitted. In the unlikely event that the author did not send a complete manuscript and there are missing pages, these will be noted. Also, if material had to be removed, a note will indicate the deletion.

Published by ProQuest LLC (2014). Copyright in the Dissertation held by the Author. UMI Dissertation Publishing, UMI DP22288, Microform Edition © ProQuest LLC. All rights reserved. This work is protected against unauthorized copying under Title 17, United States Code. ProQuest LLC, 789 East Eisenhower Parkway, P.O. Box 1346, Ann Arbor, MI 48106-1346.

UNIVERSITY OF SOUTHERN CALIFORNIA, THE GRADUATE SCHOOL, UNIVERSITY PARK, LOS ANGELES, CALIFORNIA 90089-1695

This dissertation, written by Jay Douglas under the direction of his dissertation committee, and approved by all its members, has been presented to and accepted by the Director of Graduate and Professional Programs, in partial fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY. Date: May 13, 2005. Dissertation Committee.

Dedication

to my wife, who always believed in me, my mother, who made it all possible, and my father, who would have been so proud to call me Doctor

Acknowledgments

When the wife of a couple I know was expecting their second child, her husband proudly, or more like gleefully, told me, "We're pregnant." After I dismissed the obvious biological problems with their both being pregnant, I mused over the cultural meaning, that pregnancy was a team effort before, during and after conception. Though, it still struck me as odd that over those nine months, those long nine months, the mother bore the brunt of pregnancy's effects: the hormonal changes, the physical discomfort and the pain of childbirth itself. Not to denigrate the father's role during pregnancy, as partner, coach, comforter and occasional caterer, but equating the roles of mother and father, as those two words do, paid short shrift to mom. My mistake.

After a year of researching and eighteen months of writing this dissertation, I'm ready to say, "We're finished," without any rhetorical sense of the royal "We." While I endured the physical discomfort that comes from hunching over a computer for so long my chiropractor disowned me, or running out of eye drops at three in the morning, there were dozens of people who suffered as much because of their involvement in this project. Some of that involvement being voluntary. No expression of gratitude, certainly not words on these humble pages, can repay everyone involved for their support, kindness and free coffee.

My greatest fear is leaving out one or more of the kind people I am deeply indebted to. Sadly, I'm sure that along the way, some names will fall through the cracks. For this, I apologize. On the other hand, since most of the people mentioned here are not critical theorists, the chances of their finding out I failed to mention them are rather small.

First of all, I will never be able to repay the support I received from the most important women in my life, my wife, Penny, and my mother, Mitzi. I can only view them with awe.
It is a testimonial to their strength that I am neither divorced nor disowned.

Marsha Kinder, my dissertation committee chair, along with Tara McPherson and Marita Sturken, my other two members, were the ones who kept me an honest scholar. It was impossible to slide any questionable scholarship or weak arguments past this trio, though heaven knows I tried. Marsha, Tara and Marita also hung in there during my early flights of fancy as I juggled narrative and artificial intelligence theories, oftentimes mangling both of them.

Jon Gratch, Randy Hill and David Traum at the University of Southern California's Institute for Creative Technologies taught me more about artificial intelligence, autonomous agents, planning and modeling than I ever expected to know. This work is better for it, even though I was not always the willing student.

There is a long list of acquaintances, colleagues and strangers who were there for me when I needed to bounce around ideas: Doug Thomas, Alison deFren, Katherine Hayles, Jacki Morie, Richard Weinberg, Allison McMahan, Diane Piepol, Andrew Gordon, Kate LaBore, Ed Hovy, Sheryl Kwak, Michael van Lent, Scott Fisher, Peggy Weil, Michael Renov, Lynn Spigel and Dana Polan. Fellow student Belinda Baldwin was about a year ahead of me in the Ph.D. program and she graciously shared with me the twists and turns I had to look forward to.

Linda Overholt, the brilliant administrative assistant of the Critical Studies division, helped me navigate the minefield, and the vagaries, of university procedures, including getting course credits transferred, applying for one waiver or another, meeting deadlines and knowing who to talk to about what problem. She was also there when I needed to let off steam, and we often shared stories of our adventures in the cogs of the USC machinery, which did more to lift my spirits than anyone will ever imagine.

Sherall Preyer is one of the unsung heroes of any Critical Studies student's Ph.D. She introduced me to the procedures of teaching assistant-ing and secured textbooks and class lists for my TA work. In a pinch she stretched the rules surrounding the use of the office copier, something I will deny to my grave if asked to testify about it. Sherall is one of those seasoned professionals who know how to get things done: she knows how to contact a reclusive professor, what people in which departments I could talk to because they were the ones that really ran things, and she always seemed to know where my peripatetic committee chair was at almost any minute of the day.

Finally, there is Michael, Randall, Kim, Eden, Aram, Whitney, Ana, Ana, Danielle, Danielle, Christina, Michelle, Shereen, Travis, Daniel, Matt, Luis, Jose, Heather, Hasmik and other partners, past and present, at the Starbucks at Vermont and Prospect Avenues in Hollywood, California. In addition to letting me nurse a decaf Americano while monopolizing a table for hours on end, they slipped me free coffee, kept me on a regular writing schedule, changed the background music on those days when I couldn't take one more Rosemary Clooney song and, overall, treated me like a member of their family. They didn't understand what I was writing, but they were this writer's best friends for over two years.

As is customary for any work of this scale, I hold all of those whose assistance I relied on harmless for any omissions, mistakes, misinterpretations and misstatements. Errors in this text are the sole responsibility of the author.
Contents

Dedication
Acknowledgments
List of Figures
List of Tables
Abstract
1 Introduction
2 Getting to Agency
3 The Human (Caught) in the Loop
4 Posthuman Players
Conclusion
Glossary
Bibliography

List of Figures

2.1 A typical PDP-1 installation
2.2 By today's standards Spacewar is a pretty simple game
3.1 Initial chess board
3.2 Chess board after one move by each player
3.3 A simple model for buying milk at the supermarket
3.4 A Command Line Interface
3.5 The Finder GUI for Mac OS X 10.3
3.6 A JIT narrative event loop
4.1 The virtual environment of the MRE training mission
4.2 STEVE demonstrating how to start a diesel engine

List of Tables

3.1 Performance issues in JIT narratives vs. poststructuralist theory
3.2 Abbreviations for chess pieces
3.3 Representation of the chess board
3.4 Changed entries after White's move
3.5 Changed entries after Black's move

Abstract

This dissertation describes the Just-In-Time (JIT) narrative, a narrative form resulting from the replacing of some actor/characters in a theatrical improv sketch by intelligent software agents, software characters that simulate the behavior of human actors. The trajectory of the narrative is influenced, in real time, by multiple intelligences that have their own, independent agendas. Such behavior, this dissertation argues, demonstrates that the popular term interactivity is used incorrectly to describe agency, the ability to exert arbitrary influence on agents and objects in the story world. Traditional computer-based storytelling precludes agency because the actions of humans do not alter what appears to be the emotional state of the characters. In addition, traditional computer-based storytelling relies on structural definitions of narrative to the exclusion of poststructuralist narrative theory. Structural theory, while computer accessible, downplays meanings derived from a human participant's specific web of beliefs, relying exclusively on the arrangement of narrative elements for meaning making. This dissertation tracks the increasing importance of poststructuralist thought in computer-based narratives through hypertext, hyperfiction and database narratives, finally coming to rest at the JIT narrative concept of the loop. The loop models the implementation and operation of the software used to program JIT narratives, as well as the cyclic and evolutionary nature of narratives themselves. Characters in narratives often find themselves in the grip of repetitiveness, yet within this cyclical structure, micro-changes occur that advance the narrative (and the growth of the characters) until they are able to break free of the cycle.
Within the loop, actions of the agents are arguably the number one controlling factor in the creation of signifiers in a JIT narrative. It is here that computer science and narrative theory intersect, through the mechanisms agents use to form their internal intentions, beliefs and goals, and to communicate them to a human participant. The dissertation concludes with an example of how ideological influences can make JIT narratives initially accessible to critical theorists.

Chapter 1

Introduction

It seems customary in works of this type to recount the history of the project beginning with the moment the project became real. By that definition, this project never happened; one day, it was just there. What I can say is that the project itself seems very much like a cyborg, an interleaving of humanity and technology, which may explain why there isn't a date, time and location when it all began—only scraps of events that found themselves together in the document at hand.

While a student at Loyola Marymount University in Los Angeles, I took a course in computer game design. At the time, there were no practical educational tools for implementing games, so the class had to be satisfied with generating design documents that could be used in the future. The future came for me when I was an assistant professor at California State University Northridge, teaching a course in multimedia games. My experiences with this class and electronic games came shortly after my first play was produced. In addition, I had been writing radio, television and print advertising and found I could squeeze more personality into ads than I could into game characters. At this point I wondered whether game characters could demonstrate the same character/personality range as fictional characters (and even the narrativization of non-fictional characters). I was now interested in whether electronic games could approach the level of literature and, if not, how far they could go.

Using the document as the basis of a game, a very talented group of students, working in teams and using Macromedia Director as an authoring tool, created inventive implementations of the design. Just as much a part of the game building was a debriefing on the limitations of this form of creative expression. One topic that surfaced was the difficulty encountered in bringing forth the personality of the characters described in the design document. The nature of game construction, and the limitation of the tools, was such that capturing personality details was only possible at a superficial level. Though this didn't necessarily detract from the game play, it was a disappointment to me, not only because they were my characters and I felt a certain attachment to them, but also because of that larger issue: whether some form of electronic game could achieve a certain literariness of the type associated with novels, cinema and the theater.

At first I thought literary electronic games represented a rupture with traditional media because books, films and the stage could not duplicate the interactive capabilities of the computer-as-medium for storytelling. This quickly became a scary thought, for it implied a literary form that just appeared, a discovery rather than an invention. I decided that if I traced the factors that could influence this form of storytelling, then I would discover whether I envisioned a process of evolution or a true rupture with the past.
If it was a rupture, I decided, I would be willing to accept that the literary comparison was unfair and unjustified and that electronic game characters were what they were and not weak relatives of an existing paradigm.

My plan of attack was to take my game design and write a script for the game play. For better or worse, finding some sort of narrative approach was the way I worked as an author, and I wanted to get something down on paper (and then into a computer) so I had a feel for who these characters were. I began work on a script using the style that was common in the mid- to late-1990s, a modified screenplay format, in which sequences of game play were linked by decision points for the player. A typical script segment had a form something like this:

ALIEN
We come in peace.

If the player agrees, go to L1. If not, go to L2.

L1:
ALIEN
Take me to your leader.

If the player agrees and has seen the diamond mine, go to L3. If the player agrees and has not seen the diamond mine, go to L4. Otherwise, go to L5.

What began as a script quickly turned into a computer program. The author needed to be more concerned with the logical flow of the game than with character actions and dialog. Not only that, keeping track of which decisions, or groups of decisions, made sense, and then making sure that every decision led to some dialog or action that paid off the decision, quickly swamped the intellectual capacity of authors and programmers. No wonder game characters had few of the qualities I associated with their compatriots in literature and cinema. What's more, even if such a form of scripting were possible (say with teams of authors and programmers each working on different sections of the game logic), the primary use of the computer would be to execute the logic flow. The computer would play the role of a database program, retrieving data from a database once the user/player specified the parameters for the information retrieval.

Computers were certainly talented in this area, only I had studied an alternative use while a computer science graduate student at the University of California, Berkeley, where a pair of my professors were researching a rather new branch of theory called AI, or artificial intelligence. AI held out the promise that computers could be programmed to simulate human cognitive and decision-making processes. Although the field had changed radically since my days at Berkeley, I wondered whether the intellectually daunting task of keeping track of the complex logic of computer games could be passed on to an intelligent game machine—one capable of creating the twists, turns and logical jumps on its own, so that information would not have to be encoded or created by an author and programmer. I reasoned that if such were the case, perhaps AI could be used to give game characters more depth and subtlety than was presently available within the limitations of game implementation.

By this time I had been writing radio and television commercials, industrial videos, comedy skits and short stories and felt comfortable creating characters and situations. With these writing forms, however, I had complete control over behaviors and environments. When one of the characters in the story was a human, I had no idea what would happen and how much control the player, the game and the story would demand.
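To make concrete why a script in this branching format "quickly turned into a computer program," here is a minimal sketch of the ALIEN exchange above, encoded as the kind of logic an author of that era would have ended up maintaining by hand. The labels, flags and placeholder scenes are invented for illustration; they are not drawn from the original design document or from Macromedia Director.

```python
# A toy rendering of the branching script above: every beat becomes a node,
# every decision point becomes a jump, and the author must track story state
# (here, whether the diamond mine has been seen) by hand.

story = {
    "START": {
        "line": "ALIEN: We come in peace.",
        "next": lambda state, agrees: "L1" if agrees else "L2",
    },
    "L1": {
        "line": "ALIEN: Take me to your leader.",
        "next": lambda state, agrees: (
            "L3" if agrees and state["seen_diamond_mine"]
            else "L4" if agrees
            else "L5"
        ),
    },
    # Each remaining label would need its own dialog and its own jump rules.
    "L2": {"line": "(the alien responds to the refusal)", "next": lambda s, a: None},
    "L3": {"line": "(leader scene; the mine has been seen)", "next": lambda s, a: None},
    "L4": {"line": "(leader scene; the mine has not been seen)", "next": lambda s, a: None},
    "L5": {"line": "(the alien gives up and leaves)", "next": lambda s, a: None},
}

def play(choices):
    """Walk the script with a fixed sequence of player agree/disagree choices."""
    state = {"seen_diamond_mine": False}
    node = "START"
    for agrees in choices:
        if node is None:
            break
        beat = story[node]
        print(beat["line"])
        node = beat["next"](state, agrees)

play([True, True, True])  # the player agrees at every prompt and ends up at L4
```

Even in this toy fragment, the author's attention has shifted from character and dialog to bookkeeping about flags and jump targets; multiply the handful of nodes by the hundreds a real game needs and the logic, not the writing, dominates, which is precisely the limitation described above.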
Control, it seemed to me, was the antithesis of extending literature into game play; if the story and its outcome were already known in advance—programmed in, as it were—then I felt the game experience would do nothing more than drag the player along a predetermined path. Authorship would become a matter of misleading the player into believing she was in control of the story's outcome when nothing was further from the truth. My thinking shifted away from literariness, which I now viewed as too abstract, and toward something I thought of as experiences, something that extended the literary and cinematic traditions into what felt like a real-life situation.

This idea was neither new nor unique. Brenda Laurel and members of what was known as the Oz Project (at Pittsburgh's Carnegie Mellon University) both conceived of such an environment and, in the case of the Oz Project, had implemented such an interactive story using live actors instead of AI characters. Nevertheless, the concept of this literary/cinematic extension was more than an abstract exercise for computer scientists, authors and game designers. More and more, what was forming in my mind was some sort of merger between narrative forms of storytelling and the AI branches of computer science. By now, this convergence of Hollywood and Silicon Valley was a popular paradigm among game studios and Hollywood studios as each jockeyed for position in the commercial entertainment marketplace. While a merger didn't rule out a rupture—after all, hydrogen and oxygen combine to make water, something that resembles neither of its parents—I felt the circumstances pointed more strongly to an evolution than a revolution.

In 1998 I was accepted into the Ph.D. program of the critical studies division of the University of Southern California's (USC) School of Cinema-Television. I'd reached the point where I was convinced that I could go no further with my evolutionary approach without knowing existing narrative theory, especially as it applied to visually presented and performed fiction. I wanted to understand what would happen to narrative theory in a world of AI characters and computer-generated stories. Besides, if this were an evolution, there would have to be existing theory for the new theory to evolve from. It was also a double check on the evolution/revolution theory: if current narrative theory could not be extended to this new storytelling form, then this storytelling form would have to be considered from its point of departure; it might even demand a discipline of its own.

Then that stroke of luck that always seems to accompany discoveries, inventions and advances occurred. In 2000 I interviewed for, and was subsequently offered, a research position at the Institute for Creative Technologies (ICT), a research initiative at USC chartered by the U.S. Army to bring together Hollywood storytelling and artificial intelligence as training tools for the Army's new role of peacekeeping. The position was attractive because I would be surrounded by experts in AI, entertainment and scriptwriting. I would contribute narrative theory research. What's more, the research underway was extremely advanced—natural language recognition, speech synthesis, emotions, gaze and gesture, virtual reality—the elements necessary to take the Oz Project's experiments and replace their collection of human actors with computer-generated, artificially intelligent ones.
At ICT, I would have a laboratory where the elements I'd been chasing for twenty years would finally come together for practical and theoretical scrutiny.

The decision to take a position so clearly affiliated with the military was a serious and difficult one. In the end, I weighed the fact that I could find no research of this type that was either funded by non-military (or non-government) money or was free of the restrictions placed on the publication and distribution of theoretical work by corporate research and development departments. Perhaps it was a giant rationalization, but I decided this work, and its open distribution, had its best chances of coming to fruition at ICT.

I took the job. I finally felt positioned to bring together, at a theoretical as well as practical level, the various practices and theories I believed would demonstrate that there was a form of storytelling in cyberspace that was evolving from contemporary theory and practice and that needed to be considered in the light of a number of formerly disparate discourses that I encountered and studied during my academic and professional career. That is what my dissertation is about: that we cannot study the new cyber-storytelling forms, from branching stories on CD-ROMs to humans and intelligent computer-generated characters sharing virtual worlds, without following a theoretical approach combining the discourses of narrativity, computer science and artificial intelligence.

That's all very well and good, but that one-line summary does not address how one goes about accomplishing the task. Let me map out the trajectory of the theory in this dissertation as well as some positions I take with reference to established cultural institutions such as the auteur theory and the lone inventor origin myth.

If this dissertation were a novel (and I will refrain from acknowledging the easily anticipated remarks of some of my friends), the protagonist would be the Just-In-Time (JIT) narrative. At first, JIT narratives seemed to belong to the domain of the structuralists, who emphasize narrative construction as a process of selection and ordering. These structuralists, and I include in this non-exhaustive list names such as Seymour Chatman, Claude Lévi-Strauss, Vladimir Propp and Mieke Bal, view a narrative as the product of the ordering of a set of events, historical events for non-fiction and imaginary events for fictional works. In theory, the narrative is the result of selecting some of the events and ordering their presentation in time, then presenting these ordered occurrences to a reader or spectator.1 Fundamental to this theory is that selection and ordering are both sufficient to completely transmit meaning and that selection and ordering are completely under the control of the author.2 JIT narratives don't adhere to this tradition. These narratives are created in the manner of a theatrical improv sketch. In an improv, each sketch includes some number of participants and each participant assumes the role of a character with intentions, beliefs and goals.

1 This is hardly an in-depth discussion of narrative construction, nor is it meant to be. Nor does it address the criticisms of the poststructuralists who decry the total dependence upon structure for the meaning of the narrative. These issues get a fuller airing in subsequent chapters.
There is usually a setting for the sketch, some real or imaginary props and a premise: two friends walk into a bar and encounter the same ex-girlfriend, but each doesn’t know the other was dating her; or, the president of a company discovers one of her fam ily members is planning to kill her and take control of the business. The sketch moves forward driven by the actions of the actor/characters, who behave in such a way as to further their own agendas. Since no one actor completely controls the unfolding of the narrative, each actor experiences the sketch one moment at a time, as it happens, like the ticking of a clock. Once the sketch is over an outside observer, someone not playing a part in the proceedings, can usually recount its narrative. For each actor in the sketch, however, the narrative develops in real time. There’s no way to anticipate the future, at least not with any certainty, because new actions are a function of past actions of all the other actors. Whether the actors share their intentions with others, or represent their beliefs accurately, is another unknown. JIT narratives take the idea of a theatrical improv and bring it into the age of artificial intelligence. If all but one of the actor/characters in such a sketch is replaced by an intelligent software agent, a software character that simulates the behavior of a human actor, 2Propp goes even further in his analysis of Russian folktales by asserting that even the selection and ordering pre-date the author. His or her role is one of filling in templates of specific characters and locations. 8 the result is a JIT narrative. I call this human player an interactor,3 and she plays her part opposite these agent characters that create one moment after the next, on the fly, using artificial intelligence technology. The crafted experience plays out in real time and raises questions that mix the technical, the artistic and the cultural, the very issues surrounding the literariness of a narrative that attracted me to this project in the first place. JIT narratives are not assembled from an existing set of elements.4 The trajectory of the narrative is influenced by multiple, and often independent, intelligences (artificially-intelligent software programs or agents) that have independent agendas. While the software represents the efforts of human programmers and designers, the decisions made in the moment, and the responses to the interactor’s actions, are too complex to be easily predicted. Just how then are next actions selected? W hat ideological forces control decision making? If the narrative unfolds under the influences of multiple intelligences, how do these add or subtract? What drives a JIT narrative forward? These are large questions for such a nascent field, and while I will do my best to answer them I offer a disclaimer up front that not everything I write will satisfy everybody (but then, what does). Some readers will find a certain depth, associated with close readings of literary and cinema theorists, lacking. Such readings are premature for JIT narratives; there aren’t enough in circulation. A fair amount 3As of this writing, new media terminology is in a state of flux. I use the term interactor to denote the JIT narrative equivalent of literature’s reader or cinema’s spectator. I avoid the latter terms because they are too ideologically loaded; however, by the time this dissertation is published, the term interactor may be carrying much of its own baggage. 
At present, interactor seems to be the preferred term for humans who are both the consumers of JIT narratives and participants in them. 4The language I use to describe JIT narratives is borrowed from manufacturing, as is the name Just-In-Time itself. The irony of this situation was pointed out to me by dissertation committee member Tara McPherson, who observed that I named a narrative form designed to provide the interactor maximum freedom within a narrative world after Just-In-Time manufacturing, a post- Fordian creation that historically served to oppress not only assembly line workers but workers along an entire supply chain by tethering them to arbitrary demands of capital further upstream. of my work, then, is extrapolated from cinema, inevitably leaving gaps I can’t fill to my satisfaction, let alone the satisfaction of others. I can only ask that readers recognize this as a work in progress in a field that is, itself, a work in progress. The opportunity to plug the gaps is what makes this area of research an exciting place to be. I’m getting ahead of myself, however; there is much that can be written at present and much groundwork to be laid before that writing. That having been said, I feel the need for another disclaimer. Much of what I write about JIT narratives is speculative (that is based on potentialities rather than on works that actually exist); however, it is speculation informed by current research in the AI community. For this reason, I chose to base my speculation on current models, research reports and experimental systems. This approach places certain limits on my assumptions and descriptions that appear to have no other basis. For example, when I require JIT narratives to have a single human interactor one might argue, successfully, that there seems to be no technological or narrative justification for such a restriction. While this is true in general, such a restriction is a reflection of the current state of research. Throughout this work I will highlight restrictions such as this one, decisions made to keep the work balanced between unbridled speculation and present-day reality. In this work I use the term interactor to denote the human participant in a JIT narrative. It may seem natural to interpret this decision as a privileging of the human character over the computer-generated characters. Certainly, in some areas, this is true. The agents appear to exhibit intelligent human qualities, yet they do not think (in any human sense of the word) nor do they feel pain, hear sounds or see objects. These very human functions are simulated in the case of the agents through the interpretation of data. For example, if an agent were to kick a rock, the agent may respond as if its foot is hurt. The response, however, 10 is a computation, based on rules that describe the rock (hard), the fact that the agent’s foot contacted the rock at a velocity high enough to generate pain and multiple rules controlling how the agent expresses pain in the current situation. To the agents, everything in the JIT narrative world exists only as data, includ ing the human inter actor. In other words, agents do not discriminate (in theory) between other agents and humans because, as data, they all “look” alike. This is an important concept in the study of JIT narratives. For many purposes, the human is just another agent. This may seem counter-intuitive because humans and agents don’t seem to be on the same continuum of life forms; but, that is true in the general case. 
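The rock-kicking example can be made concrete with a small, purely hypothetical sketch. The rule names, thresholds and numbers below are invented for illustration; they are not taken from any system built at ICT or described later in this dissertation.

```python
# A toy, rule-based reading of "the agent kicks a rock": the "pain" response
# is nothing but a computation over data describing the world and the agent.

world = {"rock": {"hardness": 0.9}}          # a hard object, described as data
agent = {"foot_velocity": 2.5,               # how fast the foot met the rock
         "pain_threshold": 1.0,              # when a contact starts to "hurt"
         "expressiveness": "stoic"}          # rules for showing what is "felt"

def kick(agent, obstacle):
    """Return the agent's outward behavior after kicking an obstacle."""
    felt = obstacle["hardness"] * agent["foot_velocity"]   # simulated sensation
    if felt < agent["pain_threshold"]:
        return "no visible reaction"
    # Separate rules govern how the computed "pain" is expressed.
    if agent["expressiveness"] == "stoic":
        return "winces briefly and keeps walking"
    return "hops on one foot, yelling"

print(kick(agent, world["rock"]))   # -> winces briefly and keeps walking
```

The only point of the sketch is the one the text makes: nothing is felt, heard or seen; a number is computed from data about the rock and the foot, and other rules translate that number into behavior. To the agent, a human interactor arrives as exactly the same kind of data.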
Within the JIT narrative world, the human does not hold a privileged position. In fact, the human could be replaced by another agent and the narrative could go on without the other agents noticing any difference. The interactor, on the other hand, is aware of the differences between herself and the agents. One role of AI in JIT narratives is to create the illusion that this difference does not exist. This illusion leads to a fundamental premise of JIT narratives: the interactor and agents are interchangeable, and that the JIT narrative does not depend on one character being the interactor and the other characters being agents.5 When I refer to the interactor or the agent (or agents), I am doing so for purposes of explanation. I have fixed the character roles because I am not interested in explaining character but JIT narratives. If the role of the interactor were to change from paragraph to paragraph, this text would be hard to follow indeed. What I want to make clear is that the definition of a JIT narrative and its functioning is the same whether there is one human, multiple humans or no humans in the narrative world. & The choice of which character is the interactor may be a dramatic choice, that is, one made to make the experience more interesting for the interactor. It is not, however, a technical or operational necessity. Moving forward in the dissertation, Chapter Two contains one of the core asser tions of this dissertation: while current theory fetishizes interactivity in computer games and storytelling in cyberspace, I find the term overdetermined and mislead ing. As I argue in this chapter, what is often referred to as interactivity is meant to describe agency, the ability to exert arbitrary influence on agents and objects in the story world. I can interact with a character in an electronic game through a joystick, only I am really selecting from a set of pre-programmed operations. I do not, in fact I cannot, alter the emotional state6 of the character by my actions in the story world. In fact, I don’t even exist in the story world; characters are not aware of me. They are only aware of the occurrence of certain electrical inputs that map on a one-to-one basis with certain character behaviors. I develop this theme in far greater depth in the first chapter through a historical approach to interactivity and agency beginning with early cinema and proceeding through AI state of the art circa the 1980s (just shy of the personal computer explosion). It is here that I begin interweaving narrative and computer science. This chapter serves as a point of departure for multiple threads developed in the later chapters. I chose to start my historical mapping with the cinema, even though the concept of interactive narrative spans centuries, from participatory theater to the diorama to the active reader of poststructuralist theory. To me, the cinema represents the earliest use of technology as a medium for interacting with a narrative. In addition, unlike photography (which could also be considered a technology), cinema combines technology with a narrative that unwinds in real time, with or without the 6 Since one of the primary illusions of JIT narratives is that computer-generated agents are sentient and present, and since I subscribe to the necessity and potentiality of this illusion, I often use the term “emotion” in place of the more technically correct “emotional state” throughout this work. For agents, an emotional state is the result of a computation and is used to simulate an emotion. 
12 participation of a spectator. For those readers who feel I’ve slighted these earlier institutions I apologize and hope they understand that my choice of starting points was motivated, and was also necessary to maintain the focus of the dissertation on a multiplicity of discourses and not on a detailed examination of interactivity in art. Chapter Two treats the concept of agency as it could be applied to cyber storytelling, Chapter Three tackles the task of resolving the differences between narrative discourses of storytelling and those employed in computer science. Most of the work on computer-based storytelling research is under the auspices of com puter scientists. Their focus over the years has been on modeling narratives in a way that they can be processed by computers. Here, “processed” acquires a very specific meaning: constructing a narrative is considered to be equivalent to solving a problem, in this case the problem of creating a narrative that matches a paradigm. Since the paradigm must be computer-friendly, that is, it must be describable in a form easily translated into computer language, most research looks to structural definitions of narrative, because structure is relatively easy to define in computer logic terms. Generating a narrative, then, becomes a problem of arranging blocks derived from narrative structure into an established pattern that meets the definition of a narrative. As I point out, the flaw in this approach is the lack of incorporation of poststructuralist narrative theory that holds that narratives derive their meanings from a reader’s specific web of relationships and beliefs and not exclusively from the arrangement of narrative elements. The com puter science approach completely ignores the content of the narrative and the reader/spectator/interactor’s response to it. Such worship at the feet of structure is heresy in the critical studies world but a fact of life in the world of computers. 13 Much work with computers relies on their ability to process data, which is predomi nantly a semantic-free operation; yet, data processing is a task for which computers are extremely well suited. Data processing plays a major role in hypertext and database manipulations, two forms of underlying structures that mix interactors and narratives. JIT narratives, by virtue of their privileging agency over interac tivity, and their objective of providing an experience and not a heavily mediated story line, have no solution in the computer science realm (solution in the sense of an answer to a problem). Computer science, to do its part, needs guidance derived from narrative theory. I find that guidance can be explained most effectively by the loop. Just as theorists such as Barthes and Foucault model the interaction between a reader and a narrative as a web of relationships activated by the engagement of the reader, and putting into play all previous intertextual and extratextual relationships constructed by the reader, Chapter Three introduces the notion of the loop as both a computer programming and narrative construct. The loop models the implementation and operation of the software used to program JIT narratives as well as the nature of the narratives. 
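The following sketch is purely illustrative: a hypothetical event loop with invented agent and event names, meant only to suggest how the same while-loop can be read as both program structure and narrative structure. It is not the software described in Chapter Three, nor any system built at ICT.

```python
import random

# A hypothetical JIT narrative event loop: each pass, every agent consults the
# shared story world, deliberates in light of its own agenda, and acts. The
# interactor's input enters the same loop as just another action, and small
# changes accumulate each cycle until the story can "break free" of the loop.

random.seed(7)

world = {"tension": 0, "beats": []}
agents = ["clerk", "traveler", "drifter"]   # invented stand-ins for characters

def deliberate(agent, world):
    """Pick an action that furthers this agent's agenda, given the world so far."""
    return random.choice(["press a demand", "offer a concession", "stall"])

def act(action, agent, world):
    """Acting changes the world a little: the micro-change that moves the story."""
    world["beats"].append(f"{agent} decides to {action}")
    world["tension"] += {"press a demand": 2, "offer a concession": -1, "stall": 1}[action]

cycle = 0
while world["tension"] < 8 and cycle < 20:   # the loop that is also the narrative
    cycle += 1
    act("stall", "interactor", world)        # stand-in for live input from the interactor
    for agent in agents:
        act(deliberate(agent, world), agent, world)

print(f"{cycle} cycles, {len(world['beats'])} story beats, tension {world['tension']}")
```

Nothing in this fragment is intelligent, of course; the point is only structural: the same loop that keeps the software running is the cyclical form within which micro-changes accumulate until the characters, and the narrative, can move on.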
The short version of this relationship, which I expand upon in that chapter, is a combination of the cyclic and evolutionary nature of a loop: that stories, lives and experiences often find themselves in the grip of repetitiveness, yet within this cyclical structure, micro-changes occur that advance the narrative (and the growth of the characters) until they are able to break free of the cycle and move forward in their development, both in terms of character and narrative development. I want to emphasize that my use of the loop is strictly metaphorical. It is a tool I employ to both connect computer science and critical studies discourses and to make the act of creating a JIT narrative more accessible. I in no way advocate that narratives, JIT or otherwise, must 14 be written or structured in such a way that the end of the narrative brings the reader/spectator/interactor back to the beginning.7 By way of example, I analyze how the loop plays an important role in the nar ratives of two films, Groundhog Day and Run Lola Run. As readers will discover, this dissertation occasionally relies on cinematic analysis to bolster or elaborate on many of its arguments, and this begs the question of why use films instead of examples of actual JIT narratives. Many components of JIT narratives are still in their research phases. Even at ICT, where such work is quite advanced, there is no solid, rich, coherent narrative form to critique. Rather than supporting or illus trating points in this dissertation with fragmented examples from research labs or, worse, misleading examples that are hamstrung by their embryonic status, I turn to examples and representations in film and popular culture. Though fictional, they are more complete and illustrative and, in their own way, they are a form of science fiction—the events did not happen as depicted, but the underlying science and technology are sound if not a bit futuristic. This does not mean I will not turn to ICT (and other) examples when I believe they are instructive and a fair representation of the points I put forth. In the next two chapters I begin my love/hate relationship with the lone inven tor origin myth so pervasive in United States culture. As people retell the stories of history’ s great inventions, the cultural turn always directs them toward the one person responsible for its creation. The term “author” is increasingly a legal neces sity, indicating not a sole creator but a person or institution that is the recipient 7Of course, such structures do exist in film. The narrative of Sunset Boulevard (Wilder, 1950) begins a few moments after the murder of writer Joe Gillis, loops back in time to Gillis’s financial and professional problems, and then proceeds forward to his murder again, at the hands of over-the-hill movie star Norma Desmond. 15 of any royalties and that should be awarded damages if others infringe on its copy righted material. While describing the historical arc of interactivity into agency, I write about the process of using the names of single researchers and innovators on a regular basis. There are several reasons I chose this path. Oftentimes, that person’s name is the one recorded on historical documents by virtue of her or his position as the researcher in charge of a project (which typically is a reflection of who controls the project’s budget). 
Over time, other researchers come and go, especially those who are graduate research assistants, and the project leader’s name is the only one that remains constant, the only one through whom the historical path repeatedly passes. In other situations, a particular researcher becomes the spokesperson for the invention. Few people know who invented the I.B.M. personal computer. A far wider slice of the population believes the Apple personal computer was invented by Steve Jobs. Jobs, however, was not the technical guru behind Apple computers. That role belonged to Steve Wozniak. Bill Gates founded Microsoft with his pro gramming partner Paul Allen. Even before Allen left the company, Gates, the geek turned entrepreneur, was Microsoft’ s public persona. With such strong personality figures, the details of the actual origins get lost or transformed in their telling and retelling, making it difficult, if not impossible, to separate the public face from the other, invisible ones. There is also the case where the inventor purposely shields his colleagues from view with the sole objective of achieving recognition as the lone inventor. Thomas Edison acquired a reputation for this type of behavior, of putting his name on the inventions of others in his employ. To me, the lone inventor myth is comparable to the auteur theory in cinema. I do not believe a director is responsible for a film any more than I believe Morse invented the telegraph (which is what Bell was trying to improve when he stumbled 16 across transmitting voice over wires) alone. A director needs her actors, cinematog rapher, grips, electricians, carpenters, scenic artists, set designers, location scouts, sound recordists, camera assistants, editors, negative cutters and so on down the film’s production line. So it is in technology. Sir Isaac Newton who, under the lone inventor myth, is credited with the discovery of gravity and the invention of calculus, summed up what I consider the more realistic view of invention when he said, “If I have seen further than other men it is only because I stood on the shoulders of giants.” While I may use individuals as iconic references to inventions and discoveries, I use them only to humanize what might other be a dry chronicle of events. I do not consider these individuals as auteurs. I consider them standing on the shoulders of giants. The purpose of Chapters One and Two is to create some general extensions to narrativity that make it possible to discuss JIT narratives in both theoretical and technological terms. Chapter Four deals with the relationship between computer science theory and narrativity as it expresses itself in JIT narratives. It also introduces what I refer to as the dual nature of these narratives. When I discussed the improv sketch earlier, I alluded to this duality when writing about the narrative recounted by an observer outside the sketch and the narrative encountered by an actor performing it. The narrative observed from the outside is a traditional narrative, with the same characteristics as one finds in literature, cinema or theater. Occasionally, I call this a static narrative, not in any pejorative way, but because the narrative work,8 but not the narrative’s meaning, can be frozen in time.9 8I define a work in Chapter Three. For the moment, think of it as the frames of a film or the words on the page of a novel. 9 One could videotape the sketch and each playing of the video would present the viewer with the same signifiers in the same order. 
It is this unchanging presentation the term “static” is meant to identify. 17 The actor sees a narrative unfurling moment to moment. Even if this one actor re-performs the sketch and makes exactly the same actions every time, a highly unlikely event, the narrative is almost guaranteed to unwind differently. The other actors aren’t bound by this same-as-before constraint.1 0 Since this part of the narrative varies from performance to performance, I call it the dynamic part of a JIT narrative. The static part, the traditional component, is of less interest to me in the context of this work, because existing critical theory can be applied to its analysis. The dynamic part, the unwinding of a narrative as the result of a real time interplay of intelligences, enabled by artificial intelligence, is a characteristic of JIT narratives. It is this unique element I turn my attention to in most of Chapter Four.1 1 Referring back to the improv sketch once more, I want to examine the impor tance of the actors in shaping the sketch. On the surface, the flow of the sketch is dominated by their actions, which are the result of multiple influences and complex decision making. In JIT narratives, where all but one of the actors are replaced by 10Once the other actors are free to change their behaviors, it’s doubtful the constrained actor will be able to perform exactly as before without reducing the sketch to absurdity. This is not to rule out absurd sketches as somehow improper. In this dissertation, I deal with sketches and JIT narratives that do not belong to the world of the avant-garde because laying out their descriptions is a challenge enough without the extra burden of dealing with narratives that resist traditional forms and ideologies. My primary concern is that admitting the avant-garde would lead to examples in which it seems some computer program is running amok, and I wish to avoid that impression during one’s first exposure to JIT narratives. 1 1 It may seem that all narratives unwind dynamically. Roland Barthes demonstrates just such an unwinding in S/Z, illustrating how units of a story, or lexias, can support multiple interpretations and meanings. Yet, while the meaning unwinds in real time, the underlying work does not. The first word of the short story Barthes uses as an exemplar will always be the same, and it exists before the reader engages it. The work of a JIT narrative does not exists a priori. It, and the associated narrative, are uniquely created moment by moment through agent-interactor interplay. When I write about the dynamic narrative component, it is this real-time interplay to which I am referring. 18 agents,1 2 the actions of the agents are arguably the number one controlling factor in the creation of the dynamic part of the narrative. Understanding and analyzing JIT narratives in any critical or cultural context requires an understanding of what motivates the agents’ actions, for their behaviors are a major part of any authoring process. It is here that computer science and narrative theory intersect. I frame this intersection in terms of how agents communicate their internal intentions, beliefs and goals to the interactor, along with several major influences on their internal conditions or states. While agents may be authors of the JIT narrative, there are designers1 3 who are authors of agents. Designers’ lived experiences are imprinted on the agents and passed along to the narratives the way that the lived experiences of filmmakers leave their imprint on the cinema. 
Chapter Four is not meant to exhaust all these points of intersection, only those that are accessible now, given the current state of the art in computer technology and what might be called the penetration of computer science discourses into critical theory discourse. My objective throughout this dissertation is not to require the critical theorist to become a computer scientist, either while reading these pages or when turning to the critical analysis of JIT narratives. Hence, my motivation to stick to accessible areas that can be discussed (I hope) in ways that make sense to a broad audience of theorists and technologists. My intention is that what I write stimulates thought about other areas of investigation, those accessible soon, those accessible in the not-too-distant future and those I haven’t thought of. At the very 12 Again, I am staying close to the models of current AI research. In a speculative world there is no reason to attach this restriction to JIT narratives. 1 3 Agents are usually the result of many individual and collaborative efforts among producers, graphic artists, behavior scripting experts and computer programmers. I use designer throughout this dissertation as a blanket term for all those who contribute to making an agent perform in JIT narratives. 19 least, Chapter Four will serve as a concrete example of how and where narrative theory applies to the dynamic part of JIT narratives and offer some advice and cautions to both theorists and agent designers. One of the outcomes of this concrete discussion may be a realization of just how complicated JIT narrative analysis can be. The examples I make use of in uncovering how agents are made to act the way they do includes excursions into memories, emotions and the representation of knowledge. Critical theorists and computer scientists may find these combined discourses challenging. Rather than scare potential participants away from the field, I end this chapter with another example, one perhaps more familiar in the critical theory domain, and that is how an ideology can leave its mark on JIT narratives, quietly and invisibly, through the tools used to build agents and other pieces of JIT narrative systems.1 4 Even though ideology may not be a common term in computer science classrooms, I believe the example that ends Chapter Four will resonate with non-theorists, too, because of its influence on the nature of popular electronic games. My approach is historical and my objective is to direct attention to the way the ideology in the tools first used for cyber-storytelling, telling stories using the computer as a medium, passed from the military, which funded the development of these tools, to the off-the-shelf products of game publishers. As I will show, most of the tools first used to create cyberspace—both the software and the hardware— have their origins in projects dating back to the 1950s, projects funded by the 1 4 When I speak of systems here, I am acknowledging that these agents do not dance around in one’ s living room all by themselves. They are the product of computer programs running on hardware hosts. The venue in which a JIT narrative is performed may require special equipment such as digital projectors or computer-controlled surround sound. The computers involved need operating systems, anything from Microsoft Windows to special-purpose ones designed for narrow applications. It is all of this hardware and software, taken in the aggregate, that I refer to as a system, or occasionally a JIT narrative system. 
20 U.S. government, most frequently through the Department of Defense. During the Cold War, a particularly prolific era for computer science due in no small measure to the huge influx of government research dollars, research focused on simulation and communication. Simulation provided realistic training environments for troops who, because of the political environment, were constantly at the ready. Simulation also allowed government agencies to test equipment and procedures in the safety of a computer lab rather than in the field. Novice fighter pilots, for example, could crash expensive jet planes while learning aircraft carrier landings without injury and without the loss of millions of dollars of military hardware.1 5 Military officers played war games, commanding two competing armies and measuring the efficacy of tactical procedures and new combat hardware, such as tanks, helicopters and communication gear. The need for realism in these simulations drove the development of virtual real ity techniques, graphics hardware and software, high-speed, low-cost computers, immersive audio, all the elements that now find their way into electronic games. These tools, however, were developed under (or within) a culture that privileged procedures and an unwaivering adherence to those procedures. This is one of the charges leveled against electronic games, that they promote obedience—an act- don’t think mentality—at the expense of other game-playing skills such as hand- eye coordination, strategy, tactics, critical thinking or imagination. Today, much of the basic research applicable to JIT narratives is funded by the government (through the military), and so the narratives that result are influenced by the rigid military culture. As these tools are commercialized, society must address the pos sibility (or the probability) that JIT narratives, and cyber-stories in general, will 15The development of the supercomputer was spurred, in part, by the government’s need to simulate nuclear explosions without actually detonating a device either underground or in the atmosphere. 21 lean toward highly-structured scenarios with right and wrong outcomes. Critical theorists often probe texts by asking questions such as why this way instead of that, why this event instead of another, why this character, that lens, this setting and so forth as a way of resisting blind subservience to ideology. JIT narratives deserve the same close scrutiny. While I was writing this dissertation, one of my colleagues argued that critical theorists didn’ t need to know anything about computer science and artificial intel ligence to critique JIT narratives; they only needed to know definitions of terms. I suppose there’ s some truth in that. Just not enough. This line of reasoning is the mirror image of the computer scientist who attempts to generate narratives by treating them as problems with solutions. It isn’ t that a computer scientist doesn’t know (or can’t learn) the terminology of narrativity. Knowing the mean ing of “poststructuralism” does little for creating computer-generated narratives without the understanding of reader-response and the operation of intertextual ref erence in the process of making meanings out of texts. In fact, just understanding the definition of a text is a far cry from understanding how to generate one for a JIT narrative. When Barthes talks about the readerly and writerly text he works his analysis on physically static works. 
Readers may experience the same text differently, but they all engage the same “galaxy of signifiers.” George Landow has since extended Barthes’s work in this area into hypertext and hyperfiction, which Marsha Kinder has extended once more in her work with database narratives. In return, I can extend the idea of the static work to cover those works where the creation time 22 (of a lexia) and the engagement time (reading, viewing, interacting) are not simul taneous.1 6 When theorists analyze JIT narratives their endeavors are wringing out answers encapsulated in the dynamic—the intersection of multiple processes, just as in real life. This branch of critical theory includes not only the critique of the individual processes absent their context, but the critique of the processes at their friction points. From getting to agency to the ideological influences on JIT narratives, th at’s the arc this project follows after it first seized my interest several decades ago. 16There is no technical restriction preventing a hypertext reader or a database narrative inter actor from viewing lexias or database objects as they axe created, in real time. However, such a capability, as I will argue later, takes the nature of the work out of the realm of hypertext or database narratives and into spheres more correctly belonging to Massively Multiplayer Online Role Playing Games and Al-based systems. Also, one must be careful about maintaining the difference between creation and accessibility. A hypertext system may make an existing lexia accessible to a reader (when it wasn’t before). The lexia, however, already exists. Making it accessible in real time does not mean the lexia was created in real time. Such real-time creation is a key characteristic of JIT narratives. 23 C hapter 2 G ettin g to A gency Were you in the small Studio Theater on the campus of Pittsburgh’s Carnegie Mellon University in the early 1990s you would have witnessed an unusual the atrical scene. A table, a few tall wooden boxes and several chairs were the set, but the magic of the theater transformed this array of dorm-room furniture into a bus station. A Carnegie Mellon technical report documenting this experiment in interactive drama said, in part: The characters included the Clerk, who was bored with her job and uncooperative with customers; Tom, a professional man in his thir ties, temporarily blinded from recent surgery, who was going to his girlfriend’s house; and Ned, a young punk who needed money. The interactor was to play him- or herself with this one instruction: buy a bus ticket to a city to attend a relative’s funeral (Kelso, Weyhrauch, and Bates 1992). Other than the Interactor, the characters were actors; offstage, a director watched the action and communicated, via radio, with her on stage troupe. The Interactor could see the receiver headsets worn by the actors, but couldn’t hear the director’s instructions. “The experiment was similar to an acting improvisation, but with several important differences,” wrote the authors. “First, there was a coherent (interactive) plot to follow. Second, the actors were directed occasionally as the story progressed, instead of making up or following the story themselves. Third, an interactor was on stage with the actors. And, finally, the experience was designed for the interactor. This is important. 
The experience was not for 24 the director, the actors, or the people observing the experiment; it was for the interactor.” While the Oz Project, as it was known, represented one of the first attempts at the free-form mode of storytelling commonly referred to as interactive story or narrative, most of its results were published in computer science venues or presented at computer science conferences. Little of the work appeared in narrative or literary journals, so one of the more important results of the experiment most likely never circulated in storytelling circles: While the observers often became bored and lost track of what was going on, the interactors were continuously engaged. A part of that disparity comes from the sense of time compression that the interactors experienced While the interactor was interpreting events, thinking over possibilities, and contemplating actions in his mind, the actors just “looped,” taking no actions The important result is that the movement of the story may not be actions in the physical world, but actions in the interactor’s mind. Actions in the mind may seem far away from actions in cyberspace, yet as this dissertation will show, richer forms of storytelling in cyberspace1 require the same sort of mental participation as their traditional media ancestors. The Oz Project was inspired, in part, by work done in 1986 by an Ohio State University Ph.D. candidate, Brenda Laurel, author of Computers as Theater. She had set a task for herself “to determine how to go about building a system that would enable a person to participate in a dramatic action in a relatively uncon strained way influencing the events and outcomes through his or her choices and X I use the phrases “storytelling in cyberspace” and “cyber-storytelling” to describe stories that require digital computers for their telling. In particular, I intend for the computer to: create and maintain a narrative world, either through text, graphics or both; participate in retrieving or generating successive story events in response to actions of the interactor; mediate commu nication between objects in the narrative world and those in the real world. This description is intentionally broad, for I want to encompass diverse activities that may or may not mirror traditional story worlds of literature and cinema. 25 actions as a character in the action” (Laurel 1993). Such a system would need to track the actions of agents and changes in the environment and respond with plot advancements based on causality and some artistic incorporation of the art of plotting. At the time, technology had not advanced as far as Laurel’s imag ination, but as of this writing, artists, theoreticians and computer scientists are realizing Laurel’ s “fantasy machine” in spaces from desktop computer screens to fully-equipped virtual reality (VR) theaters.2 In so doing, they give rise to new forms of visual and participatory art and entertainment, forms that resist critique and analysis through the well-established tools and methodologies of literature and cinema. 
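The arrangement just described—an offstage director sensing what the interactor does and nudging actors toward pending plot points, with the actors simply "looping" while the interactor deliberates—can be pictured, very roughly, as a small sense-and-direct loop. The sketch below is my own illustration only; the trigger names and directions are invented and are not material from the Oz Project or from Laurel's design.

```python
# A minimal sketch of an offstage "director" loop: the director watches the
# interactor's actions and issues directions toward pending plot points;
# while the interactor deliberates, the actors simply loop in place.
# Trigger names and directions are invented for illustration.

PLOT_POINTS = [
    ("ask_clerk_for_ticket", "Clerk: be uncooperative, demand exact change"),
    ("talk_to_tom", "Tom: mention that you cannot see the timetable"),
    ("approach_ned", "Ned: ask the interactor for money"),
]

def actors_idle():
    """With nothing to respond to, the actors 'loop'—pacing, checking watches."""
    return "(actors loop in place)"

def run_scene(interactor_actions):
    pending = list(PLOT_POINTS)
    for action in interactor_actions:
        if action is None:                      # the interactor is still thinking
            print(actors_idle())
            continue
        print("interactor:", action)
        for trigger, direction in pending:      # does this action advance the plot?
            if trigger == action:
                print("  director (by radio):", direction)
                pending.remove((trigger, direction))
                break
    return pending                              # plot points never reached

if __name__ == "__main__":
    unused = run_scene([None, "ask_clerk_for_ticket", None, "talk_to_tom"])
    print("unreached plot points:", [t for t, _ in unused])
```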
Laurel’s fantasy machine concept is a case in point: is it a form of “interactive” cinema, accessible by film theory with the assistance of some interac tive “discourse” (yet to be specified); is it an electronic game writ large, but with its umbilical cord firmly attached to narrativity; is it a cognitive science research project whose goal of modeling story writing and predicting reader response can be vindicated by laboratory experiments and appeals to psychological theory; is it all of these; or, none? No one theory, context or field of study can offer a palliative to the difficulty one encounters grasping the nature of Laurel’ s ideas,3 and the nature of using the computer to help us tell stories in general. I believe the problems theorists have discussing and critiquing electronic games, computer-based storytelling and particularly the nature, and future, of narrative in cyberspace begins with over determined signifiers on the one hand, and the lack of a theoretical entryway into 2For the former refer to Mateas and Stern 2002. For the latter see Swartout et al. 2001. 3My reliance on examples from Laurel, the Oz Project and Janet Murray in no way is meant to discount the contributions from a long history of interactive theatrical events, including agitprop theater all the way up to Tamara and Tony V Tina’ s Wedding. My choice of exemplars is motivated by their relevance in reconciling the effects of narrativity and interactivity with the role of the computer in storytelling. 26 cyberspace as a site for storytelling on the other. While hacking a path out of this thicket, I will follow some historical threads that have made it both difficult and possible to talk about game playing and storytelling in cyberspace. My focus will be on activities like Laurel’s fantasy machine and the Oz Project’s experiments because the technology that enables one to conceive of such ideas as reality also separates the computer from literature, television and cinema as a mechanism for storytelling. The technology I refer to lies the domain of artificial intelligence (AI), the computer’s ability to simulate certain reasoning processes of the human mind. By combining the discourses of narrativity, cultural studies, computer science and AI, I hope to clarify an interactor’s engagement with storytelling on, or through the use of, computers, and suggest a methodology for reading and theorizing these stories. In the process, I will argue that one of the factors retarding theory develop ment is the difficulty in combining the discourses of narrative theory and computer science, especially artificial intelligence, into a unified discourse of storytelling in cyberspace. While this does not demand that all theorists must become computer experts, and vice versa, it does suggest that as new media technology continues to evolve, narrative theory and computer science, like the earth and the moon, will need to pull on each other to maintain the balance that allows both to progress in their orbits. I will defer further argument until I lay some historical groundwork. M edia and R eality Sitting in the cinema, lights out, giant images cavorting on the screen in front of them, it’s easy for spectators to forget who and where they are as the sink into the world portrayed in the film. So when Laurel writes that “[w]hen you are engrossed in [a movie], you forget about the projector, and you may even lose awareness of your own body” (Laurel 1993), she pays an homage to nearly half a century 27 of cinema theory. 
Though her heart may be in the right place, the loss of what Laurel alludes to here as a sense of reality opens the door to the question of how the term reality applies to cyberspace, where the action doesn’t take place in a darkened room and where the interactor is expected to be anything but physically immobile. In the cinema, one might talk, as does the seminal film theoretician Christian Metz, about the impression of reality, a perception Metz describes by carefully positioning cinema on a reality scale at a midpoint between theater and photog raphy. Theatrical reality is too real for Metz because in the theater the spectator is, indeed, watching something that is real. “The stage setting, for example, does not have the effect of creating a diegetic universe; it is only a convention within the real world,” he writes (Metz 1991). Photography is not real enough because it lacks motion, which endows its subjects with “a higher degree of reality, and the corporeality of objects.” If motion is a necessary condition for reality, but if the stage motion is separated from the audience by what Metz refers to as convention, where does that leave cyberspace, where many forms of cyber-storytelling, notably immersive electronic games and narrative spaces created inside virtual reality the aters precisely combine moving imagery and human interactors in the same place? “It is because the world does not intrude upon the fiction and constantly deny its claim to reality” is how Metz expresses cinematic separation. Interactivity inverts this position. Interactors expect reality to invade the diegesis; this invasion, per haps conflation is a better word, is one of the narrative’s driving forces. To deny it is to deny one of the founding principles of cyber-storytelling. Storytelling in cyberspace does not happen in the diegetic world of the cinema. The cinematic world is always an illusion, while the interactor in cyberspace may 28 or may not leave the real world. A fully immersive system, including a virtual real ity helmet or theater,4 spatial, multi-channel sound and three-dimensional imagery can obliterate real world reference points. Common experiences in this category include location-based entertainment (LBE), such as theme park rides. Disneyland park, Disney world resort and the Universal Studios theme parks in Florida and Hollywood are well-known for attractions that place audiences in subject posi tions that defy the reality of their normal existences. While cinema, LBE and cyber-storytelling share these immersive forms of alternate realities, storytelling in cyberspace typically shuns the profilmic event.5 In particular, the interactor is not bound by the camera lens, but is free to create an independent gaze and, in a properly functioning virtual world (one that maintains the correct visual and aural perspective relative to the interactor), will see imagery at the end of the gaze consistent with the rules of that world.6 The gaze is another one of the more interesting issues of agency in cyberspace. In the cinema, the spectator has the illusion of controlling the gaze in a world in which she not only has no presence but in a world in which she has no agency, 4 Virtual reality helmets include two small television screens that present the wearer with a stereoscopically correct rendering of a computer-generated landscape. 
Typically, these helmets contain senors that report the wearer’ s head (and sometimes eye) orientation to the computer, so as the interactor looks in different directions the image tracks with the wearer’s gaze. VR theaters employ large screens that dominate the interactor’s field of view. Depending upon the application, the screens may wrap around the interactor so there is a computer-rendered world visible from any viewing position in the horizontal plane. Although inherently two dimensional, VR theaters can incorporate three-dimensional imagery by fitting the interactor with shutter- equipped or polarized goggles. 5Profilmic events are those taking place in front of the camera while a film is being shot. This is not to say that there are no electronic games that make use of pre-recorded video clips or pre-programmed character behaviors, for that is not the case. These bits of pre-recorded mate rial, however, are designed to be building blocks for real-time experiences, not the experiences themselves. 61 would be tempted to say the inter actor would find a continuous world at the end of the gaze, but that is more common practice than a technical requirement. If the rules of the narrative world don’ t follow Newtonian physics, continuity is not a requirement. 29 especially over what the camera chooses to look at and what the screen chooses to frame.7 In electronic games, however, the player is granted (usually) wide dis cretion in choosing the direction of the gaze. While the screen may frame only portions of the narrative world, through the use of a keyboard, joystick or other input device, the player can manipulate her own gaze in real time and without any apparent mediation (although of course there is because the image is con structed by the computer based on pre-programmed data that includes field of view, focal length, distance between objects and three dimension to two dimen sion conversion). In virtual reality theaters, this agency in the narrative world is even more complete, sometimes offering full three-hundred-sixty-degree, three- dimensional views of the narrative world. While some might argue that such effects are the result of the interactor interacting with the computer, which then responds by generating the proper visual information, I argue that, based on the importance of the gaze in creating subject identification in the cinema, controlling the gaze is a component of agency; but while such control is a first step, it still does not allow the interactor to cross the line separating the real and narrative worlds. Breaking down this barrier, these separate spaces of story world and real world means that the interactor in the real world must be able to affect the story world. Affect, yes, but affect how? Certainly the interactor must be able to change objects in the story world: moving them, altering their size and shape. What, however, if the object is a also an agent, a representation of a living being? This adds a 7Film theorist Andre Bazin would argue that a spectator chooses what to look at within the frame. A JIT narrative, though, is more dynamic. The edges of a VR theater screen are not boundaries any more than the edges of one’s field of view are boundaries. To look beyond the edge of the current field of view one need only turn one’s head (or move the eyes). Interactors can also look behind, around, under or over objects. The film director presents a screen image and says (metaphorically), “Here is what I have to show you. 
Look around and find what you like.” A JIT narrative says, “If you don’t see what you want, change the view yourself.” 30 a second layer to the notion of change: the psychological. A character in a film is psychologically insulated from the audience.8 The actions of spectators will not influence the on-screen behavior of a film character. Not so in cyberspace. That a character responds physically and emotionally to actions of the interactor another part of agency; and, the response cannot be arbitrary. There is usually the requirement, as in a game, that the response be relevant or motivated by the interactor’ s actions. W hat I hope to show is that what motion is to Metz’s view of cinematic reality, agency is to storytelling in cyberspace. T he P ractice of Interactivity January 29, 1984 ushered in the era of popular interactive personal computing. On that day, Apple Computer introduced its unconventional Macintosh computer with an equally unconventional commercial during the Super Bowl. While one-on- one interaction with computers existed almost from their inception, the ability to issue instructions to a computer and receive meaningful responses, to control the behavior of the software, to manipulate elements of data while having them stand in for real-world objects (employees, bank balances, airplane wings), the idea that one could have such a relationship with an off-the-shelf computer at the very least injected this notion of “interactivity” into the minds of “the rest of us,” even if the rest of us did not know exactly what interactivity was. Thirteen years later, by the time Janet Murray published Hamlet on the Holodeck, computer users seemed to know all about interactivity. Thanks to the Macintosh and Microsoft Windows operating systems, interactivity was double clicking to open a document. It was dragging a folder into the trash. It was clicking 8I take a little liberty in this description, for the character is not only absent but hardly qualifies as an agent, either. 31 on a Web or CD-ROM hot spot to call forth another screen of images, text, video, animation and sound. Interactivity was also firing missiles at approaching alien spaceships, shooting energy torpedoes at out-of-control cyborgs, building cities and finding treasures. Yet despite the general grokking9 of interactivity as a concept, Murray took great pains to separate interactivity from another idea, agency. On the topic of interactivity she writes that computers create “an environment that is both procedural and participatory” (Murray 1997), and this combination “is what is most often meant when we say that computers are interactive.” By “procedu ral,” Murray was referencing a computer’s “defining ability to execute a series of rules;” by participatory she meant computers “are responsive to our input.” “Just as the primary representational property of the movie camera and projector is the photographic rendering of action over time,” she writes, “the primary representa tional property of the computer is the codified rendering of responsive behaviors.” While responding to input with defined and predictable output, a behavior that makes manuals and documentation possible but rarely results in surprising narra tive creations, sounds simple to understand, its simplicity is deceiving. The index of Hamlet on the Holodeck has this entry under interactivity: con fused with agency and activity, 128 [emphasis added]. 
Murray attributes the con fusion to an overdetermination of the term “interactivity” that leads to the con flation of agency with “the mere ability to move a joystick or click on a mouse.” As Murray stresses, these are activities, and activity does not necessarily equate to agency. “In a tabletop game of chance,” she observes, “players may be kept very busy spinning dials, moving game pieces, and exchanging money, but they may not have any true agency.” The actions are randomly controlled, and their 9Loosely translated, to grok means to be one with, to understand something globally and in an instant. The word comes from Robert Heinlein’s excellent science-fiction work Stranger in a Strange Land. 32 effects are not related to the players’ intentions. Nor does activity relate to agency through what Murray considers the somewhat specious measure of interactions per minute. “Some games, like chess, can have relatively few or infrequent actions but a high degree of agency, since the actions are highly autonomous, selected from a large range of possible choices, and wholly determine the course of the game.” The pleasures of agency are enabled by, but separate from, acts of interactivity. The pleasure is realized not by codified responses but, quite the opposite, by envi ronments rich in the vagaries of human behavior that still offer the safety of a controlled, if not predictable, experience. Interactivity is a mediated form of com munication, a mechanism for bridging the gap between the real world and a fictive world created by a computer. Interaction permits computer users to choose from a set of actions, guaranteeing only that users can carry out their intentions if they can be expressed as a combination of these actions. In storytelling situations, computers can help the interactor jump from page to page, or action to action, all over cyberspace. Computers neatly package several rendering features—sound, video and text—in one place, instead of requiring the interactor to employ a CD player, DVD player and book simultaneously. These are all important advances not only in storytelling but in communications in general; however, the computer’s ability to randomly and rapidly access a database, and its ability to support interfaces that grant easy access to this feature only make it an excellent “page turner,” underscoring its strength as a data processor. As symbol processors, digital computers are capable of performing a feat other media cannot: simulating the reasoning processes of humans. While data processing helps interactors follow the progress of characters in games and hypertext fiction, artificial intelligence technology can populate a world with autonomous agents and 33 unfurl an experience composed in real time as a stream of responses to the actions of an interactor. The intersection of AI and narrative is what makes Laurel’s fantasy machine, or the transference of the Oz Project to cyberspace, not only possible, but unre alizable in any non-digital, non-interactive medium.1 0 When Murray writes about narratives in cyberspace she uses cyberdrama as an umbrella term for the “vari ous new kinds of narrative” of “the coming digital story form.” “[L]ike the novel or the movie,” Murray writes, cyberdrama “will encompass many different for mats and styles but will essentially be a single distinctive entity.” It is AI that, I believe, fulfills this promise of a distinct entity. 
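Murray's distinction may be easier to see in miniature. In the first toy game below the player is kept busy but the outcome is drawn at random, untouched by intention; in the second a handful of choices wholly determines the result. Both games are invented for illustration and claim nothing about any particular system Murray discusses.

```python
import random

# Activity without agency versus agency with little activity, in toy form.

def game_of_chance(spins, seed=1):
    """Lots of busy input, none of it consulted when the outcome is decided."""
    rng = random.Random(seed)
    for _ in range(spins):
        rng.random()                 # the 'spinning'—ignored below
    return "win" if rng.random() > 0.5 else "lose"

def game_of_choices(moves):
    """Few actions, each of which directly shapes the result."""
    score = sum(1 if move == "strong" else -1 for move in moves)
    return "win" if score > 0 else "lose"

if __name__ == "__main__":
    print("fifty busy spins:", game_of_chance(50))
    print("three deliberate moves:", game_of_choices(["strong", "strong", "weak"]))
```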
Nevertheless, current narrative discourses surrounding storytelling with the computer-as-medium still emphasize the interactive quality of the experience, with such terms as “interactive fiction,” “interactive story,” and “interactive narrative.”1 1 Murray rejects this terminology, writing that cyberdrama “will not be an interactive this or that, however much it may draw upon tradition, but a reinvention of storytelling itself for the new digital medium” [emphasis added].1 2 Interactive storytelling, however, would prove to be stubbornly attached to those traditions Murray alludes to. 10I include here photography, cinema, sound recording, television, video recording or any com bination of these. With the rapid evolution of new media technologies, this uniqueness may not last for very long. 1 1 The terminology is not mine. These terms enjoy currency in computer science discourses that conflate storytelling, narrative and fiction with apparent immunity. 12I read this as a rejection by Murray of the appropriation and misuse of terminology by computer scientists. 34 Interactivity and th e 1950’s C inem a In the discourse of storytelling in cyberspace, the difference between interactivity and agency is maintained under constant tension by the effects of two historical threads: computer interactivity and the screen-based illusions of television and the cinema. Focusing on moments when technology-mediated storytelling extended, or at least attempted to extend, its narrative world into the real world, I have chosen the early 1950s as my starting point. It was then that television, through direct address and the positioning of the television screen as a window (on the world), challenged cinema’s dominance as the enabler of the mobilized gaze. In discourses surrounding television, the new medium was described as bringing the entertainment, news and education into the spectator’ s living room, an intermixing of the real and the mediated through technology. At the same time, in a different arena, office machine companies such as I.B.M. and Remington Rand began selling computers as commercial products. The similarities between the worlds created by the cinema and television and the then-unnamed world of cyberspace went unrecognized. Analog and digital technologies did not mix, and neither did their respective discourses; but both conditions were about to change. For fifty years, cinema owned the pleasure of the gaze. Then television hap pened. When Milton (“Uncle Miltie”) Berle virtually emptied movie theaters on Tuesday nights, as patrons stayed home to watch his Texaco Star Theater, inter activity became a short-lived attem pt to differentiate cinema from television. The cinema offered producers control over both the real world of the audience and the world created by the cinematic text. They exploited this difference by extending the action on the screen into the audience’s space with stunts ranging from crude 35 to clever. Primarily used as a gimmick to promote horror films, these market ing stunts positioned the spectator as a participant in the narrative (but without agency). One of the more fertile minds in this area belonged to producer William Castle. Castle’s first film for Columbia, a 1959 release entitled The Tingler, was a less- than-B-movie film that used his new creation, Percepto: The film runs out and the shadow of the Tingler creeps across the blank screen. Vincent Price warns that the only way to stop the monster is to scream. 
T hat’s the signal to the projectionist to throw the switch. Under ten or twelve seats were some electric motors, war surplus things that Castle got a bargain on. The motors vibrated the seat, in the hope of scaring a scream out of someone. Just in case it didn’ t Castle planted someone in the audience to get the screams rolling (McGee 1989). Another of Castle’s creations was Emergo, which he advertised as if it were some sort of 3-D without glasses: “The thrills fly right into the audience.” All that flew was a luminous skeleton on a fishing line during a key moment in House on Haunted Hill (1959). Castle did succeed in arousing audience participation. Most of the kids in the audience hurled objects at the skeleton in usually successful attempts to knock it off the line. Mr. Sardonicus (1962) was Castle’s last attem pt to make the audience a part of the story, and, were it not for his irrepressible hucksterism, it might have rep resented an early experiment in interactive cinema. Using luminescent drawings of a pointed thumb, the audience was invited to decide the fate of the title char acter. The movie stopped long enough for Castle to take a thumb count before running the thumbs down ending (he never shot one for thumbs up). Castle figured he knew his audience pretty well. 36 As silly as some of these stunts might seem (it was the 1950s after all, a time of goldfish swallowing and telephone booth stuffing), their importance in inter activity’s relationship with narrative lies in their purposeful attem pt to give the spectator an additional illusion of agency by allowing audience members to physi cally act out interpretations generated by the film (fear, horror, amusement) and to receive feedback (choosing an ending, scaring away the Tingler) based on their actions.1 3 Hokey as it might have been, this technique would reappear in later relationships with graphical user interfaces (GUI) on personal computers. Whatever the screen’ s relationship with cinema theory, to interactivity the screen is a barrier. T hat’s because cinematic illusion demands the screen deny its corporeal existence, as Uncle Josh so vividly demonstrates in Edwin S. Porter’s Uncle Josh at the Motion Picture Show (1902). The film depicts what at the time would be called a “rube’s” attem pts to get the attention of characters on the screen and to interject himself into narrative situations. His attempts only lead to frustration, and Uncle Josh winds up tearing down the screen and revealing the startled projectionist behind it. The screen might be, among its many metaphors, a window, but it was a window marked “Do Not Touch.” Television, however, would literally change that. Its screen, described with great sweep as a window on the world would, through the technology behind it, allow the characters inside the television to have their own window into viewers’ living rooms. 1 3 Again, I want to emphasize that I am not discounting the contributions of film theory to our understanding of the active cinema spectator. My project extends that theory by contrasting the active spectator with the physically active interactor and, in the process, distinguishes between agency and the illusion of agency. 37 A ccessible Interactivity and 1950’s T elevision In February, 2003, William Prichett, Sr. passed away at the age of seventy nine. 
His obituary in the New York Times read, in part: Harry William Prichett Sr., an architect of television’ s first attempt at interactive broadcasting with the 1950’ s children’s program “Winky Dink and You,” died on Feb. 5 at Roosevelt Hospital in Manhattan, [emphasis added] (New York Times 2/6/03). Prichett, and his partner, Edwin WyckofF, created a show that extended the narrative world of television into living rooms and set the stage for one-on-one narrative participation now so causally associated with new media. While they couldn’t control life in the living room of the spectator, they could manage an important part of that world: the television screen. The screen was a magic window where story characters and inter actors could come together, even if only to wave at each other from opposite sides of the glass. Jack Barry hosted the original Winky Dink program, which premiered on CBS in in 1953. Stories on the program relied heavily on viewer participation to make the world “whole” for Winky and his friends. Other than Barry, the cast consisted of animated characters, but not the fluidly animated characters of the cinema. Winky, et al. moved with staccato jumps, not surprising, considering how the animation was performed. Billy Ingram runs a Web site largely devoted to the history of the program: Ed WyckofF handled the animation with a pair of tweezers and a bunch of Winky heads and bodies and eyes and arms. “We used a 16mm Animatic filmstrip projector which was faster than the eye and damn good in terms of budget” [according to Ed WyckofF]1 4 14Source: < http://www.tvparty.com/requested2b.html> accessed 4/10/03. 38 The choppiness of the animation was a necessary part of the substance of the show, not merely a stylistic affectation. When Winky was on an adventure, he routinely discovered objects missing from his world. He’d need a ladder to climb over a wall, or a board to cross a stream. The spectator helped out by drawing the necessary item, the drawing becoming part of the narrative space. The staccato movements of the characters became little interactive spaces, sites where the viewer could make a contribution to the narrative world. The process was simple. Young viewers (more likely their parents) mailed in fifty cents for a Winky Dink kit, which consisted of a clear plastic sheet, a grease pencil, and a cloth for cleaning the plastic (and also for creating enough static electricity on the sheet to allow it to stick to the television screen). When Winky needed help, the narrative would pause while a ladder, for example, appeared on the screen, one fragment at a time. One of the sides would appear for a few seconds, then vanish to be replaced by a rung somewhere else on the screen, and so forth. Young viewers traced the pieces of the ladder on the plastic sheet. When they were done, they saw Winky climb over a wall on the ladder they drew. Viewers not participating saw Winky climb thin air (which may have been more interesting). For the young audience, Winky Dink gave its members a sense of being a necessary part of the story.1 5 The program also conflated the drawing made by the spectator, and the drawings made by the show’s animators, at the screen, which became a shared space, one side in the viewer’ s living room, the other in a world created in a television studio. 
Much of the discourse surrounding “interactive” programs today have reference points in Winky Dink; unfortunately, technology had yet to solve the problem of the screen 15 Winky Dink and You met its demise, I speculate, because so many viewers, while waiting for their parents to order the Winky Dink kit, took to the television with crayons and without the benefit of the magic plastic sheet. 39 as a barrier, so the program left dangling the issues of actual interactivity and the potential for agency. While Winky Dink did not shatter the screen barrier, it turned out that its model of two worlds, one real and one created by technology, both meeting at a screen that seemed on the verge of being transgressed, was a powerful image.1 6 All the possibilities of merging the both sides of the screen—possibilities pointed to by the cinema, and now by television—were about to come together, but not as entertainment. At least, not yet. A B rief H istory of C om puter Interactivity The invention of the computer brought with it the invention of a new job: the computer programmer. The programmer made the computer perform in useful ways by arranging sequences of instructions to form a computer program in the computer’s memory. Instructions represent the features directly wired into the computer’s hardware. As such, they represent the smallest unit of control a pro grammer can have over the machine. Instructions are stored in memory, just like data, which means both the programmer and the computer have the ability to change the program. The program and its data are the common world of the programmer and the computer; in this world, both man1 7 and machine shared agency. Programmers would sit at the control console of multi-million dollar com puters testing out new programs hour after hour. Since these computers could 16This concept owes a debt of dramatic gratitude to early filmmakers. In addition to the exploits of Uncle Josh, Buster Keaton projected himself onto the screen in Sherlock, Jr., (1924) That Keaton’ s real-world character was a film projectionist only underscored the use of the projection metaphor. 17In the early days of computing and computer science, these fields were almost entirely occu pied by men. 40 only run one program at a time, their multi-million dollar price tags rapidly made it impractical for a single programmer, or programming team, to usurp control of the machine for such uninterrupted periods. The economic equations of computer ownership spurred the creation of an age of batch processing. In batch processing, programmers submitted jobs, programs to run on the com puter, to a computer center and received the results of running those jobs hours later. The slightest mistake, a missing parenthesis or a simple interchange of two letters, usually caused the job to abort; and even if the job ran to completion, errors in the output meant the programmer had to scrutinize his program, make his best guess as to what caused the errors, fix existing instructions or add new ones, and resubmit his job (with fingers crossed). At busy corporate and academic computer centers, a programmer could count on only a few opportunities to test his program in the course of a day. Within a five-year span, the job of program mer lost both interactivity and agency within the computer’s world, a world the programmer could now access only through the use of highly-mediating operating system software, software usually created by the computer’s manufacturer. 
Five years later, at the start of the 1960s, the economics of computing took interactivity off in yet another direction. The time required for a computer to execute one instruction is called a cycle.1 * Like airplane seats, cycles have economic value that vanishes once they go unused; and batch processing systems threw away cycles the way drunken sailors threw away money. I’ m not privy to how sailors got their reputation, but batch systems were wasteful because of the disparity between the speed of the central processing unit (CPU) and the input/output (I/O) equipment—printers, disks and the like. It 18This is no longer strictly true, but it was in the 1960s. Today, cycles are still linked to the time to execute different instructions. 41 is the CPU that performs the functions people attribute to computers: arithmetic, logic and data manipulation.1 9 Compared to the CPU, I/O equipment has all the speed of a herd of turtles. If the computer system only has a single job to run, and if that job needs to read data from the hard disk, the CPU must wait the equivalent of several hundred, or thousand, cycles before the disk data becomes available and processing can proceed. To the executives whose companies paid the monthly leases on these computers, that CPU idle time had an attached dollar amount that increasingly became intolerable. To programmers and systems designers, their inability to be close to their machines was met with equal disdain. In 1960-61, ideas circulated among computer engineers about the practicality of recovering those wasted cycles by sharing one computer among many users. While one user’s job was waiting for some I/O hardware, another user’s job could use the CPU for mathematical calculations. Cycles normally wasted would now be applied to useful work as long as the computer had a sufficient number of jobs to switch among. One method of attracting those jobs (and users) was by offering to install Teletypes in users’ offices or give users modems2 0 for connecting remotely over telephone lines. Programmers could run their own jobs and see the results immediately, reflecting the good old days of less than a decade before when programming was a hands- on job. The process was called timesharing, and most closely associated with its development were two MIT initiatives, Project MAC (Multiple Access Computers), and its offspring, Multics. 19In personal computers, the CPU has become synonymous with the microprocessor—Intel’s Pentium processor family or Apple’s G3, G4 and G5 series. 20Modems of the day ran at a blistering 300 baud (roughly 300 bits per second), which hardly seems like a blip on the communications landscape when compared with today’s 56,000 baud modems. Like water, technology seeks its own level. Three hundred baud was a perfect match for Teletypes which, because of their mechanical design, could not operate any faster. The demand for higher speed modems came about as Teletypes were replaced by electronic terminals that weren’t bound by mechanical limits. 42 In 1965, F. J. Corbato of MIT and V. A. Vyssotsky, a colleague of his from Bell Telephone Laboratories, Inc., in Murray Hill, New Jersey, presented a paper at a special session of the Fall Joint Computer Conference. The paper outlined the goals of Multics, the first attem pt at creating a large-scale, general-purpose timeshared computer system. 
Almost immediately, they addressed the question of programmer and user agency, acknowledging that "[t]he impetus for time-sharing first arose from professional programmers because of their constant frustration in debugging programs at batch processing installations."21 Thus, the original goal was to time-share computers to allow "simultaneous access [to a computer] by several persons," from which each user would share the illusion that he had "the whole machine at his disposal." Multics gave physical form to what may have been an unconscious desire on the part of programmers to return to the womb, to those early one-on-one days, the days of programmer versus the machine. Multics, however, also had a life of its own, one that went beyond the objectives of its creators. Freed from the tyranny of limited hours and the ritual visit to the holy ground on which the computer stood, computer users developed new applications for both hardware and software. Instead of simultaneous access to a computer being the major attraction, users' enthusiasm for Project MAC had more to do with the "availability at one's fingertips [the] facilities for editing, compiling, debugging, and running [computer programs] in one continuous interactive session." These users, it turned out, were increasingly not computer professionals, but "persons for whom computers are tools for other objectives." Timesharing was encouraging research that "would not have been undertaken otherwise...." 21The author(s) of these historical excerpts are anonymous; to read more about the history of Multics point your Web browser to <ftp://ftp.stratus.com/pub/vos/multics/tvv/history.html> accessed 1/3/05. As important as specific research projects were, the umbrella concept of Multics, bringing the computer to the user and providing that user with the ability to operate the computer in real time, proved to be one of the more important advances in computing that stemmed from "the drastic improvement in access time and convenience" afforded by timesharing. The new technology gave users "the ability to control and affect interactively the course of a process whether it involves numerical computation or manipulation of symbols." The importance of timesharing in the ideology and discourses of interactivity and agency cannot be overlooked. A decade that began in a most modern way, with men controlling the vast powers of machines with their bare hands, ended with those same men working in a simulated environment where, much like the cinema, nothing was real. Programmers were given computer terminals in their offices—basically a keyboard and a monitor only capable of displaying text. The appearance that one used these tools to establish ultimate control of the computer was a mere illusion created by a clever program that created a fantasy world in which the computer one saw wasn't really there, but was put there by another computer that erased its presence. Programmers now existed in a world created by the computer, a world where they not only had the ability to interact with the computer, but the power of agency as well. Most timesharing systems allowed programmers to write programs at the individual instruction level, much the way programming was done ten years earlier, re-creating, through illusion, the sense of agency that existed when computers were in their infancy.
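The economic argument sketched above—that cycles the processor would otherwise waste waiting on slow input/output can be handed to another user's job—can be made concrete with a toy, tick-by-tick model. The job mix and timings below are invented for the example and say nothing about how Multics or Project MAC actually scheduled work.

```python
# Batch versus (idealized) timesharing: in batch mode one job owns the whole
# machine and the processor idles through that job's I/O; in timeshared mode
# the processor is handed to another job whenever the current one is waiting.

def simulate(jobs, timeshared):
    """Each job is a list of steps, ('cpu', ticks) or ('io', ticks)."""
    state = [{"steps": list(s), "i": 0, "left": s[0][1]} for s in jobs]
    clock = busy = 0

    def done(j): return j["i"] >= len(j["steps"])
    def kind(j): return j["steps"][j["i"]][0]
    def advance(j):
        j["left"] -= 1
        if j["left"] == 0:
            j["i"] += 1
            if not done(j):
                j["left"] = j["steps"][j["i"]][1]

    while not all(done(j) for j in state):
        clock += 1
        if timeshared:
            waiting = [j for j in state if not done(j) and kind(j) == "io"]
            runnable = next((j for j in state if not done(j) and kind(j) == "cpu"), None)
            if runnable:                 # a recovered cycle: someone always computes
                busy += 1
                advance(runnable)
            for j in waiting:            # I/O proceeds on its own, concurrently
                advance(j)
        else:                            # batch: the first unfinished job owns the machine
            j = next(j for j in state if not done(j))
            if kind(j) == "cpu":
                busy += 1
            advance(j)
    return busy, clock

if __name__ == "__main__":
    jobs = [[("cpu", 5), ("io", 50), ("cpu", 5)] for _ in range(4)]
    for mode in (False, True):
        busy, clock = simulate(jobs, timeshared=mode)
        print("timeshared" if mode else "batch     ",
              f"CPU busy {busy} of {clock} cycles ({100 * busy // clock}% utilization)")
```

Run with this invented job mix, the batch machine keeps its processor busy for a small fraction of the elapsed cycles, while the timeshared machine recovers most of the waiting time—the disparity that made leasing executives and programmers alike intolerant of batch processing.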
At about the same time MIT scientists were mulling over large-scale timeshar ing, a small company west of Boston was about to deliver its first computer, only no one at the company called it a computer. Their venture capitalists wouldn’t allow 44 it. The company was the Digital Equipment Corporation (DEC) of Maynard, Mas sachusetts, and its founders were two engineers from MIT’s Lincoln Laboratories. The engineers, Ken Olson and Harlan Andersen, started their business manufactur ing the electronic components they originally designed at Lincoln Labs; eventually, however, they assembled these components into the computer that wasn’t: The venture capitalist’s insistence on avoiding the term computer was based on the stereotype that computers were big and expensive, needing a computer center and a large staff; by using the term Programmable Data Processor, or PDP, DEC avoided this stereotype.2 2 The PDP-1 (it’s hard for me not to call it a computer) was everything the mainframe computer of 1960 was not: it was inexpensive (about $120,000 ver sus $1,000,000); it required no special computer room, no raised flooring, no air conditioning; it operated on the normal one hundred twenty-volt, sixty-cycle elec tricity available from standard office outlets; and, it was designed to be operated by individuals running their own programs. In fact, early uses of the PDP-1 were often in non-computer environments, such as physics, chemistry or biology labo ratories, where the machine was used to capture and analyze experimental data. Subsequent members of the PDP family of machines were smaller and less expen sive, earning the product line recognition as the world’s first minicomputers. For the moment, the PDP-1 was an opportunity for communities within the academy, and occasionally the private sector, to set the rules for being close to their own machines. Their access and interactive ability was no longer controlled by someone else. These communities had the power to rewrite operating systems and other programs to behave in a manner suitable for their needs.2 3 22<htt,p: //www.faqs.org/faqs/dec-faq/pdp8/ section-1 ,html> accessed 4/3/03. 23The PDP-1 also came with a timesharing operating system, though nothing as sophisticated as Multics. 45 Figure 2.1: A typical PDP-1 installation. The Cathode Ray Tube is in the center. The typewriter-looking device at the right displayed printed output. The two film like reels above the printer were for reading programs from paper tape. Source: <http: / /www3.sympatico.ca/maury/games/space/spacewar.html> One of the attractions of the PDP-1 was the incorporation of a Cathode Ray Tube (CRT) display. Communication with a computer was no longer restricted to typing and reading text. The CRT meant graphics were another way a computer could speak to its user. It wasn’t long before a team of MIT programmers got together to figure out a practical (in college student terms) demonstration of the CRT. The result was Spacewar, a computer classic, and the first known computer game. “The PD P -l’s operating system was the first to allow multiple users to share the computer simultaneously.”2 4 This simultaneity was a prerequisite for the game, which pit two players against each other in an environment that included warring spaceships and photon torpedoes. “Each player could maneuver a spaceship and score by firing missiles at his opponent while avoiding the gravitational pull of the 24Source: < http://inventors.about.com/library/weekly/aa090198.htm> accessed 4/3/03. 
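The mechanics of Spacewar, as described here and in the caption of Figure 2.2 and the discussion that follows, can be reconstructed schematically. The original ran as PDP-1 assembly language; the Python sketch below is only an illustration of the kind of world it created, with invented constants, and it shows in particular why the heavy central star punishes a player who chooses not to act.

```python
import math

# Schematic reconstruction of a Spacewar-like world: two ships and a heavy
# central star whose inverse-square gravity pulls idle ships to their doom.
# Constants, structure and thresholds are invented for illustration.

G = 500.0     # invented gravitational constant
DT = 0.1      # simulation time step

class Ship:
    def __init__(self, x, y, vx, vy):
        self.x, self.y, self.vx, self.vy = x, y, vx, vy
        self.alive = True

    def step(self, thrust=(0.0, 0.0)):
        r2 = self.x ** 2 + self.y ** 2
        r = math.sqrt(r2)
        if r < 5.0:                      # fell into the star
            self.alive = False
            return
        pull = G / r2                    # inverse-square pull toward the origin
        ax = -pull * self.x / r + thrust[0]
        ay = -pull * self.y / r + thrust[1]
        self.vx += ax * DT
        self.vy += ay * DT
        self.x += self.vx * DT
        self.y += self.vy * DT

if __name__ == "__main__":
    idle = Ship(100, 0, 0, 0)        # a player who refuses to act
    pilot = Ship(-100, 0, 0, 2.2)    # a player who maneuvers (sideways velocity)
    for _ in range(2000):
        idle.step()
        pilot.step()
    print("idle ship survives:", idle.alive)     # the idle ship falls into the star
    print("active ship survives:", pilot.alive)  # the maneuvering ship stays in orbit
```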
46 Figure 2.2: By today’s standards Spacewar is a pretty simple game. Two spaceships start on either end of the screen, and are drawn toward the sun in the middle. Each ship is armed with torpedoes fired out of the front of the ship which eventually hit something or burn out. The players have to shoot the other ship while avoiding running into anything. Source: Chttp://www3.sympatico.ca/maury/games/space/spacewar.html> sun....” By the mid-1960s, one could hardly find a research computer th at didn’t have Spacewar on it, much to the chagrin of the computers’ owners who footed the bill for very expensive computer time. Steve Russell2 5 transferred to Stanford University, where he introduced computer game programming and Spacewar to an engineering student called Nolan Bushnell. Bushnell went on to write the first coin-operated computer arcade game and start Atari Computers. Dan Edwards made a subsequent enhancement to the original version of the game that pre-acknowledged the role of action in the field of agency. At first, players could simply fire their torpedoes without concerning themselves with the complexities of actually maneuvering their spaceships. Edwards added a heavy star in the center of the screen. The star pulled spaceships toward it, whether a player wanted to move that way or not. Now, no action meant a ship was pulled into 25 Spacewar was conceived in 1961 by Martin Graetz, Steve Russell and Wayne Wiitanen. It was first realized on the PDP-1 in 1962 by Steve Russell, Peter Samson, Dan Edwards and Martin Graetz, together with Alan Kotok, Steve Piner and Robert A. Saunders. Source: < http://inventors.about.com/library/weekly/aa090198.htm> accessed 4/3/03 the star and destroyed. Players were forced to navigate space—they were forced into more complex game play—unless they wanted to end their participation in a fiery hell (or what passed for that, given the relatively simple graphics of the day).2 6 This type of behavior is common in today’s first-person electronic games. The interactor can choose not to act, but that does not mean the narrative world comes to a halt. Not acting has its consequences, usually death at the hands of the enemy; but increasing the dynamics of the game was only a secondary contribution to electronic game play. Spacewar's enduring contribution was the creation of a narrative world in which a computer and spectators looking into that world were allowed to interact with its objects and, thanks to the world’s simplicity, experience a high degree of agency. While Spacewar's designers looked into the CRT and saw outer space another graduate student, Ivan Sutherland, saw something different. Visualizing the CRT as a writing surface, much like the producers of Winky Dink and You, Suther land invented the light pen, a handheld pointing device. Touch the light pen to any illuminated area of the screen and the chubby, cylindrical wand immediately reported the coordinates of the pen to the computer. Sutherland2 7 had created a screen with unique properties: on one side, the computer, as in the Spacewar game, could create a fictional, line-drawn world. On the other side, the spectator could, using the light pen, not only interact with the world (this object as opposed to that object), but experience the agency (move this object from here to there) Winky Dink and You could only simulate. 26 Source: chttp: / / www3 .sympatico.ca/maury/games / space / spacewar .html> accessed 9/15/04 27I use Sutherland’s name in a representative fashion. 
He was the last in a chain of developers and inventors who, together, vanquished the problem. 48 From G am es to Inform ation Overload The computer scientists of the 1960s were interested in very different worlds than those of games and stories. Their interests lay in grappling with a world of too much information. One researcher in particular, Douglas C. Engelbart of the Stan ford Research Institute2 8 in Menlo Park, California, was intrigued by the idea of using computer technology to augment human intellect. The advent of individu ally accessible computers equipped with CRT screens and light pens meant a new, more flexible way of presenting a computer user with information. Engelbart’ s imagination was fired by As We May Think, a seminal paper written by the bril liant inventor and engineer, Vannevar Bush. In it, Bush describes his vision for a memex, a device for storing and retrieving data by building associative links between pieces of information.2 9 The article, first published in 1945, when digital computers were room-sized affairs with price tags to match, envisioned the memex as an opto-mechanical device built into an office desk. Two translucent screens covered the work area. Inside, microfilm storage provided ample room for more information than, according to Bush’s calculations, any one person could store in a lifetime. Most of the internal space was devoted to the mechanics of storing and retrieving (which had to be accomplished through gears and motors). It was not the mechanical means that fascinated Bush. He had built many analog computers requiring just that sort of technology. W hat captured Bush’s imagination were the possibilities attendant to a system that could augment the ability to travel through 28From its founding in 1946 until 1977, Stanford Research Institute, commonly called SRI, was affiliated with Stanford University. In 1977, the university spun off the institute, which changed its name to SRI International. 29Bush’s article has been reprinted often. The original appeared in the Atlantic Monthly's July, 1945 issue. For a Web-resident copy of the original article from the magazine’s files see < http://www.theatlantic.com/unbound/ flashbks/computer/bushf.htm> (accessed 4/14/03). 49 vast collections of data. A human’s ability to make associative connections between data was unique, but the brain’s data retention was notoriously poor. For perfect retention, information had to be moved out of the brain and stored indexically, leading to slow storage and retrieval. Bush saw the memex as an elegant way out of the bind: Selection by association, rather than indexing, may yet be mechanized. One cannot hope thus to equal the speed and flexibility with which the mind follows an associative trail, but it should be possible to beat the mind decisively in regard to the permanence and clarity of the items resurrected from storage. It was this approach that fascinated Engelbart, as he argued in a 1962 report prepared for the Air Force Office of Scientific Research (AFOSR), the govern ment agency funding his work. Engelbart wanted to continue along Bush’s path of improving what Engelbart called the human-intellect system. “[Tjhere is no particular reason,” he writes, “ not to expect gains in personal intellectual effec tiveness from a concerted system-oriented approach that compare to those made in personal geographic mobility since horseback and sailboat days” (Engelbart 1962). 
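Bush's "selection by association, rather than indexing"—items recalled by following trails the user has built rather than by consulting a fixed catalogue—can be pictured as a tiny link store. The sketch below is my own illustration, loosely patterned on the bow-and-arrow example Bush himself uses in the essay; the records and trails are invented.

```python
# A toy "memex": items are recalled by walking associative trails built by
# the user, not by looking them up in an index. Records and links are
# invented for illustration, loosely echoing Bush's own example.

records = {
    "turkish bow": "Notes on the properties of the short Turkish bow.",
    "english longbow": "Notes on the longbow and its use in the field.",
    "elastic materials": "Physical constants of available elastic materials.",
}

trails = {}   # item -> items the user has joined to it, in the order linked

def associate(a, b):
    """Join two items into a trail, as a memex user would with a keystroke."""
    trails.setdefault(a, []).append(b)
    trails.setdefault(b, []).append(a)

def follow(start, hops):
    """Recall items by association: walk the trail outward from `start`."""
    seen, frontier = [start], [start]
    for _ in range(hops):
        frontier = [n for item in frontier for n in trails.get(item, [])
                    if n not in seen]
        seen.extend(frontier)
    return seen

if __name__ == "__main__":
    associate("turkish bow", "english longbow")
    associate("english longbow", "elastic materials")
    for item in follow("turkish bow", hops=2):
        print(item, "->", records[item])
```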
Engelbart and his team at SRI went on to develop NLS (oN Line System), what in today’ s terminology might be called the first multimedia computer workstation. It consisted of a CRT, a light pen, a keyboard and a pointing device of Engelbart’s own design called a mouse. At the 1968 Fall Joint Computer Conference in San Francisco, Engelbart demonstrated NLS. It was the world debut of personal computing with a com puter mouse issuing instructions to a networked computer system which, in turn, 50 demonstrated feats of hypertext linking, real-time text editing, multiple window ing with CRT displays and shared-screen teleconferencing.3 0 Although the graphics were a far cry from the rich visual media of the Web, NLS implemented a funda mental, and recognizable, form of hypertext. Engelbart used both the light pen and mouse to interact with the computer, and the screen was the point of inter section between two worlds, Engelbart’s world and the world of data (cyberspace was still an unknown term in 1968) residing inside a timesharing system at SRI’s headquarters twenty miles or so south along Highway 101. When Doug Engelbart was a graduate student in electrical engineering...he dreamed of “flying” through a variety of information spaces3 1 The light pen gave Engelbart the ability to move around the two-dimensional surface of the screen. The mouse gave him the ability to move into the data in a three-dimensional way, a passage into the world of the computer. W hat Engelbart’s research in the 1960s did, in addition to augmenting human intelligence, was create a real-time generated world in which both the user and the computer had agency. While not the type of fictional world commonly found in literature and cinema, the environment created by NLS bore the marks of another famous two-dimensional world, the one created by Edwin Abbott in his book Flat- land. Although Flatland objects had agency in their world and NLS objects did not, the two shared a certain minimalist quality, entertaining only shapes that slid around a two-dimensional surface. The limited choice of shapes, the lack of color and the lack of depth (constraining movement and eliminating the option of rotat ing objects off their plane) reduced the number of actions inhabitants could take. 30Source: < http://www.sri.com/about/timeline/mouse.html> accessed 4/14/03. Streaming video of this demonstration is available, with written commentary, at < http://sloan.stanford.edu/ mousesite/1968Demo.html> accessed 4/14/03. 31Source: < http://www.sri.com/about/timehne/mouse.html> accessed 4/14/03. 51 While Abbott’ s triangles and circles were agents in their world, the world created by NLS was populated strictly by non-intelligent objects. W ith these limitations, programmers were able to construct an interface that offered the user a set of inter actions accessing (almost) all possible manipulations he might want to perform. The user actually affected the world behind the screen, both in terms of what was visible and how the data were represented in the computer’s memory. While the computer had no agency in the user’s world (other than the ability to influence the user through its responses), with strict structural limits the computer did create a space where both the user and the computer could influence the same objects and each other. It was agency of the most basic kind, but with NLS, agency was slowly becoming a reality, leaving illusion to the domain of the cinema spectator. 
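The claim that NLS opened a space in which both the user and the computer could influence the same objects, within strict structural limits, can be pictured as a small world of flat objects that a user command and a program rule both move. The sketch below is not NLS code; the command names and the snapping rule are invented for illustration.

```python
# A minimal two-dimensional object world in the spirit of the NLS discussion:
# flat objects that the user (via commands) and the program (via a rule) both
# manipulate. The commands and the tidying rule are invented.

objects = {"heading": (10, 10), "paragraph": (10, 40), "link": (200, 40)}

def user_move(name, dx, dy):
    """The user's side of the screen: a pointing device drags an object."""
    x, y = objects[name]
    objects[name] = (x + dx, y + dy)

def system_tidy():
    """The computer's side: snap every object to a 20-unit grid, acting on
    the very objects the user just touched."""
    for name, (x, y) in objects.items():
        objects[name] = (round(x / 20) * 20, round(y / 20) * 20)

if __name__ == "__main__":
    user_move("link", -37, 5)     # the user acts on the world
    system_tidy()                 # the computer acts on the same world
    print(objects)
```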
In NLS, in Spacewar, in Pong and in many of the computer games that followed, the world created by the computer was populated with objects. Yes, today's games have demons and animals and aliens and ordinary people in their worlds, but these characters are "sprites" or puppets. Their behaviors are carefully pre-scripted and called forth, either intentionally or as a result of certain circumstances, through player interactions. Sprites have no feelings to be hurt, no goals to be accomplished, no plans to make or revise. They are objects, not agents and, as such, cannot exercise agency. Like the data in NLS and the spaceships in Spacewar, these objects are manipulated through user interactivity. Only now, there's a problem. As game worlds become richer, the number of options available to the player increases dramatically.32 In the end, something has to yield to the real world, and what is offered up for sacrifice is the option of performing arbitrary actions. Computers, and their programmers, do not deal well with infinities; one cannot program for all eventualities. Instead, game designers select a subset of actions to present to the player. Granted, these options are carefully chosen to simulate the effects of agency, but the options form only a simulation. Adventure was an early quest-type game that was entirely text-based. It was not unusual, in fact it was a part of the game, for the player to find herself in a maze of tunnels. If the player could dig an arbitrary tunnel to foster an escape, such agency might enhance a player's feeling of control over the environment but would be impossible for a programmer to anticipate. No matter how strong or physically capable a player may feel, digging a tunnel, although reasonable under the laws of physics, cannot be allowed in the game world. From making the tunnels out of impenetrable bedrock that doesn't yield to fingers, rocks or other objects in the narrative world to placing the tunnels under a body of water where a breach of the walls would cause a flood and instant death, the game designers must limit the player's agency while preserving the illusion of its existence. The simulation of agency through interactivity is one form of storytelling in cyberspace. As Laurel, the Oz Project and more recent work, such as that of former Oz Project researchers Michael Mateas and Andrew Stern,33 demonstrate, there are constructed worlds to explore that extend beyond the limits of today's electronic games. Inhabitants of these worlds include both objects and agents, characters capable of independent 32Here is a simple example of the problem. Imagine a piece of interactive fiction made up of units. Each unit in the story gives the reader the option of choosing one of two links to move on to another unit at the next story level. If the site consists of only ten levels, that is, after the tenth choice the user has reached one of a set of units designated as "the ends," the story has 2^10, or 1,024, paths through it. Add two more levels and that number jumps to over four thousand. Now, imagine these paths are to have a certain coherence. For example, characters can't die and then be discovered alive in the future. That means the authors and programmers must analyze, and write for, these four thousand plus story paths. Such an undertaking is, at the very least, intellectually unmanageable; and, this is a simple story, with only twelve levels and no doubling back to previously-visited units.
Writing more complex interactive stories would be impossible without mechanical (computer) assistance. Yet, large number of choices like this is exactly what game designers and programmers are presented with when implementing experience-rich games. 33 See Mateas and Stern 2002. 53 action driven by needs, wants and goals. In the Oz Project, these agents were played by human actors, but the new worlds are home to autonomous agents, to characters that owe their existence to computer hardware and software. The difference between a world of sprites and a world of autonomous agents is the difference between playing blackjack or poker in a Las Vegas casino. Blackjack is played against the house, represented by a dealer who follows very strict rules in choosing her course of play. She must hit on sixteen and stand on seventeen. She cannot double down or split a pair. Play poker, however, and the situation is dramatically different. The dealer is a convenience while the challenge comes from the other players at the table. Poker requires a knowledge of odds and psychology, the ability to act while catching others in the act of acting. The other players make their own decisions, bound by the rules of the game not the rules of the house. They make plans and change them based, in part, on the actions of others and how those actions influence them (and their goal, which undoubtedly is some form of winning). Players no longer play against an object (the house or a deck of cards), they play against intelligence. Players don’t make selections from a list of approved actions, they react spontaneously, adjusting to changing situations. That is the behavior in these new forms of storytelling: agency not through simulation, but through intelligent, autonomous agents. It is here that artificial intelligence enters into the discussion of agency. C om puters T hat T hink While studying for an advanced degree in computer science at the University of California, Berkeley, I was persuaded by a friend to serve as tour guide for a class of eighth-grade girls. At the time, I worked on a project that was developing its own timeshared computer system and we had our very own computer lab with our 54 very own minicomputer. In this antediluvian period before personal computing, the closest most people came to computers were the punch cards that accompanied their utility bills, and these young students were hardly an exception. I was about to broaden their horizons. That is why, at the appointed date and time, twelve eighth-grade girls marched into Cory Hall on the Cal campus to experience, close up, their first encounter with a real computer. As befitting their age, the girls quickly grew tired of flashing lights and the occasional rat-ta-tat-tat of the printer. They wanted to know when the computer would do something. I led them to a corner of the room where we had installed a Teletype machine for programmers to use. I started a program called ELIZA, and the teacher and I retreated to a distant corner to gossip about our mutual friend. Some forty-five minutes later, and after almost continuous giggling and shouting from the group of girls, the teacher and I returned to end the tour and shepherd the students back to their waiting transportation. Not only did we nearly have to pry them away from the Teletype, I had to solemnly promise to destroy, without any further inspection, the knee-deep pile of paper accumulated behind the machine. I kept half my word. 
The paper belongs to history; but I did peek, just a little, at what secrets these girls wanted kept. To understand what twelve girls, all roughly thirteen years of age, would confide to a computer, you have to understand ELIZA.

The AI field was approaching adolescence when Joseph Weizenbaum, then a professor at MIT, wrote a paper for the January 1966 issue of the Communications of the ACM (Association for Computing Machinery, one of the oldest and most respected societies for computer professionals). The paper bore the somewhat bookish title ELIZA—A Computer Program For the Study of Natural Language Communication Between Man and Machine. What the young, teenage girls did was "tell" ELIZA all about their friends, enemies, crushes on boys, transgressions real and imagined, the kind of soul-baring that occupies the wee hours of pajama parties. They found Weizenbaum's ELIZA34 offered them a non-judgmental ear and the benefit of patient wisdom. For in its AI world, ELIZA plays the role not of a cockney lass, but that of a Rogerian psychiatrist. In the following sample session, taken from Weizenbaum's paper, the computer's responses are in CAPITAL letters:

Men are all alike.
IN WHAT WAY?
They're always bugging us about something or other.
CAN YOU THINK OF A SPECIFIC EXAMPLE?
Well, my boyfriend made me come here.
YOUR BOYFRIEND MADE YOU COME HERE
He says I'm depressed much of the time.
I AM SORRY TO HEAR YOU ARE DEPRESSED
It's true. I am unhappy
DO YOU THINK COMING HERE WILL HELP YOU NOT TO BE UNHAPPY
I need some help, that much seems certain.
WHAT WOULD IT MEAN TO YOU IF YOU GOT SOME HELP
Perhaps I could learn to get along with my mother.
TELL ME MORE ABOUT YOUR FAMILY
My mother takes care of me.
WHO ELSE IN YOUR FAMILY TAKES CARE OF YOU
My father.
YOUR FATHER
You are like my father in some ways.
WHAT RESEMBLANCE DO YOU SEE
You are not very aggressive but I think you don't want me to notice that.
WHAT MAKES YOU THINK I AM NOT VERY AGGRESSIVE
You don't argue with me.
WHY DO YOU THINK I DON'T ARGUE WITH YOU
You are afraid of me.
DOES IT PLEASE YOU TO BELIEVE I AM AFRAID OF YOU
My father is afraid of everybody.
WHAT ELSE COMES TO MIND WHEN YOU THINK OF YOUR FATHER
Bullies.
DOES THAT HAVE ANYTHING TO DO WITH THE FACT THAT YOUR BOYFRIEND MADE YOU COME HERE
(Weizenbaum 1966)

34. The name was inspired by the learning ability of Eliza Doolittle, a character in playwright George Bernard Shaw's Pygmalion.

Although I never asked the question, I wonder now how many of those girls thought they were talking to a human in another room. ELIZA carried on its conversations with anyone and everyone; its therapeutic prowess even impressed the mental health community. Yet, within a few years of demonstrating the first academically-recognized computer program to converse in natural English, Weizenbaum left AI research in disgust, and for precisely the same reasons ELIZA so captivated my young visitors. Despite its conversational ability, ELIZA was, to use the vernacular, as dumb as a fence post.

Weizenbaum offered no pretexts about ELIZA's ability to understand the statements put to it. ELIZA is all about syntactic manipulation, informed by a knowledge of grammar and rules of transformation (for example, from past to present tense). Keywords trigger responses designed to broaden out the conversation. In the sample dialogue, the "patient" mentions her mother and ELIZA's response attempts to elicit more information about her family.
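To make the mechanism concrete, here is a minimal sketch, in Python, of the kind of keyword matching and first-person-to-second-person transformation Weizenbaum describes. The keywords, response templates and reflection table are invented for illustration; they are not Weizenbaum's original script.

import re

# Illustrative reflection table: swap first- and second-person words so a
# fragment of the patient's statement can be echoed back ("my" -> "your").
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "i", "your": "my", "are": "am",
}

# Illustrative keyword rules: a pattern and a response template.
RULES = [
    (r"my (mother|father|family)", "TELL ME MORE ABOUT YOUR FAMILY"),
    (r"i am (.*)", "HOW LONG HAVE YOU BEEN {0}"),
    (r"i need (.*)", "WHAT WOULD IT MEAN TO YOU IF YOU GOT {0}"),
]

def reflect(fragment):
    """Apply the pronoun/tense transformation to a matched fragment."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(statement):
    """Return the response triggered by the first matching keyword rule."""
    s = statement.lower().strip(" .!?")
    for pattern, template in RULES:
        match = re.search(pattern, s)
        if match:
            return template.format(*(reflect(g).upper() for g in match.groups()))
    return "PLEASE GO ON"   # default when no keyword matches

print(respond("I am unhappy"))
print(respond("Perhaps I could learn to get along with my mother."))

Even this toy version reproduces one exchange from the sample dialogue: the family keyword fires on "Perhaps I could learn to get along with my mother," while a statement with no matching keyword falls through to a neutral prompt, exactly the sort of purely syntactic behavior described above.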
ELIZA also remembers previous topics and either returns to them or brings them into the current conversation, as when it ties in bullies and boyfriends at the end of the sample dialogue.

Weizenbaum wrote ELIZA to debunk the myth of artificial intelligence. That ELIZA could fool so many people into thinking they were conversing with a human was, to him, a strong argument for how easy it was to make computers appear intelligent; however, when medical professionals seriously suggested using computer programs like ELIZA to treat emotionally ill patients, a shocked Weizenbaum moved on to other areas of computer science.

ELIZA is an example of weak AI. It is environment-specific. Take it out of the context of a psychiatric therapy session and ELIZA no longer functions. It cannot, for example, debate currency exchange rates or offer an opinion about Frank Sinatra's alleged ties to organized crime. Internally, ELIZA makes no attempt to follow the thought processes of a human psychiatrist in formulating its answers.

From the field's inception, even before it received its name, artificial intelligence researchers tended to separate themselves along the lines of a strong/weak binary. Michael Mateas, professor and long-time practitioner of combining art and artificial intelligence, describes the differences between strong (or GOFAI, for Good Old Fashioned AI) and weak AI in functional terms when he writes that "GOFAI seeks general solutions: the theory of language understanding, the theory of planning, etc." (Mateas 2001a). This stands in contrast to interactionist, or weak, AI, which "starts with the assumption that there is a complex 'fit' between an agent and its environment." Unlike GOFAI, weak AI theory and practice accepts the possibility that "there may not be generic solutions for all environments." ELIZA does quite well, though not well enough to win the admiration of strong AI advocates, for the roots of AI's version of the Hatfields and McCoys are deep indeed, deep enough to reach back to the era before World War II.

The Germination of AI

The foundations for artificial intelligence, in fact for all of digital computing, were laid when the nineteenth-century mathematician George Boole formulated a language for universally expressing problems using an alphabet of 0 and 1.35 In the 1930s, Claude E. Shannon, who would later write the seminal paper on the statistical theory of communication, was working with telephone switching circuits, which could also be described by an alphabet of 0 (off) and 1 (on). Shannon postulated that if electrical circuits could represent Boole's universal language of problems, and if Boole's language was a representation of human thought, then electrical circuits could represent human thought. Intriguing though the idea may have been, Shannon and his contemporaries Norbert Wiener, John von Neumann and Alan Turing would write about the possibilities of artificial intelligence, but would see the promised land only over the edges of note pads and podiums. The task of breathing life into the theories and predictions of these farsighted pioneers fell to a group of mathematicians and psychologists, many of whom met at Dartmouth College in the summer of 1956 for a research project on artificial intelligence.
The project was proposed a year earlier, on August 31, 1955, by four researchers interested in AI: Shannon, then a mathematician at Bell Telephone Laboratories; Nathaniel Rochester, Manager of Information Research at I.B.M.; Marvin Minsky, Harvard Junior Fellow in Mathematics and Neurology; and John McCarthy, Assistant Professor of Mathematics at Dartmouth College, who is given credit for spearheading the project. Their document, entitled "A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence," was submitted for funding to the Rockefeller Foundation. The preamble read, in part:

The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.36

35. This is called the first-order predicate calculus, and allows for proofs of statements such as: all men are mortal; Aristotle is a man; therefore, Aristotle is mortal.

Among the fields of study proposed for the project, fields which included programming computers to use languages, designing computers that learn, and simulating neural networks, was one of potential interest to future narrative designers:

A fairly attractive and yet clearly incomplete conjecture is that the difference between creative thinking and unimaginative competent thinking lies in the injection of some randomness. The randomness must be guided by intuition to be efficient. In other words, the educated guess or the hunch include controlled randomness in otherwise orderly thinking.37

Forty years later, the idea of intuition and randomness would appear as sets of codified behaviors and algorithms for choosing among competitive behaviors as one way of creating artificially intelligent software agents.

For the participants, the Dartmouth conference yielded mixed results. Most of what was discussed was theoretical, methods for attacking AI from directions not tied to neural network models, for example; but a team of researchers from a Santa Monica think tank, the RAND Corporation, came to New Hampshire with something more than theory. Their Logic Theorist machine38 was already proving theorems from Whitehead and Russell's classic Principia Mathematica, and in at least one case found a proof more elegant than the authors'. Allen Newell and Herbert Simon39 proudly demonstrated the Logic Theorist and, according to cognitive scientist Howard Gardner, "By devising and running Logic Theorist, Newell and Simon showed that A.I. was a possibility, if not a reality" (Gardner 1985). There had been many prior claims to the AI high ground, but now there were two concrete demonstrations lying irrefutably in plain view.

36. Source: <www-formal.stanford.edu/jmc/history/dartmouth.pdf> accessed 12/14/01.
37. Source: <www-formal.stanford.edu/jmc/history/dartmouth.pdf> accessed 12/14/01.
38. The origins of this tradition of calling computers programmed for AI applications "machines" is not clear. The term is all-encompassing, however, including the hardware and software that allows the computer to perform in an intelligent fashion.
39. Another member of the team, Cliff Shaw, did not make the trip to Dartmouth.
"[C]omputers could engage in behavior that, if exhibited by humans, would unambiguously be considered intelligent," was one, according to Gardner, and "[T]he steps through which the programs pass in the course of proving theorems bear a non-trivial resemblance to the steps observed in human problem solving" was the other.

The question of theory versus practice caused some dissent after the conference. Newell and Simon argued they should present that summer's results at the September meeting of the Institute of Radio Engineers, since they were the only ones who actually had anything working. Others felt that McCarthy, since he was the spiritual spark behind the conference, should represent the group and its work. As a compromise, the session chair suggested a dual focus, combining general and specific topics. Artificial intelligence historian Pamela McCorduck writes, "McCarthy gave a general talk and then Newell and Simon presented their work in its particulars, which appeared as a paper in the transactions of the conference, the first widely published announcement of the Logic Theorist" (McCorduck 1979). Despite the clash of egos, the fact remained that the Dartmouth conference did more than put a name to a field of study: it demonstrated that, indeed, computers could reason autonomously, without having to be programmed with specific steps for solving a specific problem.

Less than ten years prior to the Dartmouth meeting, the pursuit of artificial intelligence revolved around the perceived similarities between telephone switches and neurons, or what was known about neurons at the time: both were all-or-nothing devices, either in an on or off state; and both were general-purpose devices that were combined in networks to meet the needs of individual situations. Now, that view had shifted. Rather than having to build machines that reflected the human brain's architecture, proponents of this new school of intelligent machinery claimed computers only had to follow human thinking processes to appear intelligent. At the center of this new wave of thinking was the concept of symbol manipulation, proffered by the brilliant mathematician Alan Turing and his contemporaries. What they were saying, with their newfound focus on symbol manipulation, was that computers could accomplish what humans would recognize as intelligent behavior without needing to know anything about the meaning of the symbols they were manipulating.

Turing and his Test

While the notion of the computer as an intelligent machine was making the rounds of the engineering community before World War II, no serious work was performed before the start of hostilities in Europe, and none until after Japan surrendered in 1945. Turing's exploits during World War II are somewhat legendary within the AI, mathematics, cryptography and computer science communities. His work on the team that cracked the Enigma code—Enigma was the infamous German code machine—reportedly enabled Britain to win the Battle of the Atlantic; but it was his traits as a brilliant mathematician and intense thinker that were his most valuable assets after the war. He transferred them to the investigation of computers and their formal descriptions, his efforts yielding two important papers.

Turing's 1948 paper, Intelligent Machinery, went unpublished until 1969.
A facsimile copy of his original draft exists online in the Turing Archives,40 and in it Turing describes three fundamental research areas:

• the brain without a body;
• intelligence as represented by branches of thought;
• organized and unorganized machines.

Turing's work centered on what today is cognitive science. The questions Turing posed to himself and his colleagues revolved around the notion of not only duplicating human thought, but somehow representing the process of learning. Learning for Turing was an organizational process. He had previously defined an organized machine as one conforming to certain mathematical conventions for defining the states of these machines41 and to constraints restricting transitions from one state to the next. He termed these "Universal Machines," but later they were re-christened "Universal Turing Machines" in his honor. In 1948, however, Turing wrestled with a formal description of an unorganized machine, and the process by which an unorganized machine could be transformed into an organized, or universal, machine, with emphasis on how this process modeled what was observed of human behavior. At this point, Turing saw the human brain as the guiding principle in creating intelligent machinery (Turing 1948).

40. See <http://www.cs.usfca.edu/www.AlanTuring.net/turingjarchive/archive/1/132/L32-001.html>
41. States can be used to represent the condition of things. Suppose there is a living room with three lamps, one on either side of a sofa and one behind an easy chair. Each lamp can be either on or off. Then there are eight possible combinations of on and off lamps. (Trust me on this one or work it out on paper yourself.) Each combination represents the state of the lamps in the room. Instead of explicitly listing which lamps are on and off in each state, the states could be given names or numbers. For example, state zero is the one in which all the lamps are off. State one is the state in which only the lamp behind the easy chair is on. Then state one is a useful state for the lamps if someone is sitting in the easy chair reading the newspaper, but it is not a very desirable state if someone else is sitting on the sofa trying to read a book. Chapter Three revisits the term in a somewhat more accessible context.

Two years later, Turing's thinking was more precise. Rather than writing generally about the human mind as a guide to the principles of machine intelligence, Turing proposed what he called "The Imitation Game," which became the unofficial gold standard of machine intelligence, the Turing test.

The game is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either "X is A and Y is B" or "X is B and Y is A." The interrogator is allowed to put questions to A and B... In order that tones of voice may not help the interrogator the answers should be written, or better still, typewritten. The ideal arrangement is to have a teleprinter communicating between the two rooms. Alternatively, the questions and answers can be repeated by an intermediary.
We now ask the question, "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, "Can machines think?" (Turing 1950).

Turing expected an intelligent machine to be capable of responding in a common-sense fashion, the better to mislead its interrogator. At the same time, he stripped the encounter of its humanness, lest the interrogator be influenced by "tones of voice" or other physical characteristics of the intelligence behind the teleprinter. This Cartesian model, the brain without the body, was popular in AI research at the time, in no small measure due to the limits of hardware technology. Communication between a computer and a human was restricted to text, inked characters on paper. Turing could not have helped being influenced by the hardware restrictions of the time in his efforts to formalize some measure of machine intelligence. However, the Turing test neutralizes many factors that separate human and machine intelligences, especially life experiences that are imprinted on the body and mind. A computer would have to deflect experiential questions ("As a woman, how does it feel... ?") or reply with stereotypical answers or nonsense. By establishing as one of the ground rules that a participant may attempt to mislead the interrogator, Turing sidestepped experience as a component of expressed (or implied) intelligence and shifted the focus of intelligence into the province of logic and reasoning.

Historically, though, Turing's test was very much a product of its time. Computers were brains, or at least that was the common metaphor used to describe them. Input and output (I/O) systems were not sufficiently advanced to represent the body, nor to process even a modicum of sensory stimuli from which the computer could simulate actions based on experiences. Even memory was a problem. Mass data storage was relegated to slow-access devices such as magnetic tape. Even if the computer could answer experiential questions, the time delay between question and answer would swiftly point an accusing finger at the machine rather than the human. From a historical perspective, Turing's test did reflect what a machine could represent of human intelligence, and which parts of a human could likewise be represented. As hardware and software grew more sophisticated, the computer would begin to acquire a body, new behaviors and new levels of sensory processing. For the present, artificial intelligence was, like the waning dramas on the radio, very much a theater of the mind.

SHRDLU Challenges Human Thought

Critics of artificial intelligence often argue that "thinking" is a purely human function, that "to think" means to have thoughts about something, and that this means having some sort of consciousness or awareness so there is something to have thoughts about. In other words, computers cannot think because they do not have human brains, which produce consciousness. Turing's counter-argument seizes on his behaviorist stance: if the computer behaves in a manner indistinguishable from a human, and if that behavior is produced on the spot (as opposed to being the result of playing a recording inside the computer), then isn't this thinking, albeit not in the same way as the human brain, but with the same results?
Betrayal by the body notwithstanding, Turing believed that if the steps for solving a problem are known, then any problem a human could solve, a computer, given enough memory storage42 and time, could solve as well. Therefore, machine intelligence was an engineering and programming problem, not a theoretical impossibility.

42. Not a trivial issue in 1950, when a thousand decimal digits was considered a very large memory store.

Four years later, in the summer of 1970, Terry Winograd submitted his dissertation to his MIT doctoral committee. Procedures as a Representation for Data in a Computer Program for Understanding Natural Language described a program, SHRDLU,43 that carried on conversations in English, though in a much different style than ELIZA. SHRDLU exists in its own virtual world, inhabited by a table, hand, box, an eye and several blocks and pyramids. Communication is via a teletypewriter (for questions, requests and responses) and a CRT (for a visual representation of the blocks-and-boxes world), and occurs in real time; that is, the computer formulates its answers in direct response to the situation at hand, and does so in a range of approximately five to twenty seconds per question.

43. Winograd writes on his Web site that "...the arrangement of the keys on Linotype typesetting machines was not QWERTY, but frequency arranged columns: ETAOIN was the first column (reading downward), SHRDLU the second, etc." Winograd was familiar with SHRDLU from typesetting errors he'd seen and its nonsensical uses in MAD Magazine. "When it came time to name the system, I tried to come up with acronyms and none were very good so I decided to just pick something that looked like an acronym but wasn't. I reached into my memories for a random sequence." See <http://hci.stanford.edu/winograd/shrdlu/name.html>

The following brief conversation with SHRDLU was extracted from a revision of Winograd's original thesis. At the start of the conversation, the box contains a blue pyramid, and the big red block has a green block resting on it. Once again, the computer's responses are in CAPITAL letters:

pick up a big red block.
OK.
grasp the pyramid.
I DON'T UNDERSTAND WHICH PYRAMID YOU MEAN.
find a block which is taller than the one you are holding and put it into the box.
BY "IT", I ASSUME YOU MEAN THE BLOCK WHICH IS TALLER THAN THE ONE I AM HOLDING.
OK.
what does the box contain?
THE BLUE PYRAMID AND THE BLUE BOX.
what is the pyramid supported by?
THE BOX.
(Winograd 1971)

SHRDLU made some nuanced deductions. It knew enough to move the green block off the red one before picking up the latter. The first time it was asked about a pyramid, SHRDLU requested clarification. The second time the subject of a pyramid was raised, SHRDLU used the context of the conversation to disambiguate the reference by itself. SHRDLU also made context-sensitive discriminations, for instance when it was asked to pick up a block taller than the one it was holding.

Winograd attacked language understanding from a position diametrically opposed to Weizenbaum's. In his thesis Winograd argued that humans don't solve problems in a vacuum; they are equipped with a wide range of subject-matter knowledge—syntactic, semantic, heuristic—to be called upon when necessary. He also argued that language understanding was not a specialty of "the brain," but a cooperative effort of many different brain functions.
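A toy sketch can suggest what separates this from ELIZA's pattern matching: SHRDLU resolved phrases like "a block which is taller than the one you are holding" against a model of its world. The block names, heights and single rule below are my own invented stand-ins, not Winograd's representation, which coordinated many specialized procedures.

# A toy illustration only, not Winograd's program: names and heights invented.
blocks = {
    "big red block": 3,       # heights in arbitrary units
    "green block": 1,
    "small blue block": 2,
}
holding = "green block"       # assume the hand is already holding this block

def taller_than_held():
    """Resolve 'a block which is taller than the one you are holding'."""
    held_height = blocks[holding]
    candidates = [name for name, height in blocks.items()
                  if name != holding and height > held_height]
    return candidates[0] if candidates else None

target = taller_than_held()
if target is None:
    print("I CAN'T FIND A BLOCK TALLER THAN THE ONE I AM HOLDING.")
else:
    print(f'BY "IT", I ASSUME YOU MEAN THE {target.upper()}.')

The point of the sketch is only that the referent of "it" is computed from the state of the world, not matched from the surface of the sentence, which is the distinction the next paragraphs draw out.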
SHRDLU's design follows Winograd's hypothesis, drawing on a set of specialized subprograms working in concert to extract unambiguous semantic content from user responses. But SHRDLU, Winograd maintained, was not only about understanding language. "We want to understand what intelligence is and how it can be put into computers," he writes. Winograd picked language as his field of inquiry because it is "one of the most complex and unique of human activities and understanding its structure may lead to a better theory of how our minds work" (Winograd 1971).

With respect to the fuzzy line between strong and weak AI, SHRDLU leaned toward the strong side. Its computational organization attempted to imitate the organization of the brain. Its operation was based on, if not the theory of language understanding, then certainly on a theory. SHRDLU made use of a hand, but only as a convenience for communicating with its users. The program received no stimulus through the hand—it could not feel temperatures or hard and soft surfaces—which served as a pointer and not an appendage. Yet SHRDLU had weak AI characteristics as well; specifically, its intelligence was situation-limited. Outside its world of geometric shapes, SHRDLU shared ELIZA's relationship with fence posts. Inside the world it was designed to inhabit, however, did SHRDLU think? It had a world to think about. It understood subtleties of English. It independently extracted speech acts from input strings of text and formulated both kinesthetic and verbal responses. It produced its output through the combined behaviors of many small processes, a brain-like architecture heavily promoted by AI pioneer Marvin Minsky (Minsky 1988). Was having independent thoughts, such as "I want the blue box next to the green pyramid," as Turing implied, just an engineering solution away?

Human Thought Challenges AI

John Searle, Mills Professor of the Philosophy of Mind and Language at the University of California, Berkeley, hoisted the Jolly Roger and made himself, and his reasoning about the impossibility of strong AI, a target for critique (and invective) with his "Chinese Room" argument, which I will describe momentarily. To put Searle's thinking in perspective: he has no problem with weak AI, because "the principal value of the computer in the study of the mind is that it gives us a very powerful tool" (Searle 1984). His denial of strong AI rests not on the computer's being a tool for the study of the mind, but on claims that the computer is a mind.44 Searle objects to claims that computers with the appropriate programs are rewarded with an understanding of the world around them. That these computers have cognitive states is something Searle vigorously rejects, because then "the programs are not mere tools that enable us to test psychological explanations; rather, the programs are themselves the explanations."

44. Yet Laurel's fantasy machine would seem to require exactly a mind for its implementation, and not a tool for research.

Without putting words into Searle's mouth, I believe I'm being accurate when I say that Searle's concern seems to be that if a particular computer program can duplicate human behavior in every detail, and since one presumably knows the theory behind the program (since one has access to the program's source code), then a human mind (in Searle's words), abiding by the program's "principles that the theory says all minds work on," will be equated, if not reduced, to a computer.
In his writings on AI, Searle went to great lengths to avoid such an equation.

The Chinese Room

Based on an inference program developed by Roger C. Schank, former Professor of Computer Science, Education and Psychology at Northwestern University, and former Professor of Computer Science and Psychology at Yale University, Searle proposed a Gedankenexperiment45 to contradict the claims of strong AI proponents.

Searle envisions a windowless room with a man inside (most likely Searle himself). Every so often, someone outside the room slips a piece of paper under the door. On the paper are unintelligible squiggles. The man has at his disposal a massive collection of books and ledgers relating squiggles of the kind just slipped under the door to sheets of paper with different sets of squiggles the man has stored elsewhere in the room. These books allow him to associate one of the sheets of paper in the room with one of the sheets of paper slipped under the door. The man dutifully takes the sheet of squiggles indicated by his books and places it in a slot in a wall of his room. Unbeknownst to the man, both sheets of paper contain sentences in Chinese. Searle calls the sheets slipped under the door questions, and the sheets slipped out through the slot answers. Suppose, Searle continues, the books and ledgers are so complete and so carefully prepared by experts that to a native Chinese speaker it appears the answers are perfect responses to the questions. Thus, Searle asserts, even though the man knows not a word of Chinese, and even though all the man did was formally manipulate symbols according to a set of rules (a program), the arrangement gave the appearance of understanding Chinese when, in fact, no such understanding exists.

45. A thought or mind experiment as opposed to one actually performed in the real world.

Searle's paper quickly drew the ire of strong AI researchers, and a transcription of the original paper is available online.46 The crux of Searle's argument is that only the matter and chemistry of the brain can have the causal and intentional properties of a brain. Digital computers, although they may be capable of reproducing the requisite features, do not, when programmed with software whose operating principle is based on the formal manipulation of symbols. Critics of Searle often read this defense as "nothing can be a brain but a brain." Starting with that definition, any hope of producing an artificially-intelligent computer using traditional hardware and software is defined into oblivion.

Without wanting to turn this chapter into a full critique of Searle's Chinese Room experiment, I find Searle less than compelling when he axiomatizes issues like intentionality, consciousness, causality and mental states that are liminal issues in this discourse. Searle not only assumes that people all think and feel the same way (that this is a function of the brain), but, to me, Searle implies this is the only way for humans to perform these functions and still be human. If a computer performs them, the computer cannot, by definition, understand them, because the computer is not human. Furthermore, and this is my own opinion, how would one know if the computer understood anything any more than we know if someone understands something now? If a human memorized the data in the books in Searle's Chinese Room, and then was able to formulate written responses to written questions, would that human "understand" Chinese?
If not, how many high school graduates should be forced to re-take their foreign language competency exams? If yes, then denigrating a computer for the same feat amounts to nothing less than grey matter discrimination.

46. See <http://members.aol.com/NeoNoetics/MindsBrainsPrograms.html> accessed 5/13/03.

Searle, however, raises important questions for the role of AI in games and other forms of cyber-entertainment. The world of the cinema, the paragon of visual entertainment, is built on a compact of sustained illusions, which draw their strength from narrative, psychology and myth. Searle's critique places limits on the "human-ness" of artificially-intelligent agents (characters) in stories. In particular, if AI agents cannot have intentional states, can they understand the concept of a narrative, of cause and effect, of motion toward a resolution, of an ending charged with desire? If autonomous agents cannot process the building blocks of drama, how will Laurel's fantasy machine take an interactor through a dramatic, rather than episodic, experience? It is questions such as these that AI researchers ponder, both in terms of how to implement autonomous agents and in terms of just how knowledgeable these agents have to be.

The Emergence of Intelligent Games

A full-scale game world with life-sized characters that understand spoken language is still years away, but autonomous agents already create unscripted experiences in electronic games. Black & White (Lionhead Studios, published by EA Games, 2001) demonstrates the effects of higher levels of character artificial intelligence. The game is played out in Eden, a wonderful world of kind souls or a land of simpletons in need of severe discipline: it all depends upon the player's attitude. The choice is not frivolous, either, for the player is God, and what happens to the inhabitants of Eden is strictly in the player's hands. As a benevolent God, the player can bring miracles down upon the land and grow stronger as the object of affection and worship. Unfortunately, while the player is God, the player is not the supreme God. Other characters in the game are vying for that title, and some of these gods are evil and duplicitous. The player will be drawn into a holy crusade, where strength and alliances will play key roles. As an evil God, the player has miracles, too. Evil miracles. Raining-down-fire-from-the-sky miracles. The player rules with fear, but fear also breeds worshipers. Like the benevolent God, the more worshipers the player has, the stronger a god the player is. An evil God's holy crusade is littered with backstabbing, broken alliances and generally nasty behavior.

Since being a god is time-consuming, the player has a creature to help with some of the inevitable details surrounding being an object of worship. The creature, chosen from among the animals inhabiting Eden, is the game's only AI-based character, and that AI programming is used by the creature agent to learn the player's style of rule, proper and improper behavior, and likes and dislikes.47 The player's plans and next moves are modulated by the creature's actions, which are not easily predictable and, for practical purposes, follow the logic of a human toddler.

47. You teach your creature through a simple binary method. Reinforcement takes the form of gentle petting. A slap across your pet's face is used for negative feedback.

In an article in Wired magazine, futurist author Steven Johnson quotes from the posting of an anonymous Black & White gamer on one of the game's many fan Web sites: "...I got the healing spell. So I thought, it would be pretty neat to teach my little ape [the player's pet] this spell.
Since Adiz [the pet] wants to be always kind and generous to the people, he ran to the village to try his new spell. He looked around and I guess there was no one to heal. He got pretty upset about that. So he just picked up a guy and threw him as hard as he could against a mountain. The man rolled down, and—for some reason—survived. He was hurt pretty bad, though, screamed like hell, and was trying to get back to his house as quick as possible. My ape of course saw that, and healed him. After that he looked at me all happy and smiling."

There are hundreds of these stories, lined up one after another, recounting behavior that may well be unique to each creature. Blur your eyes as you scan them, and you might think you were looking at a parenting site, rife with excited accounts of junior's latest breakthrough (Johnson 2002).

The ape character traversed a fairly sophisticated logical landscape: I know how to heal; I want to be kind; healing is kind; there is no one to heal; if I heal I am being kind. Wanting to be kind and not having the means to do so, the ape character creates a healing situation and proceeds: there is someone who needs healing; I will heal him. The logical disconnect is obvious to an adult, but perhaps not to a child who hasn't learned the distinction between finding someone to heal and creating someone to heal. From outside the box, as a theorist, questions remain: does the pet think? Does the pet understand? Does any of that make a difference as long as the pet performs in character for this particular game?

The bottom line, though, is that the creature, acting autonomously but under the influence of the player's actions, influences the player in return. Although interactions with the creature are simple, they are consistent with its toddler-level intelligence. Does the player interact with the creature? Yes. Does the player have agency? Yes to that, too, because the player's interactions influence the creature; its responses are not pre-scripted. Instead, it plans its behavior based on that of the player. The computer-created world of Black & White is dependent upon the computer for its agency, not just its objects. This is the type of world literature, cinema or theater cannot actualize (though they may simulate it). The question is not whether such worlds exist, but how rich they are, that is, how many arbitrary choices they admit.

A pair of leading researchers working at the intersection of AI and electronic games had this to say about the progress of AI in entertainment: "Over the last thirty years, research in AI has fragmented into more and more specialized fields, working on more and more specialized problems using more and more specialized algorithms." Despite the important theoretical and practical advances this approach has wrought, "these successes have made it easy for us to ignore our failure to make significant progress in building human-level AI systems" (Laird and van Lent 2000). The purveyors of that opinion, John Laird, Professor of Computer Science at the University of Michigan, Ann Arbor, and Michael van Lent, Research Scientist at the University of Southern California's Institute for Creative Technologies (ICT), assume characters with human-like intelligence will be required as "co-stars" in the drama of the game experience.
They also set a high bar for their AI performers:

Human-level AI systems are the ones that you dreamed about when you first heard of AI: HAL from 2001: A Space Odyssey; Data from Star Trek; or C3PO and R2D2 from Star Wars. They are smart enough to be both triumphant heroes and devious villains. They seamlessly integrate all the human-level capabilities: real-time response, robust, autonomous intelligent interaction with their environment, planning, communication with natural language, common sense reasoning, creativity, and learning.

The writings of Laurel, Murray and the Oz Project researchers recognize the challenge. Autonomous agents must prove themselves worthy allies and opponents in any storytelling experience. The focus shifts from the degree of intelligent behavior to how such behavior might be realized—and whether a strong AI approach, with all its attendant complexity, is the right way to go.

Intelligence Versus Social Response

Let me suggest another method of bringing intelligence to storytelling, one that differs from the assumption that intelligence must be everywhere, and that such intelligence must be of the highest quality. In an interaction between a human and AI agents, there already exists human-level intelligence in the loop: the human. Perhaps all that's necessary is to distribute that intelligence to the other agents to make up for their shortcomings. Such is the approach (though he may not think of it in exactly those terms) of Rodney Brooks, Fujitsu Professor of Computer Science and Engineering, and Director of the Artificial Intelligence Laboratory at MIT. One of his laboratory projects was a robot named Kismet.48

48. Kismet was the Ph.D. dissertation project of graduate student Cynthia Breazeal.

Kismet moved around on a rolling pedestal, but from the "waist" up, Kismet looked a bit like the star of the film Short Circuit (Badham, 1986). Its eyes were equipped with foveal cameras located behind large, human-like eyeballs; two more cameras with wide fields of view were hidden where its nose would be. Its ears were wired for sound, and servo motors provided its head and facial features with several degrees of articulation. Kismet was designed to be a social robot. A set of fifteen computers controlled various functions, with no one computer in complete control. Instead, the network of controllers gave Kismet a set of independent behaviors that could be mixed and matched. Visually, Kismet pays attention to moving things, things with saturated colors, and things with skin color. How much attention it gives to these features depends on Kismet's internal state. If it's lonely, Kismet gives more weight to skin-colored regions of an image. If Kismet is bored, it gives higher weight to saturated colors. If Kismet looks at one thing too long, then it gets habituated to it, and something else is more likely to become salient (Brooks 2002). Kismet has a system that allows its eyes to track moving images and an auditory system that detects prosodic markers in speech and feeds the information back to modify Kismet's "emotions." Kismet can utter English phonemes, with prosody, and also express its emotions through its eyebrows, its lips and its ears. "What Kismet cannot do," writes Brooks, "is actually understand what is said to it.
Nor can it say anything meaningful." What Brooks finds interesting is that "neither of these restrictions seems to be much of an impediment to good conversation." Kismet detects the prosody in its interlocutors' voices, and may utter English phonemes, "but it does not understand what it is saying and does not know how to make the phonemes or syllables string together in any meaningful way." Kismet simulates basic conversational turn-taking, including pauses, gaze shifts, and "the filling in of awkward silences when its partner fails to speak."

Like R2D2 and C3PO, an AI agent like Kismet need not be intelligent to the point of being a protagonist. Understanding how to perform a fixed set of tasks, or how to construct tasks from smaller tasks, may be sufficient for the narrative roles of some intelligent characters. These agents would borrow their intelligence, their intentionality, from the human spectator through simple feedback mechanisms. The narrative, then, becomes a reflection of the spectator's own wants and needs, converted into information and re-played by a computer system, a system whose fidelity varies. On a theoretical level, such a system resembles a hardware and software incarnation of the active cinema spectator, only here the spectator instigates changes in the physical world that are tied to her performance. Practically, such a system suggests a solution to the high intellectual and temporal costs of implementing storytelling in cyberspace. A computer processing technique called prerequisites/actions/effects, which I introduce in Chapter Four, is a real-world way of creating such cyber-stories; and, as I will argue, much of what motivates an interactor to take action is her interpretation of the state of the narrative world and the meanings of the actions of the intelligent characters. Like Turing's test, apparently complex cyber-storytelling systems may succeed with more emphasis on how an intelligent character performs than on what it experiences. However a system is designed, skilled programmers must translate the aesthetic demands of the story into computer programs capable of realizing them. If the spectator provides much of the narrative thread, cyber-storytellers may be able to side-step the demand for a detailed, pre-specified narrative, along with its resulting programming complexity.

Not the End of the Story

If it seems I have raised more questions than I answered in this chapter, I plead guilty as charged, but only because this story is just getting warmed up. The simulation of agency through interactivity in an impoverished fictive world is one way of "telling stories in cyberspace," that is, telling stories using digital technology as a base and the computer as both a medium and apparatus.49 Such simulation can only provide a limited dramatic experience. The more challenging experiences, such as those anticipated by the Oz Project, where the interactor is thoroughly immersed in a story world created just for her, require both the interactor and other story characters to share agency. Providing such an environment requires deployment of sophisticated levels of artificial intelligence technology, a technology unique to the computer.

49. A computer can not only display the elements of the story, it can also create an environment in which a story can be told, such as in a virtual reality theater.

The result, as Murray said, is a new form of storytelling,
one that combines discourses from narrative theory, science and technology and demands, in return, analytical tools concocted from the same mix.

There is always the danger that this chapter will be interpreted as technical discourse, yet that would be missing its point. What may sound like technical issues to be duked out among computer scientists and psychologists at research institutions are at the heart of the issue of what shape storytelling in cyberspace will assume. If this storytelling is the product of the same type of random, arbitrary process experienced by the temporarily blind director Val Waxman in Hollywood Ending (Allen, 2002), then storytellers are retracing the path followed by early electronic games such as Pong, where computer technology was used to draw images on the screen and solve mathematical formulas based on input from a joystick.50 On the other hand, the processes used by a machine in constructing stories in cyberspace might have their own characteristics and personalities that distinguish one body of work from those of other machines. Before making that big a leap in critical theory, I explore the ontology of telling stories in cyberspace, looking at the process in a way that brings together narrativity, games and computer science. That is what Chapter Three is all about.

50. Occasionally, the computer program might throw in a random number or two to create excitement based on uncertainty.

Chapter 3
The Human (Caught) in the Loop

In a tension-filled room, chess Grandmaster Garry Kasparov weighed his next move. Across the table his opponent was oblivious to the lights, the spectators, even the significance of the match. It wasn't incredible concentration on his opponent's part; one might better describe the demeanor as detachment. Kasparov's opponent was Deep Blue, the latest descendant in a line of world-champion chess-playing computers dating back to the late 1980s.

What makes these computers, and chess itself, worth studying is the game's similarity to the act of narrative creation in cyberspace. In the chess match between Kasparov and Deep Blue, the opponents used their culturally-situated positions to create the next steps in a narrative. Actually, there are two narratives: one created by observers of the match and one engaged in by the players, what I call the narrative of the game. It is the latter narrative, the one observable only from inside the match, that is the focus of this chapter. I call this a Just-In-Time (JIT) narrative1 because players make moves at the moment the
A chess game is quite similar because each successive move is created by the computer just in time to satisfy the needs of the game.2 Theatrical improvs are a more complete example of a JIT narrative. So are children’s role-playing games. Many Massively Multi-Player Online Role Playing Games (MMORPGs) are, too. One can create JIT narratives using the popular The Sims computer game. Tony V Tina’ s Wedding, a theater piece in which the audience finds itself the guest at an Italian wedding reception has proven to be a commercial success in many venues. When I speak of JIT narratives in this dissertation, however, I restrict myself to ones in which all the participants, except one, are artificially-intelligent, computer-generated autonomous agents.3 The sole human participant is the interactor, a term I prefer to user or player because I want to keep the focus on the narrative aspects of JIT narratives, not the computer or game-like features. Critical theory terms, such as reader and spectator, come with too much distracting theoretical baggage, so I use them primarily when talking about traditional narratives or when I want to make the distinction between targets of traditional critical theory work and those of JIT narratives. Although one can construct JIT narratives with multiple interactors, this dissertation assumes a one JIT narrative one interactor relationship. While working on this project I was constantly challenged by colleagues to prove that JIT narratives were different from interactive narratives. After much questioning on my part I discovered that I was being asked to differentiate JIT narratives from narratives created by hypertext, hyperfiction, electronic games and 2This doesn’t mean the players don’t think of possible future moves. Such moves are specu lative at best and imaginary at worst. The only moves that ’’count” are the ones executed on the chess board. 3As I explained in Chapter One, this restriction keeps the discussion of JIT narratives close to the practices of current AI research. 81 databases, as well as traditional narratives viewed through the lens of poststruc turalist critique. One recurring theme, and I will address it shortly, is the concept of performance. From Roland Barthes to contemporary theorists of interactive texts, the poststructuralist reader has become the icon for the active reader, a reader who fashions meaning from what Barthes refers to as a “galaxy of signi- fiers.” At, first I had used performance as a tool for cutting out a theoretical niche for JIT narratives. After all, theatrics aside, a chess game is a performance. If one player decides not to perform, not to make a move, the game stagnates and the player’s opponent eventually wins by default. The poststructuralist reader performs a narrative, I was told. The computer user, clicking her way through a hypertext document, or the World Wide Web, is also performing, I was reminded. As a point of uniqueness, performance was rapidly becoming a bad idea. For every point of distinction I saw there existed an equally compelling counter-argument drawn from the work of poststructuralists. At every juncture I was challenged to explain not only the differences between interactivity and agency, but why a JIT narrative interactor was not the same as the reading subject typically referred to as “the active reader” (see Table /reftab:perf). The case of the indistinguishable interactor is a consequence of comparing inter actors in different contexts. 
The assumptions under which hypertext, hypertext fiction, the Web and so forth are analyzed do not hold for JIT narratives. Barthes, and those who use Barthes’ s’ work as a reference point, assume that • the smallest manipulable textual unit is the lexia; lexias are fixed. They can be added to a work, occasionally removed but they remain in their original form; • for any work there are an arbitrary, but finite, number of lexias; 82 JIT Narrative Poststructuralism Interactor makes choices to create JIT narratives. Interactor’s choices change the course of the narrative. The interactor’s actions cre ate a different physical end ing, not just a different psy chological one. The interactor doesn’t select a path from existing ele ments. The elements are created on the fly, in real time. So does a gamer, Web surfer or user of Microsoft’s Encarta CD-ROM-based reference tool. Reader or spectator makes choices that result in the active formation of inter- and extra-textual references, which change interpretation of a traditional text. The game player, the Web surfer, all wind up at differ ent ending places depending upon the paths they choose. Links in electronic games and CD- or DVD-ROMs appear and disappear based on how the player or user chooses. Table 3.1: Performance issues in JIT narratives versus poststructuralist theory. lexias are linked, either directly (each lexia contains links to other lexias) or through some external mechanism (an interface) that makes only certain lexias visible to the reader/spectator/user at any given moment; • a non-linear collection of lexias is linearized through repeated selection. W hat I will show is that none of those are true for JIT narratives and that while traditional theory is necessary for critical work in the field, it is not sufficient. To do this I will demonstrate two important points: a JIT narrative is not represented by a database organization, and the notion of performance in JIT narratives is 83 more complex and multi-variant than its expression in current writings about both traditional and interactive narratives. In th e B eginning In the beginning there was hypertext, a term coined by the technologist activist Theodor Nelson to represent a new writing form in which “all [text] materials are in a shared pool of units, but every element has a unit in which it originated, new units can be built from material in previous units, in addition to new material and there can be arbitrary links between arbitrary sections of units” (Nelson 1993:0/5-6). Nelson’s abstract concept of units and links is descended from the work of Van- nevar Bush and his memex as well as the linked data elements of Doug Engelbart’s NLS.4 Today, the most popular example of the hypertext concept is the World Wide Web, in which pages contain embedded links to other pages, images, video and audio clips, to what new media artist and historian Lev Manovich calls “mul timedia objects” (Manovich 2001). Even before Tim Berners-Lee fashioned the technology that made the Web possible, Apple Computer’s HyperCard software, created by Bill Atkinson, one of Apple’s original software gurus, brought the con cept of linked elements to the commercial marketplace.5 HyperCard, and similar authoring tools, spawned a class of CD-ROM-based computer games and reference materials employing hypertext technology. Clicking on so-called hot spots on the 4In the introduction to his book. Nelson gives credit to Engelbart for inventing hypertext and to himself for naming it. 
computer screen became a familiar way of navigating through a collection of multimedia objects for entertainment and education. For literary theorists, however, hypertext opened up a new thread of poststructural criticism by giving a physical existence to the formerly abstract notion of the active reader and the interactive text.

Due in no small part to the efforts of George Landow, whose work I will discuss shortly, hypertext caught the fancy of literary theorists. It promised to give physicality to what poststructural theorists had only been able to write about for the preceding twenty years, the idea that a text was not the closed, rigid, univocal entity imagined by structuralist theory, but was an open and dynamic piece of work. Not only was the text ever-changing, it was the reader's active engagement with the text that resulted in these dynamics. The reader, through engagement with a web of signifiers, brought her inter- and extra-textual experiences to the interpretation of any text.6 Regardless of the actual signifiers, and no matter how many times these signifiers were encountered, including successive readings of the text, the interpretations were still in play. The text never fixed itself—decentered, as Jacques Derrida would say—and no two readings need ever be alike. With hypertext, one could see this in action as readers clicked on links to move from place to place in hypertext documents, annotated existing documents with information of interest not to the document's author but to the document's reader and generally shaped what in the past would be a monolithic, linear print document into an informational web of data elements with multiple connections between them.

5 Atkinson, according to prevailing folklore, so believed that hypertext technology belonged in the hands of "the rest of us" that he insisted Apple bundle the software, for free, with each Macintosh computer or else accept his resignation. Such was his value to Apple that management agreed, and for years thereafter the HyperCard reader and authoring tools were part of the standard Macintosh software distribution.

6 One can argue, as Barthes did, that the text is only created by the reader after engagement with this web.

This linear to non-linear transformation, this decentering of the text, this repression of the voice of the author in favor of voices of authors and readers are what hypertext critics, like Dartmouth University's George Landow, find refreshing because "[c]ompared to a printed text, one in electronic form appears relatively dynamic, since it always permits correction, updating, and similar modification" (Landow 1997). At the heart of the open text was a technology that allowed the reader to interact with the hypertext document, the key word being interact. Through linking, "hypertext adds a second fundamental form of variation" to reading, observes Landow, because readers choose among various paths through a collection of lexias. Behind that observation is the unstated assumption that the lexias and paths are fixed and finite, a dormant web of potentials waiting to be activated. Combine these potentials, waiting to be liberated by the paths chosen by a user, with lexias selected by an author to foster or resist narrative creation and the result is hypertext fiction, a writing form in which authors, readers and narratives all interact.
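To make the lexia-and-link assumptions listed earlier concrete, here is a minimal sketch, in Python, of a database text as this strand of theory models it: a fixed, finite set of lexias joined by fixed links, linearized only by the reader's repeated selections. The lexia names and link structure are invented for illustration and are not drawn from any actual hypertext system.

    # A minimal model of a database text: fixed lexias, fixed links.
    # The reader "performs" the text only by choosing among existing paths;
    # neither the lexias nor the links change as a result of the reading.

    from typing import Dict, List

    # Hypothetical lexias and links, invented for illustration.
    lexias: Dict[str, str] = {
        "wedding":  "The reception hall fills with guests...",
        "toast":    "The best man raises his glass...",
        "argument": "Two cousins begin to quarrel near the bar...",
        "exit":     "The bride slips out a side door...",
    }

    links: Dict[str, List[str]] = {
        "wedding":  ["toast", "argument"],
        "toast":    ["argument", "exit"],
        "argument": ["exit"],
        "exit":     [],
    }

    def traverse(start: str, choices: List[int]) -> List[str]:
        """Linearize the non-linear collection by repeated selection.

        `choices` is the reader's sequence of picks, an index into the links
        available at each step. Different picks yield different readings,
        but the underlying lexias and links never change.
        """
        path = [start]
        current = start
        for pick in choices:
            options = links[current]
            if not options:
                break
            current = options[pick % len(options)]
            path.append(current)
        return path

    if __name__ == "__main__":
        # Two readers, two paths, one fixed web of lexias.
        print(traverse("wedding", [0, 1]))   # wedding -> toast -> exit
        print(traverse("wedding", [1, 0]))   # wedding -> argument -> exit

The sketch makes visible what Landow's observation quietly assumes: the reader's variation lives entirely in the traversal, never in the lexias or the links themselves.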
If hypertext was the physical realization of the theoretical writings of Barthes, Foucault and Derrida, among others,7 then hypertext fiction might be thought of as the physical realization of Borges’s The Garden of Forking Paths (Borges 1962). While a real-life hypertext fiction work doesn’t contain all possible stories, the idea that the collection of lexias and paths represents an inconceivably large number of potential stories carries over from Borges’s fictional work. Organized into lexias, hypertext narratives are activated when the reader chooses a path through the 7I favor Barthes’s work in this dissertation not because Derrida and Foucault’s work on the decentered text and the role of the reader in creating a text are less important, but because Barthes’s work, especially as expressed in S/Z, closely tracks with the real-world implementations of hypertext and the World Wide Web. I find this makes the area of poststructural theory more accessible to both critical theorists and computer scientists. 86 collective body of narrative elements by choosing among links attached to the current lexia. Since it takes only a handful of lexias and links to represent literally millions of different narratives,8 one can easily speak of the narrative potential of a hypertext fiction work as, for all practical purposes, infinite. This sense of an infinity of story paths is one aspect of hypertext fiction that appears to separate it from traditional literature.9 “It is not a m atter of the river being different each time you cross it,” writes J. Yellowlees Douglas, professor and essayist in the field of interactive literature, “so much as it is a m atter of your stepping into an entirely different river with each journey you take” (Douglas 2000).1 0 The river is different, yes, but Yellowlees Douglas fails to mention what Jay David Bolter, author of Writing Space: The Computer, Hypertext, and the History of Writing observes, that within a given lexia “the reader is still compelled to read what the author has written” (Bolter 1991). Despite the power to move among lexias, the reader of a hypertext fiction cannot change or annotate the text. She is restricted to interactions with the lexias (or Bolter’s more expansive term, episodes) for the permutations that put the hyper into hyperfiction, but has no control over them. The river, in Yellowlees Douglas’ s metaphor is different, but the river still takes the voyager where the river wants to go. The reading restrictions Bolter writes about and the nature of JIT narratives are once again enmeshed in the difference between interactivity and agency. While the reader has interactive potential within the hypertext fiction for instantiating 8See footnote in Chapter Two. 9I say appears because poststructuralism postulates an infinity of stories from the same text, depending upon the reader. 10Yellowlees Douglas’s comment is significant. Not only is the reading of the text different each time, but the text itself, the underlying work is different, too. This is a point of differentiation between traditional literature and hypertext fiction. 87 a specific narrative, the reader is not a part of the world of the hypertext fiction. There is a always gulf, a gap, separating the reader and the hypertext fiction’ s collection of lexias and links. Despite her ability to follow different paths, the reader of a hypertext fiction cannot physically alter objects or characters in the fictional world, while the JIT narrative interactor can. 
This difference becomes increasingly significant when hypertext documents and hypertext fiction are viewed in the more general context of the database.

Databases

Hypertext, hypertext fiction, even the World Wide Web are special cases of computer databases. Manovich's project, The Language of New Media, which aims, among other things, to collect and codify some of the terminology attributable to both new media theory and practice, defines a database (in new media terms) "as collections of [new media] items on which the user can perform various operations—view, navigate, search" (Manovich 2001). Manovich's new media items include text, graphics, video clips, audio clips and other digital elements that are now associated with the make-up of World Wide Web pages or the contents of interactive CD- and DVD-ROMs. Since lexias can contain the same elements (though frequently lexias contain a narrower variety), admitting lexias into the definition of database objects is more a matter of semantics than substance. From now on, I will use the term database texts to include hypertext, hypertext fiction, the World Wide Web and any other texts that are structured as linked lexias and use the computer as their medium.

The database is not only a logical construction for organizing a collection of objects, it appears to be part of a mechanism for generating narratives, as described by structural theorists such as Mieke Bal and Seymour Chatman. Narratologist Bal writes, "A story is a fabula that is presented in a certain manner. A fabula is a series of logically and chronologically related events that are caused or experienced by actors" (Bal 1997).11 For Chatman, a narrative is made up of "the 'fable' (fabula), or basic story stuff, the sum total of events to be related in the narrative, and... the 'plot' (sjuzet), the story as actually told by linking the events together" (Chatman 1980a). Both authors describe the process of creating a narrative as selecting, from a totality of events, only a relatively few, which are ordered and presented to the reader or spectator. This collection of objects (events), linked together (chronologically), from which some can be selected, seems to generally fit the definition of a database.12 The narrative's author selects events from the database in a particular order and presents them to the reader or spectator.

The difference between this type of narrative text and a database text lies in the nature of selection. The idealized author of Bal and Chatman typically has an unfettered view of the elements in the database. One might say any selections are at the author's sole discretion. The reader or spectator never has access to the database. In a database text, however, it is the interactor who selects the objects, and these objects are visible only through an interface. The author in cyberspace, far from being the sole arbiter of which objects are selected, actually creates a collection of elements and an interface, the latter implementing the links between elements. A narrative is constructed when the user, working through the interface, selects items from the database in an order that is the consequence of both the user's desires and the database elements accessible at the particular moment the choice is made.

11 Events in this context are changes in the story world caused by the actions of actors or agents.

12 I say "generally" because in a database there is no requirement that objects be chronologically related.
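A small sketch may clarify the selection-based account of narrative that Bal and Chatman describe, and the difference an interface makes. The events, the author's ordering and the interface rule below are all invented for illustration.

    # Fabula: the total pool of chronologically related story events.
    # Sjuzet (plot): a selection and ordering of a few of them.

    fabula = [
        {"id": 1, "event": "the letter is written"},
        {"id": 2, "event": "the letter is mailed"},
        {"id": 3, "event": "the letter is lost"},
        {"id": 4, "event": "the letter is found"},
        {"id": 5, "event": "the letter is read"},
    ]

    # The idealized author of Bal and Chatman: unfettered access to the
    # fabula, free to select and reorder at her sole discretion.
    authored_plot = [fabula[4], fabula[0], fabula[2]]   # told out of chronology

    # A database text: the interactor selects, but only through an interface
    # that exposes some events and hides others at any given moment.
    def interface(already_seen):
        """A hypothetical interface rule: an event becomes visible only
        after its chronological predecessor has been selected."""
        seen_ids = {e["id"] for e in already_seen}
        return [e for e in fabula if e["id"] == 1 or e["id"] - 1 in seen_ids]

    chosen = []
    for _ in range(3):
        visible = interface(chosen)      # what the interface lets her see
        chosen.append(visible[-1])       # the interactor picks one of them

    print([e["event"] for e in authored_plot])
    print([e["event"] for e in chosen])

The point lies in the `interface` function: the interactor never reaches into the fabula directly, so her order of events is a consequence of both her choices and of whatever the interface makes accessible at the moment of choice.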
What is important here is that when an interface is involved,13 the user or the interactor does not have free and open access to the database, but has access that "follows the logic of another person's thinking" (Manovich 2001). JIT narratives expand this analysis. Interactors are not influenced by another person's thinking as much as by the actions of autonomous agents, software characters whose own wants, needs and goals give rise to a multiplicity of other logics.14

13 The existence of an interface is not inevitable. In practice, however, a database narrative uses the computer as a medium because of the computer's ability to quickly access and manage large amounts of data. The complexity of the dual processes of accessing and managing imply the need for some sort of user-accessible interface.

14 In the next chapter I discuss an agent's ability to learn. When agents learn, their behaviors, over time, are a mixture of their original programming and their experiences. Eventually, these behaviors are no longer the sole product of another person's thinking.

Database Narratives

From the descriptions in the preceding section one could easily be seduced by the totalizing power of database texts as represented by objects, links, interfaces and selections. This is precisely what happens to Manovich when he describes how the logic of the database becomes "the center of the creative process in the computer age" (Manovich 2001). By altering the interface, a single database can become the genesis of many different narratives, which Manovich initially defines as the traversing of a database. He qualifies this definition by reminding the reader that this does not mean "an arbitrary sequence of database records is a narrative." For refinement, Manovich appeals to the work of structuralist Mieke Bal, who contributes to Manovich's definition the structural elements of text, story and fabula I quoted earlier. Manovich, it seems to me, allows computer science to trump narrativity. He problematizes using databases for narrative generation by focusing on the structures that make this possible, eliminating from his discussions two decades of theory on the importance of the role of the reader in narrative creation.

Marsha Kinder, professor in the Division of Critical Studies of the School of Cinema-Television at USC, interactive narrative theorist and director of the Labyrinth Project at USC's Annenberg Center for Communication, rescues poststructuralism from Manovich's clutches when she addresses the role of the interactor in narratives generated from database texts in her work on database narratives. To Kinder, the interactor in a database narrative is as much an active participant in a chain of meaning making as the active reader in poststructuralist critiques of traditional narratives. Kinder's interpretation of the database narrative re-presents the interactor as a performer who, in the process of selecting and combining database elements, interprets the raw materials the way an actor or musician interprets a script or a score. The result leaves the interactor "contributing her own idiosyncratic inflections and absorbing the experience into her own personal database of memories" (Kinder 2002a).

Kinder does not restrict database narratives to database texts. In fact, much of her motivation is found in the work of the surrealistic filmmaker Luis Buñuel,
and continues through “a wide range of works,” drawn from the body of European art films, experimental documentaries and mainstream independent cyberfiction. Constant through her writing is her emphasis on the arbitrariness of particular choices as well as the “possibility of making other combinations” that would lead to other stories. The database, literally or in iconic form, becomes a convenient means for upending an ideological position through interactivity. Her focus at the intersection of narrative and database encourages Kinder to pose a question she finds common at interactive storytelling conferences: 91 How can we create engaging interactive narratives that provide an array of pleasures both emotional and intellectual, and that don’t have clear- cut beginnings or endings and are full of interruptions, and that still offer a satisfying sense of drama and still make us want to return to them again and again (Kinder 2002a). The database narrative. I’ m afraid, will fall short of providing the answer, for even as Kinder takes great pains to minimize, if not eliminate, the effects of structure from her equation, her work cannot talk about databases without being structured by them, not as long as database narratives are, like database texts, products of the computer. Computer science is all about structure, a theme th at runs through my work, and structure often stands in juxtaposition to agency. Kinder, in a private conversation, argued that database narratives should be considered unstructured because they are extendable. Interactors can add in data to both hypertexts and Wikipedias.1 5 Thus, the work more closely resembles the web of signification described by Barthes, where the text is constantly in flux. The issue here is not the extendability of the database, but rather the structuring effects imposed on that database by an interface that must function in the absence of constant human intervention, i.e., constant real-time redesign by a programmer. AI, on the other hand, has as one of its goals the elimination of this constraint by substituting interfaces that supply the real-time human intervention. As a result, Al-based interfaces require less structured instructions from their users and less structured organizations of the data they interface with. By reading database narratives against cinema, Kinder rescues cyber storytelling from falling into the clutches of structuralists, both traditional theorists 15Wikipedia is a Web-based, free encyclopedia. Its content is created by its users. Any visitor to Wikipedia can edit its articles, and many do, although in practice about half of all edits are done by just 2.5 percent of the users. Pages are always subject to editing, so no article is ever “finished”. Excerpted from < http://en.wikipedia.org/wik/Wikipedia>. 92 like Bal and Chatman and cyber-theorists like Manovich, whose work fetishizes the computer science and technology aspects of any combination of narrative and (the computer generated) cyberspace. While Kinder foregrounds the ideological potential of database narratives, and the collaboration between authors and inter actors in the meaning-making process, she,too, represses cyberspace’s potential for agency. Databases are a product of interactivity, not agency. 
Kinder doesn’t address the question of agency in the narrative world any more than do Landow, Bolter or Yellowlees Douglas, for whom hypertext and hyperfiction are tools of style, or Foucault and Barthes, for whom the text may be a distributed web of signification that enmeshes the reader, but for whom the nodes of the web are kept out of reach. Writing about databases, whatever their form, focuses on a static view of narrative, a system that is incapable of physically changing itself.1 6 This may seem strange for me to say. After all, database elements can be added and deleted. A hypertext user can add links. A database narrative interactor chooses the next step in the narrative development. The difference is most pronounced in Kinder’s use of the term performer, a term that sums up not only the difference between interactivity and agency, but that sums up the difference between the static and dynamic qualities that separate database texts, and their associated narratives, from JIT narratives. 16I make a distinction here between interactors adding new objects to a database and the database changing its own contents, that is, reasoning by itself which objects to add, delete or modify. A database capable of doing this at a non-trivial level would require some form of Al-enabled software. At that point, the database steps into the sphere of JIT narratives. Note that databases capable of being changed, such as hyptertexts and Wikipedias, change under the intervention of human intelligence. 93 Social Situations and T heir Influence on N arrative C reation JIT narratives are more dynamic than database texts and their associated nar ratives because the former result from the execution of sets of rules rather than selections from sets of alternatives.1 7 JIT narratives are dynamic in real time, which means the rules that can be applied, and the results these applications yield, constantly change as a result of previous applications. The rules are socially situated. That means the results of applying any rule must be interpreted in the context of the narrative world as it exists at the moment, not just the world’s phys ical structure but the emotional states and beliefs of all intelligences (agents) in the world as well. Rather than a database, this type of organization is represented by states, state variables and state changes. A chess game illustrates these points. Fortunately for computer programmers, and less so for chess grandmasters, programming a computer to play chess is fairly straightforward as long as the world of the chess game is the chess board.1 8 All a programmer needs to do is find a computer-friendly representation of the game that does not deviate too much from a human’s conception, so moves can be translated from human form to computer form and back again. Chess is quite cooperative in this area. At any point in the game, every square on the board is either occupied or vacant. Occupied squares can contain one of six types of pieces, and each piece is further identified by its color. The starting position of a chess game is represented in 17Refer back to Rodney Brooks’s description of Kismet the robot in Chapter Two for an example of the use of simple rules to create seemingly unrestricted behavior. 18 Straightforward does not mean easy. Hundreds of thousands of hours of programming and hardware design were necessary to create world-class chess playing computers. Chess is a game of perfect knowledge because each player knows everything about the state of the game. 
Figure 3.1: Initial chess board

For convenience in locating a specific square, rows are labeled 1-8 and columns are labeled a-h.

  pawn     p
  bishop   b
  king     kg
  knight   kn
  rook     r
  queen    qu

  Table 3.2: Abbreviations for chess pieces.

If one uses the abbreviations in Table 3.2 and uses "blk" for black and "wht" for white, the board can be represented as shown in Table 3.3. The two representations of the board are logically equivalent, but the form expressed in Table 3.3 is more amenable to computer analysis. Table 3.3 represents the state of the chess board at the start of the match. The square designation, occupied/unoccupied, type of piece and color are the state variables. State variables, taken in the aggregate, describe the state.

  Square   Occupied   Piece   Color
  1a       yes        r       wht
  1b       yes        kn      wht
  1c       yes        b       wht
  1d       yes        qu      wht
  1e       yes        kg      wht
  1f       yes        b       wht
  1g       yes        kn      wht
  1h       yes        r       wht
  ...
  8h       yes        r       blk

  Table 3.3: Representation of the chess board.

Each move—literally moving a piece from one square to another—changes the state because at least one state variable, the location of the piece moved, differs in the pre-move and post-move states. In Figure 3.2 both White and Black have made their first moves. Table 3.4 represents the new state of the board after White's move, though for readability the table only contains the entries that differ from Table 3.3. The new state is defined by all sixty-four entries, as in Table 3.3. The difference between the two states is called a state change.

Figure 3.2: Chess board after one move by each player

  Square   Occupied   Piece   Color
  3b       yes        p       wht
  2b       no

  Table 3.4: Changed entries after White's move.

Table 3.5 represents the entries that change after Black's counter-move. The chronologically ordered collection of state tables is a representation of the narrative of the chess game at every step of play.

  Square   Occupied   Piece   Color
  6c       yes        kn      blk
  8b       no

  Table 3.5: Changed entries after Black's move.

One might argue that chess can be described in terms of a database because allowable moves and board positions are strictly determined by rules. All legal board positions could be stored in a database and a player could dip in and choose one based on the location of the pieces on the board, making a chess game a series of combinations and selections, much like activating a narrative in a database text. This is true as long as playing the game, and activating a narrative, is restricted to moving the pieces around on a chess board. Such an activity is very well represented by a database. One might think of a database containing all possible board positions (millions of database entries). Any one board position reduces the set of possible playable positions to perhaps a dozen or so. These are the board positions that can be reached by applying the rules of chess to the current board position. So, any single board position reduces the database to a much smaller set of allowable objects, one of which is selected by a player when she makes her move. In other words, the rules act as a sort of interface for choosing which board positions are possible at any one moment and allowing access to those positions.19

A chess match, taken in its entirety, is not restricted to moving pieces. There is a human factor to consider, efforts on the part of the opponents to mislead, confuse, anger or demoralize each other.
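Before returning to those human factors, the state-and-state-change representation just described can be made concrete in a few lines of code. This is a minimal sketch of Tables 3.3 through 3.5, not a chess engine, and the move encoding is invented for illustration.

    # State: one entry per square, holding occupancy, piece and color,
    # mirroring the columns of Table 3.3 (square, occupied, piece, color).

    def initial_state():
        state = {f"{row}{col}": {"occupied": False, "piece": None, "color": None}
                 for row in range(1, 9) for col in "abcdefgh"}
        back_rank = ["r", "kn", "b", "qu", "kg", "b", "kn", "r"]
        for col, piece in zip("abcdefgh", back_rank):
            state[f"1{col}"] = {"occupied": True, "piece": piece, "color": "wht"}
            state[f"8{col}"] = {"occupied": True, "piece": piece, "color": "blk"}
        for col in "abcdefgh":
            state[f"2{col}"] = {"occupied": True, "piece": "p", "color": "wht"}
            state[f"7{col}"] = {"occupied": True, "piece": "p", "color": "blk"}
        return state

    def apply_move(state, source, target):
        """Moving a piece changes at least two state variables; the returned
        dictionary of changed entries is exactly what Tables 3.4 and 3.5 list."""
        new_state = {sq: dict(vals) for sq, vals in state.items()}
        new_state[target] = dict(state[source])
        new_state[source] = {"occupied": False, "piece": None, "color": None}
        changes = {sq: new_state[sq] for sq in (source, target)}
        return new_state, changes

    if __name__ == "__main__":
        s0 = initial_state()
        s1, white_change = apply_move(s0, "2b", "3b")   # White's move (Table 3.4)
        s2, black_change = apply_move(s1, "8b", "6c")   # Black's move (Table 3.5)
        # The chronologically ordered states s0, s1, s2 are the game's narrative.
        print(white_change)
        print(black_change)

None of this captures the human factors just mentioned, which is where the database view begins to strain.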
Those factors, which are also part of a chess match, and which play a role in choosing successive moves, are not so easily mapped into database objects. The implications of opponent interaction may be more obvious in the poker game of Texas Hold ’Em.2 0 Here, the rules of the poker game are one factor out of many used by players in deciding their next moves. Each move is formulated on the spot, based on the situation at hand. Even though more experienced players may use their knowledge to help them formulate an action, the actual behavior, whether to raise or call, bet or fold, is an on-the-spot assessment. There are no database objects relating all possible hole cards, up cards, pots and opponent behaviors to a betting strategy. The players’ interface with the poker game is not through a database of possible plays but through a set of rules, observations, evaluations, experiences and lucky guesses. The dominance of this combination of rules plus uncertainty helps explain why JIT narratives are not well-represented by database models. It might seem that playing chess can be represented by a database. All the legal moves in a game could be stored in a database and a player could dip in 19 Extending this line of reasoning even further, chess playing is rife with gambits (strategies) for playing out certain board positions (and moves) during different phases of a match. To the degree that these gambits are well-known and well-defined, using them is very much like selecting objects from a database. 20For a synopsis of the rules see < http://www.texasholdem-poker.com/beginnersintro.php>. 98 and choose one based on the location of pieces on the board, making a chess game a series of combinations and selections, much like activating a narrative in a database text. Yet that is not how the game is played. Chess players work not with a memory of all possible moves but with a memory of a set of rules that define how moves are constructed. The choice of a move is not based solely on the current board position as much as on a combination of the board position, what one player wants the other to believe about the strategies she is employing, the possibilities for enticing an opponent into misreading future board possibilities and other feats of social engineering. The player’s interface with the chess game is not through a filter of possible moves, but through a set of rules and rules of thumb for figuring out what to do next. The dominance of these procedural rules helps explain why JIT narratives are not well-represented by database models. Database models assume the database contains discrete elements from which the interactor will select and combine. JIT narratives contain ever-changing choices that are created and destroyed on the fly, in real time, with every action taken by an interactor or agent. These actions, or choices, change the state of the narrative world, which is something akin to changing the words on the next page of a book every time the preceding page is read. The spectator of a film, especially in this era of DVDs, can certainly view the last ten seconds of a film first, yet when she goes back views the film frame by frame from first to last, the final ten seconds will still contain the same frames in the same order as they did initially (though it may hold a different meaning for the spectator). With JIT narratives this is hardly true. 
If, in some JIT narrative world, an interactor chooses to remove a babysitter’s diary from the desk drawer, for example, and destroy it, the diary will never again be in the drawer. If an interactor leaves the diary where she first found it, it could remain in 99 that drawer forever, or not if someone or something else moves it. This interactor- affects-the-world-and-the-world-affects-the-interactor relationship is a major point of differentiation between the interactivity of the database text and the agency of JIT narratives. One example of an interactor performing a narrative that I encounter frequently (and one I used to use in my classes) is that of an orchestra conductor interpreting a score. Even though the notes are fixed on the page, the example goes, the conductor varies the pacing, the intensity, the color of her interpretation in often unpredictable ways so that each performance of the score is never quite the same as the others. The analogy here is between the conductor on the one hand and the interactor in some sort of interactive narrative, the latter ’’interpreting” the narrative differently by making different choices while traversing each narrative path. As with all analogies, this one is suitable as far as it goes. It just doesn’t go far enough to encompass both database texts and their narratives and JIT narratives. I would suggest, for a more accurate picture, the analogy of a stand-up come dian. Most professionals, even when performing set pieces.2 1 are sensitive to feedback from the audience. These performers may alter pieces of their acts on the fly. This can range from changing the style of delivery to adding or removing suggestive or obscene phrasings to eliminating minutes of material that don’t seem to be working with the audience.2 2 Unlike the orchestra conductor who changes her interpretation of her collection of notes, the stand-up comedian throws out one set of material and constructs another in real time. 211 refer to material that has been written and rehearsed to be performed in a particular way time after time. 22As a former stand-up performer, I have even abandoned my set pieces and turned to improv with the audience when my pre-planned material bombed. 100 Here is where I find Kinder’s use of the term performance highly problematic in the larger scope of narratives that rely on digital technology for both their creation and expression. When Kinder speaks of the interactor performing the narrative she is referring to a performance in the interactor’s mind that (a) can not easily be shared by others and (b) does not influence the contents of the database. While a clever database designer can change elements available to the interactor on the basis of the interactor’s choices, such changes are logically lim ited in complexity.2 3 JIT narratives vary according to the more complex real-time decision-making capabilities of AI technology, a technology that is not the driving force behind database narratives. The performances of a database narrative and a JIT narrative take place in two separate contexts: the former governed by the application of a low-complexity interface to a static database and the latter by the high-complexity application of real-time rules to a dynamic environment. W ithin computer science, these differences in context can be explained by examining the differences between data processing, from which database narratives arise, and symbol processing, which gives life to JIT narratives. 
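The diary example above can be sketched in a few lines: the interactor's action permanently changes the state of the narrative world, and every later description of that world is computed from the changed state rather than retrieved from a fixed set of pages. The objects and actions below are invented for illustration and are not drawn from any existing JIT narrative system.

    # A toy JIT narrative world state. Actions mutate the world in real time;
    # there is no fixed collection of pre-authored pages to return to.

    world = {
        "desk_drawer": {"contains": ["babysitter_diary"]},
        "fireplace":   {"contains": []},
        "diary_destroyed": False,
    }

    def destroy_diary(world):
        """The interactor removes the diary and destroys it. From this moment
        on, no later state of the world can contain the diary in the drawer."""
        if "babysitter_diary" in world["desk_drawer"]["contains"]:
            world["desk_drawer"]["contains"].remove("babysitter_diary")
            world["diary_destroyed"] = True

    def describe_drawer(world):
        # The "next page" of the narrative is generated from the current
        # state, not re-read from a fixed text.
        if "babysitter_diary" in world["desk_drawer"]["contains"]:
            return "The diary is still in the drawer, where she first found it."
        if world["diary_destroyed"]:
            return "The drawer is empty; the diary is gone for good."
        return "The drawer is empty."

    print(describe_drawer(world))   # before the interactor acts
    destroy_diary(world)
    print(describe_drawer(world))   # the world, and the narrative, have changed

The contrast with the film spectator is the point: there is no fixed prior text to rewind to, only earlier states of a world the interactor has since changed.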
The Computer as a Data Processor

As I discussed in Chapter Two, the field of artificial intelligence developed because, among other reasons, computer scientists (though that's not what they called themselves in those heady days of the mid-1950s) began looking at computers as symbol processors and not simply data processors; but database texts rely on the data processing capabilities of computers for their operation. When the hypertext reader or the database narrative interactor reaches into a hypertext document or a database,24 she invokes the computer's data processing capabilities, in particular the ability of a computer to retrieve specific data from mass storage based on highly specific criteria.25 The computer's reasoning processes are limited to mapping the search criteria into a collection of data. For the simplest hypertext and database narrative systems, the computer makes no independent decisions; it merely expands a shorthand notation—a link—into a rich data set.26

In data processing, the computer manipulates and transforms data, which here I will define as the contents of memory locations. The manipulations and transformations are controlled by data processing rules, which are little more than valid sentences in a language. A (simplified) way of looking at data processing is what happens when the computer scans its input, finds a valid sentence in its input and manipulates the contents of memory locations as specified by the sentence.

23 An interface designer typically alters access to the underlying database by using IF-THEN-ELSE logic. For example: IF the current page is page 1 AND the user has already visited page 5, THEN only make links to pages 6, 7, and 8 accessible, ELSE only make links to pages 11 and 12 accessible. This logic grows in complexity when IF-THEN-ELSE instructions are nested in the form: IF the user has already visited page 5, IF the current page is page 6, IF the previous page was page 4, THEN only make links to pages 20 through 25 accessible, ELSE IF the current page is page 14, and so on. Before long, the decision structure is too much to intellectually manage, so changing access to the database in this manner is limited in complexity.

24 Structurally, there is no difference between a hypertext document and a database. I use both terms here to encompass scholarship that separates hypertext into a reading strategy and database narratives into a more complex set of narrativization operations.

25 This activity goes by several other names, though not associated with narrativization because the operations are usually taken one or two at a time. For example, initiating an Internet search operation falls under the current definition, as does the printing of payroll checks, where the computer retrieves the name, address and payroll information for payees based on a Social Security or employee number.

26 This is data processing at its simplest. I've already discussed what happens when the computer is programmed to reason about this mapping from link to data through IF-THEN-ELSE logic.
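The point of footnote 23, and the description above of the computer expanding a shorthand notation into a rich data set, can both be illustrated with a short sketch. The page numbers and visiting rules are invented for illustration, not taken from any actual hypertext system.

    # A data-processing interface: link expansion plus hand-written
    # IF-THEN-ELSE rules deciding which links are accessible. The computer
    # makes no independent decisions; it only applies the designer's logic.

    pages = {n: f"Text of page {n}..." for n in range(1, 26)}

    def accessible_links(current_page, visited):
        # The first rule from footnote 23, plus one nested variant; the exact
        # rules are illustrative, and real designs pile up many more of them.
        if current_page == 1 and 5 in visited:
            return [6, 7, 8]
        elif 5 in visited and current_page == 6:
            return list(range(20, 26))
        else:
            return [11, 12]

    def follow(link):
        # Expanding the shorthand notation (a link) into its data set (a page).
        return pages[link]

    visited = {1, 5}
    print(accessible_links(6, visited))   # [20, 21, 22, 23, 24, 25]
    print(follow(20))                     # "Text of page 20..."

Even in this toy form, footnote 23's warning is visible: each new rule multiplies the cases the designer must keep in her head, which is why such interfaces remain limited in complexity.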
Individual symbols in the input have no meaning to the pro cessing program, other than their role as placeholders. Thus, when the statement A = B + C s fed into a program input and is recognized as a valid sentence in a language (as it would be in Fortran27, the computer adds the contents of a memory location iden tified as “B” to the contents of the memory location identified as “C” and places the result in the memory location identified as “A”.2 8 Not only are the letters A, B and C arbitrary (the programmer could have used X, Y and Z or BANANA, KERCHIEF and PLATYPUS for that m atter), the memory locations the symbols denote are irrelevant and are assigned at the convenience of the computer pro gram that translates the symbols into computer instructions. In fact, all that is important are the underlying memory locations and the symbol. For Newell, Simon and Shaw’ s theorem prover, discussed in Chapter Two, the symbols in the input were important in and of themselves. They represented not 27Fortran is a programming language developed in the 1960s to allow scientists and mathe maticians to convert their formula-based problems into computer programs. The name is an acronym from Formula translation. 28Yes, there are several steps I’ve glossed over here in the interests of simplicity, plus a desire not to turn this dissertation into a treatise on computer programming. Suffice it to say there is appropriate intervening software that converts the string of symbols into the proper combination of instructions to perform the addition and storage operations. 103 underlying memory locations holding data waiting for manipulation; they were the data. In the realm of symbol processing, a researcher might write P C Q and Q C R and then ask the computer to “find something interesting” about the relationship between P and R.2 9 In proving P C R, a symbol manipulation program like the theorem prover cares not at all whether P , Q and R denote locations in the computer memory (in fact, they don’t). They are entities denoting abstract concepts—sets in this example—and the symbols can be manipulated according to the rules of set the ory (presumably programmed into the computer). When, in its calculations, the computer creates a string of characters P C Q C R, it is the semantics of the state ment the computer will process, according to a rule that says that the relationship of three quantities in the presence, of two transitive operators can be reduced by eliminating the middle term.3 0 This form of computation is typical of symbol manipulation, and of it, Rodney Brooks writes that intelligence operates on a system of symbols whose “meanings are unimportant,” but yield meaningful results “when an observer of the system 29This is a problem in set theory, where sets are collections of elements. The objects in a database, for example, form a set. So does the collection of all mammals. Any human belongs to the set of all mammals, but any reptile does not. In this example, the letters P, Q and R denote abstract sets. Their exact contents are irrelevant. The given information here states that P is a subset of Q and Q is a subset of R. P is a subset of Q if and only if every element in P is also in Q. So, the set of all humans forms a subset of the set of all mammals. One “something interesting” in this case is that P is also a subset of R. 30For computer science purists I own up to an oversimplification. 
In some computer program ming languages that are called strongly typed, there is some semantic processing on input such as A 4- B = C. In such languages, the data being manipulated are expected to be of a certain type, for example whole numbers (integers). If this is not the case, for example if the contents of memory location B were 7 and the contents of memory location C were 3.14159, then the data processing software would have built-in rules for handling the conversion. It might convert 3.14159 to a whole number (by throwing away the fraction or rounding it down) and expressing the result as the whole number 10. Or, it might convert 7 to 7.00000 and add the two operands to get 10.14159. Or, it might just flag the operation as an error, inhibit any processing, and allow the programmer to sort things out. 104 knows the groundings of the symbols within his or her own experience” (Brooks 1991).3 1 While the computer need not (and most likely cannot) have any sense of the meaning of the symbols it is processing, these symbols can be, and are, bound to real meanings in the real world. When the symbols are bound to what they sig nify, the result of the computer’s machinations appears to be intelligent behavior. The grounding of the symbols to real-world objects also creates socially-situated, meaning, or meaning that comes from evaluating the symbol-signifier binding in the context of the environment of the moment. Database texts become narratives when an absent authoring agent3 2 combines with a present reader author. W ith JIT narratives, an absent authoring agent com bines with present authors to create unpredictable lexias. These authors are the interactor plus all the agents in the JIT narrative, and the creation of the narrative cannot be attributed solely to the logic of selection and combination, but rather as a complex logic (or illogic) of human decision making in real time. Meaning comes forth from the collisions among the intentions, meanings, misinterpretations and contracts between agents (and between agents and the interactor). This is why I have repeatedly stressed the different contexts in which database narratives and JIT narratives operate. Understanding JIT narratives means understanding not only the narrative component but also the way in which various choices and oppor tunities present themselves, that is, understanding how new moves (to appropriate game terminology) can be created while existing ones are destroyed. It is the combination of the two that illuminate the nature of JIT narratives from both a 31 Grounding means associating the symbol to a signified—author. 32 Authoring agent is meant to describe the programmers and designers who create the databases and interfaces, as well as those who create characters and environments for JIT nar ratives. 105 practical and theoretical point of view. Before I delve into this topic in more detail, I want to better situate JIT narratives in the space of cyber-story telling. A B rief H istory o f C yber-Storytellin g If I were asked for an analogy of a JIT narrative (and since this is my dissertation I feel no compunction about asking) my choice would be the cocktail party, where multiple characters respond to their socially-situated environments, free of the taint of authorial control. T hat’s why I would choose a cocktail party as a model3 3 for cyber-storytelling. 
Narratives come and go, and while someone selected the locale, stocked the bar and invited the guests, there is no sense of an author behind the narratives at a cocktail party. There are many non-computer examples of the cocktail-party-type emergent narrative. Having written a couple of stage plays, theatrical improv is one that comes to my mind. “[T]he basis for a ’free improvisation’ is also the basis for an emergent narrative,” writes Ruth Aylett of the United Kingdom’s Centre for Vir tual Environments, “since actors will normally try to produce some kind of emer gent narrative framework in order to hold the attention of an audience” (Aylett 1999). In improv, loosely defined frameworks provide support for the promise of emergent narratives. Agitprop theater, commercial theater productions, such 33A model is the reduction of a complex process to a simpler analogy or representation. Sci entists use models all the time when describing the human circulatory system in terms of pipes, a pump and water, or an atom as a moon (electron) orbiting around a planet (nucleus). The electronic game Sim City uses a representation of a real city that includes relationships among some people and institutions while ignoring others in an effort to clarify the relationships that are present. 106 as Tamara and Tony and Tina’ s Wedding3 4 even the socially-structured environ ments that suggest the little narratives people participate in on a daily basis, from ordering coffee at a Starbucks to exchanging information after a fender bender, are products of improvisation within a social framework. W ith such a rich emergent narrative tradition to draw from, one might expect storytelling in cyberspace to evolve along improvisational lines; and, it might have, if research and development funding were not directed toward solving problems with strong AI. Instead, the history of story telling using artificial intelligence, which begins in earnest with one of the first acknowledged and somewhat successful attem pts at storytelling called TALE-SPIN, is a history of modeling and planning. TALE-SPIN, a storytelling computer program written by James Meehan in the late 1970s, simulated narrative-building by creating events and ordering them as dictated by a template (actually a set of rules describing relations between characters and between characters and their world). TALE-SPIN was not real time interactive. The user entered a set of initial conditions demanded by the template and the program instantiated a narrative. Meehan describes the rules for telling the story of The Fox and the Crow, an Aesop’s fable with the moral “Never trust flatterers” this way: “Never do X” means that if you do X, then something “bad” will hap pen. “A flatters B” means that A says something “nice” to B, but is insincere, doing it for some ulterior motive. Since a consequence 34 Participatory theater traces its roots far back in history. From the groundlings at the Old Globe Theater to contemporary plays such as Tamara and Tony and Tina’ s Wedding spectators were and are incorporated into the theatrical experience. In contemporary theater, however, the narratives are pre-scripted. What changes are local moments when a spectator acts in a way that requires a response from one of the characters. In Tony and Tina, for example, the audience members represent guests at a wedding ceremony, Actors interpellate the spectators into roles by behaving in certain ways. 
The actors also respond to the spectators as if the spectators were characters playing their own roles in the narrative. 107 of saying something nice to B is that B will become more kindly dis posed toward A, then it’ s reasonable to assume that B’s kind dispo sition toward A will enable something to happen which is “good” for A. Putting this all together, we predict that A has some goal which requires that B be kindly disposed toward A, so A says something nice to B, B reacts accordingly and something happens which causes A to achieve his goal and also causes B to suffer (Meehan 1980). Meehan lays out the general tale, which includes the social embedding of the archetype characters, while the user of TALE-SPIN provides specific details such as what characters are in the story,3 5 what some of their wants are and what objects in the story world are available for fashioning the narrative. After that, TALE-SPIN takes over and creates a story with the underlying structure of The Fox and the Crow, but using characters and objects from the user’s imagination. TALE-SPIN was an intelligent program “cognizant” of the process for transforming the “what” into the “how.” Peter Weyhrauch proposed a method for guiding interactive drama in his Ph.D. dissertation (Weyhrauch 1997). Building upon the work of Brenda Laurel, as well as the Oz Project, Weyhrauch based his work on the concept of destiny, a quality he asserted must be part of every interactive story and the foremost contribution of the author/artist responsible for the interactive dramatic experience. Destiny “encodes the broad qualities the experience must have, rather than just one specific sequence of events that have that quality,” he writes. While Weyhrauch believes the computer-created characters in an interactive drama “should appear reactive, goal- directed, emotional, modestly intelligent, and capable of using natural language,”3 6 and his work is based on formal planning and a story model, he still operates under 35 Nothing in the template requires the characters to be a fox and a crow. They could just as easily be a hyena and a donkey or a gigolo and a widow. 36I would add proactive to his list of qualities for autonomous agents. 108 the premise that “the artist specifics a destiny in such a way as to let the system guide the User’s experience at a later time, so that she may fulfill her destiny.” Weyhrauch’s reasoning, which is steeped in the attitude of the Oz Project, is that interactive drama must admit closure; and not just closure, but satisfying closure. The user cannot enjoy the experience unless she reaches the Hollywood- style cinematic ending where loose ends are resolved, the protagonist achieves his or her goals and the moral of the story lifts the spirits of the spectators. This was Weyhrauch’s implicit model of interactive drama,3 7 and it guided the interactor to a pre-scripted finale, even when that finale could take several forms.3 8 While some of my colleagues consider this type of structure valid cyber-storytelling, because the interactor is free to explore the narrative world even while she is guided to an ending, I consider it a form of trickery. The interactor is led to believe she has the freedom to create her own experience though what she actually has is the freedom to be coerced into following someone else’s story line. 
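What this kind of guidance amounts to in practice can be sketched as a small search procedure: from the current point in the drama, a drama manager generates candidate story moves, imagines the interactor's possible responses, scores the resulting fragments against the author's encoded "destiny," and commits to the first move of the best-looking path. This is only a hedged illustration of that general idea, with invented story moves and an invented scoring function; it is not Weyhrauch's system.

    # A toy "destiny-guided" drama manager: look a few moves ahead, score each
    # imagined story fragment against the author's desired qualities, then
    # make the first move of the best path. Moves and scoring are invented.

    import itertools

    STORY_MOVES = ["reveal secret", "introduce rival", "offer reconciliation"]
    USER_MOVES = ["confront host", "change subject", "leave the room"]

    def destiny_score(fragment):
        """A hypothetical stand-in for the authored destiny: reward fragments
        that build tension and then resolve it."""
        tension = sum(1 for move in fragment
                      if move in ("reveal secret", "introduce rival", "confront host"))
        resolved = fragment[-1] in ("offer reconciliation", "change subject")
        return tension + (2 if resolved else 0)

    def choose_next_move(depth=2):
        best_move, best_score = None, float("-inf")
        for system_move in STORY_MOVES:
            # Imagine the ways the interactor and the system might trade moves.
            for future in itertools.product(USER_MOVES, STORY_MOVES, repeat=depth // 2):
                fragment = [system_move, *future]
                score = destiny_score(fragment)
                if score > best_score:
                    best_move, best_score = system_move, score
        return best_move

    print(choose_next_move())   # the move that best serves the authored destiny

The interactor still acts freely at each step, but every system move is chosen to bend the unfolding story back toward the author's ending, which is precisely the coercion objected to above.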
Truly affording the interactor freedom to create her own story would require a more abstract approach to the nature of story and narrative, but abstraction was not on the minds of members of the artificial intelligence community.

37 I'm not sure he knew he was using this model.

38 Weyhrauch used a method common to early chess-playing programs. From the current point in the drama his program would make story "moves," legal state changes within the story world. First the computer would move, then it would generate possible moves on the part of the user, then generate its counter-moves to the user's moves, and so on. After a few iterations, his program would evaluate the most "interesting" story fragment, back up, and make the first move down that path.

Disappointing Results Change The Focus of AI Research

The AI winter, as the early 1980s were known to computer scientists, shifted the focus of AI research. The exuberance in the AI community created by an influx of government funding dollars and a gross underestimation of the difficulty of programming intelligent machines resulted in too many promises and too few results. With nothing to show for millions of dollars in research, the government, the prime source of AI funding, turned off the money spigot for AI projects that appeared to be nothing but sink holes for millions of dollars and playgrounds for cloistered researchers more interested in seeing how things worked than in producing technologies funders could use. With major funding gone, the AI community spent the first part of the 1980s fairly dormant, like plants under a blanket of snow, waiting for conditions to become more favorable for growth. By the late 1980s, AI research had regained respectability and underwriting by shifting its priorities to producing results that could be measured against program objectives. Looking back on that period, Michael Mateas and Phoebe Sengers, both researchers into interactive narrative and believable software agents, write that measurable objectives rule out "the ability to work on complex phenomena such as the human use of narratives" because the process doesn't fit into "complete, decisively testable models" (Mateas and Sengers 1999). Work proceeded in the field of autonomous agents, software that could function independent of direct human control while demonstrating human-like qualities of intelligence and thought, with models of human cognition becoming popular because software performance could be gauged against experimental psychology results.39

39 For an example of the use of cognitive modeling, see the description of Emile (Gratch 2000).

While strong AI research favored agents, work on storytelling in cyberspace shifted to research less tied to hard computer science. Brenda Laurel's writings in the late 1980s, based on her training in theater arts, influenced the Oz Project to think in terms of social responses, not brain process simulations. Weyhrauch, himself, was a member of the Oz Project team, as was Michael Mateas who, with
The autonomous agents of strong AI were being combined with cognitive theories about narrative and lan guage, theatrical performance and highly-immersive virtual reality, VR experiences that included total dominance of a user’s field of view coupled with multi-channel, spatialized sound. Nonetheless, the holy grail of telling stories in cyberspace remained the proper model of a narrative; for the researchers addressing this problem, narrative was interpreted in its classical, Aristotelian form, where conflict drove drama to its peak before events unwound the drama leading to a catharsis, a release of the pent-up emotions of the audience. Upon reflection, this makes sense as a by-product of strong AI’s focus on demonstrable results. The classical narrative proved to be a common denominator for both developers and funders. If the funders recognize what they see as a story, then their funds were appropriately spent. Unfortunately, this restricts the artistic range of storytelling in cyberspace to familiar formats, while cinema and literature transcend and critique such conformity. AI researchers, faced with the somewhat daunting task of understanding the role of narrative in cyber-storytelling systems, often retreated into their comfort zone within computer science. In such cases, the path to storytelling in cyberspace is obstructed by reasoning such as this from interactive drama researcher Nicholas Szilas. In his proposed framework for interactive drama, Szilas assumes the inter actor controls the actions of the story’s protagonist, a character whose personality 40According to its developers, interactors in Fagade actually control the fate of the crumbling marriage of the two animated characters in the story. I l l has been defined by the author of the drama.4 1 Szilas proposes a solution for the situation in which the interactor does not adopt the same character position for the protagonist as the author had in mind: The protagonist adapts to the user: if user behavior for the protagonist differs from the behavior initially chosen by the author, then learning techniques allow to adapt [sic] the protagonist model accordingly. After a training period, the user and the protagonist act in the same direction. The user adapts to the protagonist: the user does not completely con trol his or her protagonist, who has its own autonomy. If the user acts in a non consistent way, the protagonist refuse to execute [sic] ( “no no, it is not a good idea”) or if the user docs not choose the action suited to the protagonist, then the protagonist performs it anyway ( “sorry, I could not help... ”). Progressively, user [sic] understands which actions the protagonist accepts without resistance, and plays accordingly (Szi las 1999). Szilas’s thinking is flawed in several ways. He seems to assume that the inter ac tor and protagonist “act in the same direction,” for long periods of time, which discounts the interactor’s engagement with the narrative and her ability to change beliefs. While the system is constantly retraining itself it also appears the pro tagonist controls the narrative, which only serves to marginalize the interactor’s role. His most egregious assumption is that the user will come to see the narrative in the same light as the “author,” and will synchronize her narrative interpreta tions accordingly, for this only wraps his work in the cloak of those before him who insist the author’s vision must prevail. 
While such approaches may make for a computer-friendly architecture, they represent reductive interpretations of narratives, a structuralist approach in the service of computer science. I want to revisit Fagade, the interactive story system of Mateas and Stern, here because it represents an attem pt to move away from Al-driven cyber-storytelling 41 Note that the interactor is not the protagonist. The protagonist already has its own, well- defined personality apart from the personality of the interactor. 112 with a project th at combines technology and art. According to its creators. Fagade is “an experiment in building a fully-realized interactive drama” (Mateas and Stern 2002). Mateas and Stern, recognizing the difficulty contemporary electronic games have in reaching beyond impersonal relationships and shallow character definitions, set a goal of addressing the deeper elements of narrative content. “Most notably, games are unable to convincingly address many of the topics and themes of human relationships,” they write, “thereby limiting both their mass market appeal and potential cultural value.” Rather than another storytelling environment whose experiences are based on ascending the ladder of game levels, and whose satis faction comes as a result of dozens of hours in front of a game box or computer, Facade creates stories in the form of interactive, one-act plays, each lasting fifteen to twenty minutes. The player’s actions control the unfolding of the plot, so suc cessive engagements with Fagade can produce story variations resulting in different dramatic experiences. Fagade interactors play the long-time friend of Grace and Trip, a mid-30s, materialistic, outwardly successful couple whose marriage is on the verge of a meltdown. “During an evening get-together at their apartment that quickly turns ugly, [the interactor] become[s] entangled in the high-conflict dissolution of Grace and Trip’s marriage,” write Fagade’s creators. By his or her presence, the interactor changes the course of the couple’s life, hopefully motivating the interactor to re experience the drama but make different choices to solicit different outcomes. Fagade is performed on a personal computer, with an interface designed to allow the interactor to communicate with Grace and Trip in natural English. The story world consists of the animated bodies of Grace and Trip and their apartment, primarily a large furnished living room where the action of the drama 113 is designed to take place. The interface is first-person in 3D space, navigated with arrow keys. The mouse controls a small hand-shaped cursor with which the player picks up and uses objects. The player can click Grace and Trip on their shoulders to comfort or hug them, or click on their lips to kiss them. To speak dialog, the player types text which appears on the lower part of the screen, like subtitles. Discourse is continuous and real-time, not turn-based; if the player enters text while Grace or Trip is speaking, it tends to interrupt them at the moment the enter key is pressed. Mateas and Stern chose a story that does not depend on visual and aural stimulation or challenge physical reactions, but one that turns on subtle character interactions that drive the story along. There are an acceptable array of topics that can be discussed, only a subset of which will be activated in any one run- through. 
"It is safe to say," they write, "that certain types of stories, such as character-oriented kitchen sink dramas, lend themselves better to interactivity than plot-oriented action dramas." Façade's constraints on the interactor, however, go beyond those associated with genre.

Conceptually, Façade resembles a database narrative rather than a JIT narrative. Mateas and Stern rely on a database of beats, self-contained sequences of action and meaning, to define the intentions of Grace and Trip from moment to moment. This structure reduces the generality of Façade's moves, but the database does meet one of Façade's objectives, that of preserving the author's vision. "The system's role is that of an editor, assembling a story from an array of story fragments," with those fragments created by an author "who presumably possessed some original intent on how the fragments could ultimately fit together." When the interactor takes an action, Façade fashions its response by examining its collection of beats and selecting one that makes dramatic sense, according to a formula that's computed after every beat. Although Façade makes some subtle decisions, such as whether to return to a beat that was interrupted, at the heart of the system is a database of beats indexed as responses to most dramatic states and legal interactor actions.

Creating Believable Characters

During the late 1980s and on into the 1990s, artists and technologists interested in cyber-storytelling also became interested in relaxing the cognition-centric approach championed by the proponents of strong AI. What, they wondered, would happen if the focus shifted away from making agents think like humans to making humans think the agents were thinking like humans? The difference would be that agent behavior would no longer be defined by human cognitive processes; it could be described in terms of how believable the behavior appeared to the interactor. Such believability, researchers theorized, could come from paying attention to certain behavioral details. Bryan Loyall and Joseph Bates, both from the Oz Project, cite the "use of the eyes, the timing of pauses in speech, an awareness of body position and personal space" as examples (Loyall and Bates 1997). Such believable characters are necessary if cyber-storytelling is to position the interactor in a believable story world. While cyber-stories using these types of agents are not character-centric in the way Mateas and Sengers use the term, one would be hard pressed to imagine a story driven by character personalities if there were no way of creating the personalities in the first place. Working at the Media Research Laboratory at New York University (NYU), Ken Perlin and Athomas Goldberg developed Improv, an authoring, or scripting, tool for creating just such believable interactive characters for embedding in social situations. With Improv, Perlin and Goldberg are "building believable characters that respond to users and to each other in real-time, with consistent personalities, properly changing moods and without mechanical repetition, while always maintaining an author's goals and intentions" (Perlin and Goldberg 1996). The reference to the intention of the author does not mean restricting the interactor to fulfilling some pre-determined prophecy, but rather to the expression of a repertoire of desired personality traits.
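As a rough illustration of what authoring such a repertoire might involve, the sketch below selects among behaviors weighted by authored personality traits and the character's current mood. It is a hypothetical toy under my own assumptions, not Perlin and Goldberg's actual Improv scripting system, and every name in it is invented.

    import random

    # Hypothetical repertoire of personality-weighted behaviors; not the
    # Improv scripting language itself.
    PERSONALITY = {"friendly": 0.8, "impatient": 0.2}   # authored trait weights

    BEHAVIORS = [
        # (behavior, trait it expresses, base weight)
        ("wave_hello",      "friendly",  1.0),
        ("offer_chair",     "friendly",  0.6),
        ("glance_at_watch", "impatient", 0.7),
        ("interrupt",       "impatient", 0.9),
    ]

    def choose_behavior(irritation=0.0):
        # Weight each behavior by the authored personality; a changing mood
        # (here, a 0..1 irritation level) boosts the impatient behaviors, so
        # the same authored character responds differently as a scene unfolds,
        # without mechanical repetition.
        weights = []
        for _, trait, base in BEHAVIORS:
            w = base * PERSONALITY.get(trait, 0.0)
            if trait == "impatient":
                w *= 1.0 + 2.0 * irritation
            weights.append(w)
        return random.choices([b for b, _, _ in BEHAVIORS], weights=weights, k=1)[0]

    print(choose_behavior(irritation=0.1))   # usually a friendly behavior
    print(choose_behavior(irritation=0.9))   # impatience becomes far more likely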
The author—I prefer to think of her as an agent designer—specifies how the characters will react when socially situated in a space of interactors and other agents. In that sense, Improv is a tool for defining an agent's social understanding of the world and the way these perceptions shape the agent's responses to an unfolding narrative. This type of authorial direction becomes a necessary part of the design of a JIT narrative, where what each agent believes about its social environment governs its actions and, in turn, the details of the JIT narrative itself.

Another researcher in this area is Phoebe Sengers, whose research into interactive narrativity has taken her into two areas, one she calls alternative AI and the other situated AI. The former stands in opposition to strong AI's focus on internal mental processes, while the latter represents her approach to agents' behaviors when they are embedded in the narrative world. Sengers's work mediates the space between critical theory and computer science, forging a relationship between cultural studies and the creation of intelligent agents, and I will have more to say about her work in Chapter Four. The strength of her contribution rests on her work in defining the behaviors of intelligent agents by situating them in a cultural context. "Autonomous agents are not 'intelligent' in and of themselves," maintains Sengers (Sengers 1998). Their intelligence is apparent only with reference to a particular system of explicit and implicit goals and group dynamics that define the environment in which the agents behave. Think of agents as communicators is Sengers's message. "By making the commitment that 'agentness' is meant to be communicated, we can explicitly communicate to the audience what the agent is about, rather than assuming (often incorrectly) that this will happen as a side-effect of the agent 'doing the right thing'." To Sengers, a believable agent is not necessarily a rational one, but rather one whose behavior and motivations can be understood, even if its behavior does not make social sense. What Loyall, Bates, Perlin, Sengers and others working on believable agents attempted to do was free cyber-storytelling from the strong-AI reliance on testable models.

The Narrative as a Problem to be Solved

Success (in storytelling in cyberspace) is often measured by the answer to Mateas's question "[h]ow similar is the system's solution to a human solution?" (Mateas and Sengers 1999). I find the language alarming. What is a solution to a narrative, let alone a "human solution?" The question brings back memories of my undergraduate days when I studied electrical engineering. One of the unspoken principles of an engineering education is that there is always an answer. All problems have unambiguous solutions because the model of the world is a combination of mathematics and the laws of physics.42 If one found the correct model for the problem then, in the words of the dean of my engineering school, the answers "would all come out in the wash." The remnants of this philosophy echo in Mateas's words. Cyber-storytelling is about finding solutions, and clean solutions at that.

42 The situation is worse than a universe of solvable problems. Homework and exams taught my fellow students and me that answers are always "nice." An answer such as x = .43978 is prima facie incorrect. It is "ugly," and a sure sign that the solution needs another look. Sure enough, the re-evaluated answer x = 8 is "clean" and most likely correct.
Computer science discourses surrounding storytelling in cyberspace, aside from conflating "narrative" and "story," rest on two building blocks of artificial intelligence: the model and the plan. Strong AI researchers hew to the tenet that one cannot program a computer to simulate human intelligence until one knows how a human exhibits that intelligence (that is, how a human solves the same problem).43 The theorem prover described in Chapter Two mimicked the human quality of decomposition, of breaking a complex problem into smaller pieces until these sub-problems (or sub-goals) could be solved. The smaller pieces get combined into larger solutions, until the original problem is solved.

43 I am indebted to Dr. Jon Gratch of the ICT for never wavering in his attempts to convince me of this even though I don't completely agree; his argument finally finds its way into this dissertation, though not in the way he might prefer.

Like generations of engineering students, strong AI researchers approach cyber-storytelling by determining an appropriate model of the story (or class of stories) of interest. Then, the model is used to find a solution to the problem of navigating the model from the beginning of the story to the end. Models used in the sciences can be quite stylized and subject to a fine degree of abstraction to simplify the world while foregrounding only those subjects of interest: the human circulatory system as a collection of hoses connected to a water pump; a human eye as a camera; an atom as a miniature solar system with electron "planets" orbiting a nucleus "sun." When AI researchers develop a story model, any act of abstraction—the conscious omission of relationships, causes and effects or influences for purposes of simplification—is an intellectual statement. Rodney Brooks believes that "abstraction is the essence of intelligence and the hard part of the problems being solved." When humans do all the abstraction they solve the difficult part of the task at hand. There is "little for the AI programs to do but search [for a solution]" (Brooks 1991). Find the right model with the right level of abstraction and apply planning algorithms to get from the beginning of the story to the end. In one admittedly abstracted sentence, that is a description of strong AI research into storytelling in cyberspace. Like all good abstractions, this one hides a problem it chooses not to solve, for the solutions posed by the programmer quietly filter out any unpredictability.

The Problem with "Solving" Narratives

One of the pleasures of both hypertext and database narratives is the navigation through unexplored space, the pleasure of the unexpected. Dealing with open-ended explorations, however, is not pleasurable for computer research. Instead of the meanderings associated with new media, imagine a hypertext fiction or database narrative that required the interactor to start at a particular link and stop when she reached a specific page or line or graphic. Yes, the interactor is still maneuvering through cyberspace, only now she has a win/lose or right/wrong consequence shimmering over her computer, eternally haunting her until she finds a way from start to finish or admits her intellectual lack.
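A minimal sketch of what that start-to-finish requirement looks like from the computer's side follows. The link structure and node names are invented; the point is only that, once the narrative is reduced to such a structure, "reading" it becomes a search for a path through a solution space.

    from collections import deque

    # A hypothetical hypertext reduced to a graph of links; the "problem"
    # is to find any path from "start" to "finish".
    LINKS = {
        "start":    ["prologue", "map"],
        "prologue": ["letter", "map"],
        "map":      ["letter"],
        "letter":   ["finish"],
        "finish":   [],
    }

    def solve(start="start", goal="finish"):
        # Breadth-first search over the solution space of all link paths.
        frontier = deque([[start]])
        while frontier:
            path = frontier.popleft()
            node = path[-1]
            if node == goal:
                return path                   # one "solution" to the narrative
            for nxt in LINKS.get(node, []):
                if nxt not in path:           # avoid wandering in circles
                    frontier.append(path + [nxt])
        return None                           # no solution exists

    print(solve())   # e.g. ['start', 'prologue', 'letter', 'finish']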
Computers can be programmed to solve this sort of problem using a model and any number of planning algorithms that arrive at a destination by forging a path through what is called a solution space, a collection of all possible paths, some of which will solve the problem and some of which won't. Figure 3.3 diagrams some possible steps in solving the problem of buying milk at the supermarket. Solving the problem of buying milk is equivalent to finding a path between "Start" and "Finish." The point-and-click navigation of cyberspace is replaced in the model's definition by a collection of preconditions, actions and effects. If all the preconditions of an action are met the action can occur. The result of an action, its effects, changes conditions in the model, enabling new preconditions that allow new actions, raising new effects and so forth.

Figure 3.3: A simple model for buying milk at the supermarket (from Russell and Norvig 1995).

In Figure 3.3, the action "Buy Milk" can only occur if the precondition "At Supermarket" has been fulfilled. "At Supermarket" is an effect of the action "Go To Supermarket." Being at the supermarket fulfills the precondition for buying the milk. The action "Buy Milk" has the effect of making the answer to "Got milk?" affirmative.44 In this simple example there is only one solution to the milk-buying problem; however, suppose the problem includes a number of different ways to get to the supermarket: drive, bicycle, take a taxi, walk or ask a friend for a lift. Now there are five solutions (one for each method of getting to the market). If one of the initial conditions states that the would-be shopper is confined to bed with a broken leg there are zero solutions. It is the planner's job to sort this out, partially by trial-and-error and partially by informed heuristics that steer the planning process away from unpromising solutions (so the planner does not have to test all potential solutions).

44 The quest genre of electronic games is replete with such precondition/action/effects logic. The player cannot defeat the dragon without the magic weapon (precondition). Once the weapon is used (action), the dragon is slain, but the weapon is drained of its magic and cannot be used again (effects of the action).

Describing narratives in this fashion, as a problem of wading through some structure, is "critically incorrect," ignoring as it does any active engagement on the part of the reader/spectator/interactor.45 This technique is, however, computationally friendly, which is why, regardless of the final purpose to which a program will be put, discourses surrounding AI programming are heavily biased toward structure and model. In the case of interactive storytelling these discourses are woven into the very fiber of most projects.

Alternative AI as an Alternative to Modeling

There is work being done that circumvents strong AI's bias toward modeling of stories, characters and interactors. Researchers in the field of weak AI (what Sengers calls alternative AI) focus more on the signification aspects of narrative while easing structural constraints and allowing the interactor to participate in creating meaning. The principles of this type of AI are reflected in Rodney Brooks's description of the robot, Kismet (in Chapter Two). Kismet was capable of performing many simple actions.
These actions competed for dominance, that is, for expression in the real world. A complicated arbitration system decided which behaviors took priority over the others. As a result, Kismet exhibited recognizable human behaviors which, because of their method of generation, had no meaning other than what was given to them by Kismet's interlocutor.

45 These solutions do not, for example, allow for the case where the shopper, on her way to the milk, gets sidetracked by a conversation with a friend, or finds her particular brand of milk is out of stock and might choose to not buy milk that day.

Somewhat more literate socially-situated agents now populate the worlds of Multi-User Dungeons (MUDs). These agents cannot rely on traditional planners to guide their actions because, as Mark DePristo and Robert Zubek write in their paper presented at the AAAI46 2001 Spring Symposium47 (DePristo and Zubek 2001), the environments are simple but dynamic, and the combination of many characters acting quickly and simultaneously complicates a traditional planning approach. Just like in the realm of JIT narratives, agents access "the same sensory information, the same repertoire of actions, and... the same survival requirements as human-controlled characters." DePristo and Zubek combined symbolic AI and behavior-based robotics in their agent being-in-the-world, a hybrid approach they found necessary for dealing with the rapid decision-making their agent had to deal with in MUD environments.

46 The acronym, pronounced "triple AI," stands for the American Association for Artificial Intelligence.

47 The topic of the symposium was "Artificial Intelligence and Interactive Entertainment."

Less complicated intelligences perform equally well in MUDs, where "it is common to find computer programs, called bots, that present themselves as people," writes professor and psychologist Sherry Turkle. "Since most MUDs use only text to simulate a three-dimensional virtual reality a software program that produces text within a MUD can seem as real as a person who produces text within a MUD" (Turkle 1997). One somewhat notorious bot is Julia. Much like ELIZA, the computer therapist of Chapter Two, Julia uses textual-analysis tricks to carry on conversations with MUD visitors. Besides relying on grammatical manipulations, Julia can also reach into her extensive database of slang and idioms to craft answers to her conversation partners. As DePristo and Zubek point out, agents need survival skills while operating in MUDs, and for Julia, who is endowed with a sassy female persona, her most important skill is being "adept at detecting and deflecting sexual advances" (Turkle 1997). Her flirting and parrying skills notwithstanding, there are still those male visitors who, with the appropriate set of hormones engaged, thrill in the pursuit of this mysterious woman; none, however, were as persistent as Barry, a tall, blond-haired student who chased Julia for nearly two weeks, apparently challenged by exchanges such as this one (quoted in Turkle from July 14, 1992):

Barry says, "Hello, how are you?"
Julia says, "Hello, Barry."
Barry says, "Are you a virgin?"
Julia says, "As far as you are concerned Barry, I always will be."

Like ELIZA, Julia is an example of an intelligent agent formulated using the principles of alternative AI that can give the illusion of not only human-like behavior, but also a facile ability to converse intelligently on virtually any subject.
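The flavor of those textual-analysis tricks can be suggested in a few lines of code. The sketch below is a deliberately crude, hypothetical reduction, not Julia's or ELIZA's actual program; the patterns and canned replies are invented (one echoes the exchange quoted above).

    import random
    import re

    # Toy illustration of pattern-matched conversation; not Julia's real code.
    RULES = [
        (r"\bare you a\s+(\w+)", ["As far as you are concerned, I always will be.",
                                  "Why do you ask whether I am a {0}?"]),
        (r"\bhello\b|\bhi\b",    ["Hello there.", "Hi. What brings you here?"]),
        (r"\bi am (.+)",         ["How long have you been {0}?"]),
    ]

    def reply(text):
        # Match the visitor's text against each pattern in turn and pick a
        # canned (or lightly transformed) response for the first one that fits.
        for pattern, responses in RULES:
            match = re.search(pattern, text, re.IGNORECASE)
            if match:
                return random.choice(responses).format(*match.groups())
        return "Go on."   # default when nothing matches

    print(reply("Hello, how are you?"))
    print(reply("Are you a virgin?"))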
Compared with ELIZA, Julia seems more intelligent because she/it is more conversational; she/it responds to questions and statements with assertions of her own. Julia also exhibits a rather unique personality. Barry was able to recognize Julia as an individual on his subsequent visits to the MUD, that is, he believed he was talking to the same person he conversed with previously. On the inside, Julia is not particularly smart. She/it is an example of Searle's Chinese Room. Phrases directed at her/it are matched against a table of probabilities of meaning and a response is selected from a collection of possible responses for that meaning.48 In the environment of a MUD, however, it seems an agent doesn't have to have that much intelligence to be intelligent. Nor does the agent need to plan a solution to be a gifted conversationalist. Just ask Barry.

48 This is another simplification. The software also keeps track of what has been discussed and how it has responded previously, so the set of responses can be quite dynamic.

Modeling as a Valuable Process

I want to clarify a possible point of confusion. Modeling is a basic and necessary method within the realm of artificial intelligence, and I don't mean to imply it has no place in JIT narratives or in storytelling in cyberspace. In fact, planning is crucial to the implementation of autonomous agents, which need to find ways to achieve their goals, whether those goals are pre-programmed or evolve with the situation the agents are embedded in. Agent researchers Harland and Winikoff argue that at a minimum, agent systems should include properties such as

• pro-activeness: the agent has an agenda to pursue and will persist in trying to achieve its aims;
• reactiveness: the agent will notice and respond to changes in the environment;
• autonomy: the agent will act without necessarily being instructed to take particular steps;
• situatedness: the agent both influences and is influenced by the environment around it.

Other possible attributes of agent systems include being social (i.e., teaming up with other agents in order to achieve common goals), learning (i.e., taking note of previous actions and adjusting future actions accordingly), and rationality (i.e., working to achieve its aims, and not working against them) (Harland and Winikoff 2001). Few, if any, of these conditions could be achieved without some form of planning and modeling. (A schematic sketch of these minimum properties in code closes this section.)

The danger of models in cyber-storytelling comes when the researcher attempts to model a narrative or story. Not only are there disagreements on exactly what properties of a narrative make it a narrative; such modeling and planning, treating the narrative as a problem to be solved, also leads to dangerous abstractions that cannot be supported by narrativity theories. An equally troubling abstraction is the belief that a narrative can be modeled and solved independently of the perceptions and interpretations of the characters and interactor. There is plenty of work in the AI universe that focuses on modeling autonomous agents situated in their environments. There seems, at the moment, to be less work in applying these agents to model-free narrative generation, which would bring the discourses of AI and those of narrativity closer to alignment. That, in turn, would give both discourses insight into the other's view of the common problem of telling stories in cyberspace and lead to a stronger, unified theory and understanding of how narratives can be made in the age of artificial intelligence.
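As promised above, here is one schematic way the minimum properties Harland and Winikoff enumerate might be expressed in code. It is a sketch under my own assumptions: the class, the method names, the simple sense/decide/act cycle, and the environment object with its visible_to and apply methods are all invented for illustration, not drawn from their paper.

    # Schematic rendering of the minimum agent properties listed above:
    # pro-activeness, reactiveness, autonomy and situatedness.
    class Agent:
        def __init__(self, goals):
            self.goals = list(goals)   # pro-activeness: an agenda to pursue
            self.percepts = []

        def sense(self, environment):
            # Situatedness: the agent is influenced by the world around it.
            self.percepts = environment.visible_to(self)

        def react(self):
            # Reactiveness: notice changes in the environment.
            return [p for p in self.percepts if p.get("changed")]

        def decide(self):
            # Autonomy: choose the next step without being told what to do.
            changes = self.react()
            if changes:
                return ("respond_to", changes[0])
            if self.goals:
                return ("pursue", self.goals[0])
            return ("idle", None)

        def act(self, environment):
            # Situatedness again: the agent's action changes the world.
            action = self.decide()
            environment.apply(self, action)
            return action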
Narratives and the Loop

I will have more to say about the technology of, and the role played by, agents in Chapter Four, and I included this description now to round out this picture of cyber-storytelling over the last two decades. The narrative theory I have included in this chapter is primarily concerned with what different theorists have referenced, albeit by different names, as an underlying work or a web of signifiers. JIT narratives add to this static narrative component a dynamic one. The dynamic component is a result of changes in the JIT narrative world brought about through the agency of the interactor and agents. Their ability to change objects and other agents in the world means that the underlying material, in this case the world presented to the interactor (or agents for that matter), is fluid. Objects can appear, disappear, reappear or change. Agents can shift goals, abdicate responsibilities, exhibit emotional upheavals or change behaviors as part of a series of complex interactions and decisions, a feedback mechanism that motivates these changes on a moment-to-moment basis. Analyzing JIT narratives requires that one pay attention to both their static and dynamic components.

The static component is amenable to traditional narrative analysis; the dynamic component requires a bit more inquiry. Looking back over a completed JIT narrative experience one can critique that experience in much the same way as one critiques cinema or literature. The descriptions or personal observations of what occurred during the experience form the underlying material for a traditional narrative because, after the fact, the occurrence of the experience will not change.49 As Kelso et al. wrote about the Oz Project experiments (Kelso, Weyhrauch, and Bates 1992), the target of the experience was the participant, not the observers. The same is true in JIT narratives. For interactors, one critical question theorists should ask is what makes a particular JIT narrative behave the way it does? In other words, what social, political and ideological forces influence the way agents behave?50

49 As I wrote earlier, a JIT narrative could be videotaped by an outside observer. No matter how many times the tape was replayed, the physical events (signifiers) making up the narrative would never change.

50 I can draw a similar analogy with database narratives. While one can analyze the narrative created by an interactor through the process of selection and combination, one might also want to know what influenced the inclusion of specific elements in the database as well as the design of the interface and the network of links.

One place to start looking for an approach to analyzing the JIT narrative's dynamic part is with the mechanism that creates the dynamic part itself. Until the introduction of Apple's Macintosh, personal computers (and most commercial and scientific ones) used a command line interface instead of today's familiar graphical user interface (GUI) associated with Macintosh and Windows operating systems. With a GUI, a user can perform various (often unrelated) actions, in any order, and at any time. The computer is not waiting for a particular command.
Figure 3.4: A Command Line Interface. The computer expects only certain commands in a specific order. The command "lx" is not one recognized by the computer's operating system. The proper command, "ls", lists the contents of the current directory (equivalent to double-clicking on a folder).

In contrast, with a command line interface, the computer is waiting specifically for a character from the keyboard, then another and another, until the RETURN (or ENTER) key is pressed. The computer then tries to make sense out of the string of characters the user entered. See Figure 3.4. Any other action, except entering characters from the keyboard, is simply ignored by the computer. The GUI in Figure 3.5 hints at the multiplicity of actions open to the user. There are folders to open, menus to pull down and programs to initiate. The user can move the cursor or click a mouse button at will, in addition to entering characters from the keyboard.51 Pointing and clicking on a menu, as in Figure 3.5, is only one way the user expresses her desires to the computer.

Figure 3.5: The Finder GUI for Mac OS X 10.3.

51 Not every operation produces meaningful results, however. Typing characters works only if the user has previously specified where those characters should go. Unlike the command line interface, the computer doesn't become defiant until those characters are entered. Waiting or not, the user can still avail herself of other actions available through the GUI.

While a command-line interface is reactive, a GUI-based interface is proactive. Rather than waiting for the user to enter a command, the logic behind the GUI constantly tests its environment looking for something to do. The software checks to see if the user has typed a character, resized a window, selected a menu item, moved an icon, started an application, any and all of the offerings of the particular operating system. Operating systems using GUIs are virtually one large loop, known to programmers as the event loop. The event loop is similar to the logic used in the timesharing computer systems described in Chapter Two, only in that case the loop (usually called the scheduler) looks for user programs ready to take actions. The GUI event loop focuses on both programs and users. Include autonomous agents in the mix and the event loop applies to JIT narratives.52

Figure 3.6: A JIT narrative event loop.

52 In (Manovich 2001), Manovich writes about loops as the engines of narratives. Although Manovich's terminology may sound similar to mine—that it is the loop that somehow represents the nature of interactive narratives—he uses loops as building blocks, like words in a sentence.
His loops are self-contained units that he assembles into a narrative. I use the loop as a representation of the entire JIT narrative structure, not for its individual pieces. 129 Figure 3.6 shows a stylized application. In this example, the computer repeat edly checks to see if the interactor or any agents are prepared to take an action and, if so, the computer invokes a program specifically designed to respond to their needs. Any agent can perform only one action in any cycle of the loop, but since the computer traverses the loop at speeds much faster than human reaction times, an agent can appear to perform two actions uninterrupted by other agents (or the interactor). In reality, the agent is the only one requesting attention during this particular cycle (or successive cycles) through the loop.5 3 The event loop is deeply tied to the idea of preconditions, actions and events described earlier in this chapter. If, at the start of the loop, certain preconditions are true, then an agent54 can perform a particular action during the current cycle. The action’s effects change the environment in some way so that other agents may find their preconditions satisfied, during the same cycle or the next one, so those agents perform their actions, which have associated effects, which change the environment, and so forth. The event loop, then, is the mechanism that enables the JIT narrative environment to change, which provides the opportunity for the JIT narrative to move forward, that is, toward some sort of ending.5 5 Moving forward, however, is not as simple as moving from some beginning point to some end. The interactor spends most of her JIT narrative time inside this event loop. For an idea of what that implies I call upon two examples from cinema in which two protagonists find themselves caught in a loop. 53In and Out represent paths for entering and leaving the loop. 54The discussion applies to more than one agent, too, but using one agent keeps the analysis simple. 55For the moment, I will defer the discussion of what constitutes an ending in a JIT narrative. 130 T he Loop in th e N arrative: Groundhog D ay and R un Lola Run If a spectator steps into the central role of Phil Connors in Groundhog Day (Ramis, 1993) or Lola in Run Lola Run (Tykwer, 1998) she finds herself in a temporal space where time stops running forward and instead loops around and around. Ground hog Day's loop begins when TV weatherman Phil Connors (Bill Murray) awakens on Groundhog Day morning to the strains of Sonny and Cher’s I ’ ve Got You, Babe. Connors is in town to cover the traditional prediction of Puxatawnee Phil, the hibernating rodent allegedly the world’s foremost predictor of the onset of spring. Connors’s day is marked by conditions and actions that become psychologically charged as the film progresses, although initially they seem innocent opportunities for director Harold Ramis to showcase Murray’s comedic talents. 
During his first cycle through the loop, Connors’s day includes the following: • an encounter with a much-too-happy guest at the bed and breakfast inn where Connors is staying; • the owner of the inn who tries to engage him in idle chit chat; • a panhandler, whom Connors dispatches with a throw-away “Catch you tomorrow”; • a former high-school-acquaintance-turned-insurance-salesman, Ned Ryerson, who dogs Connors for several blocks with a pitch to buy life insurance; • a step off the curb and into a puddle of slush; • an unvarnished advance toward his producer, Rita (Andie McDowell); • a prima donna attitude toward his cameraman, Larry (Chris Elliott); 131 • a taping of sincerely insincere wrap arounds for the beginning and end of his news report; • an unexpected blizzard (Connors had predicted the storm would pass to the north of Puxatawnee) that closes the highway and forces Connors, Rita and Larry to spend another night in town. At six in the morning, when Connors’s clock radio starts the day with the same recording by Sonny and Cher the loop begins again. These events, plus a few more added to create a cinematic plot instead of a theoretical exercise, are repeated endlessly, though after an initial period of adjustment, Connors alters his responses to the events. He goes, for example, from tolerating Ryerson as just another cloying annoyance to punching him in the face to buying every type of insurance policy Ryerson has in his portfolio. As Connors cycles through Groundhog Day after Groundhog Day, he evolves from the cynical, self-centered person he was upon arriving in Puxatawnee to a humble, well-respected, integrated member of society. The cues to this evolution are informed by Connors’ s altered responses to the events in the loop that have become sites of psychic energy. If this were a JIT narrative, I would say that Connors is cycling through an event loop in which he is the sole active agent of change. The loop has the property that it resets the world back to its initial state every time the loop begins. Connors can choose to play out that loop as before, or take different actions, in the face of identical preconditions, to experiment with their different effects. One purpose of experimenting with different actions might be to find a way out of the loop, that is, to find some combination of effects that prevent the loop from resetting the world at the end of every cycle. As the sole agent of change, that is Connor’s only means of escape. 132 Run Lola Run is structured around loops, circles and spirals, basic geometric shapes connotating repetition, either endless, as in circles, or progressive, as in loops and spirals. Unlike Phil Connors’s looping world, every cycle in Lola’s life reflects changes in the world instantiated in the cycle before. For Lola, these are small change in each cycle that result in major changes over the life of the narrative. This is in keeping with the traditional idea of an event loop, where the interactor remains in the loop while simultaneously using actions to edge closer to an escape. Where Connors’s life is more rounded, with events repeating as they are, Connors uses his situation for the pleasure of what he can gain. On the other hand, Lola uses her time in the loop the way an electronic game player uses repeated game play. 
Lola learns with each repetition, making changes through increasing knowledge (or skill), then emerging from each cycle at a higher proficiency level.6 6 Where Connors finds pleasure, at least initially, from remaining in the loop, from the start Lola is motivated to find a way out. In this narrative, Lola must get the one-hundred-thousand marks her boyfriend, Manni, needs to save his life, while getting to Manni before he commits armed rob bery and leaves himself open to capture and imprisonment. A few of the markers in Lola’s loop: • her exit from her apartment. It is always accompanied by a request from her mother to stop by the drug store, while the circular nature of the story is suggested by a three-hundred-sixty degree camera move around her mother as she coos to her lover on the telephone; • Lola’s encounter with a man and his dog on a spiral staircase. This is done in animation and has the feel of a computer game, albeit a simple one, in 56Marsha Kinder, private conversation. 133 which the player controls a character that has to leap over obstacles, run through a course and so forth. Here, Lola must avoid the dog and run down a seemingly infinite path, another model of the event loop where one goes around in circles while still making progress; • bumping into a woman pushing a baby carriage; • distracting the driver of a car leaving a parking garage who subsequently is involved in an accident; • asking an ambulance driver for a lift; • rejecting an offer from a cyclist to buy his bicycle; • deflecting workmen carrying a glass pane into the path of the ambulance; • barging into her father’s office and interrupting his assignation with a female colleague; • stealing money from her father’s bank (and subsequently winning money in a casino); • stopping Manni from robbing a store. In each cycle, one sees Lola become more proficient in bending her environment to her own goals and desires. Granted, these films are stylized treatments of trapped protagonists because the protagonists are not interactors and the time they spend in their respective loops is part of a broader narrative th at makes up the film. I use them here to illustrate, perhaps in more concrete terms, the way an event loop generates a JIT narrative, moment to moment, through the actions of the interactor and the 134 various agents.5 7 The forces that control how this JIT-like narrative unwinds are gradually revealed to be the goals and desires of not only Connors and Lola, but those of the other agents within the loop’s reach. These agents become analogous to objects in the database of a database narrative: they are the building blocks from which the narrative will be constructed. Connors progressed because he correctly interpreted R ita’s criteria for a man she would potentially enter into a relationship with, and was able to find and manipulate objects and agents in the narrative world so he could meet those criteria. Lola successively alters her behavior to modify the behaviors of agents, and change the effects of actions, in what she hopes are predictable ways. The composition of the agents play a major role in determining the composition of a JIT narrative. I will have more to say about this in Chapter Four. T he Loop and th e E nding Much has been written on the nature and importance of endings in traditional narratives, what I have been calling the static part of JIT narratives. The cyclic nature of a loop, its repetitiveness, raises the question of just where the end of a loop lies. 
This point in my dissertation seems like a suitable place to address the nature of JIT narrative endings with respect to the dynamic part of the narrative, the part generated by the AI component. Among the scholars whose work I surveyed, there seems to exist a consensus that one purpose of the ending in traditional narratives is structuring the beginning and middle, of putting into perspective all that has come before, of unveiling the relationships among previously unrelated threads. Writing metaphorically about 57In the examples, the agents were human actors, but the idea is the same. 135 stories and storytelling, Walter Benjamin asserts “the meaning of a m an’s life [meaning of the story] is revealed only in his death,” which is represented at least by the end of the story (if not the actual death of the object of the tale) (Ben jamin 1968a). When he likens it to death, Benjamin places a stamp of finality on an ending; yet, he does not overstate the issue. Until the end is reached, until the narrative stops, there is always one more move, one more event, one more occur rence that can overturn what the narrative previously implied. W ith death comes quiescence and with quiescence, reflection: reflection not possible in the face of a tale with new possibilities around every narrative corner. Peter Brooks, author of Reading for the Plot, gives substance to the twists and turns when he introduces a relationship between the beginnings of narratives and their ends that he calls the plot. The plot is the slippery middle that complicates the path from beginning to end by introducing obstacles, detours and diversions that raise questions for the reader or spectator, questions the reader or spectator moves forward to resolve. Resolution, or what Brooks calls wholeness, is what an ending is there to provide. “The sense of a beginning, then,” Brooks writes, “must in some important way be determined by the sense of an ending” (Brooks 1984). Where the plot complicates, the ending simplifies. This power of an ending to simplify, to structure what has come before ending is the subject of an entire book by Frank Kermode (Kermode 2000). In it, he likens endings to the framing effects of the impending Apocalypse in myths and religions, giving rise to the tension between determinism implied by the plot and the apparent ability of persons within the plot to alter its structure. Lacking an ending, a narrative has no tension because it unwinds wildly into empty time. It is this sense that the unwinding will stop that creates a meaningful middle within the story, much like the effects of the rhythmic ticking of a clock structure time 136 into moments of expectation. Tick always signifies the presence, the promise, of tock, so “[T]he interval between the two sounds, between tick and tock is now charged with significant duration.” Kermode’s analogy organizes narrative time the way the tick and tock organize real time: a beginning promises a middle, and a middle promises an end. Where the structuring effects of beginning and middle promise an end, the end promises a discharge to what the beginning and middle have charged with significance.5 8 W ith all this emphasis on the structuring importance of endings, the possibility of a lack of an ending in a JIT narrative is, itself, charged with significance.5 9 This state of narrative suspension, in which an ending may never come, troubles Peter Lunenfeld, a new media designer and critic. 
“Technology and popular culture propel us toward a state of unfinish in which the story is never over, and the limits of what constitutes the story proper are never to be as clear again” (Lunenfeld 1999a). Lunenfeld illustrates my point that JIT narratives must be approached from both their traditional and dynamic sides, which means asking the question what, if anything, structures a JIT narrative? Critical theorist Andrew Darley, when writing about what signifies closure in electronic games argues that completing a game means “successfully playing one’s way through all the levels, overcoming all the obstacles and defeating all the ene mies” (Darley 2000. To an extent, I agree. As I wrote earlier, this repetitive 58Begiiming, middle and end are terms relative to the time it takes to view a film or read a novel, that is, to the time of the discourse and not the time spanned by the narrative. The latter may move from past to present to future in any combination. However, Kermode’s point is that at the end of the discourse, when there are no more frames in the film or pages in the book, this anticipated discharge effect should be realized. 59I’m not speaking here of endings in the sense of switching off the computer or, as is possible in many electronic games, saving the state of the narrative world. I’m addressing a real-world combination of Phil Connors and Lola, an interactor who endlessly traverses a loop, perhaps advancing incrementally over time, like Lola, but who has no clear exit point, like Connors. 137 progression through multiple levels is exactly the structure of Run Lola Run, and in the film Lola “wins” when she amasses enough money to pay back Manni’s boss and run away with Manni as well. Run Lola Run was a game, and games are structured by rules. A chess game is over because the players have achieved a certain relationship among the pieces beyond which, according to the rules of the game, no further moves are possible. It seems, however, at least some JIT narratives have the potential for looping into infinity, as in Groundhog Day. Earlier, in Chapter Two, I discussed the game Black & White, in which the player takes on the role of a benevolent or malevolent god and proceeds to twist the world to match his or her vision of power, adulation and obedience. Interactors in JIT narratives, like the players of Black & White, and unlike readers and spectators, at least have the opportunity to attem pt to twist their environment to match their goals and desires. W hether they are successful or not depends on their relationships vis-a-vis the agents in the narrative world and the ability of interactors to use objects in the world to their own advantages. This agency, one of the fundamental properties of JIT narratives, means interactors’ goals and desires need not remain fixed over time. Nor must the goals of the agents. When dealing with the nature of desire in narratives, one theoretical work that cannot be ignored is Freud’s Beyond the Pleasure Principle. I believe a JIT narrative comes to an end, and the interactor leaves the loop, when the interactor can accept the current state of the world as quiescent. If no further changes in the world were to occur, the interactor would feel the current world either structured the world before it, represented a pleasurable state that does not motivate departure, or represents a state from which no further actions are possible. 
In forming this argument I am extending Freud’ s work in Beyond the Pleasure Principle (Freud 1961) to interactors in JIT narratives. Where Freud 138 writes about the tension between the pleasure of repetitive acts and the need of the organism to move toward death and closure, the interactor does the same. The difference is that while organic death may be an objective condition, the metaphoric death of the interactor is a personal choice, as is everything else in the JIT narrative experience. The interactor may choose infinite pleasure, and be caught in the loop forever, or inaction, however motivated, which is another form of death in a dynamic narrative component. While Freud’s work can be generally applied to narratives, Peter Brooks, in Reading for the Plot, relates Freud’s writings to them. Working against Freud’s theories in Beyond the Pleasure Principle, Brooks questions why Freud finds it necessary to go beyond what he has already written about. It is here that Brooks undertakes a reading of Beyond the Pleasure Principle in the context of narratives and narratability, calling Freud’s work “Freud’s own masterplot,” in which Freud lays out the mechanism by which “life proceeds from beginning to end” (Brooks 1984). Brooks concludes that, based on his reading of Freud, “each individual life repeats the masterplot,” searching for an answer of whether closure of the life is necessary as opposed to desirable but not required. Brooks concludes that Freud addresses the issue of going beyond the Pleasure Principle when Freud confronts the question of why unpleasant experiences, such as traumas, repeat themselves in dreamwork, in “the apparent contradiction of the wishfulfillment of dreams.” Here, Brooks sees the similarity between this form of repetition and the inherent repetition of narratives. “Narrative,” writes Brooks, “always makes the explicit claim to be in a state of repetition.” To make the link stronger between Freud and narrativity, Brooks notes th at the process of repetition in traum a and psychoanalysis, the reliving in the present rather than 139 the remembering about the past, is a move from a passive to an active position relative to the material or experiences being repeated: If repetition is mastery, movement from the passive to the active, and if mastery is an assertion of control over what man must in fact submit to—choice, we might say of an imposed end... repetition taking us back again over the same ground, could have to do with the choice of ends. Brooks, via Freud, offers another view of the purpose of narrative endings. Brooks takes Freud’s thesis that “unbound” energy, that which demands and cre ates instant gratification, can become bound through the process of repetition and writes that “[r]epitition in all its literary manifestations” binds “textual energies” by reshaping them into what Brooks calls “serviceable forms.” The binding pro cess has repercussions, however. While making narratives more accessible, the binding of textual energies through repetition at once delays the discharge of those energies, initiating “a turning back from immediate pleasure to insure that the ultimate pleasurable discharge will be more complete.” Thus, the narrative end becomes problematic for the person engaging it: immediate pleasure versus the promise, but not the guarantee, of some unspoken ultimate pleasure. 
Re-reading Brooks in the context of the event loop, I believe what drives the interactor in a JIT narrative, especially in the face of agency, is not a desire to act, but a desire to master; not a desire to master the text or the narrative, but the desire to master the world through repetitive behavior. There is a certain repetitiveness to JIT narratives that is apparent when they are viewed, as they were earlier, as part of a precondition, action, event cycles. Certainly the interactor can leave a JIT narrative at any point. She simply stops acting. Going further, she can leave the VR theater, switch off the computer, take any number of steps to terminate the mechanics of the situation. W hether this coincides with a sense of fulfillment, what in traditional narratives Peter Brooks describes as a “narrative 140 desire,” is a personal decision, much as it is with traditional narratives. Only, with JIT narratives, the interactor has a choice not of re-interpreting an existing ending, but of creating a new, and perhaps more satisfying, one. The previous film examples demonstrate this drive toward mastery. Weather man Phil Connors enters the loop with the desire to have sex with his producer, Rita, but temporarily abandons that desire to capitalize on the pleasure of omnipo tence engendered by the fresh opportunities at his disposal beginning at six every morning. Thus, his attem pts at suicide for the pleasure of knowing that no m atter what happens his life will reset itself in a few hours, and his insincere interest in R ita’s interests, which he uses as weapon to fan her libido and the systematic application of food and eating, which have long substituted for the appetite for the sexual act (if not for the act itself),6 0 will continue unabated. When his long- anticipated conquest fails, Connors realizes the pain (and futility) of being trapped in a loop where fulfillment of desire, like the desire itself, is a fleeting illusion. Con nors only way out of his twenty-four-hour a day purgatory is to evolve himself by using his god-like insight into R ita’s soul as a basis for change rather than a trifle to feed his cynical self-absorption. Connors eventually achieves the mastery he seeks through reshaping himself using the actions of agents as his guide. When he masters not only his world, but himself, he is able to exit the loop with a sense of fulfillment, and is able to free himself from his circular prison. Lola discovers that her way out of the loop is through desire motivated less by pleasure than by fear of the ending. Where Connors feared the beginning of each loop, waking up and discovering he would spend yet another Groundhog Day, Lola’s fear is at the end of the loop, the coming 60 Watching the scene in which Connors gorges himself on doughnuts and pastries I could not help but be reminded of the legendary eating scene in Tony Richardson’s 1963 version of Tom Jones. 141 apocalypse so eloquently described by Kermode. Lola bends the event loop to her own purposes, but she is externally motivated. Where Connors bent the behavior of the world to match his self-self-absorption, Lola adjusts her behavior to the world. For Connors, repetition was an annoyance. For Lola, repetition is a palliative, not a postponement but a preparation, a preparation for escaping the game-like circular-patterned life of the event loop for the linear lifestyle with Manni, from which her narrative promises pleasure and closure without the unwanted pain of delaying the end. 
Her escape from the loop is also her escape from an apocalyptic moment in which she sees a life without Manni but with an over-indulgent mother and an escapist father. Lola escapes because she learns to change her strategy right at the beginning of each loop, then to capitalize on the changes in the narrative her behavior puts into play. This is exactly the role of the event loop in JIT narratives; it is not there to drive endless repetition but to stimulate alternative actions based on the behaviors of multiple agents. The interactor decides whether life in the event loop is pleasure or pain, whether the end of the loop is life or death. Creating a more satisfying outcome in a JIT narrative, achieving this state of mastery, is not guaranteed. The JIT narrative develops through the active engage ment of agents and the interactor with each other. Having desire, having a vision, even having agency, does not mean the agents will cooperate with the interactor in providing a sense of closure. Depending upon their programming, these agents may be allies or obstacles to fulfilling the interactor’s desire; and, positions may switch, so allies become obstacles and vice versa. These interactions, driven to a large degree by the agents’ programming, take the place of the textual obstacles and detours of the more traditional narrative. The unfolding JIT narrative becomes, and is, the interactor’s world, so mastery of the narrative becomes mastery of that 142 world. Through agency, however, the interactor manipulates the world and the narrative as one. This choice of the manner of death, this choice of an ending, is not only a consequence of agency in the JIT narrative world, it is a responsibility of agency. Responsibility implies obligation, and I believe the interactor is obligated to choose some form of ending, even if this is through her own inaction or refusal to make a choice. I do not believe the interactor must like her choice. She may have to accept the inevitable.6 1 She may have to accept the fact that she cannot shape the world in her image. However the end comes, it is the responsibility of the interactor to define it. To sum up, the interactor is propelled through the narrative by a sense of desire which in JIT narratives is represented by a mastery of something requiring repeti tive behavior. Agents are propelled through the narrative through their program ming, which drives their goal-oriented behavior, the mechanism propelling them toward this goal being reducible to the same principles, at least, as the mechanism facing the interactor. The commonality is the prerequisite/action/event loop that insures a modified version of Newton’s Third Law holds true: every action has a reaction, and every reaction will eventually require a new action, whether that is on the part of an agent or the interactor. I believe a JIT narrative comes to an end, and the interactor leaves the loop, when the interactor can accept the current state of the world as quiescent. If no further changes in the world were to occur, the interactor would feel the current world either structured the world before it, represented a pleasurable state that does not motivate departure, or represents a state from which no further actions are possible. In forming this argument I am extending Freud’ s work in Beyond 61 She could die in the JIT narrative world, at least figuratively. 143 the Pleasure Principle (Freud 1961) to interactors in JIT narratives. 
Where Freud writes about the tension between the pleasure of repetitive acts and the need of the organism to move toward death and closure, the interactor does the same. The difference is that while organic death may be an objective condition, the metaphoric death of the interactor is a personal choice, as is everything else in the JIT narrative experience. The interactor may choose infinite pleasure, and be caught in the loop forever, or inaction, however motivated, which is another form of death in a dynamic narrative component. 144 C hapter 4 P ost hum an P layers It was the end of a frustrating experience. Lying in the street was a young boy, the victim of an auto accident between the car driven by his mother and an army humvee. The ROTC cadet playing the role of a platoon leader (lieutenant) in this training mission wanted to help the boy, but she couldn’t communicate her intentions to her platoon sergeant. Reaching a point of frustration she ordered the platoon to move out. One of the training observers asked, “W hat about the boy?” to which the cadet replied, “I’m not worrying about him. He’s only a cartoon.” Figure 4.1: The virtual environment of the MRE training mission. The sergeant (left foreground), the combat lifesaver (kneeling) and the mother (alongside her son) are intelligent agents. The hapless cadet’s adventure occurred a few blocks from the shoreline in Marina del Rey, California, a pricey stretch of Los Angeles real estate on the 145 edge of Santa Monica Bay. In a nondescript, six-story office building is a state-of- the-art VR theater, part of the University of Southern California’s (USC) Institute for Creative Technologies (ICT). At the request of the U.S. Army, ICT researchers are developing an interactive two-dimensional training system using agents that understand and speak natural English,1 that gesture appropriately (emphasizing spoken phrases and pointing toward objects and directions under discussion) and relate to the interactor as a new lieutenant on a peacekeeping mission in Eastern Europe. The view in Figure 4.1 is a snapshot of the virtual training environment in which the ROTC cadet was immersed. Surrounded by agents similar to those of JIT narratives,2 the cadet momentarily stepped out of her lieutenant identity and became herself, a position in which she recognized the agents and NPCs (non playing characters, in this case, non-agents) as cartoon-like graphics on a ten-meter wide curved screen.3. Although the agents’ identities, which were distilled from those of active mili tary personnel, earned high marks from the project’s military advisers the cadet’s inability to elicit situationally expected behavior from the sergeant agent (here after referred to simply as “the sergeant”) was enough to shuffle human and agent identities back to real life. Admittedly, the cadet’s rising frustration level wasn’ t helped by the agents still being in the research stage. Nevertheless, not only did the trainee revert to her real self, the agents reverted to their real identities, computer-created wallpaper. The narrative imploded when the interactor and the 1Natural English can be thought of as everyday English (American English in this context), the language people employ when they engage in ordinary conversation. 2The agents’ knowledge of how to perform tasks was limited to a handful of structured military doctrine procedures. 3The screen is curved to subtend an arc of one-hundred-fifty degrees of the trainee’s field of vision. 
agents could not communicate their intentions and objectives in an Enlightenment-like world where each character had its own sense of self: the interactor as a newly minted lieutenant in an Eastern European country; the agents whoever (or whatever) they were programmed to be. With the hardware and software in a research stage, the trainee and the sergeant could not compensate for this slippage in communication. Unfortunately, their roles in this scenario served as a form of shorthand, a way of reducing the amount of information necessary to pass between them to a level that could be handled by the available digital and intellectual horsepower. Try as the cadet might, she was not a graduate of the United States Military Academy or an ROTC program. Unfortunately for the sake of this narrative, such a lieutenant was what the sergeant's programming expected. The interactor and the sergeant could not negotiate a middle ground, a cadet rather than a graduate, a teacher and mentor rather than a sergeant. Agency, among other things described in Chapter Two, requires the ability to change mental states as well as physical ones in the story world. Absent the ability to negotiate and change the mental perceptions of the other characters, agency breaks down, as does any narrative depending upon agency.

The frustration illustrated by this scenario is the result of a communications breakdown. The Mission Rehearsal Exercise (MRE) Project required the agents and the interactor to exchange information about their intentions, beliefs and goals across an interface, a human-machine interface, that is not particularly adept at transmitting large, rapid bursts of data. This data is not solely spoken text (or typed text in some interactive venues). It is a combination of speech, prosody, gestures—subtle and otherwise—and actions that, taken as a whole, signify the next moment in the JIT narrative.

This signification is different from meaning in the same way shadows on a movie screen signify but do not impose meaning by themselves. Meaning, as my excursion into poststructural theory in Chapter Three describes, is a collaborative effort between the spectator's subjectivity and what Jean-Louis Baudry, one of cinema's seminal theorists, called a work (Baudry 1992). The work was a product that was deliberately cut off from "objective reality," what actually happened in front of the camera lens, by its construction and presentation. Instead, the totality of the experience for the spectator is assembled using various techniques of cinema preparation: continuity editing, which joins sequences occurring in different segments of time into a smooth sequence that appears to occur without any temporal seams; montage, in which scenes are edited together to give the illusion of simultaneity or temporal sequence, even when that is not the case; and following the gaze, in which the image of a character looks off screen, followed immediately by an image of what the character is purportedly looking at. These techniques, and others, combine to erase the existence of the profilmic4 in the finished work. Adding to the illusion is the mechanism of viewing the work.

4 Recall that profilmic denotes action that goes on in front of the camera while a film is being made.
The darkened theater, the immobility of the spectator, and the framing effect of the screen contribute to the seduction of the spectator into a world where shadows on the screen carry the same signifying ability, the ability to create meaning, as their real-world counterparts, even though their real-world objects are absent.

Signification in a JIT narrative proceeds along different lines because the work is somewhat different from that found in the cinema. The agents are not representations of people who were present at one time to perform the actions and behaviors the interactor encounters. There is, in Baudry's words, no objective reality. The agents are transformations of computer data that is being created in real time. If the interactor could read the data directly, there would be no need to create characters like the sergeant in the MRE scenario, let alone provide the agents with capabilities such as speaking, gesturing and acting. The illusion in JIT narratives, if that is the appropriate term, is that these agents are intelligent beings who are present in the same space as the interactor. Communication motivated by artificial intelligence is the tool that replaces cinema technique.

A fundamental principle of cinema's operation is the position of the spectator as a voyeur, as someone who is watching invisibly, someone safely apart from what is being observed. Thus, communication in the cinema is a one-way affair. Information flows from the screen to the spectator, but not vice versa. This does not mean that the cinema spectator sits passively in her chair while sparkles of data shower upon her. Cinema viewing has been written about in ways that indicate activity on the part of the spectator. Laura Mulvey, another of cinema's seminal theorists, has written extensively on the subject of suture, a process of close coupling between the subjectivity of the spectator and that of the film's protagonist. Her psychoanalytic reading of cinema spectatorship also posits that "the position of the spectators in the cinema is blatantly one of the repression of their exhibitionism and projection of the repressed desire on to the performer" (Mulvey 1992). In other words, the darkened theater and the immobility suppress the ability of the spectator to physically perform the narrative on the screen;5 instead, the spectator assigns personal feelings of mood and desire to the screen characters. Christian Metz, another seminal theorist, speaks of the spectator bestowing all the "schemata of intelligibility" on a character so that the character "benefits by analogical projection" from this projection of personality from the spectator to the screen (Metz 1992). At the same time, so that the "fiction can be established," the spectator has to (mis)take the character for his or her self.

5 Perform is used here in the sense of exhibiting the same, or different, behaviors as the characters on the screen.
Even if one agrees with the cognitive approach taken by professor and psychologist Richard Gerrig, that spectators do physically perform the narrative, either through internal physical changes, such as muscle tension, or through outward behavior such as screams or shouted warnings to screen characters6 (Gerrig 1993), my argument still stands: while cinema spectatorship is an active process, at either the psychic or physical level, information flows only from the screen to the spectator.7

6 In the work cited, Gerrig names such performances participatory responses or p-responses. He uses the term because it includes within it the notion that "these responses arise as a consequence of the readers' active participation."

7 This is hardly a complete explanation of cinema theory, which has a rich and varied history and an even richer and more varied present. Some critical theorists may even find my commentary reductive. My objective here is not to delve deeply into cinema theory, but to map out the significant differences between cinema and JIT narratives from a communications perspective.

Two-way communication is one of the fundamental requirements of JIT narratives, since the agents affect, and are affected by, the interactor.8 This information flows back and forth between agents and interactor in real time, driven by the operation of the event loop, and it is from this flow that the next increment of a JIT narrative is constructed. It is also from this flow that signifiers are constructed, both by agents and the interactor. Interpretation of the signifiers motivates, through actions and behaviors, the next cycle of signification. Communication across this human-machine interface is critical to the JIT narrative process.

8 Agents also affect other agents, but that does not send data across any human-machine interface.

In an ideal world, the human-machine interface would be perfectly transparent, that is, interactors would not have to adjust their behaviors to compensate for any data loss in any of the information flowing back and forth. Compensation in this case is anything from speaking slowly to using broad gestures to moving at a slower than normal speed to communicating using a restrictive subset of vocabulary and grammar. The absence of such compensation gives rise to the kind of deep splicing that Katherine Hayles, professor and author, writes about in her book, How We Became Posthuman,9 and that has been romanticized in cyberpunk literature.

9 Hayles, of course, is not the only scholar writing about the posthuman. I chose her work as my point of reference because of her background in the sciences (chemistry) and critical theory (English literature). Because of her fluency in combining the critical and scientific aspects of the posthuman I find in Hayles a kindred spirit whose work integrates well with mine.

This is the world of "jacking in" popularized in William Gibson's Neuromancer (Gibson 1984), the book popularly credited as the wellspring of the cyberpunk genre. Gibson's characters were so eager to inhabit cyberspace, his name for a data universe and a term that leaped into the vernacular, they had microchips implanted in their heads so they could connect their neural cells directly into an infinite space of pure data. The real world, the world a character's body inhabits, becomes a necessary inconvenience, a place to sustain the flesh (or meat in cyber-speak) while the mind is free for more pleasurable pursuits.

The posthuman, for Hayles, begins when the prevailing view, whether that be in literary works, science, banking or commerce, "privileges informational pattern over material instantiations" (Hayles 1999). Hayles argues that such a view erases the importance of embodiment because the binary of presence/absence, on which embodiment is based, is swept aside for one of pattern/randomness.
This pattern/randomness pair is the enabling difference of cyberspace, where all data either falls into some recognizable pattern or it doesn't. In cyberspace, the meat doesn't matter. Since pattern and randomness play an important role in the functioning of agents as communicators, I find her arguments for the historical roots of the posthuman worth exploring.

Hayles traces the roots of the posthuman back as far as 1948, when Claude Shannon published his now-famous paper that described information as context- and substance-independent (Shannon 1948). Shannon's revolutionary theory replaced the concept of information as a material element, something one could put one's finger on and say, "There it is," with the idea that information was the probability of observing some sort of pattern. Engineers could design communications channels (telephone circuits, radio signals) with no preconceived notions about what shape patterns took. All that mattered was whether the pattern was expected or not. Shannon's theory reduced information to a concept of probabilities, that is, the probability that a particular pattern would appear at a particular time. The pattern did not exist in isolation, because there was always the possibility that randomness might appear where pattern was expected. This randomness carried information, too.

The notion that randomness, a sort of informational confusion, could actually carry information was another counter-intuitive contribution on Shannon's part. His conclusion followed from his treatment of information in terms of probability. Hayles explains Shannon's theory in great detail in Chapter Three of her book, but for present purposes what is important is that if one is expecting a particular pattern of data with one-hundred percent certainty that the pattern will arrive, its arrival provides no new information. On the other hand, if the pattern doesn't arrive, this non-arrival does equate to new information.10 Pattern and randomness affect the performance of agents in JIT narratives by influencing how they make decisions and what those decisions are based on.

10 Yes, this may sound strange. Keep in mind, though, that Shannon does not attach any meaning to this information. That is up to the receiver. What Shannon is saying is that the unexpected data carries more information than the expected pattern. It is the job of the receiver to process that information into something meaningful.

One of Hayles's interpretations of the posthuman that I find fascinating is her description of the inhabitants of cyberspace who saw data as dimensionality, as objects. Hayles describes these constructs as data matrices, n-dimensional collections of information, that aesthetically represented the cyberspace world. What Hayles grasped, and what I find an exciting result of her work, is the idea that narratives come about through the temporal passage of these matrices. She has reached, through theoretical analysis, a destination also reached by computer science discourse for, as I discussed in Chapter Three, a chess game can be described as a data matrix by arranging the state variables appropriately.
The chess game becomes a collection of matrices, each matrix representing a different allowable board position, and the narrative of any game is, as Hayles puts it, the temporal traversing through the data.11 Moved into the realm of cyberspace, where patterns and randomness preside, narrative discourse and computer science discourse begin to dance closer and closer. It is the ideal dance. Indistinguishable humans and artificially intelligent agents meeting at the common point of data and information.

11 Referring back to Chapter Three, this temporal effect is also similar to the results of moving through lexia in a hypertext document or selecting objects in a database narrative.

If data and the body could become almost indistinguishable, if it were possible to consistently and universally achieve this posthuman condition, presence and absence—so problematical in the cyberspace world of a JIT narrative—would cease to be a problem. Communication between agents and the interactor, the transmission of goals, desires, intentions and beliefs that are signifiers in JIT narratives, would be expressed in a single form. Unfortunately, JIT narratives take place in a type of digital nether world. Hayles describes it by referencing the popular computer game Pong, and asking where the game is played. Her answer is that the play space was partly in the real world and partly in the computer: presence and absence meet pattern and randomness.

The two worlds create what engineers call a boundary problem, an effect occurring at the intersection of materials, circuitry, computer code and so forth, where incompatibilities collide.12 The intersection of data and the body is just such a boundary. This particular boundary poses a challenge to JIT narratives because they require as seamless an exchange of information as possible between the human interactors and computer-generated agents to promote the illusion that agents are real and present. As the MRE Project example at the beginning of this chapter illustrates, the illusion of a seamless transition from the real world to the computer-generated world is quite fragile, more so than that implied by Hayles's suggestion that with new virtual reality technologies a user learns that "the boundaries of self are defined less by the skin than by the feedback loops connecting body and simulation in a techno-bio-integrated circuit". While such loops exist on a small, local scale, such as artificial limbs capable of sensing nerve activity and converting it into an appropriate mechanical behavior, the techno-bio circuitry she describes is a useful philosophical and theoretical construct when applied to JIT narratives. Here the fundamental challenge is reconciling presence/absence with pattern/randomness. The JIT narrative boundary problem challenging the design of agents as communicators is that the real world is analog and the computer world is digital.

12 In general terms, one might think of a boundary problem as occurring at the interface of two different systems, where a system can be anything from computer circuits to molecular structures.
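Before crossing that boundary, Shannon's point about expectation can be restated in a few lines of arithmetic. The sketch below is mine, with invented probabilities; it shows only that the arrival of a pattern expected with certainty carries no information, while the improbable (randomness) carries a great deal, which is exactly what an agent deciding its next move must weigh.

    import math

    # Shannon's surprisal: the information, in bits, carried by observing an
    # event of a given probability. The probabilities below are made up.
    def surprisal_bits(probability):
        return -math.log2(probability)

    print(surprisal_bits(1.0))    # 0.0 bits: a pattern expected with certainty tells the receiver nothing new
    print(surprisal_bits(0.5))    # 1.0 bit
    print(surprisal_bits(0.01))   # ~6.6 bits: the unexpected carries the most information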
Negotiating the Analog-Digital Boundary

When agents "see" objects in a JIT narrative they actually recognize patterns in digital data.13 When humans see objects their brains interpret a continuous flow of analog data, which in the case of sight arrives as high-frequency electromagnetic waves (light). For an agent in a JIT narrative to see an object in the interactor's world, that object has to be transformed into some sort of digital representation, or pattern. This transformation process is called analog-to-digital conversion, abbreviated as A/D (and pronounced "ay to dee") conversion. The familiar term digitized is shorthand for A/D conversion. The reverse process, D/A ("dee to ay") conversion, takes digital information and re-presents it in analog form, as when the contents of computer memory are fed to a monitor and displayed as a visible image, that is, the digital data is converted into the analog form known as light.

13 This context-establishing sentence masks a deep AI problem. Humans can recognize a tree, for example, even though the variation between different trees can be enormous when considered in terms of height, color, texture, symmetry, diameter, virtually any parameter one cares to choose. A tree is a tree with or without its leaves. It is a tree even if it has been cut down and is lying on its side, apart from its roots and no longer in the ground. A computer with such an ability is, at present, unheard of, in part because psychologists and AI researchers do not fully understand the process in humans.

Although Manovich, in his glorification of the database (Manovich 2001), suggests it can model the world, in fact only certain real-world properties can be modeled this way. Only properties that can survive both A/D and D/A conversions make practical candidates for a database-structured world. Other properties either cannot be digitized (the sensation of pain) or cannot be practically reconstructed from digital data (the brightness of the sun at noon on a sunny July 4th in New York City). For properties, mainly physical ones, that can undergo such translations, agents can engage them in the form of data matrices. The agents do not see individual matrix elements, however; agents view patterns. This skill was etched into the cinema spectator's consciousness by the film The Matrix (Wachowski Bros. 1999).14 Tank, one of the crew members aboard the rebel ship, the Nebuchadnezzar, became quite adept at reading the cascading symbol patterns of the Matrix and translating them, in his head, into the physical images humans trapped in the Matrix see.15 Agents find the patterns and match them against templates, using the closeness of the match as one tool in their decision making.

14 For those who wish to practice reading the Matrix on their own there are several versions of Matrix screensaver programs floating around on the Internet.

15 This is not a conceit for the movie. As a young computer programmer I often worked with what are called core dumps, numerical printouts of the entire contents of memory generated when one of my batch computer programs committed some heinous error. After a while, I no longer saw strings of digits but data structures. I could scan a core dump and see whole sections of my computer program, not as numbers, but as the characters I typed when entering the program on punch cards. Reading data patterns as reality is not all that unusual.
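The following toy illustration of A/D conversion is my own and assumes nothing about the MRE hardware: a continuous signal is sampled at fixed intervals and each sample is quantized to a small number of discrete levels, which is all that "digitized" means here.

    import math

    # Sample a continuous signal at a fixed rate and quantize each sample to a
    # small set of discrete levels (a toy analog-to-digital conversion).
    def digitize(signal, duration_s, sample_rate_hz, levels):
        samples = []
        step = 1.0 / sample_rate_hz
        for n in range(int(duration_s * sample_rate_hz)):
            value = signal(n * step)                                # "analog" value in [-1, 1]
            samples.append(round((value + 1) / 2 * (levels - 1)))   # map to 0 .. levels-1
        return samples

    # One second of a 3 Hz tone, 30 samples per second, 8 quantization levels.
    print(digitize(lambda t: math.sin(2 * math.pi * 3 * t), 1.0, 30, 8))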
For an interactor to become a part of a JIT narrative agent's world she must be translated into data, though not her whole being, since the state of the art has not caught up with the fictional science of the Transporter of the Star Trek television series (Paramount, 1966). This translation includes not just physical attributes and actions, but also the intangible qualities that are communicated (and interpreted) in the real world—mood, emotion, intention. Today, the technology for digitizing an interactor's physical appearance and actions is both sophisticated and established. Infrared cameras can capture subtle changes in facial expressions in a darkened VR theater, while CD-quality A/D converters can capture the full frequency range, timbre and prosody of speech. Even so, the translation occurs through what communication engineers would call a low-bandwidth, or low-fidelity, channel, so comparatively little of the subtleties that make up an interactor's communication can be faithfully moved between the real and JIT narrative worlds.

For an agent to become a part of an interactor's world the data that comprises the agent must be translated into analog form, only this translation has to reach beyond the mere physical characteristics of the agent's representation. One of the agent's goals is to signify, to create a mini-work (in Baudry's sense) for the interactor to engage and so to formulate meaning. This mini-work becomes the next moment, the next event, the next element in a JIT narrative. The low bandwidth of the interface means that every bit (no pun intended) of precious data sent across the interface in each trip through the event loop has to contribute in the most efficient way to the expression of that next moment.

Unfortunately, agents have two strikes against them. First of all, they are historically poor communicators. In addition, agents don't understand any of what they are communicating. This is diametrically opposed to interactors, who are comparatively rich communicators and who understand what they are trying to communicate.16 Agents, then, are inherently ill equipped to operate across the human-machine interface of JIT narratives, yet in order to generate any JIT narratives that is exactly what agents have to do. As it will turn out, the solution to the first problem is closely linked to the solution of the second, and both raise questions of the need for a poststructuralist critique of JIT narratives.

16 This does not mean that the interactor necessarily communicates what she intends to communicate, only that she is aware of the former even in the absence of an ability to execute the latter.

The Agent as Communicator

The relationship between plans and agents in AI research has historically been the same as that between a computer program and the computer. A computer is what might be called a domain-independent machine, which means the computer doesn't require any reconfiguration to run different programs solving different problems. A computer that solves a set of simultaneous equations can also alphabetize a list of names. All that matters is whether the computer is executing the desired program. Programs change, computers don't.

Plans are the programs agents follow. The agents themselves are typically domain-independent, too. Whether buying a carton of milk or moving troops around on a battlefield, whether directing a robot or animating the sergeant in the MRE Project, an agent is merely following orders: the orders, or steps, specified in the plan.
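As a sketch of what "following orders" amounts to, the plan below is simply an ordered list of steps the agent executes without question; the steps paraphrase the buy milk example from Chapter Three, and the code itself is mine, not drawn from any planner discussed here.

    # Plan-as-program: the plan is an ordered list of steps, and the agent
    # executes each one without re-evaluation, distraction or chit-chat.
    buy_milk_plan = [
        "walk to the store",
        "walk to the dairy case",
        "pick up a container of milk",
        "walk home",
    ]

    def run_plan(plan, execute_step):
        for step in plan:
            execute_step(step)

    run_plan(buy_milk_plan, execute_step=print)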
Plans are produced by planners, programs that find a plan, a series of steps, starting at an initial state of the world and ending at some desired state of the world. In the milk-buying problem of Chapter Three, the initial state is not having milk and the desired state is being at home with milk. The planner is constrained in how it gets from the initial to the desired state. It cannot, for example, suggest that a neighbor buy the milk because in the world of the problem there aren't any neighbors (at least none are specified). Instead, the planner finds a series of steps that are realizable in the real world, such as walking to the store, walking to the dairy case, picking up a container of milk and so forth.

Agents use plans to perform actions in the real world. The buy milk plan could be input into a robot, in which case the robot would go to the store, proceed to the dairy case, and select a container of milk, not only following the plan, but following it slavishly.17 Upon entering the store, the robot would proceed to the dairy case, because that is what the plan specifies. It would not, for example, be distracted by another robot also shopping at the store, so the two robots would not engage in some friendly chit-chat before finishing their shopping.

17 I have simplified this discussion significantly because I want to use it for illustrative purposes and not as a planning tutorial. If such a plan were controlling a robot, the steps in the plan would have to be far more specific than the ones I indicated here. The robot doesn't know where the store is, for example. Assuming the robot had some mechanism for measuring distance traveled and direction, the plan would have to specify details such as "travel one mile north, then one-quarter mile east" to direct the robot to the store.

This blind addiction to a set of instructions is not typical human behavior. It would be perfectly normal for one shopper, spying a second, attractive-looking shopper entering the spices aisle, to set aside her immediate plans to buy milk and head for the spices aisle, too. Humans make plans, but these plans are only one resource among many that contribute to a person's behavior at any given moment. Kismet, Rodney Brooks's robot described in Chapter Two, uses this multiple-source reasoning for choosing its actions. Any action selected at any given time is the result of an arbitration among multiple inputs, not the evaluation of a fixed formula.

In 1988, Philip E. Agre and David Chapman, two researchers in MIT's Artificial Intelligence Lab, circulated a memo (later revised in 1989) suggesting that agents could be more than their plans, they could be communicators.18 They described two views of plans in controlling activity: the plan-as-program, in which the plan played the central role, serving as the sole authority in matters of behavior; and the plan-as-communicator, in which the plan guided the agent while the agent continually re-evaluated the plan to make it relevant to the agent's particular situation (Agre and Chapman 1989). The plans-as-programs approach required agents to control their world. The plans-as-communicators approach required agents to live in their world and to flow with the events in it. Agents in a JIT narrative will be miserably intolerable if built with a plan-as-program philosophy.

18 I cannot guarantee that others did not also put forth this suggestion; however, while doing my research I found Agre and Chapman's work was consistently cited as a foundation of the agent-as-communicator principle.
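A small sketch of my own may make the contrast concrete: the same plan as before, but the agent checks its situation before each step and treats anything unexpected as a cue to reassess how the plan applies, rather than as a failure. The world model and the "surprise" are invented for illustration.

    # Plan-as-communicator: the plan guides the agent, but before each step the
    # agent re-evaluates the plan against whatever its world now contains.
    def run_plan_as_communication(plan, world):
        for step in plan:
            if world.get("surprise"):
                print("reassessing the plan in light of:", world["surprise"])
                world["surprise"] = None      # fold the surprise into the agent's view of its world
            print("doing:", step)

    world = {"surprise": "an attractive shopper heads for the spices aisle"}
    run_plan_as_communication(["walk to the store", "walk to the dairy case", "pick up milk"], world)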
W ith a plan-as-program, the agent sees its 181 cannot guarantee that others did not also put forth this suggestion; however, while doing my research I found Agre and Chapman’s work was consistently cited as a foundation of the agent-as-communicator principle. 159 world as if it were constructed from specific, fixed patterns. Any deviation from these expected patterns causes the agent to behaviorally implode. This was partly the case in the MRE scenario. The sergeant, not receiving a recognizable pattern (speech) from the trainee simply responded with non sequiturs or what appeared to be arbitrary or irrelevant behavior. W ith a plan-as-communicator philosophy, the agent interprets randomness—the lack of a recognizable pattern—as its signal to reassess its view of its world, possibly modifying the way it applied the plan to the current situation. Cast somewhat figuratively in narrative terms, the plan- as-program is the ultimate readerly text, containing a single interpretation to be recognized by the reader (agent). The plan-as-communicator allows the agent to create a writerly text, engaging the plan as written but interpreting it in light of what the agent knows about its world. Prom either analogy, what is clear is that the plan-as-communicator philosophy is more suited to handling the unpredictable behavior of an interactor in a JIT narrative. Before continuing, I want to emphasize that agents capable of performing in an arbitrary JIT narrative, something on a par with the Oz Project scenario of Chapter Two, are not yet realizable. Narrative worlds in any form of cyberspace storytelling must be severely restricted in terms of time, place and properties to be inhabitable by state-of-the-art agents, and I by no means want to imply that one can rush out to Wal-Mart and purchase programs with such agents in them. Research into these agents, however, is far enough along to raise questions about the way they can influence interactors in JIT narratives, and my reason for delving into this topic is partly because little narrative theory is being incorporated into research being done on agent development. The pitfalls along this path are the subject of the rest of this chapter. 160 B elievable A gents At cinema’ s heart is the illusion that action on the screen is happening in real time in front of the spectator’ s eyes. W hat is the corresponding illusion for JIT narratives? How must agents behave so that interactors judge them as being a part of the JIT narrative world? Along with Agre and Chapman, computer sci ence researchers, among them Bryan Loyall and Phoebe Sengers, suggest that the corresponding JIT narrative illusion is that agents are real, intelligent beings responding to the constantly changing JIT narrative world in ways that seem, to the interactor, to be (somewhat) understandably motivated.1 9 When an agent produces this illusion these researchers consider it socially situated in recognition of its ability to express itself believably in agent-human interactions, such as con versations and mutual problem solving. This may sound like a formidable task, and it is; however, it turns out that humans are willing to meet agents more than half way in fostering this social embeddedness illusion. According to media researcher and author Clifford Nass, interactions with com puters are governed by what he calls the media equation. 
Using a group of human test subjects, Nass and his colleagues administered multiple sets of the standard psychological experiments used to measure how humans respond to other humans under controlled conditions of behavior and proximity. One such experiment shows, for example, that most people like people who flatter them. Nass and a colleague, Byron Reeves, postulated that in all such experiments, replacing the second "human" in any assertion with "computer" (or its plural) will not change the results. Their findings, obtained through laboratory testing similar in form to experiments used when only humans are involved (Reeves and Nass 1996), indicate the media equation is both valid and an automatic human response, similar to the suspension of disbelief so often cited as an empirical requirement of cinema spectatorship. Just as Gerrig believes that suspension of disbelief is an a priori state for spectators watching a film, Nass asserts the media equation is an a priori state for humans in human-computer interaction.

The conclusions Nass and Reeves draw from their work include that "it doesn't take VR [virtual reality]" to create a social presence and that people feel comfortable thinking about media images as the characters they portray. Not only do human interactors accept computer characters for who or what they appear to be, a character "doesn't have to look anything like a real person" to elicit real social responses. Similar results are reported in (Marsella, Johnson, and LaBore 2003), which describes a research project "designed to teach coping skills to mothers of pediatric cancer patients" through the use of an interactive pedagogical drama. The mother (interactor) makes decisions on behalf of one of several characters (autonomous agents) in the story world, observing the results of her decisions in real time. The authors entered into field tests of their interactive system with a deep concern about the possibility of negative reactions to the two-dimensional cartoon-like characters with whom the interactors were sharing "deeply distressing problems." Despite the lack of visual realism, interactors assessed the story as "believable, interesting and convincing."

Nass and Reeves conducted one further test to rule out the possibility that test subjects accepted the human-ness of computers and computer characters because the humans knew there were other humans responsible for the computers and characters' behavior. What they found was that subjects did not consider the existence of computer programmers when formulating their responses. A computer that criticized the performance of a human performing a task was treated as if it were a rude or critical person, not as if someone made it that way through programming.

Putting Nass's work into perspective, what he does not deal with is the question of behavior over time, as would occur if the human and computer were engaged in an on-going narrative. It is one thing for a computer to act rude in the context of an experiment in which the response to rudeness is the quality being tested, and quite another thing when the computer has to hold up its part of a narrative in a way that an interactor can recognize and follow.
One role of agents in a JIT narrative is not just to exhibit recognizable character-ness, but to consistently exhibit it across a narrative whose elements are being composed in real time. This is the extension the concept of socially situated agents adds to the media equation. That the human interactor is predisposed to treat the agents as the characters they portray gives agent designers a leg up on the total process; but socially situated agents have to be believable agents and be believable over time.

What makes an agent believable in the context of a JIT narrative? Bryan Loyall suggested in his Ph.D. dissertation that believable agents could be constructed by borrowing the principles of characterization from what he called the traditional story arts of literature, theater, cinema and animation (Loyall 1997).20 For Loyall, the emphasis in designing believable agents falls on personalities. These agents don't just strive for a goal, they strive, as Loyall says, "in character," much the way a human actor or animated character does not display raw intelligence21 in favor of displaying an identity generated by the intelligence. It is not enough for the agents to appear aware or have internal drives, emotions and social relationships, they must display these qualities in a certain way. Loyall's contribution to the research path initiated by Agre and Chapman is that when plans are treated as communication, and when agents use these plans as one resource among many to choose their moment-to-moment behaviors, these choices must have a strong component that signifies to an observer or interactor that the agent has a certain internal world view. Perhaps the closest description to what Loyall is saying here is that there is an agent subjectivity that arises not from the way an agent thinks about itself and its world, for it doesn't think, but from the way it is programmed with the results of a subjective point of view. An agent that "sees" itself as meek will eventually cower in the face of a threat. It will not cower in one situation and become aggressive in another similar situation.

20 Loyall draws a distinction between cinema and animation, one that cinema theorists might argue isn't valid. Loyall doesn't explain the basis for his distinction. I suspect he sees a difference between an actor, whose movements are innate, and an animated character whose movements have to be externally constructed.

21 Raw intelligence is what the nerd in math class exhibited when answering every arcane question without realizing it was arcane.

Loyall's work flirts with the problematic of the human-machine interface I discussed earlier in this chapter, for when he speaks of characterizations and personalities he moves into a domain that includes the very subtleties of communication that have trouble squeezing through the narrow bandwidth linking interactors and JIT narrative agents. Somehow, the qualities that make up varied and believable behavior on the part of an agent must be compressed for transmission, the way a JPEG file is a compressed version of a digital image. AI researcher Phoebe Sengers offers just such a solution when she confronts the issue of the schizophrenic behavior of agents.22 It's a solution she grounds in both computer science and cultural theory discussions, and it is also a solution that foregrounds the inherent problems in considering agent behavior apart from the world the agent occupies.

22 She does not use the term in its medical sense. I will put it in context shortly.
Sengers's work concerns itself less with the fidelity of an agent's performance and appearance and more with what cues agents could provide to an interactor to direct the interactor's perceptions toward a particular result. From believable agents she borrows the idea that an "audience's perception of agents, rather than an agent's correctness per se" was most important (Sengers 1998). By 1998, the year Sengers published her dissertation, several researchers were working on the idea of agent behavior as motivated by social environments, but I refer to her work in particular because her contribution to the field lies in her combination of cultural and computer science theory. While there are gaps in both threads of her work, they are not as important as her conclusion that the architectures of AI agents are models of different views of subjectivity and that cultural studies, at least in theory, should be able to inform the computer science underpinnings of agent design.

Physical Identity

The ability of a JIT narrative agent, or any socially-situated agent, to communicate its internal world view across a human-machine interface is the role of what I call the agent's physical identity. Even though an agent has no physical substance, I consider its physical identity to begin with its visual or graphic representation of what is otherwise free-floating data stored somewhere in digital form. The physical identity houses, or is home to, what Sengers calls transitions. Transitions are physical behaviors that link an agent's internal state with its outward representation. Sengers repeatedly underscores the need to supply to the user or interactor the motivation behind an agent's actions. The need arises not only because this type of communication is common in ordinary interpersonal engagements, but because of the natural tension between a plan-as-program and a plan-as-communication. One might think of this as analogous to the differences in thinking between structuralist narrative theory and the poststructural theory that evolved from it.

The plan-as-program philosophy makes use solely of the structure of the plan. The plan is a set of ordered instructions that must be traversed in a particular way. Implicit in the plan is the assumption that the effect of a particular instruction (action) in the agent's world can be totally specified. In other words, the agent's world will always be as the plan expects it to be, so that the agent's interpretation of the plan can be assumed to be expressed in the order and nature of the plan steps.23 Under controlled conditions, achieving this situation is not a problem. When the agent is situated in an uncontrollable world, such as a JIT narrative environment, any assumptions of specificity are ill-founded.

23 Compare this description to that of narratives as offered by structuralists who argue that the order and nature of events in a narrative universally specify its meaning.

When a plan fails because the agent's environment differs from the environment encoded in the plan, the plan is thrown away and the planning software generates a new plan. After all, the old instructions don't work. They are as useful as yesterday's trash, and they receive much the same treatment. The new plan is constructed independently of the original one, so there is not necessarily any connection between the agent's previous action, the last action specified by the old plan, and the agent's next action, the first behavior specified by the new plan.
Technically, this behavioral rupture is not a problem because, in theory, the new plan will find a way to surmount whatever obstacle dictated the plan change and continue moving the agent toward its goal. To an observer watching the agent, however, valuable continuity is lost.

If a robot is programmed to follow the beam from a flashlight, and the light is suddenly extinguished, the robot could begin spinning in circles. From a planning aspect this is quite reasonable. The old plan directed the robot to move toward the light. When the light vanished the new plan directed the robot to find the light by looking in all directions until it regains contact. A human observer, however, might find this behavior somewhat unsettling. Sengers gives a name to this abrupt change in behavior at the boundary point of the two plans. She calls it schizophrenia (though she does not use the term in the medical sense), and she blames it for the interactor's inability to follow the motivation of the agent.

Here is where Sengers makes what I consider a significant contribution to the art of agent performance. She suggests that transitions be used to "help the user understand the reasons for behavioral change" (Sengers 1998) of the kind caused by changes in planning or changes in an agent's internal state. Rather than leaving the observer to concoct reasons for the robot's rotational behavior, Sengers would use transitions to at least suggest what might be happening. The transitions would signify something more closely allied to the observer's experiences in the real world. As an example, let the robot stop in its tracks when it loses the light beam and proceed slowly toward the last place it saw the light, all the while turning its "eyes" from side to side.24 Eventually, it may begin spinning in circles, but Sengers asserts the transitional action, moving slowly while scanning the environment, helps give the robot a chance of conveying the idea that it is confused and is looking for the light. The spinning is more likely to seem motivated, especially as a consequence of Nass and Reeves's findings that people are inclined to treat computers as if they were other people.

24 I'm calling the robot's light-sensing mechanism eyes regardless of its physical appearance.

With some liberty, I describe the inclusion of transitional behavior as a rejection of the structuralist tyranny of a plan-as-program. Rather than assuming the slavishly performed plan will always be meaningful to an observer, and meaningful in the same way, Sengers suggests that the robot's performance ought to be the sum of many behaviors, some planned and some reactions to the state of the robot's world, that taken together form a web of signification from which an observer will construct a narrative describing the robot's adventure.25

25 Such a narrative might be that the robot began moving toward a flashlight but became startled when the light went out. Carefully, it moved toward the last place the light came from. Alas, it had no luck. Now, it was frantic. It looked in all directions. Of course, there are other narratives that can spring from this same set of behaviors.
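To show how such a transition might be arranged in practice, the sketch below is my own reduction of the flashlight example to a handful of rules; the state names and the five-tick threshold are invented, and nothing here comes from Sengers's actual implementations.

    # Transitional behavior: when the light disappears, the robot does not jump
    # straight to a new plan (spinning); it first stops and scans the last known
    # position, a transition meant to read as "confused and searching."
    def robot_step(state, light_visible, ticks_searching):
        if light_visible:
            return "follow_light", 0
        if state == "follow_light":
            return "scan_last_position", 0
        if state == "scan_last_position" and ticks_searching < 5:
            return "scan_last_position", ticks_searching + 1
        return "spin_in_circles", ticks_searching

    state, ticks = "follow_light", 0
    for light in [True, True, False, False, False, False, False, False, False]:
        state, ticks = robot_step(state, light, ticks)
        print(state)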
Sengers offers a view of transitional behavior that promises at least a partial solution for the previously discussed need for data compression across the human-machine interface. Sengers suggests an agent's performance itself should be considered a narrative, but not in the way that term is used in this dissertation. Her choice of terminology makes sense in the context of her work, where she deals with the narrow field of the behavior of an agent, in isolation, over a relatively short period of time, say several minutes. In the context of my work, however, using the word narrative in Sengers's sense is tantamount to laying out a welcome mat at the doorway of confusion. While I will keep the spirit of her work in what follows, I will drop her use of the term narrative and replace it with the term story.

Sengers writes that what an interactor knows about an agent's life will come from the interactor thinking about "the connections between [agent] behaviors" so that the collective set of behaviors will form a story about the agent, a story apart from the JIT (or other) narrative in which the agent is engaged. The story Sengers references is one that explains or expresses the agent's history (what screenwriters call the back story of a character). The transitions that string together changes in the agent's reasoning or goals are themselves strung together to provide a diachronic picture of the agent. From such a view, the interactor constructs a back story about the agent, its wants and needs, but especially about its history, a causal chain of events, real or imaginary, the interactor attributes to the agent's past to explain or motivate the agent's actions in the present. Through this process, Sengers sees intentional behavior being made comprehensible, even if the comprehension is a construct of the interactor. (Loyall and Sengers offer examples of agents that can express their "lives" as stories. See the description of the Woggles in (Loyall 1997) and Luxo, Jr. in (Sengers 1998) for examples of how these stories work to compress the detail the agents have to deliver to an interactor while painting a picture of their internal motivations.)

What Agent Designers Can Learn From Poststructuralism

Before moving on to a discussion of another facet of an agent's identity, I want to look at Sengers's work, only this time from a critical perspective. The Oz Project, from which Sengers's work is, if not derived, heavily influenced, had as one of its objectives the implementation of intelligent, interactive agents that fulfilled the vision of their designers. Noble as this goal may be—and it certainly is no better or worse than other choices open to the researchers—it leaves these agents open to the same criticisms literary and cinema theorists find appropriate to structuralist theory. Sengers's work is full of references to agents communicating their intentions. She measures the success of such communication by whether the user gets the message, that is, does a certain set of behaviors and transitions communicate, for example, a given emotion the agent is experiencing. While this is a superficially acceptable metric, I find it unnervingly similar to the structuralist theory of narrative: a certain set of elements, when arranged in a certain fashion, will signify certain meanings for all readers or spectators. Replace some of the critical theory terms with some of Sengers's terminology and the principle reads that a certain set of behaviors, when linked by certain transitions, will signify a certain emotion or intention for all interactors.
What Sengers fails to address is the body of work surrounding poststructuralism's core principle that a reader or spectator's engagement with a work creates a unique text that is partially a product of the reader or spectator's beliefs and experiences. Her work suggests an all too heavy reliance on highly stereotypical behaviors, since these guarantee, at the very least, a high degree of recognition. Unfortunately, stereotypical characters detract from the richness of a JIT narrative experience the same way eating the same meal night after night detracts from the (potential) richness of the dining experience. Stereotyped characters lose their element of unpredictability and surprise.

Assuming a poststructuralist stance with respect to JIT narratives suggests if not a better solution then certainly a better interpretation. Putting aside situations in which an agent may not want to express its true internal emotional state, what an agent expresses is what it thinks (figuratively speaking) is the appropriate behavior linked to that emotion. This expression may be the result of programming, socially situated learning or, more realistically, a combination of the two. How the interactor interprets the behavior is not under the control of the agent. All the agent can, and should, do is express itself and then reappraise its environment based on the behavior of other agents and the interactor. It is through their behavior that the agent infers whether its communication was accepted as intended.

Designing a JIT narrative around the lessons taught by poststructuralist theory not only aligns the interpretation of JIT narratives26 with traditional narrative thinking,27 it also enriches the experience for the interactor. As in life, misunderstanding behavior and intention is part of the human experience; searching for ways to eliminate misunderstanding and misinterpretations means approaching life as a puzzle to be solved, a notion I find boring and hardly worth pursuing.

26 A more accurate phrasing might be the interpretations of behaviors in JIT narratives.

27 My position is that JIT narratives are descendants, or extensions, of the traditional narrative, and I recognize its contribution to the state of the JIT narrative art.

Experiential Identity

I define a JIT narrative agent's experiential identity as the memories and experiences, skills, social knowledge and personality traits programmed into it. Experiential identity is the genesis of transitional behaviors. While the behaviors express an agent's internal mental state,28 there are three broad categories of an agent's experiential identity—common sense, emotions and memories—that serve as inputs for selecting or synthesizing transitions, as well as affecting how an agent determines its needs, goals and actions. Thanks to the influence of its experiential identity, a JIT narrative agent need never be a passive fixture in a JIT narrative world.

28 I use mental here as an analogy, not as a fan to fuel the embers of any debate on whether computers can think.

An agent's experiential identity includes a non-specific form of memory that can be termed common sense. John McCarthy, who coined the term "artificial intelligence" and has, by definition, been in AI since its beginning, addressed some aspects of common sense in a 2003 paper summarizing the major branches of AI. One of the most important qualities, and one of the hardest to create, is general
knowledge about the world in which the agent exists, what McCarthy describes as "[w]hat a program knows about the world in general the facts of the specific situation in which it must act ..." (McCarthy 2003). This is a significant technical and cultural challenge. On the technical side, this information about the world has to be represented somehow, typically in a mathematical language. But the form of representation and its meaning or interpretation are linked. Not everything can be expressed in any one language, and some things can be expressed more efficiently in one language than in another. The choice of representation automatically affects what information about the world an agent has access to. Usually the choice of representation is based on the resources available. Deep Blue represents a chess board as a matrix because its hardware can make efficient use of that organization. The chess board could be described in human-friendly sentence form, such as, "Black knight one is on square a3," but manipulating that form of representation would drain processing power that could be spent choosing the next move; Deep Blue's hardware is simply not arranged to work with sentences.

Then there is the question of what is being represented. Specific objects in the agent's world are fairly easy to represent because they can usually be digitized in some fashion so that their important parameters, from size and shape to location and hardness or permeability, can be stored in a database. Mental states, such as emotion, are far more difficult to represent and track over time. Most difficult of all, however, is common sense knowledge and reasoning, the area in which, according to McCarthy, "AI is farthest from human-level." Even though there are computer programs capable of proving complex mathematical theorems, knowledge that children acquire at an early age—it's not a good idea to stick one's hand into an open flame, chocolate milk is more desirable to drink than cough syrup—has to be explicitly programmed into agents.

The Cyc system is an ambitious project for making everyday human knowledge available to agents as one factor in their inference processes.29 R. V. Guha and Douglas B. Lenat, Cyc's principals, explain that

By knowledge, we don't just mean dry, almanac-like or highly domain-specific facts. Rather, most of what we need to know to get by in the real world is prescientific (knowledge that is too commonsensical to be included in reference books; for example, animals live for a single solid interval of time, nothing can be in two places at once, animals don't like pain), dynamic (scripts and rules of thumb for solving problems) and metaknowledge (how to fill in gaps in the knowledge base, how to keep it organized, how to monitor and switch among problem-solving methods, and so on) (Guha and Lenat 1990).

29 Inference here has the same meaning as it does in real life. It refers to an agent's ability to derive conclusions not specifically programmed into it by using information it does have and applying that information to knowledge that is part of its programming.

Much of this commonsense knowledge Guha and Lenat write about is culturally dependent. The choice of what constitutes everyday knowledge, and therefore what goes into the database, is as important to agent performance as the choice of how the information is represented. Do agents need data on both shaking hands with new acquaintances and bowing toward them? Should agents think of cars as psychically loaded status symbols as well as means of transportation? Should agents believe that girls don't like electronic games? Even if what counts as commonsense knowledge can be agreed upon, there remains the challenge of finding a practical method for filling the Cyc database.
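A deliberately tiny illustration of what hand-programmed everyday knowledge might look like appears below. It is mine, not a description of Cyc's representation language, and the facts and rules are the children's examples from the previous paragraphs.

    # Commonsense knowledge as explicit facts plus simple rules the agent can
    # use for inference; real systems such as Cyc are vastly richer than this.
    facts = {
        ("flame", "is_hot"): True,
        ("chocolate_milk", "tastes_good"): True,
        ("cough_syrup", "tastes_good"): False,
    }

    def safe_to_touch(knowledge, thing):
        # Rule: anything known to be hot is not safe to touch.
        return not knowledge.get((thing, "is_hot"), False)

    def preferred_drink(knowledge, a, b):
        # Rule of thumb: prefer whichever of the two is known to taste good.
        return a if knowledge.get((a, "tastes_good")) else b

    print(safe_to_touch(facts, "flame"))                            # False
    print(preferred_drink(facts, "chocolate_milk", "cough_syrup"))  # chocolate_milk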
Until then, agents need to be programmed with these everyday scraps of information. Such a daunting programming chore is one reason why JIT narratives will, for the foreseeable future, play out in localized environments containing a minimal number of agent characters and props, like the small kitchen-sink dramas of Façade.

Agents and Emotions

Situations in which a person is engaged, be it in the real world or a JIT narrative, are never static. The chess board in Chapter Three changes with every move, and any move by one player can upset the other player's plans. Each move forces a player to reappraise the state of the board and to recompute what actions she needs to take to make the board favor a winning outcome. The human subjects in Nass and Reeves's media equation experiments reappraised their feelings toward a computer based on what they believed to be the computer's appraisal of them.

Such reassessments of the environment are not one-time occurrences. Each reappraisal affects the appraiser's future actions by altering her beliefs and emotions. The altered beliefs may cause the appraiser to assess the current environment differently, so it is fair to say that reappraisals promote further reappraisals. This sort of process continues constantly, and its results do not fall into the nice linear progressions computer programmers prefer. Yet this combination of planning (or cognitive thinking) colored by constant reappraisals of the world that are driven by changing emotions and beliefs is necessary to specify an agent's actions and the transitions between them in a JIT narrative. The combination not only takes advantage of the information that can be transmitted across the human-machine interface, but it helps provide a rich experience in which an agent's behavior appears motivated by multiple factors that must be assembled by the interactor into a set of assumptions and stories upon which the interactor acts. To create behavior motivated by this appraise/reappraise process, human designers must choose how an agent must be programmed both to assess the world around it and to modify that assessment in some computable way.

Advanced work in the area of appraisal/reappraisal involves the combination of planners and models of human emotion in such a way as to have the emotion part of the system affect the cognitive performance of the traditional planner.30 Jonathan Gratch from the ICT and Stacy Marsella from USC's Information Sciences Institute (ISI) have been working for several years on an agent driven by an emotional model. Their latest result is an emotional agent based on contemporary appraisal theories (Gratch and Marsella 2004). In this agent, appraisal characterizes "the relationship between a person and their physical and social environment," a quality Gratch and Marsella call the "person-environment relationship." A coping process repairs or maintains the relationship by "recruiting resources" to maintain it.31 The agent's behavior is a combination of cognition,32 emotion and coping responses.

30 Remember, plans and planners are not inherently bad. The danger lies in their misuse as the sole determinant of an agent's behavior.

31 Outwardly, coping may take the form of creating a plan to reverse the effects of an undesirable, but controllable, situation, or adopting an escapist or resigned attitude toward uncontrollable events. Inwardly, coping may take the form of discounting a potential threat or abandoning a cherished goal (Gratch and Marsella 2004).

32 What in the context of this dissertation might be called planning to achieve a goal.
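The sketch below is a drastically reduced, hypothetical appraise-and-cope cycle in the spirit of the models just described; it is not taken from Gratch and Marsella's system, and its goal weights, thresholds and labels are invented. It shows only how an appraisal of an event against an agent's goals might select between problem-focused and emotion-focused coping.

    # Appraise an event against the agent's goals, then choose a coping response.
    def appraise(event, goals):
        desirability = sum(goals.get(g, 0) * effect for g, effect in event["effects"].items())
        return {"desirability": desirability, "controllable": event["controllable"]}

    def cope(appraisal):
        if appraisal["desirability"] >= 0:
            return "continue the current plan"
        if appraisal["controllable"]:
            return "form a plan to reverse the event"       # problem-focused coping
        return "abandon or discount the threatened goal"    # emotion-focused coping

    goals = {"relieve_pain": 0.6, "avoid_hastening_death": 0.9}
    event = {"effects": {"relieve_pain": +1, "avoid_hastening_death": -1}, "controllable": True}
    appraisal = appraise(event, goals)
    print(appraisal, "->", cope(appraisal))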
The doctor is reluctant to administer the drug, for he fears it will hasten the patient’s death without providing the relief both the patient and mother (the interactor) expect. The mother, through her reaction to the doctor’s admonitions, cause him to re-evaluate both his belief in the efficacy of the morphine and the expectations of the mother and child. In one case, the mother’s insistence causes the doctor to administer the morphine while he (internally) shares responsibility for the decision with the mother. The example is too detailed to summarize it with the quality it deserves, and the interested reader is referred to Gratch and Marsella’s paper for the full description. 30Remember, plans and planners are not inherently bad. The danger lies in their misuse as the sole determinant of an agent’s behavior. 31 Outwardly, coping may take the form of creating a plan to reverse the effects of an undesirable, but controllable situation, or adopting an escapist or resigned attitude toward uncontrollable events. Inwardly, coping may take the form of discounting a potential threat or abandoning a cherished goal (Gratch and Marsella 2004). 32What in the context of this dissertation might be called planning to achieve a goal. 175 Not only have Gratch and Marsella developed a mathematical model of emotion for their agent, they have designed it to reason in a way that resembles human cognition. Humans rarely solve problems or make decisions by considering the consequences of a single course of action from start to finish and then backing up and starting the process over making different decisions until they exhaust all possible outcomes. Humans reason in what seems to be a parallel fashion, consid ering multiple courses of action and the way they affect each other simultaneously. Gratch and Marsella’s model simulates the latter process, examining many differ ent actions all at once to find one that meets the agent’s immediate goals, wants and needs. The result is an agent that, programmed with the right data, can formulate a response to changes in its environment, which would then be passed along to that part of the agent that evaluates all available input to create the next moment’s behavior. This combination of an emotional agent and a socially situated agent addresses the bandwidth problem of the human-computer interface in a more specific way. The raw data that makes up the internal state of the agent is manipulated by the emotional model to produce a subsequent internal state that needs to be communicated to the interactor. The socially situated agent model then finds a way to use transitional behavior to send that state across the relatively low-bandwidth interface while preserving its information content. This is one way of implementing the desirable compression of data across the human- machine interface I wrote about earlier, while simultaneously factoring emotion into an agents intentions, goals and needs. 176 A gen ts and M em ories One of my favorite cyberspace anecdotes is the one related in Chapter Three about Julia the Bot. Julia was the sassy, flirtatious AI agent that was pursued by a real- life suitor in an online MUD. W hat fascinates me is why the interactor chased Julia—what was it about her/it that led him to believe he was conversing with another human being? There were no gestures, only text indications of feelings and actions. There was no prosody in the text, either. Word choices certainly played an important role, but I suggest there was another factor at work. 
If a visitor encountered Julia before, she remembered the meeting and parts of earlier conversations. She also could answer questions about who was currently in the MUD and in which room they were active. Julia's designers used memories to maintain a part of her back story, the illusion that she/it had a life prior to the present moment and that events in this past life could affect present actions.

For all its grammatical word play, ELIZA used memory tricks to further its illusion of an involved psychotherapist. ELIZA stored some of the previous discussion topics, and during the session ELIZA suddenly re-visited one with a lead-in such as "let's talk more about ..." While ELIZA may have used this trick to extricate itself from following a path its programming was not particularly adept at negotiating, to the interactor ELIZA appeared able to link present discussions with earlier ones, presumably as some part of the therapy.

Memories do more than convince an interactor an agent has been paying attention. Author and philosopher Daniel Dennett writes that one role played by memories is to guide an "organism's actions along the paths that avoid harm and capture good" (Dennett 1998).33 Viewed from Dennett's perspective, memories serve much the same purpose as the emotional model developed by Gratch and Marsella: they provide reference points for assessing the present in light of the past, what Dennett refers to as "mining the past for the future."

33 In his text, Dennett credits this phrasing to the poet Valéry.

There are two principal ways an agent can develop memories. One way is through learning, of which Julia and ELIZA are simple examples. Returning to the buy milk problem, suppose the agent has its choice of ways to reach the grocery store. Also assume that each mode of travel is associated with a value representing the time it takes to make the trip. Asked to create a plan for buying milk that takes the shortest time, the agent's planner will have to try all ways of traveling to the store before it can select the quickest one.34 If this agent were equipped to learn, it would remember the following: the alternative methods of travel, the fastest method of travel and the date it made the plan. In the future, the agent could solve this problem by immediately selecting the fastest form of travel if

• the desired travel method still exists in the agent's world;
• no new travel methods have been added (a new one might be faster);
• the last update of the travel times for each method is on or before the time the agent did its learning.

Broadly speaking, then, the agent has learned the fastest way to the grocery store and will continue using it until something in its world changes, causing it to reassess what its memory indicates as the best choice. Expressed more generally, the agent has learned to proceed from action a to action z without the need for explicitly considering actions b, c, ..., y.

34 I am assuming the agent's programming does not include any rules of thumb such as "look to see which travel time is the smallest," but that it plans by searching through all possible solutions to find the best one.
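A minimal sketch of this kind of learned shortcut appears below. The travel methods, their times and the cached record are all invented for illustration; the point is the three staleness checks from the list above, which decide whether the remembered answer can be reused or the full search must be repeated.

```python
import time

# Hypothetical travel options and the time (in minutes) each takes to reach
# the grocery store. In a real agent these would come from its world model.
TRAVEL_TIMES = {"walk": 30, "bicycle": 12, "bus": 18}
TIMES_LAST_UPDATED = time.time()

_learned = None  # what the agent "remembers" from its last planning episode

def plan_fastest_route():
    """The slow, planner-style search: compare every travel method."""
    return min(TRAVEL_TIMES, key=TRAVEL_TIMES.get)

def fastest_route():
    """Reuse the remembered answer when the three conditions above still hold."""
    global _learned
    if _learned is not None:
        method, alternatives, learned_at = _learned
        still_valid = (
            method in TRAVEL_TIMES                  # the method still exists
            and set(TRAVEL_TIMES) == alternatives   # no new methods were added
            and TIMES_LAST_UPDATED <= learned_at    # travel times unchanged since learning
        )
        if still_valid:
            return method
    method = plan_fastest_route()
    _learned = (method, set(TRAVEL_TIMES), time.time())
    return method

print(fastest_route())   # plans from scratch: "bicycle"
print(fastest_route())   # answered from memory, no search
```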
Neural networks, another type of learning system patterned on a model of brain behavior, remember past experiences by forming associations between specific situations and a set of behaviors suggested by those situations. A neural network is made up of three types of nodes. There are multiple input nodes, multiple output nodes and what are called hidden nodes (usually many more than the input and output nodes). The hidden nodes are arranged in layers, and all the nodes of one layer are connected to all the nodes of the following layer. All the output nodes are connected to all the hidden nodes of the layer before them. Each input node sends some signal from the world outside the network to the first layer of hidden nodes. Each output node sends some sort of signal to the outside world. In operation, some pattern of signals appears on the input nodes, and that causes some other pattern of signals to appear on the output nodes. If the neural network were replacing a thermostat controlling a furnace, the signals on the input nodes would represent the temperature of the room and the signals on the output nodes would be used to turn the furnace on or off.35 When the neural network is hooked up to the furnace and switched "on," however, the network will, in anthropomorphic terms, not have the foggiest idea of what to do. It has to learn.

35 This is an illustrative example. Controlling a furnace is a simple process, and there are less complex and less expensive methods than a neural network.

A neural network learns by being taught. In the case of the network serving as a thermostat, the teacher (either a human or another agent) would raise the temperature in the room. If the network switches off the furnace before the room is warmed to its desired temperature, the network is sent a signal that it shut off the furnace too soon. The room temperature is reduced to its starting value and the process repeats. Perhaps on the next try, the network allows the temperature in the room to rise above its maximum desired value. Then the teacher sends the network a signal that it turned the furnace off too late. The temperature is reduced to its starting value and the teacher begins another trial.

The teacher's objective is to indicate to the network when it performs correctly and when it doesn't. The neural network learns by strengthening and weakening the associations between its nodes. Each path between nodes has a value assigned to it that ranges from, say, zero (little association) to one (high association). Initially, the network may begin with some random set of associations. When the network is signaled it has performed properly, it typically strengthens the strong associations and weakens the others. When the network is signaled it has performed incorrectly, it weakens the strong associations and strengthens the weaker ones.36 Over time, the network establishes a pattern of associations among the input and output nodes that links a set of signals on the input nodes with a set of signals on the output nodes, or the room temperature and instructions to the furnace, and controls the temperature of the room according to what the network learned. With appropriate training, the network would learn to keep the temperature between two extremes by turning the furnace on and off. The network, and the agent it operates (the furnace on/off switch), learns how the owner of the house wants the temperature set, remembers it and adjusts it accordingly. One downside of neural networks, however, is that if the owner changes her mind and wants a different temperature range, the neural network has to be trained all over again.37

36 This is another simplification for the sake of clarity. The associations between nodes are produced by special algorithms, and what I have called associations are excitation values used to activate or deactivate a node.

37 Neural networks are best used in situations in which (a) there is a large volume of training data available and (b) the relationship between inputs and outputs is not known. Typical uses, then, might be credit scoring, stock market prediction and complicated control functions (like guiding a robot).
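In code, the teacher-driven trial loop just described might look something like the sketch below. To keep it readable, the many weighted connections of a real network are collapsed into a single adjustable shut-off threshold, and the target temperature, step size and decay are invented for the example; only the shape of the trial, signal and adjustment cycle is meant to carry over.

```python
TARGET = 21.0          # the temperature the (hypothetical) owner actually wants

class ThermostatLearner:
    def __init__(self):
        self.shutoff = 15.0   # initial guess at when to switch the furnace off
        self.step = 0.5       # how far to nudge the guess after each teacher signal

    def run_trial(self):
        """Heat the room from a cold start; report the temperature at shut-off."""
        temperature = 10.0
        while temperature < self.shutoff:   # furnace stays on below the threshold
            temperature += 0.1
        return temperature

    def teach(self, reached):
        """Teacher signal: only 'too soon' or 'too late', never the target itself."""
        if reached < TARGET:        # furnace shut off too soon
            self.shutoff += self.step
        elif reached > TARGET:      # furnace shut off too late
            self.shutoff -= self.step
        self.step *= 0.95           # settle down as the trials accumulate

learner = ThermostatLearner()
for _ in range(200):
    learner.teach(learner.run_trial())
print(round(learner.shutoff, 1))    # settles close to the desired 21.0 degrees
```

The sketch also illustrates the drawback noted above: if the owner decides she wants a different temperature, nothing in the learner carries over, and the trials must start again.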
There is another type of agent memory, what Alison Landsberg, writing in Cyberspace/Cyberbodies/Cyberpunk: Cultures of Technological Embodiment, calls prosthetic memory (Landsberg 1995): memories implanted in an agent by its designer (or designers). While agent memories created through learning can be kept at a cultural arm's length, Landsberg suggests the "unsettled boundaries between real and simulated [memories]" can disrupt the relationship between the body and its "subjective autonomy ...". In agents, prosthetic memories don't disrupt such a relationship; they are the relationship. The disruptive potential Landsberg refers to surfaces in two films that raise the issue of the connection between memory and identity. The films also raise the issue of the role of memories in autonomous agents.

Memory Play in Total Recall and The Stepford Wives

Doug Quaid (Arnold Schwarzenegger), the protagonist in Total Recall (Verhoeven, 1990), believes he's been married to his wife, Lori, for eight years. I say believes, because in the film's narrative world human memories are as much a commodity as toothpaste. One firm, Rekall, Incorporated, even specializes in implanting pleasant vacation or travel memories into the minds of paying customers. It's the company "where you can buy the memory of your ideal vacation, cheaper, safer, and better than the real thing." Lured by advertising, a bored and troubled Quaid visits Rekall and purchases a trip to Mars that includes, as part of the fantasy, a role for Quaid as a secret agent. There's a hiccup—a major hiccup—during the implantation procedure, raising the question of what Quaid has retained about who he is. The prosthetic memories have gained control of his identity. The fantasy he purchased has become real. He is the secret agent.

Quaid's identity crisis worsens when he comes into possession of a videodisc on which his own image addresses him directly. The image identifies itself as "Hauser," and goes on to tell Quaid, "You're not you. You're me." Apparently, Quaid is Hauser after Hauser was implanted with memories turning him into Quaid. The memories implanted in Quaid at Rekall gave Quaid Hauser's memories of his mission, the one Quaid is playing in real life. In raising the issue of which memories belong to the real Quaid, or even if there is a real Quaid, Schwarzenegger's character accepts who he is—Quaid—even though he knows his identity is a product of someone else's imagination. By the end of the film, Quaid has shot his "wife," who was an agent working against him, and finds himself physically on Mars and in the arms of Melina, a double of the woman Quaid requested as his companion on his vacation-that-wasn't. Is Quaid Hauser? It doesn't make a difference. Quaid, having no memory of ever being Hauser, has little choice but to accept himself as he remembers himself to be, the one walking through the world, even though he might be recognized as Hauser by those who knew him way back when.
In per forming his role as a secret agent, did Quaid know how to act the part because of the memory implants or because he was/is one? I suggest once the memories commingle the question is moot. Prosthetic memories represent a suturing process inside JIT narratives. The subject’s position is fixed, in part, by the act of per forming those memories. Quaid is Quaid in the story world because he performs Quaid. In the same way, agents are who/what they are because they perform the character their memories drive them to perform.3 8 Memories strongly influence their behaviors, both directly and through the reappraisal mechanism associated 38When I speak of an agent’s memories I encompass a range of relationships between the agent and objects and other agents. An agent’s memory might tell it that it likes ice cream, gets car sick and doesn’t agree with its mother they it should be married by now. Memories can also include knowledge of past events, as illustrated by Juila the Bot. 182 with emotions. While agents don’t have identity crises, like Quaid, like Quaid they fashion their intentions, beliefs and goals only from memories (information) accessible to them. The implanted memories that disrupt Quaid’s subjectivity become a primary source of the agent subjectivity Loyall writes about. As Quaid discovers, even in humans disruption is relative, for if the memories of Hauser are inaccessible what is there to disrupt, other than a dreamlike fantasy? The whole question of identity can turn on a single memory, as demonstrated by Joanna Eberhart (Katharine Ross) and the other women of Stepford known collec tively as The Stepford Wives (Forbes, 1975). The women of Stepford walk through the world with all the intellectual capacity of garden weeds. When Joanna meets Bobbie Marko (Paula Prentiss), an apparently normal, healthy woman, Joanna finds a soul mate who is also curious about the women’s strange behavior. When Bobbie returns as a typical Stepford wife after a weekend getaway with her hus band, Joanna at first assumes Bobbie was “neutralized” by some sort of biological agent.3 9 Looking to verify that Bobbie is no longer Bobbie, Joanna asks her what “archaic” means. When Bobbie, who earlier defined the word for another Stepford wife, doesn’t know, Joanna plunges a knife into Bobbie’s abdomen, convinced now that Bobbie is an animatron-like robot. The animatron doesn’t bleed, but instead gets trapped in an infinite computer loop. All of this information is not enough to save Joanna, who has already, under the pretense of a cultural study, given the Stepford men a complete narrative of her memories. Joanna’s life-less double exhibits a sense of self derived from Joanna’s appropriated memories. Yet, the plot turns on Bobbie’s faltering memory. It is the sole indicator that she is not who she appears to be. “If memory is the precondition for identity or individuality,” 39Earlier, she and Bobbie had the tap water chemically analyzed, looking for some sort of chemical that could be responsible for the change in the women. 183 writes Landsberg, “if what we claim as our memories defines who we are—then the idea of a prosthetic memory problematizes any concept of memory that posits it as essential, stable or organically grounded” (Landsberg 1995). For the moment, that is just fine with the men of Stepford. 
What Critical Theorists Need to Know About Agents and Ideology

What Total Recall shows is that memories are more than casual impressions to be called upon to change one's mood, in this case pleasure, because Rekall's clients chose the memories they wanted to relive. That is just the point. Memories are not impressions to be relived at a distance. Memories are a part of, and perhaps all of, the very fabric of people's lives. Quaid cannot divorce himself from who his memories say he is. In fact, he cannot even tell who he is. Nevertheless, the memories he has are the basis for his moment-to-moment improvisations in his life. On Earth he's just another married guy. On Mars, he is the secret agent. That is what enables him to kill his "wife." It is the secret agent who commits that deed, not the husband. What's more, the change is not willed. He does not decide to become the secret agent. There is every indication in the video recording from his former self that he is that agent. What keeps him from performing the role of the secret agent is the inaccessibility of those memories.

Where Total Recall may experiment with the fluidity of memories, the men of The Stepford Wives have different goals driven by the same mechanism. They want to modify the behavior of their wives while maintaining parts of their personalities. In a way, the Stepford men practice a form of mind control, only they don't bother to control humans. They dispense with the body entirely and replace it with an android they can program to meet their needs. In other words, the men take what is important to them from their wives' memories and selectively place them in a mechanical body. To the men, these androids are their wives, the way the men always envisioned them to be. They are fictive representations of the designers of JIT narrative agents, who get to pick and choose the memories they want their agents to contain.

The deeper issue for me is the question of who gets to decide which memories are included and which ones are not. Clearly, the men of Stepford felt that was their choice; but Stepford is both fictional and dramatic. In the real world the issue persists with agent design. Who decides, and how that decision is reached, is an artistic choice and could be critiqued as such, using the same techniques used on literary and cinematic texts. That is, the agent could be viewed as a text. Only, what is there to view? An agent designed with a certain set of memories could, for instance, be seen as a product of a white, patriarchal society in terms of its performance in a JIT narrative, where attitudes toward women and men come into play. (Imagine such an agent as a boss interviewing two interactors, one male and one female, who are job candidates.) Only an agent's behavior is fraught with complexity. The role of memories, while important, is not the sole influence on behavior. What's more, agents can be programmed to learn, so that over time the original set of memories becomes inaccessible because it has been overlaid not necessarily with new memories, but with new behaviors whose origins may have been lost (intentionally or not). One could ask whether it makes sense to say that an agent is anti-feminist. The question is important in light of the role agents play in creating the dynamic component of JIT narratives.
Agents are arguably the most dominant “authors” of the narrative dynamics, so how a narrative, or a series of narratives, unfolds becomes a product of memories, 185 emotions, common sense and learned knowledge. This hearkens back to a question I asked about database narratives in Chapter Three: who gets to decide what goes in the database, what objects link to what others and what new media elements the objects are assembled from? Like reappraisals, it seems the “who” questions go on indefinitely; and these questions lack firm answers. So it is with the question of who decides what behaviors an agent is biased toward and away from. What complicates JIT narrative analysis is that there is no underlying work to probe. Instead, there is a machine, a real-time dynamic narrative generator made up of seemingly impenetrable hardware and software. The text read by this machine is made up of representations—of memories, of emotions, of behaviors, of goals—that combine in a complex way in the presence of an interactor. Can an agent be anti-feminist? I would answer yes in the same way I answer yes to the question of whether a computer can be intelligent. An agent can display behavior that, if displayed by a human, would signify the human as anti-feminist. Can I specify how the agent came to be that way? Only partially. Agents are a product of people and people learn the structure of their culture in various ways, not the least of which is from the media. Mass media, writes Landsberg, is one of the largest suppliers of prosthetic memories in a commodity- driven culture: Because the mass media fundamentally alter our notion of what counts as experience, they might be a privileged arena for the production and circulation of prosthetic memories. The cinema, in particular, as an institution which makes available images for mass consumption, has long been aware of its ability to generate experiences and to install memories of them—memories which might then become experiences that film consumers both possess and feel possessed by (Landsberg 1995). 186 People’ s memories are altered by their identification with characters in film and lit erature. Mass media teaches people how to perform in culturally-acceptable ways, ways that are imprinted on agents through the design process. An agent’s pros thetic memory, then, becomes a combination of lived experiences of its designer(s) and culturally-formed impressions of behavior extracted from the media. At one level, the agent as a text is a product of the inter- and extra-textual references of its designers. Somewhere along the agent’s production line it was filled with information that produces particular behaviors; but who is to say where on the line a particular behavior was injected? Will this anti-feminist agent always display such behavior? Not necessarily. It depends upon the agent’s dominance of the dynamic narrative generation, as well as whether displaying anti-feminist behavior enables it to achieve its goals. It may be that these questions cannot be answered in the domain of critical theory; or, it may be that someone more clever than I will find a way to connect the design or manufacture of agents with the bias they contribute to JIT narratives. I promised to raise more questions in this dissertation than I could answer, and this is one area that keeps me true to my word. 
All is not lost, however, because there are ways to examine potential biases in JIT narrative agents specifically because they are the product of a machine, and that is by examining the machine and the tools used to build it. While the choice of agent characteristics is an artistic decision, the results of these decisions tend toward the inaccessible. There is no question that any individual JIT narrative can be critiqued after the fact, that is, after the interactor has concluded the experience is over for her. An individual JIT narrative does not necessarily demonstrate the range or nature of narratives th at a particular collection of agents in a given environment with a given premise will generate. The situation is similar to that 187 of an experienced chess player. Watching her play against a novice will provide little insight into her abilities, while watching her play against a grandmaster may tend to produce an underestimation of her typical game play. Only by observing a range of games, or in this case JIT narratives, can one develop a feel for the style, biases, structures and ideological positions that can occur. In the case of JIT narratives, a fair question to ask is how many narratives are necessary to reach any conclusions? I don’t know, and I doubt there is a single answer. Some JIT narratives, simple ones with few characters, may yield to analysis after only a handful of executions. More complex scenarios may fail to yield all their richness after dozens of successive engagements. This is why I suggest that one way into analyzing the dynamic part of JIT narratives is through the choices made by those that design them, even if the influences on those choices seem unfathomable at present. The difficulty here is twofold: first, there are not enough JIT narratives to practice on, not enough on which to develop critical skills; and, the tools for designing characters still fall under the auspices of computer scientists, who have little impetus to make their work available to those outside the discipline. Both of these situations will change as AI technology becomes more mainstream in both electronic games and research institutions. This does not mean that all of JIT narratives are off limits to critical theorists. There is, in fact, a major influence on these narratives that is not only accessible right now, but belongs to an area of analysis that is already practiced when writing about literature, theater and the cinema. This area is the transmission of ideology through the tools used to create the narrative, or the way in which the mode or environment of production is reproduced in, and by, it. This analytical path requires no knowledge of how the tools work, only the forces that brought them into existence. 188 Right now, the tools used to implement AI technology axe primarily a product of military research. Even though the actual tools were created and implemented by individuals who vehemently insisted that they did so without any military influence, the hardware and software behind not only AI technology, but behind popular electronic games carries with it the stamp of military doctrine. This ideo logical pressure yields its secrets to historical probing, probing that begins with the relationship between the tools of the AI researcher and the military organizations that funded them. 
T he Stories W e Tell are th e Tools U sed to Tell T hem One of the Army’s objectives in establishing the ICT was the creation of a nexus of collaboration between the military, academia, and Hollywood—especially Hol lywood. At its zenith, military simulation technology represented some of the best immersive experiences in day-to-day use.4 0 By the mid-90s, electronic games, location-based entertainment (LBE)4 1 and special effects eclipsed the m ilitary’s work as the gold standard. The military could have hired theme park designers and special effects artists to create new training environments, but more immersive bang for the buck was only part of what was lacking. W hat the military discovered, writes Tim Lenoir a historian of the military influence on modern-day computing, was that simulator training was an effective tool for teaching rote skills but To train for real-world military engagements is to train not just on how to use the equipment, but on how to cope with the implementa tion of strategy in an environment with uncertainties, surprises, and participants with actual fears (Lenoir 2000). 40Universities and research labs were working on various VR projects, only these were still far from deployment. 41 LBE includes such immersive experiences as theme park attractions and IMAX theaters. 189 The military needed uncertainties and surprises to meet one of its new goals, teach ing critical thinking skills; but military simulations lacked stories with plots, twists and emotions. The entertainment industry, with its movies, theme park rides, and games had already mastered the art of packaging storytelling in computer-ready form. The objective of the ICT was clear: create leading-edge immersive and game environments and use AI agents to involve interactors as participants in a Hollywood movie-like world. The ICT is not all about technology: it’s also about technology platforms for Hollywood storytellers.4 2 One of the purposes of the MRE Project was to teach inter-cultural skills to new lieutenant-interactors by allowing them reasonable exploration of unfamiliar cultures. Such open-ended training conflicts with the event orientation of military culture: for every task there are rules and procedures that lead to a successful outcome. For its part, the MRE required the interactor to pass through a set of learning points, events deemed necessary to meet training objectives. Demanding such specificity runs counter to the philosophy of JIT narratives because such demands restrict agency rather than promote it.4 3 The resolution of this dichotomy was almost unconsciously introduced into the MRE through experiences and beliefs forged much earlier and transm itted through the software used to construct the MRE environment. 42I use the word storytellers to incorporate writers, directors, producers, cinematographers, special effects artists and the rest of the trades necessary to package a story for the screen. 43There is a major pedagogical question here of whether agency and training are compatible; does providing the interactor with freedom in the story world allow trainers to control what the interactor learns? I will not address that question here because the discussion centers around issues of training and pedagogy as well as JIT narratives. I will say that this topic was spiritedly debated at the ICT and is still a topic of discussion with many of the ICT’s projects. 
190 A P lan -B ased A gent in a S torytellin g W orld Many of the researchers working on the initial phases of the MRE were veterans of an earlier AI project for training engine room crews in the steps necessary for starting a ship’s diesel engines. Step-by-step training of this sort can be handled by an agent with a few general properties: • the ability to recognize what step the interactor has performed; • the ability to recognize whether this step is the proper next one; • the ability to detect an incorrect ordering and engage the interactor in a dialogue leading to the execution of the proper step; • the ability to answer questions from the interactor regarding the proper next step. This agent design is based on the plan-as-program philosophy discussed in Chapter Four, and is appropriate for teaching step-by-step tasks.4 4 Add an agent-readable list of the tasks and their proper order, and the agent is pedagogically equipped for teaching. Learning inter-cultural skills, on the other hand, is an experiential process. Knowledge develops through practice over time. While the objectives of the MRE were suitable for a series of well-crafted story experiences, the researchers tapped to build the MRE were not storytellers but computer scientists familiar with a particular AI agent called STEVE. STEVE (pictured in Figure 4.2), the Soar4 5 Training Expert for Virtual Envi ronments, has the features I just described. Given the development time estab lished for the MRE’s initial version, the already operable STEVE was pressed into 44 The “Buy Milk” problem in Chapter Three is just such a task. 45Soar is a programming environment for creating intelligent agents. 191 Figure 4.2: STEVE demonstrating how to start a diesel engine. STEVE is repre sented in the virtual world by a floating torso, allowing the agent to gesture and gaze at items in the world. service as the intelligence behind the sergeant, medic and mother agents pictured in Figure 4.1. STEVE, however, was the product of military ideology—learning by rote—and even though the MRE employed the services of a veteran television drama writer, STEVE’s belief system prevailed. The MRE script had to be made STEVE-friendly, so it gradually morphed from a fairly open drama into a col lection of ordered tasks, and the interactor traversed a path privileging learning points over inter-cultural skills. STEVE is a contemporary example of the effect of ideology on the cyber-stories one is able to tell. Putting STEVE in charge of JIT narrative development means stories are constrained to a step-by-step, point-by-point orientation, the antithesis of an improvised, experiential environment. Other projects at the ICT fall victim to this same ideology, the privileging of procedure over process. Fallout from the ICT projects will include tools that others will use to build entertainment 192 environments and to create VR effects, and these software tools will always favor the stepwise culture of the military. Long before the ICT, tools acquired cultural baggage from another close cou pling between academia and the military. Fueled by government money, and pow ered by a colorful collection of nerds and geeks, computer programs took on new significance as objects of study and objects d ’ art. Through this group’s efforts, ethics and fascination for performing stunts of derring-do with computer software, tools arrived on the scene with their ideological payloads. The time was the early 1960s, and the group of nerds and geeks were know collectively as hackers. 
T he Hacker E thic If one were a youth living on the edge of the counter-culture revolution of the 60s, there were only two ways of getting your hands on a computer. You could join the programming department of a large corporation, and at best you would have access to a timesharing terminal. If you were fortunate, you could find work on a university research project that was fortunate to have a dedicated computer avail able on a first-come-first-served basis. Then you had the machine all to yourself, one hour at a time. If computers lit a fire in your belly, the former method was unsatisfying, for you never got your hands on the real computer; the machine was still controlled by someone else. The latter method was hacker heaven, only it usually meant enrolling at the university whose computer you planned to use (and then being fortunate enough to land a research position on the project that owned the computer). If one were a true hacker, according to Steven Levy, journalist turned hacker culture historian, there was no choice: many young people in the late 1960s saw computers as something evil, part of a technological conspiracy where the rich and powerful used 193 the computer’s might against the poor and powerless. This attitude was not limited to students protesting, among other things, the now exploding Vietnam war (a conflict fought in part by American com puters). The machines which stood at the soul of hackerism were also loathed by millions of common, patriotic citizens who saw computers as a dehumanizing factor in society (Levy 2001). Once upon a time the word “hacker” meant an artist, usually (but not neces sarily) a computer programmer who was passionate about writing elegant soft ware that performed some function that just had to be done. These hackers built Unix and Linux. They built the Internet. They made the World Wide W eb... well... worldwide. For virtually no financial reward, hackers promulgated computer graphics, e-mail and the Hacker Ethic: • Access to computers—and anything which might teach you something about the way the world works—should be unlimited and total. Always yield to the Hands-On Imperative: • all information should be free; • mistrust authority—promote decentralization; • hackers should be judged by their hacking, not bogus criteria such as degree, age, race, or position; • you can create art and beauty on a computer; • computers can change your life for the better (Levy 2001). “The precepts of this revolutionary Hacker Ethic,” observes Levy, “were not so much debated and discussed as silently agreed upon. No manifestos were issued. No missionaries tried to gather converts. The computer did the converting ” This “immaculate diffusion” proved to be the glue holding the hacker culture 194 together. It also made the culture difficult to oppose because it existed everywhere and nowhere. Like the Internet it eventually inspired, the Hacker Culture had no single point of authority, no critical link that, once cut, would throw the hacker community into chaos. Hackers were not in the mainstream of the 60’s cultural upheavals, but they were anti-authoritarian and some of the movement’s anti-establishment ideology inevitably rubbed off on them. 
Contrary to the view that computers were that era's weapons of mass destruction, Pekka Himanen, author of The Hacker Ethic, described the hacker view as "the use of machines for the optimization and flexibility of time should lead to a life for human beings that is less machinelike—less optimized and routine" (Himanen 2001). Hackers saw the humanistic side of computers, but de-mystifying them, discovering what made the ones and zeroes dance, meant easing your way in between the lights and transistors to view the world from the inside out. That view demanded a certain naiveté.46

46 It is here that we see the hacker ideology mobilize its own version of the origin myth. Where the origins of the software produced by hackers were tainted by its association with the military, the connection was erased while the Hacker Ethic was instantiated in its place.

Writing about this conflict between the hackers' sense of morality and the Department of Defense's (DOD) use of computer research, Douglas Thomas, author and former hacker, observes that "[w]hile the ethic belonged to the hackers, the product belonged to the Department of Defense." Hackers either distanced themselves from that conflict, or denied it altogether. It was, as Thomas writes, "the ultimate irony—work produced in the climate of absolute freedom would be deployed by the military in absolute secrecy" (Thomas 2002).

Of course, that was not the way hackers saw it. "The general [MIT] AI lab response to that charge" that all the lab's projects were paid for by the same Department of Defense that was killing Vietnamese and drafting American boys to die overseas "was that the Defense Department's Advanced Research Projects Agency (ARPA), which funded the lab, never asked anyone to come up with specific military applications," writes Levy (Levy 2001). Their rationale was that ARPA was run by computer scientists whose only objective was advancing pure research. True, through the efforts of J. C. R. Licklider, the first director of ARPA's Information Processing Techniques Office, the DOD took a mostly hands-off approach when it came to the research performed at its funded sites; and this laissez-faire attitude proved vital to hacking and to the software hackers produced, for it encouraged programmers to experience the pleasures of playing with computers the way children extracted pleasure from their games of make-believe.

Perhaps this is why the first known computer game, Spacewar, was a hack,47 with all the consequences playing games implies. "By the mid-sixties," writes critic (and video game veteran) J. C. Herz, "there was a copy of Spacewar on every research computer in America, as well as hundreds of personal variations on the source code and millions of dollars of lost-time cost to academia and the military-industrial complex" (Herz 1997). Herz views Spacewar with too critical an eye. The game was a pleasant diversion while fulfilling the role of a tutorial on software development for the budding field of computer graphics. To portray ARPA as a million-dollar sugar daddy to a cadre of post-adolescent social misfits was, at the least, an acute case of tunnel vision. Despite how hacker folklore recast the times, ARPA was not quietly writing checks to any institution with the word "university" or "technology" in its name. Licklider and his successors carefully evaluated proposals submitted by researchers. Licklider's talent was reading those proposals in the context of their potential rather than their risks.
Nevertheless, ARPA was still a research arm of the DOD, and regardless of how Licklider spent his employer's money, the money was spent on projects deemed useful in fighting the Cold War. ARPA's checks were not ideology-free, and the money it spent on the development of computer science carried the politics of the DOD along with it.

47 The origin legend, or myth, of Spacewar was related in Chapter Two.

ARPA was not the sole source of money for computer science research. There were interested funders outside the government as well, although they often had strong affiliations with it. These funders operated in the private sector, but recognized the potential of tossing seed money into the research and development bucket ARPA established, with the hopes of negotiating lucrative production contracts down the line. Ivan Sutherland's work on interactive computer graphics and virtual reality helmets exemplifies the widespread funding support available at the time. Funding for his project, according to Lenoir, came from a mix of public and private sector sources. "The CIA provided $80,000," he writes, "and funds were also supplied by ARPA, the Office of Naval Research, and Bell Labs." The Office of Naval Research was the research funding unit of the Navy, while Bell Labs was the research and development unit of the then-unified communications monopoly, AT&T, a private sector company with a great interest in emerging computer technology. Lenoir also records that "Bell Helicopter provided equipment. The Air Force gave a PDP-1 computer, while MIT Lincoln Labs, also under an ARPA contract, provided an ultrasonic head-position acoustic sensor" (Lenoir 2000). Bell Helicopter operated in the private sector, but its major customer was the government. MIT Lincoln Labs was a research arm of MIT, much the same way that SRI was a research arm of Stanford University, and both profited from ARPA contracts. All non-government funders had a vested interest in supporting ARPA-directed research: early and continuous access to state-of-the-art advancements in computer technology.48

Sutherland, as was the convention at the time, remained focused on his work rather than its potential military and industrial applications. He moved from Harvard to the University of Utah in 1968, never envisioning VR as a gaming platform; rather, he saw it as an access point into "virtual worlds," abstract spaces such as the inside of a molecule, or "real" spaces such as the inside of the human body. His intentions notwithstanding, the head-mounted display, which allowed the wearer to see inside a computer-generated room, became synonymous with Virtual Reality (VR), a signifier that has acquired a career of its own. Meanwhile, ARPA continued funding graphics research at Utah, leading to algorithms that solved major computational problems encountered in three-dimensional representations. According to new media professor and critic Lev Manovich, ARPA and the Department of Defense were always interested in applying the results of graphics research to heighten the realism in military simulators. The development of photorealistic computer graphics, for example, was motivated by "its application for flight simulators and other training technology. And since simulators require synthetic landscapes, a lot of research went into techniques to render clouds, rugged terrain, trees, and aerial perspective" (Manovich 2001).
Work continued on improving both the realistic look of simulations and the human- computer interfaces, but after two decades of advances, progress tapered off. As 48In the 60s, any product developed with government money was automatically in the public domain. Universities and research labs could not privatize their results, nor could manufacturers gain a competitive edge by negotiating exclusive contracts with researchers. One way to stay a step ahead of the competition, then, was for manufacturers to invest in research projects of interest, that investment buying them the right to progress reports, briefings and access not available to non-funders. 198 Lenoir notes, the local problems had been solved; the remaining ones spanned a wide range of computer technologies. “By the mid-1980s, it was universally acknowledged that the creation of virtual worlds technology depended upon devel opments in several fields, including computer architectures, processors, operating systems, and languages”. Once again, DOD money, funneled through DARPA,4 9 was crucial in underwriting the necessary research. More than crucial, this fund ing enabled the development of hardware and software necessary to realize the gaming environment that traces its roots back to the early computer games of the 1980s; but the funding was not (and is not) ideologically neutral, and the needs and beliefs of the military left a deep impression on the organizational structure of games, and the games left their imprint on their players. T he M ilitary Influence Thanks to DARPA’s funding of critical areas needed for military simulation, the electronic game industry has access to graphics algorithms, high-speed graphics hardware (graphics cards), RISC5 0 microprocessors, photorealistic graphics, three- dimensional modeling and real-time rendering. These technologies, in turn, drove down the cost of incorporating VR-quality features into low-cost computers and home game consoles. RISC technology was also at the heart of the high-speed graphics workstations first designed and built by Silicon Graphics, Inc. (SGI). Much of the development work done by SGI founder Jim Clark took place in 49DARPA was the new name for ARPA, with the “D” denoting “Defense,” and it is the name the agency is known by today. 50RISC stands for Reduced Instruction Set Computer. Microprocessors with RISC architec tures axe often faster than non-RISC chips of the same clock speed (i.e., 300MHz or 1.2GHz). The architecture has fewer instructions than non-RISC ones making it easier to program and espe cially making it easier to optimize the time it takes for a program to run. Much special-purpose graphics hardware employs RISC microprocessors. 199 government-sponsored programs while he was a student at Stanford University.5 1 Advances such as these were dealt a severe blow, however, with the end of the Cold War. By the mid-1990s, military spending encountered extreme culture shock. Accustomed to underwriting the development of mission-specific equipment, what had been standard operating procedure for the lifetimes of military officers and scientists ran headlong into the Federal Acquisitions Streamlining Act of 1994. By an act of Congress and the stroke of the President’ s pen—and much to the cha grin of the DOD—the Department of Defense was mandated to acquire commercial, off-the-shelf products whenever possible. The development of unique systems was given lowest priority in the procurement process. 
In response, the DOD established a requirement that all new systems were to be modeled, simulated, and virtually tested in realistic synthetic environments in the hopes of exercising control over the development of its simulation technology, the same technology trickling down to the game industry. Som e R am ifications o f M ilitary-Funded G am e Technology In his book, Homo Ludens, play scholar Johan Huizinga delimited the territory of play by mapping out some of its required structural features: • play is a voluntary activity; • play is not ordinary or real life; 51 SGI is now one of the leading suppliers of computer processing hardware to the entertainment industry. The company was popularized in the press when its equipment was used to create the dinosaurs in Steven Spielberg’s Jurassic Park. 200 • play is performed within certain limits of time and place; • play has rules that determine allowable actions in the play world; • play requires a ritual space in which it is performed; • play is labile. Ordinary life can reassert its rights at any time (Huizinga 1950). While one can see these fundamentals at work in electronic games, the simu lation technology inherited by game designers came predisposed toward the con struction of military simulated environments. One can see in adventure, fighting, simulations, and strategy game genres the expression of qualities like obedience and the adherence to doctrine the military so highly prizes: • Both simulations and games emphasize a reliance on action. Most games contain little if any plot. Events are triggered either by actions of the player or by the game’s programming. In most games, actions occur in spite of the player, who while experiencing the illusion of agency merely reacts to a pre-scripted sequence. The player is encouraged to develop his own script for achieving success. The equating of script to success parallels the idea of military doctrine, that there is a “script” for battlefield operations that also leads to success. In real life, however, there must be time for reflection and critical thought as well as action; • Both simulations and games privilege a state of sensory overload. Modern- day combat relies on streams of digital data as much as on conventional weapons. The foot soldier is less an “Army of One” than a cyborg, weighed down by sixty pounds or more of electronic instrumentation. Field comman ders and strategists are barraged by this relentless data flow, so much so that 201 they are in danger of succumbing to “the fog of war,” a condition in which decision-making is inhibited by overload from the data designed to foster it.5 2 In games, this “fog of war” translates into both visual and aural spectacle as well as a never-ending onslaught of enemies and obstacles. Where the military decision-maker is frustrated in his attem pts to strategize and com mand, the game player is frustrated in his attem pts to win. The danger is that data and images may become disconnected from what they are meant to signify. In the end, the commanders and the players manipulate free-floating signifiers that are no longer associated with moral consequences; • Both simulations and games condition the user to sense-act-think. Simula tions are used to teach complex tasks that must be performed (nearly) flaw lessly under conditions of extreme stress. 
While an obvious feature of the fighting game, this ideological stance of unquestioning reliance on rote exe cution also manifests itself in other genres, where the game play discourages the player from questioning core assumptions. This leads to what professor and psychologist Sherry Turkle calls “a willingness to abdicate authority to the simulation as well a s.. .to accept its opacity” (Turkle 1997), a condition that masks the need for applying critical thinking to assess situations. Yet critical thinking is a part of the checks and balances system that prevents the rise of group-think in a society, the very type of thinking that sense-act-think tends to engender. Simulated training and mission rehearsals were not designed to be recreational sports, but activities meant to be confined to a certain type of play space: bounded 52 The term “fog of war” was not birthed by the overflow of data generated by current technol ogy. The term is used today, however, by military officers adjusting to the shift from boots on the ground observations to those that are part of the electronic battlefield. 202 locales within which the military could impart knowledge to its members. Elec tronic games are designed with their own play spaces, too, anything from the small footprint of a laptop computer to an LBE attraction requiring its own building. Military simulations are monitored by training officers who have the responsibil ity of keeping their play spaces intact, and of cutting off a simulation if it spills over into real life.5 3 Electronic games usually do not have such monitors. If, as Huizinga writes, play requires its own space in which certain meta-rules apply, among them the finiteness of the limits of that space and the ability of ordinary life to re-enter the space at any time, what are the effects of violating some or all of these constraints in the context of electronic game play? University of Calgary lecturer Randy Schroeder suggests that games and other forms of immersive media collapse the distinction between “different kinds of space,” in this case the real and the hyperreal. “The boundaries of the play space,” he writes, “begin to disappear” (Schroeder 1996). If the real world were to seep into the play world then, following Huizinga, play would have to disappear. There is certainly no empirical evidence for this, at least not in the game industry. Then if Schroeder is correct, the immersive play world of gaming is most likely inter-penetrating the real world, imposing upon it an organizing structure derived from the qualities outlined earlier. Most disturbing to me is the bias away from critical thinking suggested by Turkle when she refers to the player’s willingness to yield authority to a computer program. As cultural critic Terri Toles puts it, “what appears to be aggression during video game play is actually obedience.”(Toles 1985). Perhaps within the play space that makes perfect sense; but outside the play space such lack of need to question authority sounds dangerous. 53 As happened in the MRE scenario described at the beginning of Chapter Four. 203 Inside the play space, actions have consequences determined by rules. When the world of the game leaks out into non-play spaces it carries with it the consequences of simulated worlds: nothing happens that cannot be corrected or ameliorated by saving, quitting, or choosing a different menu option. For death or frustration relief is only a mouse click away. 
Lack of consequences implies a lack of morality, for without consequences how do we measure ethical behavior? “Only the game counts,” observes Schroeder, “and the game can be started over and over again.” This is an example of the organizing power of technology apart from its content. Military ideology, turned beliefs into needs that were channeled into several specific areas of research, only to emerge from the end of a three-decade pipeline to be marketed by the entertainment culture in the late twentieth century. The process occurred not at the content level, but at the structural level, through a combination of taken-for-granted principles and immersive technologies that blurred the lines between the real life and virtual worlds and shared the qualities of play spaces. Whether the player identified with Super Mario, Lara Croft or the pilot of an F-15, he was still interpellated into an ideology that privileged order and obedience over independence and critical examination. Ironically, with its recognition that incoming recruits grew up with these very same electronic games, the Army is paying contractors to develop new games, some small enough to run on laptop computers and others on the scale of the MRE Project that reproduce this critical part of its culture the Army wants to instill in its soldiers. Thus, the Army is turning to research facilities like the ICT where it pays to re-acquire its own belief system inherent in these games. Even the Army’s next step, collaborations with the entertainment industry to create environments 204 for teaching critical thinking, are still infused with the idea of adherence to doctrine and strict obedience.5 4 These military-inspired tools will not maintain their dominant position in AI implementation forever. As electronic games begin to generate large profits for major software publishers, these corporations will become major funders in the field of game and AI research, through the sponsoring of major initiatives at research institutions and the endowing of academic chairs. Eventually, the influences of capitalism may rival or eclipse those of the military. W hat will happen then is a m atter of speculation. Nevertheless, based on my experience in the high- technology industry and my studies in critical and cultural theory, I am willing to make some well-grounded guesses. Most likely, the software engines that make agents go, if not the agents themselves, will become commodity products. That means that agents will be available off the shelf or one will be able to purchase an agent skeleton on which one puts behaviors and memories. In the name of standardization, the ability of agents to perform certain behaviors—and I would not rule out including the so-called deviant behaviors, such as homosexuality—will be curtailed or eliminated. This domination of the center by capitalism will be met by a response from the margins. The signs of this resistance are already visible in what is called the open source movement. A part of the hacker culture is a sense of shared ownership of software, that one is not only permitted but expected to circulate, change and critique programs written by others. As late as the 1980s, this mind set didn’t have a name at the time—it was just the way things were done—but gradually the practice formed 54This time, obedience takes the form of pedagogical learning “points,” experiences in an interactive story the trainee must encounter. 
Richard M. Stallman, co-founder of the Free Software Foundation and a leader in what became the open source movement, believes the idea of proprietary software, software the user is legally enjoined from sharing or changing, represents an antisocial system "that... is unethical, [and] that... is simply wrong." While acknowledging the shocking nature of this statement, Stallman argues that "software publishers have worked long and hard to convince people that there is only one way to look at the issue [of software ownership]" (Stallman 1999).

Stallman called the alternative he described free software, using free in the sense of freedom and not price. The name was problematic, though, because, as Eric Raymond, self-declared hacker revolutionary, writes, "the strong association of the term 'free software' with hostility to intellectual property rights, communism, and other ideas was hardly likely to endear it to an MIS[55] manager" (Raymond 2001). Although Stallman continues to use the free software name, the term is generally replaced by a less anti-commercial-sounding one: open software.

[55] Management Information Systems, also called IT (Information Technology).

The open software, or open source, model provides an alternative to proprietary software by circulating not just the computer-readable bits that make up a program, but the source code as well.[56] According to Stallman, a program is free software, for a particular user, if:

• you have the freedom to run the program, for any purpose;
• you have the freedom to modify the program to suit your needs ([t]o make this freedom effective in practice, you must have access to the source code, since making changes in a program without having the source code is exceedingly difficult);
• you have the freedom to redistribute copies, either gratis or for a fee;
• you have the freedom to distribute modified versions of the program, so that the community can benefit from your improvements (Stallman 1999).

[56] Source code represents not only instructions for the computer, but the ultimate definition of the program. Anyone trained in reading the particular programming language used in writing the software can learn how the program works and, with enough time and skill, modify the program's behavior.

Already, open source programs mirroring most of the functions of the Microsoft Office suite, Adobe Photoshop, Adobe Illustrator and many video editing applications circulate freely on the Internet.[57] Users take these programs and modify them to suit their own needs. A fundamental assumption is that if a modification is useful and desirable, the modified version of the program will become more popular than the original, and the original version will gradually drop out of circulation. The users of a program, and not corporations, decide what features the program should have.

[57] While the preponderance of open source programs are written for the Linux operating system, there are also many that work with Microsoft Windows and Apple OS X.

In an open source world, for example, agents could be freely modified. Marginalization of agent behavior would become increasingly difficult, as users who felt relegated to the fringes could band together, via the Internet, to create agents that entrenched agent producers choose to ignore. The design and distribution of agents may or may not evolve this way.
Nevertheless, the strategy of analyzing JIT narratives by following the ideological influences of their tools will, I believe, prove to be a valid and viable path for scholarship, one that will be accessible to theorists before the other avenues I have written about open up.

The dissertation ends here, but not the work that needs to be done. Much of what I have written about, especially the static/dynamic nature of JIT narratives, can be applied to interactive narratives as well. As I wrote earlier, one can, and should, question how the objects in database narratives got into the database, what restrictions were placed on their composition and how the links connecting them took the shape they did. Database narratives, like JIT narratives, don't yield their narrative secrets after one engagement, or two, or perhaps a hundred. In a real way, they are a good place to start asking the questions that will be asked of JIT narratives, too. Database narratives still encourage the interactor to follow the logic of the designer, and here is where database narratives and JIT narratives begin to diverge. JIT narratives are very much a product of the actions of the interactor as well as the agents, giving the interactor the means to resist, whether successfully or not, the hidden influences of a JIT narrative system. JIT narratives become harder to control, and more of a collaborative product than interactive narratives, because of the way the narrative's dynamic component is created. Investigation of these dynamics gives every indication of being an evolution that begins with the interactive narratives of hypertexts and databases and becomes a dominant factor as artificial intelligence becomes a dominant mechanism in cyber-storytelling.

From interactivity to agency, from narratives that unfold in time to ones that are navigated through data matrices, from linearization to the event loop, from narratives with static works to ones in constant flux, this dissertation has traversed rapidly changing narrative conditions in not very much time. But that is the nature of new media narratives, closely tied as they are to a technology that does not itself stand still. If what appears on these pages seems somewhat futuristic, just put the dissertation away for a while. It won't be long before it's out of date.

Conclusion

If there is any one aspect of this work I am most emphatic about, it is the importance of the difference between interactivity and agency, two terms easily conflated because it seems so reasonable that being able to interact with a computer results in a sense of control easily mistaken for agency. Were agency merely the ability to communicate with machines, people would all share agency with small cars and major appliances. Agency surpasses the exchange of information called interaction and refers to the ability to be influenced by objects and agents in a story world and to influence them, not only physically but emotionally as well. Those kinds of changes require the presence of artificially intelligent software agents, both because they can simulate and externalize emotional states and because they can undergo changes in those emotional states caused by external actions and events. At the same time, AI agents are uniquely equipped to appraise and reappraise their actions in light of constant changes in their environment and in their internal intentions, beliefs and goals.
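A toy sketch may make the idea of appraisal concrete. The Python fragment below is purely illustrative; the names are hypothetical and it stands in for no particular system discussed in this dissertation:

```python
# A toy appraisal step: hypothetical names, not any cited system's implementation.
from dataclasses import dataclass

@dataclass
class EmotionalAgent:
    goals: dict           # goal name -> how much the agent cares (0.0 to 1.0)
    mood: float = 0.0     # running emotional state, negative to positive

    def appraise(self, effects: dict) -> float:
        """Score an event by how it helps or harms the agent's goals,
        then fold that appraisal back into the agent's emotional state."""
        score = sum(self.goals.get(goal, 0.0) * delta
                    for goal, delta in effects.items())
        self.mood = 0.8 * self.mood + 0.2 * score
        return score

host = EmotionalAgent(goals={"keep the peace": 0.9, "be admired": 0.4})
host.appraise({"keep the peace": -1.0})   # an interactor's remark threatens a goal
print(round(host.mood, 2))                # the agent's outward behavior can now change
```

Even a sketch this small captures the two-way traffic the argument depends on: events in the story world change the agent, and the changed agent changes what the interactor experiences next.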
Should the interactivity/agency difference fail to be acknowledged, then I am afraid narrative theory will not properly confront the analysis of a cyberspace narrative's dynamic component. Dynamism, when linked to interactivity, will always be seen as a consequence of the interactor's ability to affect the narrative through choice. As I hope I have argued successfully, the more general dynamic component actually results from the interactor's ability to act. Actions are not arbitrary, however, for the interactor must be able to act in such a way that she can transcend the barrier observed by George Landow: that the hypertext (and other interactive) reader cannot change the text another author has written.

Another reason I find the difference between interactivity and agency so important is that it foregrounds the difference between a JIT narrative and a traditional narrative. As I discussed in Chapter Three, terminology that easily applies to JIT narratives, such as performance and active reading, has long been a staple of traditional narrative analysis. By separating JIT narratives into two parts, the static and the dynamic, I believe one can see how the terminology can reference different concepts at virtually the same time. Much of the confusion over what is an interactive narrative, as brought up by Murray and discussed in Chapter Two, stems, I believe, from attempts to apply the terminology and interpretation used for traditional narrative to dynamic narrative. The difficulty of finding an umbrella term for a single-meaning traditional/dynamic narrative combination demonstrates that such an approach does not work. Applying the right terms in the right contexts is a first step in opening up JIT narratives to more meaningful critiques.

A second concept I feel is extremely important in the study of JIT narratives is the idea of the loop as the narrative's generating force. The loop is a common construction in both narrative and computer science, so I am not surprised I found it to be a bridge between the two discourses in my study. Unlike Lev Manovich, whose work I referenced in Chapter Three, I do not see the loop as a narrative building block. Manovich's view fails to take agency into account. His loops are static objects interconnected in ways that allow an interactor to move among them in various ways without affording the interactor the ability to change the loops themselves. The loops in Manovich's work construct narratives, but they don't generate them.

The loop applied to JIT narratives is the metaphor for the driving force of the narrative's creation. The loop provides a mechanism through which the next moment of the narrative comes into being where nothing existed before. The loop provides a haven, though not necessarily a safe haven, for the interactor in her attempts to master the JIT narrative world and structure her experience in such a way that all the events that have come before play out to some sort of end game. I don't deny that such a position challenges the long-held (and well-written-about) belief that the purpose of an ending is exactly to make sense out of the beginnings and middles of narratives.
With due respect to scholars such as Peter Brooks and Frank Kermode, it just may turn out that such an observation belongs to the traditional narrative domain; the dynamic element may have its own set of rules. The interactor caught in the narrative loop may not be promised an all-framing ending every time. JIT narratives may produce more of the outcomes associated with avant-garde filmmaking, where endings resist traditional interpretations and clean solutions. I don't see this as problematic at all, since the two contexts, the static and the dynamic, allow for two different types of endings. Again, this is an advantage of recognizing the dual nature of JIT narratives.

In my introduction I promised to raise more questions with this dissertation than I answered. On the off chance I haven't accomplished that mission so far, here are a few unknowns that narrative theorists will have to deal with in the age of artificial intelligence. An important area, and one that I discussed at some length in Chapter Four, is the issue of understanding JIT narratives in the context of the human choices that enter into their creation. I'm speaking not of the actions of the interactor but of the choices of agent designers. When I introduced the notion of a JIT narrative in the introduction, I did so in the context of a theatrical improv in which a group of actors collaborate in producing a sketch based on characters, a setting, some props and a premise. A JIT narrative, as a computerized version of the improv sketch, still needs its props and premise, and devising these is a bit of an art in itself. Some settings and premises are too thin to support anything but the briefest of sketches; others are overspecified, leaving the actors little room to maneuver. The choice of settings and premises unquestionably affects the nature of individual JIT narratives. A chance meeting in a Starbucks store will have a different tone and texture than a chance meeting set in Valley Forge during the American Revolution. Nevertheless, the single most important factor affecting the political, social, cultural and ideological umbrella of a JIT narrative is the political, social, cultural and ideological makeup of the agents involved. Understanding the relationship between agent designers, agent modes of production and the makeup of agents means understanding what kind of political, social, cultural and ideological impact particular JIT narratives can have.

In Chapter Four I flirted with ways in which this relationship could be examined. At the same time, I admit the tools—what computer programmers would call the hooks—are not there to do so. Exactly how these hooks will come into being is still an open issue. What I can say, however, is that for this relationship to become accessible to critical theorists, the development of JIT narrative agents, in fact the entire field of cyber-storytelling, will need to be moved out of the exclusive domain of computer science and into some arena shared by computer scientists, artists and theorists. Until then, the pressure of the community of AI funders, mostly the military, will steer the development of agents and cyber-storytelling away from art and heavily toward science. With an influx of money from game publishers and Hollywood studios, the development of agent tools, and the agents themselves, may at last wriggle free from the grip of the technologists.

Another area that needs investigation, and one I did not touch upon in this dissertation, is the nature of interactorship.
I stayed clear of that topic in the body of this work because there are not enough JIT narratives, or narratives close to that form, available to study. Since I'm speculating about the future here, I feel comfortable offering at least some partial insights into the topic. In Chapter Three, I wrote about Mateas and Stern's interactive project, Façade. That's the narrative game in which the interactor plays the friend of a mid-thirty-ish couple whose marriage is undergoing a meltdown. Mateas and Stern define the couple fairly well. Besides their age, the couple is reasonably well off and seems to live in a fashionable part of town. One look at them and it's clear they are white and, from the furnishings in their house and their habits (they drink martinis), undoubtedly middle class. What does this say about the interactor? If the interactor is positioned as a friend of the couple, it might be reasonable to assume that she is socially and economically equal (or nearly so). How then does an interactor of color, or of a lower socioeconomic class, or of any significantly different metric one can think of, engage this couple in the manner Façade expects? The interactor's subjective position, that of the couple's friend, is another one of those human-machine interface compression techniques I discussed in Chapter Four. Little information has to flow across this interface to acquaint the interactor with the characters in the narrative, and vice versa. The downside is that Façade cannot reorganize itself when the pattern it expects, the mid-thirty-ish, middle-class interactor, gets replaced by randomness: a twenty-something student still furnishing her house with board-and-cinder-block bookcases.

For a JIT narrative, the nature of interactorship will be affected by the interactor's subjectivity, the premise of the narrative and the ability of the JIT narrative system to adapt to injections of randomness with respect to the role expressed by the interactor and the role suggested by the premise. It may very well be that interactorship, unlike spectatorship in the cinema, is not stable. While there are many points of view on cinema spectatorship, any one point of view references all (or most) of cinema. One doesn't have a different theory of spectatorship for every film, or even for every film genre. That may not be the case with JIT narratives. Once again, the dynamic component of the narrative may dictate that interactorship varies along two axes: the distance between what the premise expects the interactor's role to be and the ability of the JIT narrative system to adjust to the interactor's actual performance of her role.[58]

[58] Szilas, whom I referenced in Chapter Three, also observed the existence of this problem. His solution involved adjusting actions in the narrative world so that the interactor and the narrative's author synchronized their goals and events ultimately unfurled according to the wishes of the author. I trust it is clear by now that I don't share his point of view.

One final question, although certainly not the least important, is why anyone would want JIT narratives in the first place. My interest began when I asked myself whether characters in some form of interactive story could be as rich and compelling as characters in novels, theater and cinema, a more or less "academic" pursuit.[59]

[59] By which I mean I did not set out on this investigation with the goal of uncovering something of commercial value.

Having lived with the topic for these past years, and having been involved with experiments such as the MRE Project, I look at today's crop of young gamers and wonder how their tastes in entertainment will change as they grow older.
What might be their post-first-person-shooter literature?[60]

[60] Before going further, I want to emphasize that I am not talking about an either/or choice. JIT narratives will hardly replace cinema, television or electronic games. Instead, they will become an adjunct form of entertainment, providing experiences not available in other media forms.

My expectation is that these older gamers will seek entertainment marrying the participatory nature of the games they grew up with to richer stories and experiences. JIT narratives can provide worlds where interactors can be more than themselves without demanding skills beyond their reach. Lieutenant Commander Data, the android starship officer of Star Trek: The Next Generation (Paramount, 1987) fame, enjoyed donning the mantle (not to mention the cape and hat) of Sherlock Holmes during his Holodeck recreation breaks. How else could he experience the intellectual challenge, and danger, of catching thieves and murderers in a world centuries in the past?[61]

[61] Not to mention a world that was itself a literary creation. Data's experiences as Holmes were one of my inspirations that JIT narratives could host characters as complex as those of literature.

These same participatory experiences of gamers raise the issue of their gaming background's impact on learning. The Army is already investing in AI agent technology to teach its soldiers. Why not let a middle schooler participate in the Constitutional Convention instead of just reading about it? Or allow an anthropology major to perfect her skills in living with and observing cultures before engaging in field work? So, yes, I do see a future for JIT narratives, not as a product of technological determinism, but as an evolution of modern (and not so modern) entertainment and education narrative forms. I see JIT narratives as art enabled by science and not science with a thin veneer of art. Most of all, I see JIT narratives as a meeting place where humans can hang out not with computers, but with an interesting group of posthuman players.

Glossary

Combining the discourses of narrativity, computer science, games, virtual reality and artificial intelligence guarantees a close encounter with conflicting terminology. No author will ever find the "right" set of definitions that will satisfy the practitioners and theorists of their respective fields; one can only hope to be consistent, making it easier for the reader to follow along. In this dissertation, these are the terms I use, along with their intended interpretations:

A/D is an abbreviation for Analog-to-Digital.

Actors are agents (see Agent) that perform actions. They are not necessarily human. To act is defined here as to cause or to experience an event (see Bal 1997).

Agency is the ability to exert arbitrary influence on agents and objects in the story world.

Agent is a real or fictive entity capable of causing changes in the story world. In computer science discourses, agent also denotes a computer program with an ability to act with little or no direction. The most common type of agent is one that searches the Internet and performs specific functions for its owner, such as retrieving news on a list of topics.
In this dissertation the term agent is used in its literary definition.[62]

[62] This definition follows Ortony, Clore and Collins's treatment of agents in The Cognitive Structure of Emotions, a theory which underlies much of the work being performed in the building of emotional agents.

AI is an abbreviation for artificial intelligence.

Analog-to-Digital conversion is a process of translating an analog (continuously varying) quantity (such as sound) into discrete quantities (such as collections of numbers). Analog-to-Digital conversion is the process performed by computer scanners as well as by digital samplers, which convert the electrical output of a studio audio console into a form suitable for creating CDs.

Artificial Intelligence is the branch of computer science concerned with the ability of computers to duplicate the reasoning and problem-solving processes of the human brain.[63] The field is divided into two branches: strong AI (or Good Old-Fashioned AI, usually called GOFAI), which attempts to replicate or simulate the brain itself, and weak AI, which attempts to demonstrate intelligent[64] behavior, but not necessarily by duplicating internal mental processes. Because strong AI tends to consider the Cartesian mind without a body, weak AI (which addresses both mind and body) is gaining favor in the theory and practice of storytelling in cyberspace.

[63] This definition is after John McCarthy, the "father" of artificial intelligence, who defines the field as "the science and engineering of making intelligent machines, especially intelligent computer programs" (McCarthy 2003).

[64] Intelligent behavior is defined informally as behavior that, if exhibited by a human, would be considered intelligent.

Auteur is the French term for author. In film criticism, the auteur theory holds that the director of a film is its sole author.

Autonomous Agents are AI-enhanced software agents that can function as "humans" within a defined environment. These agents are driven by their own needs, wants and desires and are capable of planning and executing actions to satisfy their drives. Following the common usage in computer science, I will refer to autonomous agents simply as agents. Where this usage may be ambiguous, I will clarify any differences between human and non-human agents.

Authoring Agent denotes a human, machine, device or combination of the three that creates a tangible narrative text through the selection and ordering of elements. These elements may be events in the sense of structural narrative theory, or lexias as used by Roland Barthes to denote the smallest units of meaning in a text.

Back story is the history of a fictional character's life that (supposedly) occurred before the time span covered by the narrative. Often, pieces of this back story are expressed or implied in the narrative.

Batch processing refers to a procedure followed when computing was performed by single, large computers. In batch processing, programmers submitted programs to run on the computer by bringing the programs to a central computer center.

Boundary problem denotes an effect occurring at the intersection of materials, circuitry, computer code and so forth, where incompatibilities collide.

Command line interface describes a computer interface (see Interface) that requires the computer to wait for a character from the keyboard, then another and another, until the RETURN (or ENTER) key is pressed. The computer then tries to make sense out of the string of characters the user entered.
Computer programs. See Programs.

Core dump refers to a printed copy of the contents of all of a computer's memory. Rarely encountered on today's personal computers, core dumps were tools that programmers working on large, central computers commonly used to find and fix errors in their computer programs.

Cyberspace, first introduced by cyberpunk novelist William Gibson in 1984, is a fictive construction created by a computer interface's ability to allow navigation through data, converting data into an apparently three-dimensional structure with its own boundaries and spatial conventions.

Cycle denotes, strictly speaking, one tick of a computer's internal clock. In the early days of computing, an instruction required one clock tick to complete its function, so cycle became synonymous with instruction time. In today's more sophisticated microprocessors, instructions often require several clock ticks to accomplish their functions, so cycles and instruction times are no longer identical. (On the other hand, today's computer clocks are a million times faster than those in early computers, so modern-day instructions complete their operations far faster than those in early machines.)

D/A is an abbreviation for Digital-to-Analog.

Database, in this dissertation, is used in its computer sense, and refers to a collection of digital objects, typically representing or describing real-world objects, that are associated with each other through links. Under normal circumstances, the database is accessed indirectly, via an interface (see Interface) that describes or enforces how items in the database may be stored or retrieved.

Database narratives are narratives created by an interactor who, as a performer, employs the processes of selection and combination of database elements, interpreting these raw materials the way an actor or musician interprets a script or a score (after Kinder).

Database Texts include hypertext, hypertext fiction, the World Wide Web and any other texts that are structured as linked lexias and use the computer as their medium.

Diegetic refers to events that belong to the domain of a story world. For example, an orchestra playing a song in a film scene is considered diegetic because the orchestra is part of the story world. The same song played as a background accompaniment is not diegetic because the music does not originate within the story world.

Digital-to-Analog conversion is a process of taking discrete information, such as a string of 1s and 0s, and transforming it into a continuously varying quantity. Digital-to-Analog conversion is the process performed by computer monitors when converting binary data into visible patterns, and by CD players when converting binary data into sound.

Digitize. See Analog-to-Digital.

Domain independent is a term denoting that a computer doesn't require any reconfiguration to run different programs solving different problems.

Dynamic denotes that part of a JIT narrative that varies from performance to performance because of the unpredictable interactions of interactors (see Interactor) and agents (see Agent).

Elements are the events, actors, time and location that together constitute the material of a fabula (see Bal 1997).

ELIZA is a computer program by Joseph Weizenbaum that is capable of carrying on a "conversation" with a human by simulating the behavior of a psychiatrist.

Event is the transition from one state to another state (see Bal 1997).
Event loop denotes a computer program that repeatedly checks to see whether any computer users or interactors (see Interactor), or any programs or agents (see Agent) using or running on a computer, are ready to perform some action. If so, the event loop software invokes a program specifically designed to enable such actions.

Fabula denotes a series of logically and chronologically related events that are caused or experienced by actors (see Bal 1997).

Graphical user interface describes the computer interfaces of Apple's Macintosh operating systems and Microsoft's Windows operating systems. With a graphical user interface, a user can perform various (often unrelated) actions, in any order, and at any time. The computer is not waiting for a particular command.

GUI is an abbreviation for graphical user interface.

Hacker initially meant an artist, but the term became attached to computer programmers who were passionate about writing elegant software that performed some functions that just had to be done. Today the term has a mixed use in which its original meaning is combined with that of malicious computer users who attempt to disrupt the operation of computers or steal sensitive information stored on computer networks.

Hypertext is a new writing form in which all [text] materials are in a shared pool of units, but every element has a unit in which it originated. New units can be built from material in previous units and/or through the addition of new material. There can be arbitrary links between arbitrary sections of units (Nelson 1993).

Hypertext fiction is a writing form, based on hypertext, in which authors, readers and narratives all interact. Readers choose among various paths through a collection of lexias, offering the possibility of creating large numbers of unique narratives. While paths are chosen by a user, lexias are selected by an author to foster or resist narrative creation (after Landow).

Instructions represent the features directly wired into the computer's hardware. As such, they represent the smallest unit of control a programmer can have over a computer.

Interactive describes a computer-created environment that is both procedural and participatory, where procedural references a computer's ability to execute a series of rules and participatory means computers are responsive to user input (see Murray 1997).

Interactivity in computer games and storytelling in cyberspace is an overdetermined term most often meant to describe agency.

Interactor denotes the JIT narrative equivalent of literature's reader or cinema's spectator. Interactors are also human agents.

Interface denotes a computer program that serves as a mediator between the human user and the computer's resources. When applied to a database, an interface defines how objects can be retrieved from, or added to, the database, as well as the relationships among objects.

Jobs are computer programs submitted to a computer center (see Batch Processing). In environments where computer users are charged for the amount of computer time they use, a job is the program currently using the computer's resources (for accounting purposes). In the era of personal computers and decentralized computing the term is less common and is more frequently replaced by "program."

JIT narrative is another name for Just-In-Time narrative.

Just-In-Time narrative denotes a narrative that combines artificial intelligence technology with human interactors to create successive events in a narrative, in real time, at the moment each event is needed. Since the events are created moment to moment, there is no database of events from which to build the narrative. Rather, the narrative is a product of the complex, real-time interplay of human and artificial intelligences.
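Read together, the Event loop and Just-In-Time narrative entries suggest how little machinery the basic idea requires. The Python sketch below is purely illustrative; the class and function names are hypothetical and do not describe any system cited in this dissertation:

```python
# A toy event loop generating a JIT narrative; hypothetical names throughout.
class StoryWorld:
    def __init__(self):
        self.events = []                # the narrative so far, built moment by moment
    def apply(self, action):
        self.events.append(action)      # an event exists only once it is enacted
    def ended(self):
        return len(self.events) >= 6    # stand-in for a real end-game condition

class Participant:                      # covers both the human interactor and agents
    def __init__(self, name):
        self.name = name
    def next_action(self, world):
        return f"{self.name} responds to moment {len(world.events)}"

def run_jit_narrative(interactor, agents, world):
    """Each pass of the loop asks every participant, human or artificial,
    for an action, then folds those actions into the story world."""
    while not world.ended():
        for participant in [interactor, *agents]:
            action = participant.next_action(world)
            if action is not None:
                world.apply(action)
    return world.events

story = run_jit_narrative(Participant("interactor"),
                          [Participant("agent one"), Participant("agent two")],
                          StoryWorld())
print(len(story), "moments generated, none of them stored in advance")
```

The point of the sketch is only the shape of the loop: each pass produces the next moment of the narrative where nothing existed before.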
Lexia is the smallest unit of a work (see Work) that contains any meaning. This analysis is subjective, and so there is no hard and fast formula for choosing lexias within a work (see Barthes 1984).

Light pen denotes a hand-held pointing device used in conjunction with a display device connected to a computer. When touched to an illuminated area on the display, the light pen is capable of informing the computer where the pen touched the display. Credit for inventing the light pen is given to Ivan Sutherland.

Link is an identifier specifying the location of items in a database or multimedia objects on the World Wide Web. In addition to specifying the location of items or objects, links may specify some relationship between (or among) items or objects.

Memex was the vision of engineer Vannevar Bush, an opto-mechanical device for storing and retrieving data by building associative links between pieces of information. Two translucent screens would cover a work area. Inside, microfilm storage would provide ample room for more information than, according to Bush's calculations, any one person could store in a lifetime. With a memex, Bush envisioned augmenting a human's ability to travel through vast collections of data.

Minicomputer is the term applied to a class of computers that were less expensive than large, central computers and required no special environmental considerations such as air conditioning, raised flooring and heavy-duty electrical wiring. Minicomputers were often found in non-computer environments, such as physics, chemistry or biology laboratories, where they were used to capture and analyze experimental data.

Mouse denotes a hand-operated pointing device first designed by Doug Engelbart at the Stanford Research Institute (now SRI) in the mid-1960s. The modern-day computer mouse traces its lineage back to this product of Engelbart's lab.

Open source software is a large collection of computer programs that provides an alternative model to proprietary software. Open source software means that both the computer-readable bits that make up a program and the human-readable source code are available to anyone who wants them.

Physical identity, as applied to agents, denotes both their visual, or graphic, representation and the behaviors that link an agent's internal state with its outward representation.

Planning algorithm refers to a type of computer program that solves a problem by forging a path through what is called a solution space (see Solution space).

Plot is the slippery middle of a narrative that complicates the path from beginning to end by introducing obstacles, detours and diversions that raise questions for the reader or spectator, questions the reader or spectator moves forward to resolve (after Brooks).

Prerequisites/Actions/Effects denotes the model behind JIT narratives. When certain conditions are met in the story world (the prerequisites), certain actions occur which, in turn, produce effects (changes in the story world) that establish new prerequisites.

Programs are a collection of instructions stored in a computer's memory.

Prosthetic memories are memories implanted in an agent by its designer (or designers) (Landsberg 1995).

Scheduler software was developed for timesharing computer systems (see Timesharing).
Scheduling software checks to see which, if any, computer programs on the system need access to the computer's central processing unit, then selects a program and grants it access.

Schizophrenia, as applied to agents, denotes abrupt changes in behavior at the boundary point between a current behavior and a new behavior. For an interactor, schizophrenia makes it difficult or impossible to follow the motivation of the agent (after Sengers).

SHRDLU is a computer program by Terry Winograd. It exists in its own virtual world, inhabited by a table, a hand, a box, an eye and several blocks and pyramids. SHRDLU is capable of performing arbitrary manipulations of objects in its virtual world in response to plain English requests. SHRDLU also answers arbitrary questions put to it about the virtual world, formulating its answers in direct response to the situation at hand.

Socially situated agents are capable of responding to the constantly changing JIT narrative world in ways that seem, to the interactor, to be (somewhat) understandably motivated (after Loyall, Sengers).

State is a description of the properties of an object or agent (real or imaginary) at a particular moment in time. A simple example of state is the location of all the pieces on a chess board. The positions define the state of the chess board at that moment. The difficulty of describing the state of an object varies with its complexity. The state of a human being is almost impossible to define because so many factors—physical, biological, chemical and mental—affect the human condition.

State Variable is a measurable quantity or quality of an object that can be used in specifying its state. In the chess example, a state variable can be the type of piece on a square (if we are interested in the state of the chess board), or the particular square on which a specific piece resides (if we are interested in the state of that piece). State variables are useful for narrowing the scope of a state. Complex as humans are, we can talk about their states in narrow areas such as location, display of emotion (often called the emotional "state"), well-being, marital status or employment status. States are typically described by combinations of state variables.

Suture is a film theory term from psychoanalytic theorist Laura Mulvey. She describes suture as the process of close coupling between the subjectivity of the spectator and that of the film's protagonist.

Symbol processing is a property of computer programs that allows them to simulate the reasoning processes of humans. In symbol processing, computer programs attach real-world meanings to otherwise arbitrary symbols, such as letters, and then draw conclusions about the world by manipulating the symbols according to a collection of rules.

Sets are collections of elements. The objects in a database, for example, form a set. So does the collection of all mammals. Any human belongs to the set of all mammals, but any reptile does not.

Set theory is a branch of mathematics that deals with the properties of sets (see Sets).

Solution space denotes a collection of all possible steps for solving a problem, some of which will actually solve the problem and some of which won't.

Story is a fabula (see Fabula) that is presented in a certain manner (see Bal 1997).

Subsets are sets created by selecting some, or all, elements from a single set (see Sets). Humans form a subset of all mammals, and female humans form a subset of all humans.
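The Prerequisites/Actions/Effects and State Variable entries can be illustrated together in a few lines of Python; the rule and the state variables below are hypothetical examples, not drawn from any system discussed in this dissertation:

```python
# State variables of a toy story world, plus one prerequisites/actions/effects rule.
state = {"location": "kitchen", "coffee_brewed": False, "host_mood": "tense"}

rule = {
    "prerequisites": lambda s: s["location"] == "kitchen" and not s["coffee_brewed"],
    "action": "the host brews coffee",
    "effects": {"coffee_brewed": True, "host_mood": "calmer"},
}

if rule["prerequisites"](state):   # conditions in the story world are met...
    print(rule["action"])          # ...so the action occurs...
    state.update(rule["effects"])  # ...and its effects establish new prerequisites

print(state)
```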
Template refers to an established pattern of data that agents match to incoming data. Agents use the closeness of the match as one tool in their decision making.

The Media Equation is a theory proposed by Byron Reeves and Clifford Nass that suggests that, in a set of baseline psychological experiments used to measure human-to-human transactional behavior, replacing the second "human" in any experiment's assertion with the term "computer" (or its plural) will not change the results.

Timesharing developed in the early 1960s as a way of recovering otherwise wasted computer cycles (see Cycle) by sharing one computer among many users.

Transitions are physical behaviors that link an agent's internal state with its outward representation (after Sengers).

Turing Test, named after mathematician Alan Turing who first formulated the idea, is a proposal for testing the ability of an "intelligent" computer to fool a human into believing he or she is communicating with another human.

Work denotes a collection of signifiers which, when engaged by a reader, spectator or interactor, are given meaning and structure, as when the signifiers are fashioned into a text.

Wikipedia is the name of a Web-based, free encyclopedia. Its content is created by its users. Any visitor to Wikipedia can edit its articles. Pages are always subject to editing, so no article is ever "finished". See the Wikipedia home page at <http://en.wikipedia.org>.

Bibliography

Agre, Philip E. and David Chapman. "What are plans for?" MIT AI Lab Memo 1050a, October 1989.

Aiken, Henry David. "The Aesthetic Relevance of Artists' Intentions." The Journal of Philosophy (1955).

Allen, James F. "Towards a General Theory of Action and Time." Readings in Planning. Palo Alto, California: Morgan Kaufmann, 1990.

Allen, James F. et al. "Toward Conversational Human-Computer Interaction." AI Magazine 22 (Winter 2001): 27-37.

Anon. "First Amendment Does Not Protect Posting or Linking to DVD Decryption Code." The Entertainment Litigation Reporter (December 31, 2001).

Ashline, William L. "The problem of impossible fictions." Style 29 (Summer 1995).

Austin, J. L. How To Do Things With Words. 2nd edition. Cambridge, Massachusetts: Harvard UP, 1975.

Aylett, Ruth. "Narrative in Virtual Environments—Towards Emergent Narrative." Papers from the 1999 AAAI Fall Symposium. Ed. Michael Mateas and Phoebe Sengers. Technical Report FS-99-01, Menlo Park, California: American Association for Artificial Intelligence, 1999.

Bailey, Paul. "Searching for Storiness: Story-Generation from a Reader's Perspective." <http://www-2.cs.cmu.edu/afs/cs/user/michaelm/www/nidocs/Bailey.pdf> accessed December 6, 2004.

Baird, Robert. "The Startle Effect: Implications for Spectator Cognition and Media Theory." Film Quarterly 53 (Spring 2000): 12-24.

Bal, Mieke. Narratology: Introduction to the Theory of Narrative. Toronto, Buffalo, London: U of Toronto Press, 1997.

Barlow, John Perry. "A Declaration of the Independence of Cyberspace." February 1996. <www.eff.org/barlow/Declaration/Final.html> accessed December 6, 2004.

Barthes, Roland. "1." Mythologies. English translation edition. Hill and Wang, 1972.

Barthes, Roland. "Introduction to the Structural Analysis of Narratives." Image, Music, Text. New York: Hill and Wang, 1977.

Barthes, Roland. Empire of Signs. Trans. Richard Howard. New York: Hill and Wang, 1982.

Barthes, Roland. S/Z. Trans. Richard Miller. New York: Hill and Wang, 1984.

Bates, Joseph. "Virtual Reality, Art, and Entertainment." June 1991.
Baudry, Jean-Louis. “Ideological Effects of the Basic Cinematographic Appara tus.” Film Theory and Criticism, fourth edition. Oxford UP. Original article dated 1970., 1992. Bazin, Andre. What is Cinema? trans. Hugh Gray. Volume Volume I . Berkeley, Los Angeles, London: U of California Press, 1967. Bell, David. An Introduction to Cybercultures. London and New York: Rout- ledge, 2001. Beller, Jonathan L. “Cinema, Capital of the Twentieth Century.” < http://muse.jhu.edU/journals/postmodern_culture/v004/4.3beller.html> accessed December 13, 2004. Postmodern Culture 4 (1994). Benjamin, Walter. “The Storyteller.” Illuminations. New York: Schocken Books. 1968. 83-110. Benjamin, Walter. “The Work of Art in the Age of Mechanical Reproduction.” Illuminations. New York: Schocken Books. 1968. 217-252. Bolter, J. David. Writing Space:The Computer, Hypertext, and the History of Writing. Hillsdale, New Jersey: L. Erlbaum Associates, 1991. Bolter, Jay David and Richard Grusin. “The Double Logic of Strange Days.” Configurations 4 (1996): 311-358. Booth, Wayne C. The Rhetoric of Fiction. 2nd edition. Chicago and London: U of Chicago Press, 1983. Bordwell, David. Narration in the Fiction Film. Madison, Wisconsin: U of Wis consin Press, 1985. Borges, Jorge Luis. “The Garden of Forking Paths.” Ficciones. Grove Press. 1962. Bramer, M. A., editor. Computer game-playing: Theory and practice. New York: Halsted Press, 1983. 228 Branigan, Edward. Point of View in the Cinema: a theory of narration and subjectivity in classical film. Berlin and New York: Mouton, 1984. Branigan, Edward. Narrative Comprehension and Film. London and New York: Routledge, 1992. Brooks, Peter. Reading for the Plot: Design and Intention in Narrative. Cam bridge, Massachusetts and London, England: Harvard UP, 1984. Brooks, Rodney A. “Elephants Don’t Play Chess.” USA Robotics and Autonomous Systems (1990). Brooks, Rodney A. “Intelligence without representation.” Artificial Intelligence 47 (1991). Brooks, Rodney A. Flesh and Machines: How Robots Will Change Us. New York: Pantheon Books, 2002. Bruner, Jerome. Actual Minds, Possible Worlds. Cambridge, Massachusetts: Harvard UP, 1986. Bruner, Jerome. “W hat is a narrative fact?.” Annals of the American Academy of Political and Social Science 560 (November 1998): 17-27. Buchanan, Bruce G. 2001 “Brief History of Artificial Intelligence.” . < http://www.aaai.org/aitopics/bbhist.html> accessed December 14, 2004. Butsch, Richard. “A History of Research on Movies, Radio, and Television.” Journal of Popular Film and Television 29 (Fall 2000): 112-120. Campbell, Jim. “Delusions of Dialogue: Control and Choice in Interactive Art.” Leonardo 33 (2000): 133-136. Campbell, Murray, A. Joseph Hoane Jr., and Feng hsiung Hsu. “Deep Blue.” Artificial Intelligence (2002): 57-83. Cassell, Justine and Henry Jenkins, editors. From Barbie to Mortal Kombat: Gender and Computer Games. Cambridge, Massachusetts: The MIT Press, 1998. Castells, Manuel. The Internet Galaxy: Reflections on the Internet, Business, and Society. Oxford and New York: Oxford UP, 2001. Castronova, Edward. A Center for the Study of Synthetic Worlds: Strategic Plan, presented at the February, 2004 meeting of the Southern California Digital Culture Group, February 2004. Chatman, Seymour. Story and Discourse: Narrative Structure in Fiction and Film. Ithaca and London: Cornell UP, 1980. Chatman, Seymour. “W hat Novels Can Do That Films Can’ t (and Vice Versa).” Critical Inquiry 7 (Autumn 1980): 121-140. 229 Chatman, Seymour. 
Coming to Terms: The Rhetoric of Narrative in Fiction and Film. Ithaca and London: Cornell UP, 1990. Chisholm, Roderick M. “The Structure of Intention.” The Journal of Philosophy (1970). Clark Dodsworth, Jr., editor. Digital Elusion: Entertaining the Future with High Technology. Reading, Massachusetts: Addison-Wesley, 1998. Coleman, David S. “PC gaming and simulation supports training.” United States Naval Institute. Proceedings 127 (February 2001): 73-75. Coste, Didier. Narrative as Communication. Minneapolis: U of Minnesota Press, 1989. Cubitt, Sean. “Phalke, Melies, and Special Effects Today.” Wide Angle (1999). Darley, Andrew. Visual Digital Culture: Surface play and spectacle in new media genres. London and New York: Routledge, 2000. Dautenhahn, Kerstin. December. 2001 “The Art of Designing Socially Intelligent Agents — Science, Fiction and the Human in the Loop.” . < http://homepages.feis.herts.ac.uk/~comqkd/aain3.ps> accessed December 6, 2004. Davies, David. “Artistic intentions and the ontology of art.” British Journal of Aesthetics (1999). Davis, Paul K. Rand’ s Experience in Applying Artificial Intelligence Techniques to Strategic-Level Military-Political War Gaming. The Rand Paper Series; P-6977. Santa Monica, California: Rand Corporation, 1984. Lauretis, Teresade . “Desire in Narrative.” Alice Doesn’ t: feminism, semiotics, cinema. Bloomington: Indiana University Press. 1984. Dennett, Daniel C. Brainchildren: Essays on Designing Minds. Cambridge, Mas sachusetts: MIT Press, 1998. DePristo, Mark A. and Robert Zubex. “being-in-the-world.” Artificial Intelli gence and Interactive Entertainment. Menlo Park, California: AAAI Press, 2001, 31-34. Dolan, William F. and Marc D. Leve. “Striking A Balance Between Linking and Infringing.” The National Law Journal (October 16 2000): C6. Douglas, J. Yellowlees. The End of Books— Or Books Without End? Ann Arbor: U of Michigan Press, 2000. Douglas, Jay and Jon Gratch. “Adaptive Narrative: How Autonomous Agents, Hollywood, and Multiprocessing Operating Systems Can Live Happily Ever After.” Virtual Storytelling: Proceedings of the International Conference on Virtual Storytelling, Avignon, France. Springer, September 2001. 230 Drumheller, Michelle. “Military simulations: More than entertainment.” National Defense 84 (November 99): 56-57. Durham, James. April. 2001 “Tracing the roots of components from OOP through WS.” . < http://www-106.ibm.com/developerworks/ webservices/library/co-tmline/> accessed December 6, 2004. Dyer-Witheford, Nick. Cyber-Marx: Cycles and Circuits of Struggle in High- Technology Capitalism. Urbana and Chicago: U of Illinois P, 1999. Edmonds, Bruce. Capturing Social Embeddedness: A Constructivst Approach. Technical report, Manchester Metropolitan University, 1998. < http://cfpm.org/cpmrep34.html> accessed Dec.7, 2004. Elliott, Clark, Jacek Brzeninski, Sanjay Sheth, and Robert Salvtoriello. “Story- Morphing in the Affective Reasoning paradigm: Generating stories semi- automatically for use with ” emotionally intelligent” multimedia agents.” Procedings of the 2nd International Conference on Autonomous Agents, Min neapolis. 1998, 181-188. Engelbart, Douglas. Augmenting Human Intellect: A Conceptual Framework. Technical Report AFOSR-3233, Stanford Research Institute, Menlo Park, California, October 1962. Featherstone, Mike and Roger Burrows. “Cultures of Technological Embodi ment: An Introduction.” Cyberspace/Cyberbodies/Cyberpunk: Cultures of Technological Embodiment. London, Thousand Oaks New Delhi: SAGE Pub lications. 1995. 
Feenberg, Andrew. Critical Theory of Technology. Oxford and New York: Oxford UP, 1991. Feigenbaum, Edward A. and Julian Feldman, editors. Computers and Thought. New York: McGraw-Hill, 1963. Filiciak, Miroslaw. “Hyperidentities: Postmodern Identity Patterns In Massively Multiplayer Online Role-Playing Games.” The Video Game Theory Reader. New York and London: Routledge. 2003. Flanagan, Mary. “Mobile Identities, Digital Stars, and Post-Cinematic Selves.” Wide Angle 21 (January 1999): 77-93. Forrest, Stephanie. “Genetic Algorithms.” ACM Computing Surveys (1996). Foucault, Michel. “W hat is an Author?.” Language, Counter-Memory, Practice. Ithaca: Cornell UP. trans. Donald Bouchard and Sherry Simon, 1977. Franchi, Stefano and Giiven Giizeldere. “Dialogues with Colorful Personalities of Early AI.” < http://w ww.stanford.edu/group/SHR/4-2/text/toc.htm l> 231 accessed December 14, 2004. Stanford Electronic Humanities Review 4 (1995). Freeland, Cynthia. December. 2001 “Cognitive Science and Film The ory.” . c h ttp ://www.hfac.uh.edu/cogsci/CogSciFilmTheory.html> accessed December 6, 2004. Freud, Sigmund. Beyond the Pleasure Principle, trans. and ed. James Strachey. Standard edition. W. W. Norton & Company, 1961. Fuller-Seeley, Kathy. “Spectatorship in Popular Film and Television.” Journal of Popular Film and Television 29 (Fall 2000): 98-99. Funge, John. “Cognitive modeling for games and animation.” Communications of the ACM 43 (July 2000): 40-48. Gard, Toby. “Building character.” Game Developer 7 (May 2000). Gardner, Howard. The M ind’ s New Science: A History of the Cognitive Revo lution. New York City: Basic Books, Inc., 1985. Gardner, Howard. “Artificial Intelligence: The Expert Tool.” The M ind’ s New Science: A History of the Cognitive Revolution. New York: Basic Books. 1987. Gates, B. June 13. 1998 “Compete, Don’t Delete.”, < www.microsoft.com/ presspass/features/1998/bill/economist.asp> accessed December 6, 2004. Gee, James Paul. What Video Games Have to Teach Us About Learning and Literacy. New York: Palgrave Macmillan, 2003. Genette, Gerard. Narrative Discourse: An Essay in Method, trans. Jane E. Lewin. Ithaca: Cornell UP, 1980. Genette, Gerard. Narrative Discourse Revisited, trans. Jane E. Lewin. Ithaca, New York: Cornell UP, 1988. Gerrig, Richard. Experiencing Narrative Worlds: On the Psychological Activities of Reading. New Haven: Yale UP, 1993. Gibson, William. Neuromancer. New York: Ace Books, 1984. Glymour, Clark, Kenneth M. Ford, and Patrick J. Hayes. “Ramon Lull and the Infidels.” A I Magazine 19 (Summer 1998): 136. Goetz, Thomas. “Open Source Everywhere.” Wired Magazine 11 (November 2003). Gottdiener, M. “Recovering Lost Signifieds: Cultural Criticism in a Postmodern World.” Postmodern Semiotics: Material Culture and the Forms of Postmod ern Life. Oxford UK and Cambridge USA: Blackwell. 1995. 233-254. 232 Gottdiener, M. “Semiotics, Socio-Semiotics and Postmodernism: Prom Idealist to Materialist Theories of the Sign.” Postmodern Semiotics: Material Cul ture and the Forms of Postmodern Life. Oxford UK and Cambridge USA: Blackwell. 1995. 3-33. Gratch, Jonathan. 2000 “Emile: Marshalling Passions In Training And Educa tion.”. < http://www.ict.usc.edu/~gratch/agents2000.pdf> accessed March 16, 2005. Gratch, Jonathan and Stacy Marsella. 2004 “A Domain Independent Framework for Modeling Emotion.” , < http://www.ict.usc.edu/~gratch/ GratchMarsellaCOGSYS04.pdf> accessed December 13, 2004. Grice, H. P. “Meaning.” The Philosophical Review 66 (July 1957): 377-388. Grusin, Richard. 