University of Southern California
Video Volume
Patrick Meegan
Masters of Fine Arts Thesis
Interactive Media Division, School of Cinematic Arts
26 March 2014
Patrick Meegan 2
Contents
Introduction
Exploring Available Technology
Story and Interaction
Next Steps
Works Cited
Patrick Meegan
26 March 2014
Video Volume: Interactive Media MFA Thesis
Video Volume explores the challenges and affordances of working with full motion video in
interactive 3D game space. It has evolved from a broad set of aesthetic and technological concerns to a
very specific interactive head-mounted-display experience entitled Séance for Cinewava. Though the
final product stands on its own, the true story behind Video Volume has as much to do with the
technologies, narratives, and prototypes that have been put to the side along the way, as it does the
final features that appear in Séance. Both technically and intellectually the project finds itself situated in
a unique space between cinema, game, and theater. In the end it aims to tell a story that illustrates the
unique expressive potentials of its form. But in a broader sense Video Volume hopes to articulate a
method for the creation of live-action video in 3D interactive space. In so doing, the project hopes to
raise useful questions and challenges to future artists working in this digital media “intermezzo” space,
which is being dubbed “immersive entertainment” by industry.
In my original proposal from Spring, 2013 I stated the following intent:
Video Volume (V²) is an interactive experience that explores the audio/visual aesthetics of
thoughts and memories. V² leverages the expressive qualities of video while breaking
conventions of the static, rectangular cinema screen, by inserting live-action video into a real-
time 3D game engine. V² employs a variety of capture methods and post-processing effects to
build a virtual world of video. In addition to the creation of a unique story experience, the
project explores how we can visualize, sort, and understand the vast ecosystem of video content
that we consume and construct on the internet today.
The goals can be summarized as follows:
1. Tell a story exploring the aesthetics of thoughts and memories.
2. Remove the rectangular screen convention of traditional cinema.
3. Develop a method to capture and post-process live-action content for 3D interactive worlds.
4. Articulate implications for the overall online, digital video ecosystem.
Though the particulars of the project changed over the past year, each of the stated goals has been
met, often in unexpected ways. In all cases a careful balance of storytelling and technology
considerations led to the final project. In order to best understand how and why the project evolved to
produce a full-motion video experience for head-mounted display that meets the stated goals, I will
discuss the process and decisions made chronologically. First I will discuss the evolution of the
technology platform. Second I will talk about how the narrative was shaped by interaction. Third I will
discuss Video Volume as a prototyping space, and future possibilities for the method’s application.
Exploring Available Technology
When I began making films as an undergraduate, my first fascination was with lighting. The
power to shape, hide, and create space was almost magical in its effects. Over the next several years of
working with film and video, I always bristled at the description of filmmaking as storytelling first and
foremost. In my experience producing and directing, we were creating space as much as we were
sequencing events. As online video took off with the rise of YouTube and other services, the
possibilities for interactivity became apparent to me as a way to let the audience explore and appreciate
the space of the video in its own right.
Upon coming to USC, I was immediately exposed to a community of designers working around
this broad idea of the spatial narrative. 3D videogames have been exploring this terrain since their
inception, yet the implementation of live-action video in this same interactive space was significantly less
explored. Once I was introduced to the Unity3D development environment and began working with full
motion video, it was clear that there was a different relationship to the rectangular screen. When you
are looking at a two-dimensional screen displaying a 3D game environment containing a 2D video, the
video appears extremely flat upon interacting. In addition, the rectangular shape of the video is
distracting and at odds with 3D graphic elements. But there is also something visceral and emotional
that is maintained by the very presence of a recorded moment.
By shooting elements against green screen I was able to remove the rectangular frame. But for
the final capture method I knew I would need a technology that provided more depth or volume to the
content. I spent significant time exploring the Microsoft Kinect and similar depth cameras as capture
devices. Though the aesthetics of the resulting animated point clouds were far more “computerized”
looking than video, I still found the visceral impact of a recorded person present in these examples.
However, because depth cameras as motion imaging devices are very early in their development, they
are low resolution and there are no readily available post-production workflows. I continued to work
with depth cameras as an interface, but my capture focus returned to video.
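The "animated point cloud" look of those depth-camera experiments comes from back-projecting each depth pixel into 3D through the camera's intrinsic parameters. The following is a minimal illustrative sketch using the standard pinhole camera model; the focal length and principal point values are invented placeholders, not the calibration of any device used in the project.

```python
# Back-project a depth image into a 3D point cloud using the pinhole
# camera model. The intrinsics below are invented placeholders; a real
# depth camera supplies its own calibrated values.

FX, FY = 525.0, 525.0   # focal lengths in pixels (assumed)
CX, CY = 160.0, 120.0   # principal point, assuming a 320x240 sensor

def depth_to_points(depth):
    """depth: list of rows of depth values in meters (0 = no reading).
    Returns a list of (x, y, z) points in camera space."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z > 0:  # skip pixels with no depth reading
                x = (u - CX) * z / FX
                y = (v - CY) * z / FY
                points.append((x, y, z))
    return points
```

Rendering those points each frame, as the camera streams, produces the computerized yet visceral animated figures described above.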
After committing to video cameras, the primary question was how to create content with
volume. I had already been working with GoPro Hero cameras, due to their cost and size, in various
arrangements. I had experimented with both stereoscopic and 360 degree panorama videos discreetly,
and I saw opportunity in both directions. My work to that point had been raw and abstract, focusing on
concepts of perception versus comfortable immersion. Early discussions with rental outfits about more
precise camera arrangements made clear that the camera supports would have to be custom. This
meant I would need to assure enough time in the thesis process to construct the necessary support
equipment myself. It became essential to decide on a presentation format in order to guide the
requirements of the capture array.
I had been working on head mounted display projects through the World Building Media Lab
during my second year at USC. The format was exciting for its immersive qualities, even with simply
animated and voiced computer models. At the time I could only imagine what it would be like to
encounter a video-recorded person in the space. In terms of Video Volume, HMD became the chosen
exhibition format for two main reasons. First, I was struck by the complete removal of both the
rectangular screen and the limited point of view we know from cinema. Second, as a natively
stereoscopic display it would allow me to reap the benefits of 3D video as the basis for my content,
bringing much needed depth or volume.
With the platform and content tools generally decided on, I went about devising a method to
capture a single performer “volumetrically”. Conversations with Michael Fink and Paul Debevec, two
experts in this area, led to the concept of a stereoscopic, “bullet-time” camera system. I constructed an
8’ tall, 12’ diameter semi-circular frame out of wood. The frame supported 18 cameras, arranged as
stereo-pairs, and 360 degrees of green screen background. Using this method I was able to create a
prototype in which a user could wear the Oculus Rift (HMD) and use a game controller to walk around
the actress as she performed the scene.
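The playback logic for such a prototype can be sketched simply: given the viewer's position around the performer, show the footage from the nearest recorded stereo pair. The sketch below is illustrative only — the pair count and spacing match the rig described above, but the function name, coordinate conventions, and clamping behavior are my own assumptions, not the project's actual Unity code.

```python
import math

# Hypothetical sketch: pick which pre-recorded stereo pair to display
# based on where the viewer stands relative to the performer. Assumes
# 9 stereo pairs (18 cameras) spaced evenly across a 180-degree
# semicircular rig, with pair 0 facing the performer head-on.

NUM_PAIRS = 9
ARC_DEGREES = 180.0
PAIR_SPACING = ARC_DEGREES / (NUM_PAIRS - 1)  # 22.5 degrees between pairs

def nearest_pair(viewer_x, viewer_z, subject_x=0.0, subject_z=0.0):
    """Return the index of the stereo pair closest to the viewer's
    angle around the subject, measured in the ground plane."""
    angle = math.degrees(math.atan2(viewer_x - subject_x,
                                    viewer_z - subject_z))
    angle = max(0.0, min(ARC_DEGREES, angle))  # clamp to the semicircle
    return round(angle / PAIR_SPACING)
```

Each frame, the engine would swap the displayed left/right video textures to the pair this function returns, producing the "bullet-time" effect as the user walks around the performance.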
I will go into further details later on the scoping process as a whole, but it is worth mentioning
here that I eventually jettisoned the stereo-bullet-time array as a capture method for the final project.
The green screen method turned out to be too time and resource intensive in post-production to justify
in terms of the specific interaction modes eventually chosen for Séance for Cinewava. To understand
the editorial and visual effects task, one can look at what goes into any single shot of a movie and
multiply that by eighteen. Because I was handling all of the post-production myself (as it too was a
wholly exploratory process) it was essential that I achieve as close to the final results I wanted in
camera, thus minimizing the editorial and VFX work.
The final capture method I developed, the “Panoptigon,” is a stereoscopic, 360-degree panorama
video camera system. As I began prototyping this, initially for the sole purpose of environmental
capture, the powerful immersive effects of the results in the HMD were undeniable. After building a six-
sided version out of wood, I expanded the design to accommodate 8 pairs of stereo video (16 cameras)
in order to help correct for GoPro Hero3 ultra-wide camera lens distortion by providing more camera
field of view overlap. Because the Panoptigon provides full horizontal coverage, I arranged the individual
cameras vertically, allowing for over 120 degrees of vertical field of view. This creates a nearly complete
spherical image, with a small hole at the top and bottom. To standardize the post-production pipeline, I
laser-cut the 8 sided design to create a rig that would maintain an absolutely fixed relationship between
the cameras. This allowed me to batch process footage later on. The design of the Panoptigon, with its
matte black finish and 12” diameter, feels like a cinema camera. It can be mounted on a tripod with a
normal baseplate, allowing for its easy integration with cinema tools. For our final shoot we mounted
four shotgun microphones atop the camera to record spatialized sound that could be easily
implemented in Unity 3D in the final assembly. Overall, the modular yet familiar form factor turned out
to be practical, but also useful as an object that helped others understand the nature of the Video
Volume project.
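The coverage arithmetic behind the eight-sided design can be illustrated with a quick calculation. The roughly 120-degree horizontal field of view used below for an ultra-wide GoPro lens is an assumed figure, not a measurement from the project; the point is simply that eight pairs at 45-degree intervals leave generous overlap between neighbors, which is what allows the lens distortion to be corrected in stitching.

```python
# Illustrative coverage math for an 8-sided stereo panorama rig like
# the Panoptigon described above. The ~120-degree horizontal field of
# view is an assumption standing in for the real lens specification.

NUM_SIDES = 8
CAMERA_HFOV = 120.0  # degrees, assumed ultra-wide setting

yaw_per_side = 360.0 / NUM_SIDES       # 45 degrees between neighboring pairs
overlap = CAMERA_HFOV - yaw_per_side   # 75 degrees shared with each neighbor

# Yaw angle each stereo pair faces around the rig.
yaws = [i * yaw_per_side for i in range(NUM_SIDES)]
```

Under these assumptions each pair shares well over half its view with each neighbor, giving the stitching software plenty of undistorted central image to work with.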
The final piece of technology I explored was a close range depth camera, a technology I had
been exposed to while working with the WBML on the Intel funded research project Leviathan. I began
hacking around with some of Intel’s demo applications. Eventually, attaching the camera to the HMD, I
was able to stream in depth data and generate a mesh in real-time, allowing a user to see a rendering of
his or her hands and interact with the virtual space. I created a prototype that allowed you to grab and
manipulate cubes of video floating weightlessly around you. The Perceptual Camera, as it was called,
remained my primary input device until the late stages of the process. Once I began integrating my final
video content, it became apparent that there was simply too much visual dissonance between the
stereoscopic environments I was most concerned with and this very cool, but ultimately unnecessary
depth camera/HMD hack. I decided that the HMD itself would be the only peripheral for user interaction. At
the time of writing this paper, I have yet to make the final decision on whether or not users will be able
to trigger content through basic gaze detection. The driving interaction, however, comes from a
standard spinning desk chair and the content itself motivating a user to turn around.
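If gaze-triggered content does make it into the final piece, the core check is simple enough to sketch in a few lines. The following is a hypothetical illustration, not the project's code: it assumes normalized direction vectors, and the cone angle and dwell time are made-up threshold values.

```python
import math

# Minimal sketch of gaze-triggered content: if the user's view direction
# stays within a small cone around a target long enough, fire once.
# All thresholds below are assumptions for illustration.

GAZE_CONE_DEGREES = 10.0   # how tightly the user must look at the target
DWELL_SECONDS = 1.5        # how long the gaze must be held

def is_gazing(view_dir, to_target):
    """True when the angle between the normalized view direction and the
    normalized direction to the target falls inside the gaze cone."""
    dot = sum(v * t for v, t in zip(view_dir, to_target))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    return angle <= GAZE_CONE_DEGREES

class GazeTrigger:
    def __init__(self):
        self.held = 0.0
        self.fired = False

    def update(self, view_dir, to_target, dt):
        """Call once per frame; returns True on the frame the trigger fires."""
        if self.fired:
            return False
        self.held = self.held + dt if is_gazing(view_dir, to_target) else 0.0
        if self.held >= DWELL_SECONDS:
            self.fired = True
            return True
        return False
```

In an HMD context the view direction is simply the headset's forward vector, so this scheme needs no peripheral beyond the display itself, which is consistent with the decision described above.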
Story and Interaction:
Telepathy, the ability to experience another person’s thoughts, has been a driving force behind
my artistic motivation for a long time, and with Video Volume this idea was the initial spark. While
telepathy often first brings to mind comic book superheroes, science fiction, and low-rent psychics, it is
only a small step to see the idea of thought transference within religion and spirituality, as well as the
ambitions of the computing and technology industries today. The idea of communication becoming so
fluid between people that one can literally step into the mind of another is compelling. I’ve always
imagined it as something like a shared dream. Cinema has been exploring the visual language of dreams
since its inception; the very act of turning off the lights and getting comfortable as the movie begins
draws a clear association with bedtime and sleep. As an interactive designer, however, I did not just want
to depict telepathy, I wanted the user to feel like a telepath.
In working with HMD experiences in body tracking volumes I had noticed how the elaborate
process of calibrating the volume and getting all hooked up in the head-mounted-display was very
ritualistic. Using a glowing staff you prepare the space so that you will be recognized. You don a sash
fitted with the electronic components, and put on a special mask that will transport you into the digital
dimension. Unless you are one yourself, a technical expert will guide you through all this, making you
comfortable and calm, within the boundaries of the volume. Observing and participating in this I
decided that I wanted to find a connection between this ritual and spiritual rituals.
I looked to the Modern Spiritualist movement in American history, as it was this time in the
West that telepathy was last popularly accepted as a human possibility. After some research I decided
on the séance as the ritual I would explore. As described in Jeffrey Sconce’s book Haunted Media, the
invention of a revolutionary technology, the telegraph, in the 1840s sparked a wave of new thinking
across the country:
In suggesting the limitless possibilities of flowing electrical information, telegraphy’s apparent
ability to separate consciousness from the body placed the technology at the center of intense
social conjecture, imaginative cultural elaboration, and often contentious political debate
(Sconce).
During this period Kate Fox of Rochester, New York began communicating with spirits through knocks on
the walls of her family’s house, not unlike Morse code. The much-intrigued public believed that she had
“opened ‘a telegraph line’ to another world”. This event, and the speed with which news of it spread
through telecommunications, sparked the Modern Spiritualist movement, which spread across the country
paralleling the spread of telegraph lines. “Spiritualism attempted to align itself with the principles of
‘electrical science’ so as to distinguish mediumship from more “superstitious” forms of mystical belief in
previous centuries.” (Sconce)
By choosing the séance as my “gameplay” model I hoped to draw attention to this relationship
between technology and mysticism. I wanted to both expose and elaborate on the idea that technology
brings with it paradigms of belief. Mysticism is always present in culture, but it can be difficult to
decipher when one is culturally immersed in the contemporary dogma. The overt inclusion of mysticism
in a virtual reality experience is not meant as a critique of technology culture per se. Instead I see it as
a way to begin tying the intention of a technology to a pursuit of inner self-knowledge.
In addition to the historical basis, I was fascinated by the individual elements of the séance as
interactions. The idea of holding hands in virtual reality was very compelling to me in its merging of
digital and analogue interactions. In scoping the interaction to a single user experience, however, I
decided to use the table as the tactile anchor to the physical world. It was this connection between the
virtual and the physical environments that was most important. The need to purify the space before
commencing a séance also informed the final narrative of Séance for Cinewava.
For the Fall mid-semester open house, I presented a simple prototype for Oculus Rift that
attempted to create the sensation of a dream space. The 3D environment was textured entirely with
videos of characters and ocean waves, and the user’s hands were present as a blocky depth camera
feed. Users could observe the environment surrounding them, and using their hands could manipulate
video objects that were floating in the space. Play-testers were very excited about the general mood
and technology of the piece. However, the most important observation was the lack of rhythmic
coherence, or frame-rate consistency, between the different visuals. The presence of rectangular video
planes was also problematic for some. It was clear however, that because of the lack of a story or clear
objective in the prototype users were primarily focused on the novelty of the technology.
I decided to create the character of a medium: someone who would come to you in virtual
reality and guide you into the collective unconscious. This amalgam came from the traditional role of a
séance medium as guide to the spirit dimension, and my own thoughts on the telepathic experience. In
the VR séance the HMD taps you into another parallel dimension of thought, a telepathic guide comes
to you and helps you understand the nature of this place. Carl Jung’s Red Book was an inspiration in this
regard. In this document of his own “unconscious explorations,” he describes wandering through a
dream world, and encountering archetypal guides who lead him to ever deeper dimensions of the self.
In my fiction, this dream world is shared by all, and with the right understanding and skills, telepaths can
travel from one person’s unconscious to the next via this dimension. But, in keeping with Jung I decided
I needed a strongly archetypal basis to my medium character.
I wanted my story to be somehow grounded in the landscape of Los Angeles, and I wanted the
character to be of this land. Having been previously interested in the Coyote archetype from Native
American storytelling, I knew the character as a shape shifter, a trickster, a traveler, and above all a
disrupter (Bright). I had already been thinking of a comedic tone for the character to offset the very
heady high concepts of the project, so as I began to recast Coyote as my telepath the irreverent nature
of the archetype fit perfectly with my own intentions and aesthetics. I named the character Brea
Sinawava: “Brea” after the La Brea Tar Pits, and “Sinawava”, the supposed Paiute Indian name for
Coyote. I discovered this second word while hiking in Zion National Park around this time. The Temple
of Sinawava is the source of the river that runs through the park’s canyon. In addition to the coyote
reference, this discovery led me to the idea of the unconscious dimension behaving something like an
invisible river flowing through Los Angeles. The final component of the character once again germinated
from Sconce’s history of Spiritualism, as he describes the feminist underpinnings of the movement:
…the liberating possibilities of electronic telepresence held a special attraction for women,
many of whom would use the idea of the spiritual telegraph to imagine social and political
possibilities beyond the immediate material restrictions placed on their bodies (Sconce).
Mediumship was a way for women to have a voice in what was otherwise a misogynist culture.
Somehow I wanted to incorporate this idea of the séance as feminist soapbox into the character.
For the Winteractive show I created an encounter with Brea Sinawava. Users sit at a circular
table and don an Oculus Rift. They find themselves in a computer-modelled version of the SCI building
lobby, where the show took place. Using a game controller the user walks to the virtual version of the
table they just sat down at physically. When the user collides with the chair the scene cuts to a black
void, with only a shadowy hint of the table. Abstract electronic sounds rise. Brea comes running to the
table, and begins speaking to you. While she speaks the user can move their viewpoint up to 180
degrees around the image of Brea, viewing her from every angle (which I had captured against
greenscreen using the aforementioned stereoscopic bullet-time camera array). Eventually the invisible
current of thoughts that had brought her into the space sweeps her away into the unknown.
Beyond the overall positive feedback around the novelty of the experience, reactions to the
piece, entitled Sinawava, were split between usability problems and narrative intrigue. The black void
was very disorienting to people. The only spatial reference was the imagery of Brea, yet there were
invisible colliders limiting user movement which people found confusing. People also felt the game
controller as input device was at odds with the content and intent of the piece. Some users felt the
focus on a single subject in the space was in some ways oppressive. My own reaction in this regard was
that I had missed the opportunity to move content behind and around the user, and thus not capitalized
on the potential of the HMD. Those with less inclination towards the technical did not complain about
usability, and in fact saw the black void as an appropriate and interesting setting. The fact that you
could see Brea, but that once you moved she couldn’t see you, immediately brought ideas of feminism and
the male gaze to users’ minds. Yet it was the lightness of Brea’s monologue and attitude that was most
interesting to users who made connections to feminist theory. Brea was casual, funny, goofy and
seemingly genuine, which people did not expect when she first arrives.
Overall reactions to Sinawava at Winteractive were all I could have hoped for. The novelty of
the full motion video, “volumetric” character in virtual reality was enough to spark the imagination even
from experienced technologists. And, at the very least, the themes of the narrative were coming across.
But, the dramatic limitation of a single character was a glaring challenge, as the performance of the
actor was so essential to my proving the emotional value of this marriage between video and virtual
reality. My main complaint with the winter piece was the single point of visual focus to the experience,
so I began brainstorming around a second or third character I could bring into the space. Due to the
full-body capture method I was employing, the similarity to theatre acting was a comparison regularly
drawn in discussions with colleagues. In order to focus my intentions I began describing the project as a
theatrical play in which you are sitting on the stage in the middle of the action. Thanks to the play being
recorded, however, the experience is enhanced by the cinematic affordances of editing and effects. In
the interest of time I decided that rather than write a completely original story, I would make better use
of my limited time by adapting an existing story structure to my narrative desires. So I decided to find a
play.
I can’t say exactly where the idea to adapt Waiting for Godot came from, but I will say that I
have a very limited knowledge of theater and plays. Godot happened to be one of the few plays I was
familiar with, although I had never actually read the play in its entirety. But I knew enough about the
concept to have always found the idea of the play appealing. Finally reading it, I was struck by the
sparse atmosphere and its similarity to the black void of the thought dimension I was depicting. Looking
at the overall progression and structure of the narrative, I realized that Vladimir and Estragon could be
seated at the table with the user waiting for the spirit to come, but also tutoring the user on the
interaction mode. Pozzo and Lucky could be the telepathic mediums that are conjured by the séance.
Lucky’s outburst towards the end of Act One, could be an immersive montage of Panoptigon shots
tracking through locations (something I had been brainstorming at that point). I happened to be in New
York City when this revelation occurred. I found out about the recent Godot revival on Broadway, and
immediately bought a ticket. Having never seen the play performed, the experience was very powerful.
Suddenly this text, that had left me somewhat confused overall, was transformed by the actors into
something hilarious, entertaining, and relatable. I needed no further convincing. If Video Volume is
attempting to prove the vitality of full motion video in 3D virtual space, Waiting for Godot had just
proven to me the power of live theater performance to create something beyond any written text.
I began the spring semester by focusing entirely on a draft of the script I would be shooting.
Since I had spent much of the fall exploring technology, for the final project I wanted to ensure that the
story would drive the technology. I constrained myself to the first act of Godot knowing that the final
experience would be only around 10 minutes long. I read through the text, highlighting sections and
crossing others out, until I had distilled it down to the essential elements I would need to communicate
my story. From there I adapted the characters, changing the two male tramps of Godot to a male and
female pair named Crabbler and Dodrelle. Pozzo became Coyot, a female shaman. Lucky became Xyzlo,
a ghost who turns out to be a projection of Coyot’s subconscious. I then went through the sections of
Godot I had chosen, and rewrote the words and actions in an almost line by line fashion. Once I had a
draft of my own original writing I began a series of table reads with friends and classmates. Between
each reading I would rewrite the script. I decided that I would have to structure the tramps’ encounter
with Coyot and Xyzlo very differently than the encounter with Pozzo as it appears in Godot. This
individuation process of my play from the source material continued throughout the rehearsal and
shooting process as I worked with the actors. The final element I took directly from Waiting for Godot
was its title. The title of my play, Séance for Cinewava, in fact hopes to draw comparisons to its source,
and in so doing potentially draw comparisons between the abstract questions: “What is God?” and
“What is Cinema?”
We shot the principal photography in March on a stage at the Robert Zemeckis Center and on
location near Vasquez Rocks (a nearby desert park). Putting both the technology and the process to the
test in both studio and location environments allowed me to explore their potentials in different ways.
With actors performing against a black background, I was able to look at the effects of very discreet
movements and arrangements of visual elements (the actors themselves). With the footage we shot on
location I looked more at long range depth and camera movement. Even though I took Samuel Beckett’s
signature minimalism to heart both aesthetically and as a scoping mechanism, shooting was very
complicated and technical difficulties arose. Perhaps the biggest fault of the equipment was that the
GoPro cameras overheat once they are on a set under high-temperature film lights. These camera
failures led to many lost takes. However, the project was saved by its theatrical approach, play actors
being accustomed to performing the same thing night after night. Relying on performance rather than
editorial, the majority of the play was captured as two continuous five-minute scenes. And because
camera issues forced the actors to go through the play over and over again, the performances only got
stronger.
The final experience of Séance for Cinewava will consist of the footage I have shot with the
actors, but will be layered with documentary style footage (also shot with the Panoptigon) which will
represent the manifestations of the characters’ thoughts in the environment. The remaining question is
whether users will be able to trigger this more atmospheric content by gazing at characters or areas
of the environment, or whether these elements will simply be edited into the final linear experience.
This can only be answered once I have processed and implemented the principal photography and can
prototype the interaction with the final content. In selective playtests with
rehearsal footage I have rendered, it is becoming clear that spinning around in a chair is plenty of
interaction to engage a user – especially if they are engrossed in the story.
Next Steps:
As I’ve stated several times, Video Volume has been an experimental and exploratory project.
Immersive video art and entertainment will continue to reside in this uncertain landscape for some time
to come. But, I firmly believe that stereoscopic, 360 video is here to stay. My final product Séance for
Cinewava was made possible by essentially two technologies: super compact, ultra high definition sports
cameras (GoPro), and low cost head-mounted displays (Oculus Rift). Compact sports cameras have
become something any consumer might use to document and share his or her most active moments in
life. GoPro has filed for an initial public offering, making its product’s rise beyond the
“pro-sumer” all the more clear. Meanwhile, both Sony and Facebook have entered the VR headset
arena, and sent a strong message of confidence in head-mounted-display technology. As both of these
markets continue to mature, it is not hard to imagine a 3D spherical sports camera as the
self-documentation tool of choice, so that we not only share a glimpse of our experiences with our
friends and families, but share the entire audio/visual experience.
For filmmakers, storytellers, and designers there is a much more immediate opportunity posed
by Video Volume: immersive video can be made right now with available technology. And this content
can be exhibited beyond head-mounted-display. And because I’ve been prototyping and constructing
my experience in a fully 3D environment, Unity3D, I am literally building a virtual version of a spherical
3D cinema. In fact, using the Panoptigon design I see no reason why one couldn’t switch the cameras
with pocket projectors to create a 3D 360 projection system that would require only traditional 3D
glasses to experience. More important than opportunities for exhibition, creating video for fully
immersive platforms requires that artists re-evaluate and adapt all of the rules of cinematic expression.
Of course, the rules of cinema are highly useful in space, as are those of theater and game.
For designers of HMD experiences Video Volume presents a method for prototyping. Because a
camera like the Panoptigon captures everything that is visible in the space, the continuity of
performance has more to do with theater than anything else. Because of this, actors have much more
freedom to try out ideas and improvise. While the post-process takes significant time to render, the
manual labor is minimal. In my own process it was a great benefit to shoot several rehearsals of the
entire narrative, so I could look at “dailies” and make adjustments to blocking. Using people and
play-acting to work out the spatial relationships of content elements for HMD could be valuable even if
the eventual content will be computer generated.
For me, the delight of this project, and its real revelations, have come from the process of making
it. My artistic process overall has matured through this multi-disciplinary practice. My computer work
includes video editing and visual effects but moves fluidly between these and game development. But it
is very important to me creatively that I engage the material tactilely. Over the course of the project I
constructed several wooden and acrylic camera rigs, both large and small, and I sewed and fabricated
costumes and props. As a director I was able to focus more on acting than I ever have on a film or video
project, and because the actors had never performed for an audience of one, sitting in the middle of the
action, we had a very dynamic and engaging set of challenges on set. As a writer I wrote my first short
play, which allowed me to hone and contain my exploration of dream space – a topic I could explore for
the rest of my life. As an artist working with technology, I learned, with some help from Beckett, that by
continually reducing your project to its barest essentials you work towards the most focused and
achievable final work.
I am constantly reevaluating the relationship between the different forms I'm combining to find
something new. But if we look at Video Volume as a confluence of cinema, theater, and game, there is
one word they all have in common: play. You press "play" to start a movie. You perform a play in a
theater. You play a game. It seems to me that in all cases the word implies action within a set of
constraints. For cinema the constraint is its unchangeable sequence in time. For theater the
constraint is the script. For game the constraint is the rule set. However, while the distinctions between
these modes are useful for comparing, contrasting, and understanding their histories, ultimately it is
unimportant to me what genre Video Volume fits in. Just as opera arose out of the hybridization of music
and theater, I believe that as the technologies around digital immersion and spatial capture continue to
mature, we will see a new medium arise with its own set of affordances and limitations. I am certain the
entertainment industry will provide many opportunities to escape reality, so in these early stages I
suspect it will be up to artists to use these technologies to explore our internal worlds and better know
ourselves and our humanity.
Works Cited:
Beckett, Samuel. Waiting for Godot: Tragicomedy in 2 Acts. New York: Grove, 1954. Print.
Bright, William. A Coyote Reader. Berkeley: University of California, 1993. Print.
Cloud, Peter Blue, and Bill Crosby. Back Then Tomorrow. Brunswick, Me.: Blackberry, 1978. Print.
Sconce, Jeffrey. Haunted Media: Electronic Presence from Telegraphy to Television. Durham, NC: Duke
UP, 2000. Print.