Creatively Driven Process:
Using a Virtual Production Workflow to Achieve a Creative Vision
By
Sarah Scialli
A THESIS PRESENTED TO THE
FACULTY OF THE USC SCHOOL OF CINEMATIC ARTS
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the Requirements of the Degree
MASTER OF FINE ARTS
(INTERACTIVE MEDIA)
May 2013
Copyright 2013 Sarah Scialli
Acknowledgments
Thanks to my wonderful team and crew who sacrificed their time to help this film
succeed. Thanks to my committee for being voices of reason and knowledge even when
times were stressful. And thanks to my parents for the eternal support and love.
Table of Contents

Acknowledgments
List of Figures
Abstract
Keywords
Main Body
    Project Description
        Virtual Production Workflow Overview
        Story
        Aesthetics, Style
        Concept, Overview, Objective
        Interactive Foundation
    Prior Art
        What is Virtual Production?
        Virtual Production
        Art and Technology
        Story and Themes
        World Building
    Workflow Description
        Initial Aspects of Story
        Story Development
        Visual Development
        Preproduction (Storyboarding, Previs)
        Casting
        Preparations for the Shoot
        Music
        The Shoot (Capture Session)
        Post Production
    Evaluation Scenarios
        Why Virtual Production?
        How much world to show?
        What would I have done differently?
        Challenges and Lessons
        Film and Game Overlap
    Discussion: Innovative Contribution
        Production Vocabulary
    Future of the Industry
        Ten Lessons: Suggestions of Virtual Production Possibilities
Summary
Bibliography
List of Figures

Figure 1: The Uncanny Valley
Figure 2: The Graces from La Primavera, Painting by Sandro Botticelli
Figure 3: Aglaea Storyboard of Circular Camera Motion
Figure 4: Aglaea Storyboard of Static Camera Motion with Quick Cuts
Figure 5: The filming of Avatar
Figure 6: Shrek the Musical
Figure 7: Luxo Jr.
Figure 8: Day & Night
Figure 9: La Luna
Figure 10: Stairs built for Capture Session
Figure 11: Aglaea Environment Concept Art
Figure 12: Tangled
Figure 13: The Aglaea actors on set
Figure 14: The Capture Session
Abstract
My thesis is a case study on the process of creating my thesis film Aglaea, an
animated short movie created with the use of virtual production techniques. In this paper,
I focus specifically on my virtual production workflow and how it impacted my decision-
making. I concentrate on balancing the technological and practical constraints with the
story and creative vision I set out to achieve while letting each aspect influence the others.
Aglaea utilizes performance capture in conjunction with a game engine. It differs
from many animated films in that the final video material is exported directly from the
game engine, allowing me to incorporate real-time visual effects that respond to the
performances of the actors. Visually, my goal was to achieve a stylized, rich aesthetic
reminiscent of more traditional computer generated (“CG”) animation. Though I delved
into many technological endeavors, I aimed to create a compelling, honest story with
believable characters. In addition, the technology has helped facilitate the creation of this
story, while the story itself has also affected the workflow supporting it. The film is
loosely inspired by the Greek mythological Graces, who are three sisters said to bring
happiness to others. I have reimagined the Graces as entertainers of the heavens, and
have focused my story on the youngest sister, Aglaea. The film tells the story of
Aglaea’s journey towards self-confidence and maturity, shown through the development
of her celestial light talent as she is stranded on the human world. During the film,
Aglaea transitions from feeling overshadowed by and dependent on her sisters to seeing
herself as their equal, but with her own unique identity.
Keywords
• Virtual Production
• Animation
• Game Engine, Unity3D
• Interactivity
• AI, Procedurality
• Visual Effects
• Filmmaking
• Performance Capture, Motion Capture
• Storytelling
• Process
• Workflow
Main Body
Project Description
Over the past year, I have investigated virtual production with the end goal of
creating an animated short titled Aglaea. I have created a workflow and process that is
tailored to my specific film. Though by no means do I claim that mine is a
generic or foolproof method to which all films should aspire, aspects of my process could
be adapted to other films. Additionally, though a film with a similar starting point could
have been created using traditional techniques, my workflow has truly shaped the film.
Without its process, my film would be a very different one.
However, it is equally important to me that the way the film was made not
overshadow the film itself. The film should stand on its own, with a sense of imagination,
visual wonder and compelling characters. I want the film to be appreciated on its own,
with the knowledge of the process it underwent only enhancing that appreciation.
Virtual Production Workflow Overview
I was drawn to the amount of creative freedom that real-time technology affords
filmmaking. These technologies allow ultimate control over each element, and the ability
to try out possibilities and iterate quickly. Any virtual elements of the film can be seen as
early as possible using virtual production, so that creative decisions can be made with the
understanding of the implications. Specifically, Aglaea is "shot in" a game engine,
while the animation is created via actor performance capture. So, though the actors are
performing in a physical soundstage, the animated characters are moving through a
virtual set within the game engine. In addition, the use of a game engine allows cameras
to face any angle of the virtual world and visual effects to be triggered in virtual space
based on the performances. The use of the game engine allows me to create my stylized,
fantasy world, while the use of performance capture allows me to ground my characters
in real-world performances.
I have used the game engine¹ throughout the process. Prior to the shoot (the
“capture session”) the engine was used to determine blocking, scout locations,
previsualize camera angles, test out the look of assets and create systems to drive the
visual effects (“VFX”). During capture, the performances were streamed in real-time
into the game engine, allowing me to visualize the animated scenes and incorporate that
visualization into my directing. At the time of this writing, I am working to build my
final scenes in a combination of the engine and other software packages. Ultimately, the
scenes are brought back to the game engine for their final polish, visual effects and
camera motions, where they are then outputted as video files from each camera angle in a
scene, akin to “dailies” in traditional filmmaking. Finally, the scenes are edited, given
their final audio and mixed using standard software packages like Avid and ProTools.
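To make the "dailies" idea above concrete: each camera angle of a scene is exported as its own clip, so the editor can choose among angles later, just as with dailies from a live-action shoot. The sketch below is purely illustrative; the real export happened inside the game engine, and the naming scheme here is invented for this example.

```python
# Hypothetical sketch of the "dailies" idea: every recorded camera angle
# in a scene becomes its own clip, one file per angle per take, so that
# alternate angles remain available in the edit.

def dailies_filenames(scene, take, camera_angles):
    """Return one output filename per camera angle for a scene and take."""
    return [f"{scene}_t{take:02d}_{angle}.mov" for angle in camera_angles]

clips = dailies_filenames("heavens_performance", 3,
                          ["wide", "circling", "closeup_aglaea"])
# Each angle gets its own clip name, e.g. 'heavens_performance_t03_wide.mov',
# ready to be brought into a standard editing package.
```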
Using this workflow, I found ample room for iteration and a remarkable degree of
control at each juncture. I can record additional camera angles to make an edit play
better. I can patch together a line reading from one take with the emotional entrance of
another take to achieve a combination of performances not directly captured. I can
modify the position of set pieces or relight a scene to frame my action better. Using a
game engine allows me to make and see these changes without waiting for the scene to
¹ A game engine is the real-time software package typically used for developing video games.
re-render. In this manner, I can quickly determine the impact of these changes on the
final film.
Virtual production has largely affected my decision making throughout the year.
It has allowed me to have elements at my fingertips early on, rather than guessing how
that element would look much later down the road. I have the benefit of a performance
that can only be achieved through a live shoot while retaining the ability to experiment
with every frame.
Story
Aglaea tells the story of a young goddess of the same name. Aglaea is the
youngest of three sisters known as the Graces, loosely based on the Graces of Greek
mythology. Classically, these characters are said to bring happiness to others. They flit
in and out of other Greek stories without having a canonical mythology of their own. So,
I was inspired to give the Graces that story. They have been reimagined as entertainers
of the heavens, each Grace with her own celestial talent. Aglaea’s oldest sister, Thalia,
has the power to form luxurious foods from clouds. Euphrosyne, the middle sister, steals
the show with her singing and her ability to create beautiful melodies out of thin air.
Aglaea, on the other hand, has a talent that is not as strong as that of her sisters. Her
talent is Splendor, the power to control light. Initially, she can only create sunbeams:
bright rays of light that shine unimpressively over the already sun-soaked heavens where
the three perform.
As these three perform for the beings of the heavens, one audience member
insults Aglaea's talent, knocking her off her game and sending her accidentally
tumbling over the edge of her and her sisters' sparkling, heavenly cloud.
Aglaea finds herself in a barren crater on the human world below, the heavens floating
away alongside the sun. There she sees darkness for the first time and discovers how
amazing her talent can be. Aglaea develops her talent incredibly, which eventually
enables her to return to the heavens. Aglaea ultimately rejoins her sisters with a renewed
sense of self-confidence and a more mature outlook on her relationship with her sisters.
Aesthetics, Style
Aglaea has an organic, imaginative aesthetic. The characters are stylized and the
world is vivid and saturated, hearkening back to traditional animation. I love the look of
classical animation and of recent CG animation, so this decision was largely personal. In
addition, choosing the look of the film as such grounds it in the tradition of animated
storytelling. However, there were practical reasons supporting my decision as well.
Many motion capture films have been plagued by the concept of the uncanny valley,
which describes the unease felt by audiences when human characters look eerily not quite
human.²

Figure 1: The Uncanny Valley²

² Mori, "The Uncanny Valley," 1970/2012.
I knew it was an unrealistic expectation for me to attempt to overcome the
uncanny valley, so I decided to steer far away from photorealism. Instead I chose to
contrast the live performances of my actors with stylized characters in a fantasy world.
This is one decision of many that was influenced as much by technological factors as by
creative factors.
The audience for the film is the general public. The story is very family friendly,
and the style will help it appeal to children and adults alike.
Concept, Overview, Objective
The process to create Aglaea adjusted to fit my goals and the goals of my crew as
the project progressed. Likewise, each decision I encountered in creating Aglaea was
colored by personal preferences.
My favorite part of the creative side of filmmaking is post-production. I like the
point in filmmaking when all of the elements have been gathered and the focus turns to
putting the jigsaw pieces of the puzzle together to create the best possible film. There is
an element of play in post-production for me. I can switch the order of shots, can test out
a different performance or can cut in an alternate angle easily to refine how a sequence
plays. I was interested in exploring how virtual production can make more aspects of
filmmaking feel like this.
In addition to incorporating what I like about filmmaking, I was eager to take on a
project that allows me to draw on the different aspects of my background and incorporate
them into my story and decision-making. For instance, motion capture draws on my
experience as a musical theatre performer. I am intrigued by the similarities between
motion capture and black-box live theater. There are minimal props and set pieces, and
the actress must project her character into the space. The actors must move past the
technology in the room to imagine themselves embodying their characters. There may be
cameras everywhere, but the actors do not necessarily need to play to one angle over
another. This element of theatrical performance is easy for me to relate to and direct
within.
I also chose subject matter that taps into my performance background. Aglaea is
a musical. The Graces perform multiple times, each performance representing the
emotional state of those characters. The character Euphrosyne, in particular, is talented
vocally. The music she makes goes hand in hand with her blocking and affects the
performance that the actress gives. Analyzing stage performance and vocals is one of my
personal abilities, so it was important for me to incorporate my musical and performance
training into this film. I wanted to treat the score, the choreography and the vocals as
filmmaking elements that could be adjusted to best serve the film.
Another aspect of my background that influenced the story was my computer
science background and interest in visual effects. I wanted to create a story where the
visual effects would serve as storytelling elements that could be modified using my
virtual production workflow. As the characters developed, I came up with the idea to
incorporate procedural visual effects into the story. Both Aglaea and Thalia’s talents are
visual, so the effects are programmed to respond to the performances organically from
within the engine, as opposed to compositing the effects over the image at the last stage
of filmmaking. I wanted to be able to judge the entire frame and adjust a camera based
on what would look better for the visual effects, or adjust the path of the effect to look
better for a desired camera or performance. The effects can be controlled and adjusted;
one of many tools to help dictate my final shots. In addition, I was drawn to the idea that
the effects could be more directly connected to the emotional performance of the actors.
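A rough, engine-agnostic sketch of this idea, effects driven by performance data inside the scene rather than composited over the final image, might look like the following. The function names and data are invented for illustration and are not taken from Aglaea's actual Unity3D project.

```python
# Engine-agnostic sketch: a procedural light effect that follows a
# captured hand position each frame. Because the effect lives in the 3D
# scene, it can be reframed, relit and adjusted like any other element,
# instead of being a fixed 2D composite over the final image.

def update_light_trail(trail, hand_position, intensity, max_points=5):
    """Append the latest hand sample; keep only a short trailing path."""
    trail.append((hand_position, intensity))
    del trail[:-max_points]          # drop the oldest points beyond the cap
    return trail

# Simulated per-frame motion capture samples for one hand: (x, y, z), intensity.
frames = [((0.0, 1.0, 0.0), 0.2), ((0.1, 1.1, 0.0), 0.5),
          ((0.2, 1.3, 0.0), 0.9)]

trail = []
for position, intensity in frames:
    update_light_trail(trail, position, intensity)
# trail now holds 3 points, each ready to drive an in-engine emitter.
```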
A final personal goal was to incorporate the skills I have developed during my
graduate studies. Whether I am creating a film or a game, I like thinking about my needs
from many angles, rapidly remapping my understanding of the project to a specific need.
I enjoy seeing the progress of the project and discovering how all the pieces will fit
together holistically. The workflow I chose required me to constantly have an
understanding of all of the pieces moving together in parallel. I had to think about my
film from multiple perspectives, considering creative, technical and logistical details at
every step. The virtual production workflow suited my interests and allowed me to
further develop my skills as a producer.
Aglaea’s story of confidence and pride in talent is one that relates personally to
my desire to feel confident in my abilities. Through the content of the story and through
creating the process surrounding the story, I am helping to grow my own confidence.
Interactive Foundation
On the website of the School of Cinematic Arts, Dean Elizabeth Daley “invites
[students] to explore and expand the power and potential of film, television and new
media."³
I believe my film takes on this challenge. The film I am creating is linear, and
the project is a blend of many of the departments of the School of Cinematic Arts.
However, I believe my project also fits firmly within the field of Interactive Media. The
project is built on the foundation of my knowledge as a student of Interactive Media and
is enhanced by my understanding of virtual worlds and game production.
³ Daley and Ross, "About USC Cinematic Arts-USC School of Cinematic Arts."
One way that I used my skills as a student of Interactive Media was to consider
the world that exists around my story. As I developed my story, I constantly asked
myself questions about how the world works. It was not enough for me to consider the
story plot-point-to-plot-point without considering the implications of each plot point to
the world at hand. I wanted to make sure I had thought about the functions of the world,
even if the audience would never be delivered those decisions explicitly.
For example, Aglaea’s talent (the power to create light) in the story is a visual
metaphor for her emotion and maturity on this journey to fit in with and yet stand out
from her sisters. I was very enthusiastic about projecting Aglaea’s character
development in this visual manner. I essentially chose to develop this talent with the
same approach as if I were developing a game mechanic. I developed the rules of her
talent as well as the motions that she must make to get her talent to function. I thought
about what guides each transition of her talent. What factors allow her to handle the light
a certain way? However, I did not actually want my actress to feel as though she were
playing a video game. I wanted her to connect with her character and the emotion of the
scene, first and foremost. I did not want her to be concentrating on getting her talent to
work perfectly in every scene. So my rules for the talent became a little less rigid and a
little more emotionally driven. For instance, all aspects of her talent begin with her hands
cupped to shape a ball of light. But, as she becomes stronger with her talent, she does not
always need to bring her hands back together and concentrate on this ball of light before
she can create something more complicated. Were I to be making a game of this, I would
want the mechanic to always function in the exact same way each time a player interacts
with it. Each input should yield the same output. But as Aglaea grows in my story, her
talent becomes more effortless. So, it requires much less concentration from Aglaea and
it responds without her hands having to be perfectly placed every time. Though
achieving this requires more finesse and flexibility in the game logic itself, the benefits
far outweigh the drawbacks.
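The loosening rule set described above can be sketched in code. The following is a hypothetical illustration (the thresholds, names and mastery scale are invented for this example, not the film's actual engine logic): a strict game mechanic would use a fixed tolerance, while here the tolerance widens as the character's mastery grows.

```python
# Hypothetical sketch of a talent "mechanic" whose input requirements
# relax as the character matures. In a strict game mechanic the tolerance
# would be fixed, so every input yields the same output; here the same
# imprecise gesture fails for a novice but succeeds for a practiced Grace.

def hands_cupped(left, right, tolerance):
    """True if the hands are close enough to form the 'ball of light'."""
    distance = sum((l - r) ** 2 for l, r in zip(left, right)) ** 0.5
    return distance <= tolerance

def can_create_light(left_hand, right_hand, mastery):
    """mastery in [0, 1]: 0 is a careful novice, 1 is effortless."""
    base_tolerance = 0.1                          # strict, game-like rule
    tolerance = base_tolerance + 0.9 * mastery    # loosens with growth
    return hands_cupped(left_hand, right_hand, tolerance)

# Early in the film, sloppy hand placement fails...
can_create_light((0, 0, 0), (0.5, 0, 0), mastery=0.0)   # False
# ...but the same gesture succeeds once the character has matured.
can_create_light((0, 0, 0), (0.5, 0, 0), mastery=0.5)   # True
```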
Thalia’s talent is a great example of this in action. Thalia is much older and more
practiced, so her talent comes much more naturally. She almost only has to “think” her
talent and it functions. I wanted Aglaea’s emotional growth to manifest itself through not
only what she can create, but also the amount of concentration and exactness that Aglaea
needs in order to use the talent. So, it was important to me to balance thinking of it like a
true game mechanic with thinking about how the talent would actually work for Aglaea
in the world that I have built.
This balance certainly affected my directing of the actresses. We had to learn the
mechanical aspects of the talents first. The actresses can only be effortless with their
talents if they are comfortable enough using them initially. We spent a rehearsal
choreographing the talents, defining the hand motions that would make them work. But
then we also talked about how adept the characters are from scene to scene with their
talents. Aglaea would need to be very careful with her hand placement at the beginning
of the film, while by the end her motions need only approximate the specific choreography
for her talent to function. The emotion matters the most, so I encouraged my
actresses to bring in their own interpretations of how the characters are feeling from
scene to scene. Is the character upset and overdoing each hand motion? Is the character
distracted and not quite following through with her motions?
Another way I used my skills in Interactive Media was through the concept of
tracing an idea to possible outcomes, especially defining a rule set and then using that
rule set to help define other pieces of the experience. This was a technique I particularly
utilized when defining my shape language, the visual design of my film. Though I
started thinking only about shapes that would be prevalent in the frame, I used that basic
shape language to help me develop story beats, visuals, camera motions and even
editorial rhythm. Specifically, when I was first developing the story, I noticed that
classical sculpture and paintings of the Graces depicted the sisters in a circle, intertwined
with each other. I incorporated this idea and decided to use curves, braids and circles as
important symbols. I decided that circles and braids should signify Aglaea’s connection
with and dependency on her sisters.
Figure 2: The Graces from La Primavera, Painting by Sandro Botticelli⁴
I then thought about the implications of this decision for character and
environment design. I decided that all three Graces should have braided hair and twists
⁴ Image from historylink101.com.
on their clothing or footwear. I decided that the world of the heavens has curved palm
trees that twist around each other and the pool is made up of intertwined circles. I then
extended this idea when I designed the look of the human world. That design is far less
filled with circular shapes, since Aglaea is separated from her sisters. But I did not feel
like an environment covered in hard angles would have any place in this film, so I
decided that the human world should have slight curves, but nothing approaching the
twisty circularity of the heavens. I also extended this idea to camera motion and editorial
pacing. Whenever Aglaea feels like she is connected with her sisters, the camera moves
in languid circular shapes and twists through the world lyrically in long dance-like shots.
In contrast, when she feels separate or disconnected, the camera is more static, and cuts
between actions more quickly.
Figure 3: Aglaea Storyboard of Circular Camera Motion⁵

Figure 4: Aglaea Storyboard of Static Camera Motion with Quick Cuts⁵
My understanding of world building through investigating implications as an
interactive student has very directly affected the formation of my project.
⁵ Storyboards by Nara Lee.
Prior Art
What is Virtual Production?
“Virtual production” is a phrase that has been recently popularized as a
description of cutting-edge filmmaking. However, it is hard to pinpoint exactly what
makes up virtual production or what categorizes one project as a virtual production film.
Digital Domain’s website defines it as “…the process of making films in the
digital world in real time, enabling a new level of creative freedom and efficiency…[It]
enable[s] directors to see more of what their created world will look like while they’re in
production."⁶
The German FMX Festival in May 2012 addressed the meaning of virtual
production as well. “The answer seems obvious at first: virtual production is Avatar – or
at least, the style of film-making James Cameron pioneered on it. John Kilkenny,
Executive Vice President at 20th Century Fox, …call[ed] Avatar ‘the birthplace of pure
virtual production,’ and describ[ed] his ‘lightbulb moment’ of seeing the director pacing
around a seemingly empty warehouse space, only to discover he was scouting locations
on the planet Pandora."⁷
But, does virtual production only mean that the project uses real-time tools?
Those real-time tools may have opened the doors for innovation, but do they encompass
everything that virtual production means? It is equally important to consider the
flexibility in approach that the use of this technology affords the team. For Zoic Studios,
a visual effects studio whose real-time toolset has been used in television shows such as
Once Upon a Time and Pan Am, it is important to understand the implications of
real-time beyond the change in technology itself. "It's going to bring VFX and animation
to the front end of production. Visual effects will no longer be segregated into phases."⁸

⁶ Digital Domain 3.0, Inc., "Virtual Production."
⁷ Thacker, Jim, "FMX 2012: In search of virtual production."
The FMX article mentioned above also quotes Wayne Stables, The Adventures of
Tintin VFX supervisor. “‘It’s not just about technology … the technology has helped but
[that way of thinking] is limiting us. It’s also bigger than giving back the director
creative control. It’s about taking all the lessons we’ve learned about film production
and applying them to your virtual world.’” The article paraphrases Sebastian Sylwan and
continues that “virtual production should offer artists the ability to iterate more quickly: a
process [Sylwan] likened to rapid prototyping."⁹
These thoughts relate virtual production
to traditional production while still acknowledging the affordances that the technology
provides.
I noticed that though these articles allude to the use of game technology, they fail
to discuss the overlap. The creative teams should, in my opinion, also apply the lessons
learned in interactive fields in addition to those lessons they have learned as filmmakers.
Rapid prototyping is such a huge part of the game development process. If virtual
production allows for more rapid iteration, teams should be drawing on the collective
understanding of iteration and prototyping in game development. So, I would argue that
virtual production enables the team to make more informed decisions earlier in the
process, using the tools and talents available to them.
⁸ Thacker, Jim, "FMX 2012: We could remake 300 in real time now."
⁹ Thacker, Jim, "FMX 2012: In search of virtual production."
Virtual Production
The virtual production workflow on Avatar [10] inspired my own in many ways. I
frequently spoke in terms of that film’s workflow when attempting to describe my
workflow to potential crewmembers. The Avatar story is about characters embodying
other personas and learning to empathize with other cultures. The use of performance
capture suits the film well. Actors can physically embody their Na’vi counterparts by
wearing a motion capture suit. And the crew can visualize the alien land of Pandora
through a virtual camera. As in my workflow, the crew streamed their performance
capture data live into a real-time system. The actors were then displayed as their
characters in their environment on the screen. Also equivalent to my workflow was the use of a hand-held virtual camera to place cameras after all the performances had been captured. However, Avatar also used the virtual camera to record master takes during
performance capture. It made more sense for our team to record screenshots and rehearse
camera motions from within the computer during capture instead of juggling the physical
camera during the capture session.
[10] Featurettes from Avatar, Blu-ray, 2010.
Figure 5: The filming of Avatar [11]
And finally, one of the most significant ways that the Avatar workflow diverged
from mine was in finishing. Avatar did not finish inside of their real-time engine.
Instead, they rendered out of more traditional software using higher quality assets than
they used during real-time capture. [12] Also, as far as I am aware, their not finishing in the engine implies that they did not create their final look or effects in engine as I am doing.
Just as I began work on my thesis, I had the unique opportunity to visit the Avatar
set. The crew demonstrated their workflow first hand for our group. They also opened
my eyes to one affordance of virtual production that I had not fully thought through.
They demonstrated how the position of something in real space does not need to translate
to virtual space. They separated two performers across the soundstage and demonstrated an interaction between the two. In virtual space, they dragged one performer’s character over near the other so it appeared that the two were next to each other. This information opened the
door to all kinds of interactions that could be cheated in real space. For instance, I used
this technique when capturing my big show scene. Our stage is much smaller than the environment in which the show takes place. So, I placed two performers that were supposed to share a glance from far across the heavenly resort much closer to each other in real space, ensuring that the direction of the glance would work in virtual space. I was able to capitalize on the personal relationship that could happen from having two actresses play off of each other in real space, while still letting them occupy their correct positions in virtual space. Though there is freedom in having physical space completely disconnected from virtual space, it can get complicated to keep track of what actually happened, especially when multiple performers are performing together.

[11] “Frame the Shot” image from “5 Steps to Avatar: Reinventing Moviemaking.”
[12] Featurettes from Avatar, Blu-ray, 2010.
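In a real-time pipeline, a spatial cheat like this amounts to applying a per-performer offset to the streamed data before it reaches the engine. The sketch below is a simplified illustration, not the actual tool: the performer names and offset values are invented.

```python
# Simplified sketch: cheating physical stage positions into virtual-world
# positions by giving each performer a fixed offset. Two actors standing a
# meter apart on stage can then occupy distant spots in the virtual set,
# while their physically staged eyelines still read correctly.

performer_offsets = {
    "aglaea":     (40.0, 0.0, -12.0),   # hypothetical offsets in meters
    "euphrosyne": (-35.0, 0.0, 8.0),
}

def to_virtual(performer, stage_pos):
    """Map an (x, y, z) stage position into virtual-world space."""
    ox, oy, oz = performer_offsets[performer]
    x, y, z = stage_pos
    return (x + ox, y + oy, z + oz)

# A meter apart on the physical stage...
a = to_virtual("aglaea", (0.0, 0.0, 0.0))
b = to_virtual("euphrosyne", (1.0, 0.0, 0.0))
# ...but far across the virtual resort from each other.
```

The same idea works in reverse: performers far apart on stage can be dragged next to each other in the virtual environment, as the Avatar crew demonstrated.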
Many of Robert Zemeckis’s films developed and popularized the techniques of motion capture, paving the way for how most performance capture is done today.
Even on The Polar Express, many related techniques were utilized. The film used facial
and body markers and also placed cameras virtually after the performances were in the
system. However, the technology was not yet developed enough for real-time streaming
of the performances. On the film’s Blu-ray commentary, Steve Starkey describes their
film’s techniques as “[putting] the pencil… into the hands of the great actors.” [13] However advanced the techniques in The Polar Express were, many have used this film as a barometer
for the look of motion capture, noting the eeriness of the smooth motion and hyper-real
style and bringing the notion of the uncanny valley into public vernacular. The
Economist quoted a review which called it: “…so frightening the film should be subtitled
‘The Night of the Living Dead.’” [14] Despite the unsettling nature of the film, it was still
appreciated as an intriguing new look and an innovative process. Critic Roger Ebert described the film as “…extraordinary… The characters… don’t look real, but they don’t look unreal, either; they have a kind of simplified and underlined reality that makes them visually magnetic.” [15]

[13] “A Genuine Ticket to Ride Documentary Gallery” from The Polar Express, Blu-ray, 2010.
[14] The Economist Contributors, “Animation and Robotics: Crossing the Uncanny Valley.”
Virtual production is also becoming prevalent in previsualization (or previs).
Companies use previsualization to test out the look, effects shots and other costly
elements of a film before they undertake principal photography and effects work.
However, in most cases, this work is only a test and not actually used for any of the final
shots. At Lucasfilm Animation Ltd., on the animated television show Clone Wars, a real-
time engine is used for a more final version of previsualization. To Dave Filoni, the
supervising director, “The interesting thing about the cinematic version of previs is that
largely it’s all disposable. For us it’s very much 1:1. The camera that we use in previs
will be the camera. …Unlike a lot of other previs, which is kind of an experimental
process: ‘This is what it might look like.’ For us it’s ‘This is what it will look like.’” [16] I had the opportunity to intern in the previs department of Lucasfilm, and was able to
experience this workflow firsthand.
Finally, there are even virtual production elements used in the field of musical theatre.
Specifically, in Shrek the Musical, the animated magic mirror appears on stage to interact
with the other actors. To achieve this effect, an actor is positioned offstage in a small
motion capture volume, and his facial data is streamed to a software package where it is
attached to the same Magic Mirror asset used in the film. [17]
[15] Ebert, review of The Polar Express.
[16] Leckman, “Shooting the Clone Wars: A Conversation with Dave Filoni, Part 1.”
[17] Strike, “Autodesk Reveals the Secrets of the Shrek Magic Mirror.”
Figure 6: Shrek the Musical [18]
There are numerous other films and television shows that have begun to incorporate
some aspect of previs or virtual production into their workflows. Likewise, I am aware
that many films are experimenting with innovative virtual production techniques behind
closed doors, but very little is publicly discussed. It is hard to get a handle on the
innovation in the industry, since there is so much secrecy surrounding in-development
projects.
Art and Technology
“The art challenges the technology, and the technology inspires the art.” [19]
An anecdote in the documentary The Pixar Story about early Pixar history struck a chord within me. The team was presenting the short Luxo Jr. at SIGGRAPH. The conference is a
technical one, so their delivery of an animation with a narrative might seem out of place.
After the presentation, one of the graphics experts came up to John Lasseter. Lasseter
prepared himself for a question about the self-shadowing algorithm the team had developed, which he knew nothing about. Instead, the expert asked Lasseter whether the parent lamp was a Mom or a Dad. [20]

[18] Image from “Shrek the Musical: Autodesk Brings Magic Mirror to Life.”
[19] Lasseter, quote from brainyquote.com.

Figure 7: Luxo Jr. [21]
Even in the type of venue where tech is almost exclusively presented, the
audience still fixated on narrative and character. Pixar’s technology sold its story so well
that the audience could concentrate on the emotion without letting the technology block it.
In Day & Night, one of the more recent Pixar shorts, and the first presented in
stereoscopic 3D, this duality of art and technology shines through the story itself. The
short tells the story of two 2D outlined characters that are each windows to a 3D environment behind them. One character displays a nighttime environment and one displays a
daytime environment. The plot follows their relationship through to their ultimate
understanding that they are two parts of the same whole. I interpret this short as a
commentary about computer animation vs. hand drawn animation (the outlines evoke the
hand drawn tradition while the environments and the stereoscopic 3D evoke the computer animation tradition). The story would not have worked without both elements, and the two must coexist just like the characters in its story. Substituting technology for the elements representing computer animation and art for the hand drawn elements, this is also a reflection of how technology and art must come together to serve and shape the story. This lyrical short inspired me to let the emotion as well as the technology shine as one.

[20] The Pixar Story, 2008.
[21] Image from “Luxo Jr.,” Wikipedia, The Free Encyclopedia.
Figure 8: Day & Night [22]
Story and Themes
My story is largely about self-confidence, maturity and fitting in with family.
There are many films that have touched on these themes. In developing this project, I
most referred to the recent animated films How to Train Your Dragon, Brave, and Tangled.
How to Train Your Dragon tells the story of a boy (Hiccup) who befriends a
dragon, ultimately changing his village’s attitude towards dragons as a whole. I
appreciate that the main conflict in this film is not clear-cut. Hiccup’s father, with whom he clashes, is not an evil villain. Aside from the massive battle at the end, which I feel does not mesh with the conflict set up by the rest of the film, the characters are not divided into good and bad guys. I like that the majority of the tension in the story is cultural. The struggle comes forth through Hiccup’s desire but complete inability to fit into his society. As Hiccup begins to see into the world of dragons he moves farther away from his society in private, and yet publicly appears to be the town’s best dragon killer. Ultimately all is resolved: the village accepts Hiccup for who he is and comes to live with the dragons in harmony.

[22] Image from “Newton’s Law: Day & Night.”
I wanted to steer my film away from the clear-cut “hero must defeat the evil
villain who threatens all livelihood.” I drew inspiration from Dragon to create a story
whose conflict arises from the main character’s self-image and her desire to fit in with
those around her.
Brave is another film where the majority of the conflict is set in motion by the
tension in relationships. Merida is a free spirited girl trying to resist the traditional
responsibilities of being a princess in her kingdom. From this film I drew inspiration
from the relationship between Merida and her mother. The film does not turn Merida’s
mother into a villain, but allows the audience to sympathize with her as a woman trying
to raise her daughter in the only way she knows. Later in the film, both women have a
better understanding of the other and have grown closer. This complex relationship
helped inspire the relationship between the three Graces in my film.
Yet another film depicting relationships with parents is Tangled. However, the
inspiration I took from Tangled was less about the personal relationships and more about
Rapunzel’s character and the sense of wonder that she has throughout the film. In this
22
version of the Rapunzel fairy tale, Rapunzel convinces a thief who happens upon her
tower to take her to see the beautiful lights that appear in the sky once a year on her
birthday. Rapunzel’s constant optimism and love of life inspired Aglaea’s outlook. And
Rapunzel’s curiosity towards the world around her also helped inspire how Aglaea deals
with the changes in her talent.
La Luna was the poetic short that preceded Brave in theatres. Similar to Brave, it
is about growing up and taking on responsibility. The film tells the story of a little boy
being introduced to the family business by his father and grandfather. Their business just
happens to be sweeping the moon into its phases. There is a poignant shot in this lyrical
film that stuck with me: a wide shot of the boy tentatively climbing the ladder to the
moon. His father and grandfather look up at him expectantly. The phrase “reach for the
stars” comes to mind, figuratively and literally in this sense. I imagine his family
beaming up at him as he goes off and achieves his dreams with their support. This
message truly resonated with me as I developed my story. My final scene draws from this
image: I wanted the other two sisters to be truly proud of Aglaea’s growth.
Figure 9: La Luna [23]

[23] Image from “[WATCH] Enrico Casarosa’s luminous Pixar Short LA LUNA.”
World Building
As I was developing my story, I spent some time analyzing other stories and
world building methods. Though I analyzed many films, I wanted to reference three
different approaches from films that I drew from in some way.
Avatar
I would classify the world building approach in Avatar as detail-centric. James Cameron seems to move from element to element, assessing each
creative decision as it relates to all others. For example, when designing the fighting
suits that the humans wear, he asked himself questions about their culture. Does it make
sense for these characters in this world to have access to technology like that? And
further, he thought about how the technology would “really” work from an engineering
standpoint, ensuring that the suit could actually function if built for these characters. [24] I think it is this logical, if fictional, explanation for each detail that makes the world feel so
fleshed out beyond the plot and so grounded in its own logic.
The Hitchhiker’s Guide to the Galaxy
Douglas Adams’s colleagues describe the approach he took when developing
aspects of The Hitchhiker’s Guide to the Galaxy franchise as “pulling threads.” [25] He often
started with a throwaway gag and “riffed” on it. For example, when developing the
initial Hitchhiker’s radio show, he came up with the idea that his character Zaphod has two
heads. This idea had no implications in the medium of radio, but needed to be resolved
and expanded upon in other mediums. Ideas like this could even become a core part of the plot of the story in another medium.

[24] Featurettes from Avatar, Blu-ray, 2010.
[25] Featurettes from The Hitchhiker’s Guide to the Galaxy, DVD, 2005.

He also closed up plot holes from medium to
medium, responding to audience feedback. One example of this is the detail of why the
main protagonists Ford and Arthur did not just hitchhike away from the Vogon ship.
Later iterations added this in as something that the characters try. Adams valued
performance and feedback, often reading the work out loud to others and moving forward
with what worked. This approach allows for a malleable world, where the specific details
do not define the world in and of themselves. Yet, the world still feels like it exists
beyond the story.
Tangled
A third approach is what I would describe as a character-centric or premise-
centric approach to world building. Tangled had the starting point of the Rapunzel story
to develop within. Why is Rapunzel in her tower? What makes her leave? What are the
implications of a young girl living her whole life in a tower? Each detail seems to stem
directly from questions like this. Though the world feels cohesive and logical, it almost
entirely centers on Rapunzel’s story. Her experience in the kingdom shows how
Rapunzel brings a ray of sunshine everywhere she goes and reveals to her that the town is mourning its lost princess. Even the lanterns she so adores would not have
existed if Rapunzel were not in her tower. Each detail of the world was carefully crafted
to make sense in Rapunzel’s emotional journey.
My approach draws in some ways from all three classifications but probably most
fits with the character-centric approach. I also started with an existing premise and built
the world from that. However, I did concentrate on details as I described in the Avatar
approach. And I did start with notions that I then had to resolve to fill in the blanks.
25
Learning about these approaches has strengthened my own approach and given me a
clearer understanding of my world. There is a quote in The Art of Tangled credited to
John Lasseter: “‘Think about the world’ because if you create a world, rather than just a
character or a story, you can watch it, and it can play out and resolve within its own
reality.” [26] The book goes on further to quote Byron Howard: “…even though [the audience] might not notice all the details that have been put in, they would notice if they weren’t there, right?” [26]
Workflow Description
Initial Aspects of Story
Before I had even worked out the specifics of how my thesis would be made, I knew I was interested in motion capture, and decided on a premise that would allow me to continue experimenting with it. The year before, I had created a small
project about the sitcom trope of two characters switching bodies. I had been interested
in seeing how the performances would change when the performers changed characters.
As I was brainstorming concepts for my thesis film, I looked through a book about Gods
and Goddesses and discovered the Graces. I was drawn to the fact that there was not
much mythology written about them. I knew that they were characters that brought happiness to others, but they did not really have their own story or their own defined
characters. They were a clean slate to inspire me. My initial thought was to have all
three characters played by the same actress and then have one of the three separate
herself from the other two, and somehow become her own character, now played by a second actress. I thought about this project as an extension of my project the year before.

[26] Kurtti, The Art of Tangled.
The story quickly moved away from the initial actress-switching notion, but retained the
Graces and the concept of one leaving the other two and coming back changed.
Story Development
The idea for Aglaea’s talent developed organically. I followed the thread sparked by the premise I had come up with through to its possible conclusions. I held a
brainstorming session to enlist the creative help of my peers to investigate. The Graces
bring happiness to people: what does that mean? A comically corporate idea was
suggested: could they be extreme party planning secret agents? We thought through the
tone that I wanted to create and used that to help guide our discussion. I wanted
something magical, and though I was not against humor or quirk, I did not want a
sarcastic tone throughout. We continued to investigate the premise: what does ‘Aglaea’
mean, anyway? Splendor. What if Aglaea’s job was to make something visually
splendid? This realization sparked a wave of inspiration. What if Aglaea can control
light? We followed the initial premise to its possible conclusions and settled on
something exciting.
I soon brought two talented screenwriters on board, while simultaneously working
out the details of my technology and eventual production workflow. As the workflow
solidified, I had to choose which hard problems I should tackle head on, and which I
would use as constraints within which to create my story. One example of this was the
decision to avoid the complexity of motion capturing animals. I had also decided that it would be too much of a time sink to hand animate characters from scratch. So, I needed
to eliminate characters that could not be captured with human motion. This technical
27
decision became a constraint in my story: Aglaea could not befriend a bird in the human
world, etc.
On the other hand, once the idea had been generated that Aglaea could manipulate light, I knew that I would have to tackle head on the problem of visual effects that
interact with characters. Without this story idea there would have been no reason to
consider visual effects that respond to performance. But this notion inspired me to
incorporate game logic into the project, having the performances drive the visual effects
and vice-versa.
My story development phase was a juggling act of deciding what would be important to the story while balancing which technical challenges I was prepared to confront.
Visual Development
This balance of thought between creative preference and technological difficulty
continued through my visual development phase. For instance, as my artists and I
developed the look of the resort, I kept in mind that non-flat surfaces would be difficult to
capture. Anything that was not a level surface would have to be built in physical space
for the actors to walk upon. So we kept our terrain as level as possible. But I knew that any area the characters would not be walking over was fair game for uneven terrain. So we built curved stairs around the resort, but made sure that they
were in locations where the actors would not be changing levels between them. We did
decide to build in a main staircase coming from the stage where blocking would take
place. I understood that there would have to be something physical for the actors to walk
on during the capture session in order for them to climb these stairs.
28
Figure 10: Stairs built for Capture Session
Figure 11: Aglaea Environment Concept Art [27]
By the end of this phase, my characters, environments and props and the look of
Aglaea’s talent were fully designed. To reach this end we took a fairly classical approach.
My artists painted entirely in 2D. We gathered pages of reference and put together reference style guides, looking at everything from the way light reflects on surfaces, to the color of the frame, to the style of chairs that would be in the resort.

[27] Concept by Irene Suh.

We drew inspiration
from the lush worlds of Tinkerbell and Tangled and gathered reference from Disney’s World of Color and from the aurora borealis. The Art of Tangled quotes Dave Goetz that
“The word that comes up all the time is appeal. We don’t want it to be grim, we don’t
want it to be threatening or uncomfortable, even when there’s something really dramatic
happening on screen, or something really horrible- it’s all been cocooned in this
appealing package.” [28] As we developed the look of the film, I took this “appeal” concept
to heart and tried to tap into it.
Figure 12: Tangled [29]
Preproduction (Storyboarding, Previs)
Aspects of my preproduction phase were very closely related to an animation
workflow. A good chunk of my time was spent determining how to visually tell my story
while simultaneously building out the world with final character and environmental assets.

[28] Kurtti, The Art of Tangled.
[29] Image from “From Shakespeare to Tangled: A Conversation with Designer Douglas Rogers.”

At this point, I had to be essentially locked with my story: I could not modify
environments or major plot points that would affect the assets that needed to be built.
What I could modify at this point was how the story was told. What information needed
to be shown? What could be implied? How should I convey the story without elements
feeling stale or overstaying their welcome?
We used a few methods to help solidify the visual storytelling. First, we watched
a lot of different reference animation to try to determine how sequences are shot. I
created some storyboard collages from concept art to try to help me explain sequences.
However, I did not have the artistic ability to really convey what was in my head. Furthermore, I was simultaneously resolving the camera angles in my head, and spent so long trying to think of what the angles could be that I could not work quickly enough on the collages. So, instead, I wrote out descriptions of the camera motion as it
corresponds to the emotions of the scene. I followed the shape language discussed above
and tried to rationalize why the camera should be placed in a certain position or why it
should be moving a certain way. Then, I enlisted a concept artist to take those ideas and
render them out in pencil, letting me actually see what I was describing. I edited the
boards together as an animatic, so I could start thinking about how to deliver my story via
editing. Also at this time, my sound designer focused on creating soundscapes for my
animatic and my composer created a concept track that defined the style and themes that
would be found in the film.
Once assets started coming in, my director of photography and I were able to
previs casually inside of the game engine that was now set up. We brought in some
friends and captured some test motions that we applied to the characters and brought into
31
the engine. We could then move the placement of the characters around within the
engine and fly a camera around the environment. This helped facilitate discussions of
blocking and allowed us to play with the kinds of shots that we might have. I describe
this method as something akin to riding a theme park ride: the characters are animating
on their own in loops, constantly performing each bit of action in a scene and a camera
can be moved around past them to view the bits of action. It is also a bit like playing
with action figures since characters can move to wherever you need them. We took some
screenshots at this point, but it was ultimately helpful for starting dialogue about the
eventual performances and camera motions.
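Mechanically, this “theme park ride” previs comes down to looping each captured clip in place while a free camera moves past. A toy sketch of the time wrapping involved (the clip length is invented for illustration):

```python
# Toy sketch: characters loop their captured clips endlessly while the
# camera is flown around freely. Wrapping session time into each clip's
# local time is what makes the action repeat "like a theme park ride."

def loop_time(session_time, clip_length):
    """Wrap a global session time (seconds) into a looping clip's local time."""
    return session_time % clip_length

# With a hypothetical 4-second captured clip, the action keeps replaying:
loop_time(1.5, 4.0)   # partway through the first loop
loop_time(9.0, 4.0)   # partway through the third loop
```

Because each character's loop runs independently of the camera, the director and cinematographer can explore angles without ever restarting the performance.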
Casting
One of the other major milestones during preproduction was casting, which worked very differently than it does in live-action productions.
We had quite a difficult time sifting through headshots, since our characters were
animated and designed by artists. We needed actors who could embody the characters
physically through their movements. So, we looked for dance training and even
puppeteering skills. But the performer also needed to speak like the animated character
she would play. And, the actress playing Euphrosyne needed to sing. So we looked for
vocal training from Euphrosyne hopefuls. For speaking voices, we tried to look away from the performer as she spoke, so we could separate her physical look from her voice and imagine that voice coming from the character. We could be almost ethnically colorblind, so long as the voice of the performer could match the character.
It was also really important to me to find three actresses who could convincingly
relate to each other as sisters. So, we had a callback very similar to those in musical
32
theatre. We brought in all the actors being considered together and swapped performers
in and out to read scenes and dance with each other. We then judged each performer’s
ability to relate to the others and how strong the chemistry and synergy was between the
three. Through this process we determined our core cast of performers who would bring
life to the characters.
I had considered having a portion of the audition consist of “performing” a Kinect [30] game or embodying a digital avatar. I ultimately decided that I could teach the
best way to interact with the technology, but I could not teach the performance that I
needed. So, I ran a very traditional musical theatre style audition and callback to determine which performers would be the best fit.
Preparations for the Shoot
In the month or so leading up to the shoot, it was my job to figure out exactly how
to schedule the performance capture. I also had to focus a good deal of my attention
during this time on technical and logistical preparation.
I spent some of this time programming the system that would govern Aglaea’s
talent. Ideally I had wanted to have all of the functionality of her talent in place for the
shoot, so that we could see all aspects of her talent as she performed on set. In practice, I
was not able to achieve this in time. I was, however, able to prototype enough to have an
understanding of how the systems would work for each effect. And though each
particular effect was not in place, working on the programming helped me better think
through the kinds of performances I wanted to capture during those talents. In a typical
live-action workflow, effects like this would be composited on top of the final image much later in the process.

[30] Microsoft hardware for Xbox, 2010.

Instead, I had to think of the visual effects in 3D space,
figuring out exactly how the AI of her talent would know when to shift into different
phases.
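In practice, treating the talent as game logic meant something like a small state machine whose transitions are driven by the streamed performance. The sketch below is purely illustrative: the phase names, input cues, and thresholds are invented, not the actual system.

```python
# Illustrative sketch: an effect whose phases are driven by performance
# data (e.g. a tracked hand's height and speed from streamed mocap).
# Phase names and thresholds here are invented for the example.

class TalentEffect:
    def __init__(self):
        self.phase = "idle"

    def update(self, hand_height, hand_speed):
        """Advance the effect's phase based on the current performance cues."""
        if self.phase == "idle" and hand_height > 1.5:
            self.phase = "gathering"      # hand raised: light gathers
        elif self.phase == "gathering" and hand_speed > 3.0:
            self.phase = "burst"          # fast gesture: light bursts out
        elif self.phase == "burst" and hand_speed < 0.5:
            self.phase = "fade"           # performer stills: effect fades
        elif self.phase == "fade" and hand_height < 1.0:
            self.phase = "idle"           # hand lowered: back to rest
        return self.phase

fx = TalentEffect()
fx.update(hand_height=1.6, hand_speed=0.2)   # -> "gathering"
fx.update(hand_height=1.6, hand_speed=4.0)   # -> "burst"
```

Because an update like this runs every frame on live data, the effect can respond to the performance on set, and the performance can in turn respond to the effect.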
Most of the other topics that I had to deal with leading up to the shoot were much
closer to preparations taken on during a live-action production. How many people would
it take to actually run all the machines that I would be having on set? What external
equipment did I need? What kind of physical set pieces and props needed to be located?
Finally, I had to break down exactly how I would capture everything. This turned
out to be quite complicated, though this is something that all motion capture films have to
think about. I was able to garner some inspiration in this manner from one of the
featurettes on the Avatar Blu-ray. One memorable capturing decision was how the
Avatar crew chose to capture the actors flying on the banshees. They chose to build a
device that would be powered by people instead of automated. In the story, the
characters are supposed to be completely in sync with the creatures they are riding. So,
the reasoning was that a mechanical device would lead to a reactionary performance,
while a people-powered device would be more teamwork oriented. [31] Though I do not
have a specific decision analogous to this, I had to analyze each scene to determine the
best way to achieve the performances that I needed.
The capture session was camera independent.
32
We were only interested in
capturing the performances of the actors, and the final cameras would be added later.
Because of this, the action did not need to be divided into shots in the same way as in a live-action film. It had to be broken down by actions (capture takes) that could take place physically on the motion capture stage (the volume).

[31] Featurettes from Avatar, Blu-ray, 2010.
[32] As are performance capture sessions typically, though the level of attention to camera during capture varies between productions.

Shots and capture takes do not map one-to-one. Shots define everything that would be seen by the camera before it
cuts in the final film, while capture takes define everything that would be recorded in one
take of performance capture. For example, one shot in my film calls for a character
walking across a long distance, a distance longer than the dimensions of the volume.
This would need to be broken down into two capture takes: one for the first part of the
shot and one for the second. So this is one shot broken into multiple capture takes. The
opposite could also be true. For example, I have a scene that takes place in a specific room. I captured this scene entirely in one capture take, since the characters do not
need to move outside of this enclosed space. Later, this scene would be broken up into
different shots: a close-up on one character for part of the scene, a wide on all three as
another part of the scene. So to prepare for my shoot, I had to map each scene into both shots and takes. Each divided action that would be seen on camera would have to be
carefully blended after the fact. Thus, I had to plan carefully to keep this post-production
work to a minimum.
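Since shots and capture takes relate many-to-many, the breakdown is easiest to track as an explicit mapping. A small sketch of the idea (the shot and take names are hypothetical, not from the actual film):

```python
# Sketch: one shot can need several capture takes (a walk longer than the
# volume), and one capture take can feed several shots (a whole room scene
# later cut into close-ups and wides). Names below are made up.

shot_to_takes = {
    "sh010_long_walk": ["take_walk_A", "take_walk_B"],
    "sh020_closeup":   ["take_room_scene"],
    "sh021_wide":      ["take_room_scene"],
}

def takes_for_shot(shot):
    """Which capture takes must be blended to build this shot?"""
    return shot_to_takes[shot]

def shots_using_take(take):
    """Which shots reuse a given capture take?"""
    return [s for s, ts in shot_to_takes.items() if take in ts]
```

Queries in both directions help when scheduling capture days and when estimating how much blending work each shot will need after the fact.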
One other decision I made with regard to scene breakdowns was to capture background characters on a different day than my main characters. The background
characters were, in many cases, in the same shots as the main characters, responding to
the Graces’ performances. However, their motions were independent enough that it was
logistically easier to capture them in separate takes from the main action, on a day
specifically dedicated to background motion.
Music
Another preparation required for my shoot in particular was working with my
composer to build the track that would accompany the Graces’ performance. Because
Euphrosyne’s talent is to create music, the score had to go hand in hand with the blocking.
We separated the scene into acting beats and talked through how the music would change
between each. My composer then scored a number of different sections that would fit with those actions. The two of us met remotely to mix and match the sections and modify the
melodies to determine the best timing. I was even inspired to modify Euphrosyne’s
blocking based on freshly composed musical flourishes. Likewise, my composer
modified the timing of the score based on my direction. This iterative approach helped
me feel much better prepared for the performance scenes. It was a great example of the
give and take of decision-making in preparation for the shoot.
The Shoot (Capture Session)
We spent two full weekends capturing all of the performances during the capture
session. Each day required a huge amount of coordination to ensure that the days went
smoothly and that we captured everything that we needed.
Though no costumes were needed, since the characters are animated, the performers did have to wear elaborate outfits. They wore skin-tight motion capture suits with 53 markers made of reflective tape, attached to the suit with Velcro. They wore a helmet
camera with a small light to capture their facial performances. The helmet camera
attached to a belt that the actors wore and was tethered to a long cord that needed to be
plugged in whenever they were captured. Because of this cord, each performer needed a dedicated crewmember to keep enough slack in the line and to anticipate her motion, so that she could move freely without worrying about tripping. This setup worked far better than I expected. The performers were able to walk,
dance and even twirl around comfortably with the help of the cord wranglers.
Figure 13: The Aglaea actors on set
The facial cameras fed into a reference camera system that recorded pure video of each face. This system also recorded several angles of the room. Because part of the system was needed for the facial performances, we could not use as many reference camera angles as the motion capture stage offers. So, we had a freestanding reference camcorder, operated by a crewmember, for additional reference footage.
The volume is filled with 46 infrared cameras that capture the points of light reflecting off the markers the performers are wearing. This data is triangulated into a point cloud on one machine in the room, whose software also solves the data in real time and assembles a unique skeleton for each performer. That skeleton is streamed to a second computer, which applies it to my character rig. The rig data is then streamed again to a third machine, which applies it to the character models in the engine; this third machine's monitor displays the characters moving
through the environment. During the shoot, lighting and camera motions were roughed
in to get an idea of the overall look of whatever we were capturing. However, the most
important data in this aspect of the pipeline was the point cloud of data straight from the
cameras. I had to keep in mind that the visualization in the game engine was for convenience, so I should not devote too much time during the shoot to fixing things in the engine. All of that data would be cleaned and polished during post-production. I had to focus my attention on ensuring that the performances were the best
possible.
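The chain of machines described above is essentially a three-stage streaming pipeline: triangulate marker detections into a point cloud, solve a skeleton, and retarget it onto the character rig. A deliberately simplified Python sketch of one streamed frame (the real stages ran on proprietary motion capture and engine software; here triangulation is faked by averaging per-camera estimates and each "joint" is just the centroid of its markers):

```python
# Toy sketch of the three-stage capture pipeline. All names and the
# math are illustrative simplifications, not the actual software.

def triangulate(marker_detections):
    """Stage 1: merge per-camera estimates of each marker into one
    3D point cloud (here, by simple averaging)."""
    return {m: tuple(sum(axis) / len(ests) for axis in zip(*ests))
            for m, ests in marker_detections.items()}

def solve_skeleton(cloud, joint_markers):
    """Stage 2: fit a skeleton by placing each joint at the centroid
    of the markers assigned to it."""
    skeleton = {}
    for joint, markers in joint_markers.items():
        pts = [cloud[m] for m in markers]
        skeleton[joint] = tuple(sum(axis) / len(pts) for axis in zip(*pts))
    return skeleton

def retarget(skeleton, scale):
    """Stage 3: map the solved skeleton onto a character rig; here the
    'rig' just rescales the performer's proportions."""
    return {j: tuple(c * scale for c in p) for j, p in skeleton.items()}

# One streamed "frame": two wrist markers, each seen by two cameras.
frame = {
    "wrist_outer": [(1.0, 1.0, 0.0), (1.2, 0.8, 0.0)],
    "wrist_inner": [(0.8, 1.0, 0.0), (0.6, 1.2, 0.0)],
}
cloud = triangulate(frame)
pose = solve_skeleton(cloud, {"wrist": ["wrist_outer", "wrist_inner"]})
rig_pose = retarget(pose, scale=1.1)
```

In the actual session each stage ran on its own machine, with the output of one streamed as the input of the next, which is why a glitch early in the chain (a swapped marker, say) propagates all the way to the rendered character.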
Figure 14: The Capture Session
We also recorded audio in the volume. We had a boom microphone following the
action, and each performer wore a small lavalier microphone clipped to her suit. For the music in particular, the actress playing Euphrosyne wore headphones attached to my cell phone, so that she could hear the track while keeping its sound out of the microphones.
We had a television monitor out in the center of the volume displaying the output of the engine. This let me see the engine while directing the actors, and also gave the actors an idea of how they looked as their characters in the game engine.
This elaborate setup allowed me to capture all that I needed and have an
understanding of the final output of each performance in engine.
Post Production
Once the shoot ended, the cleanup phase began. This is the phase I am currently
in at the time of writing this paper, so some of the workflow is still in progress. My first
task was to back up all the data and upload the freestanding reference video; the information from the shoot alone totaled over two terabytes. Once everything was backed up, I needed to rapidly batch process the data, pushing it through to the game engine as quickly as possible. The aim at this point was to have enough information to choose which performances should be used for each action.
The next step in my workflow is to bring the takes into a software package where
I can sync them to the recorded sound. Then I will be able to edit my scenes together and
shoot rough cameras. That material, plus an overlay of the raw video from the facial
capture will help me determine the best performances to use. I can swap in pieces of
performances if desired and can iterate on the animation of the scene. Once I am happy
with the action and feel like my scenes are playing out how I want them to, I can then do
a final cleanup of each take. This consists of going back into each software package,
filling missing frames from the motion capture system or retargeting arms that moved
through the torso. I also have to integrate the motion of the characters’ hair and clothing.
And finally, I have to bring the facial video into two software packages to have it
analyzed and then applied to my character’s model. Then this cleaned up performance is
brought back to the engine, and the animation phase is complete.
The next step is to create the final camera motion. This will be done with a combination of cameras positioned and animated within the engine and with a hand-held virtual camera.[33]
This virtual camera is a screen connected to a camera rig that is held on
the shoulder. The rig has markers on it, so its position in physical space is captured and streamed to the engine. The view from the engine is then streamed back to the rig's screen, giving my director of photography and me a view of what the camera is seeing. This allows for organic camera motion akin to what live-action films can easily create. Once these final cameras are in place, I can export
those shots to be cut together in Avid.
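Conceptually, the handheld virtual camera treats the rig as just another tracked object: marker positions stream in, the engine moves its virtual camera, and the rendered view streams back to the rig's screen. A toy sketch of one small piece of that loop, filtering streamed positions with an exponential moving average to tame tracker jitter (the filtering choice and all names are my own illustration, not the actual system's):

```python
# Toy sketch: smoothing a stream of tracked virtual-camera positions.
# Real rigs also solve orientation from the marker arrangement; here
# we only filter (x, y, z) with a simple exponential moving average.

def smooth_positions(samples, alpha=0.5):
    """Exponentially smooth a stream of (x, y, z) samples.
    alpha=1.0 passes raw data through; smaller values damp jitter."""
    smoothed = []
    state = samples[0]
    for s in samples:
        state = tuple(alpha * new + (1 - alpha) * old
                      for new, old in zip(s, state))
        smoothed.append(state)
    return smoothed

# Three streamed frames of a slightly jittery shoulder-height rig.
raw = [(0.0, 1.5, 0.0), (0.2, 1.5, 0.0), (0.1, 1.5, 0.0)]
cam_path = smooth_positions(raw, alpha=0.5)
```

The trade-off is latency: the heavier the smoothing, the more the on-screen view lags the operator's hands, which matters when the whole point of the rig is organic, handheld motion.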
Once my final cut is together, I can deliver it to my sound designer and my
composer, who will clean and place final sounds, and score the film to picture
respectively. Once the film is scored, we plan to hold a scoring session to record a live
orchestra, further giving the film an organic feeling. Finally, on the sound side, all the
elements are mixed and final sound is delivered.
In parallel, I will continue to work in the engine to achieve the final look of each
shot. I will also be finishing up the look and functionality of the visual effects without
changing any of the animation or cameras at this point. My goal is to polish the visuals, with each shot lit and art directed as well as possible.
[33] Similar to the virtual camera described above on Avatar.
Once both the visuals and audio are completed, I can bring the two back together
and export my final film.
Evaluation Scenarios
Why Virtual Production?
Did Aglaea need to be filmed with virtual production? Given the style I was
looking for, shooting in live-action against a green screen would not suit this project. I
could have made a completely hand crafted animation and still achieved the rich organic
look I desired. But then I would not have been able to infuse my characters with live-
action performances that I could direct. Alternatively, I could have recorded motion capture without using a game engine at all, building the animations and rendering them out in a traditional animation workflow. Or I could have used the engine only for previsualization.
I think that this workflow has allowed me more flexibility and comfort than any of the
alternatives. I was able to direct actors to achieve performances, but then was also able to
get my hands dirty and manipulate motion and program the effects in the engine itself.
This workflow, part live-action, part animation, and part game development, suits my own interests and allows me to apply what I know of each to achieve something unique.
How much world to show?
There are a few questions that come into play when evaluating the best possible workflow. Should the action be streamed into the world in real time? Should the actors see the world? Could they be truly playing a game?
On the DVD commentary of The Hitchhiker’s Guide to the Galaxy, the topic of “practical” versus CG effects is discussed. Most of the effects in that film are practical: puppets and elaborate, functional costuming photographed alongside the actors. However, one scene, the planet factory scene, was brought together through CG. The actors were required to imagine
sweeping landscapes that did not exist while they shot it. Martin Freeman, the actor playing Arthur, was concerned; he felt he had no sense of whether it was going to be any good because he was looking at blue screens.[34] The crew mourned the loss of spontaneity when half of a set is not actually there and the actors are playing to a “tennis ball on a stick.” They preferred when the actors could play off of the Vogon puppets that were actually in front of them.
Another completely opposite story came from Zemeckis when he visited our
motion capture class a year ago. In his experience, actors enjoyed imagining what was
around them (making it similar to the kind of imagination they would have to do as stage
performers in a minimal black-box theatre). Zemeckis noted that motion capture actors
do miss their costumes. So, he provided them with costumes for a day so they could learn how their characters would move in them.[35]
But are these two arguments mutually exclusive? Perhaps they are both arguments for virtual production: allowing the performers to see just enough to understand the performance, but not so much that they lose their imagination. Maybe the real argument is over how much of the world to show. Should there be a monitor
[34] Commentary from The Hitchhiker’s Guide to the Galaxy, 2005.
[35] Zemeckis, Q&A session, 2012.
such as the one I used in my capture session to give the actors a sense of the world?
Should the actors be performing while wearing virtual reality devices?
My opinion is that, at least at this point in the evolution of virtual production, the
use of VR helmets would remove the personal connection between actors, especially
since facial performances are currently complicated to stream live. I think it might also
remove the emotional performance, since the actor could be more focused on doing the
motion “correctly” as if playing a game rather than focusing on the emotion of the
performance. But this is the kind of evaluation each filmmaker will have to make for the kinds of performances required by his or her own film. For
me, it was important for the actor to see what her character looked like in the monitor,
and how it felt to move through space as her. But she needed to retain some imagination,
since she could not always look at the screen while performing. I think of the metaphor
of a set in live theater. The actor is immersed into the world enough to build the
character and imagine himself on the set. But the actor can step through a doorway
offstage and be faced with unpainted wooden planks. This balance keeps the actor’s imagination active while still letting him visualize the scene.
In addition, as a director, I would not find it helpful to be wearing a VR helmet
either. During our capture session, I mostly focused my attention on the actors’ physical
performances, with occasional glances at the monitors. This was partially because of the
lack of real-time facial retargeting, as described above. I wanted to make sure that I had
a good understanding of the performances I was achieving on the whole. Watching the
performers in physical space also allowed me to stay connected with the actors as people,
to be able to direct them in a manner closer to live theatre. I did not want to think of my
actors only as virtual characters while they are performing in front of me. However,
some directors may prefer to be completely immersed in the virtual world while directing
and might gravitate towards wearing a VR helmet.
I think it is also a director’s choice of how much to consider cameras during the
shoot. Some directors who are more comfortable with live-action might prefer to know every camera angle and every shot throughout the shoot, so they can tell this to their actors and could even bring the virtual camera out to record rough versions.[36] This method would quicken the post-production process, since the camera angles are solidified
before the capture. For directors who are more comfortable with world building or game
development, perhaps thinking of the scenes as a world that could be captured from any
angle is easier. In this method, the performance is solidified before the cameras, or both
occur organically together, allowing for more spontaneity in camera motion that responds
to a performance. These are different philosophies, and each director should decide on some combination of the two.
What would I have done differently?
I am reasonably happy with the progress of my thesis up to this point, but I wish
that I had had more time to experiment with workflows and the engine at the beginning of
the process. I should have taken more time to test out everything the engine is capable of
and attempt to achieve more of my final look earlier in the process. I wish I had used
some temporary animation data to work through a complete vertical slice, taking into
account the timing for bringing one shot all the way through the pipeline. Though I did
[36] As described above for the Avatar workflow.
render out some test animation, I did not art direct that scene, nor did I test out the audio
syncing or the post-production pipeline.
I think much of my stress surrounding the shoot could have been alleviated if I
had had more time to practice aspects of the workflow. I was nervous that the systems
would not even function as of several days before the shoot, when I should have been
focusing more on the breakdowns and integrating more final elements into the engine.
More formal previsualization would also have alleviated some stress about the performances and timing.
Finally, I wish I had achieved more of my visual effects for use during the capture
session. I would have wanted to see how the use of the effects influenced the
performance, and it would have been great to have actual effects representing the Graces’
performance: a focal point to which the background performers could react. I am hopeful
that the effects will still feel as if they are responding to the performances and vice versa, even though I was not able to achieve this during the capture session.
Challenges and Lessons
Prior to this year, I had never taken on such an ambitious film. This was also the
first time I had served in the creative lead role in addition to the production lead on any
large-scale project. The creative production side of this project came fairly naturally to
me. I have grown accustomed to breaking down projects into their tasks and keeping
track of details as they relate to a creative vision. I am very comfortable extrapolating the
implications of a creative vision. However, I was not as used to coming up with that
initial creative vision. It was challenging at times to know intuitively what my creative
vision should be, but I found it more comfortable to reason through my decisions based on earlier ones, in the way I am used to as a producer. Also, because of the scale of my project,
the details that went into each element were much more extensive than I had initially
anticipated. I had to improve my compartmentalizing skills; I often found myself mentally jumping to a tangential track of details and had to remind myself to finish one track at a time. Over time I became more comfortable simply writing down dependencies or issues I noticed, rather than abandoning my current train of thought to solve them immediately.
In addition, I had never had to deal with the enormous logistics required to secure a team of professionals and prepare for such a complex physical production. Though it was initially very challenging to navigate, I ultimately learned a lot about the level of detail that goes into this kind of preparation. I am also much more comfortable holding discussions about paperwork approvals, insurance certificates and reimbursement forms with people who do not fully understand my project.
Another challenge to overcome was fitting the new technology together into a
functional workflow. In addition to my own challenges of simply putting together the
workflow, using new technology implied that there were very few students who were
trained enough to actually crew on the project. So I had to bring on a handful of
untrained students to help run the stage and I also had to be the one in charge of ensuring
that the tech was fully functional on set. In the future, I hope to be able to solidify my
live workflow sooner, so that I can train crewmembers to run some of the technology
themselves without my monitoring during the shoot. However, it was a very useful
experience for me to be in charge of the technology during the shoot. The skills I picked
up in preparing for the technical aspects of the project are ones that I hope to use in the
future.
I had the additional challenge of essentially serving as the assistant director, taking on the responsibility of ensuring that we accomplished all of the takes we needed in the time allotted. It was actually a relief to take on this role in addition to my directorial one. It gave me a much better understanding of what I needed to achieve practically during my capture session, while still keeping my vision in mind.
Film and Game Overlap
It was an interesting experience working in the overlap of the film and game worlds, and one that I constantly had to navigate. I became used to choosing vocabulary
for aspects of my workflow that would help describe what I was accomplishing to
different audiences. And I also became comfortable responding to vocabulary that did
not quite fit. For example, my environment artist, who came from the game art world, constantly referred to my environments as levels. Describing them that way to a live-action film veteran would only cause confusion, but it helped my environment artist understand his goal. To a more traditional filmmaker, I might describe them as digital or virtual sets. Throughout the project I improved my ability to develop relevant analogies that would help a given person understand my project.
Discussion: Innovative Contribution
Production Vocabulary
While working on my thesis, I was amazed at how often I had to confront confusion and misunderstanding over the complexity of my workflow. There was a language barrier when discussing my project with those who had never been exposed to anything other than traditional live-action filmmaking. Those signing off on the use of equipment balked at my twisted workflow. Post-production documents had to be signed before production documents. “What are you shooting on?” was a common question without an easy answer. A crewmember who understood the process well enough tentatively informed me during my shoot that the lighting on my actress’s chin looked very dark, and wondered if I needed to reshoot it. My mind instantly darted to the facial cameras: was something blocking the camera and darkening the image? Had the actress forgotten to turn on the facial light? She was actually referring to the lighting in the engine, which could be modified as often as desired after the fact and had only been roughed in for the shoot.
All of these misconceptions and misunderstandings about the workflow made it all the more clear to me that more projects need to twist the filmmaking process in ways that help them succeed, so that there is no longer a sense of needing to adhere to one perfect process. More exposure to virtual production projects would reduce this confusion and help establish a more general vocabulary for talking about production processes across entertainment.
Future of the Industry
I am lucky that my thesis fell at a time when the industry is just beginning to turn
its head toward virtual production. Films like Avatar have popularized the techniques, and previsualization companies are integrating them into their own work. Many companies are developing software packages or modifying their own software to streamline virtual production.
Though the industry is excited to move towards virtual production, it is still at the
very beginning. So many aspects of companies’ workflows are proprietary: each director has a virtual camera built specifically for him, and each production company has its own secret software. I anticipate the time when companies share their secret recipes and learn from each other’s findings.
And the skills required in the industry will have to adjust slightly to make way for virtual production methodologies. Creating a film is still a collaborative effort between technical people and artistic people, but it is becoming a necessity for filmmakers to be able to think in both technical and artistic ways. The more filmmakers understand the many aspects of each decision, the better thought through those decisions will be.
Ten Lessons: Suggestions of Virtual Production Possibilities
The possibilities for virtual production have only barely been tapped. As the industry grows more accustomed to virtual production, I think it will be used more for final filmmaking, rather than only for previsualization. Virtual production can and should be used to create interactive cinematic experiences in addition to passive linear content. Once virtual production workflows are stable enough, they could even be used comfortably for live broadcast.
The potential for telling new stories in new ways is enormous. I can imagine a film
that uses trigger volumes as in a game. So when a performer crosses a certain point in
the virtual world, a virtual event triggers: perhaps a fireworks show, a large crowd
appearing, or the sound of a friend calling out. This could allow for a more spontaneous
reaction to the event, especially if the performer does not know exactly where the triggers
will occur. I can imagine an epic disaster film that would want to allow for improvisation
in performance. For example, a volcano has erupted and the two main characters are
running away from it. The volcano could be a procedural effect, and the two actors have
to spontaneously respond to the way the lava is flowing, hopping over the rivers and
moving towards safe points. This second example would more than likely require head
mounted displays and a good deal of the world to be visualized. Both of these examples
would achieve spontaneity that brings the film closer to being a game. However, the
level of randomness in either of these examples could be adjusted to suit the director’s
needs. Perhaps the lava should not be random at all, and should follow a set path that the
director chooses. But then once the actors perform a couple of takes, the director changes
her mind and wants to modify the path. The director or a crewmember could go adjust
that path on the fly between takes and try something different. I believe that rapid control
of virtual elements will make experimentation with visual effects more possible. A
cinematographer can request an adjustment of the color of the waves on the ocean in the
same way he can request an adjustment of a light in a traditional workflow. Virtual
production can help filmmakers have a better understanding of their target look without
having to guess what the visual effects might look like in the future. As the technology
improves, the camera operator will be able to shoot final takes with the handheld virtual
camera rig that already have their final or near-final lighting in them while the actors are
on set. Then the post-shoot workflow collapses almost completely back into a traditional live-action one, in which the takes are simply cut together, since the look and cameras were already achieved at the time of the shoot.
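The trigger-volume idea above borrows directly from game development; a minimal Python sketch (the axis-aligned box test and all names are my own illustration, not any particular engine's API):

```python
# Minimal trigger-volume sketch: when a tracked performer enters a
# region of the virtual world, fire an event (fireworks, a crowd
# appearing, a sound cue). Axis-aligned boxes keep the test trivial.

class TriggerVolume:
    def __init__(self, lo, hi, event):
        self.lo, self.hi, self.event = lo, hi, event
        self.fired = False

    def contains(self, p):
        """True if point p lies inside the box [lo, hi] on every axis."""
        return all(l <= c <= h for l, c, h in zip(self.lo, p, self.hi))

    def update(self, performer_pos):
        """Fire the event once, the first time the performer enters."""
        if not self.fired and self.contains(performer_pos):
            self.fired = True
            return self.event
        return None

fireworks = TriggerVolume((5, 0, 5), (10, 3, 10), "fireworks_show")
events = [fireworks.update(p) for p in [(0, 0, 0), (6, 1, 7), (6, 1, 7)]]
# events -> [None, 'fireworks_show', None]
```

Because the performer's streamed mocap position drives `update`, the event fires off her actual movement; keeping the trigger location secret from her is what preserves the spontaneous reaction.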
Another way that filmmaking could be affected by virtual production is in enabling
actors to perform together remotely. If schedules prevent two actors from being on the
same coast, the two actors could come together via virtual production, allowing their
virtual selves to perform together within shots.
I can imagine actors playing animated characters having the freedom to improvise and make jokes, in the same way they often are when recording voice-over for a traditionally animated film. I can also imagine game logic that responds to the performance the actor gives. For example, imagine an actor is doing a scene with
an animated pet dragon. The game engine could choose different pre-loaded animations
for the dragon based on the volume of the performer’s voice. So, for example, the
performer could be enthusiastically encouraging the dragon to fly away, triggering an
overexcited liftoff animation. Or the performer could say the line quietly and with
hesitation, triggering a tentative liftoff. This moves the performance even closer to a
game, where the scene adapts to the emotional choices of the performer. I could even imagine a film where the lighting changes based on the way the performer performs: becoming a stormy scenario when the performer hunches down and a sunny one when he straightens and quickens his stride.
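The dragon example reduces to a lookup from a measured vocal feature to a pre-loaded animation clip. A toy sketch, with invented thresholds and clip names:

```python
# Toy sketch of performance-responsive game logic: pick a pre-loaded
# animation for the dragon based on the loudness of the actor's
# delivery. Thresholds and clip names are illustrative only.

def pick_liftoff_clip(voice_level_db):
    """Map a rough microphone level (dBFS, so values are negative and
    louder is closer to 0) to a liftoff animation clip."""
    if voice_level_db > -10:      # shouted, enthusiastic encouragement
        return "liftoff_overexcited"
    elif voice_level_db > -25:    # ordinary spoken delivery
        return "liftoff_standard"
    return "liftoff_tentative"    # quiet, hesitant delivery

clip = pick_liftoff_clip(-30)     # a quiet, hesitant line reading
# clip -> 'liftoff_tentative'
```

A real system would measure loudness over the whole line (and perhaps pitch or pacing too) rather than a single level, but the principle is the same: the engine branches on a feature of the live performance.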
Aside from elements that could respond to performances, there is potential for stories
that could be guided by an audience after the media is completed. A sitcom created on a
game engine could invite families to play a tangential game in the world of the television
show that affects the content of the sitcom itself. So, a child at home could play a math
game on her phone that represents how well a character in the sitcom did on her math test.
The outcome of the game could then trigger different animations in the main game engine
depicting how the character’s parents react and whether the character is allowed to go to
her school dance. Or a tangential interior decorating puzzle game could affect the layout
of the apartment in the sitcom, possibly creating amusing scenarios among the sitcom
characters.
Audience response would not have to be limited to responses after the story has
already been delivered. A live music competition taking place in a virtual world could let votes or social media responses affect what appears in the frame. Audiences could influence more
minor aesthetic elements like instrumentation of the song or an addition of a spinning
disco-ball. Or audiences could directly influence song choice of motion-captured
performers or the response of a virtual judge.
The possibilities are endless. However, the technology is not quite stable enough for
a live broadcast. Software packages very often crash or behave abnormally. Motion
capture software sometimes reverses the locations of markers, causing the machines
further down the line to twist the animated character’s limbs unnaturally. I think that the
creation of more standardized tools that can interface between software packages would
help stabilize the technology, and lowering the total number of software packages running in tandem would also reduce error. Further, on the game engine side, tools
that allow for high quality hair or cloth simulations in ways that can be affected by
streamed performance would need to be developed. In addition, a reliable real-time
facial retargeting solution would allow for real-time lip sync and high quality facial
animation. And if real-time graphics technology comes to rival that of pre-rendered graphics, more filmmakers will be interested in experimenting with virtual production. These technical hurdles are certainly possible to overcome as long
as the entertainment industry is interested in exploring the possibilities of virtual
production.
Summary
In this project, I created a short film, entitled Aglaea, using a workflow that incorporated a game engine as well as other virtual production methodologies. At each decision-making opportunity I had to evaluate the story
reasons for that decision as well as the technical or logistical reasons. This workflow
combined the aspects of my background into a process that helped me create a film of
which I can be proud. I have appreciated the amount of control and iteration that virtual
production offers me. I also now have a much better sense of what would go into
creating a feature film in this manner.
Future work in this sphere could include experimenting more with game logic and
artificial intelligence during the shoot. I would be curious to see how performance was affected by the addition of these elements, and how crews would respond to having more of the final look on the screen. Another avenue of further
research would be in the realm of transmedia. My project already lives in a game engine; if I wanted to create a game set in my world, much of the preparation is already done. I could simply reuse the same assets and game logic that governed my world, devoting my time to designing an experience that feels suitable for the world I have spent so long creating.
I look forward to seeing what new stories other filmmakers can achieve with
virtual production and how virtual production will shape entertainment as a whole.
Bibliography
“A Genuine Ticket to Ride Documentary Gallery” from The Polar Express 3D, Blu-ray,
directed by Robert Zemeckis (2004; Burbank CA: Warner Home Video, 2010).
Avatar, Blu-Ray, directed by James Cameron (2009; Beverly Hills CA: Twentieth
Century Fox Home Entertainment, 2010).
Botticelli, Sandro, The Graces from La Primavera, Painting, image found from
http://historylink101.com/art/Sandro_Botticelli/images/08_Primavera_The_Graces_Detai
l_jpg.jpg.
Brave, Blu-Ray, Disney/Pixar (2012; Burbank CA: Buena Vista Home Entertainment,
2012).
Daley, Elizabeth and Steven J. Ross “About USC Cinematic Arts-USC School of
Cinematic Arts,” accessed March 10, 2013, http://cinema.usc.edu/about/index.cfm.
Day & Night, Disc 1 Toy Story 3, DVD, Disney/Pixar (2010; Burbank CA: Buena Vista
Home Entertainment, 2010).
Desowitz, Bill, “Newton’s Law: Day & Night,” Animation World Network, May 27th
2010, http://www.awn.com/articles/newtons-law-day-night/page/1,1.
Digital Domain 3.0, Inc., “Virtual Production,” accessed February 26th, 2013, http://digitaldomain.com/virtual_production.
Ebert, Roger, review of The Polar Express, Warner Bros, directed by Robert Zemeckis, Chicago Sun Times, November 10, 2004, http://rogerebert.suntimes.com/apps/pbcs.dll/article?AID=/20041109/REVIEWS/41006005/1001.
“Frame the Shot,” image from the set of Avatar (directed by James Cameron, 2009)
from: Rose, Frank, “5 Steps to Avatar: reinventing Moviemaking,” Wired Magazine,
November 17, 2009, http://www.wired.com/magazine/2009/11/ff_avatar_5steps/.
How to Train Your Dragon, Blu-ray, Dreamworks (2010; Glendale CA: Dreamworks
Animation L.L.C., 2010).
Image from La Luna (Disney/Pixar, 2011) from: Pink, Dominic, “[WATCH] Enrico
Casarosa’s luminous Pixar Short LA LUNA,” A Fistful of Culture, November 5, 2012,
http://afistfulofculture.com/2012/11/05/watch-enrico-casarosas-luminous-pixar-short-la-
luna/.
Image from Luxo Jr. (Pixar, 1986) from: Wikipedia Contributors, “Luxo Jr.,” Wikipedia,
The Free Encyclopedia, accessed March 14, 2012, http://en.wikipedia.org/wiki/Luxo_Jr.
Image from Shrek the Musical (Dreamworks, 2008) from: Raz Public Relations, “Shrek
the Musical: Autodesk Animation Technology Brings Magic Mirror to Life,” The Wire,
January 21, 2009, http://www.creativeplanetnetwork.com/the_wire/2009/01/21/shrek-the-
musical-autodesk-animation-technology-brings-magic-mirror-to-life/.
Image from Tangled (Disney, 2010) from: Mitchell, Angela, “From Shakespeare to
Tangled: A Conversation with Designer Douglas Rogers,” About.com,
http://performingarts.about.com/od/Sets/ss/From-Shakespeare-To-Tangled-A-
Conversation-With-Designer-Douglas-Rogers_3.htm.
Kurtti, Jeff, The Art of Tangled, Disney Enterprises, Inc./Walt Disney Animation Studios,
San Francisco CA: Chronicle Books, 2010.
La Luna, Disc 1 Brave, Blu-ray, Disney/Pixar (2011; Burbank CA: Buena Vista Home
Entertainment, 2012).
Lasseter, John, quote from
http://www.brainyquote.com/quotes/authors/j/john_lasseter.html#wXcQxjkmvJd9bHlg.99.
Leckman, Tad, “Shooting the Clone Wars: A Conversation with Dave Filoni, Part 1,”
Previsualization Society, October 30, 2012,
http://previssociety.com/2012/10/shooting-the-clone-wars-a-conversation-with-dave-
filoni-part-1/.
Mori, Masahiro (K.F. MacDorman & N. Kageki, trans.), “The Uncanny Valley” (1970),
IEEE Robotics & Automation Magazine 19(2): 98-100, 2012,
http://spectrum.ieee.org/automaton/robotics/humanoids/the-uncanny-valley.
Strike, Joe, “Autodesk Reveals the Secrets of the Shrek Magic Mirror,” Animation World
Network News, May 13, 2009,
http://www.awn.com/news/technology/autodesk-reveals-secrets-shrek-magic-mirror.
Tangled, Blu-ray, Disney (2010; Burbank CA: Buena Vista Home Entertainment, 2010).
Thacker, Jim, “FMX 2012: In search of virtual production,” cgchannel.com, May 10,
2012, http://www.cgchannel.com/2012/05/fmx-2012-in-search-of-virtual-production/.
Thacker, Jim, “FMX 2012: We could remake 300 in real time now,” cgchannel.com,
May 11, 2012, http://www.cgchannel.com/2012/05/fmx-2012-we-could-remake-300-in-
real-time-now/.
The Economist Contributors, “Animation and Robotics: Crossing the Uncanny Valley,”
The Economist, November 18, 2010, http://www.economist.com/node/17519716.
The Hitchhiker’s Guide to the Galaxy, DVD, Touchstone Pictures (2005; Burbank CA:
Buena Vista Home Entertainment, 2005).
The Pixar Story, Disc 2 Wall-E, Blu-ray, Disney/Pixar (2007; Burbank CA: Buena Vista
Home Entertainment, 2008).
Tinkerbell, Disney, 2008.
World of Color, Disneyland Attraction, Anaheim CA, 2010.
Zemeckis, Robert, Q&A Session, University of Southern California, April 24, 2012.