Video Camera Technology in the Digital Age:
Industry Standards and the Culture of Videography
Michael LaRocco
Department of Cinema and Media Studies
A Dissertation Submitted in Partial Fulfillment
of the Requirements of the Degree of Doctor of Philosophy
University of Southern California
School of Cinematic Arts
Degree Conferral Date – August 2018
Acknowledgments
Thanks to my Chair (Tara McPherson), my Dissertation Defense Committee (Nitin Govil and
Priya Jaikumar), and my Advisory Committee members (Steve Anderson and Ellen Seiter) for
their invaluable and continued support academically, professionally, and creatively.
Thanks to my “unofficial” committee (Kathy LaRocco, Mike LaRocco, Oscar LaRocco, and
especially Lindsey Brashler), whose invisible labor made this dissertation possible.
Finally, thanks to my cohort buddies – Anirban, Isaac, Maria, and Sonia – for keeping me inspired.
Table of Contents
Introduction – Do What You Can’t
Section 1 – Pre-Roll: Digital Cinema and Production-Based Theory
Section 2 – Video Camera Image Quality
- The “Video Look”
- High Resolutions and False Revolutions
- The Intercuttable Loop
Section 3 – Video Camera Utility
- Digitizing Video
- From Tape to Drive
- The Smartphone and the Death of Videography
Post-Roll: Concluding Thoughts
Bibliography
Introduction – Do What You Can’t
In February 2017, Samsung debuted a commercial during the 89th Annual Academy
Awards. It was positioned in the prime advertising slot – nestled strategically in between Emma
Stone’s Best Actress win and the most anticipated award of the night, Best Picture. The ad’s
placement wasn’t just a marketing bid to reach the highest number of viewers – its content very
much played off the sudden juxtaposition with Hollywood’s biggest awards show. As the
commercial begins, the lights come up on the wet pavement of a parking lot. A man walks into
frame, dressed in a tux befitting the night’s festivities, and approaches a singular microphone,
not unlike the one viewers saw moments ago as Leonardo DiCaprio presented the award for Best
Actress. Those in the know, social media speaking, would recognize the man as Casey Neistat,
popular YouTube vlogger (video blogger). Others – myself included – would need an
introduction. As Neistat speaks, his monologue is intercut with images of young people using
their Samsung video cameras in a variety of activities. Filming a horror movie. Launching
camera drones. Shooting selfie video while applying makeup. Recording 360 VR footage of a
boy dropping Mentos in Coke. Catching a football while skydiving (no, really). The monologue
is worth presenting here in full.
Allow me to introduce the rest of us.
We’re the makers, the directors, and the creators of this generation.
We don't have big award shows, or huge budgets, or fancy
cameras. But what we do have are our phones.
And duct tape, and parking lots, and guts.
And we have ideas we need to share.
We know it's not the size of the production that matters, it's what
you make.
We don't create because we have to. We create because we love to.
And we've captured billions of moments, from different angles, for
different reasons, for millions of viewers.
But with one thing in common –
When we're told that we can't, we all have the same answer:
Watch me.
The final tagline reads: “Do What You Can’t.”
This advertisement’s marketing strategy, its iconography, and its presumed audience all
speak to the state of video cameras and videomaking practice in 2017. It showcases the camera
hardware itself and the many and varied ways in which it is used. It utilizes the cameras,
displaying the qualitative “look” of contemporary video images. It displays the diversity of its
users. It points to the ubiquity of video cameras and the act of videomaking. It calls attention to the
stakes of creating content, finding one’s voice, and broadcasting it through digital technology. It
celebrates the non-standard, the low-budget, the DIY, the amateur. It draws a line between
current videomaking practice and the media-making practices of old, and of the old. Most
importantly, it raises the hopes of a youth-driven grassroots media culture, standing apart from
Hollywood and all that its biggest night stands for… all to inadvertently (or perhaps not) dash
these hopes by incorporating the digital revolution into capital. It sells Samsung phones.
What strikes me most about this commercial is how strange and out of place it would
have been 20 or even 10 years prior; a more accurate tagline may have been “Do What You
Couldn’t Before.” Seeing the markers of contemporary videomaking practice put on display and
thusly celebrated serves as an effective point of reflection on the ways in which the act of
making a video and the tools used to do so have changed dramatically in the digital era. Most
visible is the camera itself. At the start of the digital video age, the camera was large. It was
bulky. It needed two hands. It looked like a camera. It was a camera, and specifically a camera.
The technology on display in the commercial looks alien by comparison – a thin sliver of metal
and plastic, lacking any buttons, any visible lensing, any visual or ergonomic resemblance to the
image-making technology of 20 years ago, and that’s to say nothing of the fact that the camera
can also be used to make calls and check the weather. Likewise for the spherical orb, its eye-like
lens capable of filming in 360°, attached to a tiny helicopter that can soar above the tree line at
the user’s command. In 2017, Samsung presents this video tech as if to say, “Tomorrow’s
imaging technology – today!” But to a user in 1997, these absolutely were the image-making
technologies of a seemingly distant tomorrow.
The development in the imagery that these cameras produce is no less startling. Video in
the 1990s had a certain look that was immediately identifiable as that of video – it was low
resolution, grainy, lacking in dynamic range, and its motion was uncannily smooth. To hold it up in
comparison with the industry-standard imaging format, 35mm film, was to observe striking
differences between the medium of the high-budget professional and the low-budget amateur. In
this ad, footage shot on 35mm film (that of Neistat’s monologue) is intercut with footage shot on
a consumer phone with no startling discrepancy. The phone’s images are quantitatively better
than those that were achievable even with an expensive, professional broadcast video camera in
the 1990s. This improvement in image quality fits in line with the ad’s running theme, and its
strategic placement during the Oscars broadcast – “Anything you can do we can do just as well.”
The practices and practitioners on display are notably different from those in the 1990s as
well. To view the camcorder commercials of a previous era is to see a lot of white dads: dads
filming vacations, dads filming sports games, dads filming the rest of a family. The video
camcorder, like the film camera before it, was frequently positioned in marketing (and in actual
practice) as a tool of the patriarchal head of household. In addition to the range of ethnicities and
a balanced gender usage on display in the Samsung ad, the act of videomaking is presented quite
literally as a practice of the young, the hip, the tech-savvy – “the directors, makers, and creators
of this generation.” Quite a pivot from the marketing strategies 20 years prior, but one that has
occurred in response to shifts in actual videomaking practice, especially as the cost of equipment
and media has gone down and cameras themselves have become easier and more convenient to
use. The ad’s central figure is himself a type that was essentially nonexistent in the analog era.
As a vlogger, Casey Neistat has racked up millions of views and, presumably, a comparable
amount of capital via his YouTube channel, presenting himself and his observations. Neistat’s
videos, most of which would fall into the “home video” mode, have found an audience on an
internet sharing site such that he is able to profit considerably and star in a national advertising
spot. Home videos are home entertainment. And, in the case of YouTube stars, home videos can
pay for homes.[1] In 1997, a home video user’s earning potential was mostly limited to winning
the big prize on America’s Funniest Home Videos.
Even aside from the commercial’s star, the practices on display in the ad are wide
ranging, from fiction moviemaking to archival home video, from the painstakingly planned and
elaborate to the casual and the quotidian. The act of videomaking can be a special occasion, but
it can also be a shot of oneself in the mirror. I can record myself playing the drums, to study and
improve my technique, or I can share the video with thousands of people, making money in the
process. The ad’s final line, “Watch me,” has several meanings in that regard. On one hand, it
serves as a braggadocious challenge to the Hollywood makers: “Watch me be successful. Watch
me steal your views. You don’t think I can? Just watch me.” At the same time, it speaks to the
practice of videomaking more broadly. As all forms of practice, from home video to indie film
become more public, “watch me” defines the state of contemporary life, not just videomaking.
Or, more accurately, videomaking and life have become inextricably linked. If Baby Boomers
[1] The top 10 YouTubers in 2017 all pulled in over $10 million each. John Lynch, “Meet the YouTube Millionaires,” Business Insider, December 8, 2017.
were considered the “Me Generation,” the current crop of millennials might be the “Watch Me
Generation.” That said, while the ad is clearly targeting and directly addressing the youth, the
intertwining of video with daily life is applicable for everyone, not just the selfie kids. The
ubiquity of the video camera has developed alongside the public-ing of everyday life and
interpersonal communication that is increasingly mediated by cameras and screens. “Kids these
days” old-timers may bemoan the apparent technologically-enabled vanity of millennials, but my
70-year-old mother is Facebook savvy, and she has the selfies to prove it.
Even considering the equipment, the practices, and the practitioners, the element of the
commercial that would perhaps seem the most out of place in the analog era is the core message
itself – Neistat’s confident celebration of amateurism, independence, and DIY content creation.
It is one trumpeted in countless other arenas – the video production and indie film trade press,
hosting sites like YouTube and Vimeo, film festival keynotes, and practitioners themselves. The
key word is democratization: of cameras, of moviemaking, of storytelling, of entertainment, of
cultural influence. The development of the technology is said to have sparked a revolution –
another key word – led by this generation of young creatives, to grasp the power from the hands
of big studio executives. As Neistat says in the ad, current practitioners might not have big
budgets or fancy cameras, but they do have phones and guts. They may have had guts before, but
the phones (Samsung phones, specifically) were surely what pushed them over the edge. So
watch out, Leo – you’re about to be upstaged by a kid dropping Mentos into a coke bottle. Or so
the narrative goes.
It’s hard to disagree with some of Neistat’s celebration. It’s true that people (young and
old) are shooting more video than they did before. It’s cheaper and easier to make an indie film,
and to make one that looks quite good. Home videos that were confined to the living room or the
closet (perhaps for good reason) are now public, and some of them are indeed quite funny and
entertaining, as reflected in their millions of views and likes upon likes upon likes. There’s no
doubt that something has been democratized, but the extent to which this so-called video
revolution is in any way liberating is something that this dissertation aims to question.
In a behind-the-scenes vlog episode that Neistat shot while making the commercial and
posted to YouTube, he explains his motivation for shooting the spot:
In the 2017 Academy Awards, they will be celebrating exactly this
many YouTubers: [Neistat holds up a handwritten sign that reads:
“ZERO”] … Can we make a commercial that celebrates what we
do, what we all do [Neistat gestures toward the video’s audience],
no matter how small you are, no matter what you make videos
about, could we make a commercial that celebrates this entire
community and play that commercial during the Oscars as if we’re
like on stage celebrating… this [Neistat gestures toward the
video’s audience].
Throughout the vlog, Neistat seems enamored with the size of the commercial production, putting
in contrasting terms what he would have done himself (shot the commercial in five minutes on
his phone) and what he calls a “proper television commercial.” As he later says, “this is what it
looks like when you do it their way … this is what it looks like when you do it the big way.” The
question, given the ad’s message, is why Neistat would want to “do it their way” at all. As one of
the episode’s commenters stated:
The commercial is badass and all, but why are you so hyped about
being on TV? You’ve preached nothing [but] TV being dead and
YouTube being the next media platform for YEARS.. The fact that
the Samsung ad is for ‘creators’ being broadcasted on TV is like
screaming; HEY, WE’RE RELEVANT TOO, GIVE US
ATTENTION!! We don’t need their approval, or their awards.
- Men Try Videos
Or:
Casey I respect what you do, and I think that you are great creator.
But if you are going to mainstream more and more, you don't have
to bring mainstream to us. We're here because we don't like
mainstream.
- Shabby Monkey
Or:
This was a great commercial! BUT, the irony of this is that you
could have done the same quality shoot with a cell phone and a
person standing there with a $25 high-intensity LED camera light.
:)
- Shooting Star Cinema
Or, more to the point:
I wasted 90seconds of my life watching this corporate propaganda
- Kyle Basher
On one hand, Neistat is a YouTube vlogger given national airtime in the same way that an
established star might; it certainly speaks to the magnitude of his own DIY star status. Yet, this is
not a commercial he himself directed or produced. It is a massive production with a massive
budget, shot by the cinematographer from Fight Club and broadcast during one of the most
expensive commercial timeslots of the year. It may be a call for revolution, but that rallying cry
is secondary to, or in service of, trying to sell Samsung products. Casey Neistat made himself a
star, but YouTube hosted his videos, Samsung made him a spokesperson, and ABC put him on
television. He was probably cheaper than Leo, too.
To return to the commercial’s tagline, “Do What You Can’t,” the message seems to
suggest that videomakers can do that which, 20 years ago, may have seemed impossible. “I can’t
make a movie! I can’t be a celebrity! I can’t catch a football while skydiving!” Samsung
responds, “Do What You Can’t.” But with Casey Neistat, the phrase rings differently. That
which he is doing is decidedly not something the average YouTube vlogger can do, primarily
because of the capital involved. Thanks to Samsung, he is doing what they can’t – appearing on
national television in an extremely expensive commercial. “Their way” is still their way.
Neistat’s celebrity is largely that of his own making and of the nature of that which he is
celebrating in the ad, but the repercussions of DIY success and the dreams thereof for his
community of fellow content creators, as well as more mundane efforts of videomakers around
the world, are in service of more than just that community of makers. Whether it is Casey Neistat
vlogging for millions or my mom posting Thanksgiving videos, changing videomaking practices
remain fixed within the larger media and cultural power structures already in place. “The video
revolution. Sponsored by Samsung.”
Video Camera Technology in the Digital Age: Industry Standards and the Culture of
Videography
In the twenty years since digital video tape cameras hit the marketplace, video has
changed tremendously as a shooting format. From the high-end professional digital cinema
camera to the low-end consumer model, video camera technology has rapidly expanded from the
world of live and low-budget television, surveillance, and home movies to replace photochemical
film as the dominant motion picture shooting format (while also remaining the dominant cat video
shooting format). The act of videography itself has become ubiquitous. The vast majority of the
country’s adult population (and much of the youth population) carries a video camera on their
person at all times. Whether or not the revolution will be televised, it will likely be shot on a
smartphone and posted to Facebook.
The switch from analog imaging technology to digital video cameras and the
development thereof across all sectors of the media-making population, from feature film to
home video, is one of the most significant technical transitions in moving image history, with
far-reaching financial, industrial, theoretical, cultural, and aesthetic repercussions. Despite its
critical substance, this transition and its practical implications remain largely underdiscussed
within media studies. To that end, my dissertation provides an historical investigation of the
development of video camera technology in the digital era, from its initial popularization to its
rise to industry standardization, and one that is guided and expanded by theoretical discussion,
formal analysis, studies of production culture, and practical experimentation. The central aim of
my study is to track the evolution of video camera technology from the mid-1990s to the present
across diverse communities of use and examine that technology’s reciprocal relationship with
culture – how the development of video camera technology has affected artistic, industrial, and
communicative practice and, in turn, how society and culture have shaped the imaging
technology that they have produced. It is, in equal parts, a study of a changing technology and a
study of the practices that make use of it, from the seemingly innocuous to the ruthlessly
capitalistic to the apparently revolutionary. As a means of creating images, videography has
always retained the potential to be leveraged for cultural influence, large and small. This
dissertation addresses how that potential shifted as videography’s content became digital. A
technological history of the video camera and the practice of videography is a timely and critical
intervention in the study of media, as the use of the video camera is ceasing to be strictly a
specialized, compartmentalized, readily identifiable activity, and is equally becoming a
normalized, non-act of interpersonal communication. Having access to a video camera, being
able to manipulate its images, and the potential to share and network its content are no longer
standards for participating in a unique archival or artistic or activist or profitable practice, but for
participation in modern society.
In addition to serving as a history of a medium and a media technology, this dissertation
serves more broadly within the field as a call for a methodological turn toward practice – and
what I call “practice-based theory.” Unusually among my academic peers, my
background prior to my PhD studies was in film and video production. As such, I do not shy
away from my affinity for mixing traditional academic research and practical experimentation.
This study of video cameras was born very directly from my work using them, and it stands as a
personal pushback against the oftentimes strict and even physically-manifested boundaries
within academia between the study of cinema and the creation of it.[2] The scholars are in this
building, and the makers in that one. The scholars use the library, the makers use the studio.
Scholars get their PhD, makers their MFA. As a scholar-practitioner, I have witnessed firsthand
that coupling theory and practice allows insight into cultures of production, reveals intentionally
hidden labor practices, cultivates a visceral understanding of aesthetics, and demystifies the
technologies that are the objects of my study. Even given these benefits, though, my call is not
necessarily that more or all scholars need to participate in media-making practice (though, in
moments of incensed insanity, I have been known to make that audacious claim), but rather that
the field make a more concerted effort to consider the act of making as fruitful academic terrain,
and treat media practices (and, in this dissertation’s case, the tools of that practice) as scholarly
objects in and of themselves, and not simply means to an end. Just as the proliferation of screens
and screen content led to a call within the field for an expansion of “cinema” studies to include a
broader range of media, I am similarly signaling a need to shift perspective in light of the
proliferation of cameras.
My interest in finding the theory in practice is not relevant strictly for its novelty. Indeed,
some of the first and most-read theorists were practitioners and wrote about practice in great
[2] Clive Myer’s introduction to his edited collection (Critical Cinema: Beyond the Theory of Practice) speaks to this division in greater detail, as do many of the essays within.
detail, even if the modern-day Vertovs and Eisensteins are the exceptions that prove the rule. On
a wider scale, production studies has, over the last decade, effectively turned attention to
professional practices that were once largely invisible both culturally and within film studies and,
in the process, brought questions of political economy that were once far afield into frame. While
that field is broad and nebulous enough that this dissertation likely sits within it, my own work
addresses issues of media industries less directly than many production scholars, and instead as
the result of engagements with what John Caldwell calls “deep texts”[3] – the artifacts of
production, like cameras and their manuals – and those which Vicki Mayer cites as being a
necessary and underutilized access point.[4]
My tendency toward the use of the term “practice”
rather than “production” further separates my work from other production-centered studies. It
suggests equal attention to the act of using tools as to the repercussions of their use, but also to
more casual, personal, intimate, haptic, and phenomenological elements of what it means to
“produce.” While all acts of videomaking practice are situated within systems of capital, the use
of a camera as a site of meaning-making varies greatly along with the degree of its situation,
whether one is a union DP and or a stay-at-home dad. Studying the shared technology and the
nature of one’s relationship with that technology across these varied videographic practices
reveals not just patterns of use, but also patterns of capital and power at different levels of
production.
Even if my interest in finding the shifting meaning in the act of videography is in itself
novel, the greater relevance of a methodological turn to practice lies in its timeliness. At the
[3] John Caldwell, Production Culture: Industrial Reflexivity and Critical Practice in Film and Television (Durham: Duke University Press, 2008), 26.
[4] Vicki Mayer, Below the Line: Producers and Production Studies in the New Television Economy (Durham: Duke University Press, 2010), 180.
institutional level, scholars within Cinema and Media Studies (and the humanities more broadly)
are increasingly feeling an administrative push toward the “practical” – whether it be in the form
of teaching media production classes, new forms of digital scholarship, or preparing students for
a workplace that is increasingly reliant on a complex digital skillset. The discussion of the
“existential crisis of the humanities” is itself not limited to academia, but has a sounding board in
national magazines’ and news outlets’ hand-wringing op-eds, wondering what might or must be
done to rediscover or redefine their value in a world that demands “results.”[5] While I understand
and firmly support the desire on the part of humanities scholars to push back against the
oxymoronic need for practical and economic consequence of scholarship (itself a clear indicator
of marketplace-driven and “relevance”-driven desires of the neoliberal university), I conversely
view the higher-level push toward practice to be an indicator of a long-avoided blindspot within
our discipline, revealed all the more clearly by the current state of media usage. More of our
students are media-makers. More of us are media-makers. If the university wants us to be more
practical, then let us find theory in practice.
In this dissertation, the study of practice takes the form of an intervention into the
scholarship on digital cinema by way of the tools of videography. While an analysis of the shift
from analog to digital technologies is, of course, not the only arena in which practice-based
theory can be applied, it is one in which the benefits of such a methodology are abundantly clear,
and currently the most necessary. A media scholar would be hard-pressed to find another
material object as thoroughly theorized as the camera, yet whose practical operation is so largely
taken for granted. Digital cinema, a broad and ambiguous term in itself, has been the focus of
[5] A cursory Google search brings up a long list of such pieces across The New York Times, The Atlantic, The Washington Post, The Guardian, The New Republic, The Chronicle of Higher Education, Inside Higher Ed, etc.
theoretical media studies debate for over two decades – a debate that has managed to carry on
despite routinely ignoring the practical application of digital technologies. Discussions of the
loss of indexicality, the authenticity of the real, the truth claim of the image, and dystopian fears
of simulation have dominated the field to the point of obfuscating many of the very real
implications of digitality. The surprise and dismay with which many scholars faced the
administrative push for practicality with the proliferation of new digital technologies is but one
fitting result of this obfuscation. By addressing the digital video camera in less abstract, less
dys/utopian terms, this dissertation is a corrective to the tendency to treat digital video as a
metaphor rather than as a technological object, and a demonstration that the study of the
theoretical and the practical need not be mutually exclusive. When the word “revolutionary” is
applied to digital technologies, it is so not just to mark that our understanding of reality and
connectivity has changed, but in the heaviest sense of the word – the use of digital video cameras
presents the potential for real exchanges of cultural power, and this element of the digital debate
has been missing for some time.[6]
[6] Missing comparatively, though not entirely absent. With respect to video cameras specifically,
there have been several scholars who have written from a practice-based theory perspective
(without outwardly signaling as such), with professional digital cinematography the most
common entry point. Jean-Pierre Geuens’s many articles are exemplary of the methodology,
across topics ranging from video assist to editing, though his digital text Digital Filemaking is
perhaps the best example. Terry Flaxton’s work discusses digital cinematography with a
technician’s level of detail and a thoroughness rarely seen in analyses of cinema technology.
There are many excellent articles on the practice of cinematography (see: Ganz and Khatib,
Chris Petit, Gerald Sim in my bibliography), though some of the most novel and interesting
perspectives come (albeit with less explicit theory) from the trade press.
Throughout this study, I raise and revisit many theoretical questions not dissimilar from
those that have been raised when digital technologies were “new media,” though I look for
answers in different places.
Q: “What happens to the truth claim of the image when images are no longer indexical?”
One must consider not just the nature of the image, but the proliferation of the camera.
Q: “Why does the digital image no longer convey duration?”
One must consider that video is no longer bound to real-time application.
Q: “Why does the video image feel different than the analog?”
One must consider how quantitative image standards serve as semiotic codes.
Q: “What is cinema now?”
One must consider what the cinematographer thinks.
The goal is not to eschew the hypothetical, the theoretical, the philosophical, or even the navel
gazing, but to find answers in the use of the technologies rather than the notion of them – as
functional artifacts rather than theoretical models.
To assume that practice is without theory is a fallacy in itself, as John Caldwell rightly
identifies: “Its makers, technologies, and networks of practitioners reveal an industrial culture in
which—far from always reducing meaning to self-evident matters of common sense—complex
critical and theoretical ideas churn through even mundane industrial matters.”[7] Whether in
professional settings or amateur ones, these alternate theoretical frameworks are generally
manifested less in open discourse than embedded in texts, habits, traditions, superstitions,
assumptions, social norms, laws, anxieties, and good (or bad) taste. Or, in the case of this study,
technologies and the use thereof. Engaging with such embedded theories is especially important
not simply because they have traditionally been underdiscussed or dismissed, but because they
offer a much-needed framework with which to approach the sudden proliferation of cameras and
[7] Caldwell, Production Culture, 342-343.
of videography itself – practices that may be opaque to more traditional, text-based
methodologies.
In the following sections, I discuss the scope and stakes of my project, its place within the
digital debate, and the application of my methods by breaking down the central elements of my
dissertation’s title and their significance to the overall work.
“Video Camera Technology”
Much has been written in the field of media studies about “video,” broadly defined, but
shortcomings are apparent in such an ambiguous approach. In such cases, the usage of the term
“video” as shorthand can refer to a manner of image production (e.g. shooting with a video
camera), a storage format (e.g. recording on video tape), a method of transmission (e.g.
broadcasting a classical Hollywood film using video technology), a production practice (e.g.
video production vs. television production), a genre or mode (e.g. music video [which is often
shot on film…]), a domestic practice (e.g. home video), etc. My study examines the practice of
videomaking by focusing on video cameras – the facet of “video” that has been most taken for
granted in academic literature and thus least grappled with. It is, as such, a study that finds its
grounding point not in a medium per se (to the extent that video can be considered a medium)
but instead in the artifacts of image creation. It is akin to a study of carpentry and its wide
manner of applications through an analysis of the carpenter’s tools. It is easy to abstract (or
indeed ignore entirely) the practice of videography (or carpentry) when it is viewed strictly
through finished products. A study of the tools of practice renders that practice visible in a more
granular way, and allows access to cultures of production that have likewise remained hidden.
Studies of media industry have demonstrated the benefit of drawing attention to invisible labor
on an industry-level scale. This study of video cameras addresses invisible labor (professional
and amateur) on a more individual level – as a relationship with a tool – but, in doing so, attempts to
get at something more universal and less geographically local than a study of industry.[8]

Tracing the evolution of a specific piece of technology places my work within the larger
fields of media history, media archeology, and the history of technology. Considering the
traditional pitfalls in those approaches, my work hopes to eschew a technological determinist
approach (which places undue agency in the technology itself), an innovation-based approach
(which favors progress at the expense of regression or alternative usage), an innovator-specific
approach (which weights efforts of engineers and designers more heavily than practitioners), and
also a strictly social constructivist approach (which ignores that technology is often created to
political ends and may have agential force). Instead, by studying the development of cameras in
relation to the practice of videography, I am adopting a usage-based approach to the history of
technology, as outlined by historian David Edgerton – examining changes in the technology itself
through the ways in which these technologies are incorporated into functional usage.[9] A
usage-based approach hinges on the reciprocal nature of technological development – cameras are
designed by engineers, used (and mis-used, or not used) by practitioners, then re-engineered. In
doing so, I account for the ways in which camera technology can affect practice and perception,
how usage can undermine and redefine design, and how both practice and design can alter the
function and symbolic nature of technologies. Thus, my historical methodology mirrors my
theoretical one, in the desire to move beyond abstraction and view imaging devices as lived
objects embedded within cultural systems. Doing so accesses technological artifacts beyond their
capacity as machines, but also (in Caldwell’s terms), “as material forms of critical and aesthetic
craft knowledge (in machine and user interface design); as the nexus of dynamic, social actor-
network series (via the teams that use and animate these technologies); and as cultural and social
expressions (in terms of the way they are used in work and public spheres, discussed among
practitioners, and spun in trade discourses).”[10]

[8] Patrick Vonderau and John Caldwell problematize the geographic specificities of
industry-based production studies in an interview in Vonderau’s Behind the Screen: Inside
European Production Cultures (New York: Palgrave Macmillan, 2013), 4.

[9] David Edgerton, The Shock of the Old: Technology and Global History Since 1900 (London:
Profile Books, 2006), xii-xv.
To study the specifics of video cameras is to study technological design choices, affecting
both images and physicality of use. If “video” is a term used ambiguously, a study of cameras is
one grounded in technical specifications. For the purposes of my study, the camera isn’t just a
cog in a theoretical model or an object subsumed into a larger act of moviemaking – it is a tactile
artifact, with a design that affects how it can and cannot be used. It is an electronic artifact, the
technical capacity of which affects the images it produces. It is a financial artifact that has a cost.
It is a labor artifact that has a skill requirement in order to be built or to be used. The Canon 5D
Mark IV and the Arri Alexa are both video cameras, and they differ in many of the
aforementioned characteristics, but are similar in others. These points of convergence and
divergence in the cameras’ capabilities can be utilized by videomakers across the spectrum of
use to generate financial and cultural capital, to upset or maintain the stability of the
contemporary hierarchy of image-making regimes, and to uplift or restrict specific cultures’
voices and perspectives. If, as many authors and practitioners have hyperbolically suggested, the
current state of videomaking is one of a “video revolution,” then the video camera is the primary
weapon, of both the revolutionary and the establishment.
[10] Caldwell, Production Culture, 350.
“in the Digital Age”
My study’s temporal boundaries run from 1995 to the present. In a broader sense, the
“digital age” is decades older than that, and even within popular usage and parlance, personal
computers, video game systems, compact disks, and the like have been engrained in
contemporary culture since well before the 90s. Digital video itself had been used commercially
in limited arenas since the mid-80s. The year 1995 is significant in that it marked the release of
the first digital camcorders, wherein digital video technology became widely available in all of
its varied spheres of production. It was the start of the age in which nearly all moving image-
making became digitized. As this is a study not just of camera technology but of videomaking
practice, the mid-90s were when the digital became practical.
The choice to study digital video specifically (as opposed to the decades of analog video
before it) is significant for several reasons. The dramatic shift in real-world image recording
from indexical analog to digital transcoding offers fertile theoretical and philosophical ground, as
many authors have demonstrated.[11] Much less analyzed are the ways in which the shift to digital
had a profound impact on the manner in which video could be used, as a tool of art-making,
industry, and the wider cultural archive. Analog video camcorders had existed in the market and
were readily available for well over a decade before the release of digital video cameras (and
small-gauge photochemical film cameras decades before them), but the coupling of the video
camera with the computer was paradigm shifting, as it allowed a level of control over the content
that did not exist in the analog era.

[11] The catalog of literature on this topic is quite rich, and I will discuss some key texts in this
dissertation’s first section. See: Belton, Carroll, Elsaesser, Gunning, Mangolte, Manovich,
Marks, Rodowick, Rosen in my bibliography. More thoroughly, Marc Furstenau’s dissertation,
“Cinema, Language, Reality: Digitization and the Challenge to Film Theory” (2003) outlines this
debate in great detail. More succinctly, Tom Gunning effectively summarizes the general
anxieties: “A great deal of the discussion of the digital revolution has involved its effect of the
truth claim of photography, either from a paranoid position (photographs will be manipulated to
serve as evidence of things which do not exist thereby manipulating the population to believe in
things that do not exist), or from what we might call a schizophrenic position (celebrating the
release of photographic images from claims of truth, issuing in a world presumably of universal
doubt and play, allowing us to cavort endlessly in the veils of Maya).” “What’s the Point of an
Index? Or, Faking Photographs” in Still/Moving: Between Cinema and Photography (Durham:
Duke University Press, 2008), 41-42.
This newfound level of control was activated in a number of ways. On one hand was a
greater opportunity for manipulating digital images themselves via the use of the camera.[12]
The computerization of the image opened up the potential for changes in camera design and
mechanical functionality, the application of complex computer-generated effects and image
manipulation tools, and a novel potential for casual aesthetic adjustments for even the most
amateur of videomakers. Increased control over the look of the image was not a strictly aesthetic
feature, as increased production values for lower costs offered the chance for higher profits and
cultural status striving, as low-budget moviemakers could compete with those in the highest
sphere of big-budget production (who also began shooting on digital video). If video was once
viewed amongst communities of practice to be analog film’s inferior, the digital era was the one
in which the ugly duckling became a beautiful swan, and also murdered all the ducks.
Equally important were changes in the ability to control the products of one’s camera. As
the video camera became increasingly networked into the larger digital matrix, the potential for
content sharing and communication (from the interpersonal to the global) expanded far beyond
the traditional outlets of the analog era. For the home video user, the video camera became not
just an archival tool, but a tool for creating edited objects, and for broadcasting them to an
audience. Communication became increasingly mediated by cameras and screens, both in the
form of direct communication through software like Skype or FaceTime, and indirectly through
multimedia exchange via social media posting. Cameras were used to build connections and
communities, creating a “digital local” that could counteract the inadequacies of the geographic
one. The camera became a tool of teaching and learning, of profiting and proselytizing, of self-
reflection and self-expression and self-creation.[13] The reverberations of this expansion were felt
all the way down the media-making hierarchy, as YouTube became a forum for Nicki Minaj
videos and amateur makeup tutorials alike. Throughout the digital era, the video image shifted in
look and feel, the cameras themselves changed notably, the technology became cheaper, and
spheres of use expanded greatly. The switch to digital was significant technologically, but the
varying degrees to which practitioners were able to leverage the potentials of digitality to
practical ends made the transition significant aesthetically, culturally, industrially, and
theoretically.

[12] Image manipulation was certainly a tool of video users at varied sites in the analog era, from
on-screen graphics in a local TV studio to experimental video artists creating video feedback.
The ability to control the image in the digital era expanded vastly beyond these very specific and
capital-enabled uses. Digitization allowed image control to a vastly wider group and with much
greater precision. Further, image control was newly activated in the digital era through the
potential for much smaller cameras (like the GoPro and the smartphone).

[13] Again, many of the practices of video sharing, direct communication, and reflective imaging
had precedents in the analog era. The digital video camera was a “new media” technology not
because it was itself new, but because it operated under a different, computer-based logic than its
analog predecessor. As cameras proliferated, many of the once-unique and specialized practices
became commonplace in ways that were logistically or financially prohibitive in the analog era.

“Industry Standards”

For the purposes of this study, the phrase “industry standards” refers both to standards of
technology (e.g. a camera’s technical specifications) and to standards of practice, processes,
and attitudes. The combination of these intertwined and interdependent standards represents the
ways in which videography was normalized – the technical and cultural benchmarks. Both
technological engineering and practical videomaking are done with these standards of function
and normalcy very much in mind, oftentimes conforming to them and occasionally redefining
them – making them not work, or work differently. Regarding standards of technology,
considering that much of the discussion of digital video thus far has been theoretical, focusing on
technical standards grounds my project in the quantitative. One can hypothesize what it means to
shoot on video instead of film, or how the experience of digital cinema exhibition differs from
that of celluloid (e.g. “digital cinema just seems ‘colder’ to me”) but drawing attention to
standards of technology can reveal the nature of the difference quite clearly. That is not to
devalue qualitative assessment – quite the opposite. My goal is to draw attention to the fact that
the qualitative differences in media, both analog and digital, have their origins in quantitative
specifications and that these elements exist in feedback loops. The ability for practitioners to
manipulate these quantitative standards – either through formal training, personal experience,
outsourcing, or through the potential engineered and integrated into the technology itself – yields
the ability to manipulate the qualitative effects, and certain qualitative achievements shape future
quantitative technical specifications.
The same is true regarding standards of practice. While much of the way in which the
video image is captured, displayed, and read by an audience is dependent on the technology at
hand, it is equally impacted by the hands themselves: training and experience; rules and
regulations; habits, traditions, and superstitions; budgets and financing; the norms of conduct.
Because many of these differences in film- and videomaking practice are related to the
increasingly fluid binary of amateur and professional, the changing nature of that distinction in
the digital era will be of primary concern for my study. While originally much of the
professional’s status lay in the quantitative side of industry standards (e.g. in the camera itself), as
home video and professional formats began to converge in their image quality, the distinction
between amateur and professional migrated more heavily into the act of production rather than in
the instruments of production themselves.
Standards exist to ensure things work properly, and this is true for both technical and
practical standards, though in different ways. With regard to technological specifications,
standards ensure that the technical processes of video occur without error, as with standardized
aspect ratios and lines of resolution enabling universal broadcast. Practical standards perform a
similar function, but culturally. As one example, in the 1990s, video’s association with low-
budget production marked it as being culturally inferior and, as such, it was “below”
Hollywood’s standards for feature production despite the fact that it was a fully functional
storytelling medium (see: soap operas). Whether designed intentionally or developed through
evolving practices, industry standards exist as methods of demarcating and separating groups of
producers based on legitimacy. Standards are a method of establishing authority and culturally
policing practice. These standards – and the ability to challenge them – shifted with digitality as
individual producers themselves gained more control over their images and where they could be
seen. Further, as the act of using a video camera became less specialized and fell increasingly
under the domain of interpersonal communication, the ability to manipulate standards – both
technical and practical – became matters of general videographic literacy as opposed to
profession, art, or hobby. Knowledge of videography standards is a developing social
requirement. Ask anyone who has been chastised for shooting his smartphone video vertically.
The word “industry” is not without its own gravity. Certainly the standards of technology
and practice within the film industry are different from those of home video, though the latter is
no doubt still an industry. To that end, one of the largest practical developments alongside that of
the video camera has been the incorporation of once-secular video practice into the market
(monetization via YouTube being but one example). As both technical and practical standards
are linked with professionalism, standardization is often linked with capital – standards are set by
those in power, and reflect the ways in which practice is tied to the market. Changes in the digital
imaging technology upset that power dynamic specifically because of the relationship between
standards and labor – both at the individual level (monetizing home videos or viral success) and
at the professional level (new on-set positions, union shake-ups). Again, much of the video
camera’s revolutionary potential lies in the practitioner’s ability to conform to or challenge
standard practices for financial or cultural gain.
“The Culture of Videography”
In examining the use of the technology across the last twenty years, my work constantly
asks the question, “Who is using video cameras? How and why?” A manageable answer to those
questions threatens to be elusive, considering responses include both “George Lucas, on Star
Wars” and also “the plumber that rooted out his drain.” Increasingly throughout the digital era,
the act of videomaking has become commonplace in nearly every arena of industry and
communication. Teachers use video cameras in the classroom. College students Skype with their
parents. Without video, a website looks archaic. Video or it didn’t happen. “Ability to teach
video production a plus.” PewDiePie has 54 million YouTube subscribers. Cops wear body
cameras. Civilians shoot video of cops. Cops delete the footage. The act of surveillance is
unending – of us, of them, of each other. The act of shooting video is different now than it was in
1995 – it is performed differently. Its actors are different. It occurs in different arenas. The
devices are used to different ends. The image is read differently. The answer to my question in
2018 is, “Everyone, for everything, all the time.” As Ganz and Khatib summarize, “There is
nowhere that is not accessible to the digital camera, whether inside the body, or outside the
Earth’s atmosphere. Simply put, there are more cameras filming more people in more places than
ever before.”[14]
The fact that video cameras and the act of videomaking have become ubiquitous in under
a decade is an indication of the need for a study of the technology and the social needs that drove
it to be so. While that same ubiquity makes analysis of the practice difficult, it is the same
challenge faced by the engineers making the hardware and software; their central question is not
dissimilar from mine, and the depth and variety of camera developments over the last 20 years
have been guided by the findings of their research. In looking at the cameras themselves, the way
in which they are marketed, their categorization in the trade press, and my own observations in
practice, I have divided the equipment and the practice of videomaking into four overlapping
spheres of production.
1. Home video
• Archival domestic video, vacation travelogues, YouTube content, general
videographic messing about
2. Low-budget production
• Small independent projects, local TV, wedding and event videos, local
commercials
3. Big-budget production
• Studio features, large independent features, television, music video,
commercials
4. Industrial applications
• CCTV, body cameras, video for analysis, “functional” video[15]

[14] Adam Ganz and Lina Khatib, “Digital Cinema: The Transformation of Film Practice and
Aesthetics,” New Cinemas: Journal of Contemporary Film 4, no. 1 (May 2006): 25.

[15] Of these spheres of production, I will be speaking to industrial video the least. As a study of
videography, the suffix “-graphy” denotes writing. In many industrial and functional contexts,
the act of using a camera is less one of video writing and more of video watching, as the camera
serves to extend one’s vision. This dissertation is more concerned with the creation of
videographic objects (which can be circulated as commodities) but, as I discuss in section three,
the act of using a camera has recently become less strictly one of writing, with the camera’s use
in the realm of direct, interpersonal communication with the proliferation of the smartphone.

Practice within each of these spheres is anything but homogeneous, but they tend toward
various points of separation from each other: their equipment, their content, their production
value, their relation to capital, their teleology. While my study is concerned with cultures of
production, the extent and depth to which I speak to each of the above-listed groupings are limited
specifically to the technological. I am not producing an ethnography (let alone four), nor am I
attempting the unenviable task of characterizing “home video” producers in an age in which
most homes have multiple video cameras and videographers, nor am I assessing the cultural
significance of “independent production” in general. Rather than treating these groups as
speculative monoliths, my goal is to look to each community’s “deep texts” as cultural artifacts,
grounding my discussion in the communities’ own self-representations and self-assessments.
While cameras themselves are my primary points of inquiry, my analysis of practice draws on
the secondary communicative and representational texts within these communities of use –
camera content, online discussion boards, the trade press of various levels of esteem (American
Cinematographer but also DV Magazine), guidebooks written by and for video users, camera
advertisements (both from manufacturers and from vendors), instruction manuals, online video
comments, etc. In putting these communities’ artifacts in dialogue with one another, the goal is
to gain a better understanding of the centrality of the technology and the practice within each
sphere and in relation to the others. While technology can be symbolically and actually imbued
with political power, it is through its use within these communities that the relative power
structures are manifested and observed.
Turning the digital debate toward the cultural usage of cameras gets at questions of
politics and authority that have largely been absent from the strictly hypothetical academic
approaches. Such an omission is clearest when turning to trade press texts, where the key
concepts of “democratization” and “revolution” are addressed most regularly and in the most
optimistic terms, without much concern for the current iteration of the “death of cinema.” In
these practitioners’ assessments, it is specifically in the technology and its use in relation to other
spheres of production wherein the revolutionary potential lies – low-budget projects passing
cultural gatekeepers, YouTube stardom, new and independent and varied representations of
marginalized groups, home video users profiting from home movies, big-budget studios re-
shuffling their labor pool, independent producers competing with media goliaths, etc. In all of
these narratives, the practice of videomaking is not just a defining element that ties each of these
various production cultures together, but is the act through which producers in a sphere of lower
cultural standing might achieve the goals of another – profit, cultural influence, and visibility.
Structure and Arguments
My dissertation threads several arguments across its three sections, relating both to the
development of the technology and its revolutionary potential, and to the practices that make use
of it. In the first short section, (“Pre-roll: Digital Cinema and Production-Based Theory”) I
outline the nature of my intervention in the digital cinema debate and the stakes of that
intervention by demonstrating the practical elements that the debate has long been lacking and
situating my work to fill that absence, as a middle point between its opposing poles: a fixation on
the indexical and, conversely, a medium eliminativism. I apply my practice-based theoretical
framework to digital video technology, demonstrating the methods, turning the critical focus of
the digital debate towards practical application by speaking to digital video as a production
medium, and further elaborating on the benefits of such a turn more broadly. I suggest that as a
medium, video possesses several medium-specific properties, but not all are manifested in
medium-specific practices. By drawing attention to the ones that are (e.g. digital content’s
separation from real time transmission), I demonstrate that the significance of the shift to digital
video lies not just in the concept of digitality, but in how new digital content storage could be
manipulated by videographic producers.
Across the remaining two sections, I argue that digital video’s technological development
trajectory has been driven largely by an understanding of industry standards set by two other
forms of media technology – photochemical film and the computer. The former has guided the
development of video’s image quality, the latter the video camera’s utility. Utility, in this
instance, refers to both the camera’s ease of use and also to its usefulness as a computer-based
imaging system. I suggest that these two areas – image quality and utility – have been the ones in
which the video camera has seen the most significant development, as evidenced by the technical
specifications of the cameras themselves and also by discussions in the trade press, communities
of practice, and promotional advertisements. This is not to suggest that either one developed
independently of the other – surely this was not the case, as often it was specifically because a
camera became more convenient for the general public to use that its advances in image quality
were made significant. With that caveat in mind, I have arranged the remainder of my study into
two large, overarching sections, each divided into smaller sub-sections, such that the latter
section on utility builds upon the former section on image quality.
In the second section (“Video Camera Image Quality”) I argue that the engineering of the
video image has been carried out in relation to the industry standards set by the use of
photochemical film – a set of standards that I call the “cinematic,” following the use of that term
within communities of videomaking practice. Functioning as a legible image texture, the
“cinematic” standards are a set of qualitative readings based on the image’s quantitative
specifications (resolution, aspect ratio, frame rate, etc.). Video in the analog and early digital era
exhibited a “video look,” coded as being both nonfictional and of low cultural value, resulting in
a technical hierarchy dividing more expensive photochemical images and cheaper electronic
ones. By utilizing new digital tools to manipulate the video image to fall more in line with the
cinematic standard, tech-savvy video producers were able to create “semiotic decoys,” elevating
the look of their videos to match the cinematic standard and, in doing so, generating the potential
to surpass cultural gatekeepers and grow a culture of low-budget independent cinema. The
revolutionary narrative that followed in the trade press was one of “giant slaying,” in which low-
budget producers could compete with and even surpass the capital-driven, culturally significant,
big-budget media influencers.
As camera designers in the digital era pushed the digital image closer to the cinematic
standard in the form of high definition resolution, widescreen, and slower frame rates, the
cinematic ceased to be a marker of culturally high quality and was instead subsumed by all
videographic images. Big-budget media producers utilized digital video not just to enhance the
cultural capital of their images or revolutionize creative application, but more so to strengthen
the stability of their production and distribution networks, further separating their content, labor,
and influence from the access of low-budget producers. Across the amateur sphere, cinematic
images proliferated to the point of becoming the texture of our quotidian selves, relegating the
video image to a status of homogeneity and, thus, diminishing the significance and even the
existence of the “cinematic” as a qualitative, legible standard.
Across this section, I argue that the initial revolutionary potential of digital image
manipulation was tempered through the democratization of the cinematic look, shifting markers
of production quality to areas beyond image texture, and to ones more tied directly to capital and
the complex manipulation of data. Turning the discussion of the digital image to its usage across
the various spheres of production reveals the manner in which theories of practice were rooted in
mythological and ideological frameworks that ultimately served to reinforce the preexisting
hierarchy of image-making.
In the third section (“Video Camera Utility”), I argue that the development of the video
camera across the digital age occurred concurrently and in concert with advances in modern
computing, such that the camera itself underwent a process of computerization – not just in the
increasing technical complexity of its inner workings, but also in the camera’s more extensive
networking into existing computer systems, enabling digital postproduction and distribution in a
variety of arenas. Across all three spheres, digitality opened up new potential for practice in three
specific areas: transmission, communication, and mutability. The extent to which this
computerization was exploited differed across the various levels of production, as initially its
potential was tied specifically to technical expertise, which was generally directly proportional to
professional status. Evidenced in the early stages and increasingly so as the camera became
computerized, the ability to manipulate video as data, rather than strictly as an image, became a
significant dividing line amongst practices across the spheres of production.
At lower levels of production, digitality was less impactful until engineering advances
made the exploitation of digitality more accessible. The first of these was the removal of the tape
housing, which allowed for recording directly to hard drives and easier integration of footage
into computer systems. Interactions with cameras themselves changed, as they were no longer
tied to the real-time operation of the mechanical tape and entered a time-shifting world of
random access computing. For home video users, video became an edited object on a scale that
was logistically impossible in the analog era. The increased potential for the exploitation of
digital images crystallized on an even wider scale with the proliferation of the smartphone,
wherein the computerized-camera became a camera-ized computer. The video camera itself was
simultaneously linked both to a commonly carried, socially necessary object, and also to a wider
network of other such camera-ized computers. The ubiquity of these fused devices brought the
video camera from its place as a tool in the specialized practice of videography to a
commonplace tool of computerized communication. The extent to which videography has
become normalized results in the current state in which I hyperbolically suggest that
videography, as it has traditionally existed across the various spheres, has metaphorically died.
Across this section, I argue that the apparent democratization of moving image-making
exists in practice more as a self-perpetuating myth than a fully realized act of cultural revolution.
Turning the discussion of video’s digital existence to its usage as a tool of practice reveals the
extent to which the act of videography has increasingly become one of data gathering. The
ability to manage, secure, and manipulate video’s existence as data becomes a dividing line
among spheres of production, especially as image quality becomes homogenized. As the act of
everyday communication becomes one mediated by the video camera, the revolutionary potential
in exploiting video’s existence as data is tempered by the fact that that same data is further
exploited by those at the top of the hierarchy of image/data gathering.
In rethinking the digital debate from a place of practice, I call attention to these two
narratives of technical development and the utilization thereof, both of which were manifested in
the artifacts of videography but have escaped critical academic analysis, which tended to treat
video’s digitality as a theoretical construct rather than a medium-specific property to be
exploited in practice. Looking beyond the digital revolution as both a theoretical bombshell and
also as a utopian liberation, my own assessment finds middle ground – somewhere between film
theory’s dystopian “death of cinema” narrative and the optimism of the practitioners (Do What
You Can’t). At the ground level and across the various spheres of production, I suggest that the
digital revolution (surely still in progress) has been realized less as a direct uprising to slay the
“media giants” and more as a slow-burning growth in videographic literacy. The cultural
influencers of old are still very much in power and control the same channels that they always
did – seemingly not a very successful revolution. High-quality image-making has been
democratized in that it falls under the control of more communities of practice, but the liberating
power of this democratization is still held in check by established power structures that continue
to separate these same communities. For every low-budget filmmaker that can surpass a culture
gatekeeper, there are many more whose jack-of-all-trades labor is overstretched and underpaid
by production companies. For every big-budget filmmaker that can more easily realize his or her
immersive 3-D world, there are many more VFX workers put on inadequate post-production
contracts. For every officer-involved shooting that goes viral, there are many more private
citizens covered by the web of surveillance. But to knock revolution out of hand is to ignore the
trends in the video camera’s development, overlooked in scholarship and outlined in this
dissertation. The act of videography has spread beyond the traditional outlets and larger media
flows, into broader elements of communicative practice. Perhaps Samsung is right in suggesting
that its users are “doing what they can’t,” but in a far less dramatic way. It’s less the indie films
and more the Facebook videos. It’s less the skydiving football catches and more the Snapchats.
It’s less America’s Funniest Home Videos and more America’s pretty funny home videos.
Revolutionary potential lies less in the reversal of the singular flows – challenging Hollywood
and the major media companies on their turf – and more in the abundance of alternate networks
of videographic communication and community building. For the first time, media literacy really
means reading and writing. All the more reason for the need to turn toward practice. If there’s
revolutionary potential in the pen, it’s there in the cameras as well.
Section 1 – Pre-Roll: Digital Cinema and Production-Based Theory
When shooting video on tape, “pre-roll” comes at the head of the footage. Rather than
popping in a new cassette and recording the significant event immediately, standard practice
often calls for running off a bit of tape – filming the lens cap, bars and tone, or the camera
operator’s shoes. It is necessary for a variety of reasons depending on the type of production, but
the most universal one is due to the manner in which tape footage is used in editing. The very
beginning of the tape is often difficult to access, as the mechanical tape reels have to spin up to
speed and video signals have to be initialized. The images recorded at the start of the tape are
often noisy and unusable, confounding recorders analog and digital alike. In order to
mechanically situate and preserve the integrity of the footage, editors would often request “30
seconds of pre-roll,” like a sacrifice of media to appease the video gods. As far as writing is
concerned, this section is my pre-roll (though hopefully more engaging than gazing at a camera
operator’s shoes). Before plunging into the digital debate straightaway, I want to mechanically
situate my approach by pinpointing in greater detail what the ongoing scholarly debate has been
lacking.
Considering the scholarship on digital cinema from a practice-based perspective reveals
and challenges two of the debate’s limitations, both stemming from the use of term “cinema.”
The first has to do with the scope and the ambiguity of the concept itself. Discussions of the
ontology of “digital cinema” are based on the presumption that the ontology of “cinema” is
neatly worked out, and clearly it is anything but. To read early commentary on and
characterization of digital cinema from John Belton or Lev Manovich or Thomas Elsaesser is to
see a conflation of many topics. What exactly is “digital cinema”? Computer-generated imagery?
Digital animation, like Toy Story? Digital audio recorders? Digital cameras? Digital “video
village”? On-set video editing? Digital intermediates? Digital nonlinear editing? Digital color
phone viewing? Motion capture? Writing a script with Microsoft Word? If one considers digital
cinema to entail computer-enhanced (or –enabled) motion picture making, then the short answer
to all of the above is “yes”; at the time of writing this dissertation, essentially every element of a
movie is in some way complemented (or destroyed, depending on your perspective) by
computers. As a result, the writing on digital cinema seems compelled to touch on a large
combination of these elements, and the resulting analysis is as complex and mixed and,
occasionally, muddy as its very subject.
Such a combination of processes is not a problem in itself and, if anything, is a necessary
step in the characterization of a concept as multifaceted as “digital cinema.” Such
conceptualization, though, favors simplification over specification. That is to say, the more
practices are combined under the umbrella of digital cinema, the less concerned the debate is
with the actual practices and their cultural ramifications. One can certainly find similarities
between shooting with a digital camera and creating computer-generated imagery, but to do so at
the expense of delineating their differences is a fallacy – the professional laborers are different,
their unions are different (or non-existent), the hobbyists are different, the technology is
different, the relationship to the technology is different, the skillset is different, the training is
different, etc. The theoretical concepts derived from these differences are sacrificed for ones
derived from their similarity (i.e. their computerization), which can lead to broad generalizations
far removed from any practical application of digital technologies.
A second issue that arises from the use of the word “cinema” has to do with that term’s
sanctity, earned in part through the decades of theorization behind its own conceptual
construction. When analog video was ushered into its digital incarnation in the mid-1990s, no
one seemed particularly bothered. No theorists grieved the “death of video.” No one mourned for
the lack of authenticity in home videos. No one doubted the outcome of that year’s Super Bowl.
Digital videography wasn’t on most scholars’ radars (or vectorscopes). When cinema started to
become digital, though, it was a different story. The very notion of “digital cinema” or “digital
film” was seen by many scholars as an oxymoron, as the title of Dan Streible’s article on the
topic states quite directly.[16] “Digital cinema” was paradoxical not just in the literal sense
(material film vs. non-material computerized video) but in a more hypothetical one – the idea of
“cinema” was incompatible with digital technologies. Whether it was an issue of digital creation,
as with CGI, or digital capture, like the digital video camera, the fundamental theoretical issue at
stake was a shifting form of representation. Photochemical film, which preserved an indexical,
material bond with reality, was traded out for digital video’s symbolic system of digital data
storage – a process that threatened authenticity, the cinematic mode, and reality as we knew it.
Looking across the varied responses to the digitization of cinema, they span a wide
spectrum from fundamental, theory-breaking change to what Thomas Elsaesser calls “business as
usual.”[17] Another voice on the more forgiving end of that spectrum is Noël Carroll, whose push
away from arguments based in medium specificity toward a more-encompassing “moving
images” position points to the similarities among moving image formats, regardless of their
origin, storage format, or method of exhibition.[18]

[16] Dan Streible, “Moving Image History and the F-Word; or, ‘Digital Film’ Is an Oxymoron,” Film History 25, no. 1-2 (2013).
[17] Thomas Elsaesser, “Afterward – Digital Cinema and the Apparatus: Archaeologies, Epistemologies, Ontologies,” in Cinema and Technology: Cultures, Theories, Practices, ed. Bruce Bennett, Marc Furstenau, and Adrian MacKenzie (Houndmills: Palgrave/MacMillan, 2008), 227.

Carroll’s frequent collaborator, Murray Smith, pushes back somewhat against Carroll’s “medium eliminativism” but still recommends at least a “medium deflationism.”[19]
As media converge in production and exhibition, the once-key
separations between them become trivial. Tom Gunning, while not knocking the index out of
hand entirely, points to cinema’s iconic representation of movement as a far more engrossing and
theoretically fruitful element of its ontology than the knowledge of the indexical trace, in his
aptly named article, “What’s the Point of an Index?”[20]
For many of these theorists, the newness
of digital cinema is nothing new at all – the concept of “cinema” was continuously changing
across its many eras of style and technology, never one thing then and surely not now. Digital
cinema is a new cog in the works.
On the opposite end of the continuum are those like D.N. Rodowick, who view the break
with indexical recording as a point of essential separation between analog and digital media. In a
muddle of digital data, the dinosaurs in Jurassic Park are as “real” as the people – it’s all a
Matrix-like digital soup[21] (never mind that it was all a photochemical soup before). As Timothy
Binkley describes it, digital technology “disinherits photography from its legacy of truth and
severs the umbilical cord to the body of past reality.”[22] For many in this camp, if cinema still
exists in the digital age, it does so in a form that is ontologically different from that of the past,
when its basis lay in the material recording of reality. As Steven Shaviro puts it, “In digital
[18] Noël Carroll, Theorizing the Moving Image (Cambridge: Cambridge University Press, 1996), 141.
[19] Murray Smith, “My Dinner with Noël; or, Can We Forget the Medium?” Film Studies 8 (Summer 2006): 141.
[20] Gunning, “What’s the Point of an Index?” 23-40.
[21] David Norman Rodowick, The Virtual Life of Film (Cambridge: Harvard University Press, 2007), 122.
[22] Timothy Binkley, “Camera Fantasia: Computed Visions of Virtual Realities,” Millennium Film Journal 20/21 (Fall 1988/Winter 1989): 8.
photography and film, even the most mimetically faithful images are artificial and fictive. There
is no longer any ontological distinction between a ‘true’ image and a ‘false’ one.”[23] The
proverbial mic-drop in indexical fixation comes from Laura U. Marks, whose spectrum-unifying
theory admirably and unexpectedly brings digital cinema and particle physics under one roof,
speaking of digital cinema’s quantum indexicality.[24] At the subatomic level, the digital is no less
indexical than the analog. As much as I enjoy the novel approach, if we’re getting physicists
involved, we may have gone too far. At the very least, I’d say we’ve done the index to death.
Part of the explicit concern of theorists like Rodowick and others on his end of the
indexical spectrum regards not only how digital technology impacts the future of our relationship
to the moving image, but our ties to the past as well. As Rodowick rightly argues, much of film
theory is based heavily on the material basis of photochemical film – its indexical bond with
reality.[25] When we look to digital cinema, can we even apply that same theory? Are decades of
thinking rendered dated, useless, and quaint? Are those theorists, many of whom are still writing,
rendered quaint as well? Is “film” (emphasis on material) just one of many formats encompassed
in Carroll’s moving image theory? And is photochemical, reality-based recording just a blip in
the history of animation, as Lev Manovich hyperbolically suggests?[26] From an experiential
perspective, is the nostalgic space of the smoky theater, film clacking through the projector, light
shining through the frames, all a lost memory? Many of these rearview concerns are no doubt
part of the reason why digital anything is often discussed with either outright disdain or, at the
[23] Steven Shaviro, “Emotion Capture: Affect in Digital Film,” Projections 1, no. 2 (Winter 2007): 65.
[24] Laura U. Marks, “How Electrons Remember,” Millennium Film Journal 34 (Fall 1999): 66-80.
[25] Rodowick, Virtual Life of Film, 9.
[26] Manovich, “Digital Cinema,” 5.
very least, as marked by some sense of lack, as with Babette Mangolte’s central question – why
is digital imagery seemingly incapable of conveying duration in the same way as film?[27] It is no
coincidence that critics with such material concerns have called attention to The Matrix, and
specifically the scene in which Cypher (Joe Pantoliano) eats a piece of digital meat, not caring in
the least that it is an imitation (or a re-meat-iation, if you’ll allow me). Should we respond like
Neo, the film’s hero, fighting against the simulation? Or like Cypher, enjoying business as usual?
What has been lacking in the discussion, and what this dissertation provides, is a middle
ground between the two poles. While I agree with Noël Carroll’s attempts to steer the
conversation away from the indexical, I wouldn’t want to go as far as Carroll in skirting
discussions of medium specificity entirely, because the shift from analog video and film to
digital video was a massive one when it came to camera usage; to look at changing videography
practices over the last 20 years is to see very unusual business. I additionally agree with D.N.
Rodowick’s observation that the emergence of new cinematic technologies calls for a return to
the age-old ontological question: What is Cinema?[28] In this case, what is digital cinema
specifically? This includes questions about cinema’s truth claim and how it is complicated by
digital processes; how cinema’s artistic lineage may need to be expanded to include the
computer; interrogations of whether and how a computer might be considered a medium; how
digital cinema’s relationships with other traditionally separate media have been brought together
in unexpected ways (either through remediation/simulated mediation or through techniques like
digital painting in CGI software). And yet, all of these questions, even in their sum, seem to miss
[27] Babette Mangolte, “Afterward: A Matter of Time,” in Camera Obscura, Camera Lucida, ed. Richard Allen and Malcolm Turvey (Amsterdam: Amsterdam University Press, 2003), 263.
[28] Rodowick, Virtual Life of Film, 9.
the trees for the forest. What is noteworthy in many of these scholarly approaches, regardless of
their place in the debate, is how different they are from the practitioners’ discourse.
Looking at discussions of the digital in the trade press, rather than academic journals,
shows dialogues that are different in both their concerns and also their mood – far more
optimistic, and far more interested in questions of community and technique and aesthetics. In
short, they are less concerned with an overarching conception of “cinema” and more concerned
with use. The ontological search for cinema’s digital incarnation in academia leaves a whole set
of theory unquestioned – one that emerges in the practice of media-making. In John Caldwell’s
terms,
Filmmakers constantly negotiate their cultural identities through a
series of questions traditionally valued as part of media and film
studies—namely, questions about what film/video is, how it works,
how the viewer responds to it, and how it reflects or forms culture.
Yet filmmakers (unlike theorists) seldom systematically elaborate
on these questions in lengthy spoken or written forms. Instead, a
form of embedded theoretical “discussion” in the work world
takes place in and through the tools, machines, artifacts,
iconographies, working methods, professional rituals and
narratives that film practitioners circulate and enact in film/video
trade subcultures.[29]
The dichotomy between the theory of scholarship and the theory of practice makes sense,
especially considering the institutional separation between the practitioners of each. Still, the
avoidance of practitioner theory on the part of cinema scholars is due not just to professional
division, but also to willful ignorance. To Rodowick’s point about needing to rethink the
ontology of cinema in the digital age, I would suggest that scholars expand their search for an
answer beyond academia and their own heads. I am not proposing that scholars be less concerned
with the material properties of video as a medium, nor with the philosophical differences
[29] Caldwell, Production Culture, 345.
between the photochemical and the digital. Instead, I want to suggest that we look elsewhere for
those differences.
Approaching digital cinema from a place of practice-based theory involves the mining of
those theoretical approaches embedded in the act of making – in this case, the operation of a
digital video camera. Such embedded theory, by its nature, eludes analysis that approaches
digital video content as a text. Theory is embedded in the physicality of the practice itself and
relates to the text only invisibly. It involves the relationship between practitioner and camera,
and practitioner and footage, that extends well before and after a text has been finalized, if it is at
all, as many practices do not produce a final text to be studied by an outside observer (e.g. using
one’s smartphone as a mirror). As such, engaging with this embedded theory can occur through
the act of making (e.g. the creation of content, media experiments, or archaeological engagement
with the artifacts of practice), or it can alternatively be accessed by dissecting practitioner
discourse (e.g. the trade press, interviews, professional texts). Accessing and analyzing these
embedded theories is key to turning the digital cinema debate, as they often serve as specific
counterpoints and undercuts to academic film theory, operating in the blindspots apart from the
usual sites of scholarship in the form of media content and the academic texts that engage with it.
Whereas methodologies based in production study, broadly speaking, utilize similar texts
more explicitly to theorize industrial practices to, in Caldwell’s words, “describe the contexts in
which embedded industrial sense making and trade theorizing occurs” in the form of a “cultural-
industrial analysis,” my own approach is concerned with industrial ramifications only inasmuch
as they are one possible outcome of the engagement in videographic practice.[30]

[30] Caldwell, Production Culture, 345-46. Caldwell acknowledges the limitations of the term “industry” as a monolith, as do I, though I feel I am additionally moving outside even the

Indeed, as more
and more individuals become videographers with the proliferation of the smartphone, industrial
production becomes a less sizable representation of videographic practice as a whole. If
production study aims to engage with the microsocial interactions of producers, practice-based
theory aims to go one level deeper, theorizing the practitioner and their relationship to their tools,
their medium, and the act of making. In doing so, it accesses a philosophy of practice that is
apart from that which is gleaned from content or hypothetical modeling and, in the case of digital
video, largely at odds with it.
To speak of the “death of cinema” in the digital era seems rather backwards, because to
read Videomaker magazine at the time of digital conversion is to see proclamations that the
current decade was the one in which the “cinematic” was finally in reach of the masses. It wasn’t
just a different narrative than the one most discussed in media studies, it was absolutely the
opposite narrative. Cinema wasn’t dead; for many practitioners, it was alive for the first time.
The apparent lack of authenticity in the digital image was of primary concern to many theorists,
but such talk seems quite out of place in documentary production circles, because the
affordability of digital technology and its unique aspects as a medium allowed for intimate
nonfiction filmmaking of an entirely new and different sort. Here again, not just a different
narrative, but the opposite one. Looking at the discourse in low-budget avant-garde circles,
Gregory Zinman likewise acknowledges that while theorists seemed hesitant about the transition
to digital, experimental video artists jumped in with gusto, keen to play with digital mutability.[31]

[30, cont.] broadest constructions of what that term might mean, even considering the recent incorporation of home video practice into capital through its hosting on social media sites.

[31] Gregory Zinman, “Analog Circuit Palettes, Cathode Ray Canvases: Digital’s Analog, Experimental Past,” Film History 24, no. 2 (2012): 136.
Not a different narrative, but the opposite one. In these instances, the significant shift in how the
image was registered (digitally instead of materially) allowed for a different sort of practice.
When video became digital, it surely affected the relationship between the profilmic
event and the images on our televisions. From a practice-based perspective, though, it’s not the
difference between the indexical and the symbolic that was critical, but the difference between
video as a continuously running electronic signal and video as digital data. The resulting shift in
the use of the medium overshadowed that change in representational format entirely. Even if
theorists have largely settled into a Benjamin-inspired mindset in which the nature of digital
cinema requires a re-think of what cinema is, the digitization of video in relation to the use of the
cameras that produce that cinema remains critically underdiscussed. The amount of time that has
elapsed between the first digital debates and the current state of complacency leads to a false
sense of completion, as if the debate has settled, whereas I would suggest it missed the most
critical part.
Regardless of one’s opinion on the digital and cinema – business as usual or dead as a
doornail – I want to turn the conversation to videography. Cinema is dead? Fine. Let’s talk about
videography, which is alive and well. By moving beyond “cinema,” we shed the weighty
pretensions of an already ambiguous categorizing label and turn attention to tools and practices
with the potential to effect cultural change through influence, visibility, and profit. The questions
of videographers, heretofore largely ignored, speak to the connection between practitioners and
the results of their practice. How can I use digital video? What will its usage gain me? How is it
different from what was before? How is it the same? How can I do new things with it? How will
it meet my needs? How will it create new ones? How does the existence of digital video affect
the practice of videography? Without being overly deterministic, the central question of the
digital video practitioner has been the one missing from the digital debate – what does the digital
allow?
Digitizing Video Practice
Video’s transition from analog to digital was truly paradigm-shifting, but to the outside
observer, that moment of change was pretty anticlimactic. A videographer would shoot a scene
with her digital video camera – a camera that probably looked nearly identical to her analog
video camera from last year. The profilmic event would be shot no differently than before – hit
record, the tape would spin up to speed, the red light would flash. Information was recorded. She
could rewind the tape and play it back if she were so inclined. After filming had concluded, she
could then hook her camera up to a television monitor with the same old RCA cables to view the
finished footage, which probably looked more or less the same as that of her previous camera.
The turn of the digital century was about as exciting on the surface as the turn of the millennium.
No big bang, no Y2K. Below the surface, though, there was an extra step in the process that
eluded the eye. In itself, it didn’t change the on-set practice. Yet, it fundamentally altered the
relationship between the event being shot and the images being displayed; between the
information being recorded and the computer sitting next to the camera; between the
videographer herself and her sense of time and memory. It was invisible, but its repercussions
would ripple and expand over the next 20 years. The chain of mechanical events linking the
images on the videographer’s television with the reality of the videotaped scene had been altered
and, some would contest, broken. For many theorists, this break in the link was a big deal. It is
for me as well, but not quite for the same reasons.
Comparing our videographer’s two mid-90s cameras (analog and digital), the process of
videotaping begins in largely the same way. A tape is inserted, the shot is framed, the record
button is pressed, and a series of photosensitive cells receives light through the lens. Arranged in
a dense, tight grid, each measures the luminance and chrominance at a given point in the image,
and translates this information into an electronic video signal. The entire process thus far is an
analog one. That is to say, changes in the profilmic event affect the light traveling through the
lens, affecting the light that hits the cells, which in turn affect the amplitude of the electronic
signal. There is a 1:1 correspondence between changes that occur in reality and changes that
occur in the video signal. With an analog video camera, the signal is sent to a television set for
live broadcast (whether via antenna, cable, or CCTV) or it is stored as an electronic signal
without imaging. In the latter case, the signal most commonly is routed to a running magnetic
video tape, where changes in the signal’s amplitude are registered as changes in the magnetism
of the tape. In this process, whether the signal is stored via tape or broadcast live, there remains a
1:1 correspondence between the profilmic event and the final images on the display.
The process of registration is different in video than it is in photochemical film (which is
also analog), but the final images in both media are similar in that they form an indexical
relationship with the profilmic event. While videotape doesn’t have the visual iconicity of the
photochemical film image, in which traces of the profilmic event can actually be seen fixed on
the film’s negative, videotape bears an equally indexical impression – more like a mold from which the
latent video image is later created. Had video remained analog and indexical when HD video
entered the realm of “cinema” in the 2000s, much of the debate would have taken a very
different form (though surely would not be absent altogether, as those keen to sound cinema’s
death knell are known to jump at every opportunity). As it stands, digitality’s break from the
indexical analog brought on considerable concern among many film theorists and philosophers –
both for the ontology of the medium and for our own as human beings.
Our videographer’s new digital camera functions just like the analog one in its first stage
of shooting – profilmic event becomes electronic signal. At this point, there is a considerable
shift in the way that the signal is registered. The still-analog signal is processed by an ADC
(analog-to-digital converter). This piece of hardware, contained within the camera, measures the
amplitude of the electronic signal in given intervals and converts the information into numerical
values – digits, hence the “digital.” The image is essentially converted into a very long string of
numbers that represent its brightness, color, and detail. That is, effectively, the only change in the
new digital camera. Analog signal has become digital code. Like the electronic signal in the
analog system, the string of numbers in the digital camera can be sent as digital data to be
broadcast live or stored. If broadcast, data is sent to a television or broadcasting system which
uses a DAC (digital-to-analog converter) to resurrect an electronic signal using the data and, in
turn, uses the electronic signal to illuminate the photosites of a television. If stored, the digital
data is again routed to magnetic tape, where changes in magnetism now register not the changes
in amplitude of an analog signal, but the differences in the string of numbers. When the tape is
played back (as on a television), the data similarly goes through a DAC in order to again become
an electronic signal readable by the playback device. In these initial days of the digital video era
(prior to digital television), both analog capture and analog display still bookended the digital
process. Live video was only digital for an imperceptible fraction of a microsecond. As with
camera operation, no viewer at home would notice a difference in the process. The scene would
play out in front of the camera, some sort of technical magic would occur within, and the image
would play back on TV. The school of that magic – analog/index or digital/symbolic – makes all
the difference.
When wanting to re-route the theoretical conversation of the digital to videography and
consider video from the perspective of a practice-based theory, one need not avoid the matter of
medium, but instead ask how the medium matters. How does the physical/electronic/digital
quality of the medium relate to the use of that medium? Take, for example, the nature of video
imaging in the analog era. As a continuously running signal, video can be transmitted via cable
or antenna. For many theorists, this is the fundamental, defining feature of analog video as a
quasi-material medium. Just as film is photochemical, analog video is electronic. From this
defining feature, one can further extrapolate two specific properties unique to the medium. The
first is that video can be live – the images of an analog video camera can be seen via
transmission in real time. The other primary moving image format, photochemical film, requires
chemical developing to fix the image (one of its unique properties as a medium), and cannot be
live. The production of live video, made possible by video’s potential for liveness, which is in
turn drawn from video’s existence as an electronic medium, is a defining practice.
The second medium-specific property of analog video is that, because video can only
exist as a signal, it is a medium that decidedly lacks an image. At any given moment in time, I
can point to a film strip and say, “There is the image,” as it is a strip composed of still images.
The building block of the motion picture film is the frame, and that frame is visible to the eye.
But with analog video, as a signal, at any given moment in time, all I have is a piece of an image.
A scan line. A single photosite. A momentary stroke in the painting of a larger picture. If one
were to film a television screen with a high-speed camera, the lack of image becomes clear
the resulting ultra-slow-motion footage. All that is visible is a moving line of color, slowly
building that full image that is ultimately constructed in the eye and in the mind, but which never
exists whole in reality. Thus, there are two medium-specific properties of analog video – the
potential for liveness and the lack of a full image – both due to the technical manner in which
analog video is rendered as an electronic system.
In looking at these two properties, one can discuss video’s features as a medium by
basing an argument on either (or both) of these properties. To focus on the lack of image results
in a more ontological, philosophical discussion of video’s ephemerality, its impossibility of
being whole, its predilection toward motion. It is a medium in constant flux. It is a medium of
the present. It is a “hot” medium. It cannot fully represent the past. It cannot represent
wholeness. As ontological/philosophical arguments, they are fully rooted in invisibility. If one
were to say to a videographer, “There is no video image!” she might respond, “But, it’s right
there on the TV.” If one were to say, “Video cannot fully represent the past!” she might respond,
“But I shot this footage yesterday.” The goal with this example is not to position myself as anti-
philosophy, but instead to call attention to the fact that there is a tendency in certain kinds of
philosophical media arguments to ignore practice. That’s not a problem in theory (pun intended),
but it is a problem if the practice-based arguments don’t exist as a counterpoint. There is
something inherently un-practical in fixating on video’s lack of image.
In my experience in trying to bridge the theoretical gap while teaching moviemakers,
such arguments are the ones that most confound practitioners on the other side of the
theory/practice divide because they seem to conflict so drastically with the theory gleaned from
making media. For a practitioner who is unable to surpass cultural gatekeepers because her video
is coded as being aesthetically and culturally inferior, or for activists who are unable to broadcast
their documentary because its video is incompatible with professional television technical
specifications, or for the fiction filmmaker who is unable to edit his movie with precision
because he couldn’t afford to shoot on film and had to resort to video, the fact that video is a
medium without an image is neither a support nor a consolation. More crucially, it is not
something to be exploited for financial gain, for leverage, or for cultural influence. It is a non-
issue.
Making a medium-specific argument based on video’s potential for liveness, on the other
hand, lends itself to different kinds of discussion, no less ontological, but perhaps more socio-
cultural than philosophical (if there is a difference, and I’m not convinced that there is). Video’s
lack of image has little impact on how it is used as a medium, if only because it is unlikely most
people would even notice that the image doesn’t exist. Liveness is different. Liveness allows for
real-time broadcast of sports and news. Liveness allows for reflective gallery exhibitions.
Liveness allows for CCTV surveillance. There is a direct correlation between the potential for
liveness and the way in which the medium is used. It is no less ripe theoretical ground, as key
historical discussions of video and narcissism, video’s panoptic properties, its reflective nature,
and the many and varied discussions of television as “global village” are all tied to the capacity
for liveness. More crucially, much of the revolutionary potential of the analog video camera lay
in its potential for liveness, as is evident from the efforts of activist videographers of the late
1960s and early 1970s (like the Videofreex and TVTV) to create alternative broadcast
networks.[32] Thus, both the potential for liveness and the lack of an image are medium-specific
properties, but only one manifests itself in medium-specific practice.

[32] See Shamberg's Guerrilla Television for a practitioner's account and Tripp's "From TVTV to
YouTube" for a historical overview, both in my bibliography.
To turn the idea of medium-specific practice to the digital video camera, one can see a
similar binary in theoretical approaches. Whereas the fundamental feature of analog video is its
existence as an electronic signal, digital video’s existence as a string of symbolic code can be
seen as the fundamental shift in material medium from analog to digital. As a result of this
change in its defining feature, the medium-specific properties that can be extrapolated from that
feature are altered as well. Whereas analog video’s first property is that its image existed only as
a fragment in time, never a whole, digital video’s image technically doesn’t exist at all – at least
not materially. My digital camera, then, doesn’t produce a video, technically speaking (though,
to a casual observer, it appears to). In reality, it is a machine that produces a string of code – a
series of instructions for the creation of an image. The digital video camera is not unlike an
extremely sophisticated paint-by-numbers system. A display (like a television, a computer
monitor, or my camera’s viewfinder) receives the instructions sent by the camera, which contain
brightness and color information for each photosite or pixel, and uses them to create an image.
That last point is quite remarkable. The monitor creates the image. Unlike a film strip, which
contains its images quite clearly to the naked eye and can be held in the hand, the digital video
image is essentially the result of a camera writing an extremely accurate description of what it
“sees” and then “telling” that description to the playback device so that the image might be
created.
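The paint-by-numbers analogy can be made concrete with a toy sketch (in Python, purely illustrative and unrelated to any actual video format): the "camera" contributes nothing but numbers describing brightness at each photosite, and it is the "display" that constructs anything visible from those instructions.

```python
# A toy "paint-by-numbers" system: the recorded medium is only a grid of
# numbers; the display turns those numbers into something visible.

# A 2x4 "frame" as the camera records it: brightness values from 0 to 255.
frame = [
    [0, 64, 128, 255],
    [255, 128, 64, 0],
]

def display(frame):
    """Render the numeric instructions as an image, here using ASCII shades."""
    shades = " .:#"  # darkest to brightest
    rows = []
    for line in frame:
        rows.append("".join(shades[value * len(shades) // 256] for value in line))
    return "\n".join(rows)

print(display(frame))
```

Until `display` runs, the "image" exists nowhere; the frame is a description awaiting a reader.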
Cinema existing as a list of numbers was (and still is) quite ontologically unsettling for
many theorists. It is no surprise that many discussions of digital cinema deal with death (of
cinema, of the image, of media, of the human), incompatibility (cinema with digitality, humans
with digitality, reality with digitality), and dystopian futures of simulation. But, as was the case
with analog video, this anxiety about digital’s shaking of reality to its core is often at odds with
what we see in practice. The digital image may not exist, but it’s still “right there on the TV.”
Lev Manovich’s suggestion that digital cinema is but a subset of animation would get a lot of
confused looks from both moviemakers and animators alike. Again, the goal is not to knock
theoretical arguments out of hand, but to recognize their basis in invisibility, and to turn to the
practice of digital media-making in equal amounts. The fixation on digital death within much
scholarly work at the turn of the 21st century makes up the dystopian yin of the
emergence of all new technologies, and is tempered by a more utopian yang seen in other
avenues.
Turning to the discourse of practice is to see a different narrative, one largely more
positive, and one focused on a second, more visible medium-specific property that stems from
digitality, akin to analog video’s liveness: digital video no longer exists as a real-time signal, but
as data; it follows a computational logic. Video is no longer an electronic signal, but a file;
shooting video is, as Jean-Pierre Geuens calls it, "filemaking."[33]
As a file, lacking a material
basis (beyond the hard drive on which it lives), it is subject to the same transmissibility and
mutability of any computer file. It is no longer bound by the material conditions that require real-
time electronic transmission via physical cables, but can be duplicated without degradation,
altered via code, and transmitted piecemeal. If Vertov spoke of the kino-eye’s conquest of space
and time, digital video carried that conquest beyond the frame. As information, video becomes
viral. Its effect on storage allows cameras themselves to mutate and proliferate like viruses.
Experimental artists now play not just with closed circuit televisions in a gallery, but with an
open circuit of digital devices. The video image is as malleable as a digital photo. The video
camera speaks with computers, and is a computer. It becomes an input in a vast network. There
is a direct link between the profilmic event and any great number of practices – from traditional
viewing to pure computational data analysis.

[33] Jean-Pierre Geuens, "Digital Filemaking," https://rethinkcinema.com/digitalfilemaking/
With digitality, the medium of video underwent a critical change in its medium-specific
practices. To fixate on the invisible, symbolic properties at the expense of the dramatic, practical
change in the medium’s cultural potential is inadequate, as it ignores the video revolution as a
revolution; the struggle for the right and ability to create and distribute authentic representations,
to gain and maintain cultural influence, and to participate in modern society is reduced to a
theoretical model. A film theory catch phrase. Furthermore, as was the case with liveness, the
new practical properties of the digital video medium are as fertile ground for theory as that of
indexical ontology. Whether or not numbers can function as a true medium is less interesting to
me than what can be done with video now that it exists in number form. To that end, in the
following sections I investigate how video's existence as data and the subsumption of its
processes under computer logic relate to its medium-specific practices, working toward a better
understanding of how digital video was used and how it could be used to potentially
revolutionary ends. What does the digital allow?
Section 2 – Video Camera Image Quality
This section tracks the development of the video camera as a tool for creating moving
images. Like all imaging technologies, video cameras across all spheres of practice impart a
certain quality to the content of the images that they are used to create. The degrees of quality
and the alteration thereof, both through camera engineering and the application of
skilled/unskilled practitioner labor, have a great effect on the manner in which images are read
by viewers of all types, from home video audiences to film festival screening committees. The
ability for practitioners to achieve their desired goals through the use of a camera as a tool hinges
significantly on the quality of the images that the cameras can produce, as the “look” of the
images is a factor in the “success” of a wide variety of practices, whether archival, culturally
representational, or profit-based.
To speak of “image quality” in video camera technology is to speak of two distinct but
interconnected notions of quality. The first involves “quality” in relation to degrees of fineness
or grades of excellence; it is a quality that is measurable – that which I am calling “quantitative
quality.” Sheets of higher thread count. Cutting instruments that can operate with a higher degree
of precision. Higher karat gold. In the case of video, quantitative quality can refer to a number of
specific elements – lines of resolution, pixel density, bit depth, frame rate, frame size, dynamic
range (the ability to represent both bright and dark), color fidelity, levels of artifacting (unsightly
visual glitches), levels of video compression, etc. In the mid-1990s at the start of my study,
digital video in all spheres of production (save research and development labs) could be said to
be of relatively low quantitative quality, compared to both analog film (even smaller gauge film
like 8mm) and to digital high-definition video that would be adopted in the following decade.
Much of digital video’s technical development throughout the 2000s involved pushing the
numbers – creating a measurable (and marketable) increase in quantitative quality. Comparing
high definition digital video in 2015 with standard definition digital video from 1995, it is clear
that many of the numerical “obstacles” that marked the medium and were much maligned in the
video and filmmaking trade press had been surpassed. Vertical lines of resolution, for example,
increased drastically from 720 to 1920 to beyond 4000. At the time of writing this dissertation,
you would be hard pressed to find a standard definition camera in operation in anything but an
intentionally nostalgic context.
Looking across advertising, camera packaging, and camera reviews (both “official” and
“unofficial,” like those on consumer sites) throughout these two decades, much of the rhetoric
surrounding the development of the technologies specifically touts “high quality video” or
shooting with a “higher quality camera.” It is not hard to view this period as one driven primarily
by obsolescence, if only because that was the narrative being aggressively pushed by camera
manufacturers and retailers. Even aside from the understandably hyperbolic rhetoric, there is a
clear, demonstrable numerical progression in these various quantifiable elements of image
quality that one can observe in the development of the cameras. Yet, even looking strictly at the
numbers, the development of the cameras was not without its conspicuous exceptions. The
clearest example in this regard is the progression of camera frame rate, which seemingly bucks
the trend of “onward and upward.” While quantitative image quality generally improved in
nearly all respects, most default camera frame rates actually decreased dramatically, from a
perceived 60 frames per second (fps) – in truth, 60 interlaced fields per second – to 30 or 24.
Similar reductions occurred in video aspect
ratio which, despite the advertiser-friendly promotional term “widescreen,” often was achieved
by capturing or displaying less of an image when compared to the older full-frame video.[34] Such
oddities in technological development need to be contextualized in order to make any type of
sense in relation to a narrative seemingly driven by obsolescence.
The second type of quality relevant to my argument is quality with regard to the nature or
character of things: the difference in grain pattern and color between two different types of
wood. The differences in timbre amongst three brass instruments. The mouthfeel of different
beers. At the risk of sounding redundant, I would refer to this type of quality as “qualitative
quality” – something more subjective relating to feel, character, impression, the reading of the
image. Qualitative quality is the visual je ne sais quoi. “This image… it has a certain… quality.”
Items of different qualitative quality aren’t as quantitatively comparable and one is not
necessarily “better” than others, though sometimes they are rendered so by cultural coding. In the
case of gold, higher karat gold is thought of as being more desirable and, as a result, is generally
more costly. Higher karat gold affords not just greater financial capital, but cultural capital; here,
quantitative quality and qualitative quality are directly proportional. With a creative and
communicative medium like video, it is not always so. With the example of digital video’s
aforementioned technical development oddities in its two decades of growth, visual resolution
increased, but temporal resolution (frame rate) decreased. Further, to look at visual resolution in
relation to frame size, the numbers don't just get higher, but they get higher in a specific
way – the frames grow wider faster than they grow taller. Progression through obsolescence is an incomplete
narrative; the development of video was affected equally by the qualitative reading of its image.
[34] Some versions of "widescreen" cameras simply cut the top and bottom off of a larger, full-
screen frame, producing an image that looked wider but actually presented less visual
information.
In the following sections, I argue that the development of the video image – both in the
case of quantitative and qualitative quality – developed to better meet the numerical industry
standards and the resulting “look” of 35mm analog film. That is, the development of video
technology was largely driven by a logic of emulation rather than obsolescence. Following the
terminology of various communities of video practice, I argue that the “video look” became
more “cinematic.” I use the term “cinematic” with great reservation, as it is one that is seemingly
so ambiguous as to be borderline useless. I have many colleagues who shudder visibly whenever
students use the word to describe something: “Inception just looks more cinematic than that low
budget action movie.” Or in a production class, when reviewing dailies – “I liked your first shot,
it was really cinematic.” Or, as is most directly relevant to this dissertation, in the video trade
press – "What Does it Take to Make Your Work Cinematic and Feel Like a Movie?"[35] Instances
like these are somewhat ridiculous because all of the things in question are movies – they are all
already “cinema.” How can anything be more cinema than anything else? The overarching
objection is that the term "cinematic" doesn't actually mean anything. Etymologically, the term
is messy and is almost certainly being used where other, more accurate descriptions
would serve better. Still, it is important for this study to recognize that the word is not meaningless
– it certainly has a meaning in each of the aforementioned contexts, it’s just poor word choice.
Dismissing the term out of hand is easier than probing its semantics.
In the instances listed above, one can parse out some meaning with regard to the
cinematic. All three cases seem to share notions of production value, technical ability, markers of
professionalism, badges of esteem. In all three cases, it functions as a semiotic code – a way of
[35] J.R. Strickland, "Make Scenes Cinematic: What Does it Take to Make Your Work Cinematic
and Feel Like a Movie?" Videomaker 29, no. 9 (March 2015): 40.
visually evoking quality of a particular sort, as in the case of the trade press article, where the
explicit goal is to make your movie feel like a movie. What is especially interesting in this latter
case is that, being a tutorial, the cinematic is linked to the technical. The article features
instructions on how to create the cinematic – how to make one’s cheaper video camera look
more like a “real movie.” In that way, the article draws a clear link between the two types of
quality upon which I have based this section and my argument as a whole; it demonstrates how
the quantitative technical standards translate into qualitative affect – a qualitative image
standard.
The cinematic qualitative image standard is defined and perpetuated by camera
manufacturers, the video trade press, videomaking guidebooks, online communities of
teaching/learning, and filmmakers themselves. In tracing the development of video across the
digital era, I argue that the engineering and technical development of the video image has been in
adherence to these user-defined notions of the cinematic. The adoption of a cinematic look is
perhaps clearest within big-budget studio production, wherein video has replaced film in nearly
all contexts with very little public recognition, though changes in quality are arguably more
impactful in other spheres of production – most notably the low-budget independent sector and
the home video market. In these latter groups, achieving a cinematic standard was not just a
switch in hardware, but a significant step up in qualitative quality, which carried significant
repercussions in the way that images were read culturally and was a key factor in how digital
video cameras could be used for revolutionary ends. In serving as a marker of esteem and
professionalism, the cinematic standard represented a visible point of separation of cameras and
their users into hierarchies of cultural value. The work of practitioners and the practitioners
themselves were judged, consciously or unconsciously, by the reading of their images and their
place within the hierarchy. The ability to achieve a cinematic look through the digital
manipulation of one’s video was a potentially revolutionary act, as it culturally elevated the
status of video practitioners through the altered coding of their images. Quantitative
specifications could be manipulated for qualitative effect, claiming the images of a group of
greater esteem.
Over the course of the following sub-chapters (which will largely progress
chronologically), I will relate the significant shifts in quantitative quality (progressive scan,
adoption of 24 fps, the shift to HD, etc.) to changes in qualitative quality, and trace how these
technical changes were utilized to varied ends (artistic, financial, and/or political) in different
spheres of production. I will break down the differences between the earlier cameras’ so-called
“video look” in relation to the cinematic standard, and speak to the causes and repercussions of
the technical move toward the cinematic in low-budget, big-budget, and home video contexts.
Over the course of the cameras’ development, I suggest that as the “cinematic look” proliferated
across all sectors of production throughout the digital age, its ubiquity tempered the once-
revolutionary potential that came from wielding it as a point of cultural esteem. As more and
more cameras were engineered to achieve the cinematic standard by default, the hierarchy of
image-making that was once technological and semiotic shifted to other areas of production
practice more directly tied to capital (like production design and casting), making the revolution
in image quality short-lived and largely illusory.
The “Video Look”
Keanu Reeves – “Are you done with film?”
David Lynch – “Don’t hold me to it, Keanu, but… I think I am.”
The above exchange comes from Side by Side, a 2012 feature documentary produced by Keanu
Reeves and directed by Chris Kenneally. The film examines the transition from
photochemical film to digital video from a studio production perspective, and features in-depth
interviews with A-list directors and cinematographers about their adventures in a new world of
digital cinema. When the film premiered at the Berlin Film Festival, its on-screen juxtaposition
of the interviewees' hopes and fears for this new technological "revolution" (the documentary's
term) likely reflected back the same confidences and anxieties that were on the minds of many
filmmakers, trade journal authors, bloggers, and critics in the screening audience and in the film
industry more broadly. The film’s title, and much of its content, involves a comparison – holding
digital video next to film, side-by-side, and seeing if anyone can tell the difference, like the
hippest new cinephile party game. Does video look like film? Could it ever? Will it change
everything? Will it change nothing? Is it even worth debating?
Side by Side is 100 minutes long. If that same documentary were made in 1990, it
probably would have been a lot shorter. Video did not look like film. Video looked like video.
Hollywood movies were not shot on video. There was no debate. No revolution – at least not one
that threatened the relative stability of the most profitable movie industry in the world. While it
might take a skilled technician to spot the differences in a side-by-side comparison of film and
video now, in the pre-digital era even a layperson could spot the differences and, despite lacking
a technical vocabulary, could likely articulate some of them. Video had a certain look that, not
surprisingly, was dubbed the “video look” by industry professionals. It was the combination of
quantitative and qualitative qualities that marked video as a medium.
To say that there is a singular “video look” is a bit misleading, just as it would be to say
that there is a singular "film look," though that similarly dubious term also received quite a bit of
real estate in the industry trade press in the early 2000s. For film, the "look" associated
with its physical medium has to do with its material qualities, its method of mechanical capture
and projection, and the resulting perceived texture and motion, but these factors exist in various
incarnations. Are we talking about 35mm film or 8mm film? Full frame or widescreen? What
ISO are we shooting? For the purposes of establishing a “film look,” how much do these
variables matter? Video similarly existed in numerous formats: Video8, VHS, S-VHS, VHS-C,
Beta, DV, DigiBeta, etc. If we were to play the “side-by-side” party game, which formats would
we be holding? Considering film or video as a singular medium across these various formats
becomes somewhat slippery when it must account for their variety (and, with some exceptions,
most theoretical forays into filmic ontology conspicuously do not). Still, just like with the
apparent ambiguity of the term “cinematic,” the terms “video look” and “film look” were not
fully without meaning when they were used casually in the trade press and on the movie set.
Across the wide variety of imaging formats there were clear factors that separated film and video
regardless of iteration, and these universal qualities contributed to the general understanding of
what each medium looked and felt like, and how it was subsequently coded with cultural value.
The core quantitative elements of video and film as media and the resulting qualitative
“look” of each is derived from the manner in which light is translated into a storage medium
(and, conversely, how it is later displayed on a TV set or a cinema screen). Analog video
cameras in the 1990s produced images by sampling light from the environment being shot and
recording its intensity as electronic impulses. Light would enter the lens and hit a photosensitive
plate containing a series of sensor sites arranged in a tight grid. The plate would be read one line
of sensors at a time – by a scanning electron gun in older tube cameras, or by electronic charge
readout in the CCD cameras that dominated the 1990s – and the charge at each sensor site would
be translated into an electronic signal, recording the luminance (brightness) and chrominance
(color balance) at each site.
cable (as in CCTV), or broadcast through the airwaves, enabling live television. Upon receiving
the electronic signal via tape, cable, or broadcast, a television system could essentially “paint” its
screen, also containing a corresponding grid of phosphorescent photosites, each being
illuminated according to the signal. This process of electronic capture was shared across all the
video formats, though with variations in degrees of quantitative quality (lines of resolution, for
example, varied between the U.S. and Europe, but the process remained the same). In the digital
era the process remains quite similar, though the storage method contains an additional step, as
the electronic signal is converted to a digital one – a series of digits representing the intensity of
the signal at any given moment.
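That additional digitizing step can be illustrated with a minimal sketch (a hypothetical Python fragment, not any actual camera's converter): a continuous signal intensity is sampled into 8-bit digits.

```python
# Sampling a continuous signal level in [0.0, 1.0] into 8-bit digits -- an
# illustration of quantization, the extra step the digital era adds.
def quantize(level, bits=8):
    """Map an analog intensity to one of 2**bits integer codes."""
    steps = 2**bits - 1
    return round(level * steps)

print(quantize(0.0), quantize(0.5), quantize(1.0))  # 0 128 255
```

The continuously varying electronic signal becomes a finite ladder of values; everything downstream (storage, duplication, transmission) then handles only those digits.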
To continue the side-by-side party game, photochemical film used a similar pointillist
system, but without such a regimented grid. A frame of film, filled with a matrix of randomly
scattered photosensitive crystals, would be exposed to light via the camera lens, becoming darker
in relation to the amount of light exposure. After the film was developed in a chemical bath to
permanently fix the image, copies could be made from the original. Projection was mechanical
and optical, shining a light through each frame of film and magnifying it with a lens to produce a
large image on the screen.
As a part of these differing methods of capture and image rendering, video and film had
several quantitative differences. First was the issue of resolution, or level of detail in the captured
images. The term “resolution” is a tricky one for film. Whereas the resolution of a video image is
clearly defined by its number of scan lines or pixels, the randomness of the film image has no
similarly easily measurable resolution. Further, while video lines of resolution were largely
standardized in order to make broadcast possible (TVs only had a fixed number of scan lines, so
if a video image had more or less, it would display improperly), with film the clarity of the image
related to the texture of the film grain, and this differed both by the size of the film strip (8mm
vs. 70mm) and the fineness of the grains (larger grains would be less dense and therefore
produce a lower level of detail) which differed with different film stock. Further, the random
scattering of grains makes a comparable mathematical resolution calculation more difficult. Even
with these caveats of inaccuracy, low-gauge film (like 8mm) would have an equivalent vertical
resolution of approximately 600 lines.
36
In the standard definition era, video playback in North
America maxed out at 480 visible vertical scan lines and up to 576 scan lines elsewhere around
the globe. This technical difference in resolution between film and video was noticeable to the
naked eye in that a video image lacked the fine detail that a film image contained. Edges would
be blurrier and singular points would be less recognizable.
Beyond rendered detail, the video frame was strictly fixed at a 4:3 aspect ratio due to
broadcast standards and device compatibility (TVs and video monitors), compared to the
possibilities of widescreen in the case of film. In the standard definition era, any attempt at
widescreen on video in anything outside of a special context (like a stadium video board) could
only be produced through letterboxing: the cropping of the top and bottom of the image.
36
This is just a rough estimate using the shorthand of 100 lines per mm (it would vary with stock
and lenses), though in the absence of actual pixels, there is no true numerical equivalent.
Numbers aside, the density of film’s grain yields a higher resolving power than standard
definition video.
Widescreen-through-cropping was certainly a popular technique with film as well, but the
increases in resolution with larger format film made the lost screen space more negligible than
with low resolution televisions. In short, the film image let a viewer see more, and more clearly
than the video image.
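The arithmetic of letterboxing makes that loss concrete. A back-of-the-envelope calculation, using the nominal 720×480 frame of NTSC-style standard definition (actual active-line counts and pixel shapes varied by format, so the figures are illustrative):

```python
# Cropping a nominal 720x480 standard-definition frame to a 16:9 shape
# keeps only width / (16/9) of the scan lines; the rest become black bars.
width, height = 720, 480
target_ratio = 16 / 9
letterboxed_height = round(width / target_ratio)  # scan lines that survive the crop
lines_lost = height - letterboxed_height
print(letterboxed_height, lines_lost, round(lines_lost / height * 100))  # 405 75 16
```

Roughly a sixth of the already scarce vertical resolution is simply discarded, which is why letterboxed "widescreen" on low-resolution televisions presented less, not more, visual information.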
Lack of fidelity was not limited to visual resolution in and of itself. The video image
suffered from a lack of range, both in its ability to register varying levels of light from bright to
dark and in relation to color. In the case of the former, standard definition video specifically had
a very narrow dynamic range across cameras and formats. While 35mm film might be able to
expose both an open window and a dark, shadowy corner in the same shot, video was incapable
of doing so. If a shot were designed to allow the window to be properly exposed (i.e. not too
bright) the shadows would turn pitch black. Conversely, if the iris were opened in order to see
more detail in the shadows, the open window would be totally overexposed, producing a hot,
glowing white spot. Both extremes of over/underexposure represented loss of image detail in
those areas. The video image also struggled to render a range of colors with fidelity to their
appearance in reality due to the mechanisms by which color information was sampled and stored
by video media. The range of rich color tones that could be reproduced with panchromatic film
stock would be impossible to recreate with video, as can be seen in films broadcast on television
or reproduced on videotape. Additionally, and partly due to the medium’s low resolution, the
lack of visual definition often led colors to bleed across fine lines. An individual’s red shirt
might blend and bleed with the blue sky in the background. Film was more accommodating to a
wider range of brightness and color than video.
Video also suffered (or benefitted, depending on who is talking) from a particularly wide
depth of field. While depth of field can be manipulated by the camera operator as a function of
aperture and focal length, a video camera’s depth of field is also affected by sensor size – the
smaller the sensor, the wider the depth of field. Most video cameras in the 1990s had very small
sensors, a design choice driven partly by the goal of lowering manufacturing costs and keeping
camera bodies small, especially in the consumer market. Additionally, in order to render color
more accurately, higher-end video cameras used a separate sensor for each of the three colors in
the additive model – red, green, and blue – and the use of three sensors further necessitated a
smaller individual sensor size. Video camera operators could potentially produce a narrower
depth of field, but only by shooting at extreme telephoto, which was sometimes unwieldy, as the
camera would have to be placed quite far from the subject to accommodate the longer focal
length. Additionally, because most video cameras outside the highest professional tier had fixed
lenses, and because these one-size-fits-all lenses sacrificed extreme wide and telephoto angles in
favor of a more general and practical middle ground, zooming in far enough to blur the
background was often impossible.
The result of these mechanical realities was that almost every part of the video image,
foreground and background, was in focus at all times. For some of video’s uses, the wide depth
of field was a great benefit. Coverage of spontaneous events as in sports broadcasting, news
reporting, or documentary was simplified in that maintaining focus in a changing environment
was much easier. For productions with a minimal skill threshold (e.g. home video), the wider
depth of field allowed the camera operator more room for error, as maintaining focus on a
moving object or with a moving camera using a shallow depth of field is quite difficult. For
fiction moviemakers, though, the difficulty in separating foreground from background and
isolating people or objects through selective focus was a continuous challenge, and a clear
optical division between film and video.
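The optical relationship described above can be made concrete with standard depth-of-field arithmetic. The following is an illustrative sketch of mine, not part of the dissertation: the sensor diagonals and the diagonal/1500 circle-of-confusion rule are conventional approximations chosen only to show the direction and rough magnitude of the effect.

```python
# Illustrative sketch (not from the dissertation): standard depth-of-field
# arithmetic showing why a small video sensor keeps more of the scene in
# focus than 35mm film at the same framing and aperture.

def depth_of_field(focal_mm, f_number, subject_mm, coc_mm):
    """Return (near, far) limits of acceptable focus in mm; far may be inf."""
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (hyperfocal - focal_mm) / (hyperfocal + subject_mm - 2 * focal_mm)
    if subject_mm >= hyperfocal:
        far = float("inf")  # everything beyond the near limit stays acceptably sharp
    else:
        far = subject_mm * (hyperfocal - focal_mm) / (hyperfocal - subject_mm)
    return near, far

subject = 2000.0   # subject 2 m from the camera
f_number = 4.0

# Matching the field of view: focal length scales with the sensor diagonal.
film_diag, video_diag = 28.0, 6.0          # ~35mm film frame vs. ~1/3" video sensor
film_focal = 50.0
video_focal = film_focal * video_diag / film_diag

film_dof = depth_of_field(film_focal, f_number, subject, film_diag / 1500)
video_dof = depth_of_field(video_focal, f_number, subject, video_diag / 1500)

# The film image holds only a narrow band around the subject in focus; the
# small-sensor video image holds several times that range.
print("film:  %.0f mm to %.0f mm" % film_dof)
print("video: %.0f mm to %.0f mm" % video_dof)
```

With these illustrative numbers, the film setup holds roughly a quarter-meter of the scene in acceptable focus while the small-sensor video setup holds over a meter, which is the "almost everything in focus" condition the paragraph above describes.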
Limitations in resolution and fidelity affected the video image such that even a still frame
of video would look noticeably different from film. That said, perhaps the most noticeable and
commented-upon (within the trade press) marker of the video image lay in its rendering of
motion. One of the primary links between the photographic image and the cinematic image (and,
as a result, one of the central theoretical points of discussion regarding the ontology of cinema) is
the mechanical fact that film exists as a series of still images. The “magic” of the movies, as
Hugo Munsterberg reported in the early days of cinema, is that they are made inside our minds;
our eyes are capable of identifying individual stills, but our brains fill in the gaps and make them
move.[37]
Video is the opposite. It is never still. It is never a solitary image. When a videographer
films a scene and the analog camera’s photosensitive plate registers light through the iris, it does
so not as a series of still pictures, but as a continuously-drawn line. With a North American video
camera and, consequently, on a standard definition NTSC television, the image is divided into
480 horizontal lines. When the television receives the electronic signal from a camera, broadcast,
or videotape, its electron gun fires at the top left corner of the screen, drawing a horizontal line
from left to right (line 1). When the gun reaches the end of the screen, it proceeds back to the left
side of the screen and draws another line from left to right further down the screen. Importantly,
though, the television does not draw lines 1-480 in numerical order. Instead, it draws the odd-
numbered lines first, effectively producing half an image (a field), and then returns to the top of
the screen and draws the even-numbered lines, starting with line 2. The process is like reading
lines in a page of a book, but skipping a line with every pass. Upon reaching the end of the page,
the reader would return to the top and read the skipped lines. Thus, a “frame” of video consists of
the gun painting the entire television screen twice – a frame is made of two fields. The reason for
this process, called “interlacing,” is twofold.[38]

[37] Hugo Munsterberg, The Photoplay: A Psychological Study (New York: Appleton, 1916), 77-78.
First, reducing the image information at any given
instant reduces the bandwidth of the video signal; showing half an image at a time requires less
signal than showing a full image. Achieving a smaller bandwidth facilitates television
broadcasting and allows for a higher total image resolution.
The second reason for interlacing has to do with frame rate – a seemingly trivial technical
element that would end up having significant qualitative effects in the early years of digital
video’s development. While film has largely been shot and projected at 24 frames per second
since the standardization of sound recording and playback technologies, video has historically
been shot at 30 fps in North America (NTSC video) and at 25 fps in the rest of the world (PAL
or SECAM video).[39]
However, due to a television set’s inherent brightness from the illuminated
phosphorescent screen, playing video at 25 or 30 fps would produce a noticeable and distracting
flicker. By interlacing video, because each field is alternated so rapidly and because the spaces
between scan lines are so small, the frame rate effectively appears to be twice that which it
actually is. 30 frames are displayed per second, but they are presented as a series of 60 half-
frames. Thus, the appearance of 30 fps video is akin to that of 60 fps (25 and 50, respectively, for
PAL/SECAM). The faster effective frame rate is visibly noticeable in comparison to 24 fps film.
Motion appears much smoother on video and lacks the subtle strobing effect that has come
to be associated with film (and, as I will later discuss, notions of the cinematic) over decades of
use.

[38] After the switch to HD television and digital broadcast, most television images are no longer
interlaced. As this discussion of the video look is largely chronological, the description of
interlacing applies largely to the 1990s and early 2000s. Back when TVs weighed a lot.

[39] Technically NTSC is 29.97 fps (an additional reminder of video’s lack of full still images), but
rounding up to 30 fps is common verbal shorthand for obvious reasons.
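The field-by-field scan order and effective-rate arithmetic described above can be sketched in a few lines of Python. This is an illustration of mine, not part of the dissertation; line counts follow the NTSC convention of 480 visible lines.

```python
# Illustrative sketch (not from the dissertation): the interlaced scan order
# described above. A frame of NTSC video is drawn as two half-resolution
# fields (odd-numbered lines first, then even), doubling the apparent
# refresh rate.

VISIBLE_LINES = 480
FRAME_RATE = 30  # nominal NTSC; precisely 29.97

def fields(total_lines=VISIBLE_LINES):
    """Return (odd_field, even_field): the two passes that make one frame."""
    odd = list(range(1, total_lines + 1, 2))   # first pass: lines 1, 3, 5, ...
    even = list(range(2, total_lines + 1, 2))  # second pass: lines 2, 4, 6, ...
    return odd, even

odd, even = fields()
effective_rate = FRAME_RATE * 2  # two fields per frame -> 60 fields per second

print(odd[:3], even[:3])    # [1, 3, 5] [2, 4, 6]
print(len(odd), len(even))  # 240 240
print(effective_rate)       # 60
```

Together the two 240-line fields cover all 480 lines, but because they are alternated 60 times per second, motion on a CRT reads as 60 updates per second rather than 30, which is the smoothness the trade press associated with the "video look."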
If one were to speak of medium specificity with regard to video’s image specifically, it
would be drawn from these combined technical specificities: low resolution, small dynamic
range and limited color reproduction, full frame images, wide depth of field, high frame rate
motion rendering. Regardless of project, regardless of genre, regardless of production value or
production context or camera or shooting format, anything shot on video exhibited what the trade
press called the “video look” – a term I will continue to use throughout. While this qualitative
“video look” was certainly visible and recognizable to even a layperson in the 1990s, it was so as
a sum of quantitative differences. The understanding of the technical workings of those
differences, and how to manipulate them, required a much more complex skillset. Amateur
camcorders often came bundled with mini “best practices” guidebooks, though these would
rarely go beyond providing the most basic tips for use and did not contain more complex
explanations of concepts like “depth of field” or “dynamic range.” That more intricate technical
knowledge was gleaned from professional experience, formal training, or hobbyist reading of
trade press publications. Thus, the markers of the “video look” would be easily recognizable by
practitioners across the spectrum of professionalism, but the notion that these technical elements
might be anything more than fixed properties of the medium – that they could be manipulated
with the coming of digital technology – required specialized knowledge, and remained a key
point of separation between the true amateur and the more tech-savvy professional.
The “Video Look” and its Cultural Connotations
Man made the pixel, but God made the molecule!
- Adán Madrigal, videographer
There are plenty of art forms and cultural practices that make use of a low-fi aesthetic.
Watercolor painting. The gif. Punk rock is so entrenched in the low-fi that its name itself has
become synonymous with the practice – “We don’t have a lot of money, so we’re just going to
have to do it punk rock-style.” In these instances, the lack of fidelity in the medium and/or the
methods used to capture the content add a certain quality at odds with the sheen of the realist or
the modern, acting as an overt rejection of the cleanest and brightest for something a bit dirtier.
In 2012, Alejandro González Iñárritu shot an experimental short movie called “Naran Ja” on
VHS. He described the process:
VHS texture is for digital what grain used to be for film...Digital
and most film stock is so sleek now, that everything looks very
plastic and unnatural. We have lost the skin of the images.
Cameras reproduce reality much more sharply than my eyes can
see and that’s why it looks fake...I thought this $39 VHS camera
reproduced an exquisite, moshy-moshy, beautiful, horrific greeny-
yellowish skin that triggered my emotional memory of TV series
from the 70s. I loved it.[40]
Iñárritu’s use of VHS draws upon the low-fi aesthetic in two ways. First, it was done as a
response to the clarity and sheen of contemporary digital cinema. Second, it was partly an act of
nostalgia, making use of a technology whose look was tied to a certain historical period. From
the vantage point of a filmmaker in the later digital age, VHS can bring a specific texture for a
specialized, experimental project; it offers the potential to be an alternative to the norm. In the
era that Iñárritu’s nostalgia was channeling, however, VHS was the norm. Low-fi video was not
a choice – video was low-fi. Iñárritu opted for VHS on his experimental piece but, not
surprisingly, for Birdman (2014) he and DP Emmanuel Lubezki shot with the highest-end video
possible, using an Arri Alexa digital cinema camera.

[40] Kathleen Flood, “The Premiere of Alejandro G. Iñárritu’s Short Film,” Vice (October 26,
2012), https://creators.vice.com/en_us/article/3dpwxw/exclusive-the-premiere-of-alejandro-g-
i%C3%B1%C3%A1rritus-short-film-inaran-jai.

Also not surprisingly, the look of the two
movies is drastically different. Had Birdman been shot on video 20 years prior, that margin of
difference would have been significantly less. High-end video shot on a DigiBeta camera in the
1990s was certainly superior in its optics and electronics to consumer video, but the difference in
image quality might require the eye of a technician to notice. The markers of the “video look”
remained even in the professional spheres of video production, and the features of a particular
format were largely the features of the medium as a whole.
Video has long been a choice for experimental moviemakers operating well outside the
technical standards of the feature film industry precisely because of its visual difference. Video
activists in the 1960s, using the earliest commercially available versions of analog video,
likewise utilized the technology over film despite its visual imperfections as a symbolic, anti-
elitist and anti-capitalistic gesture.[41]
For those seeking image quality of a certain technical
standard, though, video’s low-fi difference was less of a unique creative choice and more of a
serious technical limitation. As a shooting medium, video was functionally advantageous to film
for three primary reasons: it was cheaper, it allowed for live broadcasting, and its storage
medium was more flexible (e.g. longer recording times, less technical expertise was required,
etc.).[42] If these three features were not seen as benefits (e.g. one had a large budget, an
experienced crew, and was not shooting live), then film was the obvious choice, as it was the
medium with higher visual fidelity. As a result, video was used more commonly in scenarios
that capitalized on its specific utility benefits.

[41] Abigail Susik, “Sky Projectors, PortaPaks, and Projection Bombing: The Rise of a Portable
Projection Medium,” Journal of Film and Video 64, no. 1-2 (Spring/Summer 2012): 86.

[42] The specific nature of these utility benefits is discussed in greater detail, not surprisingly, in
the following section on the development of camera utility.
Because the “video look” was readily identifiable through its low-fi difference from the
film image, and as a result of its repeated use in particular contexts, the video image was
subsequently coded and read differently from the film image. Of the four realms of production
that I identified in my introduction, in the 1990s, video was primarily seen in low-budget
productions, home video, and industrial contexts. High-budget productions did certainly use
video, but generally only when its use was necessary. The Super Bowl was shot on video
because it was broadcast live. Necessitated use aside, generally all pre-recorded high-budget
production was shot on film, if only because the financial and logistical obstacles associated with
film were not a concern. By virtue of being high-budget, the higher cost of film stock and the
need for more technically skilled crew members did not factor into the financial equation in the
same way that they would for a $10,000 film in which film stock accounted for a significant
portion of the film’s budget and a professional DP with the alchemical knowledge to shoot
35mm might break the bank.
The manner in which the “video look” was coded through its use in low-budget,
domestic, and industrial contexts is evidenced by the discourse within the independent
moviemaking and video production trade press. These two communities were actively invested
in and concerned with questions of “look” in relation to technology, and far more so than
practitioners in domestic or industrial contexts, in which aesthetics were less scrutinized or
irrelevant altogether. Amongst these communities, discussions of the “video look” were
noteworthy in that they were often marked with a tone of skepticism and inadequacy. Dale Newton
and John Gaspard summarize in their guide to digital moviemaking, published in 2001:
Video has long held a stigma in the feature film world that’s been a
barrier to distribution. If an independent feature was shot on video,
it was considered an amateur production that was relegated to
cable access or maybe late night broadcast TV. If an independent
feature was shot on film, it was admitted to the next level and
considered for distribution.[43]
While Newton and Gaspard were speaking to feature filmmaking specifically, their concerns
were echoed across a broad spectrum of practitioners concerned that their project’s look, and the
way in which that look was read, would dictate that project’s success, whether it be defined
financially, artistically, culturally, or all of the above. For many video skeptics, the “video look”
was not simply an aesthetic choice, but a veritable scarlet letter. The tradeoff that came with
shooting on video was not just lower resolution, but lower cultural capital. The very recognizable
look of video was coded in a particular way, and one that was frequently viewed by practitioners
in the early digital era as a potential limitation. Many considerations of video as an instrument in
the “digital revolution,” especially in the trade press, have been based on the hierarchy in image
quality that was generated in the analog and early digital era. The manner in which the image
was coded and read was a visible manifestation of the separation in power amongst the cultural
elites that could afford to shoot on film and those who could not.
Within the trade press in the late 1990s and early 2000s, the two dominant discourses
surrounding the coding of the “video look” were ones of aesthetic legitimacy and illusion. In the
case of the former, the “video look” is an obstacle in any attempt to achieve validation in the
eyes of critics, film festivals, distributors, and the public more generally. In the latter, video is
associated with some loose notion of realism, in which video’s smooth motion rendering makes
the suspension of disbelief in fiction projects more difficult (whether they be narrative films,
commercials, music videos, etc.).

[43] Dale Newton and John Gaspard, Digital Filmmaking 101 (Studio City: Michael Wiese
Productions, 2001), 104.

Sean Cubitt’s analysis of video as a medium comments on its
semiotics: “Video is like language less because it is a ‘langue,’ a systemic organization of rules
and difference, and more because it is composed of ‘paroles,’ of instances of usage in which their
construction, their textuality, and their reception are all at play.”[44] Any attempt to understand the
stigmatization (in Newton and Gaspard’s terms) of the “video look” must involve not just an
analysis of its medium-specific properties, but of video’s usage in everyday society (i.e. common
frames) and its use in creating artistic and non-artistic textual objects (i.e. intertextual frames).
Umberto Eco explains the concept of common frames, quoting linguist Teun van Dijk:
“frames are ‘common knowledge representations about the “world” which enable us to perform
such basic cognitive acts as perception, language comprehension, and actions.’”[45] The
recognition of the “video look” by a viewer elicits inferences based on his or her common
experience with its usage in society, i.e. “Where else in my life have I experienced this type of
imagery and motion rendering, and what sensations did it evoke?” Through common frames, the
video image is read apart from the content which it contains, and more as a familiar
technological object. In his seminal work on video as a medium, Roy Armes reminds us, “No
aesthetics of video can omit totally those applications in which the nature and quality of the work
is secondary.”[46] Intertextual frames, on the other hand, involve weighing a text specifically
against other texts.[47] Recognition of the “video look” along with the content being rendered
elicits comparisons to other texts in which A) that content was experienced and B) that image
quality was experienced.

[44] Sean Cubitt, Videography: Video Media as Art and Culture (New York: St. Martin’s Press,
Inc., 1993), 13.

[45] Umberto Eco, The Role of the Reader (Bloomington: Indiana University Press, 1979), 20-21,
quoting Teun van Dijk, “Macro-Structures and Cognition” (Paper presented at Twelfth Annual
Carnegie Symposium on Cognition, Carnegie Mellon University, Pittsburgh, May 1976).

[46] Roy Armes, On Video (London: Routledge, 1988), 196.

[47] Eco, The Role of the Reader, 21.

Thus, by the early 2000s the “video look” had been coded both through
its history of textual representation, primarily on television and, to a lesser extent, at the
art gallery, and also through more general social uses like home video and industrial/commercial
video feeds. In the following two brief sections, I outline the manner in which the video image
came to be coded through practice as both illegitimate and nonfictional, and how this coding
served to suppress the radical potential of the video camera in the early digital era.
The “Video Look” as Illegitimacy
Q: I would like to give my videos a film look. Can you help? – Tony[48]
This was the question burning in the mind of one reader of Videomaker magazine in
2001, and if the abundance of articles on this topic across the moviemaking press was any
indication, it was burning in the minds of many other videographers as well. Given the titles of
certain publications, like Videomaker and Digital Video Magazine, it is surprising and ironic how
often the responses to questions like Tony’s were marked with an air of inferiority. The
magazines are keen to acknowledge video’s democratizing potential, its crucial role in
entrepreneurship, its importance in creating individual and cultural archives, and its use for
personal satisfaction, but are admittedly less enthusiastic about its look. Videomaker’s frequent
“film look” or “cinematic look” tutorials, in which authors provide tips to make one’s video
more closely resemble the look of film, often included phrases like “film looks better than
video”[49] and “projects shot with film simply look better”[50] and “the moviegoing public
aesthetically prefers film to video.”[51]

[48] Bruce Coykendall, “Tech Support,” Videomaker (August 2001): 8.

[49] Eric D. Franks, “Is 24p For Me?” Videomaker (August 2003): 45.

[50] Brian Peterson, “Getting that Film Look: Shooting Video to Look Like Film,” Videomaker
(July 2008): 37.

As a result, such articles read like the newsletter of
perennial second placers, no doubt a reflection of the stigmatization of video, but also an
effective contribution to it, given the magazines’ readership. For independent moviemakers,
artistic validation often came only if films found an audience, and in the pre-YouTube era, this
opportunity was generally sought through exhibition in a theater – especially through film
festivals – or on television. As Newton and Gaspard point out in Digital Filmmaking 101, there was a
general feeling that the various cultural gatekeepers were less likely to allow passage for a movie
if it had a “video look.”[52] Accurate or not, filmmakers presumed a semiotic reading on the part of
their audiences that interpreted the “video look” as signifying a medium less aesthetically
legitimate than film. The same was true for commercial producers as well, wherein shooting on
film lent a sense of legitimacy to each individual project and, through the collected works, a
production company as a whole, potentially yielding more lucrative contracts in the future. The
sense of illegitimacy extended even to pornography: Chuck Kleinhans notes the commonplace
belief that porn shot on film was aesthetically preferable, even to the point of distributors using
the phrase “Shot on Film” on the video packaging as a marketing ploy.[53]
Part of the legitimacy debate stems from a larger one regarding the hierarchy of film and
television more broadly. TV scholar Milly Buonanno’s book, The Age of Television, traces the
history of the latter, building on and critiquing John Ellis’s influential conceptual framework of
the glance (of television) vs. the gaze (of film). Based on TV’s usage within domestic spaces,
Ellis argues that “TV’s regime of vision is less intense than cinema’s: it is a regime of the glance
rather than the gaze. The gaze implies a concentration of the spectator’s activity into that of
looking; the glance implies no extraordinary effort is being invested in the activity of looking.”[54]

[51] Patrick Lang, “24p For You and Me,” Videomaker (April 2003): 15.

[52] Newton and Gaspard, Digital Filmmaking 101, 104.

[53] Chuck Kleinhans, “The Change From Film to Video Pornography: Implications for Analysis,”
in Pornography: Film and Culture, ed. Peter Lehman (New Brunswick: Rutgers University
Press, 2006), 155; 158.

Ellis’s work contrasts “a high value for the cultural and ethical works of cinema” with “a low
opinion of the aesthetics of television … the televisual image is regarded as fundamentally
incapable of attracting and captivating our gaze.”[55] Even if, as Buonanno argues, Ellis’s binary
lacks a middle ground to account for television’s hypnotic properties, the hierarchy between the
two media formats is reflected often in academic work, within production communities, and in
the public at large[56] (though certainly more in the 1990s than the present). The apparent
illegitimacy of television could equally be extended to video as well, as the “video look” has
been confined almost exclusively to this “lesser” medium since television’s inception. Up until
the 2000s, projects shot on film were seen both in cinemas and also on television, but projects
shot on video were largely seen only on television.
Video’s absence from the cinema space was partly practical. Until the 2000s, video
projectors were not a standard feature in most multiplexes. If a video were to screen in a theater,
it would have to undergo a tape-to-film transfer onto a film strip and, in doing so, much of its “video
look” would be removed. Thus, even if a project shot on video could be seen in a theater, its
“video look” would not. Even aside from the technical obstacles in exhibition, there was a lack
of video work in feature moviemaking in general, partly as a result of technical limitations in
editing. Before the availability and popularity of nonlinear editing systems in the 1990s, video
was edited linearly, tape-to-tape. The process essentially entailed pressing “play” on one tape
deck and “record” on another.

[54] John Ellis, Visible Fictions (London: Routledge, 1982), 137.

[55] Milly Buonanno, The Age of Television: Experiences and Theories (Chicago: The University
of Chicago Press, 2007), 37.

[56] Ibid., 38.

As Brian McKernan’s history of digital cinema explains, the linear
editing process was notably inaccurate until the late 1970s, and it was not until the advent of
nonlinear systems in the digital era that video editors could cut with the ease and precision of
film editors.[57]
Video’s ontology as a continuously running signal has frequently been a point of
artistic experimentation in the world of installation art and is an obvious necessity in the case of
live television, but for moviemakers or commercial editors needing to access the fundamental
building block of cinema, the frame, video was cumbersome at best and impossible at worst for
decades. The ire with which linear video editing is discussed by video editors in the present day
(myself included) likely accounts for some of the negative kneejerk reactions to the “video look”
on the part of filmmakers in the early 2000s, if simply by association. The bigger issue, though,
was the resulting lack of video in fiction projects – especially ones with high production values.
In addition to hierarchies of legitimacy in exhibition format, similar hierarchies existed in
areas of style and genre. For content shot on video, genre privileging was less of a contributing
factor in the first decades of television as video was used, at least on occasion, to produce nearly
all styles and genres, fiction and nonfiction. Thus, the “video look” would not initially have been
coded with any particular high or low cultural status intertextually, aside from the small size of
television screens themselves, which were a frequent target in exhibitor and film studio
mudslinging. In the 1980s, however, significant technological innovations led to aesthetic trends
that colored the video image differently. John Caldwell outlines two major modes of production
in the 1980s – the cinematic and the videographic. The cinematic here refers to film stock and/or
a “film look,” but also a corresponding increase in production value and more elaborate
“feature-style” cinematography.[58]

[57] Brian McKernan, Digital Cinema: The Revolution in Cinematography, Postproduction, and
Distribution (New York: McGraw Hill, 2005), 13.

As film stocks and telecine transfers onto videotape became more
efficient and affordable, there was a surge in the use of film to produce dramatic shows,
relegating the videographic mode to areas of nonfiction programming or, if fictional, shows with
a live audience.[59]
As a result, throughout the 1980s and 90s, the “video look” became associated
with liveness and reality (as will be relevant in the following sub-section) but also with
programming that was stereotypically “low” culturally or, at least, low budget – daytime talk
shows, most sitcoms, soap operas, local news and commercials, reality TV, home shopping
networks, etc.[60]
Additionally, because so many of these formats were filmed live or live-to-tape,
the control over the image that exists in single-camera production was impossible, as shots had to
be framed and studio sets lit to accommodate multiple camera angles simultaneously, yielding a
less precise and less polished look.
Because of the spike in film productions in the two decades prior to the digital
filmmaking era, and because of film’s association with culturally higher genres, the “video look”
was coded through its content on television as less legitimate precisely at a time in which video
was becoming a more viable medium for production on the part of low-budget and amateur
videomakers; camcorder culture developed concurrently with the increase in film production on
TV. In addition, as much as the advancements in video technology enabled independent
moviemakers and low-budget commercial production companies to produce higher-grade images
in the late 90s, the affordability of that same technology also enabled image creation on a much
wider scale in the home video market. In what was likely the most damning common framing of
the “video look,” for every independent movie shot on video, there were thousands of home
movies being produced on the same format, coding the “video look” through its recording of the
banal, the quotidian, the decidedly un-cinematic.

[58] John Caldwell, Televisuality: Style, Crisis, and Authority in American Television (New
Brunswick: Rutgers University Press, 1995), 12.

[59] Ibid., 12-14.

[60] Ibid., 19.
Gavin Smith’s characterization of video’s cultural coding suggests a reading that goes
beyond the uses of the format and to the users as well. He writes:
[Film] has, for lack of a better term, a moral problem with video,
which is admitted to movie territory mostly as an inferior, suspect
Other. Certainly film’s rapid surrender of the pornography sector
to video in the early years of home video’s emergence
conveniently tainted the upstart medium. In the movies, video is
implicitly impure or impersonal at best; more usually, it connotes
the sinister insidiousness of surveillance, the banality of TV, or
generic estrangement/voyeurism/malevolence…[61]
On one hand, Smith’s use of the term “moral” is driven by what he sees as implicitly negative
readings driven by video’s use in pornography or surveillance. Perhaps inadvertently, though, his
use of the term “Other” hints at the very specific ways in which the video look was deemed
culturally inferior. Video may have been a technological Other, but considering that film was the
shooting medium of the largely white, wealthy, male cultural gatekeepers, video was, by default,
the medium of the Other. Just as video was coded by its absence in the theatrical space, film was
similarly coded by its lack of use in marginalized communities due to its cost and technical
complexity. This particular brand of cultural coding is especially relevant in this dissertation’s
subsequent discussions of the “revolution” in digital video image quality, in which the act of
videomaking by low-budget and amateur practitioners demonstrated the potential to shift the
balance of power in media influence through the altering of video’s qualitative coding.
[61] Gavin Smith, “Straight to Video,” Film Comment 33, no. 4 (July/August 1997): 54.
The “Video Look” as Broken Illusion
The rendering of motion in the video image has long been associated with a general
notion of “realism,” a term that, despite its inherent ambiguity, comes up frequently in “film
look” tutorials. This reading of the video image might seem odd, given the medium’s lower
visual fidelity, but it derives more specifically from its higher temporal resolution – its faster
frame rate. As one such tutorial states quite plainly, “The biggest downside to video’s frame rate
(30 [interlaced] frames per second) is that it looks too much like reality.”[62] The apparent inability
to produce fiction works with video was, again, not strictly an aesthetic limitation – the cultural
significance of fiction moviemaking goes without saying. The fact that the only medium deemed
worthy of the fictional mode was prohibitively expensive kept a form of representation out of the
hands of the overwhelming majority of the population. This is not to say that amateurs or low-
budget practitioners didn’t shoot fiction projects on video – they surely did (I myself did).[63] But,
judging from the articles across videomaking magazines, shooting on video came with the
knowledge that the practice was done with a caveat, an asterisk, a visible mark of separation that
filmmakers and audiences had to overlook (or look through).
Issues with faster frame rates are not restricted to early digital video in the 1990s but are
seen in current high definition video as well. To jump forward nearly 20 years, when
Peter Jackson produced The Hobbit: An Unexpected Journey in 2012, he opted to shoot at 48 fps,
resulting in motion rendering very similar to the 60 field-per-second interlaced video of the
1990s. His rationale was that the higher frame rate allowed for smoother motion, heightened
62. Coykendall, “Tech Support,” 8.
63. It is also worth pointing out that lower-budget and amateur projects (especially in earlier decades) shot fiction projects on film as well, but these faced different obstacles, ones that the use of video remedied (i.e. video was cheaper and easier to use). Technical ability and the cost of film stock limited practice in a different way.
realism, and an image that was easier on the eyes of 3D audiences.64
Although the high-end Red
Epic digital cinema cameras featured imagery that was cinematic in all other respects (it mirrored
35mm film in resolution, depth of field, dynamic range, color sampling, etc.), the one divergence
in frame rate led to a slew of negative press at the movie’s release, as many critics found that the
hyperreal, ultrasmooth motion rendering resulted in an uncanny, unpleasant image, looking more
like consumer grade home video, documentary, reality TV, or even a live theatrical production
rather than cinema.65 While Jackson had intended the motion of high-frame-rate (HFR) video to
heighten the sense of realism, many found the effect disconcerting. Film scholar Julie Turnock
explains this effect:
Removing the ‘pane of glass’ between the viewer and what is captured by the camera does not necessarily result in pleasurably enhanced realism or immersion. Instead, it can result in the aesthetically unpleasing effect in which the diegesis looks too much like a film set or live event, rather than a fully realized imaginative world.66
Before any discussion of coding through intertextual or common frames, it is worth
considering the validity of the “realism” argument on purely physiological grounds. Does the
HFR video image appear more realistic solely because it matches human vision more closely? In
the 1970s, visual effects artist and film director Douglas Trumbull ran a series of tests on the
effects of frame rate on perception.67 Working for a research and development wing of
64. Julie Turnock, “Removing the Pane of Glass: The Hobbit, 3D High Frame Rate Filmmaking, and the Rhetoric of Digital Convergence,” Film Criticism 32, no. 2 (March 2013): 41-42.
65. Jesse David Fox, “What the Critics are Saying about The Hobbit’s High Frame Rate,” vulture.com (December 12, 2014).
66. Turnock, “Removing the Pane of Glass,” 44.
67. Trumbull enjoyed great success as a visual effects artist, working on films such as 2001: A Space Odyssey (1968), Close Encounters of the Third Kind (1977), and Star Trek: The Motion Picture (1979). His experiments with frame rate were noteworthy in that similar experiments were rare – in Trumbull’s words, “As far as I know, apart from my own experiments, this was done only once before: Around the World in 80 Days, which was 30 fps.” Simon Howson,
Paramount Pictures, Trumbull and his team ran an experiment in which a group of college
students were brought into a theater and hooked up to a variety of machines to measure
physiological response. Trumbull played footage from different formats and frame rates, from 24
to 72 fps, and found that “there was a remarkable difference in response between 24 frames per
second and 72 … you could see that at 60 frames [the subjects’] responses were very high on the
curve.”68 At high frame rates, the audience’s heart rate, brain waves, skin and muscle response
intensified. Trumbull further claimed that “people unanimously reported not only a greatly
increased physiological response to the film, but better color, better sharpness, a sense of three-
dimensionality, a sense of participation, and an illusion of reality.”69 While Trumbull had
discovered a difference in physiological perception at different frame rates, he made a jump from
science to aesthetics, claiming that higher frame rates were “too vivid and life-like for fiction
film. It becomes invasive. I decided that for conventional movies, it’s best to stay with 24 frames
per second. It keeps the image under the proscenium arch. That’s important, because most of the
audience wants to be non-participating voyeurs.”70 Trumbull’s assumption was that the higher
frame rates associated with a “video look” were intrinsically unsuited to fiction work due to their
greater similarity to human vision.
Even if Trumbull is right in his analysis and higher frame rates more accurately recreate
human vision, it is impossible to attribute all of the uncanniness of higher frame rates to physics
and biology. The argument against Trumbull’s claim, coming from Peter Jackson and his
“Douglas Trumbull: The Quest for Immersion,” Metro Media and Education Magazine 169 (2011): 113.
68. Bob Fisher and Marji Rhea, “Interview: Doug Trumbull and Richard Yuricich,” American Cinematographer 75, no. 8 (August 1994): 56.
69. Gaylin Studlar, “Trumbull on Technology,” Spectator 3, no. 1 (Fall 1983): 7.
70. Fisher and Rhea, “Interview,” 59.
constituency, is one of nurture rather than nature – had cinema always been 48 fps, neither
Trumbull’s nor Jackson’s use of high frame rates would seem unpleasant or uncanny. In
response to Trumbull’s experiment, the physiological intensity experienced by his subjects could
be attributed not just to the format itself, but to its marked difference from that which is
traditionally experienced in a cinema space. Trumbull’s argument additionally runs into
problems from an historical perspective, as much of the fiction programming in the first decades
of television, both dramatic and comedic, was shot on video, and soap operas and sitcoms up
until the HD era exhibited a distinctive “video look.” Thus, a “video look” could indeed be used
to suspend disbelief and therefore was no less intrinsically illusory than a “film look.” The other
side of the argument holds true as well, wherein many nonfiction programs and documentaries
have been shot on film as well as video (sometimes both within the same movie) without any
sense of unpleasantness. If the “video look” of The Hobbit resulted in uncanny distancing rather
than illusion, that is largely because audiences have recently become accustomed to illusion
looking a certain way.
The relationship between television/video and liveness has been a central discussion in
television studies, and the concept is certainly relevant to the coding of the video image. Whether
liveness is indeed a central aspect of television’s ontology or, as Jane Feuer suggests, liveness is
largely constructed through ideology and corporate rhetoric, the fact remains that the video
image can be live, and the film image cannot.71 Thus, up until the digital era and the possibilities
of attaining a “cinematic look” on video, live television always had a “video look.” Much like
the case with cinema and television, wherein a “video look” was almost never seen in a theater
71. Jane Feuer, “The Concept of Live Television: Ontology as Ideology,” in Regarding Television: Critical Approaches, an Anthology, ed. E. Ann Kaplan (Los Angeles: AFI, 1983), 15.
space, liveness can never be experienced with film. In her theoretical investigation of video,
Jarice Hanson argues that because we can only experience liveness through video, and because
the image quality of recorded video looks identical to that of live video, we have been
conditioned to see all video images with a sense of liveness: “The fact remains that videotape
projection and live projection look indistinguishable from each other … we have culturally
understood the televised images produced live or by videotape to present an ongoing
presentation of ‘reality,’ or actuality, meaning that the material is current, and real.”72 I would
argue that the effects of the conditioning of liveness were less relevant in the first few decades of
television, when a wider variety of genres and styles were presented on video, but in recent years
the effect has been more pronounced. To return to John Caldwell’s work on the cinematic vs. the
videographic, as more and more dramatic productions began to be produced on film in the 1980s
and 90s and the videographic mode was relegated to areas of nonfiction programming or, if
fictional, shows with a live audience, the balance between fiction and nonfiction programming
shot on video was thrown off.73
Stepping outside of the entertainment industry, the coding of the “video look” with reality
was furthered in two major sites: home video and CCTV. While home movies were once a thing
of film, as VCRs and camcorders became more commonplace in the 70s and 80s, the domestic
recording of reality was linked to a “video look.” As was the case with legitimacy, the very
technological developments that put video cameras in the hands of moviemakers also put them in
the hands of everyone else. For every fictional narrative video that worked to liberate the
medium from its association with reality, significantly more weddings and birthday parties
72. Jarice Hanson, Understanding Video: Applications, Impact, and Theory (Newbury Park: SAGE Publications, Inc., 1987), 27.
73. Caldwell, Televisuality, 12-14.
firmly attached the “video look” to the nonfictional. Additionally, the liveness of video makes its
image ubiquitous in surveillance culture, linking the “video look” to what is perhaps the least
illusory context of all. With CCTV, the “pane of glass” between content and viewer is turned
into a one-way mirror, allowing reality to be observed in real time, a mode of viewing very much
dependent on belief rather than the suspension thereof. The coding of the “video look” through
these two social contexts has been exacerbated further by the recreation of their modes of
viewing as tropes in cinema and television. Within a fictional work shot on film, the switch in
mode from cinematic to point-of-view surveillance or home video is often marked with a shift in
format as well.
As the video image was coded through practice to be both illegitimate and nonfictional,
the stigma that it developed led to a feedback loop, in which practitioners avoided using video if
possible, further contributing to the stigma. Such a self-perpetuating cultural policing of moving
image formats sequestered the video image to certain applications, creating a false impression of
its teleology as a shooting medium. In the subsequent years of the early digital era, with the
proliferation of digital image manipulation software, the video image itself became a site of
resistance, as the ability to alter its quantitative specifications and the resulting coding of the
image became a newfound reality for practitioners with the technical ability to claim a hold over
the “cinematic.”
Altering the “Video Look”
If it walks like a duck and flies like a duck, it must be a duck.74
- Decoying Your Video to Look Like Film
For big-budget productions, the use of video was essentially nonexistent, as they were
able to afford film stock, and video was never a conceivable option unless it was necessary for
live broadcast. The low-fi “video look” was likewise not a particular hindrance in the home
video market, as camcorders were largely designed and marketed for simplicity of use rather
than high-quality image making, though phrases like “clear picture” always featured prominently
in advertising rhetoric (even if such phrases weren’t referring to any technical elements in
particular). Big-budget productions could afford the culturally superior medium, and for home
video users, lower quality images were not an obstacle in achieving their end goals, which were
largely documentary in nature. To that latter point, the connotations of the video look were never
a terribly strict limitation for many documentaries. The association of the video look with a lack
of illusion was obviously not a problem for non-fiction applications, and the association of video
with lower-budget and culturally low genres wasn’t necessarily an obstacle for documentaries in
cases where content trumped look and style. In some instances, the low-fi look actually added to
the documentaries’ authenticity as the lack of visual polish added to the sense of “grittiness” in
the imagery. Because documentary relied less on precision cutting, linear editing was still
tedious but not impossible. Additionally, cheaper shooting stock and longer tapes were invaluable
advantages of video, worth far more than higher resolution imagery in many cases. Considering
that documentary production is largely marked by a lack of control, images
74. Michael Reff, “Duck, Duck? Goose?? Decoying Your Video to Look Like Film,” Videomaker (July 2006): 49.
that appeared to be “lower quality” were not necessarily as damaging. The biggest impact of the
“video look” within the four major production contexts was with low-budget fiction and
commercial practitioners, as the use of the video image in these two spheres was the most
precarious. Hollywood could afford to avoid video and home videos were less of a commodity
(at least in the early digital era), but for low-budget and commercial practitioners, the use of
video against its stigma directly affected livelihood, artistic success, cultural impact, and public
representation.
Low-budget production could turn to video to save money, but when video was analog,
the limitations in image quality were immediately obvious. Through the “video look,” low-
budget production was stamped as being exactly that. When digital video technology hit the
professional and prosumer marketplace in the mid-1990s, these low- or no-budget productions
had new tools at their disposal.75 Looking at the quantitative markers of video discussed earlier
in this section, the newly-digital incarnation of its camera mechanics allowed for a level of
control over the image that did not exist in the analog era. Many of the practical changes through
digitization will be discussed in this dissertation’s second section on the computerization of
video, but several are pertinent here as they relate to image quality specifically. Video producers
could effectively alter the quantitative qualities of video to achieve a look that differed from that
which the camera captured natively and, in doing so, avoid some of the connotations that were
seen to negatively affect their productions.
75. I use the portmanteau “prosumer” as it appears in the trade press. The cameras and practitioners that bear the label are, as the term suggests, situated midway between the professional and the amateur consumer. The price point of prosumer cameras is higher than that of the average consumer camcorder, and the skill ceiling of the hardware and software is likewise higher, but prosumers often bear the marks of self-training and lack the professional support structure of a union or a production company.
More broadly speaking, the newfound level of image control through digitality was but
one example of a wider trend across videomaking practice, in which control over the image
translated into more control over the results of the image (financial, aesthetic, etc.). While analog
video practice was one in which control over the image and the use of images was difficult (e.g.
linear editing), expensive (e.g. video effect synthesizers), or logistically prohibitive (e.g. personal
distribution), the coupling of the camera with the computer offered the potential for both greater
and more precise control and, eventually, wider access to the tools needed. As with other
technological potential, control over images was generally directly proportional to one’s place on
the image-making hierarchy, as it was tied to finance and formal training. In utilizing new, more
affordable digital technology to gain more control over one’s images, practitioners were able to
effectively ascend the image-making totem pole through technical means previously unavailable
to them.
As discussed earlier in this section, the most noticeable visual markers of the “video
look” were its resolution/aspect ratio, its color/light rendering, its wide depth of field, and its
frame rate, and each was digitally manipulable for effect, though to varying degrees. Video’s
resolution posed a problem for theatrical exhibition, as the limitations in fidelity were magnified
along with the images, but served as less of an obstacle on television, as television sets in the
mid-90s were only capable of standard definition (480 lines of resolution) playback anyway.
Video did not suffer from being broadcast in the same way that film did, as the latter took a
considerable hit to its resolution as it was converted into a video signal. In that way, television
broadcast leveled the playing field between the two imaging media to an extent, as film was
brought down to video’s imagery level; film looked much worse than it did in the theater, but
video looked the same as it did in the camera.
The same was true for color and light rendering, as the richness of film suffered from the
down-conversion as it was crushed into video’s narrower dynamic range. Video, again, faced no
obstacle in television broadcast because its image remained in a 1:1 correspondence with the
image from the camera. Video’s narrow dynamic range was again exacerbated in theatrical
exhibition, but this could be mitigated to an extent with new digital color correcting software.
Individual colors and color spectra as well as specific ranges of light, dark, and mid-tones could
be altered, and color correction could be sculpted to specific areas of the frame, though colorists
had less to begin with when working with video. Video’s narrower dynamic range and color
spectrum were also likely less noticeable in many cases as film colorists would sometimes
intentionally degrade, desaturate, and muddle their film images for effect, so a narrower range
could be more readily viewed as a creative choice than would lower resolution, even though for
video users it wasn’t a choice at all.
All of these color and lighting adjustments could be made with the same accuracy (and
even software) that was used to color grade film, as more films were using the digital
intermediate process, in which a film print was digitally scanned and manipulated using digital
software. While film and video remained distinct media in production, in post they were both
digital, and thus were manipulated with the same tools. Video’s aspect ratio was fixed at a full
frame, but the frame could be cropped on the top and bottom to create a wider-looking image –
the same technique that was used for widescreen on 35mm film, though with a greater loss in
resolution as, again, video editors had less to work with to begin with. Digital technology was
not able to do much with depth of field initially, as blurring the backgrounds with any accuracy
was beyond the software capabilities and processing power of late-90s platforms, though it
would come in later versions of the software further into the 2000s. Digital manipulation aside,
the new breed of cameras was still capable of the same optical manipulations of depth of field
that analog practitioners had used.
While most of video’s quantitative elements could be mitigated to an extent with new
digital technologies, there was one that had an impact on the “video look” seemingly more than
the rest – frame rate. If one were to display on a television a still frame of an event shot on film
next to a still frame of that same event shot on video, they would certainly look different, but not
to a tremendous degree – especially in the era of digital post-production, where brightness and
color could be meticulously manipulated. Put the images in motion, however, and the difference
between the filmic and videographic becomes quite noticeable; the smoother motion of video’s
faster frame rate and interlaced playback is instantly identifiable.
In the 1990s and early 2000s, video’s frame rate was largely unchangeable. However,
many video editing software programs began to feature tools that could be used to de-interlace
the video, effectively allowing “progressive scan” images. 30 fps NTSC video had the
appearance of 60 fps video because of the interlacing process. De-interlacing the video would
take the interlaced frames and combine the individual fields into a single full frame. Thus, 60
field-per-second images would look more like 30 frame-per-second images, which was much
closer to the appearance of 24. The process was even more effective for PAL and SECAM video
outside of North America, which were reduced from 50 fps to 25 – imperceptibly close to 24.
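The field-combining logic described above can be sketched in code. This is a minimal illustrative model, not any real editing software’s API: the “fields” here are lists of placeholder scanline labels, and deinterlace() simply weaves each odd/even pair of fields into one full frame.

```python
def deinterlace(fields):
    """Weave consecutive (odd, even) field pairs into full frames.

    An interlaced stream delivers 60 half-height fields per second;
    combining each pair into one frame yields 30 progressive frames.
    """
    frames = []
    for i in range(0, len(fields) - 1, 2):
        odd, even = fields[i], fields[i + 1]
        frame = []
        for rows in zip(odd, even):  # interleave the scanlines
            frame.extend(rows)
        frames.append(frame)
    return frames

# One second of interlaced video: 60 fields of placeholder scanlines.
fields = [[f"field{i}_row{r}" for r in range(2)] for i in range(60)]
print(len(deinterlace(fields)))  # 30 frames
```

The sketch makes the halving of temporal resolution concrete: 60 fields in, 30 frames out, which is what moved NTSC video closer to the 24 fps appearance of film.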
De-interlacing could be achieved in postproduction with a variety of visual effects
programs like Adobe After Effects and plug-ins specifically designed to remove the “video
look,” like Red Giant’s “Magic Bullet,” Rubber Monkey’s “Film Convert Pro,” and DigiEffects’
“CineLook” and “CineMotion.” These options, previously unavailable in the analog era, allowed
a moderate software investment to stand in place of a larger hardware investment (CineLook and
CineMotion, for example, were bundled together for $299 in 2002).76 Such features were not
limited to low-budget productions, either. FilmLook, a postproduction company, digitally altered
video programming for a number of high-profile clients like HBO, Nickelodeon, and ESPN.77
Other options for de-interlacing existed within cameras themselves. The Canon XL-1, which was
an extremely popular prosumer video camera in the early 2000s, and its more affordable cousins,
the GL-1 and GL-2, all offered a feature called “Frame Mode” in which the de-interlacing
process was conducted within the camera, removing any need for postproduction tinkering. None
of these solutions gave a fully film-like look but, by de-interlacing the video, filmmakers were
able to create what columnist Michael Reff calls a “decoy,”78 and what I am calling a “semiotic decoy.”
As a semiotic system, a motion picture image works as a traditional mimetic system: a
film image of a house, for example, serves as an indexical signifier for the real-world house
being filmed. The mechanical act of filming is, in itself, an objective recording of reality. In the
words of Roland Barthes:
From the object to its image there is of course a reduction – in proportion, perspective, colour – but at no time is this reduction a transformation … certainly the image is not the reality but at least it is its perfect analogon and it is exactly this analogical perfection which, to common sense, defines the photograph. Thus can be seen the special status of the photographic image: it is a message without a code.79
76. Scott Saunders, “Wares,” Filmmaker (2002). http://filmmakermagazine.com/archives/issues/summer2002/columns/wares.php
77. FilmLook corporate website. http://www.filmlook.com/clients.html.
78. Michael Reff, “Duck, Duck? Goose??” 49.
79. Roland Barthes, Image – Music – Text (New York: Hill and Wang, 1977), 17. Italics in the original.
Though this objective, indexical image is without a code (what Barthes calls a “denotation”), it
becomes coded through connotation, which according to Barthes, “is the manner in which the
society to a certain extent communicates what it thinks of it.”80 Changing the camera angle, for
example, does not alter the objective mechanical reproduction, but functions as a connotation
based on the cinematic language system of the filmmaker and the audience. Connotative coding
can also exist through the embellishment of the image by altering its physical texture – that
which Barthes calls photogenia.81 For example, based on a contemporary cinematic language
system, sepia-tinting a photograph of a house changes the sign of “house” into the sign of “house
in the past.” I would suggest that a similar photogeniac coding extends into film and video
shooting formats. An audience’s recognition of the “video look” as a semiotic code relies on a
visual recognition of video’s particular rendering of motion caused by its faster frame rate.
Electronically altering this motion to match a more traditionally cinematic standard (that of 24
fps film) is enough to un-code a video image and, in doing so, deactivate an audience’s
recognition of the “video look” – a semiotic decoy.
One particularly noteworthy decoy was Danny Boyle’s 28 Days Later (2002), shot by
Anthony Dod Mantle. Dod Mantle was a go-to videographer for bigger independent projects for
much of the decade, as he had been involved in many of the Dogme movement’s video movies
and had extensive experience working with the medium. Unlike the Dogme movies, which very
much intentionally played up video’s connotations as a mundane, non-fiction medium, 28 Days
Later opted to shed the motion rendering characteristic of the “video look.” The movie was shot
on the aforementioned Canon XL-1, using the de-interlacing Frame Mode to produce progressive
80. Ibid.
81. Ibid., 23.
scan images and was also shot on PAL, giving an effective frame rate of 25 fps.82 All of the other
quantitative hallmarks of video remained – the image was grainy and low resolution, with
limited color space and dynamic range. A still frame of the movie absolutely looks like video.
However, when played back, its motion does not. The manner in which movement is rendered is
very similar to that of film, as the movie effectively shares the same temporal resolution. While
tech-savvy critics commented on the general “grittiness” offered by the choice to shoot on video,
there was no great uproar over its use as there was with Peter Jackson’s The Hobbit, and the
film’s reception was overwhelmingly positive, both from critics and the general public, as
demonstrated by the film’s success at the box office. The release and success of 28 Days Later
was noteworthy – even inspirational – to low-budget filmmakers who had long dreaded shooting
and releasing their projects on video. My own moviemaking collective at the University of
Chicago, Fire Escape Films, organized an outing to see the movie specifically on the basis that it
was shot on the same camera that we used.
It is interesting to compare 28 Days Later to The Hobbit, released 10 years later, as it was
essentially the complete technical opposite. The Hobbit was shot on top-of-the-line high
definition video that effectively matched the resolution, dynamic range, and color rendering of
film. In its quantitative quality, it was virtually indistinguishable from film in every area save one
– frame rate. Even with an image that was filmic in nearly all respects, the increased frame rate
was enough to send many critics into a filmophilic tizzy. 28 Days Later, which was video-y in all
respects except frame rate, caused no such panic.
82. Douglas Bankston, “All the Rage: Anthony Dod Mantle, DFF Injects the Apocalyptic ‘28 Days Later’ with a Strain of Digital Video,” American Cinematographer 84, no. 7 (July 2003): 83-84.
To return to the concept of the “cinematic,” the nature of that elusive term in the case of
these two films was intimately tied to their technical specifications – their quantitative quality.
The Hobbit, taken along with 28 Days Later, suggests that the biggest point of technical
separation in the “cinematic” was in frame rate – far more than resolution, dynamic range, or
color. It is not surprising, then, that many trade press articles and videos offering tutorials on
“How to Create a Film Look with Video” begin with the notion: “The largest, most important
thing you can do to get the film-like look … is to slow the frame rate down.”83 This slowing of
frame rate was possible only to a degree. Eliminating the interlacing effectively halved the frame
rate, and for PAL and SECAM video users, the final images were quite numerically close to 24
fps. For NTSC users, though, the video would always play back at 30 fps, and the difference was
still enough to contain traces of the “video look.” The fact that some American moviemakers
would oftentimes opt to shoot on a European PAL camera (despite its incompatibility with U.S.
video systems) and then convert the footage to NTSC later, demonstrates the technical hoops
moviemakers would jump through in order to attain a more cinematic look.84
The creation of the semiotic decoy was an act of videomaking practice that was
predicated on the theory embedded in the creation of images. While less verbally articulated than
might be the case in academic scholarship, the “cinematic look” tutorials across the videomaking
trade press demonstrated an understanding of image coding and reading that it might be
manipulated – putting theory into practice, as it were. These articles, which break down the
cinematic into its quantitative parts, are the result of research through practical experimentation –
83. Reff, “Duck, Duck? Goose??” 50.
84. Franks, “24p,” 44.
not unlike the “film experiments” of theorist-practitioner Dziga Vertov.85 While perhaps
somewhat oversimplified for the tastes of some scholars, these “research reports” do offer a
fascinating and often ignored reply to the question, “What is cinema?” or, at least, “What is the
cinematic?” if only in the fact that they approach the answer as something measurable,
something to be adjusted, an action to be taken. Such tutorials delineate the qualitative,
ephemeral “cinematic’s” quantitative origins, and this distinction is not insignificant.
Quantitative specifications can be manipulated, and the means to do so clearly communicated.
These “film look” tutorials, then, are a key text in understanding videography as a revolutionary
practice as they are essentially instructions for how to make one’s images more culturally
impactful.
The efforts on the part of media-makers to bridge the gap between the cinematic and the
videographic, altering their images and their frame rates in order to shed the negative
connotations of the “video look,” were soon picked up within the camera manufacturing
industry. The most noteworthy leap forward came with the popularization of cameras that shot at
24 fps in the early- and mid-2000s, a technical switch that occurred at multiple production levels.
At the higher end, filmmakers like George Lucas led the charge and championed the potential for
shooting features on then-prototype HD digital cinema cameras like the Sony CineAlta HDW-
F900, which was used to shoot a handful of features in 2001 and 2002, including Vidocq,
Russian Ark, Spy Kids 2, and Lucas’s own Star Wars Episode II: Attack of the Clones. 2002 also
marked the release of the 24 fps prosumer Panasonic AG-DVX100 (or the “DVX” in trade
parlance). Unlike the CineAlta, the DVX did not offer any significant upgrades in resolution, as
85. Vertov’s concept of the “film experiment” is one I myself turn to frequently as an act of practice and as a teaching aid in the classroom. The act of making need not be strictly about art or storytelling, but more explicitly an opportunity for practical research.
it still shot standard definition video, but its 24 fps, de-interlaced shooting mode was equally
impactful as that of the super high-end Sony, as its lower price point ($3,795 at release) made it
available to a much wider swath of the videomaking population, and to ones that could not afford
to shoot on film as an alternative.86 Postproduction plug-ins like CineLook and CineMotion had
offered the feature of creating 24 fps video along with their de-interlacers in years past, but the
results weren’t nearly as reliable. 30 fps video had to first be converted to 24 fps (which often led
to jittery motion) and then back to 30 through the pulldown process (see previous footnote), and
the resulting effects were far clunkier than actually shooting at 24 in-camera. For the same
reasons, moviemakers wanting to create a film print for distribution found the process much
more forgiving with 24 fps images than with 30 fps, as the latter was often plagued by technical
problems and less-than-stellar results.[87]
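The source of that clunkiness can be illustrated with a toy frame-index calculation: converting between rates whose ratio is not 1:1 forces an uneven drop (or blend) cadence. This is a minimal sketch under my own assumptions, not a reconstruction of any actual plug-in’s algorithm; the function name is illustrative.

```python
import math

def resample_frame_indices(src_fps, dst_fps, n_src_frames):
    """Nearest-lower-frame mapping between two frame rates.

    A 24-to-24 transfer is the identity (every frame survives, 1:1);
    a 30-to-24 conversion must discard one source frame in every
    five, and that uneven cadence is the kind of thing behind the
    jittery motion described above.
    """
    n_dst = math.floor(n_src_frames * dst_fps / src_fps)
    return [int(i * src_fps / dst_fps) for i in range(n_dst)]

print(resample_frame_indices(24, 24, 10))  # identity: every index kept
print(resample_frame_indices(30, 24, 10))  # frames 4 and 9 are dropped
```

Running the round trip in the other direction (24 back up to 30) then has to re-invent the missing frames, which is why shooting at 24 in-camera was so much cleaner than converting in post.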
The release and use of both digital image manipulation software and cameras like the
DVX complicated the image-making hierarchy. While it was once somewhat binary, with film
[86] Technically, the DVX did not produce true 24 fps images. In order to play back on NTSC
televisions, the final camera output still had to be 30 fps. The solution was to shoot at 24 fps but
convert the footage to 30 fps in-camera utilizing a system called “pulldown.” Images were
recorded at 24 fps, and the sequence of frames was then converted in-camera to 30 fps by
doubling/mixing them through the interlacing process. If the camera shot 24 frames in a second
(e.g. frames A through X), then in the final output sequence frame 1/30 would be frame A, frame
2/30 would be an interlaced mix of frames A and B, frame 3/30 would be frame B, and so on. It
was essentially the same process used to convert 24 fps film to video for exhibition via TV
broadcast or home video media. The main advantage of shooting at 24 and playing back at 30
was that the resulting video preserved the motion rendering of 24 fps film, as there are never any
images in excess of the 24 original frames. It is the temporal-resolution equivalent of image
enlargement: if an image is shot at a certain resolution, converting it to a higher-resolution
format won’t actually increase the detail in the original image. Viewing standard definition TV
on an HD television set doesn’t look any “better,” outside of possible psychosomatic reactions.
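The field-doubling arithmetic behind pulldown can be sketched in a few lines. This is a simplified model of the common 2:3 cadence, offered as an assumption for illustration; the DVX’s own cadences (including its 2:3:3:2 “advanced” mode) order the fields differently, and the function name is my own.

```python
def pulldown_2_3(film_frames):
    """Expand 24 fps film frames into 30 fps interlaced video frames.

    Each pair of film frames contributes 2 + 3 = 5 fields, so every
    4 film frames become 5 video frames (24 -> 30). Output frames
    whose two fields come from different film frames are the
    "interlaced mixes" described above; no field is ever anything
    but a copy of an original frame.
    """
    fields = []
    for i, frame in enumerate(film_frames):
        # Alternate between contributing 2 and 3 fields per film frame.
        fields.extend([frame] * (2 if i % 2 == 0 else 3))
    # Pair consecutive fields into interlaced video frames.
    return [(fields[i], fields[i + 1]) for i in range(0, len(fields) - 1, 2)]

video = pulldown_2_3(["A", "B", "C", "D"])
# -> [('A', 'A'), ('B', 'B'), ('B', 'C'), ('C', 'D'), ('D', 'D')]
```

Feeding in 24 frames yields exactly 30 video frames, which is the sense in which the motion rendering of the original 24 is preserved.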
[87] Daryl Chin’s roundup of the digital-to-film prints at the Montreal World Film Festival in 2002
railed against the terrible quality, noting that the 24p transfers fared much better. “Transmissable
Evidence: Is This the End of Film?” PAJ: A Journal of Performance Art 24, no. 1 (January
2002): 44-51.
on one end and video on the other, the tools and technical skill required to produce semiotic
decoys more clearly separated the prosumer and the low-budget practitioner from the other video
users. While the cameras, hardware, and shooting formats of these groups were always different,
as was their skill level, those differences were manifested less in the texture of the images
themselves. Prosumer video content likely had production values above that of an amateur, but
the image texture was still viewed as film’s inferior and, if the trade press is to be believed, the
reading of the images fell more in line with other video users regardless of content. The early
2000s were the era in which cinematic video became a possibility. That is to say, it was not
standard for all video (as it would become in the 2010s), but it was a tool that tech-savvy
videographers could use to separate their work from the pack and amplify their voices by
decoying their video. This act of claiming an image texture above one’s station was – on the
surface and in practitioner discourse – one manner in which digital technology might be
exploited for revolutionary ends though, as I discuss in the following section, the narrative of
self-made digital success was as much mythological as it was reflective of liberation.
Video Revolutions
This camera shoots 24p, but it uses the cheap and easy medium of
Mini DV to capture the footage. For the low-budget filmmaker, I
think this will be a revolutionary format.
- Dan Mirvish, co-founder of Slamdance Film Festival[88]
Panasonic’s DVX received considerable attention within the trade press and its coverage
was notably hyperbolic. Patrick Lang’s review in Videomaker called the camera
[88] Martin Südderman, “Wares,” Filmmaker 11, no. 2 (2003): 14.
“revolutionary,”[89] as did Dan Mirvish, co-founder of the Slamdance Film Festival, quoted in the
above epigraph. That descriptive term is one I will return to frequently, as it crops up in
countless discussions of video technology. Kurt Lancaster’s guide to digital filmmaking refers to
the “HD revolution.”[90]
Michael Newman’s video history, Video Revolutions, is not surprisingly
based on this very concept of viewing video technology through a lens of societal change.
Throughout the mid- to late-2000s, the trade press frequently referred to some incarnation of a
“digital cinema revolution.” “Revolution” is a loaded and multifaceted word, and its use in these
instances calls for further examination. For many authors, the use of the word “revolution” was
synonymous with “change” more broadly, and the nature of that change was primarily technical.
The manner in which video was being captured and stored shifted fundamentally, the central
defining features of video as a medium were being altered (resolution, 60i to 24p, etc.), and the
widespread shift away from analog technologies reinforced the word “digital” as the most
buzzworthy of the decade. Beyond these accounts of technological change, some of the more
hyperbolic uses of the term “revolution” hinted at something more societal and fundamental –
digital video is going to “change everything.” Or, in less technologically determinist terms,
people are going to change everything, and they’re going to use digital video to do it.
The digital era is certainly not the first time in the history of video that the word
“revolutionary” has been used with this high-stakes connotation. As Stephanie Tripp discusses in
her article on participatory television practices in the 1960s, many of the same claims that are
made about the liberating potential of video distribution sites like YouTube are echoes of those
made about Sony PortaPak cameras and projectors when video was first commercially available
[89] Lang, “24p,” 15.
[90] Kurt Lancaster, DSLR Cinema: Crafting the Film Look with Video (Oxford: Focal Press,
2011), Electronic Edition, Introduction.
to activists and media makers during the “Guerilla TV” movement.[91] Such claims were made
again in the 1990s, wherein the proliferation of the camcorder would “unloosen control of the
messages on the airwaves within the Western world.”[92]
In the digital era, as in these earlier
examples, the technological shifts exemplified by the DVX were revolutionary not just as a
re-defining of industry standards, but in the potential for shifts in cultural power dynamics that came
through that redefinition. I am suggesting, as many reviewers have, that the images produced by
these digital video cameras were more “cinematic” in that they mimicked the quantitative
qualities of analog film and, as a result, replicated some of the qualitative qualities as well.
Specifically, they partially shed the negative connotations that were attributed to the “video
look” in the 1990s and early 2000s. At this time, because the “cinematic look” was based largely
off that of 35mm film, and because the cost of that format was prohibitively expensive, the
“cinematic” was tied to technological standards that were controlled by capital. Plainly speaking,
the cinematic was expensive. The newfound opportunity to alter frame rate in order to claim
some cinematic-ness meant the potential for cultural and financial passing.
Prior to digital and 24p video, if an individual wanted to produce a moving image project,
their choices were: A) Shoot on film, look professional, spend money; B) Shoot on video, look
amateurish, save money; C) Worst of all, not make the project. During the so-called “digital
revolution,” changing the video camera’s frame rate and reaching the cinematic standard became
a potentially revolutionary act through technical alteration. Like the video revolutions that came
before, it was subversive and aggressive – a cultural power grab. Moviemaker Samira
Makhmalbaf summarized the stakes of this act in a talk at the 2000 Cannes Film Festival,
[91] Stephanie Tripp, “Participatory Practices,” 5.
[92] Elizabeth Burch, “Getting Close to Folk TV Production: Nontraditional Uses of Video in the
U.S. and Other Cultures,” Journal of Film and Video 49, no. 4 (Winter 1997): 18.
proclaiming that “the digital camera is the death of Hollywood production,” and further
clarifying:
Three moves of external control have historically stifled the
creative process for a film-make [sic]: political, financial, and
technological. Today with the digital revolution, the camera will
bypass all such controls and be placed squarely at the disposal of
the artist. The genuine birth of the author cinema is yet to be
celebrated after the invention of the ‘camera-pen,’ for we will then
be at the dawn of a whole new history in our profession. As film
making becomes as inexpensive as writing, the centrality of capital
in [the] creative process will be radically diminished.[93]
To read similar accounts in the video and independent film trade press, the far-reaching
implications of the cinematic alterations to video were readily apparent to the practitioner
authors and readers. Publications like Filmmaker, a trade press magazine covering independent
cinema, often highlighted the success of movies shot on video, initially with a tone of
“Howsabout that!” and later with a more sustained interrogation of the technology’s liberating
potential. One feature examined two successful films that debuted at Sundance in 2003 – Peter
Hedges’s Pieces of April and Fenton Bailey’s Party Monster, produced by indie renegade
Christine Vachon – both of which the magazine claimed “wouldn’t have been made without
digital video.”[94]
Videomaker likewise ran a yearly rundown of the digital movies at Sundance and
featured more frequent monthly write-ups of projects that found success at various levels of
production, highlighting their camera choices and production methods and touting their video
origins, for obvious reasons. One featured interview was with the creators of Digiorgazmik
[93] Quoted in Ariel Rogers, Cinematic Appeals: The Experience of New Movie Technologies
(New York: Columbia University Press, 2013), 126.
[94] Anne Thompson, “Fest Circuit: Sundance Film Festival,” Filmmaker 11, no. 3 (Spring 2003): 28.
Media, a hip hop music video company operating out of New York.[95]
The profiling of a small
production company was unremarkable in relation to others in the magazine, but it highlights
some of the commonalities such write-ups share, and those which are meant to speak to digital
video’s revolutionary qualities. Firstly, the magazine characterizes the production company’s
mantra as, “Down with 35mm; long live Mini DV.” Rhetorically, the word choice immediately
positions the production company as literal revolutionaries, providing them with a voice and
language of protest, pitting two technologies against each other like rival regimes. This
verbalized conflict suggests not just a creative technological choice, but an overthrow. To read
further into the technologies themselves, as readers surely would have done, 35mm is the
medium of Hollywood, of the commercial industry, of capital more broadly – “Its very expense
aestheticizes.”[96]
MiniDV is cheap – it is the medium of the people. This is meant literally, as it
was the medium used for most videomaking at all production levels save the higher-budget ones,
but also figuratively as it is the medium rhetorically positioned to overthrow those in power. The
feature goes on to characterize the company’s producers, Tony Walker and Tee Smith, as
“technical wunderkinds,” pairing digital technology with the promise and potential of a younger
generation. Quoting Walker, the article speaks to the game-changing financial advantages of
digital video in the production of music videos:
"Before, you had to have $500,000," he said. "New bands just
starting out [now] can have four or five videos. (Mini DV) has
revolutionized the art."[97]
Here, Walker holds in separate spheres established bands and production companies that have
capital to invest in a project, and newer, non-establishment artists who, prior to digital video,
[95] Digiorgazmik Media was runner-up for this dissertation’s title.
[96] Ganz and Khatib, “Digital Cinema,” 22.
[97] Alan Sheckter, “DV Music Video Duo,” Videomaker 16, no. 6 (December 2001): 15.
were unable to produce anything of the same caliber (or anything at all). Walker goes on to
discuss DMI’s camera of choice, the Canon XL-1, specifically because its Frame Mode provides
a “film feel,” and that, in post, they utilize After Effects in order to “make it seem as close to
film as possible,” as “in video, you’re still judged against 35mm.”[98]
By altering the frame rate
and using digital postproduction tools to mitigate the quantitative differences between video and
film, the low-budget producers at DMI are able to collaborate with local independent artists to
produce work that once would have cost hundreds of thousands of dollars, and only would have
been possible if those artists were properly vetted and groomed by the establishment gatekeepers.
The implication on the part of both the article’s authors and the featured media-makers
themselves is that, by utilizing cinematic digital video as a tool, low-budget producers have the
newfound ability to compete against established media giants and/or circumvent them entirely.
The tale of the individual artist slaying the media goliath is one that returns frequently
and in various incarnations throughout the digital era, especially with the coming of internet
distribution platforms like YouTube and, later, with smartphones (e.g. Samsung’s apparent
on-screen challenge to the establishment during the Oscars). While I wouldn’t deny that the use of
the digital video camera and cinematic video allowed for certain types of success that
moviemakers might have struggled to achieve in the analog era, the rearview mirror suggests that
the giant-slaying narrative was more of a self-perpetuating myth than a crystallization of any
real-world media revolution. This is certainly the case for the first cinematic prosumer video
cameras (the major studios haven’t yet closed), but it is also a pattern repeated throughout other
revolutionary moments in the development of video technology, cycling again and again like a
digital topos. Such repetition makes sense, given its push both in communities of practice and in
[98] Ibid.
the trade press, as an empowering narrative was personally satisfying, forward thinking,
optimistic, inspirational, and catchy.
It would be easy to retroactively dismiss the dominant narrative as selective
(Videomaker obviously didn’t cover many unsuccessful moviemakers or production
companies) and inaccurate altogether (many digital production startups went out of business in
the 2010s), but it is more beneficial to consider the narrative from an ideological perspective, as
its prominence among practitioners speaks to a driving force in the use of cinematic video in the
early 2000s regardless of its accuracy. The myth’s narrative is staged as a confrontation – it
involves a direct challenge to the establishment, stemming from the availability of new tools or,
to extend the confrontational analogy, new weapons; the possibility of creating cinematic
semiotic decoys was like the videographer masses being armed for the first time. Rather than
dismissing it as a failed, naïve pipe-dream, it is worth considering the ways in which this
confrontational ideology was driving many of the practices regardless of its success and, as such,
affected the outcomes of the proliferation of cinematic video, short of total media overthrow.
Like many tall tales, the myth of the media giant slayer had clear basis in reality, as the
use of cinematic video enabled upward mobility in a producer’s cultural capital by allowing a
more expensive cinematic look at a cheaper price. The shift toward a more exclusive cinematic
image no doubt came with deep personal satisfaction for filmmakers and DPs, evidenced by the
various reviews of cameras like the DVX fawning over the gorgeous imagery, now available at
one’s fingertips. Beyond the personal, the command over more cinematic images came with the
hope of increased potential for success, broadly defined. Filmmakers who, despite their best
efforts, could not get their films funded nor self-finance, could increase their chances of
completing a project by shooting on video and drastically lowering the budget. This was even the
case for larger-scale projects like Pieces of April and Party Monster, which were able to secure
hundreds of thousands of dollars in financing, but only after having been turned down by
multiple studios and switching to digital video to drop the budget. InDigEnt Studios, an early
adopter of digital video for moviemaking, ran their studio model exclusively on the cost-savings
of digital. As founder John Sloss explains,
We immediately saw that the economics with digital video were
dramatically different [from 35-mm film]. The all-in production
costs could be a couple of hundred thousand dollars.... The
filmmakers were able to make the pet projects they wanted to do
but no one would fund.[99]
The studio produced several financially and critically successful movies in the early DV era,
including Richard Linklater’s Tape (2001), Rebecca Miller’s Personal Velocity (2002), Gary
Winick’s Tadpole (2002), and the aforementioned Pieces of April, which earned a supporting
actress Oscar nomination. The digital potential seemed strong for micro- or no-budget
production companies and individuals as well, as was noted by Tony Walker of DMI.
The lowered costs for cinematic visuals had countless other beneficiaries: visionary,
auteur directors who were unable to find an investor who valued their vision; artists who wanted
total control over their work without bowing to unimaginative financiers; local producers who
wanted to increase their profits and build their client base through an increase in production
value. To speak of video producers generally, the potential successes were largely aesthetic and
financial. For many groups, though, a lack of funding opportunities was the result of
socio-cultural factors beyond creative differences. Marginalized filmmakers facing institutional
discrimination because of race, sex/gender, or sexual orientation not surprisingly often had great
difficulty surpassing cultural gatekeepers, both in attempting to secure funding prior to shooting
[99] “DV Studio Can’t Make a Buck,” Wired, June 26, 2006.
and also after a film was completed in searching for exhibition or distribution opportunities.
More progressive, more risqué, and more diverse films were at a double disadvantage if they
were priced out of the cinematic. Such difficulties were the most literal incarnation of what
Elizabeth Burch dubbed the “ghettoization” of content shot on video.[100]
For moviemakers battling institutional prejudice in attempting to get their projects made,
shooting cinematic video had two benefits. First, by drastically reducing the shooting budget,
studios might be willing to roll the dice on a filmmaker or a project whose subject matter might
otherwise be discriminated against. It was essentially an attempt to skirt institutional prejudice by
helping present oneself as a less risky investment. Second, for a film that was self-financed,
shooting on cinematic video allowed completed movies to surpass some cultural gatekeepers that
otherwise might have kept it out of traditional exhibition outlets due to its “video look.” Even in
the circuit of film festivals, many of which are more inclined to push aesthetic or narrative
boundaries, films are still weighed on technical merit, and this was especially true at the
“marketplace” festivals designed around industry networking, distribution deals, upward
mobility for cast and crew, and greater visibility for featured cultures. While the independence
that cinematic video allowed (creative and/or financial) was a much-needed answer for some
unheard creative and cultural voices, for many of these same marginalized filmmakers, the goal
was often a wider market – pushing beyond the obscurity of the festival scene or local TV and
into the mainstream. The question of “independence” here is placed in a different light, and
reveals the extent to which the choice to work outside of the film industry is often one of
privilege.
[100] Burch, “Nontraditional Uses,” 21.
Cinema made specifically for marginalized audiences has not necessarily had trouble
finding that audience, but that is largely because it has been carved out from the mainstream and
given its own place. This specificity has, in some instances, driven filmmakers to embrace
amateurism and refuse to play by the rules of the Hollywood system that had largely ignored and
underserved their community. Ed Guerrero’s analysis of the successes and failures of the
filmmakers of the L.A. Rebellion calls attention to that group’s relationship to Hollywood
professionalism. He states, “Rather than seeking financial and critical success within the
framework and values of Hollywood, its mission was to initiate social change through black
cultural production, to build an independent cinema and a ‘conscious’ black audience.”[101]
However, despite their aims, the films of the L.A. School were often less successful in finding an
audience and in their cultural aims specifically because they bucked the traditions of classical
filmmaking.[102]
In this instance, adhering to the freedom of amateurism as a rebellious act that
provided authenticity of voice had to be balanced with the professionalism of the cinema which
the work was largely fighting against. To this end, many black filmmakers following the work of
the L.A. Rebellion alternately aimed to find success and visibility as an end goal in itself. Rather
than embracing the specificity of their audiences, the battle against segregation came in the form
of making mainstream cinema; financial success was cultural success.[103]
However, navigating
this terrain proved difficult, as limitations in control arose in the face of financing and against the
stakes of cultural authenticity. Facing the prevailing stereotype that black cinema doesn’t sell,
how much control are filmmakers willing to give up in order to create a piece of media that
[101] Ed Guerrero, “Be Black and Buy,” in American Independent Cinema: A Sight and Sound
Reader, ed. Jim Hillier (London: BFI Publishing, 2001), 70.
[102] Ibid., 71.
[103] Ibid., 72.
achieves success in a wide market, according to the (mostly white) cultural gatekeepers? In this
instance, the use of digital video had the potential to rekindle the revolutionary potential of
amateurism in two ways. First, by allowing filmmakers to reach a black audience and raise the
social consciousness using the images and storytelling techniques of Hollywood while remaining
largely free of their financing. And, alternatively, in allowing low budget and amateur
filmmakers to bypass the cultural gatekeepers, successfully “crossing over” and finding financial
and critical success within the mainstream at a much lower budget.
Shooting on cinematic video helped eliminate one potential stumbling block for
moviemakers in marginalized communities. The technology, of course, did nothing to actually
eliminate the institutional discrimination that many of these moviemakers faced. The
democratization of the cinematic look, which was once commanded almost exclusively by those
with capital and those in the big-budget production spheres (i.e. largely straight, white men),
resulted in a change in aesthetic command. Moviemakers were able to create images of a
quantitative and qualitative quality that were once largely forbidden financially, but a
democratization of quality doesn’t strictly translate into a democratization of the power that
wielded that quality. In short, shooting on video allowed tremendous changes in what individuals
and low-budget companies could produce, but there were additional obstacles when it came to
getting one’s work seen.
If, as many of the filmmakers and authors I have previously cited suggest, having a
cinematic look led to a movie that was more likely to find a place at a film festival, a festival
acceptance often led to an immediate financial consideration, raising the key question – how was
the film going to be shown? In the 1990s and early 2000s, not many theaters featured a digital
projector, which necessitated a transfer from video to film. Cameras like the DVX made this
transfer significantly easier technically, as the 24 fps images could be transferred to film in a 1:1
ratio without any frame blending or alterations to the playback speed that came with a 30 fps
print. The bigger problem was paying for the film. A video-to-film transfer was exorbitantly
expensive due to both the material cost and the complex technical labor required – somewhere
between $35,000 and $70,000 for a 90-minute film, depending on the difficulty of the transfer.[104]
Some festivals were more welcoming of video than others (Sundance equipped all of their
theaters with digital projection starting in 2001),[105]
but the fact remained that where digital projection
was unavailable, showing one’s film meant an investment that might be several times the total
budget of the production, and was thus an impossibility. Moving beyond the festival space was an
additional consideration fraught with hurdles. Even if one could screen their work at a
specialized event equipped with a digital projector, the overwhelming majority of theaters in
America had no digital projection capabilities. Thus, the transfer from digital to film for
wider-scale exhibition would be a cost that a distributor would have to eat, making a deal that much
more difficult.
Broadcast on television was much more feasible as it eliminated the photochemical step
entirely, but television networks often dictated content a bit more strictly. A television-only
distribution deal was a rarity for a feature-length movie, as it would likely not be particularly
lucrative for either the network or the distributor. Straight-to-home media was more common,
and many festival films found themselves existing after the festival circuit in video stores across
the country, but again, for filmmakers working the margins, the accessibility of their films was
often linked to content. While queer moviemakers might be able to exhibit their work at queer
[104] Sinéad Walsh, “Digital Filmmaking,” Film Ireland 72 (August/September 1999): 23.
[105] Todd McCarthy, “Sundance spectrum broad, often digital,” Variety, December 12, 2000.
film festivals and then later through a video store specializing in queer film, their work remained
in the margins throughout the process regardless of how cinematic their final product was. Such
segregation was especially damning, as moviemakers did not get the same level of exposure that
they would with a theatrical deal. A filmmaker might have success at a festival and acquire a
specialized video distribution deal, but be no more likely to receive funding for their next project
and the content of their film would ultimately be restricted to a specialized, segregated audience
as well.
Expanding beyond the feature, there were additional opportunities in smaller-scale
projects that utilized cinematic video, most notably music video and commercial enterprises. To
return to Tee Smith, one of the founders of Digiorgazmik Media: in an interview on
“Off the Record with Georgette Pierre,” he outlined his career trajectory. After interning with
Spike Lee on Clockers in 1994, Smith faced a career lull until he was able to utilize digital
technology and make a way for himself.
Once digital came into play, it opened up a lot. Like back in 2000,
before anybody really was messing with digital, I was shooting
with Sonys and Canon XL-1s and stuff like that, doing these music
videos and stuff for independent artists and stuff. And that’s what
got me into saying I could direct myself … There’s nothing that
ever really can stop anybody from doing anything that they really
want to do if you really want to do it, you know what I mean? At
this point now.
For Smith, what the new regime of digital image-making technologies opened up was the ability
to work independently. When he struggled getting industry jobs initially, he formed a production
company and made music videos for hip hop artists in New York. He then was able to parlay this
independent success into more established industry gigs, including casting for several big-budget
films and directing for MTV’s Sucker Free. For Smith and companies like his, cinematic video
was utilized as a stepping stone to enter a higher tier of production for both themselves and for
the bands they featured. The cheaper cost of cinematic technologies enabled professional and
creative development at an individual level to a degree that previously would have been quite
expensive, and eventually could be leveraged into more established industry positions. Such
individual ascension into a higher sphere of production was less of a revolutionary slaying of the
media giant and more one of a neoliberal self-made success (and one that was, despite its relative
frequency, certainly equally mythological for some). The use of digital video did not necessarily
let media-makers “beat ‘em,” though it might have made it easier to “join ‘em.”
As Smith’s example suggests, if distribution through traditional outlets was not the end
goal, the possibilities enabled by the use of cinematic video were far more easily realized. This
was certainly true for filmmakers making art for art’s sake, but it was arguably even more
relevant for filmmakers who were denied access not just to distribution channels, but to the very
equipment required to make movies in the first place. While cinematic video became a low-cost
alternative to film for moviemakers in many countries, there were some in which video was the
only option. The proliferation of digital video cameras in the early 2000s saw a surge in the
cinematic output of several Sub-Saharan African countries in a variety of forms. In Nigeria in the
1990s, filmmakers were working under a system in which the state-funded film industry had
largely crumbled and the economy and currency were struggling in a global market.[106]
For
Nigerian moviemakers, shooting on film was difficult for the same financial reasons faced by
most low-budget moviemakers, albeit to an extreme, but additionally so due to the lack of
adequate processing facilities to efficiently develop the film. Coupled with an upswing in urban
violence and mandated curfews, making and exhibiting movies on film in theaters became both
[106] Sheila Petty, “Digital Video Films as ‘Independent’ African Cinema,” in Independent
Filmmaking, ed. Doris Baltruschat and Mary Erickson (Toronto: Toronto University Press,
2015), 256.
logistically difficult and a potentially dangerous prospect.[107]
For moviemakers, shooting on film
meant needing to secure funding from other (largely European) countries, developing the film
out of state, and exhibiting the film primarily in foreign festivals in order to secure funding for
future projects – a cycle of richer countries funding poorer countries that essentially amounted to
an act of cinematic neo-colonialism, according to Cindy Wong.[108]
Not surprisingly, in order to
secure funding, films largely had to meet the needs of the European film festival-going audience
as seen by the various festival-affiliated funding bodies that were dispersing production funds.
The potential loss of creative control in the face of acquiring funding was not just an issue of
creative vision but, along with it, one of authenticity and national identity. Such conditions
were, of course, not unique to Nigeria, as Ganz and Khatib discuss parallel industrial histories
and a similar turn to digital in Iran and Palestine.[109]
The Nigerian videofilm industry arose both through a sense of entrepreneurship and as a
rebellious act against the foreign powers that were exerting indirect control over Nigerian
films. The popularity of home viewing and the cheap and available nature of video tape allowed
Nigerian filmmakers to produce cinema specifically for a home audience, even without the
support of the government film industry. The earliest examples in the late 80s and early 90s were
shot using analog video and were edited tape-to-tape, but the transition to digital video cameras
enabled the same increase in image quality and control over the image as occurred in other
production contexts, allowing easier and more accurate editing and, crucially, distribution via
DVDs. Moviemaking in Nigeria was spread across three major geographic groups: the Hausa in
the north, Igbo in the central region, and Yoruba in the south.
[107] Ibid.
[108] Cindy Hing-Yuk Wong, Film Festivals: Culture, People, and Power on the Global Screen (New Brunswick: Rutgers University Press, 2011), 156.
[109] Ganz and Khatib, "Digital Cinematography," 28.
Moviemakers in each region had
their own end goals and the movies varied greatly, but across all three, the system was largely
self-contained and skirted traditional channels of distribution both nationally (as film distributors
largely no longer existed) and internationally, as distributors turned their noses up at the
videographic images.[110] Though the nature of the system largely flew in the face of traditional
notions of “cinema” – viewing at home, extremely low production values, shot on video –
moviemakers were able to create authentic works, maintain control, reach a massive audience,
and potentially make a profit. The logistical issues that existed in more organized networks of
distribution were still very much present in the videofilm market in Nigeria. While videos were
largely sold in the street markets in Lagos and elsewhere, there were squabbles between the
video producers and the marketers who often paid to produce the films. The industry was very
much an industry, just one that was driven by the use of video cameras rather than film and its
high-cost and technical difficulties.
Along with this massive proliferation of videofilms in Nigeria, similar industries were
developing in Ghana and Uganda. In the latter, moviemaker Nabwana Isaac Godfrey Geoffrey
managed to bring a sense of independence to the market, making films in his poor Wakaliga
neighborhood of Kampala. With his production company, Ramon Film Productions, he created
extremely low-budget (under $200 US) action movies shot on Sony and Panasonic cameras,
edited the movies on computers he built himself from scavenged parts, created his own digital
effects, and then sold the completed DVDs with his cast and crew on the street. With the rise of
YouTube, he eventually began uploading his films to that platform as well, bringing his work to
an international community when his 2010 film Who Killed Captain Alex? went viral.
[110] Jonathan Haynes and Onookome Okome, "Evolving Popular Media: Nigerian Video Films," Research in African Literatures 29, no. 3 (Autumn 1998): 106.
Judging
from the YouTube comments, the virality was due in part to shameless mockery of
Nabwana's rudimentary special effects, but other viewers were impressed by his DIY approach and
enthusiasm. Nabwana took advantage of the exposure he had gained in recent years, and his latest
film, Bad Black (2016), played at several American film festivals, including Seattle International.
The film, and much of his continued work, was funded in part by a Kickstarter campaign and a
Patreon account for donors to continue to send funds his way. For Nabwana, making movies is
possible at all only because of video's low cost and its mechanisms for self-reliance. As cameras
continued to develop in image quality, Nabwana was able to use his initial
success to purchase newer equipment and significantly improve the look of his work –
comparing his festival-bound Bad Black with Who Killed Captain Alex? shows considerable
qualitative improvements in his images, as noted by both the film's reviewers and his
wider audience and supporters on YouTube. Nabwana and his Wakaliwood compatriots
demonstrate how cinematic video cameras and internet marketing could be used to create
Ugandan film outside of foreign, governmental, or marketplace intervention, reach an audience
of his countrymen, and receive acclaim internationally on the festival circuit and beyond.
As the previous examples suggest, the digital revolution-by-camera, or at least its cultural
gains, came with less of the outright conflict that the narrative in the trade press might suggest.
That is to say, even if the giants are still around and the revolutionary potential in the cinematic
imagery seems illusory in retrospect, there were strides forward for individuals and smaller
companies, even if they occurred within the industrial status quo. The cinematic video revolution
did not destroy the moving image hierarchy, but it allowed mobility within it – cinematic passing
of individuals more than entire populations. By utilizing cinematic digital video, small-scale
production companies operating at the local level were able to produce work of a higher
quantitative quality more readily than before, whether they be fiction films, wedding videos, or
corporate promotion. Improvements in quality yielded the potential for increased profits for
commercial companies, opportunities for individual professional advancement, and obvious
personal satisfaction. One of the biggest factors limiting the extent to which the cinematic
revolution upset the existing hierarchy of production was the lack of control over distribution
channels. Having greater control over one’s images is less impactful when those images must
pass through someone else’s control. As this dissertation’s third section will address, the
hierarchical shakeup and revolutionary rhetoric would be prophesied again, and seemingly with
greater results, with the proliferation of online distribution networks like YouTube.
High Resolutions and False Revolutions
I’m not going to trade my oil paints for a set of crayons.
- Wally Pfister, Cinematographer
Digital wasn’t paying enough respect to film. It wasn’t as good as
film … we want to help send film to the retirement home and have
it feel good about what took its place.
- Jim Jannard, Founder, Red Camera
In 2002, George Lucas held a demonstration at his Skywalker Ranch, inviting a cadre of
top-tier directors including Francis Ford Coppola, Steven Spielberg, Oliver Stone, and Robert
Zemeckis. His goal was to demonstrate the findings of his experiments with digital cameras and
digital projection. Lucas was offering a prophecy – perhaps even a warning – to his fellow
directors, proclaiming digital capture and exhibition as the future of the industry. Despite the
considerable pushback from many of those same directors, Lucas's prophecy was realized in
short order. 2008’s Slumdog Millionaire was the first film shot primarily on digital cameras to
win the Academy Award for Best Cinematography (earned by digital cinema pioneer Anthony
Dod Mantle) and Avatar (2009), shot completely digitally, took the prize the following year.
Concurrently with the development of prosumer 24p cameras, big-budget filmmakers and studios
began to experiment with video cameras as well, but with different motivations and different
stakes altogether. For these top-tier content producers already shooting on film, the “cinematic”
was less something to strive for and more something to maintain – the result of decades of
stability in technology, labor, and logistics.
While the term “revolution” is equally applied to the switch in big-budget production
from film to digital, the word takes on quite a different meaning than it does in low-budget
circles. While technologically it was no less significant – perhaps even more so, in that there was
a mass shift from one technological medium to another – the corresponding desire for a shift in
power was absent entirely. By definition, such cultural and political shake-ups rarely benefit
those already in charge. If anything, with regard to image quality, big-budget productions wanted
as little change as possible. For that reason, the transition in imaging technology in big-budget
production over the 2000s happened with fairly little public fanfare and acknowledgement. John
Belton referred to this digital revolution as a "false" one,[111] largely because of its invisibility:
"the development of digital cinema [was] driven by its desire to simulate normative practices."[112]
I would go one step further than "false." The switch from film to digital was only carried out as
an active attempt to maintain the hierarchical separation between the spheres of production. It
was not merely an invisible technological bait-and-switch, but was explicitly anti-revolutionary.
In this section’s epigraph, filmophile DP Wally Pfister presents an analogy comparing the
“crayons” of video to the “oil painting” of film. The comment is indicative of the pushback from
many DPs, producers, and directors in the industry against the dystopian threat of video in that it
is, on the surface, in service of the integrity of the art. Yet, Pfister’s tone is clearly one of
condescension. Video isn’t just lower quality, it is a child’s toy. This discourse of superiority was
a manifestation of the sense of separation that existed between those who produced “cinema” and
those who shot video – the difference between a cinematographer and a videographer. The
significance of the “cinematic” as a visual representation of cultural influence was no less
meaningful to those on the higher end of the hierarchy. That said, much of the concern of those
with the most to lose was, quite tellingly, not based in fear of losing their privileged position to
outside digital media makers (e.g. the semiotic decoy revolutionaries) but, rather, in fear of
internal shakeups – the proverbial wrench in the works.
[111] John Belton, "Digital Cinema: A False Revolution," October 100 (Spring 2002): 98-114.
[112] John Belton, "Digital 3D Cinema: Digital Cinema's Missing Novelty Phase," Film History 24, no. 2 (2012): 189.
While video did present certain practical benefits on set (which will be discussed in
greater detail in the following section on camera utility), such was not the case visually. If
switching to digital images had anything to offer Hollywood, it wasn’t aesthetic, as their use of
35mm film already set the standard for image quality, and it wasn’t in the potential to surpass
cultural gatekeepers, as they were very much gatekeepers themselves. Instead, the switch to
digital images was the result not of anything inherent in the images but of how their creation fit
within Hollywood's production and postproduction workflow. Photochemical film formed the
bedrock of the motion picture industry as much as it did the world of film theory. If scholars
thought the switch to digital was earth-shattering theoretically, it was equally so in an industry
worth tens of billions of dollars annually; it was the professional media equivalent of secretly
switching to electric cars en masse. The adoption of digital cinema cameras did not occur
because camera manufacturers were able to emulate the image quality of film with video. Rather,
the increase in quality was necessary after it became clear that digital images themselves were a
necessity, rather than a disruption, in maintaining a seamless and efficient flow of labor from
production to distribution. The switch to cinematic video was anti-revolutionary because it
served to bolster the very industrial mechanisms that separated big-budget production from the
lower tiers – far more than resolution and frame rate.
For many filmmakers at George Lucas’s demo, and for many DPs and established
filmmakers who felt its rumblings, the logical response to digital cinema technologies was,
“Why bother?” Video was quantitatively inferior to film and, due to film’s established
prominence as the one-and-only high-budget shooting format, it was qualitatively inferior as
well. One of video’s greatest benefits – its low cost – was mostly irrelevant for a big-budget
project. Even for Lucas, who was something of an independent filmmaker himself, shooting on
film was no financial obstacle, but it did pose something of a logistical obstacle. When Lucas
began working on Star Wars Episode I: The Phantom Menace (1999), a film that would rely
heavily on digital special effects, the analog-production-to-digital-postproduction workflow was
rather unwieldy. Scenes would be shot on film, scanned into a computer to create a "digital
intermediate," and computer-generated effects would be applied to the digital footage. When
effects work was complete, the finished sequence would then be re-printed onto film for
exhibition.
The digital intermediate process was common for all effects-heavy films of the era (and,
eventually, all big-budget projects shot on film regardless of effects content) but, in the current
age of digital cameras, YouTube, streaming video, and DCP projection, the digital intermediate
process seems almost comically inefficient, and the quick realization of Lucas's predictions
seems all too obvious. The same question that Lucas's initial opponents posed – "Why bother?"
– is no doubt applied by studios in the current digital climate when film-only zealots like Quentin
Tarantino and Christopher Nolan attempt to justify their photochemical needs. Recognizing the
clumsiness of the digital intermediate process, Lucas used The Phantom Menace as an attempt to
integrate 35mm film images with digital video – two formats that, at the time, seemed largely
incompatible. It was one of the first attempts at creating what Filmmaker magazine called the
"intercuttable loop" – the ability to intercut footage from different sources like 35mm film and
video without the audience noticing.[113] While Samsung did it seamlessly with their phone video
in 2017, it was quite a stretch in the late 90s.
[113] Roberto Quezada-Dardon, "State of the Art: HDSLRs in 2010," Filmmaker 18, no. 4 (Summer 2010): 78.
The prototype camera that Lucas used, the Sony HDC-750, was commissioned
specifically for the movie. It was designed to replicate the quantitative quality of film as much as
the technology would allow and, unlike the era’s prosumer and high-end professional video
cameras, it shot in high definition video, allowing a resolution far beyond that of any
commercially-available video camera. Speaking to the early HD camera’s influence, Sony SVP
of Sales and Marketing, Alec Shapiro, said, “It was our first high definition camera. Before that,
everything you were looking at really looked like video"[114] – ironic, considering the new camera
still shot video. His point, though, was that the video footage met the cinematic standard such
that it could be inserted into the finished film without anyone noticing. Digitally-captured images
ultimately only made up a small portion of The Phantom Menace, as the camera prototypes were
still in development during much of principal photography. Still, Lucas used the digital footage
in his final cut and leveraged the movie's success (financial success – pace, Star Wars fans) to
help build a foundation for a digital imaging future. For Lucas's following film, Star Wars Episode
II: Attack of the Clones (2002), Sony had updated and standardized their HD digital cinema
camera in the form of the HDW-F900, and Lucas used it to shoot the entire movie.
Lucas’s first two Star Wars prequels set the precedent for Hollywood and served as
functional demonstrations of the technology, but studios and filmmakers were slow to follow his
lead. Brian Winston’s model for technological change is helpful in understanding this industry
sluggishness. It hinges on several stages of development, beginning with ideation and
prototyping and moving into widespread adoption and use. The technology does not see these
final stages, though, until there is a demonstrated social need for it – what Winston calls
"supervening social necessity" which, in the case of HD digital cinema cameras, was still
lacking.[115]
[114] Side by Side.
While Lucas had demonstrated that digital capture was a viable option for high-budget
films aesthetically, the benefits didn’t necessarily outweigh the costs of technological transition,
as the photochemical infrastructure was already firmly in place. This latter point was manifested
in two clear areas. First was in distribution and exhibition, wherein theaters were unequipped to
project digitally and film prints would have to be made regardless of the shooting format –
movies had to be analog when they exited post. Second was in the skilled labor of
cinematographers and camera crews. Even as digital intermediates became more common and
movie editors acclimated to working in a digital environment for its clear benefits (digital
effects, more accurate color correction and grading, non-linear editing software, etc.),
cinematographers were still of an analog breed; they knew and utilized the alchemy of image
creation. Post-production methods demonstrated a need for digital images at some point in the
production chain, but digital intermediates were, for post-production processes, effectively the
same as footage shot on digital. That is to say, movies had to be digital when they entered post, but
the digital intermediate process allowed cinematographers and their crews to continue operating
in the format they knew and loved, and the one that required their specialized labor. Even if it
was inefficient to go from film to video and back to film, it was a familiar and welcome
inefficiency, and an industry-standard practice that served as a stable joint for a wide variety of
production and postproduction labor. As long as postproduction workers received their digital
media, it didn't matter what type of camera was used.
[115] Brian Winston, Media, Technology, and Society: A History From the Telegraph to the Internet (London: Routledge, 1998), 6.
As long as exhibitors received their film
print, it didn’t matter if the movie was edited digitally or not.
Further contributing to the lack of demand for digital video was the concern expressed in
Wally Pfister’s crayon analogy, namely that digital video was aesthetically inferior to film.
Technically and quantitatively speaking, the prosumer video emerging in the early
2000s certainly was, especially in resolution. Even with the high-definition capabilities George
Lucas had access to, high-end video was lacking as well. While the step-up in resolution allowed
Lucas to intercut his video with film in ways that prosumer video could not, his cinematographer's
hands were somewhat tied in that the HD video's dynamic range was still quite narrow, its color
sampling was less accurate, and its relatively small sensor yielded depth-of-field characteristics
unfamiliar to crews used to working in film.[116] Lucas and select others demonstrated that video
could work for big-budget cinema, but shots had to be selected and lit with its quantitative
limitations in mind. That is, cinematographers had to alter their process.
There is an old joke around film sets: a producer feels like he can do everyone's job on
set except the cinematographer's. Photochemical cinematography has a certain magic and
mystique to its processes in that no one, including the cinematographer, can see the image until it
is printed. For this reason, the ability to manipulate the image comes somewhat instinctually. To
look at a dark corner of the set, consider the film stock, and judge how much light will be
needed to illuminate it effectively and creatively is a process the cinematographer must
perform mentally. This very human element of the analog process gives the
cinematographer a certain power over the image and the look of a film that no one else on set
has. By removing that element of the DPs' process through the real-time image rendering of
video, the use of digital cameras potentially makes the DPs' job easier but, in doing so, makes
their labor less valuable. Value is self-applied, as well.
[116] Barbara Robertson, "Tricks: Next-Gen High-Def," Film and Video, May 1, 2005.
As Jean-Pierre Geuens notes, a
photochemical DP’s sense of self-worth as a craftsperson is often measured “by going up against
[analog film’s] obstacles, learning to overcome them only through meticulous preparations
followed by a continuous exercise of exacting craftsmanship. The very difficulty of shooting a
film thus brings pride in one's work."[117]
Additionally lost with the simplicity of digital is a shared sense of communal learning
and camaraderie, as analog film training has largely been conducted through apprenticeship
amongst newcomers and seasoned alchemical veterans.[118] The tradition of sharing nigh-magical
knowledge of the unseen relationship between wattage, foot-candles, and film exposure was
seemingly made quaint and strange when images were displayed in real time with digital video.
Further, due to differences in dynamic range, color rendering, and sensor response, lighting
video is different from lighting film; to get something of an equivalent image across the two
formats, different instruments and lighting placements are oftentimes necessary. A professional
cinematographer instinctually knows what a 300-watt light looks like on film. With the introduction
of video, their instincts are rendered partially obsolete unless subject to evolution. Thus, many of
the arguments raised by cinematographers against the poor image quality of video were likely
not just about the image quality itself, but about the manner in which the images were
rendered, and the resulting shift in the balance of power and tradition on set. Preserving film on
set also preserved the practice of elevated craftsmanship that separated top-tier cinematographer
from low-budget videographer. The refusal to abandon film was a symbolic gesture as much as it
was a logistical one.
[117] Jean-Pierre Geuens, "The Digital World Picture," Film Quarterly 55, no. 4 (Summer 2002): 18.
[118] Ibid.
As HD camera designers continued to develop their technology, it is interesting to note
how the symbolic concerns of cinematographers were taken into account, often in seemingly
strange ways. When Panavision released the Genesis camera in 2005, their engineers not only
positioned the image controls in an array similar to a 35mm camera, but they also positioned the
camera’s hard drive where the film magazine would be – it even detached for downloading
footage in a manner similar to that of a film “mag change.” Even more absurdly, engineers
designed the camera’s electronics such that the operator couldn’t play the footage back once it
was shot, much like a film camera.[119] One of the singular advantages and medium-specific
properties of video – instant playback – was abandoned for analog familiarity. More significant
than these ergonomic oddities was their signal that companies like Panavision and Arri, the
two largest manufacturers and standard-bearers of film cameras and accessories, had begun to
recognize digital technology's potential and develop digital video cameras to higher quantitative
specifications. In addition to the public vote of confidence that the format received from these
two established, trustworthy industry names, as dynamic range and color sampling were
increased to more closely match those of film, cinematographers used to working in analog were
able to switch over with less adjustment. Video didn't just look more like film, it behaved more
like it, and shooting video felt (physically) more like shooting film. The “bait-and-switch” of
digital cinema cameras extended beyond the images themselves and into the established
standards of production labor.
[119] Side by Side.
Initial uses of cinematic digital video came much like George Lucas’s preliminary tests –
digital video cameras were used only for certain sequences in films that benefitted from them
especially, like those heavy on digital effects. Following Lucas’s extensive use of the technology
on Attack of the Clones, Robert Rodriguez was an early adopter of digital video, both for more
traditional cinema work like Once Upon a Time in Mexico (2003) and for effects-heavy
films like Spy Kids: 3D (2003) and Sin City (2005). A critical turning point came after the
success of James Cameron’s Avatar in 2009. While that film is often remembered as ushering in
the latest boom in 3D cinema, the year after its release similarly saw a dramatic increase in
digital production, likely linked, if indirectly, to what Avatar's success pushed into the public
and professional eye: digital backgrounds, digital effects, and 3D all benefitted from
being shot digitally, and all were being used in increasing amounts. As more and more of a
movie began to be constructed in post, studios had to question what photochemical film actually
provided besides nostalgia as the post-production and distribution infrastructure became
increasingly imbalanced by its presence, especially with the proliferation of digital projection
and delivery systems in theaters and direct-to-streaming releases.
Along with the success of Avatar, 2009 marked a significant milestone in shifting
industry standards as the American Society of Cinematographers and the Producers Guild of
America staged the “Camera-Assessment Series” in collaboration with manufacturers Arri,
Panasonic, Panavision, Red, and Thomson. The series of demanding cinematographic tests put
seven digital cinema cameras through their paces, along with one Arri 35mm camera as a
“control.” Designed as a collaborative attempt to catalogue the various differences at play
amongst the variety of cameras and make manufacturers aware of cinematographers’ needs, it is
telling that the tests were staged as a comparison between digital and 35mm – the latter serving
as the "benchmark standard for theatrical motion-picture quality," according to the ASC
report.[120] Gerald Sim's discussion of digital cinematography marks the Camera Assessment Series as an
especially significant and rare event in the history of cinema technology, as it represented a moment
in which artists, studio producers, and manufacturers were collaborating in service of the
industry as a whole, rather than quibbling about video’s aesthetic limitations or its financial
benefits as the popular narrative of "art vs. finance" more frequently suggests.[121] Even more so,
it marked a moment of recognition across representatives of art, finance, and manufacturing that
the once-industry-standard practice of mixed media workflows was no longer efficient. It was the
demonstration of supervening social necessity. Further, it represented an active setting of new
standards – a reassessment and redefinition of what it meant to be “cinematic” quantitatively,
delineating a new barrier of separation between the video cameras of the cinematographer and
those of the videographer. Such separation was even manifested in the names of the tools, as
David Bordwell identifies, in that the term “HD video camera” (which this technology absolutely
still was) was widely replaced with "digital cinema camera."[122]
As would continue across the digital era, the migration of professional standards into less
visible and more data-driven areas served to reinforce image-making hierarchies as the cinematic
look became more widely available. Netflix required productions to shoot in 4K resolution
starting in 2014 even though the content was not broadcastable through the Netflix service.[123]
[120] "Testing Digital Cameras: Part 2," The ASC (September 2009), https://theasc.com/ac_magazine/September2009/CASPart2/page1.html.
[121] Gerald Sim, "When and Where is the Revolution in Digital Cinematography?" Projections 6, no. 1 (Summer 2012): 91-92.
[122] David Bordwell, Pandora's Digital Box: Films, Files, and the Future of Movies (Madison: The Irvington Way Institute Press, 2012), 199.
While such restrictions serve the purpose of "future-proofing" content, they quantitatively police
it as well, such that content is weighed on its value as data as much as, and independently of, its
value as art, further devaluing a significant portion of low-budget productions. The standard was
becoming something more arbitrary, less visible, and, again, less attainable without capital.
By the time Quentin Tarantino shot and exhibited The Hateful Eight on 70mm in 2015,
the act of shooting and projecting film itself had become newsworthy. Many articles on that
film’s resurrection of 70mm noted not just its stunning imagery, but also that it was “both
extremely expensive and fraught with problems" and "essentially dead."[124] Tarantino himself
expressed the hope that 70mm "might be film's saving grace," with similar sentiments coming
from hold-out filmmakers like Christopher Nolan and J.J. Abrams.[125] Those three filmmakers,
along with several major studios, lobbied for Kodak to continue producing film in 2014 after that
company was on the verge of shuttering its doors as sales of film stock had declined 96% in the
previous decade.[126] The major studios agreed to a contractual obligation to purchase a fixed
amount of film stock per year, saving the manufacturer from certain bankruptcy and preserving
the continued use of the medium itself. Still, the act seemed more like a rare example of romantic
preservation and obligation than one of calculated logistics. Even with the materials in hand,
photochemical filmmakers continue to face an additional absence – personnel.
[123] "Why Does Netflix Require 4K on Netflix Originals?" Netflix, https://partnerhelp.netflixstudios.com/hc/en-us/articles/229150387-Why-does-Netflix-require-4K-on-Netflix-Originals-.
[124] Sean O'Kane, "Watch a Gorgeous Timelapse…" The Verge, January 15, 2016, https://www.theverge.com/tldr/2016/1/15/10776386/hateful-eight-70-mm-film-time-lapse-video
[125] Ibid.
[126] Mike Spector et al., "Can Bankruptcy Filing Save Kodak?" The Wall Street Journal, January 20, 2012.
As older
cinematographers retire and as film schools stop teaching analog cinematography altogether, the
digital breed will lack the instinct and alchemy required to expose film properly. Film, at least as
a shooting medium, seems doomed to go the way of the hipster.
High [Image] Quality Television
John Belton’s claim of the digital revolution being a false one, at least in its invisibility,
and my claim of big-budget production’s anti-revolutionary, industry-strengthening tendencies
may ring true for studio feature filmmaking, but they are both complicated somewhat by the use
of digital video in television production, where the affordability of cinematic video played a
significant role in the proliferation of single-camera programming, the rise of so-called “quality
television,” and the visibility of the “cinematic look” in more and varied places, challenging the
notion of what it meant for images to be “cinematic” – specifically in the oxymoronic notion of
“cinematic television.” If digital video was revolutionary in the context of television production,
it was, much like big-budget feature production, less so regarding redistribution of power. If
anything, television producers' dependence on ratings made them even more averse to
boat-rocking than their feature film counterparts. However, once digital video's image quality
was made to match that of film through camera hardware and software, the ability to exploit
video's convenience as a medium led to something of a creative and aesthetic revolution.
To return to John Caldwell and Televisuality, the two dominant modes of production in
the 1990s were what he calls the “cinematic” and the “videographic.” The dividing line between
them was a combination of shooting medium and production style. Cinematic productions were
generally shot on film, but also favored single-camera production, whereas video tended to be
used for multi-camera shows and, by necessity, live events. There were certainly exceptions to
the rule – Seinfeld was a multi-camera sitcom and it was shot on film, and single-camera-style
soap operas were notoriously videographic.127
Just as low-budget moviemakers faced a “budget
vs. look” cost/benefit analysis in choosing between film and video, there was a similar consideration of
the cinematic and the videographic in television production, but filtered through television’s
more rigid production schemas. A producer could shoot on film and achieve a cinematic look,
but the comparatively low budgets and tight schedule of television would limit more adventurous
cinematographic attempts more associated with cinema, as they were too costly (temporally and
financially). Shooting on video was an alternative (especially after digital nonlinear editing
became standard practice), but to do so would sacrifice the cinematic texture that sold fictionality
and production value – something ratings-conscious producers would be keen to avoid.
For many big-budget television programs that would have shot on film in the cinematic
mode, the transition to digital video followed a trajectory akin to that of studio feature film.
Initially, there was no great benefit (especially considering the lack of digital effects on TV at the
time), but supervening social necessity came as broadcasters switched to HD video in the late
2000s, as cameras matched the quantitative specifications of film, and as digital labor practices
accommodated and, ultimately, demanded the new technology. That said, the technological
transition to digital video in television production yielded several industrial and aesthetic
changes beyond those seen in the feature film, even for big-budget productions, due to the
differences in production contexts, financing structures, and practices specific to the different
distribution medium. A massively expensive show like Game of Thrones might cost $150 million
per season, but that money is stretched across ten hours of content and a much tighter production
127
The use of video on soap operas led to the pejorative term “the Soap Opera Effect” to refer to
any high frame rate video.
schedule than it would be for five individual two-hour movies.128
The savings in time and money
that benefitted low-budget movie productions from a switch to cinematic video applied to many
television productions more than they might benefit a $150 million feature film. This was especially the
case for shows that didn’t boast Game of Thrones’s $15 million-per-episode budget. The
physical limitations of film and the utility benefits of video were amplified in television
production’s rigid restrictions on time and money.
As digital video became an acceptable industry-standard alternative to film, the benefits
of the cheaper technology opened up doors to narrative and stylistic choices and indeed entire
shows that might not have existed in previous eras. While network television once operated (and,
arguably, still does) on a philosophy of “least offensive programming,” the proliferation of niche
cable channels in the 90s and 2000s and the more recent popularity of streaming services allowed
for more specialized content as the reliance on ratings became less critical. Risk mitigation
through diversified exhibition channels allowed for more risk-taking in programming. The
cheaper digital video technologies allowed producers to double down on risk mitigation and, as
was the case with films like Pieces of April in the indie feature circuit, shows that would have
once seemed too financially, narratively, or stylistically risky were made palatable by the slashed
budgets. Similarly, as baseline production costs decreased, funding could be allocated to many of
the visual elements that are commonly cited as markers of “quality” in television – pulling in
bigger stars, including more digital effects, increasing production value through visual design,
and pushing beyond the limitations of the studio walls. While shooting multi-camera video with
pre-constructed sets and lighting rigs offered savings, the possibility of shooting on-location with
a smaller crew and a digital camera oftentimes trumped the convenience of the studio. These
128
Joanna Robinson, “Game of Thrones,” Vanity Fair, September 26, 2017.
increases in production value contributed not only to aesthetics, but also to an increase in a
show’s marketability. By leveraging the cost savings of cinematic video, TV creators and
producers gained negotiating ammunition that they did not have ten years prior.
Saving time on set by shooting on video additionally translated to financial savings in the
reduction of labor hours, rental budgets, and location allocations, but there was a creative impact
as well. Because the time savings with video were generally manifested in the potential for faster
camera and lighting setups, crews could accomplish more in a day. With some creative
scheduling, this generally meant fewer days shooting, but in some instances, it just meant less to
accomplish in a single day. Because film crews generally operate on full-day rates, finishing
early didn’t necessarily yield any great benefits outside of earning brownie points with the crew
in releasing them early (another benefit not to be overlooked). The alternative was that crews
could use the extra time on set to shoot more takes, shoot from additional angles, get non-
traditional in their shot selection, or even shoot with multiple cameras.
As an example, take It’s Always Sunny in Philadelphia, a single-camera-style comedy
that began as a low-budget pilot, shot on location with a DVX by creators Charlie Day,
Glenn Howerton, and Rob McElhenney for a mere $200. The show was picked up and aired by
FX in 2005, but the network opted to keep shooting on the DVX rather than switch to film, as
other single-camera shows might have, and continued using that camera for five seasons until
switching to an HD cost-equivalent Sony EX3.129
In many scenes, the show is shot with multiple
cameras at once, and the small crews are able to maneuver the lightweight cameras into spaces
that might be difficult or impossible with a film camera and corresponding larger crew. Further,
129
“FX Networks…” Creative Cow. June 6, 2006.
https://news.creativecow.net/story.php?id=856068
because of the cheap shooting medium and quickened schedule, producers can stage and shoot
more takes than they might with film. Utilizing the technology in such a fashion, the resulting
comedy can be more adventurous in its visual gags and can rely more freely on improvisation
and actor-driven scenarios than would be possible if the crews shot on film. In a Season 10
episode called “Charlie Work,” the show featured a Birdman-esque long take as loveable loser
Charlie (Charlie Day) leads a health inspector through his bar while attempting to mask all its
horrible violations. The crew blocked, lit, and shot an 11-page sequence in 12 takes over four
hours. Or, as DP Matt Shakman explains, “A third of the show. All before lunch.”130
The producers of the show capitalized off of several medium-specific qualities of digital
video to produce this sequence, which would be rather unlikely had they opted to shoot on film.
First, when shooting on film and on a tight TV production schedule, shooting 12 takes of any
shot would be rare and financially inadvisable. Second, considering the difficulty of the
choreography, it’s certainly possible that the sequence may have taken more than 12 takes. For
that risk alone, it is unlikely that the long take would have ever been attempted by the producers
at all. Third, the sequence is quite adventurous and visually interesting cinematographically, with
complex choreography of actor and camera that would have been challenging with a larger
camera and crew. Lastly, the length of the take was no obstacle due to the capacity of the storage
medium. Had they shot on film, at the very least the camera crew would have had to reload the
magazine for every single take (assuming the full take would fit on a single magazine as
choreographed).
130
Alan Sepinwall, “How It’s Always Sunny…” Uproxx. February 4, 2015.
https://uproxx.com/sepinwall/how-its-always-sunny-in-philadelphia-made-its-unintended-
birdman-homage/.
In utilizing the medium-specific properties of digital video, the producers were able to
produce a complex long take on fiction television that was quite novel at the time, both in
general and within situation comedy specifically. All of the technology required to
orchestrate this episode was available in the analog era – any one of the single-camera comedies
of the 80s or early 90s could have attempted it with video of the age, but at the expense of
cinematic image quality. It was only after video reached a certain qualitative standard that
producers were willing to make the switch and fully exploit the medium-specific properties that
had been there for decades. The notion of the “cinematic” still held for the televisual medium,
however oxymoronically.
On one hand, turning attention to the adoption and the exploitation of cinematic video in
television production reveals a lesser-discussed technical basis for some of the otherwise
ambiguous claims of “quality television.” The potential for shooting on digital video was not the
only factor in the rising popularity of single-camera programming, but it was no coincidence that
the massive increase in single-camera, visually and narratively complex shows of the mid-2000s
arrived alongside the growing popularity of cinematic digital video.131
Many of the earmarks of “quality” had roots in
the adoption of the technology and its aesthetic potential.
Still, to stop discussions of the cinematic at “quality television” would be to miss a
broader and, I would suggest, more theoretically interesting trend that was occurring at the same
time. In trying to define the semantically confusing term “cinematic TV,” formalist film critic
131
The earliest American examples to bear the “quality” moniker were shot on film (Oz, The
Sopranos, Twin Peaks, Freaks and Geeks, etc.), and plenty still are. There are additionally many
shows that bear the “quality” label that were shot on video in previous eras – the latter primarily
coming from the UK and exhibited on public television. But it was only after decreases in budget
met increases in quality in the form of cinematic digital video that such shows proliferated across
a wide array of channels, and the calls of a “golden age of television” started to fly with utopian
abandon.
Matt Zoller Seitz identifies a show that he feels is exemplary of the cinematic, The Knick (2014-
2015). He sums up, “Cinematic TV in its highest form creates beauty and mystery. Poetry.”132
But considering the “cinematic” from a quantitative perspective, it was not just the poetic,
beautiful, and mysterious shows that were switching to cinematic video – it was all of them. As
much as a long-take sequence from The Knick may resemble the look of cinema, so does a long-
take sequence of home-buyers walking down the street on House Hunters.
For low-budget practitioners and trade press authors, to read the “film look” tutorials
suggests that there was a key visual element separating professional from amateur, big-budget
from low-budget, cinema from television, “quality TV” from regular TV, the cinematic from the
videographic. Yet, as cinematic digital video cameras became cheaper and more common across
the 2000s, it has become easier for producers at all levels to reach cinematic quantitative
standards. While serial drama and reality TV in the 1990s would have looked very different in
their images’ shooting medium, resolution, color rendering, and dynamic range, they now are
shot in the same medium on cameras with much more similar technical specifications and are
edited and colored in postproduction with the same software. Within television production in the
2000s and 2010s, there was a shortening of the visual divide between the cinematic and the
videographic modes, further shifting the hierarchy of image-making beyond the texture of the
image into other areas of production more directly tied to capital. In this section’s final sub-
chapter, I will discuss the implications of the current era of intercuttability and the potential
disappearance of the “cinematic” as a theoretical concept. Not only do low-budget and big-
budget digital video share the same imagery, but they share it with the video I shot of my dog.
132
Matt Zoller Seitz and Chris Wade, “What Does ‘Cinematic TV’ Really Mean?” Vulture,
October 21, 2015. http://www.vulture.com/2015/10/cinematic-tv-what-does-it-really-mean.html.
The Intercuttable Loop
George Lucas’s demonstration of film and video intercut together in Episode I was quite
momentous, but over the course of the next decade, the “intercuttable loop” would expand to
include both prosumer video and home video. The massive discrepancy between film and video,
outlined in detail at the head of this section on image quality, would shrink rapidly. But, even
more significantly, the visual discrepancy amongst the spheres of production that once was
confined to technology and medium would ultimately shrink as well, as technology and imagery
began to converge. This visual convergence that marks the current era of imaging is at the heart
of the concept of the intercuttable loop – any imaging format, from big-budget to home video,
can be cut together. The imagery is not technically the same quantitatively, but it is close enough
qualitatively that the differences go largely unnoticed. For low-budget media-makers striving for the cinematic,
the 2000s and 2010s would deliver precisely the look they were after. But, considering it was
delivered to everyone (indie filmmakers and soccer moms alike), the revolution that might come
with acquiring an image texture of greater cultural significance did not quite pan out the way that
the trade press prophesied in the late 90s. If my cat video is “cinematic,” does “cinematic” even
mean anything anymore? Is the concept dead entirely? Or has the separation in image quality
that demarcates the spheres of production simply become less visible?
As high definition cameras were slowly adopted across the 2000s in higher-budget
production spheres, there was a convergence across media-making practitioners in that they all
shared the same digital medium. However, there remained a clear separation in resolution. 24p
cameras like the DVX allowed for a cinematic look, but the cameras remained standard
definition – prosumer video was still a decoy. While high-end HD cameras like the Sony
133
CineAlta were available for rent, low-budget moviemakers faced a familiar problem – these
cameras had big-budget prices. Even though the need for film stock was no longer a factor, the
camera rental fees alone priced most low-budget moviemakers out of the market. To meet
prosumer demand, versions of the high-end technology found their way into more financially
reasonable packages not long after Lucas demonstrated its success. An early version of high
definition video, HDV, was developed and supported by JVC, Canon, Sony, and Sharp in 2003.
It allowed an HD signal to be shot and stored on MiniDV tape, albeit in a heavily compressed
format. While HDV offered a higher resolution image in a more affordable package, in the “Wild
West” phase of early technological development, the format faced compatibility problems on
numerous fronts. Many computers and editing programs could not handle the footage well or at
all, and it was incompatible with most home viewing as only 3% of televisions in the U.S. were
HD-capable.133
Most television and cable broadcasts were still in standard definition, as were
home media formats (e.g. DVD). For many media-makers, ironically, shooting in HD would
only really be beneficial if one were printing to film. For most purposes for which video was
used, the high-definition HDV format would need to be converted back to standard definition,
confining its beautiful, high-resolution images to the editing room. There were financial benefits,
though, for the wide variety of production houses choosing to invest in HDV equipment for the
ability to now sell (or oversell) their HD-capable services, even if only to convert a couple’s
wedding footage back to SD upon delivery.
As HD TVs became more common and as broadcasters began to switch to the format in
the later 2000s, the use of higher resolution formats became more viable, and eventually a
133
“HD TV Sets…” Leichtman Research Group, March 13, 2015.
https://www.leichtmanresearch.com/hdtv-sets-now-in-over-80-of-u-s-households/.
camcorder standard. Seeing this HD future on the horizon and likewise seeing the immense
popularity of its 24p DVX camera, Panasonic released an HD version of that camera called the
AG-HVX200, or “HVX,” in 2006. The HVX was again hailed as “revolutionary,” but not
necessarily because of its images alone. The camera was one of the first commercially available
to bypass tape entirely, recording to solid state “P2” cards instead. While the changes to
workflow and camera utility that accompanied the departure from tape were massive (and will be
addressed in this dissertation’s following section), there were significant implications for image
quality as well. Bypassing the limitations of tape bandwidth, the HVX could shoot video at lower
compression rates, increasing its dynamic range and color sampling rates. While the Panasonic’s
proprietary P2 cards were quite expensive (an 8GB card was an absurd $1,400 at release), storage
media dropped in price considerably in the following years, and camcorders quickly left tape
behind, allowing for HD images at the HVX’s data rate across the spectrum of video practice.134
As HD video became the norm and 24p video was offered on a large variety of cameras
at varying price points, many of the most significant visual limitations of video as a shooting
format had been eliminated. The last significant quantitative improvements in image quality
came in a strange package – the DSLR, or digital single-lens reflex camera. These were not
camcorders in the traditional sense, and certainly not in appearance. DSLRs were primarily still
photography cameras and looked it; if the design was any indication, video capability
was included only as an add-on afterthought. Their initial design function was in service of news
photographers (or, perhaps more accurately, news media organizations, given the camera’s labor
134
Andrew Burke, “Put it on my card,” Videomaker 21, no. 3 (September 2006): 21.
cost-saving potential), in that a photographer could effectively capture photographs and video with
a single instrument.
In order to make this dual functionality possible, the cameras featured two noteworthy
upgrades to video image quality. First, the image sensors in still cameras tend to be significantly
larger than those in video cameras. A full-frame DSLR has an image sensor equal in size to a
35mm still film frame (these cameras were meant to mimic still film photographs, after all). When these
massive sensors were first used for video capture, the result was an almost startlingly narrow
depth of field, the likes of which had never been seen in videographic images. This optical
change was furthered by the fact that the cameras had easily interchangeable lenses, another trait
that was a standard and necessary feature for professional still cameras but was largely lacking
from nearly every prosumer video camera.135
Lenses could be pricey, but even the entry-level
models featured a great diversity in focal lengths, from super-wide fisheye to extreme telephoto,
and many were much more finely crafted than the sturdy, stock zoom lenses that were affixed to
standard video camera bodies. Additionally, the newly available ability to shoot on prime lenses
(those which are at a fixed focal length and cannot be zoomed) allowed for better low-light
shooting and even narrower depth of field, as these were capable of shooting at much wider
apertures than the standard zoom lens. While digitality offered videomakers control over the
image that was rarely seen in the analog era, the DSLR brought a level of mechanical, optical
control that was long absent from video imaging devices.
To the trained eye, the rise of DSLRs in the late 2000s was visually
noticeable as many lower-budget productions – from web-based videos to local commercials to
135
Some prosumer cameras like the Canon XL-1 did have interchangeable lenses, but they were
often a camera-specific type and many were so expensive as to exceed the price of the camera
itself.
indie films to music videos – suddenly featured an extremely shallow depth of field as video-
makers quickly converted to the new shooting format. To that point, DSLR cameras were quite
affordable. The Canon EOS 5D Mark II (or “5D”), released in 2008, was one of the first and
most newsworthy video-capable DSLRs. It was comparable in price to many prosumer cameras
at $2700, but the lower-tier Canon Rebel series, released shortly thereafter, offered similar
cameras for $700. These low-budget options offered marginally smaller sensor sizes, but still
yielded an image quality that effectively matched the motion rendering, color rendering, and
dynamic range of film far more closely than any prosumer or consumer cameras of the digital
era. In 2010, a moviemaker could shoot video on a sub-$1,000 camera that was
comparable to what George Lucas was shooting ten years earlier. That is not to say that a
DSLR and an Arri Alexa produce video of the same quantitative quality – they surely do not, or
Hollywood studios would have likely pitched the expensive cameras into the trash long ago.
Qualitatively, though, the DSLR achieved its “semiotic decoy” status more effectively and
affordably than any camera up to that point in digital video’s history. Videomaker magazine
doesn’t run quite as many “cinematic look” tutorials as it used to.
To say that video has developed a “film look” is strange. Likewise to say that television
has become more cinematic. As 24 fps HD video has become the professional shooting standard
for cinema and television in the late 2010s, aspiring low-budget moviemakers striving for the
Hollywood “film look” are really striving for a high-grade “video look.” Aesthetics have been
divorced from their media of origin. It is truly an oddity from a medium specificity point-of-view
that 48 fps film is said to exhibit a “video look,” and 24 fps HD video is said to exhibit a “film
look.” The terminology, which is rooted in technological history rather than stylistics, is
experiencing a slippage – what once referred to the specific qualities of a medium now refers to
the semiotic coding associated with its rendering of motion and visual dynamics.
The more closely video resembles film – and it largely does, in the late 2010s – the less
the notion of the “cinematic” is tied to a “film look;” it is as much a qualitative notion as it is a
quantitative one, becoming even more nebulous and, perhaps, completely obsolete as a term. The
very notion of a “cinematic look” is predicated on difference; the “cinematic look” was only a
term because there was another option – the “video look.” As formats at all levels of production
merge in their image quality, the notions of “cinematic” and “video looks” become pieces of
nostalgia, viewed only on vintage equipment. When I lecture on the “video look” myself, it
has become increasingly difficult to demonstrate. In the absence of old cathode ray tube
televisions and interlaced video cameras, trying to find digital machines to render motion in that
old way is nearly impossible. What does it mean when home movies can look like Hollywood
movies? Wherein lies the difference, and what is at stake?
The Ubiquity of Intercuttability
The notion of an “intercuttable loop” refers not just to the potential to intercut film with
video, as George Lucas did, but high-budget video with low-budget video. This intercuttability
isn’t just hypothetical. Using my HD television, I can conduct an interesting experiment. First, I
connect my DSLR camera to my TV’s HDMI 1 slot, to play some home movies. In the HDMI 2
slot, I have my Blu-ray player, showing Iron Man 2 (2010). My computer is hooked up to HDMI
3, where I’m streaming an indie documentary called Hell and Back Again (2011). Last but
certainly not least, I also have my old-fashioned rabbit ear antennas attached to the coaxial
hookup so that I can watch broadcast reruns of House, occasionally interrupted by commercials.
The gag is that if I switch back-and-forth amongst all of the TV inputs, from home video to low-
budget indie to commercials to Hollywood TV to Hollywood cinema, fiction and non-fiction, the
images on screen were all shot with a Canon DSLR camera. The 5D was used to shoot the
P.O.V. soaring scenes in Iron Man 2 (and action scenes in countless other films, due to its small
size). That same camera was used for the entirety of the indie doc and the season six finale of
House. Commercial shoots that once utilized prosumer HD cameras like the HVX, or even
bigger-budget monsters like the Arri Alexa, have traded them for DSLRs and subsequently bumped
their profit margins significantly. All of these for-profit, professional projects were shot on the
same camera that I used to film my dogs battling for a squeaky hotdog.
The effects of intercuttability extend into the phenomenological. I change the channel
from my experiment to see a woman entering the front door of a house and smiling with surprise.
A man’s voice says, “Welcome home!” I have no idea what I’m watching. Not just in the
specifics of the content – I have no idea what I’m watching in terms of the mode. Is it a
commercial? A TV program? A fiction film? A documentary? A news report? Even in so brief
a moment, ten years ago I could have drawn immediate impressions from my semiotic reading of the
image texture – its form, its mode, its production value and, from these, its cultural significance,
even if only subconsciously. That reading has become increasingly difficult. The imagery in
question in this particular instance is especially hard to read and, after gaining context, it
becomes clear that its producers are very aware of the visual confusion and the current mixing of
modes across the video image spectrum.
The scene of the woman is non-fiction, and is from a commercial for an interior design
company called “Houzz.” The commercial is only partly for the service itself, as the majority of
the content is pitching a home renovation web series that follows celebrities Ludacris, Kristin
Bell, Kyrie Irving, Jenna Fischer, and Mila Kunis as they remodel their relatives’ homes using
the service. The commercial aired on HGTV, which regularly features such wildly popular
renovation shows. The one-minute ad is modeled on a promo for an upcoming show, featuring the
staple tropes of the home renovation genre like room demolition and big, happy reveals of the
new spaces, coupled with the name celebrities. Given that the commercial refers to the web
series only as “a new series,” and given the choice to air it on HGTV, the ad
deliberately misleads the viewer into thinking that the show is an HGTV program. Further, the
Houzz website and the accompanying service are mentioned in the commercial, but only inasmuch
as they are a part of the show’s design elements, as if Houzz were merely a sponsoring company.
In watching the segment, at any given moment it could be read as a renovation TV show,
a commercial for a renovation TV show on HGTV, a commercial for a renovation internet series,
a commercial for a web site, or a commercial for a design service. In reality, the segment is
somewhere in between – it is a commercial for a web series that itself is a commercial for a
design service, all disguised as a commercial for an HGTV renovation show, using that genre’s
narrative conventions and showcasing celebrity iconography. The reason the commercial is
so effective at disguising itself and blending all its layered modes in a single moving image is
that all of its referents – celebrity media, renovation shows, commercials (high and low budget),
and web series – currently feature the same image texture: that of cinematic HD video. If one
considers the convergence of new media in the last decade, the intercuttable loop represents its
extension into image quality.
Image quality convergence is somewhat uncanny across the products of all the spheres of
production, but it is perhaps most so in home video. Part of that reason has to do with the notion
of image control. Cinematic video was largely impossible in the analog era, and with the coming
of digitality, it was possible only through complex and meticulous manipulation on the part of
practitioners; semiotic decoys were restricted to those with the skillset, and these were largely
semi-professionals, artists, and advanced hobbyists. In 2018, cinematic video is standard. It bears
the visual marks of image control that once separated the spheres of production but without the
need to wield any of that control oneself. As much as the semiotic decoy was a significant
technical development in low-budget production especially, and the adoption of high-definition
cinematic video produced considerable industry-wide changes in big-budget production spheres,
the rise of intercuttability had the biggest visual and cultural effects in the amateur and home
video sectors, where the notion of the “cinematic” was the last thing on the practitioners’ minds.
Jan-Christopher Horak calls attention to the frequent descriptions of 9/11 amateur video
footage as being “like a movie.”136
The phenomenological similarity was no doubt due in large part to the on-screen content,
but also to the fact that images had become increasingly “broadcast quality.”137
Just as video images were read differently in independent movies that were shot with
a “film look,” and this altered reading affected the filmmakers’ ability to achieve their end goals,
similarly altered readings and resulting effects are present as home video’s images become
cinematic. While it is impossible to prescribe end-goals for the vast diversity of video projects
across the various spheres of production, home video is distinct among them in
that it is the least audience-driven act of practice and, for that reason, the least tied to capital.
Big-budget cinema certainly foregrounds its financial goals as it needs to recoup its mammoth
costs, and it does so by reaching an audience in some form, be it through box office sales, home
media, or broadcasting deals. Low-budget practice is similarly tied to capital, audiences, or both.
136
Jan-Christopher Horak, "Editor's Foreword," The Moving Image 2, no. 1 (Spring 2002): vi.
137
Ibid.
Commercial video is for-profit by definition, and while independent filmmaking generally
doesn’t draw a profit, filmmakers wouldn’t necessarily oppose one. Either way, the goal is often
to reach an audience, either through a distribution deal or via film festivals. Even web-based
video on sites like YouTube presents the potential for linking audience to revenue through the
monetization of videos via opt-in advertising. In discussing the cinematic in relation to
questions of capital and audience, digital video’s revolutionary potential stems from its
ability to help producers better achieve those goals. Home video is an interesting point of
comparison in that its end goals – broad as they may be – are largely different.
Saying that home video is not audience-driven is somewhat inaccurate. In many ways, it
is perhaps the single most audience-driven video practice, though the audience for each project
has historically been extremely small and specific. If the sitcom jokes are to be believed, home
movies are notoriously resistant to audiences beyond the family who shot them. In the many
instances in which home video is shot and never watched, it is literally without an audience. For
these reasons, one can view the practice of home video as more archival than communicative.
Or, if communicative, it is so with the self (or, as a larger “self,” the family). Yet, if home video
was largely archival by definition, the audience for that archive has changed quite dramatically
over the last decade as the potential for wider broadcast has reached the home. Home video has
become something more public than it has been in the past.
In the 1990s, home video was filmed with a camcorder. When compared to the early
prosumer digital cameras, home videos weren’t all that different in image quality. For each
video-producing region (e.g. NTSC in North America or PAL in the UK), professional video and
home video were the same format. They had to be, as their end point of exhibition, the television,
was the same. While prosumer and professional cameras generally offered somewhat better
dynamic range, color rendering, lensing, and less-compressed storage media, the end products
were not terribly different in their final images. The difference between the professional and the
amateur lay in control over the image more than in the technology itself. In discussing camera
quality with my students, I used to tell them that the easiest way to identify the professional
caliber of a camera is to check and see how many buttons, switches, and dials it has on the
exterior camera body. These added controls led to added control; the prosumer and professional
cameras allowed for greater image manipulation. The popularity of a camera like the Canon XL-
1 in the early 2000s came in part from its progressive scan Frame Mode, but the camera also had
image manipulation controls for aperture, shutter speed, and electronic gain. It had a built-in
neutral density filter to allow for high-light shooting and greater manipulation of depth-of-field,
and a system for customizable white balance selection for greater color manipulation. It also had
a focus ring for the manual adjustment of focus, and interchangeable (though pricey) lenses. The
camera offered additional in-menu functions, including an anamorphic widescreen shooting
mode and an optional image stabilizer. Conversely, the Canon Vixia R80, a consumer model
Canon camcorder, had an on/off switch, a zoom lever, a record button, and a menu to access
footage.
For these two Canon cameras and other cameras at similar levels, the difference in image
quality came not strictly from the quantitative technology of the image sensor, but from the
ability to alter it. A pro DP loses some of her might when the only aperture setting is
“automatic.” Manipulation control was two-fold. It came from personal ability and skill, but also
from the physical allowances of the camera. Knowing how to alter depth of field was less useful
if the operator lacked the ability to alter the focus and the aperture on the camera. As a result,
home video image quality was largely marked by indicators of amateurism, even though the
cameras had image rendering technology that was comparable to the prosumer models; it was
amateurism through automation.
For example, camera aperture was frequently automated to remain as small as possible,
which ensured the largest depth of field. Doing so made the autofocus more effective, but
required the camera to automate the electronic image gain as high as possible, making the image
extremely grainy. Another example was in the autofocus automation, which would often render
images in a state of erratic focus flux, as the autofocus computer repeatedly attempted (and
failed) to determine what the most “important” part of the shot was and retain its focus on that.
Similarly, the auto-iris feature would average the brightness across the entire image and settle on
a catch-all middle ground, but this made stylistic over- or underexposure mostly impossible.
Further, home video images had notably video-y motion as the home video user likely lacked the
software and know-how for de-interlacing the video, and the low-level Canon cameras lacked
Frame Mode as an option. Thus, many of the home video tropes used in filmmaking to simulate
the “raw” and “gritty” look of home video footage were based in large part on the camera
manufacturers’ tendency to remove control from the home video user rather than the quantitative
specifications of the cameras’ image sensors.
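The full-frame averaging behavior described above can be sketched as a toy auto-exposure routine. This is a deliberately simplified model of my own, not any manufacturer's actual firmware; the function name, the flat-list input, and the middle-gray target value are all illustrative assumptions:

```python
def auto_iris_gain(pixels, target=118):
    """Toy model of a full-frame-averaging auto-exposure routine.

    pixels: a flat sequence of luminance values (0-255). Every pixel is
    metered equally, and the returned gain factor would pull the frame's
    average back toward a fixed middle-gray target -- whether or not the
    shooter wanted a dark or a bright image.
    """
    avg = sum(pixels) / len(pixels)
    # Scale exposure toward the target; do nothing for an all-black frame.
    return target / avg if avg > 0 else 1.0
```

Feed it a deliberately moody frame averaging a luminance of 40 and it returns a gain of roughly 3x, brightening the image right back to middle gray; stylistic underexposure is simply averaged away.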
The relationship between the level of camera control and the resulting images was
additionally mediated by the users themselves. An untrained home video user wielding a
prosumer camera like an XL-1 in the early 2000s would likely produce an image that was only
on-par or potentially worse than that of a lower-level home video camera, because without the
technical mastery, more control over the image would be a hindrance, resulting in images that
were out of focus or poorly exposed. Even as home video resolution increased dramatically as
camera manufacturers made the switch to progressive scan HD in the late 2000s, the trend in
home video camera design was to further automate the process of recording such that the user
had increasingly less control over the image itself. This reduction of control reached an extreme
in the smartphone camera era, wherein many camera phones have only a single record button.
Thus, even as image quality has increased to intercuttability in the current era (even in
smartphones), the hallmarks of amateurism-through-automation still mark home video images
and home video cameras; much of what higher-level cameras offer is not strictly a better image,
but more control over that image.
While home video users lacked control over the image in cameras across the digital era,
they were increasingly afforded more options in the domestic version of post-production. While
home video practice could be – and in many cases, surely was – contained entirely within the
camera, as it could record and play back video just like those in the analog era, the digitization of
video brought with it the opportunity to manipulate images in a variety of ways. In the analog
era, little control over the image itself was possible during playback once the tape started rolling,
aside from turning the "tint" and "saturation" dials on the TV itself, creating in-home acid trip-inspired visuals, but effecting no permanent changes in the images. Post-production, as a
practice, was not a common one in the analog era; home video largely jumped from production
straight to exhibition (if it even made it that far). In the early digital era, the opportunity to edit
video both temporally and also with respect to the look of the images was a newfound, if still
somewhat difficult, practice. Many early digital cameras came bundled with editing and image
manipulation software at point of purchase, and Macs and PCs were increasingly arriving with
software like iMovie and Windows Movie Maker pre-installed. The software itself was designed
to be user-friendly, but the process of getting video from the camera to the computer – digitizing
– was not particularly intuitive; required a fair amount of time and effort, as tape-based video
needed to be captured in real time; and was prone to errors that led to video that was glitchy,
choppy, or out-of-sync. As camera manufacturers broke away from tape-based media, the
digitizing process became significantly easier, but this change in storage format was followed by
an even more impactful technological shift – the spread of the smartphone.[138] With smartphones,
the opportunity for image control increased amongst all users – not just the tech-savvy ones.
Once a smartphone had captured video, it was easily (and in some cases automatically)
transferred to a computer, where it could be edited with the aforementioned bundled software.
Even more noteworthy were the in-phone apps that could manipulate video immediately after it
was shot or, in some cases, while it was being shot, as with SnapChat.
All of these methods of control had to operate within the bounds of the cameras’
technical specifications. That is to say, the ability to easily manipulate the images did not come
coupled with an upgrade in quantitative quality. If anything, it was the opposite. Video shot with
a smartphone was high definition and generally either 24p or 30p, but had a dynamic
range, color sampling, lensing, and sensor size that was inferior to most stand-alone cameras.
Instead, the pre-programmed filters on programs like Instagram allowed for control over an
image’s qualitative quality through automatic alterations of color saturation, tint, and contrast.
The technology was brought into popular discourse first through still images, wherein
casual photos were made to mimic the professional Photoshop effects that were largely beyond
an amateur’s skill level. Similarly, the video filters performed operations akin to those of
professional color graders. Users could choose from a series of pre-defined “looks.” Each of
these was essentially a fixed alteration of the quantitative data of each photo, though those data
[138] This discussion of the transition from video tape to drive storage to smartphone is intentionally abridged, as it is the primary focus of this dissertation's following section.
manipulations were largely hidden from users behind an interface. While such filters did give
users more control over their images, it was in fixed amounts, and just as automated as before –
simply that these new automations tended towards aesthetics rather than ease-of-use. If I liked
my videos looking warmer and more contrasty, I could achieve those qualitative effects without
needing to know 1) how to effectively light and design my image when shooting; 2) how to
adjust the camera’s sensor settings to be most manipulable in postproduction; and 3) how to
manipulate the individual highlight/mid-tone/shadow levels and the red/green/blue/saturation
balance.
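The fixed nature of such a preset can be illustrated with a short, purely hypothetical Python sketch of my own (none of the names or numbers below come from Instagram or any real app): a "warm and contrasty" look is just a hard-coded per-pixel transformation whose contrast and warmth values the user never sees or adjusts.

```python
def clamp(value):
    """Confine a channel value to the 8-bit range 0-255."""
    return max(0, min(255, int(value)))

def warm_contrast(pixel, contrast=1.2, warmth=12):
    """Apply a fixed "warm, contrasty" preset to a single RGB pixel.

    The look is entirely baked in: contrast stretches each channel away
    from middle gray (128), and warmth shifts the color balance toward
    red and away from blue. The user picks the preset, not the numbers.
    """
    r, g, b = ((c - 128) * contrast + 128 for c in pixel)
    return (clamp(r + warmth), clamp(g), clamp(b - warmth))
```

Choosing a different filter would simply swap in a different set of baked-in constants; the user selects a look, never the underlying levels.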
The automation that largely defined the look of home video throughout the early digital
era remained in the more recent smartphone era, and the sense of image control that users were
given was tempered somewhat by the fact that control was often limited to certain specific looks.
Still, home video users were able to easily affect the look and feel of their images in ways they
had not before. In Jean-Pierre Geuens’s somewhat nihilistic words, “Whereas it might have taken
years for someone like Mikhail Kalatozov to develop the extraordinary style that became his
hallmark, today 'style' can be purchased on day one by activating a switch on a gizmo."[139] Just
as the "video look" was partly derived from its ubiquity in home video, as home video tended
toward cinematic specifications and aesthetics played a larger part in home video practice, the
ubiquitous and the aesthetically pleasing were no longer strictly mutually exclusive. That which
defined an aesthetic point of pride and a gateway to cultural influence became quotidian – the
common images of ourselves had become cinematic.
[139] Geuens, "Digital World," 20.
The Effects of Intercuttability and the Cinematic Self
For both big-budget and low-budget production, increases in quality could be utilized for
financial gain, reaching wider audiences, cost shaving, increased production value, stronger
creative control, riskier projects, and potentially revolutionary power shifts. But what of the
home video user, where few of those advantages played into the practice? What does it mean that
a Bar Mitzvah video is now in HD? The word “now” is a key modifier. While it may be
impossible to prescribe an overarching, defining purpose to home video practice, as it varies so
wildly in different households, I would argue that the audience for home video in the digital era,
regardless of its ultimate goal, has become far more public. As opposed to most art-related
and/or commercial projects, where exhibition is often the primary goal (even if only as a means
to generate profits), for many home movies, exhibition remained secondary to archiving. Judi
Hetrick’s article on the renewed importance of studying home video, written in 2006, calls
attention to the difficulty of accessing home video content due to its private nature: “it is a rare
tape that is made public to a larger audience."[140] Such a claim was accurate at the time, but has
been rendered dated in short order. The increase in home video image quality has occurred in a
time of significantly increased visibility.
The prior lack of exhibition in home video content was partly due to inefficiency. In the
amateur film days, pulling out the projector, setting up the screen, and spooling up the reels of
film was all a bit labor intensive. Video camcorders were largely compatible with televisions and
could easily be made to play back footage, but the nature of playback remained linear, making
viewing a servant to linear time. Searching for individual moments of content to exhibit required
[140] Judi Hetrick, "Amateur Video Must Not Be Overlooked," The Moving Image 6, no. 1 (Spring 2006): 68.
slogging through many minutes of footage or even multiple tapes, making the archival nature of
home video quite literal as the home video user became the archivist and the researcher; if I
wanted to show someone a particular video of my dog, I would need to perform considerable
labor. With digitality and the move away from the linearity of tape came random access, so
pulling up only bits and pieces of footage was easier. Videos of my dog, and specifically the best
ones, are readily available and from any location via my phone; the opportunity cost of
exhibition for home video is much lower. The same is true for reaching an audience. For better
or for worse, my Facebook page can consist of nothing but a string of home videos, invading the
screen space of my friends and their friends. Exhibition is easier and more common, and this
development has occurred concurrently with increased aesthetic control over one’s images.
In the analog and early digital era, I was able to capture my child’s first steps or her first
hockey goal in SD video. The images looked lower in resolution and they moved differently than
if I had shot the scenes in a more expensive format, but as long as the content was visible (and it
generally was, thanks to idiot-proof camcorder design), it accomplished the goal of capturing the
moment. The archival function was achieved. The lack of control that was a defining feature of
home video cameras actually made the archival process easier, as users had to worry less about
images being underexposed or out of focus when they had less opportunity to mess them up
themselves. Excessive graininess wasn’t a problem, because it made the scene more evenly
visible. Interlaced video wasn’t an obstacle to conveying illusion, because illusion was never a
goal. Narrower dynamic range and color space didn’t detract from a user’s ability to capture their
kid’s hat trick. The markers of video-ness that identified productions as amateurish and hindered
fiction storytelling – those coding video as low production value and incapable of illusion – were
not a hindrance to home video practice because home video was those things, and never had any
aspirations of not being them.[141] If anything, it was the existence of home video that cemented
those semiotic markers in the minds of the low-budget producers trying to escape their station. It
didn’t matter much that home video looked “bad,” but it did benefit in a variety of ways when it
started to look “good.”
Despite the fact that image quality was less of a central concern for home video’s
archival purposes, notions of quality have always featured prominently in the marketing of
consumer cameras. Pushing some generic and ambiguous idea of “high quality” – this phrase
itself was oftentimes the extent of it – was common in camcorder ads even in the analog era.
Hitachi’s VK-C1000’s picture quality is “incredible,” though the images of Sony’s F77 are
“infinitely superior.” Canon’s ad for their VC-10A juxtaposed two televisions, one featuring
professional Canon camera footage used in TV broadcast, the other with footage from the
consumer camcorder, looking essentially identical (though my gut says neither one is a video
image, and both are photographs). These claims were sometimes dropped in passing, but were
often tied to the camera’s technology – especially proprietary names like Canon’s AccuVision or
Hitachi’s HQ Video. Considering all the cameras were outputting NTSC video that was identical
in resolution, claims of superior quality – especially “infinitely superior” – are dubious in their
ambiguity, and are present even when there were no clear upgrades to the technology. For
advertising purposes, the claim of “high quality” was enough. Such claims were frequently
coupled with the trumpeting of the cameras’ ease of use and automated features, resulting in ads
that would push cameras whose quality was directly tied to their automation. Thus, the very
[141] Certainly there were hobbyists for whom aesthetics were an important element, many of whom were probably readers of the trade press and existed in the space between the home video amateur and low-budget practitioner or prosumer. Again, the boundaries between the spheres of production should not be read as rigid, as they were not in practice.
restrictions that prevented user control over the image were being highlighted as the features
that would make the images look their best.
Claims of automatic high quality were especially exploited in key moments in which
there actually was a significant change (quality-related or not) – the switch to digital video, the
release of DVD camcorders, the upgrade to HD, and the release of smartphone cameras. The
switch from analog to digital does not inherently carry any upgrades; analog vinyl records are a
higher fidelity storage medium than the digital compact disc. That fact didn’t stop marketing
companies from making “digital” the buzzword of the 1990s and early 2000s, linking it with
cutting-edge, future technology (digital compact discs using “precision lasers”) and – especially
relevant to this work – superior quality. This was true for digital video despite the fact that the
switch to digital didn’t initially offer any tremendous improvements in image quality at the
consumer level. As was the case with more general marketing claims of “high quality,” the fact
that video was digital was enough of a case for advertisers to make, and this push for “digital
better-ness” was seen across advertising in the 1990s and early 2000s with respect to audio and
video especially, with CDs, DVDs, video and still cameras, TVs and broadcasting, but also more
generally in anything that could be computerized, from wristwatches to car dashboards.
In nearly all of these cases, it isn't made clear to the consumer what, exactly,
digitality entails, but it certainly is made to seem beneficial; digital video is an ambiguous
upgrade. What is being sold is not the nature of the upgrade, but the act of upgrading itself; it
functions as a qualitative improvement in personal status in addition to or even apart from any
quantitative specifications in the image itself. Even with HD cameras, wherein there was a
significant improvement in quantitative quality, the act of purchase was not just driven by the
need for a better image (SD was acceptable before) but a desire to stave off obsolescence (lest
you yourself become obsolete) and step up in social status. Unlike the more logistically-driven
turn to digital video in the higher tier of productions, the social need for HD in the consumer
circuit was largely manufactured; it accompanied and was partly driven by the turn to HD in
television. “If your TV is HD, then surely your camcorder must be!”
Technical upgrades driven by paranoia were not unique to the home video sector, as a
similar drive was seen in the low-budget production circle, though with a greater specificity to
the technical specifications. Much like low-budget video producers, home video users could
utilize the new technology for an increase in social acceptability. For the home video user, it
wasn’t strictly about making one’s home movies better – it was about making oneself better.
Considering the aspirational end goals of many low-budget filmmakers, though, it is worth
considering how different these motives really were. For many filmmakers, achieving wider and
greater audience or industry recognition was an act of artistic vanity more than anything else. If a
filmmaker wanted to make a film, to tell a story, to express oneself, but lacked either the
financial means or the logistical support to do so, the use of video allowed that individual to
potentially circumvent the gatekeepers and fulfill their desires; it was a keeping-up-with-the-Cannes. For many filmmakers and families, though, increases in social status through technology
went beyond status-striving, artistic or otherwise, and served as a point of personal pride,
empowerment, and cultural visibility interpersonally or artistically.
HD home video camcorders began to phase out standard definition ones by the late
2000s, and this transition did mark a significant and visible technical upgrade. In addition to the
large jump up to 720 or 1080 lines of resolution, cameras were also progressive scan, so their
images looked significantly less video-y than cameras of previous eras. The switch to HD was
not just a bump in quality, but a disruption in the quality ratio amongst the different spheres of
production as the gap between the professional image and that of the home video camera had
lessened significantly. In some cases, the gap in technology was absent entirely, as with the
DSLR’s use across all spheres of production.
The period of shared technology and the heyday of the standalone HD camcorders was
relatively short-lived. As smartphones became common in the early 2010s, the markers of
amateurism became tied to elements of hardware apart from image quality; smartphones still
shot cinematic HD video, but their hardware controls were limited by their design.
Most smartphones lacked true zoom lenses and still had comparatively wide depth of field,
largely due to their tiny sensors. Home videos of previous eras were often marked by aggressive
zooms (a technique perpetuated by the repeated trope of the “amateur home video” in fiction
film), in part because they represented one of the few opportunities for user control. The home
video image in the 2010s was, by contrast, a much wider-angle one as the zoom-less smartphone
became popular. The lack of a true zoom was counteracted by the smartphone’s ability to film
very close to the subject and, with the addition of front-facing cameras, to the self. Lastly, the
aspect ratio of the smartphone was often the butt of jokes and ire, with many satirical blogs,
internet videos, memes, and commentary in magazines like Videomaker railing against the newly
popular vertical 9:16 image and calling for phone shooters to reject verticality for a familiar
widescreen image. As form and format coalesced across spheres of production, markers of
amateurism were shifted largely to practice. Tangerine (2015), a feature fiction film shot on an
iPhone, benefitted from the high quality recording format but distanced itself from other phone
video by eschewing those markers of amateurism.
There were some areas in which increases in image quality in the home video camera did
make a difference, and these were largely linked to the increased visibility of home video. As the
practice of shooting home videos began to incorporate exhibition on sites like Facebook or
YouTube, where video would live semi-publicly and seemingly forever, there was an increased
concern over the look of the videos or, perhaps more accurately, the look of the subjects and
content. Taking a photo or a video of something or someone to be posted publicly became not
just an act of archiving an event or a place but something more presentational, more
exhibitionist. The concerns of the average user became partly those of the professional
photographer, invested in their work looking aesthetically pleasing, and concerns over “look”
were likely especially pressing when the subject was the self.
Much has been written about the consciously constructed artifice of social media, and
stories abound of social media personalities who would go through hundreds of selfies before
settling on one that was an appropriate (if not necessarily accurate) representation of the self.[142]
It is not entirely surprising that the widespread digitizing and public-ing of photographic and
videographic self-representation was accompanied by an increase in technical camera quality. If
the proliferation of HD cameras was driven largely by a manufactured need, the proliferation of
image manipulation apps answered a social necessity born of the sheer number of images of the self being
put out into the world. Certainly selfies existed, albeit with less accuracy, in the pre-camera
phone era, and even in the pre-Facebook age of early digital photography, the newfound ability
to immediately review photos led many subjects to become particularly self-conscious, but
concerns of aesthetic quality that had existed previously were no doubt exacerbated as the
potential audience increased and, for better or worse, as resolutions increased. News anchors
were notoriously hesitant about the switch to HD television broadcast, as older anchors suddenly
[142] As one example, Rebecca Pearson's confessional piece for The Telegraph about her constructed Instagram personality, "The Ugly Truth…" (November 9, 2015).
looked their age, once-hidden blemishes revealed themselves, and the magic of stage makeup
crumpled under digital scrutiny.[143] Beyond TV personalities, though, I would suggest that the
new potential for individuals to see themselves in high definition didn’t generate the same
paranoia as it did in the newscasters bemoaning HD. Audiences were used to seeing TV
personalities in low resolution, whereas they always saw themselves in the highest resolution
their eyes would allow through their literal mirrors, never mind the metaphorical ones. As point-and-shoot cameras and smartphones upgraded to HD, the increase in quality again came not just
in resolution, but as images more in line with the cinematic standard. Essentially, as cameras
became more cinematic, our self-representation did as well.
For home video users used to filming in low-resolution interlaced SD video, I imagine
that their experience first filming with their HD smartphones in 24p produced an effect similar to
that which low-budget filmmakers (myself included) experienced when first shooting with an
XL-1 in Frame Mode or a DVX in 24p – the camera presented images of the world in a way that
I was familiar with, and familiar with filming, but in a way that was, for lack of a better word,
“cooler.” As Ganz and Khatib described our relationship to the cinematic in 2006, “We have
probably seen ourselves on video; we do not know how we look on 35mm film. On film we look
at other people. On video we watch ourselves."[144] With cinematic video, the medium of
watching culturally significant others and that of our own quotidian existence collided. While a
home video user may not have recognized the apparent revolutionary potential of having
cinematic images at their fingertips, they certainly might recognize that their video – and their
subject matter – looked more like a movie. And unlike the meaning-making that comes from
[143] Chris Emery, "Anchors Can't Hide Anything in HD," The Los Angeles Times (December 25, 2007).
[144] Ganz and Khatib, "Digital Cinematography," 34.
self-applied video filters, the cinematic was a constant – it was an always-present elevation of
image texture, at least during its novelty phase.
In addition to increased resolutions and optics was, again, the increased power for image
manipulation through apps like Instagram. Opening up a world of possibilities to a user with no
experience with the complex nature of color grading, image alteration software offered video
filters that could be slapped on to one’s video in order to alter the color, contrast, and texture of
the image. Not surprisingly, some were even designed to look specifically like analog film. Such
image-alteration software soon became standard camera technology on phones themselves,
embedded and normalized into the capture mechanism with no need for a special app. As phones
and cameras developed over the 2010s, it wasn’t just that home video looked increasingly better,
it was that people effectively did as well, and doubly so as interpersonal interaction became more
and more mediated by the camera on social media. Concern with appearance is obviously
nothing new. What is novel is the fact that appearances themselves are increasingly digital.
Brooke Wendt suggests that the application of textural alterations to one’s own image offered
practitioners the opportunity to “idealize themselves,” especially as those images become
public.[145] Camera image quality becomes digital makeup.
Despite the sales pitch of high quality home videos, cinematic image textures, and the
potential for idealized selves, improvements in image quality conflicted with ubiquity. James
Moran cites the major difference between home movies (shot on photochemical film) and home
video is that video’s cheapness and convenience resulted in the “increase[d] opportunities for
representing a greater range of social intentions less likely to emerge on celluloid. Therefore,
[145] Brooke Wendt, The Allure of the Selfie: Instagram and the New Self Portrait (Amsterdam: Network Notebooks, 2014), 26.
rather than assume that video in itself may 'revolutionize' amateur practice because it changes
conventional perceptions of domestic living (such as home movies), we more properly should
conclude that the new medium is more likely to represent the fuller range of domestic ideologies
already present in the culture, well before the arrival of home video or even amateur photography
itself” and, as such, “home video reveals that families have always been more complex and
contradictory than home movies have generally portrayed them."[146] If the use of home video
amplified the moments that amateur filmmakers deemed unworthy, intentionally avoided, or
sought to keep hidden, the use of the smartphone surely amped up the representation of these
inessential and unwanted “complexities” to an extreme, often preserving them in a very public
way. While smartphone usage may have perpetuated a novel interest in aesthetics along with the
rising tide of image quality, it accompanied a rising tide of image content, complicating the
potential for a wide-scale representational idealization of daily life.
Smartphone companies recognized and capitalized on the increased concern with
aesthetics, drawing attention to their camera quality directly. The generic “high quality video”
stamp that was standard on camera ads became the entirety of the ad itself. Apple ran a billboard
campaign for the iPhone that featured nothing but aesthetically pleasing images and a tagline that
read “Shot on iPhone 6.” Nokia responded in semi-parodic fashion utilizing side-by-side image
comparisons in their ads, showing the apparent superiority of their camera to the iPhone’s. As
image-making became a greater part of the practice of owning and operating a smartphone, ads
increasingly addressed it, in reference to both casual and more ambitious practice. In a 2015 ad
for the iPhone 6s, a group of young moviemakers are shooting a horror film with an iPhone. The
146
James Moran, There’s No Place Like Home Video (Minneapolis: The University of Minnesota
Press, 2002), 68.
voiceover narration, describing what has changed on the new model, states, “Student films don’t
look like student films.” There is a cut to director Jon Favreau on his own film set, watching the
students’ movie on his phone. He exclaims incredulously, “This is a student film? Get these kids
on the phone for me.” The narrator cheekily responds, “Dude, that is a phone.”
In this ad, the cinematic nature of the imagery is being used quite literally to market the
phone. First, it reminds viewers that movies can be and are being filmed on an iPhone.
Second, it asserts that the images being shot are of professional quality, as they don’t look like “student
films.” That compliment to the commercial’s student filmmakers is somewhat backhanded, as
the implication is that student films inherently look bad. Whether those shortcomings are
technology-related, the narrator doesn’t say; the important thing is that the iPhone has
remedied any flaws of amateurism. While the ad certainly does appeal to moviemakers, the
primary audience is one that likely won’t ever shoot a short or a feature. The ad is using the
camera’s moviemaking potential to sell a home video camera – with the iPhone 6, your internet
videos will be shot on a cinema camera.
Beyond quality, the ad further capitalizes on a new and noteworthy trend in home
video practice – self-made fame through videography. Apple is presenting an updated version of
the mythical tale of Hollywood success, in which a filmmaker slides a script to a famous director
under the bathroom stall door. By coupling that narrative with their video technology, Apple is
effectively mythologizing that technology itself – it has made that mythical labor
obsolete. By shooting a video and uploading it to the internet – that which the phone
deterministically “allows” – a filmmaker gets a break. The iPhone has made him famous. In
addition to playing on the desires of low-budget moviemakers, the iPhone ad plays heavily on
the potential success – both real and imagined – for social media celebrity, a production context
situated halfway between amateur and professional. Not long after the birth of YouTube, other
pre-existing social media sites like Facebook and startups like Vine recognized the potential
(cultural, perhaps, but more likely financial) in communication through video – particularly the
viral sort.
While all manner of videos had the potential to receive a tremendous number of views
and clicks, from high-budget music videos on Vevo to a no-budget home video of a child going
to the dentist, across the spheres of production, individuals and organizations recognized sites
like YouTube as not just platforms for sharing videos with friends or sending one’s creative
work into the ether, but for reaching an audience in lucrative ways. Top-earning social media
video personalities cross demographics and styles, from comedians and pranksters to short
filmmakers, makeup artists, musicians, video game commentators, vloggers, and Kardashians.
Across these varied personalities, many of whom take in a considerable amount of advertising
revenue at the top end, video cameras are obviously fundamental to their ability to create
content, and they use them with a great spectrum of skill. Some restrict themselves to solitary
webcams, others shamelessly use grainy, blurry, vertical Snapchats, and still others use
higher-end DSLRs to create videos with a high production value. I would suggest, with some caveats,
that the rise of social media celebrity is linked not just to the ability to upload and share videos
(though it is certainly that as well, as I’ll further discuss in the next section), but also to the
increase in image quality in the digital era and the resulting qualitative changes in look.
On the one hand, it might seem that “bad looking,” video-y home videos have as
much potential to draw a crowd as the rest, even if they lack the cultural clout. America’s
Funniest Home Videos has aired for 28 seasons and counting, from 1990 to 2018. These videos,
entertaining enough to keep the show in business, were perhaps the epitome of video-y video:
low resolution, interlaced, flat, deep focus, lacking dynamic range, and without any semblance of
professional technique. The realist DIY aesthetic wasn’t a hindrance and, if anything, it was
likely part of the draw. Rather than an obstacle to achieving a certain level of cultural superiority,
it served as a marker of authenticity and may have led to the show’s success more than a
polished alternative. It is conceivable that if digital video sharing technologies like YouTube had
existed in the late 1990s, the YouTube star might have arisen then. Still, while content may be
king for virality, most of the top-tier, highest-rated internet video personalities and content
creators on YouTube present work of considerably higher production value than the
average home movie. A one-off viral video can succeed regardless of its image quality, but
for media-makers looking to increase their subscribers and advertising revenue, standing out
from the sheer number of uploads and potential competing creators is sometimes only possible,
or at least more reliably so, through the ability to exercise more control over the image. To that
end, many social media personalities ally themselves with services called multi-channel
networks (MCNs) – effectively social media distribution companies. Amongst the services that
these MCNs provide (like marketing and distribution channels), MCNs offer the potential for
higher production values through financial investment and training. As social media video
analytics firm Vidooly writes, “While it is definitely possible to become a big YouTube star even if one
has zero production cost, the chances of becoming successful increase multifold when the
content has a high production value.”147
An investment in the production value of content
creators is an investment in their market potential.
147
Aravinda Holla, “How Multi Channel Networks…” Vidooly (May 30, 2016)
http://vidooly.com/blog/multi-channel-networks-youtube-superstars.
Much like the late 1990s, when increases in image quality allowed filmmakers to punch
above their weight due to their ability to digitally manipulate their videos, the support YouTube
MCNs lend their content creators’ production value differs only in degree of difficulty. To
manipulate a video image to a professional level in the late 90s, one would likely need a
prosumer camera, costing several thousand dollars, a powerful PC and ample drive space, editing
software, and the ability to use all of them effectively – skills that were uncommon. It was costly
and difficult (though not as much as shooting on film, of course). For a 16-year-old wanting to
create makeup tutorials in the 1990s, achieving a high production value would have been
prohibitively difficult. By 2010, one needed only a smartphone and some basic training. The
iPhone shoots high-quality HD progressive scan video, editing software is built into the phone
and the social media sites themselves, and color filtering can likewise be handled on various sites
or built-in apps. Even as digital literacy has risen, one still needs to claim control over
images to step above the bar, but the technology itself and the wider ability to use it make
achieving a desired look much simpler. A production value that might once have taken several
college-level courses to generate is now possible through collective learning on the very social media
sites that one aims to use as a platform for distribution. It is not simply that baseline-level
consumer video looks better than it did in the late 1990s. It does look good. But it is also easier
and cheaper to make it look great than it would have been in the late 90s, and it is easier and
cheaper for a large MCN to increase the production value of a potential star.
As a result of the ease of professionalizing one’s productions, one of the most significant
effects of the increase in image quality in the home video sector has been the widening of the
sphere of low-budget production, and the migration of individuals to that more professional tier.
Not unlike the situation in the early 2000s with low-budget practitioners using semiotic decoys to
cinematically pass their images off as those of higher cultural influence, the wide scale
proliferation of cinematic video across smartphone users infused the once strictly amateur
practice of home video with the concerns, the aims, and the potential results of professionals.
Along with the upgrade in look, the act of professional videography expanded beyond the
readers of the trade press and into the practices of any number of other hobbyists and professions
whose work could benefit from the adoption of moving-image creation. Also like the low-budget
semiotic decoy revolutionaries, while certain individuals may have ascended the ranks – some
even to stardom – such ascension did nothing to greatly disrupt the media flows already in place,
as even self-made fame drew revenue to media giants like YouTube and its advertisers. Of the
top 100 most-subscribed YouTube channels, the overwhelming majority are run by major media
companies.148
The Disappearance of the Cinematic?
One of the greatest effects of the proliferation of cinematic cameras in the digital age has
been the democratization of the cinematic look. What once was in the control of only those with
capital now belongs to everyone. Because the notion of the “cinematic” as a qualitative standard
is predicated on difference, as the imaging formats conform, the very idea of an image texture
that marks one’s work as that of higher cultural capital itself disappears. That is not to say that
the difference in images no longer exists – it does, though the dividing lines are drawn
differently. For the YouTube star, the home video user, the wedding videographer, or the low-
budget filmmaker, increased image quality alone doesn’t bring one’s work into the realm of the
cultural elite. The cinematic is merely the new normal.
148
YouTube analytics via Social Blade, socialblade.com
The narrowing of the gap between the high-budget, the low-budget, and the consumer has
notable effects on video’s revolutionary potential, but not necessarily in benefit of those
revolting. The “film look” – the cinematic – was once a mark of quality and professionalism.
The very reason why fiction moviemakers didn’t want to use video cameras in the 1990s was
because the look of the cameras had a certain stigma, and one linked specifically with low
budget, “low” cultural value images. As long as that visual division still existed, and as long as it
was attainable by those seeking to profit from it (financially or culturally), the opportunity
existed for marginalized voices to achieve a greater standing through means that were technical
and skillful, not financial. If a queer filmmaker couldn’t finance their movie, they could make a
low-budget version of it that still could achieve recognition and bypass cultural gatekeepers
through the application of technical mastery. That is to say, a hierarchy of image quality was
necessary in order to separate oneself from the rabble of lesser image formats. Achieving a “film
look” didn’t just elevate one’s images, but it separated them from the common; it was just as
much about pushing down as it was about pulling up. As the differences between high, low, and
middle-ground image quality are smoothed out, such separation is no longer possible. The
boundary lines are now drawn more in invisible quantitative specifications and production value,
and both are tied directly to budgets. Capital is the great unequalizer.
If the video revolution has been one in which the qualitative hierarchy of image formats
no longer exists, is that homogeneity to be counted as a victory, a loss, or something else? It becomes
a question of whether democratization and revolution are sometimes antithetical. Those trying to
create content to compete with the narratives of big-budget production companies had pulled
themselves up above the masses, only to have the masses quickly catch up – the shouting of the
revolutionaries is drowned out in the disorganized shouting of everyone. The democratization of
high-quality image making only makes the financial modes of separation amongst the different
spheres of production more significant. Still, one can’t look at the video revolution strictly in
terms of image quality. In the following section discussing video cameras and utility, I suggest
that the revolutionary potential that once was contained in the video camera’s ability to achieve a
certain look was relegated elsewhere – specifically to the video camera’s ubiquity. If low-budget
filmmakers lost some of their ability to carve out a niche for their work which they deemed
important socially, culturally, and/or artistically, the proliferation of image making into the hands
of the masses opened up new and different revolutionary potential as the video camera
increasingly became a device of not just art- or profit-making, but of communication and
community building.
Section 3 – Video Camera Utility
You can get misty-eyed about 35mm in terms of luminosity if you
want, but my memories of making feature films were of an
incredibly antiquated system, almost Victorian in its method,
involving cumbersome machinery. Though it was a privilege to
work with a process that hadn’t changed substantially since the
days of Stroheim or Griffith, it was also a drag.
- Chris Petit, film director149
My argument that the video camera image has become more cinematic over the last
several decades was born not strictly from a scholarly place in which a discussion of medium
specificity was the end goal, but instead from the communities of videography themselves, from
the trade press to the development engineers to the general public. The goal was to trace the
history not just of the technology, but of the technology’s images in relationship to how those
images have been coded and read. That coding and reading relied heavily on media comparison.
Cinematic video is only half the story and, as Roy Armes’s seminal study of video suggests,
“video should not be seen simply as a latter-day descendant of film.”150
Armes was writing in
1988, well before the popularization of anything that might be called “cinematic” video. And yet,
even now, as the video image has become cinematic in nearly all respects, and video has largely
replaced photochemical film in nearly all uses (Hollywood moviemaking the most noteworthy),
and none but the most trained eyes can tell them apart, I would still echo Armes’s statement. In
becoming more cinematic, shifts in image quality at various points in history brought with them
significant shifts in the practices in which communities of videography engage, but it was only
because of advances in the functionality of the cameras that produce those images that increases
in image quality could be parlayed into plays for capital, esteem, or visibility. Those advances in
149
Chris Petit, “Pictures by Numbers,” Film Comment 37, no. 2 (March/April 2001): 38.
150
Roy Armes, On Video (London: Routledge, 1988), 34.
functionality are what I am calling “camera utility,” the nature of which I will discuss in these
opening paragraphs.
To think of video as a “descendant” of film is misguided in part because the technology
developed concurrently and from diverse sources (especially considering the computer lineage of
contemporary digital video), but the misconception goes beyond that. When critics like Noël
Carroll push back against characterizing moving image media materially and instead find the
theoretical benefit in a more all-encompassing concept of “moving images,” the disappearance of
physicality from the discussion threatens to make us lose sight of that which makes individual
media unique beyond the fact that they are “moving.” The uniqueness of the video camera as a
material object should not be overlooked just because it produces moving images like a film
camera once did. If, as John Belton claims, the digital cinema revolution is a “false” one in part
because it has been mostly invisible to audiences, his claim is true only if one considers the
images on screens and not the tools of their production. To hold a VHS camcorder from the early
90s up alongside an iPhone is to demonstrate a remarkable change in a relatively short time in
how the digital video camera functions as a material object – in how it looks and how it feels. In
how much it weighs and how much it costs. In where we can carry it and how often. In what it is
attached to (physically and digitally). In how easily I can view its images and how easily I can
show them to people – on set or in the home, on YouTube or on Netflix. Video may now be
floating ethereally in cyberspace, absent of index and severed from the real, but it exists in that
form due to acts of engineering and labor. Because cinema is digital, it can be shot with a smaller
camera. Because cinema is digital, I have more money in my wallet. Because cinema is digital, I
can carry my camera with me at all times.
Much of the theoretical discussion of digital imaging, especially in relation to cinema,
frames digital video against film specifically. In filmmaker Jean-Pierre Geuens’s ruminations on digital
moviemaking, he writes, “the narrative focused exclusively on the efficiency of the new medium,
on its ability to do the same things better, without anything else being altered.”151
If my interest
in the previous section was in tracking that narrative of media comparison, this section of my
dissertation pushes beyond the short-sightedness that Geuens rightly critiques, looking not at
how video did “the same things,” but considering that which is unique to video in its digital form
– its fusion with the computer. Through developments in camera utility, the digital video camera
became decidedly more unlike the film camera, and across all spheres of production.
While improvements to film cameras and film stocks are (or at least were) continually
being made across that medium’s life, there exists a certain quantitative ceiling in the medium.
Michael Cioni, CEO of postproduction house Light Iron, explains, “You can’t make film smaller
… You can’t make 35mm be 8K resolution, no matter what you do. You can’t have a [film]
camera be four pounds. You can’t fit a 400-foot magazine in a smaller space. It can’t improve at
the rate Moore’s Law says we can predict technical improvements [in digital].”152 The
mechanical and photochemical realm of analog film presents certain physical boundaries that
digital cameras do not face. These boundaries affect visible qualities like resolution, but also
more function-related factors like size and portability, cost, distribution, and technological
networking. As video has effectively matched the image quality of film, its uniqueness as a
medium and its future technological development lie specifically in its utility.
151
Jean-Pierre Geuens, “film/digital,” digital filemaking,
https://rethinkcinema.com/digitalfilemaking/film-digital/.
152
Quoted in “Landmark Year for Digital,” Variety, July 7, 2011. “Moore’s Law” refers to the
observation that transistor counts on chips double roughly every two years, yielding exponential growth of computer processing power over time.
As was the case with the term “cinematic,” “utility” threatens to be ambiguous to the
point of uselessness, albeit without the weighty cultural connotation of the former term. When
speaking to utility, I am referring specifically to a camera’s ability to be used and its usefulness.
As a usage- and practice-based history of video cameras, tracing a camera’s functional potential
is a critical element in understanding its place in practice. While ultimately the video camera is a
tool for producing images, and the nature and quality and content of those images are of utmost
importance in discussing videography as a practice, the ability for a practitioner to produce and
manipulate those images has varied greatly across the digital age, both in the design of the
cameras and in their relationship to other technology. If control over images offers the potential
for power and influence, the ability to wield that control is tied to the functionality of the tools of
manipulation. As image quality conforms, utility becomes an equally crucial point of separation
amongst cameras and practitioners. The Arri Alexa and the iPhone X both produce cinematic
images of varied quality, but their differences in utility are vast. Only one of them lets you
upload videos to Facebook.
Looking at practitioners’ assessments of the video camera’s medium-specific properties
and practices across the late analog and the early digital era, one of the most common
assumptions about the video camera was that it was a “practical” medium; its medium-specific
properties led to practices that were easier and more convenient than other imaging formats.
While the video image was unique in a way that was stigmatized as largely detrimental, the
video camera’s functionality offered specific advantages in utility, especially over a film camera,
which lay in three specific areas: cost, electronic transmission, and flexible storage.
Cost
To shoot video in the 1990s meant potentially saving a great deal of money on media. As
a point of comparison, a 60-minute MiniDV tape cost approximately $4. A 3-minute reel of
16mm black-and-white reversal film, plus developing costs, was around $40. The per-minute
cost of video vs. film was approximately $0.07 vs. $13, and this obviously offered considerable
savings to the videomaker. Reduction in budget through video often extended beyond media
itself. Digital video generally required fewer technical personnel on set and less elaborate lighting
schemes, which meant faster shot setups and more setups per day, which meant fewer days,
fewer personnel, fewer paychecks, and lowered equipment cost. Union camera crew rates for
shooting video tape were additionally lower than those for shooting film.153 The smaller crews
and less-obtrusive shooting profile that came with video often meant more potential for
run-and-gun guerrilla shooting, which entailed skirting the permits and the crew required to lock down
locations, again considerable (albeit legally questionable) savings. The cheaper cost additionally
allowed for more mistakes in shooting, which some filmmakers found liberating,154 though others found it led to a “lackadaisical response” from crewmembers on set.155
Further, considering the exorbitant cost of film equipment, shooting on film generally
meant equipment rental. Nearly all video cameras, by comparison, were affordable enough to
purchase outright, such that shooting on video was also an investment in hardware.
the price of a sustained film camera rental, small production companies or individuals could
instead purchase a video camera that could be used on multiple future projects, thus mitigating
153
Peter Kiwitt, “What is Cinema in a Digital Age? Divergent Definitions From a Production
Perspective,” Journal of Film and Video 64, no. 4 (Winter 2012): 3.
154
Petit, “Pictures,” 41.
155
Geuens, “Digital World,” 19.
cost on those projects as well. Owning a camera further allowed for a long-term process of
experimentation, material exploration, self-training, and refining technique on the part of the
owner that would have been absent in a short rental.156 In the home video market, where cost was
undoubtedly one of the primary contributing factors in the purchase of a camera, the cheaper
format found a home (literally) with families that may not have been able to afford film stock or
film cameras in previous eras.
Electronic Transmission
Video’s method of capture and recording media itself had unique benefits. The first and
most noteworthy was video’s immaterial nature as an electronic signal. Unlike film, which
requires time for developing, there is no delay between a profilmic event and the projection of
the video image on a television. This potential for liveness dictated video’s use in live playback
of any sort, be it real-time event-based productions like sports, news, or performance (like at a
concert), or in more industrial settings through the use of CCTV, where that liveness
was exploited to extend sight for the human eye. Liveness was also an integral part of video’s
use in the gallery as artists probed the specifics of the medium, challenging viewers with their
own electronic reflections, creating visual feedback loops, and allowing for various types of
mediated surveillance. Video’s potential for real-time playback led many to consider that to be
its primary unique attribute as a medium, whether ontologically (Rosalind Krauss’s “Aesthetics
of Narcissism”157 or Yvonne Spielmann’s “Reflexive Medium”158) or ideologically (Jane Feuer’s
156
Walsh, “Digital Filmmaking,” 19.
157
Rosalind Krauss, “Video: The Aesthetics of Narcissism” in October 1 (Spring 1976): 50-64.
158
Yvonne Spielmann, Video: The Reflexive Medium (Cambridge: MIT Press, 2008).
“concept of live television”159). Video’s instant playback had utility benefits beyond
broadcasting as well. When shooting film, there was a considerable amount of guesswork
involved on set when considering the finished product. Regardless of how meticulously a shot
was planned, the image in the viewfinder was strictly optical – no different from looking at it
with a spyglass. When shooting video, the viewfinder contained a tiny television that displayed
exactly what would appear on a larger one. As the on-set saying goes, with video, “What you see
is what you get.” While the ability to view the end-product in-camera was certainly useful for the
lesser-trained amateur or home video user, it surely didn’t hurt the professionals either. The
feature was so useful as to become a fixture on film cameras in the form of a “video assist,” in
which a video feed from the film camera gives a low-resolution preview of what the final film
image will be as it is recorded.160
Flexible Storage
Video’s storage media offered practical benefits as well. While tapes came in various
lengths depending on the video format (e.g. Betacam vs. MiniDV), nearly all offered recording
times much longer than a reel of film, which generally maxed out at 11 minutes per magazine.
Standard MiniDV tapes, for example, ran 60 minutes. Alfred Hitchcock famously pined for the
possibility of longer takes when filming Rope (1948), but instead had to settle for breaking up
his one-take film through several hidden cuts. Alexander Sokurov and his cinematographer,
Tilman Büttner, needed no such concessions when shooting Russian Ark (2003), which ran 96
159
Jane Feuer, “The Concept of Live Television: Ontology as Ideology” in Regarding
Television: Critical Approaches – An Anthology (New Brunswick: Rutgers University Press,
1983).
160
This preview was generated by splitting light from the film camera’s lens to a video sensor
installed in the film camera, forming a sort of video-film hybrid device.
minutes in a single take of video. Non-standard production techniques aside, capturing long-form
events (like weddings or concerts) was possible with fewer interruptions due to swapping
magazines. Video’s format as a tape-based medium was also particularly user friendly, especially
compared to higher-gauge film. While 8mm was available in pre-sealed magazines, 16mm and
35mm generally needed to be loaded by hand, and in partial or complete darkness. It was a
difficult process and mistakes were costly both financially and creatively, as unwanted
exposure could ruin much of a magazine. Inserting a video tape and pressing record was far
simpler than fumbling about in a changing bag and, again, the feature would have been useful to
amateurs and professionals alike in the form of creative security and time savings. The same was
true for the ability to turn the camera itself into a playback device. While film required time to
develop and a separate projector and screen, a video camera required only a single cable to play
back footage on a television (or even via the camera’s viewfinder itself on set).
While utility itself was no doubt valued in the home video market (advertisers included
the phrases “easy” and “simple” as much as “high quality”), in other circles many of the
particular functional benefits of video as a shooting medium were overlooked in the striving for a
higher-quality image. While cinematic imagery was a visual marker of cultural esteem, the
physical convenience of shooting with a smaller, easier camera did not carry such cultural
weight. If anything, the smaller cameras only made one’s production look less legitimate (as I
was reminded every semester by my students, upon learning they’d be shooting on the small and
visually unimpressive Canon GL-2). Across the digital age, though, as image quality became less
of a restriction, the unique properties of the video camera came to be utilized in new and critical
ways through developments in its digital functionality. In addition to its benefits in cost,
transmission, and storage, there developed new forms of utility unavailable to both analog video
of previous eras and to analog film in contemporary ones. As my focus is on the digital age
specifically, the primary question of this section of my dissertation is: how did the video camera,
and the act of videomaking along with it, change as video itself became digital? With image
quality, changes occurred in a series of steps over the course of several decades. These steps
were primarily manifested in quantitative increases and occasional decreases (as with frame
rate), but all represented a progression toward a qualitative reading that was closer to the
cinematic standard. If image quality changed in a series of steps, digitization of video was
effectively an on/off switch, much like digital code itself. Later digital cameras were not “more
digital” than early ones – a camera either is digital or it isn’t. What did change as cameras
evolved in the digital era was the manner and ease with which this digitality could be exploited,
and that evolution is the primary basis for this discussion of “utility.”
In looking at a video camera’s usefulness and its ability to be used in the digital era,
several considerations need to be made. First, questions of the material. While digital images are
most often characterized by their lack of materiality, the absence of a material storage medium
made room for several physical changes in the cameras that produced them. Ergonomics, design
(practical and aesthetic), weight, and size all underwent radical changes as the imaging
technology was made increasingly smaller. Historically, one of video’s advantages as a medium
in practice was its portability, especially after the popularization of the video cassette tape. If a
lesser image quality was an acceptable tradeoff, the video camera’s lack of heavy magazines,
media, and mechanics allowed it to be carried to places and put into positions that alternative
media formats would prohibit. While digital tapes and cameras were comparatively even smaller
and more portable than their analog predecessors, when cameras transitioned to drive storage and
the tape housing was removed, the camera body became flexible in ways it never was before,
yielding everything from GoPros to camera phones to spherical cameras. While the videographic
image is often theoretically and practically positioned alongside the film image, advancements in
physical design allowed for imagery that was wholly unique to the digital form.
Along with these changes in the physicality of the camera came considerable cost
savings. In prior eras, much of the savings were due to the lower cost of recording media (tape
stock vs. film stock). Video cameras in the 80s were still quite expensive, and the process of
video editing to a professional standard required a standalone system that was prohibitively
expensive as well. The use of analog video was a possibility for the professional spheres, but
videography remained a costly endeavor well into the 1990s. That discrepancy changed
considerably in the digital era as material and manufacturing costs dropped with the use of the
new technology and camera prices themselves dropped accordingly, with the biggest
repercussions felt in the amateur market. In short, more people could afford to shoot video, and
those people could afford to do so more frequently.
While digitality was exploited physically and financially, the biggest shift in the practice
of videography came when digitality was exploited… digitally. As analog video signal changed
into digital video code, once-mechanical cameras were increasingly computerized and
incorporated into a wider digital network. The video camera’s transition from analog to digital
marked the birth of the kino-computer or, as D.N. Rodowick called it, “a computer with a lens
input.”
161
The computerization of the camera yielded the potential to automate processes, to
manipulate images within the camera, and to store images on hard disk. But, as with the
computer itself, the ability to bridge the gap between devices and network with other kino-
computers represented a paradigm shift in the practice of videography as a form of
161
Rodowick, “Virtual Life,” 121.
communication. The camera became a window to other homes and offices via Skype, a tool of
live broadcast via Facebook, and a content hopper to a widespread distribution network, replete
with commercial and financial potential. Even considering live TV broadcast, never before had
the camera itself been tied so directly to capital, funneling images through a web of advertising
in real time.
My analysis of the video camera’s evolving utility as a tool of image-making is divided
across three subsections, all of which track specific digital exploitations and speak to their
ramifications on the practice of videography across all spheres of production. In this first, I
approach the binary switch from analog to digital video and address the ways in which the switch
from index to code was manifested in digital video’s medium-specific properties and its new
potentials as a digital object. In the following subsections, I address two significant physical
changes in the cameras themselves: the shift from tape-based recording to disk-based recording
in the second, and the consolidation of camera and computer in the form of the smartphone in the
third. While the shift from index to code opened up new opportunities for content manipulation
and pathways to distribution, there existed a chain of devices and labor between the camera and
the act of filming, the computer and the act of image manipulation, and the viewing device and
the act of viewing. The developments in camera utility across the digital era brought the camera
and the computer closer together, shortening the chain and ultimately eliminating it. The video
camera went from being just a camera, to a computerized camera, to a camera-ized computer. As
such, the practice of videography across all spheres of production has increasingly become
subsumed into the logic of the computer – both in that it has become a process of data gathering,
but also in that it is performed with the same ubiquity and communicative aims that one might
use a modern smartphone.
As I argued in the previous section, much of video’s cultural capital as a medium lies in
the image itself. Yet, the ability to harness quality is only useful inasmuch as one can get their
hands on it, manipulate it, control it, and distribute it. Much of the revolutionary potential in the
video camera’s increased image quality was tempered by the pre-existing cultural boundaries
between amateur and professional, haves and have-nots, artist and corporation. As high quality
video cameras have proliferated across all spheres of production, cinematic quality has been
diluted by its increased quantity. What, then, of the quantity revolution? In turning away from
questions of imaging – that is, the manner in which the video camera did the “same things” – and
toward the camera’s computational capability in the digital era, I suggest that the revolutionary
potential of the ubiquitous digital video camera lies not just in the act of shooting and recording,
but in fully exploiting its computational nature – leaping the obstacles that once stood in the way
of distribution now that the means for distribution are literally in one’s hand. The video camera
is no longer just a window to the world, but a window to other windows.
The narrative of the media giant-slayer, in which producers challenged the visual
authority of cultural elites directly through attempts at cinematic passing, was predicated on the
notion of singular media flows that had to be interrupted. It was based on using the video camera
as a tool of media-making. In the later digital era, the video camera has become not just a tool of
producing, but of editing, manipulating, and spreading information. The ability to shoot and
distribute video on a personal level does not wrest control of the major media flows from the
entities that wield it. Instead, the proliferation of videography as a form of communication and
digital community-building enables alternative flows of information. The digital revolution in
quantity is likewise not an overthrow, as the giants are still around and still capitalizing off these
newfound communicative flows. The migration of the act of videography from a specialized,
hobbyist, masculinized, capital-dependent practice of media-making to one of widely-available,
common, and necessary computerized communication suggests that potential lies in turning
attention to this new form of practice – nascent videographic literacy.
Digitizing Video
digital longs for growth, variations, and transformations. it begs to
regenerate itself, to be on the move, to engender manifold
identities. in short it aspires to a state of permanent
metamorphosis. to sum up: film halides were expected to respond
twice to outside forces: light first, the developer next. after that
they were done. game over. with digital by contrast, millions of
pixels are primed to burst again and again at any moment. the
brewing magma cannot wait to gush forth. let the feast begin!
- Jean-Pierre Geuens
162
Filmmaker and cinematographer Jean-Pierre Geuens’s Vertov-esque excitement for the
wonders of the digital video camera speaks to many of the potentials of its newly-coded
information and, in its phrasing, makes the analog seem archaic. His language calls attention to
mutability (“variation,” “transformation,” “manifold identities,” “metamorphosis”), proliferation
and movement (“growth,” “regenerate,” “on the move,” “again and again”), spontaneity and
excitement (“at any moment,” “cannot wait”), scale (“millions of pixels,” volcanic imagery),
and, interestingly, desire (“longs,” “begs,” “aspires”). He paints a picture of a medium that is
drastically different from its analog predecessors – it wants to move and expand at a geometric
rate. The video camera became the tool that realized the digital medium’s aspirations. The act of
videography became one that hinged on digital video’s existence as data and in three specific
areas of practice: transmission, communication, and mutability.
Transmission
Because data can be transmitted nearly instantaneously, digital video retains the potential
for liveness. For this reason, early digital video cameras were capable of co-mingling and
162
Geuens, “Ephemerality,” digital filemaking,
https://rethinkcinema.com/digitalfilemaking/ephemerality/. Stylized lack of capitalization is his.
replacing analog cameras in their once-unique place within the various incarnations of live video
practice, from sports and news broadcast to CCTV to reflective installation art, seemingly with
no change. Yet, because digital video exists as data and not merely as an electronic signal, its
capacity for transmission in nontraditional formats increased dramatically. With an analog video
camera, the ability to transmit a signal is limited by cabling or by broadcast power. For an
individual in the analog era, broadcasting via the airwaves was practically impossible, as the
equipment to transmit in that fashion was prohibitively expensive and, as such, that individual’s
video signal (and their content) was restricted to transmission via a cable (e.g. one connecting
their camcorder to their television or a CCTV network). For most practitioners, the videographic
medium was largely defined by its restrained, self-contained nature.
While early analog video practitioners and activists in the 1960s were energized by the
revolutionary potential of affordable broadcast tools, they were quickly checked by the very real
physical and material limitations that stood in between the video collective and the masses.
163
The product of the video camera was limited to a single location unless one chose to go through
the cultural gatekeepers that held control over the means of broadcast (though there do exist
some incredible examples of attempts to circumvent those gatekeepers, like the infamous “Max
Headroom” TV hack in 1987).
164
The physical, material limitation of signal transmission was
one of the things that separated the modes of “video production” and “television.” My home
video was restricted to the home, unless I sent my tape to America’s Funniest Home Videos, and
they chose to broadcast it, in which case that same video content became “television.”
163
Tripp, “Participatory Practices.”
164
The still-unsolved mystery involved the offending party overpowering the broadcast signal
for Chicago’s local WGN and WTTW stations and broadcasting an absurdist parody of the then-
popular fictional TV host Max Headroom.
As video became digital, the potential for transmission became less linked to the material
limitations of the electronic signal. As data, I could still store my video on tape, transmit it via
cable, or broadcast as I could with analog video. However, because digital video is essentially
just a series of instructions for a playback device, one of its primary properties is that it can be
duplicated without degradation, theoretically a near-infinite number of times. Copying film
requires physically putting two pieces of film in contact with each other. Making 1,000 copies of
a reel of film will require 1,001 reels of film. However, if I provide 1,000 computers with my
digital video’s “paint by numbers” instructions, they can all create the same video, exactly as my
camera encoded it, exactly as it “described.” The mechanism of transmission is drastically
different from analog media, but the results are the same: I send something to someone, and they
can watch it. The difference is, I no longer need a prohibitively expensive and sophisticated
piece of technology to do it. Unrestricted by the material and temporal limits of real-time
transmission, my video could be sent (e.g. via email) to as many people as I wanted for their
download.
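The lossless duplication described above can be demonstrated in a few lines of code. The following Python sketch is a hypothetical illustration (the file names and random bytes stand in for actual encoded video): it makes 1,000 copies of a “clip” and verifies that each one is bit-for-bit identical to the original, with none of the generational loss of tape-to-tape dubbing.

```python
import hashlib
import os
import shutil
import tempfile

def sha256_of(path):
    """Fingerprint a file's exact byte content."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

tmp = tempfile.mkdtemp()
original = os.path.join(tmp, "clip_original.dv")
with open(original, "wb") as f:
    f.write(os.urandom(1024))  # stand-in for encoded video data

# Make 1,000 copies; every copy's hash matches the source exactly,
# because copying data reproduces the "instructions" without degradation.
for i in range(1000):
    copy = os.path.join(tmp, f"clip_copy_{i}.dv")
    shutil.copyfile(original, copy)
    assert sha256_of(copy) == sha256_of(original)
```

Each generation of an analog dub, by contrast, re-records a signal and accumulates noise; here the thousandth copy is indistinguishable from the first.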
In the early days of digital video, this new form of transmission was not fully live; the
video that my camera produced had to go through a period of “distribution” in order to be
accessed, especially in the comparatively molasses-slow era of dial-up internet. Still, the switch
to digital represented a tremendous change from the difficulty of transmission in the analog era.
Over the next decade, with the popularity of hosting and streaming services like YouTube, my
video could live semi-permanently on the internet to be accessed by an audience of my choosing,
from private individuals to a truly global one. As internet speeds have increased in more recent
years and as cameras themselves have become linked directly to the internet (as in the
smartphone), the potential for true liveness via the internet has become available to all spheres of
production. Call it: “liveness 2.0.” An individual can activate their phone and transmit live video
to a site like Facebook, providing real-time content to anyone who chooses to watch. While still
unlikely, such a practice potentially allows an individual to broadcast live to an audience as large
as that of a major TV network (or even larger, considering Facebook’s global reach). Thus, a
digital video practitioner could effectively do that which early video collective activists yearned
for.
Liveness was always available in a limited sense, as in the case of reflective gallery
installations or CCTV surveillance. The profilmic event was beamed through the cable to the
monitor for an audience to observe. Live mass broadcast, in the mythic sense, required digitality
in the absence of capital. In Jane Feuer’s oft-cited characterization of liveness on television, she
discusses the ways in which television practice was rarely marked by true liveness, and the
public conception of television as a live medium largely existed as a self-perpetuating and
industry-reinforced myth.
165
The liveness enabled by the digital video camera capitalizes off of
the myth in ways that were impossible or difficult in the analog era. The dangers and challenges
and headaches and inefficiency of live broadcast for networks – those factors that often
precluded its use – are all but gone for an individual’s live broadcast, as the latter falls more in
line with personal journalism and interpersonal communication – modes which, unlike much
network television, capitalize off of and often require liveness. Thus, the function of liveness-as-
myth for broadcast television becomes less mythic and more actual with individual transmission,
as an individual controls the link between the profilmic event and a seemingly limitless number
of viewers, but without the overwhelming burden of capitalistic success restraining their hand
165
Feuer, “Live Television.”
(though those major media companies are certainly still benefitting from it financially, via the
hosting platforms on which live personal video lives).
Communication
Beyond the act of transmitting footage or movies, the greatly-eased transmission of
image and sound situated acts of videography more squarely into the mode of interpersonal
communication. The degree to which the video camera was a communicative device in the
analog era, broadly speaking, varied with practice. Looking across all three spheres of
production, large studios produced content with video cameras that was, in many ways, similar
to film, or any other representational medium – fiction or nonfiction programming that was
displayed on television. Corporate advertising used the video camera as a direct form of
communication, most obviously through the direct address of infomercials and commercials, a
direct line between marketing departments (big and small) and consumers. In addition to these
largely pre-recorded forms, the communicative potential of the video camera was clearest in
instances of live, direct address, whether it be news (both the low-budget local or big-budget
evening programs), sports, or government address. These instances were ones in which the video
camera was an interpersonal link, featuring a person or people speaking to another or others in
real time. Because these practices were live, they were again limited by the hindrances of analog
technology and the financial costs associated with them. They were, by their design, generally
one-way flows, and largely between more powerful media entities and those individuals with less
power. With the digitization of video and the newfound ability for the individual to transmit their
video via internet upload, streaming, and fully live digital video, the one-way nature of that
transmission was challenged as videography became more of a participatory practice.
In the analog era, both direct and indirect communication via video camera for
individuals was limited and specialized. Home video was uniquely communicative, inasmuch as
it was largely an archival practice, and therefore functioned as a form of self-communication – a
kind of time capsule. Beyond the self, the personal video camera has functioned as a direct
communicative device in previous eras, though in very specific ways. Thinking through the
tropes of fiction filmmaking, there is the videotaped message (anything from the will of a dead
relative to a kidnap victim’s plea to a videotaped confession to the birthday wishes of a distant
relative to an art installation), all of which involve the recording of video and sending a physical
tape. Real-time direct communication occurred in even more specific circumstances, as with the
video surveillance intercom (to monitor the entrance of a secure location) or the corporate video
teleconference with the top brass or the satellite uplink from soldiers in the field. In all of these
and other video communication tropes (aside from the fully fictional, sci-fi uses), the use of the
video camera as a communicative device is largely presented as a rarity; it is not a daily
occurrence, but a specialized use often precluded by labor, expense, or infrastructure (or all
three). With the digitization of video, all three of those preclusions were largely mitigated. The
act of shooting video and making it remotely accessible became so easy that it did become casual
and quotidian and, like with conventional broadcasting, existed in both direct and pre-recorded
forms.
The act of videography as a pre-recorded interpersonal communicative practice is clear
on a site like YouTube. The vlog serves as the most direct form of pre-recorded content, as it
generally features a direct address to the camera. The camera itself, though, functions differently
than it might in the analog era. Direct address in interpersonal practice (like home video, for
example) often sees the camera itself become a stand-in for someone else. It is not uncommon
for a wedding videographer, for example, to ask party guests to make a statement directly to the
newly married couple, by way of the camera. In home video footage, the camera often becomes
an extension of the user, wherein the videographer’s voice directly (and comparatively loudly)
addresses the video’s on-screen subject, who addresses them back through the lens of the
camera, as if the technology were not there. In the latter instance, the camera functions as a tool
through which to preserve the in-person interaction, and in the former, to link two parties
separated by time and space: the video subject and the married couple.
With the vlog, which is generally a public or semi-public document, the camera itself
becomes a stand-in not for a specific individual, but for a wider public. Here, the camera serves
as a conduit to internet viewing; it becomes a device through which to speak to an audience. The
way that the camera is viewed culturally as a material object, then, is quite different. In the
analog era, it was a device of preservation, or of limited communication – it was a direct line. In
the digital one, because its footage is transmissible data and thus presents the potential for nearly
limitless distribution, the personal video camera becomes a tool of mass broadcast, regardless of
a camera’s intended use. This point was made clear to me when a student recently described how
now, when he sees someone shooting video with their phone, he sees the camera not strictly as a
recording device, but as a window onto an audience. That is, the camera becomes a stand-in for,
essentially, everyone. This belief on my student’s part was not due to the knowledge of how
digital video functions as a technology – he had no knowledge of analog vs. digital, index vs.
symbol. It was simply an observation of practice. Jarice Hanson argues that because analog video
has the potential to be live, and because pre-recorded video looks identical to live video, we tend
to see all video as coded as live, at least in part.
166
I would suggest (as would my student) that
166
Hanson, “Understanding Video,” 27.
the same is true for digital video, but on an even greater scale. As the camera has moved from a
tool of preservation to one of mass communication, every act of videography occurs with the
knowledge that its content has the potential to be transmitted.
Beyond the act of transmission itself, the camera likewise functions as part of a larger
multimedia communicative process. Again, taking the vlog as an example, the video rarely lives
solely as a video, but most often takes the form of a video embedded within a larger system of
community interaction, as on a YouTube page. The video acts as a conduit between an individual
and an audience, but it also functions as a catalyst for broader communicative processes. A
posted video can inspire responses (also created via video camera) of a variety of types too
numerous to cover here: observational commentary, praise and/or hatred, parody, direct address
communication, etc. One viral video that I myself shot received a “harmonizer” treatment, in
which a jazz trio replicated and accompanied the subject’s speech patterns with a guitar, bass,
and drum performance.
On a less grand scale, a video posted on YouTube or Facebook more frequently elicits
text responses (of a variety even more numerous), as well as simpler digital binary responses
(thumbs up/thumbs down, like/dislike), or often no response at all (itself a readable response).
These various acts of response can and do occur whether a video is direct address or not. Thus,
regardless of a video’s original intention, upon posting it becomes a catalyst for communication
(and often public communication) in digital forms that did not exist in the analog era and on a
scale that was restricted to those with capital. A video’s nature as digital data becomes clear
when it exists on a public website along with other remediated forms (like written text), and
interacts with them freely. A direct address video might be posted, receive critique in the form of
dislikes, then receive a supportive response in the form of another direct address video, which
itself will then be parodied by a fictional video only to receive critique in the form of text
comments. In these multimedia spaces of digital communication like YouTube, the video camera
acts as a tool in the creation of conversation, a catalyst for dialogue, and a device for bringing the
private into the public space. All of these things were true for larger spheres of production
before, but on this scale at the personal level, these practices are new.
The video camera has become additionally communicative on a private level as well.
While expensive and/or experimental versions of video chat can be seen in various incarnations
all the way back to video’s invention as a medium, the conversion of the video signal to data
allowed video to be transmitted along a drastically longer line much more conveniently. Analog
versions of video chat would have required televisions linked via dedicated cable or
satellite, but the potential for digital transmission via computers and, even more so, smartphones
has made video chat a casual practice. If I have a smartphone, a video call is as free as the Wi-
Fi.
Just like the potential for pre-recorded transmission, the ability to exploit digital video’s
liveness changed the way in which the camera itself was used and viewed. For larger media
organizations, liveness was obviously crucial for sports and news, and was likewise exploited
amongst the avant-garde videomaking community as a unique property of the medium. But for
most of video’s uses amongst all spheres of production (and especially for interpersonal use), it
was strictly a recording medium. For an individual, as discussed, transmitting live was a
logistical difficulty that made live communication especially unlikely. Through a program like
Skype, the video camera itself became a device with which to reach another individual or group
of individuals. It remained a tool, but one of a different sort – it was no longer just a capture tool.
With the invention of the webcam and laptop camera, the video camera itself began to function
more like a phone, hinting at the inevitability that the phone and the video camera, once distinct
devices for distinct purposes, would merge into the same device. The phone was a tool strictly
for direct communication. The video camera could be, but only inasmuch as its liveness could be
exploited. Digitality put the means to exploit liveness into the hands of individuals across all
sectors of the population. As a result, the camera’s physical linking to the phone symbolically
completed the camera’s transition from functioning primarily as a recording device to a more
transmissive one. While theorists in the past discussed video as a medium of the present and film
of the past, in practice video’s presentness was only fully and widely exploitable when it became
digital.
Mutability
The third property resulting from video’s digitality, and one frequently discussed by
critics, is its mutability. To return to Geuens, he specifically calls attention to digital video’s
morphic properties, celebrating their potential in creative practice and reveling in their apparent
ability to activate almost on their own. Others, scholars and filmmakers alike, were more
hesitant to embrace the digital’s chameleon-like tendencies. Films like
Terminator 2 (1991), The Lawnmower Man (1992), and The Matrix (1999) used visual morphing as
an on-screen manifestation and demonstration of what digital technology could do at the
invisible level, and also staged through their narratives many of the anxieties reflected in the
critique of digital culture. Many of the more dystopian philosophies on the subject involve fears
of reality itself becoming subsumed by artificial images through digitality, the basis for which in
practice comes from the idea that the digital image is more easily manipulated than the analog
one. As critics like Tom Gunning argue, the photochemical image was always able to be
manipulated and forged, and the truth claim of the photographic image is merely that – a
claim.
167
That said, image mutation in the analog era required specialized knowledge, skill, and
labor, while the digital image can be manipulated and doctored on my phone. Some apps, like
Snapchat, demonstrate this potential uncannily well, transforming my face into an anime
character in real time. What makes the digital especially plastic is the automation of processes.
To change a filmic image into black and white would have once involved a physical act of time
and labor. That time and labor marked the image transformation as a process. Now, I can merely
click a button on my phone and I have a black-and-white digital video. Or a sepia-tinted one. Or
one in which I have dog ears. It’s not that there is no longer labor required, but that the labor is
done by programmers, and then subsequently outsourced to computers. And it is very much not
performed by me in any sustained way.
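The automation described above can be made concrete with a small sketch. The following Python function is a hypothetical illustration, not any actual app’s code: the “one click” black-and-white filter reduces, under the hood, to a per-pixel weighted average of red, green, and blue (here using the standard ITU-R BT.601 luma weights).

```python
# A minimal sketch of a "one click" black-and-white filter.
# ITU-R BT.601 luma weights: human vision is most sensitive to green,
# least to blue, so the channels are not averaged equally.
def to_grayscale(frame):
    """frame: rows of (r, g, b) tuples -> rows of single luma values (0-255)."""
    return [
        [round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
        for row in frame
    ]

# A tiny 2x2 "frame": red, green / blue, white.
frame = [[(255, 0, 0), (0, 255, 0)], [(0, 0, 255), (255, 255, 255)]]
gray = to_grayscale(frame)  # pure red -> 76, pure green -> 150
```

The point is that the arithmetic runs instantly and invisibly; the labor of the transformation belongs to the programmers who wrote the formula and the machine that executes it, not to the user who taps the button.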
Discussions of the mutability of the digital (digital cinema especially) again tend to
conflate the various digital processes together. Discussions of digital plasticity stem largely from
the digital effects work that made that mutability manifest on screen in the 1990s, wherein
photochemical film images were digitized and ingested into a computer, where a variety of
animations were carried out. Suddenly and startlingly, real-life actors and digital dinosaurs were
sharing the same screen – sharing the same medium, made of the same (non)-stuff. Digital
dinosaurs and digital audio and digital cameras were all mentioned together in the same breath
by skeptical critics. While such overarching discussions of the digital can be beneficial in
identifying and characterizing the early digital zeitgeist, they again can be overly reductive when
it comes to practice. The act of digitally mixing an album and animating digital dinosaurs and
shooting with a DV camera are all distinct, as individual practices and in their relation to broader
167
Gunning, “What’s the Point of an Index?,” 42.
structures and histories of art and industry. With the video camera specifically, what is lost in the
conflation is the manner in which the newfound sense of control over the image transformed the
meaning of the medium itself. In addition to expanding beyond recording to become a fully
transmissive medium, digital video also became an edited medium in ways that it was not before,
both in the ability to manipulate individual image textures and images in relation to one another
at frame-level. This sense of control was especially noteworthy for home video users, for whom
postproduction control of any sort was a rarity in the analog era.
The analog camera could perform several image-altering tasks, all of them mechanical or
electronic. Not unlike the film camera, the analog video image was subject to focus, zoom, and
aperture alteration, all image mutations that are surely as significant as their digital counterparts,
but are oft-ignored through their normalization in both film and early video practice. More
video-specific alterations came in the form of adjustable shutter speed and electronic gain, and
less common features were camera dependent, including fades, image tinting, cropping, and
character generation. In the later analog era, the gap to the digital was partly bridged, as the
camera menus themselves were often digitally created, and features like on-screen titles were
often computer-generated or -augmented, even though the videographic image itself was not yet
digital.
In the first crop of digital video cameras, the mutability of the image was generally not
exploited in any particularly novel ways. Even though the image was readily manipulable, many
of the features offered by digital cameras were identical to those in the analog era. As the camera
itself in the early digital era was still essentially just a capture device (prior to networked
smartphones), most of the camera functions were preserved either mechanically or as digital
remediations of ones that previously existed. Here was Elsaesser’s “business as usual” on the
material side. Comparing the manuals of the Canon XL-1, a digital camcorder, and an analog JVC
VHS-C camcorder of the same era shows fairly few differences in the potential for image
alteration. Because the processing power of the on-camera computers was quite limited, the
extent to which any sort of real-time digital processing could be performed was also limited.
There was, however, one momentous change with the digitization of video in a camera’s
images’ ability to be manipulated – the digital output port.
Analog cameras had analog output ports, and in the 1990s, these were usually in the form
of RCA jacks (the red/white/yellow ports common on many devices of the era that would output
video and audio to another device) or, in more professional cameras, BNC jacks. The receiving
device was most commonly a playback monitor of some sort – either a professional monitor or a
home television. Output cables could also be hooked up to tape decks, ranging from professional
and expensive Beta models to low cost VCRs, with which linear editing could be performed with
varying degrees of precision. When video became digital, these analog ports generally remained
on the cameras for the purpose of playback. In this case, digitally recorded footage would run
through a digital-to-analog converter (DAC) and then out through the analog jacks in order to be displayed on a
monitor. But most digital cameras also had some type of digital output as well – the most
common being Firewire (for Mac users) or USB. With these ports, the digital data could be
transmitted not just for playback, but for storage on a computer.
In the early digital era, transmission to the computer occurred in real time, as the media
was still stored in a tape-based format (like MiniDV). Computer software would be set to
digitize, awaiting camera playback for the footage to be ingested. The process allowed the
camera to be linked to digital computer files for the first time (at least within the popular
sphere).[168] If I digitized my camera’s footage with a program like Final Cut Pro, I could look in
my computer’s designated capture folders and see Quicktime files of the footage that I just shot.
I could double click them and watch them at will, and in any order. I could throw some out, and
save others. I could email them to a friend. I could import them into various software programs.
It is difficult to overestimate the significance of this first linkage between camera and computer;
the Firewire port became an outlet to a completely different form of working with video and, for
the home video user, the video camera became a tool of an entirely new sort – it was a bridge to
the postproduction process.[169]
As was the case with the analog video image (and the early digital one), control over the
product of one’s videography was not something that was equally available, convenient, or
affordable across all sectors of production. Big-budget projects in the analog era had access to
costly editing equipment with a fair degree of precision, as did some low-budget productions.
For those without, though, editing video (in any sense) was not something that was always
conceptually linked to videography. This was especially true for home video users, who could
record and play tapes back easily, but editing footage would require the assembling of a crude
linear editing system, in which a camcorder would be hooked up to a VCR. Footage would
be played on the camcorder one shot at a time and recorded on the VCR to the best of one’s
ability. Across all of these setups, from expensive suite to home VCR, editing was linear,
meaning that I would need to play my media in real time and in the designated order on one
setup while recording on the other. Even in an expensive suite, it was laborious, time consuming,
and limited in its application. When video became digital, it became subject to random access.

[Footnote 168: Big-budget production shot on analog video could be digitized to an editing program like Avid when it was first available in the early 90s, but such practices were likely beyond the budget of smaller productions.]

[Footnote 169: Postproduction was always a possibility with video, of course, but as I lay out in the following paragraphs, possibility and actuality were quite different.]
While capture and digitization still occurred in real time, once footage was on a computer I could
access any part of my footage at any given time, making video a much easier medium to edit,
and with significantly improved control and accuracy. Given the great difficulty and imprecision
of video editing for home practitioners in the analog age, I would even go so far as to argue that
the shift to digital marked the point at which video became an edited medium for that group, at
least conceptually.
The history and phenomenology of digital video editing is beyond the scope of this work,
and not entirely relevant, but a brief discussion is needed in as much as it relates to the practice
of videography. In short, when video increasingly became a thing to be edited accurately, the
way in which cameras were viewed and used changed accordingly. The “stuff” that the video
camera produced became something fully malleable, and at all levels of production. Digital
video editing in general was popularized by non-linear systems like Avid in the mid-90s but,
much like the analog video editing suites of previous eras, it was prohibitively expensive to use.
The software and hardware combination was beyond the purchasing power of most individuals
or small companies, as often was the cost of suite rental and hiring an operator. Even in smaller-
scale operations, before cameras and computers came equipped with digital outputs/inputs, the
cost of a digitizer (used to get footage into a nonlinear editing system) was an additional
prohibitive expense.
At the higher end of the production hierarchy, the adoption of Avid no doubt changed the
editing industry tremendously, with many film-based projects benefitting from the convenience
of video editing. As earth-shaking as digital editing’s influence was for film editors and their
labor, it didn’t necessarily fundamentally change the way that cameras were used at the
professional level; the product of a film camera was always something to be edited (beyond more
experimental uses). For low-budget productions, with the advent of small-suite programs like
Adobe Premiere and Final Cut Pro, the change in videographic practice that accompanied the
new switch to digital editing was more impactful. No longer was there necessarily need for an
expensive editing suite, as a company could invest in a powerful computer and edit within the
confines of their office. For a company that produced wedding videos, for example, the ability to
edit was always a requirement, but it was a process that was a difficult and costly element of
their operation. The growth of the digital nonlinear editing system resulted in savings in time and
money, both critical for a small company’s survival. The same was true for low budget
filmmakers. Many of these likely would still pay to edit in an Avid suite if the budget was
available, but makers of smaller low-budget or no-budget productions, opting to shoot on new
digital technology, would be able to potentially cut the films themselves, either as do-it-all
filmmakers or by hiring individual editors who themselves were benefitting from the shrunken-
down postproduction scale to run all-in-one editing businesses. But even in these low-budget
circles, video cameras always produced footage that was meant for editing. It was in the home
video sector most of all that the popularity of nonlinear editing fundamentally changed the nature
of cameras and camera use. For home video users, the camera did not produce “stuff” that you
“did things to;” the act of videography largely stopped with production. The rest was exhibition.
In the late 90s, as video cameras in the home became digital, computers began to come
equipped with Firewire inputs and bundled with video editing software (like Apple’s iMovie or
Microsoft’s Windows Movie Maker). Getting one’s footage into the computer was still
something of an acquired skill. Given that the practice of analog home videography was rather
limited in its post-production practices, and “exhibition” was limited to plugging a camera into a
TV to watch one’s footage on a bigger screen, the act of hooking a video camera to a computer
to do anything at all was a rather foreign one. Digitizing footage generally required making use
of the computer’s software, which was likewise a leap in practice, as in the late 90s home
computing was still relatively young. Still, for those with the technical know-how, the home
video camera could be connected directly and synced with a computer to such a degree that, with
Firewire, the camera’s playback could be controlled remotely by the computer itself – a process
that seemed equal parts cool and creepy when I first experienced it in the early 2000s. By
operating my computer, the camera would respond – an action foreshadowing the degree to
which the computer and the camera would be fused in the coming years. For now, though, the
process was still quite messy (literally). When the software was set to capture, the camera’s tape
deck would spontaneously whir to life, playing the footage to be captured by my computer. This
complex act, requiring a camera, tape, cable, PC, software, hard drives, and power adaptors, all
as unique elements in a physical chain, resembled a mad scientist’s lab when laid out on one’s
desk. It required technical knowledge, hardware, software, time, patience, and a surprising
amount of space. While all of these practices were commonplace in professional production
circles and mitigated by necessity, the process of video editing was out of place physically and
practically in the home video market.
For those home videographers willing and able to deal with the challenges, there was a
considerable amount of increased control over one’s images. This sense of control was made
even more accessible with programs like iMovie and Windows Movie Maker, which came
standard on new Macs and PCs, respectively. That act of standardization is telling in itself. With
the development and release of iMovie, for example, Apple concluded that video editing was
enough of a draw for users (or at least would be in the future) to be included with their base
model computers and be utilized as a selling point for the hipness of their brand. A predictor of
(and partly a cause of) the proliferation of digital video editing, iMovie’s ubiquity was an early
step in the act of video editing becoming a normalized, casual, even daily activity. Compared to
a more complex editing program like Avid, iMovie was intentionally simplified to appeal to the
casual user – once again repeating the pattern seen in camera image manipulation of removing
complex controls in order to provide a more simplified version of it elsewhere. Audio tracks in
iMovie were condensed and hidden, for example, to allow for eased video editing without the
worry of accidentally deleting one’s audio or moving things out of sync. For more accomplished
video editors, the simplified “idiot proof” design was maddening, but for those looking to gain
some control when there was none before, iMovie opened up an entire creative practice to
individuals who may not have otherwise experimented with it, either in the analog era or with
more complicated prosumer software like Final Cut Pro or Premiere.
Once the footage was digitized in the computer and imported into manipulation software,
the practitioner could perform a great number of operations, many of which again featured a
fairly steep learning curve. Footage could be organized in folders, separated into clips, and
placed into a timeline, much like it would be in the days of amateur film, albeit with the process
simulated digitally. Somewhat ironically, amateur film users might have an easier time working
with digital editing software than analog video users because its interface was so clearly based
on the process of film editing (e.g. calling the digital folders holding footage “bins”). To that
point, one of the most striking additions to the act of videography across all sectors of production
was the ability to access the frame.
In the analog era, the video recorder never produced a true frame, but this was largely not
a practical concern because accessing the frame would have been impossible due to video’s
existence as a constantly running signal. As such, the editing of certain kinds of content in a
linear editing system was very difficult, as frame-accuracy was critical for making cuts of a
certain type. While live switching of multiple analog video cameras allowed consistency (as with
live TV), cutting pre-recorded footage from a single camera could only be done with so much
accuracy. With the advent of digital editing software, the video camera became a tool for making
frames; it became a tool that could produce footage for precision editing, and precise
experimentation. Cuts could quickly be “undone” with a keystroke, or slid around by a matter of a
frame or two with great ease. This was true for digitized analog film as well, but the frame was
always accessible in the film camera’s footage, even if the “undo” mentality was recent. For
videography, this frame production was all new. Across all sectors of videomaking practice, the
video camera became a tool that shot with increased accuracy – not because it shot any more
accurately, but because its footage could be accessed differently. This change in frame
accessibility alone made the video camera increasingly viable for fiction moviemaking. Video, in
effect, became a fully edited medium.
The newly acquired ability to edit with accuracy resulted in increased accessibility to
moving image making, as opposed to simply moving image shooting (at least for those tech-
savvy practitioners). While many of the tools and levels of control (editing specifically) that
were available digitally were always available to higher-budget sectors of production, the ability
to use a video camera to shoot footage to be edited in the amateur and low-budget sector was
indeed groundbreaking. In this dissertation’s previous section, I spoke of the cultural importance
of achieving a cinematic look in getting one’s movie past cultural gatekeepers. But getting one’s
movie to exist at all was often dependent on the nonlinear editing enabled by digitization. Even
the mode of documentary, which relied far less on the precision cut and continuity editing than a
fiction film, benefitted greatly from the switch to digital in the management of footage and the
trial-and-error approach allowed by nonlinear editors. Additionally, the parallel decrease in cost
among cameras, computers, and editing software meant that the same production company, and
eventually the same individual, could have in their space (production office or home) all of the
tools to create a movie. Again, the smaller the project (and the smaller the budget), the more
noteworthy the change. This consolidation of equipment and skills into a single individual was a
double-edged sword. While a freelance videographer could pitch herself as a one-stop shop,
providing both production and postproduction services and doubling her income, production
companies and clients could similarly expect that single individual to perform several jobs that
previously would have required the services (and salaries) of several individuals. The more labor
was consolidated, the easier it was to exploit.
The potential for editing affected videographic practice directly, touching even camera
operation. Videomaking guidebooks often speak to the amateur video user’s tendency to operate
the camera like a firehose, “covering” the scene in one long, continuous take. Selection of
content occurred within the operation of the camera itself. When digital editing became a
possibility, the act of filming for one-take coverage was, of course, still possible. But now that
the camera operator was frequently also an editor, that latter practice affected the manipulation
of the camera – knowing that pieces could be cut together later rather than strung together while
filming. For that reason, I would venture a guess (perhaps without any way of proving my case)
that home video camera operation in the digital era favored less frantic movements and shorter
shots as the act of shooting became increasingly one of footage-gathering as much as it was one
of coherent memory preservation.[170] In the analog era, shooting footage was itself the end goal,
as there was no postproduction process afterwards. When video became a thing to be edited, the
footage itself became the raw material for an edited video to be put on DVD or archived on a
hard drive. Those media were now the “home movie.” Forget the tapes in the closet.
The digital camcorder had become a link in a chain of digital tools, all designed around
producing an edited work and, as a result, there was an increased literacy in the creation of edited
motion pictures. Movies had largely been something of a one-way street for some time,
especially the further one moved down the production hierarchy. The ability to easily transfer
one’s footage from camera to computer and the ability to edit that footage meant that home
movies started to look more like capital M “Movies,” mirroring the development that was
occurring in image quality. Edited home movies certainly existed in the analog film era with
formats like 8mm, but the advent of user-friendly editing programs like iMovie, designed to
make video editing easy, hip, and fun (a process that was further cultivated with PC and Mac
fetishization), promoted amateur use beyond those possessing the specialized skillset required in
the analog era.
The video image additionally became a malleable one through the use of digital effects –
changing not the series of images, but the video image itself. Final Cut Pro, the prosumer Apple
editing software, featured a wide palette of applied effects, few of which were available in the
analog era, and certainly not for the price of the all-in-one software. A video synthesizer capable
of handling a handful of effects was itself thousands of dollars and many required professional
technicians to operate, compared to the great variety of digital effects that came bundled with
editing software for only a few hundred.[171]

[Footnote 170: I would also guess that this trend reversed again in the late 2010s with the practice of live streaming.]
Tools were available to alter color, to distort the
image, to manipulate the perceived texture of the image, to add titles, and eventually to produce
fairly complex computer generated on-screen assets (as with Adobe’s After Effects software). It
was through such software plugins that the “film look” was also first applied to digital footage.
Here, across all sectors of videomaking practice, the “stuff” produced by the video camera
underwent a transformation once again. Many of the alterations that were so celebrated as novel
additions to the film image as it first became digitized in the 1990s through the digital
intermediary process were increasingly available to the video image as well. In the computer, the
film image and video image became one and the same, and thus alterations available to film were
available to video as well.[172] Again, at higher-budget levels, digital manipulation was not nearly
as novel. The changes were most noteworthy when this heightened level of control, both of the
image itself and the images in relation to one another, became a possibility for the lower-budget
spheres of production.
Both for high-end professionals and lower-end amateurs, the malleability that came with
digitization changed the nature of the video that cameras produced, and the practice of
videography altered with it. While malleability is most often discussed in relation to CGI, and
the creation of synthetic worlds and digital dinosaurs, the malleability of digital video extended
into practices far more mundane than those. Just the act of being able to cut a sequence in shot-
reverse shot, or to create a montage of vacation videos, or to edit one’s audio tracks, was a
drastic development in practice.

[Footnote 171: Catherine Elwes, “Visible Scan Lines: On the Transition from Analog Film and Video to Digital Moving Image,” Millennium Film Journal 58 (Fall 2013): 62.]

[Footnote 172: This is a bit of an overstatement, as film was generally scanned at higher resolutions in the digital intermediary process than video, but with regard to image control, they became the same digital “stuff.”]

On one hand, digitization brought the video medium more in line with film practices. The control
over the image that was present in analog film editing was
extended to the video image, albeit in digitally remediated form. Such digitization opened the
video image and the video camera up to potential for practices that required the precision that
only film could provide previously. Yet, remediation was only half of it. Many of the new tools
for manipulating one’s image were uniquely digital and marked genuinely new digital production
practices. As such, the video camera was not merely given the utility of a film camera. Instead,
due to digitization, both the film camera and the video camera became tools for doing new things
and for developing new practices; digitization effectively made these “old” media “new.” This
newness came specifically from the fact that both media fell under the domain of a third – the
computer; both increasingly became devices for producing raw material to be manipulated
through computational processes. As the video camera became digital, as much as one might
view it theoretically as a device that closed the window between the screen and reality, in
practice, the window remained open – it lay between reality and the computer. At this stage in
video’s development, the technical chain between profilmic event and computer was still long
and complex, though that chain would shrink considerably with significant technological
advances I will discuss in greater detail in the following subsection.
A brief discussion of the relationship between film and video at this moment in history is
worthwhile once again. As film production increasingly went through the digital intermediary
process, and as video could easily be brought into a computer due to its existence as digital code,
the difference between film and video at a material level was eliminated in the computer. This
elimination was further doubled when films were no longer printed back to film after editing and
effects were complete, but instead were exported for exhibition in digital form, and it was trebled
when the image quality of video increased across the digital era, such that the perceived textural
differences between the media had been mostly eliminated. The result of the process of
digitization was ultimately that the differences between film and video lay almost exclusively at
the level of production, in the act of working with the camera itself.
To read the trade press in the early 2000s is to see frequent comparisons between the
video image and the film image (“New Video Camera Looks Pretty Good!” “Digital Video is
Catching Up to Film!”). Soon, these image-based comparative articles gave way to experiential-
based ones – interviews in American Cinematographer discussing the process of shooting with a
video camera, or producers in Filmmaker discussing the logistical pros and cons of shooting on
video. With digitization, the practice of shooting film became increasingly marked in practitioner
discourse by comments on the disadvantages of materiality and a nostalgic romanticization in
equal parts. Even for those who maintain that shooting on film and video remain fundamentally
different processes, both of equal value as art forms of material practice (and surely they are), it
is difficult to use phenomenology to justify cost. To return to Light Iron CEO Michael Cioni’s
assessment of the advantages of video, considerations of look and feel are quickly being trumped
by those of logistics: “If there are people out there, and there are, who want the picture to be the
end-all, be-all reason to choose a format to shoot on, they’re going to be the last guys at the
Alamo. Because in the world I live in, image quality is not the No. 1 factor. It is cost.”[173]
Working with film is more difficult and more expensive. When both film and video ultimately
enter a computer and exit as the same “stuff,” film seems a burden in every way but the romantic
ones. For scenarios in the past twenty years in which the choice between film or video was a
choice, it was almost always a decision regarding sacrificing convenience and cost for image
quality. With digitization and the meshing of media in the computer, that cost/benefit no longer
weighed in film’s favor. Economically, the digitization of video and, ironically, the digitization
of film, were the technological developments that sounded film’s death knell more than anything
else. By making these two media the same on the back end, their differences on the front end
were ever starker.

[Footnote 173: “Landmark Year,” Variety.]
Does Digital Matter… Matter?
I’ll conclude this subsection with a snapshot of the video camera in the year 2003. Video
across all sectors of production has by this time and in large part become digital. And yet,
cameras themselves have not changed all that much. The image quality of capture and broadcast
is still mostly standard definition, with some rare, high-priced exceptions (e.g. the work of
George Lucas). The camera bodies look similar. They still shoot on tape. The control functions
are still more or less the same. The only great difference is in the material being captured – the
cameras spit out digital “stuff.” Through that one difference, though, the camera has indeed
changed, both in terms of what can be done with its material once it is captured and also in the
cultural expectations of its performance. What did the digital allow, then? What is the digital
video camera? It has become a tool for communicating directly. For sharing information. For
reaching an audience. It is a device that creates material to be manipulated. It is a device that
produces content to be edited, not just screened. As such, the process of moving image-making
has seemingly been democratized. The nonlinear editor awaits my footage. The internet awaits
my upload. The audience awaits my content. The camera may be similar, but the expectations are
new. They are all exploitations of the medium-specific property that digital video is data.
In contrast with my waxing poetic at the significant, fundamental alterations in practice
that were enabled by the digitization of video, the trade press was much more lukewarm when it
came to discussions of digital utility; there was far less talk of “revolution” and
“democratization” amongst practitioners when it came to the digitization of video in general.
Publications like Videomaker did not suddenly and dramatically expand their readership. As
practitioners themselves, such publications surely recognized the potential in digitality, but also
the technological limitations that still remained. All of the changes that I noted as being enabled
by digitality became a possibility, but not necessarily a reality – at least not for everyone.
Potential does not always equal access. Indeed, many of the technical possibilities (like digital
liveness) were not widely available until well over a decade after they became technologically
possible to a small group of practitioners.
To return to Brian Winston’s model of technological change, after a technology exits the
prototype stage and reaches the masses, its use on a wide scale always has the potential for
revolutionary change. Yet, the ways in which the technology is made to fit the current economic
model and cultural climate is such that the technology only reaches the market through the
“suppression of radical potential.”[174]
New technologies can only be utilized inasmuch as they are
situated into the existing social structures. Despite the fact that an amateur could shoot video,
edit it, and post it online, there remained a crucial obstacle (or series of obstacles) to doing so –
the length of the chain from production to distribution (the mess of equipment on my desk, the
mess of software on my computer, and the mess of an internet connection between me and my
audience). Cameras and computers were cheaper than in previous eras, but still were luxury
purchases. While the video camera became a device that produced video that could potentially
leave the home, the impact of that technology was mitigated by the fact that the infrastructure to
support that transmission didn’t yet exist.
[Footnote 174: Winston, Media, Technology, and Society, 11.]
This suppression of radical potential is often missing from the most utopian and
dystopian formulations of the switch to digital image-making. To consider only the potential of
morphing images in the early 1990s, for example, is to drum up fears of visual simulation
overtaking realist, indexical representation on a wide scale – a fear that has not come to pass.
Digital effects are difficult and expensive. The same fears are resurrected even now, as with new
software that is capable of face-swapping video. A well-publicized example featured an
AI-assisted program that could replace any actor’s face with that of Nicolas Cage (although
dystopian examples exist as well!). The implicit/explicit fear, again, is that the existence of such
technology will destroy faith in visual imagery as we know it, leading to an apparent degradation
in the truth claim of digital video (which is ironic, considering it was said to have already
degraded in the early 1990s, but persisted pretty well regardless). I’m not saying that we as
scholars and citizens shouldn’t be aware and concerned about digital technology’s relationship to
truth and our perception of reality – especially in the case of technology that is explicitly
designed to manipulate our representations. Rather, it is ever more important to consider these
questions from a place of practice and usage, in order to cut through the more hyperbolic
formulations; trace a lineage of similar digital (and analog) technologies across eras to better
consider future trends; and avoid the sense of inevitability and/or completion that comes with the
panic/euphoria of dys/utopia. “Videography has been democratized.” Has it really, though? In
practice?
As will be seen across the digital era, the notion that moving image-making and
videography had been democratized with advancements in digital technology was not unlike the
myth of the giant-slayer – not entirely wrong, but a gross overstatement of potential. With the
digital video camera, widespread moving image making would not be possible until much later
in the digital video era, after two significant changes in camera utility occurred in the realm of
hardware development that effectively shortened the chain between production and exhibition
and, in doing so, removed the interstitial obstacles. The first was the shift in storage method,
from magnetic tape to disk drive, allowing the camera itself to become much smaller and
ultimately married with the computer directly in the form of the smartphone. That latter
consolidation, the second significant hardware development, turned the once-standalone camera
into a compressed, portable, software-enabled and internet-ready imaging device. Taken
together, these two innovations in camera utility allowed for not just the potential for
videography, but the mass proliferation of it. After all, democracy doesn’t mean much when not
everyone can vote.
From Tape to Drive
In 2005, I was teaching video production at Schurz High School on the northwest side of
Chicago. During one of our first sessions on postproduction, I was demonstrating for my students
the way in which to digitize footage. We hooked the camera up to the computer via Firewire
cable and opened Final Cut Pro. We set the computer to capture and the camera began to roll,
sending footage to the computer in real time. A silent, low-res preview of the footage began to
play on screen as the footage was ingested. After a few moments, the student that shot the
footage, Jesus, spoke up.
J: “How long are we going to watch this?”
M: “Well, we aren’t just watching it to watch it, this is the computer capturing the footage.”
J: “Oh. How long does it take to capture?”
M: “It happens in real time. So as long as you shot for. How long did you shoot for?”
J: “Like 45 minutes.”
M: “Then it’ll take 45 minutes.”
J: “Damn. Can’t you just drag it?”
M: “…what do you mean?”
J: “Like getting music off a CD. Why can’t you just drag the file off?”
M: “…I don’t know. That’s actually a really good question.”
J: “Like why would that work for music but not for video?”
M: “Well, the files for video are way bigger. I guess it doesn’t work with files that big.”
J: “Oh. That sucks. So what do we do while it’s capturing?”
M: “I usually just go eat a sandwich or something.”
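There is real arithmetic behind that exchange. The sketch below is my own back-of-the-envelope estimate (the stream-rate figure is my assumption, not from the conversation): MiniDV’s DV25 codec records at a fixed data rate of roughly 3.6 MB per second once audio and subcode are included, and FireWire capture plays the tape in real time, so ingest time equals shooting time no matter how fast the computer is.

```python
# Back-of-the-envelope figures (my rough assumptions): a DV25 tape
# stream runs at a fixed ~3.6 MB/s, and tape-based FireWire capture
# happens in real time -- 45 minutes of footage takes 45 minutes.

DV_STREAM_MB_PER_SEC = 3.6  # approximate total DV25 stream rate

def capture_stats(minutes_shot: float) -> tuple:
    """Return (capture_minutes, gigabytes_on_disk) for tape ingest."""
    seconds = minutes_shot * 60
    gigabytes = seconds * DV_STREAM_MB_PER_SEC / 1024
    return minutes_shot, gigabytes  # real time in, real time out

mins, gb = capture_stats(45)  # Jesus's 45 minutes of footage
print(f"{mins:.0f} min to capture, ~{gb:.1f} GB on disk")
# 45 min to capture, ~9.5 GB on disk
```

The contrast with “dragging” an MP3 off a CD is the point: a three-minute song is a few megabytes copied as a file, while 45 minutes of DV is nearly ten gigabytes that, on tape, could only arrive as a played-back stream.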
Having been raised in the analog era, capturing video via tape transfer never struck me as
being strange or tedious. The act of playing one piece of media and recording it on another was
one I had performed countless times. I had done it making mixtapes in my boombox. I had done
it taping my “Best of the Blaze 103.5 FM” compilation. I had done it taping Big Trouble in Little
China off of WGN Action Theater. I had done it when I dubbed a copy of The Evil Dead for a
friend, which required borrowing my grandma’s VCR to hook up to mine. The act of copying,
for me, was an act of real-time transfer. Content flowed from one medium to the other. Audio
and video were conceptualized for me as things that existed in the present – things that existed
only when they played. If it wasn’t playing, it was effectively locked. Frozen in time. Even my
earliest forays into digital music, CDs, were tethered to presentness. I could copy a CD to an
audio cassette tape in real time, essentially no different from copying another cassette tape, albeit
with more accuracy. Computers, I knew, were different. I could make a copy of my MS Word
file simply by right-clicking “Duplicate.” Even when those media first collided, with CD
burners, the phenomenology still retained that real-time feel, even if it was accelerated. Burning
a CD required it to spin, and the terminology (“32x transfer speed!”) preserved the feel of
records and RPMs – it was basically as if I were still pushing “play” and “record,” just that the
turntable/tape deck/CD was simply spinning really fast. It still happened, in my conception, as a
flow of one media into another (even though with CDs it absolutely did not work that way in
practice). Even when downloading digital music, which was generally quite a slow process in the
early 2000s, the delay still retained the illusion that somewhere someone was playing the song,
and I was merely recording it. I had displaced the digital reality with my analog past.
For my student, Jesus, that was not the case. Even though he was less than a decade
younger than me, he had never taped anything off the radio, a cassette, or a CD. He had never
seen a turntable (outside of a DJ scratching). Movies were on DVD. Music was always digital,
and if you wanted to duplicate a song, you right-clicked “Duplicate.” Thus, for him, having to sit
on standby while a tape rolled and rolled, scene by scene, for 45 minutes was the oddity. That
was the medium that was not like the others. What struck me most was that he seemed like he
was right. Why couldn’t one transfer footage from a camera much faster, as with a music file?
Copying a track took a fraction of the time it would take to play. Copying a movie file took
longer, but certainly not as long as playing it in real time. Why couldn’t a camera transfer
footage faster? It was digital, after all. Obviously recording had to happen in real time, as the
profilmic event happened in real time. But surely file transfer could be faster. Even with video’s
digitality, it was still tethered to a rusty, mechanical infrastructure.
When light hits the camera’s sensor, the string of data gathered by the pixels is generated
in real time. The storage of this data can come in any format that stores digital data. With
MiniDV, that medium was magnetic tape – the same medium that was used to store an analog
signal. As digital data was generated by the camera’s sensor, it was sent via internal cable to the
tape spool that registered the string of 1s and 0s onto the tape as changes in magnetism. The
string of data was all there as it would be in a file. The problem was, the tape could not
regenerate that digital signal unless it was spinning; the string was rebuilt only as I ran the
tape. Storing
video on a spinning disc, like a DVD or MiniDVD, was possible as well. The system required
the disc to spin both to receive and to generate the signal, but just like with a CD, it could spin
much faster than would be required for real-time playback, and therefore could transfer data
much faster than a tape. DVD-based cameras did exist, but they were unreliable as write-media
for a number of reasons, first and foremost being that they were prone to skipping (just like
portable CD players during playback) which would result in lost information. These early digital
cameras were only partly digital – they generated digital data as their information-carrying
medium, but they moved and stored it via mechanical means. That is to say, the early digital
video camera was only mildly computerized.
Storage on a solid state hard drive is different from storage on tape or disc as it is not
hampered by any real-time mechanical limitations. After capture, the digital data is all present
and merely needs to be copied to be reproduced. The speed of this copy is dependent not on the
media’s real time duration, but on the drive speed and the computer’s processor that receives the
information. Hard disk storage was a possibility in the early digital era, but it suffered from
several problems. In order to capture video in real time while filming, the drive write speed
would have to be incredibly fast for the era, which would mean a very expensive drive.175
Further, large hard drives at the time were not yet solid state, meaning they still had moving
parts. Rugged design was not often at the top of drive engineers’ lists, as they were far more
concerned with speed and power, and thus hard disk drives were far more delicate than the
robust video tape housings that had been developed and field tested for the last two decades.
Drives were prone to overheating, and dropping one even a short distance would potentially
destroy the drive and its contents (ask me how I know…). If one of video’s primary attributes
was its portability, the opposite could be said for the hard drive. Thus, for much of the video
camera’s life, it was reliant on the tape.
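For the technically inclined reader, the write-speed constraint can be quantified with a short sketch. The 3.6 MB/s figure (video, audio, and subcode combined) is the commonly cited total data rate for the DV format; the Python below is illustrative arithmetic only, not part of any actual camera system:

```python
# Illustrative arithmetic: the sustained write speed a drive needed to
# keep up with a real-time DV stream. 3.6 MB/s (video, audio, and
# subcode combined) is the commonly cited total DV data rate.

DV_TOTAL_MB_PER_SEC = 3.6

def storage_per_hour_gb(mb_per_sec: float) -> float:
    """Gigabytes consumed by one hour of continuous recording."""
    return mb_per_sec * 3600 / 1000

print(f"Required sustained write speed: {DV_TOTAL_MB_PER_SEC} MB/s")
print(f"Storage per hour: {storage_per_hour_gb(DV_TOTAL_MB_PER_SEC):.1f} GB")
# prints: Storage per hour: 13.0 GB
```

A drive in the early 2000s that could sustain that rate without interruption, hour after hour, was an expensive proposition; the tape simply absorbed the stream mechanically.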
Reliance on tape wasn’t the worst thing in the world. As mentioned, the technology was
durable and tested. A MiniDV tape had ample storage space for the digital video and audio
signal, including early versions of larger (albeit compressed) HD signals in the form of HDV.
The labor processes for tape were simple and familiar, for both professionals and amateurs.
Recorded analog video formats in the preceding years were largely tape-based, so the act of
feeding the camera and recording itself were largely unchanged; for home users, MiniDV was
like a cute, little version of VHS. On set, the most common tapes lasted for 60 minutes at full SD
resolution, far longer than any reels of film. Also unlike film, tapes themselves were small and
lightweight. A documentary crew trekking across the Sahara could carry tens of hours of storage
media for little over a pound. While most professionals wouldn’t need or bother to, the tapes
themselves could also be blanked and/or recorded over.

175. Such drives did exist, though mostly in experimental format. Russian Ark, for example, was
shot on a specially designed hard drive that could hold the entire film in a single take.

Thus, if an individual wanted to, they
could record their home movies, digitize the tapes, and then re-record another sequence over the
same tape without any loss in quality (at least theoretically – sometimes the process would lead
to glitchiness in practice, though these minor glitches might not be a deterrent for someone
willing to record over their tapes). Such a practice was largely unnecessary, though, as MiniDV
tapes themselves were quite affordable – something in the range of $4-5 for 60 minutes. More
robust, professional formats like DVCAM tapes were physically larger and more expensive, but
ultimately only provided material reliability, as the image quality was the same as the smaller
MiniDV tapes. Most of the early prosumer digital cinema cameras shot on MiniDV (like the
XL-1 and DVX) as did consumer camcorders. Regardless of the image quality across the cameras, all
produced a digital data stream that was compatible. DVCAM from a professional news camera
could be intercut with consumer MiniDV.
As my earlier anecdote suggested, the limitations of tape both on set and in post were
largely hidden by their history, and their comparison to other moving image media. Considering
that tape allowed much longer shooting times than film, didn’t need to be loaded in the dark, and
was considerably lighter weight, it seemed like the advantageous format when it came to ease of
use and logistical benefit. In post, digitally ingesting tape was far easier than digitizing film, as
the latter had to be scanned frame-by-frame to produce a full-quality image. Not least of all was
the fact that tape could be played back immediately, as opposed to the long and labor-dependent
developing process needed to view film. Compared to film, tape was great. It was through
comparison to newer computerized media that tape revealed its limitations, both in post and, to a
lesser extent, on set. As video was used in more and more computerized applications after
capture, the presence of the mechanical tape recorder became increasingly out of place.
The post-production process involved hooking a playback device up to a computer,
initializing capture through the software, and playing tapes back as the computer ingested them
in real time. Home video application would likely use the camera itself as a playback device,
doubling the wear-and-tear on the camera, while any professional productions would likely pay
for a tape deck that could handle capture and spare the camera. Such decks could additionally
fast forward and rewind with greater speed, ultimately saving time in the process. The tape could
be captured either in one go, creating a single large file, or it could be broken up into pieces.
Using the editing software, the editor could shuttle through the tape, setting capture markers.
When the footage was marked, the computer would take control over the device, shuttling under
AI command, and capturing only what was selected. Most software provided a low-res, low
quality preview of the footage on the computer as it was being captured, though more
professional outlets would also hook the playback device up to a monitor for assessment of
quality. Lacking that, it would likely need to be watched again for a proper assessment. Once the
tape was rolling, it couldn’t be paused or rewound while capturing.
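The marker-driven capture described above is, at bottom, simple timecode arithmetic: however selective the editor is, ingest time is bounded below by the real-time duration of the marked footage. A minimal Python sketch, with hypothetical in/out timecode marks:

```python
# Marker-driven batch capture from tape: ingest time is bounded below by
# the real-time duration of the selected clips (plus shuttling between
# them). The in/out timecode marks below are hypothetical examples.

def tc_to_sec(tc: str) -> int:
    """Convert an MM:SS timecode string to seconds."""
    minutes, seconds = tc.split(":")
    return int(minutes) * 60 + int(seconds)

capture_marks = [("02:10", "04:40"), ("12:00", "13:30"), ("31:05", "35:05")]

clip_seconds = sum(tc_to_sec(out) - tc_to_sec(mark_in)
                   for mark_in, out in capture_marks)
print(f"Selected footage requires {clip_seconds / 60:.1f} minutes of real-time capture")
# prints: Selected footage requires 8.0 minutes of real-time capture
```

However fast the deck could shuttle between marks, those eight minutes of selected footage still had to roll, in full, through the heads.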
The process itself often presented computer-technical problems that did not have intuitive
solutions, some of which were better prevented through on-set camera practices than solved
after the fact. Frames would occasionally be “dropped” in capture, resulting in jump cuts in the
computerized footage. In very long clips, the audio and video would occasionally fall out of
sync. One of the most common problems for students of mine (and surely other amateur users)
involved gaps in footage, or “timecode breaks,” and this common technical foible serves as an
interesting case study of the collision between analog and digital in early digital cameras. A
MiniDV tape, upon purchase, was blank. It could hold 60 minutes, but if you played it back, its
“timecode” – the series of numbers outlining how much footage had been shot – was just a series
of dashes: --:--. When recording was enabled, footage would be recorded onto the tape and the
timecode would begin counting up: 00:01, 00:02, 00:03… In addition to keeping track of the
length of one’s shot and the footage remaining, the timecode was the language through which the
computer would keep track of footage during capture. If there ever was a break in the timecode,
either because of a technical error or user activity, the timecode after the break would reset. The
most common cause would be a user switching the camera into playback mode during
production in order to review their footage, and then accidentally letting the tape play past the
end point of recording, into the dreaded blank space, at which point the timecode would reset to:
--:--. Upon switching back into camera mode and recording, the timecode would start not where
it left off, but from zero: 00:01, 00:02, 00:03…
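The counting-and-reset behavior can be modeled in a few lines. This is a toy sketch, not an account of how DV actually encodes timecode on tape (real DV timecode is written frame by frame within the tape’s subcode); the class and its methods are hypothetical illustrations of the logic just described:

```python
# Toy model of MiniDV timecode: a counter that runs while recording and
# returns to the blank "--:--" state whenever the tape plays past the
# last recorded frame. The class is a hypothetical illustration.

class ToyTimecode:
    def __init__(self):
        self.seconds = None                  # None models blank tape: --:--

    def record(self, duration_sec):
        # Recording onto blank tape restarts the count from zero.
        if self.seconds is None:
            self.seconds = 0
        self.seconds += duration_sec

    def play_past_end(self):
        # Reviewing footage and running into unrecorded tape resets
        # the counter, producing a timecode break on the next recording.
        self.seconds = None

    def display(self):
        if self.seconds is None:
            return "--:--"
        return f"{self.seconds // 60:02d}:{self.seconds % 60:02d}"

tc = ToyTimecode()
tc.record(90)              # shoot a 90-second take
print(tc.display())        # 01:30
tc.play_past_end()         # review the shot and roll into blank tape
print(tc.display())        # --:--
tc.record(5)               # the next shot restarts at zero: a timecode break
print(tc.display())        # 00:05
```

The second recording is perfectly good footage, but its numbering no longer follows from the first, and it was that discontinuity that the capture software could not abide.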
Timecode breaks played havoc with the computer’s attempts to digitize. Some would
lead to dropped frames. Some would cause footage to be out of sync. Some would cause the
computer to throw up its digital arms and stop capture altogether. The timecode break
represented something foreign in the analog era – nothingness. In both film and analog video, a
lack of signal input is still manageable, as it technically is still an input, just that its values are
null. Unexposed film is just black. Unrecorded video is just that familiar and infamous greyish
TV static (“Oh God, Star Trek didn’t record!”). But unrecorded digital media is nothing. Null
and void: --:--. The tape existed, certainly, but as far as the computer was concerned, it did not. It
was like giving a projector a reel of a film with a section cut out and the ends left unspliced. It
was just digital air. The tape rolled – I could see it through the clear plastic housing on the
camera. But the computer could not. The mechanics that practitioners were used to were revealed
to be an inelegant and ultimately inadequate support structure for digital video’s existence as
data. The tape itself was a vessel for information. There was no real connection, in the logic
necessary, between the material of the tape and the data that the computer needed. What did it
matter if there were some blank bits? I could cut those out in editing. But the absence of
information was a fundamental flaw in the computer’s view of the video. It had no input. The
computer was incapable of creating the movie.
Due to the importance of timecode integrity, the act of shooting video became partly
structured around ensuring that the footage was digitally visible for the computer. Reviewing
footage on set, the most common cause of timecode breaks, had to be done with specific
practices in mind (none of which would be intuitive for the amateur user). When wanting to
review a shot, it became standard procedure to film a “tail” – meaningless footage at the end
of the footage to be reviewed, like the floor or the lens cap, into which the camera could play
after reviewing the shot. That way, I could watch my shot, let the footage run into the tail, switch
back into camera mode, and record my next shot over the rest of the tail, thus preserving the
continuity of timecode. A simpler method was simply not to review the footage at all and thus
not risk breaking the timecode. If there were doubts about a shot’s effectiveness, it was safer to
just shoot the shot again.
The end goal in all of these practices was to preserve the logic of the computer as much
as possible. The data stream had to be continuous. Even with all of these computer-centric
practice alterations, the most frustrating timecode breaks were the invisible ones. I could play the
footage back on the camera or the deck in the editing room, and it looked perfect. There was no
break in the numbers and no break in the images. My set conduct had been pristine. Yet, the
computer eye’s perfect gaze would sense a break and halt capture. In such circumstances, no
matter what the user did, the shot remained impossible for the computer to recognize. The sheer
number and absolute furor of timecode break-related posts on video forums like Creative Cow,
most of which consist of a thread of continuous all-caps screaming intermixed with desperate
cries for solutions, demonstrate the fundamental disconnect between the logic of real-time analog
playback and that of digital transmission. Video existed as a visual medium (for me) and data
(for my computer). The computer did not want real-time capture of a string of data. It wanted it all, and all at
once. It wanted data from a drive.
These various hiccups in getting one’s video onto a computer, both technical (in the
collision of mechanical and computational) and in practice (the adjustment of on-set conduct for
the computer’s benefit), were preliminary indications of a wider trend that would continue across
the digital era as the computer and camera continued to merge – the progression from image-
gathering to data-gathering. At its core, the act of using a camera in the analog era was one of
watching and creating images – look through the viewfinder, see an image on the screen, press
record, save the image, watch the image on a television. The digital camera still did this, of
course. As the camera translated the profilmic event into code, that code could be (and generally
was) turned back into an image later, via the playback system (television or computer monitor).
But, as it existed as code, it was also stored as code, manipulated as code, transmitted as code,
tracked as code. Video existed as a software input as much as it did as a medium of image-
creation. The disconnect between its existence as code and its physical, mechanical housing was
a metaphorical predictor of its future. Digital video had digital potential.
In the early digital era, tape was familiar and was far easier to work with than film. But,
as my student Jesus suggested, real-time digital capture was quite old-fashioned compared to
what could be done with other media on the same computer I was using to edit my video. The
digitization process was like the last bastion of the analog era – a reminder of the horrors of
linear editing before the freedom of random access. But beyond mere annoyance or quaint
reminder of times past, the real-time tether of rolling tape presented a significant limitation to the
way in which digital video’s existence as data could be exploited both in production and
postproduction. While the initial shift from analog to digital resulted mostly in changes in the
way the camera was conceived as a tool, primarily in the lower tiers of production practice, and
effected change in cameras themselves only tangentially, the shift from tape to drive affected
cameras and camera operation at all levels of production, touching labor practices, economics,
aesthetics and, without hyperbole, our very understanding of time and space.
File-izing Digital Video
I do have a problem with the lack of respect that data gets
sometimes.
- Michael Urban, Digital Imaging Technician176
When video camera manufacturers first began storing media on drives, it didn’t reflect a
major shift in camera operation, per se. The camera itself was nearly identical – the features were
all basically the same on the exterior, and the camera functionality was identical to that of
predecessors. The only major difference was that the tape housing mechanism was totally
removed, and a hard drive and/or card reader was installed. The recording process was similar –
light would activate the photosites and the chip would be sampled. The digital data stream would
then be sent not to the rolling tape, but to the camera’s drive. This could be either a fixed internal
hard drive or some combination of card readers, into which portable storage media could be
inserted. The switch to hard drives likewise did not affect image quality directly, though for
many users, the switch to drive storage did often come paired with an upgrade to HD, and for
good reason.

176. Steven Gladstone, “7 Expert Digital Imaging Technicians…” B and H (2015),
https://www.bhphotovideo.com/explora/video/features/7-expert-digital-imaging-technicians-dits-discuss-their-role-film-set.

Tape formats like MiniDV could record an HD video signal, but the signal was
compressed. The final images still had 1080 lines of resolution, but the camera’s sensor was
sampled with less accuracy, resulting in the potential for artifacting – digital distortions in the
image. As long as hard drive write speed was fast enough, HD could be recorded with less
compression (or, in the most expensive digital cinema cameras, no compression at all); the drive
had more bandwidth than a rolling tape. Thus, while storage and image quality were not directly
related (you could just as easily store SD footage on a drive) the switch to high-quality HD both
necessitated and was enabled by hard drive storage.
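A rough comparison of data rates shows why less-compressed HD presupposed the drive. The 25 Mbit/s figure is the published HDV rate and the ~1485 Mbit/s figure is the nominal uncompressed HD-SDI rate; the conversion below is simple, illustrative arithmetic:

```python
# Illustrative arithmetic comparing storage demands of tape-friendly
# compressed HD against an uncompressed HD signal. 25 Mbit/s is the
# published HDV rate; ~1485 Mbit/s is the nominal uncompressed HD-SDI rate.

def gb_per_hour(mbit_per_sec: float) -> float:
    """Convert a bitrate in Mbit/s to gigabytes per hour of footage."""
    return mbit_per_sec * 3600 / 8 / 1000

for label, rate in [("HDV (MPEG-2, fits on MiniDV tape)", 25),
                    ("Uncompressed HD (HD-SDI rate)", 1485)]:
    print(f"{label}: {gb_per_hour(rate):,.1f} GB/hour")
# HDV comes to ~11 GB/hour; uncompressed HD to ~668 GB/hour
```

An hour of uncompressed HD is roughly sixty times the data of its HDV counterpart, a flood no rolling tape could absorb.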
Beyond the possibility for higher quality images, there were other immediate results from
the switch to drive recording. The first was the simple change in the physical storage media.
Drives came in a variety of shapes and sizes, and the most common involved some type of
insertable card format. Early card storage was proprietary and absurdly expensive, reflecting the
cost of solid state memory at the time, which lacked the moving parts that made previous-era
drives fragile and unreliable. The Panasonic HVX, for example, shot HD video on Panasonic’s
P2 card – a futuristic-looking wafer that came in a variety of sizes (4, 8, 16, 32, or 64GB), with
the standard 8GB card costing $1400 at launch.177 That storage media price alone was more than
many consumer MiniDV cameras.
While current professional digital cinema cameras still require their RedMags or Arri
Compact Flash Cards, proprietary media gave way in most other camera models at the consumer
and low-budget prosumer end. Low-cost compact flash and SD (secure digital) cards became the
norm in many cameras, and the tiny cards – about the size of a postage stamp – could work just
as well as proprietary media and for a much lower price. A high-quality 8GB SD card was a
fraction of the cost of a Panasonic P2 card.

177. Burke, “Put it on my card,” 21.

The downside was the cards’ tiny size. While a VHS or even a MiniDV tape was prone to loss,
an SD card seemed liable to blow away in a stiff
breeze. In all cards large and small, the camera could shoot footage until the card was full, at
which point it would need to be swapped out for another. Some camera models even had two
card slots, meaning that cards could be “hot swapped” – one card could be removed and the
camera would immediately begin recording on the other. This process could be carried out
repeatedly and the camera could keep recording almost indefinitely.
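The relay logic of hot swapping can be sketched as a toy state machine. The card capacity and one-minute granularity here are hypothetical; the point is only that recording jumps to the other slot the moment one card fills:

```python
# Toy state machine for dual-slot "hot swap" recording: when the active
# card fills, the camera relays to the other slot without stopping.
# Card capacity and one-minute granularity are hypothetical.

class DualSlotRecorder:
    def __init__(self, card_capacity_min=20):
        # free minutes remaining on each card
        self.slots = [card_capacity_min, card_capacity_min]
        self.active = 0
        self.log = []

    def record_minute(self):
        if self.slots[self.active] == 0:
            # relay to the other card; an operator can now swap the full one
            self.active = 1 - self.active
            self.log.append(f"relayed to slot {self.active}")
        if self.slots[self.active] == 0:
            raise RuntimeError("both cards full - recording stops")
        self.slots[self.active] -= 1

rec = DualSlotRecorder()
for _ in range(30):        # record 30 minutes across two 20-minute cards
    rec.record_minute()
print(rec.log)             # one relay event, no interruption
```

As long as an operator keeps replacing the inactive card, the loop never raises, and the camera never stops rolling.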
If a practitioner wanted to skip the act of swapping and managing cards, larger hard
drives were also a possibility, both external and internal. The HVX, for example, offered an
external drive, allowing 100GB of storage, that would connect via cable and could be mounted
on the camera, tripod, or operator. The Thomson Viper digital cinema camera had a drive that
attached to the camera body directly, though it was clearly designed to be unobtrusive in a filmic
labor environment, as it was molded in the shape of a film magazine and mounted on the top of
the camera just as a film mag would. Cameras with built-in internal drives involved no physical
media interchange at all, as everything was contained within the camera body, and
footage could be extracted via a cable port. No cards to be swapped or dropped or lost – just hit
record and go, though the problem with the internal drive came from that same lack of
interchangeability. One could shoot footage until the hard drive was full, and then recording
would be inhibited until the footage was “dumped.” The drive would need to be hooked up to a
computer and shooting would need to wait until the transfer was done. While a home video user
might not mind the delay, as the camera’s drive was usually fairly large and could hold at least
an hour of footage, the break in shooting to dump footage with no alternative was a practical
obstacle for most other applications. It was potentially a waste of time and labor in a scheduled,
paid day, and threatened to disrupt any shoot that was dependent on an unpredictable schedule.
The prospect of a documentarian unable to shoot an intimate moment because the drive was full
was enough to make built-in drives impractical.
To call specific attention to a change in terminology mentioned earlier, the act of getting
footage into a computer went from the elusive “capturing” or futuristic “digitizing” in the tape
era to the much crasser “dumping.” The change suggests a different relationship between
practitioners and their footage. To capture is to secure something rare or valuable. It connotes
skill and difficulty. It is an active acquisition of something. When the practice first occurred,
there was something of a techno-magic to it – the computer taking control over one’s camera, the
tape reels springing to life and transferring the footage into the realm of manipulability. Ten years
later, when footage was being “dumped,” that initial marvel was gone. Acquisition had been
exchanged for removal. The very terminology suggests an excess of footage and, as I will later
discuss, this was often an accurate characterization. “Dump,” though, does connote speed, and
that was an accurate assessment as well. The priority came not just in wanting to get the footage
onto a computer, but to dump it as quickly as possible in order to make room for more footage.
Unlike with tape, wherein the master tapes would be brought to the digitizing space and
recorded, one after another, dumping often happened on set. Especially in circumstances where a
videomaker only had one or two P2 cards or a camera with an internal drive, that act of dumping
was a necessary step in keeping things on the set moving. In the tape era, videomakers could just
bring a stack of tapes to the production. If they weren’t used, no sweat. Next time. But with drive
storage, there was a finite vessel of digital space to be filled and emptied. The camera became a
temporary holding zone. It needed space to function. As a result, the practices on set were altered
to meet the logic of the digital device, just as they were with digital tape. In the analog era, the
act of recording was more permanent. I would film a scene on 16mm film, and have that thing in
my hand as a record of both the scene and my labor. Tapes extended that physical nostalgia
archive – I still have every DV tape I’ve ever shot, sitting on a shelf. It’s a visual representation
of my craft. Digital logic operates differently. I have a handful of SD cards. All of them are
blank. They once contained footage – hours of footage, from different projects. They’ve all since
been wiped. They mean nothing to me.
Through the act of dumping, the notion that cameras were no longer producing a fixed
material document was evident not just in theory, but in practice. Instead of a permanent physical
document, they were creating material to be “done to.” 35mm film is raw material for editing,
but digital data is material for computer creation – both in the creation of the image itself via the
monitor and as information for the computer’s manipulation. Dumping becomes a process more
akin to mining – fill the mine cart, pass it down the line, fill another, repeat. This act of mining
extended beyond the phenomenology of camera operation and card storage, and into the creative
process as well, whether it be filming green screens or motion capture for digital effects, flatly-lit
material to be heavily color graded, or actors to be replaced with digital apes. There was an
increased perceived distance between what one would see in the viewfinder and what the final
product might be – “might” being the operative word. Much of the shape of that finality,
especially for an effects-heavy film, might be fully invisible to an operator, shooting blindly at
an actor running through data points for later use in the computer. Of course, the process for a
home video user might be vastly different – free of digital apes. As image quality began to
equalize, some of the hierarchical distance between big-budget and low-budget production lay
not just in the image, per se, but in what one could do with its data.
The clearest practical advantage to drive storage was the resolution to my student Jesus’s
complaint – transfer was as fast and easy as any other sort of computer file. Upon inserting the
storage media into the computer, the digital card would open in my operating system like any
other hard drive. Footage could then be dragged off the drive onto the desktop, the drive could be
wiped clean, and the process began again. The greatest advantage therein was the massive time
savings. A 60-minute video tape would take 60 minutes to capture. Depending on the speed of
the drive and the card, transfer would be greatly accelerated; the same 60 minutes of HD footage
would take only a few minutes. In the digital tape era, the potential of digitality was tempered by
the rolling reels. Once those mechanics were removed, video had lost something that for decades
was a key element of its ontology – its 1:1 relationship with real time. This break with linearity
had an impact on practice, but it also represented a more fundamental change to video as a
medium. It altered the relationship between practitioners and their footage in relation to
conceptions of time – past, present, and future.
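The time savings Jesus intuited are easy to put in numbers. Assuming roughly 13 GB for an hour of DV footage and a hypothetical 40 MB/s sustained card-to-computer transfer speed (illustrative assumptions, not measurements):

```python
# Illustrative arithmetic for the capture-vs-transfer gap. 13 GB
# approximates one hour of DV footage; the 40 MB/s sustained transfer
# speed is a hypothetical card/drive figure, not a measurement.

footage_minutes = 60
footage_gb = 13.0
transfer_mb_per_sec = 40

tape_capture_min = footage_minutes                      # tape: always 1:1 real time
drive_transfer_min = footage_gb * 1000 / transfer_mb_per_sec / 60

print(f"Tape capture: {tape_capture_min} minutes")
print(f"Drive transfer: {drive_transfer_min:.1f} minutes")
# prints: Drive transfer: 5.4 minutes
```

An hour of footage in about five minutes, and faster still as drives improved; the sandwich break was over.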
Video was often discussed by theorists as being a medium of the present, as opposed to
film’s tendency toward representing the past. Because it could be live, video had a privileged
place as a window – it could represent other people and spaces in real time. Film was a medium
concerned with preservation of past occurrences, in part because it had to be developed.
Interactions with video as a medium were unique in that the images could only be accessed in
real time. Early video (as on television) was always live. When the ability to record on tape was
introduced, both through camcorders and also through the VCR, video’s relationship to time
changed. Thus reified, it became a record as much as film could be, allowing not just a manner
of capturing the past, but interacting with it. I could record and play back, pause, fast forward,
and rewind. I had more control over the time of my video (and, indeed, over that of film that had
been transferred to video), as it was a thing to be explored, forward and backward, in ways that
pre-tape video or projected film could not be. And yet, even with all this shuttling, video still had
ties to linear presentness. Without the reels of tape being in motion, at any speed, video was just
a piece of plastic. Its existence as a signal always needed something moving, something turning,
something cranking out its images. Even a still frame of video required the tape heads to spin,
over and over, reproducing the same signal to my television, its scanning gun tracing and
retracing the image, always moving. Video needed motion in the present. The motion generated
its existence.
Digitization stripped video of its ties to mechanical motion. Once the rolling reels had
placed footage into my computer, it became a different sort of object – no longer a continuously
running signal. With a freeze frame, my computer monitor’s pixels remained steady, illuminated
at the proper chrominance and luminance, ever present. Through digitality, video could finally
achieve true stillness. On the digital day, video rested. Like with a VCR, I could fast forward and
rewind my footage, but as video came under the domain of random access, a new temporal
manipulation was activated – skipping. By clicking various points in my QuickTime timeline, I
could in one moment be in the middle of my film, and then at the end. I could go from the third
scene to the tenth to the first in just three clicks of my mouse. Interacting with the video, in
image and as a timeline, became a haptic process (foreshadowing that which would be more fully
realized with the smartphone). Obeying the logic of computer data, video had lost its ties to
motion and linearity. It became a thing without a thing – a computer representation, a simulation
of linearity, whose rules could be bent like Neo bending space and time in The Matrix.
When digital video hit my computer, it underwent a transformation into a new, different
medium, but it first had to get there. At the camera, my digital video sat enchained by the ties of
archaic mechanics. It couldn’t become the boundless, timeless entity until it had been digitized.
There was a disconnect between the camera, still a clunky relic, and the computer itself. Video
was digital in the camera, but it only became a digital object when it entered the computer. When
the tape housing was replaced by the hard drive, the logic of the computer was inserted into the
camera. As I shot, the digital object sat immediately ready within the camera’s plastic housing.
The camera wasn’t creating an analog-present signal to be made into files later – it made files.
The drive camera was the mother of "digital filemaking."[178] The transformation was immediate
and clear in practice. When I switched my camera into playback mode, there were no more
VCR-style controls, as the tape recorder was gone, and linearity gone with it. The interface
instead displayed all of my files, thumbnailed as if in my computer already. I could interact with
them on the camera free of the bounds of time – I could watch shot 1 or shot 10, in whatever
order I chose. I could delete clips I didn’t want. Skip around my shots with random access. The
phenomenological experience of interacting with my camera became more like that of the
computer. No rolling reels, no linearity. The camera itself now produced digital objects.
Digital video was no longer just a medium of the present, nor of the past, either. It could
still be live, retaining its privileged relationship with the present as a window to the profilmic
event. Like film, it could be a medium with ties to the past, in its ability to preserve moments in
digital code (inasmuch as one is willing to accept non-indexical digitality as a sufficient
representation of past events, and surely some do not). But unlike a film camera, which offers a
delayed view of the past, and the analog video camera, which offers a view of the past through
the mechanics of generating images in the linear present, the digital drive camera brought past
and present under the instantaneous control of the computer, controlling the past outside of linear
time, instantly re-living it, eliding it, deleting it, starting over.

178. Geuens, "digital filemaking."

I can shoot a scene and
instantaneously be brought back to that moment. I can shoot a scene and, unsatisfied, remove its
pastness from existence. The digital camera captures time and renders it under present control.
The digital camera is a time travel device.
Digital Time Travel
The digital-video-camera-as-time-travel-device is not merely a theoretical construction;
to look at the on-set use of digital drive cameras is to see ways in which the ability to interact
with and control the past was beyond metaphor, and materialized in the real-world alteration in
the relationship between practitioner, footage, and time. First and foremost, the act of
transferring footage to a computer became significantly faster and easier. Hours spent on labor
were now reduced to minutes. The complex process by which I would need to hook up and sync
my camera to my computer was eliminated, as the file-izing of video occurred in the camera. For
a home video user, getting footage onto a computer was as easy as getting any other file on their
computer. Pop the card in, drag the files, done. From there they could upload the files directly to
the internet without any complicated capture software or import them into any variety of
programs for manipulation. In more professional contexts, ease of hookup was never an issue,
but the reduced labor time was a tremendous boon. The increased rate of footage transfer meant
reduced time in digitization, getting into the meat of editing sooner, and turning out dailies and
cuts faster. Tape digitization was one of the few elements of postproduction that couldn’t be sped
along by working harder, and the process had been eliminated.
The biggest change in professional practice, though, came in the potential for on-set
footage review and transfer. No longer needing to wait until the evening or the following day to
review dailies, footage from the camera could be quickly and easily viewed on set via the
camera’s LCD screen and dumped onto a laptop on set. The act of reviewing footage, which was
impossible with film and difficult and potentially risky with video tape, became a constant
occurrence. Moving image production at all levels now involved a key, heretofore largely
unperformed act – screening. Throughout the history of moving image-making, the on-set
relationship between practitioners and their images evolved along with the various imaging
technologies. Prior to the digital file’s instant review, watching media on set was an action
tethered to its generation in real time. Filmic views were a privilege of the camera operator alone
and, even then, the views were just an approximation – viewing the scene through glass, without
grain. The photochemical representation would have to come later, after the developing process.
Video assist allowed playback of footage, but this too was a crude approximation of the final
image – the video camera’s low-resolution best guess at what the high-resolution image would
contain. For projects shot on video, the camera gave a true representation of the digitally-
mediated finished product, but this representation was often limited to the eyes of the camera
operator, even after cameras came with LCD screens. Running cable from the camera to external
video monitors allowed for the creation of the “video village” for watching images remotely in
real time, which potentially meant more eyes on the scene-on-screen.
Jean-Pierre Geuens discusses the many effects that the incorporation of the video village
had on on-set practices, most notably that the role of the director shifted more toward one of
watching screens as opposed to experiencing a scene in the space of the actors.[179]

179. Jean-Pierre Geuens, "Through the Looking Glass: From the Camera Obscura to Video Assist," Film Quarterly 49, no. 3 (Spring 1996): 20-22.

With the switch to drive cameras, on-set practices shifted similarly, though the effects were
largely temporal rather than spatial. When digital video was file-ized, the ability to review became
instantaneous. Cameras often came with a “review” button that would replay the previous shot
without even needing to switch into playback mode.[180] The act of review often occurred as a
stand-alone ritual in itself, though it could also occur while other on-set tasks were being
performed (lighting adjustments, for example) and, as such, would not add any extra time to a
crew’s day. The act of reviewing footage became a part of many crewmembers’ jobs, all
capitalizing off of the potential of instantaneous and recurring playback for different ends. Actors
could review their performances as if they were watching them on screen and make adjustments
according to their impressions or the director's (which were often contradictory). Script
supervisors could review for continuity. Production designers could check the color palette in
motion. Visual effects supervisors could preview graphics. Producers could monitor the product
placement. DPs could adjust the operator. Everyone could comment on how good/bad everything
looked. Production became a process of doing and reviewing, making and looking, injected with
a greater sense of scrutiny. Or, more accurately, a different type of scrutiny. What once was
reviewed strictly in the eye of the mind, filtered through self-doubt or self-confidence, guided by
the frailty of memory, was fixed for review, again and again.
Digital scrutiny was doubly performed in going beyond the act of looking into the
practice of editing. Because footage could be dumped onto a laptop on set, an editor running
Avid could actually begin to edit the movie while it was still being made. After a sequence was
shot, footage could be dumped, eyelines and continuity checked, rough effects applied, a quick
assembly could be constructed, and the flow and pacing of a final cut could be predicted.

180. The Thomson Viper, in wanting to preserve the feel of shooting film, opted not to include a review function, an engineering decision that received considerable criticism and confusion for leaving out what would become one of the most significant medium-specific elements of the digital video camera.

Of course, these super-rough cuts represented only extremely preliminary visions, but if an editor
spied problems with the footage, it could be quickly re-shot, potentially saving tens of thousands
of dollars in labor and rentals if the problem were only recognized later and a full reshoot was
required. With digital filemaking, the once-compartmentalized practices of post-production and
exhibition were brought under the broad umbrella of production. As a time machine, the use of
the digital drive camera had collapsed practices that once occurred days or months apart and
meshed them together into a singularity. The on-set editor was like a scryer, gazing into her
crystal ball and bringing a view of the future into the present.
As a brief single-position case study of the complex effects of on-screen review, for
actors, the ability to instantly see one’s image on screen represented a paradigm-shifting
alteration of the art form, and a change that, in many cases, was a source of stress more than
anything else. In previous eras, most actors wouldn’t see their performance on screen until a film
was fully cut and completed (and sometimes not even then).[181] Self-assessment was always a
part of the acting process, with actors mentally reviewing their performance in a given shot,
weighing it against other takes of the same shot, other performances they gave in the film,
performances they gave in other films, performances of other actors, future roles they wanted,
their public image and aura, etc. The act of being able to visually review a performance on set
quantified their self-assessment. Seeing their performance, rather than considering it mentally,
provided a view into considerations beyond acting ability: appearance, costuming and makeup,
gesture as captured by the camera, positioning within a shot.

181. Again, video assist in the analog era offered a clumsier version of this potential (preliminary versions of video assist were developed in the 1960s so that Jerry Lewis could watch his performances back [Ganz and Khatib, Digital Cinematography, 22]). The advantage of drive playback was significant, in that playback was instantaneous and a full-quality representation of the finished product.

Talented, savvy actors in prior eras
could no doubt predict these things with some degree of accuracy just as a cinematographer
might (surely they had some understanding of alchemical magic as well). But the fixing of a
performance for review put image and acting in undeniable terms. To that end, I recall a moment
in a film production class in which a directing professor spoke of a common unwritten rule on-
set – never let the actors see the footage. The reason provided was that actors tend to be vain and
overly self-critical in ways that don’t benefit the film. In that particular explanation, one can see
a hierarchical separation of who should and should not review a shot, whose eyes might benefit
from checking, who was worthy enough to judge for retakes, what elements – artistic, practical,
personal – were deserving of scrutiny. Questions of the gaze and its power of visual dominance
were brought onto the set, as the ability to see the image was provided and denied at the whim of
those in control of conduct. For the actor, the act of reviewing on screen what was once in the
mind – screen-ifying memory – is indicative of a larger phenomenon that would develop across
the digital era for all video camera users (i.e. everyone with a smartphone).
The extent to which on-set review transformed the nature of image making is, of course,
largely dependent on the culture of the makers producing the footage. On a commercial set,
producers might delight in being able to double-check every logistical box as an additional
mistake-avoiding, cost saving, cover-your-ass mechanism. On an indie art film, less concerned
with the financially-driven mechanisms, the review of footage could serve similarly as a process
of critical review but also, in the absence of some of the gravity, a sort of artistic ecstasy – being
able to view with one’s peers the art that had just been created. With all gathered around the
camera’s LCD screen, the camera provided a view through a wormhole beyond the edit to the
final screening, linking past and present with a hopeful future (or, if the take was bad, an
ominous future, still to be changed). The magical alchemy of the singular cinematographer here
was replaced by a shared reveling in the magic of the digital camera.
For a home video user, that act of viewing and the instantaneous gratification was very
unlike former practices of home video, wherein the act of shooting and exhibition might be
separated by months or years. Screening, especially for an audience, might never occur at all, as
footage rested for eternity in the closet. With instantaneous playback, the act of filming again
became coupled with the act of watching, eliding time. In those ways, the instantaneity of
screening often made the footage itself more enticing. With the amount of time elapsed between
the shooting and watching of one’s vacation video, the images on screen were so divorced from
the trip itself that the representation was related only to events in one’s memory, and sometimes
a lost one. With instantaneous review, the past was only a moment away, to be relived and
relished immediately. The camera served as a bond between the immediate past and the present,
as the act of shooting was no longer just about preserving a moment for posterity but about
seeing that which just happened. Instantaneous review bridged the gaps between the true present-
ness of live surveillance and the archival nature of previous analog media. It enabled a near-
liveness. The use of the camera became a call to performativity, with the knowledge that one’s
appearance in a video would likely be viewed, and almost immediately.
Not quite a mirror and not quite a photograph, the phenomenology of the drive camera
was something new entirely – not just because of its digitality, but because of the potential
unlocked in new practices. The nature of digital video and videography was changed as the result
of storage media, of mechanics, of buttons, of screens – it was a practical shift in ontology.
Theoretical models of the camera were largely one-way – they involved light coming in and
reaching the film. Here was a device that was more of a Möbius strip. Light enters the lens not to
be indexically burned into the photochemical base, but to be viewed and reviewed, approved or
denied, preserved or deleted, studied for alteration and re-recorded, re-reviewed, re-denied. It
was a self-contained collapse of all the stages of production, from writing to screening. Media-
making was less of a start/stop binary, but a fluid intermingling of image-watching and image-
making and image-molding: “Action – Cut – Let’s watch that back.” “Say ‘hi’ to the camera –
Let’s see how it looked – Ugh, one more time.”
Just as data manipulation potential became a newly-developed point of separation
amongst spheres of production as computer and camera began to assimilate, so too did the
mastery of near-liveness, its visual feedback loop, and the collapsing of time. Success at various
levels became a matter of accurate divination. Just how well can you read the digital bones? How
accurate is your picture? Is it a small DSLR preview or a high-resolution 4k image, complete
with digital effects and color grading? How well can you see the future? How accurately can you
manipulate the past? What can you do with your data?
Labor and Loss in the Digital Domain
The file-izing of video was a transition that was not without its growing pains. Problems
were plentiful, troubleshooting message boards were busy, and video’s existence (or lack
thereof) as digital data became increasingly clear as the camera itself fell under the domain of
computational logic. Within the camera, the digital files were constructed and governed by the
computer processor. Unlike an electronic video signal that existed in real time for as long as the
camera was rolling, if a processor’s capabilities were overloaded, it was susceptible to the same
“crashes” that a computer might be. For that same reason, limitations in drive data writing, file
structures, and access often led to restrictions on individual file sizes. Practitioners using DSLRs
like the Canon 5D were hit with hard cutoffs on their recording times. Even though they
produced beautiful imagery, their digital files were capped at 10 minutes or so per clip, making
them ironically inefficient in the very area where video had always excelled. The hard cutoffs
were not a problem for many applications, as a 10-minute take was plenty for most fiction
production, but for any sort of event recording (from professional sports to children’s soccer
games) it made many drive cameras no more efficient (or indeed, even less efficient) than many
film cameras.
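One widely cited culprit for these hard caps was the storage filesystem itself: memory cards formatted as FAT32 cannot hold a single file larger than 4 GiB, so maximum clip length falls directly out of the recording bitrate. The back-of-envelope sketch below uses an assumed round bitrate for illustration, not any particular camera's spec sheet:

```python
# FAT32 caps a single file at 4 GiB; at a given recording bitrate this
# ceiling translates directly into a maximum clip length. The 45 Mbit/s
# figure is an assumed round number, not a specific camera's spec.
max_file_bytes = 4 * 1024**3          # FAT32 per-file ceiling (4 GiB)
bitrate_bits_per_sec = 45_000_000     # assumed H.264 recording rate
max_clip_seconds = max_file_bytes * 8 / bitrate_bits_per_sec
print(round(max_clip_seconds / 60, 1))  # roughly 12-13 minutes per clip
```

At higher bitrates the ceiling arrives even sooner, which is consistent with the "10 minutes or so" clips practitioners reported.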
The transfer of video to a computer, while much easier than with tape, still presented
obstacles in the chain between production and distribution. Because each camera company had
their own proprietary storage method, in order to get one’s drive or cards to be recognized by a
computer and its editing software, format-specific drivers often had to be downloaded and
editing software had to constantly be updated to accommodate the various file structures. When
it came to industry standards, the early “Wild West” period of drive cameras was one that
offered some clear standards (HD or NTSC resolution for image quality) and a lack thereof in
other areas (trying to capture a new camera’s footage into un-updated editing software).
To combat the competing and conflicting file structures, camera companies often
provided specialized software specifically for transferring footage into computers, adding an
extra step in between the camera and the finished edit. Other third party programs, like MPEG
Streamclip or Apple’s Compressor, offered catch-all middleware, allowing any camera’s footage
to come in and any editing software’s preferred format to come out. As questions of bit rates,
image compression, processor strain, and other such computer-y concerns began to enter the
videography lexicon, the required knowledge base of camera operation and computing began to
overlap – "H.264 is a great video codec for filming, but it's a nightmare to edit. Convert it to
Apple ProRes, but not 4444 – you don't need to operate in uncompressed color space." In the
analog era, technical issues were in the domain of video engineers, not most videographers.
When capturing in NTSC or PAL and calling it a day would have once sufficed, camera
operators and videographers had to be increasingly aware of the technical aspects of how their
images would be understood and processed by a computer.
Beyond issues with capture, the fear of loss plagued the use of digital video storage, as it
generally does with all digital practices. For videomakers, digital loss was a terrifying spectral
bogeyman, whose reach was invisible but the repercussions very real. For example, editing
software like Final Cut Pro was programmed in part to identify footage based on the naming
conventions of the cameras, which auto-generated the names of video files and housing folders
as they were being created. If a user changed those names, or moved the footage out of the chain
of nested auto-created folders, the footage was effectively made invisible to Final Cut Pro. One’s
footage could be viewed on the camera without a problem, but upon import into the computer, it
would become digital nothingness unless the computer-dictated naming conventions and file
structures were obeyed. This process is not unlike the nightmarish academic scenario in which
one’s conference talk is ruined when PowerPoint can’t find your images and video (often due to
similar operator error). As digital data, the video files were inputs to be read by the computer
such that the assets could be recreated. Altering video files and file structure was equivalent to
hacking any other computer software, with similarly catastrophic crashes the result.
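The fragility described above can be sketched in a few lines. The class below is a hypothetical, drastically simplified model, not Final Cut Pro's actual implementation, of the general failure mode: an application that records a clip's path at import time loses sight of the clip the moment its name changes on disk.

```python
import os
import tempfile

# A toy model (illustrative only) of an editing application that
# identifies footage by the file paths recorded at import time.
class MediaBin:
    def __init__(self):
        self.clips = {}  # clip name -> absolute path recorded at import

    def import_clip(self, path):
        self.clips[os.path.basename(path)] = path

    def is_online(self, clip_name):
        # The app can only "see" footage whose recorded path still exists.
        path = self.clips.get(clip_name)
        return path is not None and os.path.exists(path)

workdir = tempfile.mkdtemp()
original = os.path.join(workdir, "CLIP0001.MOV")
open(original, "w").close()  # stand-in for a camera-generated file

media_bin = MediaBin()
media_bin.import_clip(original)
print(media_bin.is_online("CLIP0001.MOV"))  # True: footage found

# The user "helpfully" renames the file outside the application...
os.rename(original, os.path.join(workdir, "CLIP00001.MOV"))
print(media_bin.is_online("CLIP0001.MOV"))  # False: footage now invisible
```

The footage itself is untouched and intact on disk; only the computer's ability to locate and recreate it has been broken, which is precisely the "digital nothingness" described above.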
Certainly problems preventing footage access could occur in the analog tape era – tapes
could become demagnetized or worn out, leading to bad signals, static, and glitches, but video’s
illegibility as a file represented something different. It wasn’t just a matter of bad signals, as
those generally had physical, material causes (e.g. I dropped my tape and stepped on it, I left it
on a magnet, the tape came out of the housing and got twisted). As digital material existed only
as information, its errors were also informational; the act of footage damage and destruction
could occur through non-physical means (but also physical means). In that way, the digital
material at times seemed to have an even more fragile existence than that of the analog era. If I
merely added a “0” to my footage’s file name, it would disappear from existence. The computer
couldn’t find and recreate my images. It was an oftentimes painfully clear way that digital
video’s precarious, ephemeral nature was manifested in practice. Lost tapes could be found.
Broken tapes could be reassembled. The mechanisms for doing so were clear and visible, even if
they didn’t always work. The lack of materiality in digital loss meant that it was less tangible,
subject to invisible forces, but the effects just as devastating. To paraphrase a fellow
videographer, “Working in digital media is the absolute worst. I can drop my hard drive down
the stairs just like a painter might drop their painting down the stairs. We have the same physical
concerns. But the image on their canvas won’t just spontaneously disappear.” Granted, if he had
backed his project up (something a painter could never really do), he’d be fine. To that end, with
fears of digital loss also came digital security. Digital’s immaterial nature was tempered by the
ease of its proliferation – especially in the age of drive storage, when transfer and duplication
were easier.
The horrors of digital loss and the need for digital security ultimately led to not just
altered practices, but to the creation of a new position on larger sets – the digital imaging
technician (or some less prestigious equivalent on smaller ones). The DIT was tasked with,
among other things, ensuring that footage was properly managed digitally. Or, in more
contemporary terms, digital video’s “workflow management.” The concept of a managed
workflow involved ensuring that, in no small terms, the events being filmed became digital in the
most efficient way possible. The keyword, “workflow,” brought considerations that may have
existed in the analog era with material media, but operated differently in the digital domain as
the “flow” was computer-dependent.
With analog material, there is a certain alchemy involved in ensuring that the profilmic
event is translated into a material medium not just creatively, but essentially. The job of the DP,
along with her assistants, is in part to make sure everything chemical and mechanical works
properly. They ensure that when the film is viewed in dailies, that there is a film to be seen. The
task involves achieving acceptable exposure, checking the gate to ensure that no debris has
gotten into the camera, ensuring that the film itself is maintained both before and after shooting
(e.g. that no light or other elements spoil the film), and that the camera is loaded properly. As far
as the camera crew’s job goes, workflow management is especially invisible labor.
On the digital set, some of these tasks are still performed (proper exposure, for example),
albeit more visibly, especially given the potential for and dependence on video monitoring and
playback. Anyone with a monitor, for better or worse, can see the imagery on screen and call for
corrections. Many of the material concerns of preserving the integrity of the medium are gone,
though ensuring that a digital image is retained calls for a different sort of technical expertise.
Digital alchemy. The concerns fall closer in line to what John Mateer calls "IT procedures"[182] –
monitoring hard drive space; ensuring that data compression rates preserve the integrity of the
footage; selecting the desired camera output format; coordinating the placement and logging of
footage in a manner that the computer can understand; creating redundancy through digital
duplication; maintaining compatibility across devices, computer platforms, and operating
systems.

182. John Mateer, "Digital Cinematography: Evolution of Craft or Revolution in Production?" Journal of Film and Video 66, no. 2 (Summer 2014): 9.

In the analog era, much of maintaining workflow involved ensuring that light had an
unobstructed and unmolested passage to the filmstock. With digital cameras, the flow is uphill;
cameras don’t talk to computers without help. As videographic practice altered to match the
needs of computer logic, the DIT became the translator between reality, camera, and computer.
Basic work for a DIT is what industry professionals might call “data wrangling” – the
phrase itself suggests an almost instinctual action on the part of data to be uncooperative. Like
other on-set “wranglers,” the DIT must force its animal to work against its nature, drive it
efficiently and safely from pen to pen, and make it perform tricks. DITs manage data both for the
computer, in ensuring it is properly ingested, and also for the various other personnel on set – the
creation of compressed dailies for those concerned with content, the assembling of raw footage
for the editor, the creation of digital masters for archival and redundancy purposes. Data
management for many DITs extends into the camera as well, managing interfaces and menus,
and often functioning like caddies for DPs as they monitor footage for quality control purposes
(focus, lens flare, etc.). Technically speaking, as DIT Abby Levine identifies, the title “digital
imaging technician” was created specifically by Local 600 IATSE and, as such, it is a union job
on bigger sets.[183] In practice, though, the term flies around as wildly as any on-set title. On larger
jobs, it is not uncommon for the duties of the DIT to be distributed across many workers, and on
smaller sets, for the DIT to act solely as a footage dumper (though some in the field might object
to the application of the title “DIT” to the performance of such a menial, if still important, task).
In the absence of physical media, the DIT’s labor serves not just to mitigate digital loss,
but as an antidote to “digital anxiety” – anxiety over the immateriality of the digital, driven by
creative, financial, and/or emotional impetus.

183. Gladstone, "7 Expert…"

As Levine describes his own position, "My overriding interest is that everything proceeds
smoothly, technically speaking."[184] Here,
smoothness refers not just to day-to-day operation on set but, more broadly, to the overarching
concerns that come in a world where a few lines of code can erase one’s footage. The labor of
the DIT is meant to ensure that with the transition into the world of digital code, which is
difficult to understand and invisibly threatening, daily procedures occur as they did in the analog
era. They are the on-set ushers of the digital age, ensuring that the invisible obstacles remain
both conquered and invisible. Such a position makes the job of the DIT especially precarious.
The importance of their labor is often only noticed if there is an interruption in workflow, in
which case digital loss not only occurs, but it is pinned to the apparent failings of an individual.
That is, they are not only the on-set ushers of digitality, but also its real-world manifestation
when a scapegoat is needed. As more computers exist on set – both in capture devices like
cameras and audio recorders and also in post-production tools like storage drives and computers,
human labor functions increasingly to ensure that the logic of the computer is followed as closely
as possible, lest its existence be known.
For smaller projects, the role of the DIT is still necessary – Lizzy’s hockey footage isn’t
going to get onto YouTube by itself. That said, having a fully-monitored workflow is a luxury.
Smaller sets may not be able to afford union DITs at all, let alone several. More likely is that the
role of the DIT ends up being performed by a PA and, as DIT Dave Satin jokes (in all
seriousness), “What other duties does that downloading PA have besides handling your precious
camera media? Lunch orders? Emptying the garbage? Driving the grip truck?"[185] Satin's
comments are simultaneously made as an assertion of the importance of his position and, given
their publication in the trade press, an encouragement for producers to hire union DITs.

184. Ibid.
185. Ibid.

His comments' subtext, though, reveals the extent to which the very real dangers of digital loss and
the anxiety that comes with them are more prevalent in the absence of capital. Continuing my
broader argument, the separation between spheres of production lies not just in how well they
can exploit data, but also by how susceptible they each are to its digital dangers. Digital security
is expensive, but digital anxiety is free.
Cameras and Practice in Transition
The changes that occurred on set as cameras shed their tape housings were not just
limited to the introduction of practices that more commonly occurred in postproduction (footage
management, editing, duplication, etc.). Compared to a hard drive, a tape recording mechanism
is quite large and requires a number of moving parts. Even a MiniDV tape, which is small in
comparison to a large format cassette like VHS, takes up considerable real estate within the
camera when one accounts for all the tape reels and read/write heads. When such a sizeable
element of the camera was no longer necessary for its operation as designers shifted to drive
storage, its removal impacted all three areas of functionality for which video cameras have long
been utilized – cheapness, portability, and shooting capacity – and allowed for a rethink of what
a “camera” might be.
Video could always shoot more footage than film for a cheaper cost, but the differences
in capacity were exacerbated when video moved to drive. Time to load the camera dropped to
almost nothing (much to the chagrin of everyone who enjoyed the film loading breaks). Tapes
were no longer required, removing that logistical consideration. Cards could be easily and
quickly swapped, allowing for a drive to be loaded with hundreds of gigs worth of footage. The
on-set mentality of, “We’re running low on film, so we really need to nail this shot” instead
became, “Let’s take it again, because it costs literally nothing.” In the absence of the clicking-
shutter-as-disappearing-dollars, media seemed free and infinite, and one could roll with
impunity. Peter Jackson’s The Hobbit, shot in 3D and at 48 fps, accumulated over a million gigs
of footage.[186] Such accumulation was still extremely costly (how many hard drives do you need
to store a million gigs?), but given the film’s budget, it was merely an extreme example of what
was possible across the board. It was cheaper and easier to shoot a lot of footage.
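The parenthetical question above ("how many hard drives do you need to store a million gigs?") has a straightforward back-of-envelope answer. The figures below are rough assumptions for illustration (typical drive sizes of the early 2010s), not numbers from the production itself:

```python
# Back-of-envelope arithmetic for the storage behind a million gigs of
# footage. Drive capacity is an assumed era-typical figure, not a
# number reported by the production.
footage_gb = 1_000_000         # "over a million gigs"
drive_capacity_gb = 2_000      # a typical 2 TB drive of the era
drives_needed = -(-footage_gb // drive_capacity_gb)  # ceiling division
print(drives_needed)           # 500 drives for a single copy

# With the redundancy a DIT would insist on, triple it:
print(drives_needed * 3)       # 1500 drives across three copies
```

Even at a few hundred dollars per drive, the total runs well into six figures before labor, which is why "media costs nothing" was only ever true shot by shot, not in aggregate.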
The ability to shoot unrestrained by physical media encouraged practices that were once
frowned upon for their temporal or financial costs. For better or for worse, the need for
rehearsals diminished as shooting multiple takes became less discouraged. On-set
rehearsals would often just be shot anyway (several videomaking guidebooks offer this
suggestion when dealing with actors in order to get unspoiled performances, different versions,
or unplanned moments), blurring the lines between practice and performance. Rather than
needing an actress to nail a line as the precious film rolled, a director could have her run it five
times in five different versions and choose one later, thus making the already segmented and
fragmented performances of film acting even more so. Such practices had a psychological effect
on actors, as Ganz and Khatib describe:

Digital cinema allows for a different kind of relationship between
actor and camera, because the digital video camera looks in a
different way. Since the digital camera is potentially always on, the
performers are potentially always performing[187] … The process of
acting becomes less about performing for a single observer and
more about the condition of being observed, whether by a small
fixed camera, a camera in the hands of another actor, or by several
cameras simultaneously.[188]

[186] Simon Gray, "An Unlikely Hero," American Cinematographer 94, no. 1 (January 2013): 53.
[187] Ganz and Khatib, "Digital Cinema," 26.
Beyond multiple takes, shooting with multiple cameras became less cost- and labor-prohibitive
as cameras became cheaper, camera crews shrunk, and media costs all but evaporated. Shooting
with multiple cameras is difficult beyond reasons of cost: lighting for two angles is
significantly more challenging than lighting for one, as lighting instruments often sit just off
camera and would be in frame if another angle were added. Still, for comedic productions that
tend toward flatter, less extreme lighting, and as more of a movie's "look" could come through
digital postproduction, a practice once reserved for un-repeatable actions like stunts or
explosions began to see more use in traditional production. Even accounting for the additional
rental and operator costs, the saved time and labor of shooting coverage in simultaneous
shots could ultimately translate into a budget reduction. Shooting with multiple cameras
had advantages beyond fiction production as well, with documentary and reality TV modes
benefitting creatively and logistically from the coverage allowed by the increased angles.
The ability to shoot lots of footage and with multiple cameras led to an increase in
another on-set practice that was something of a rarity in previous eras – improvisational acting.
Especially in comedy, it became commonplace to run a scene several times with the actors
riffing off of each other, taking chances, trying out new lines. In a talk given by actors Dave
Foley and Bruce McCulloch following a 2016 screening of their film Brain Candy (1996), the
two explained that their comedy troupe, Kids in the Hall, never improvised when making the
film, much to the audience’s surprise. They re-wrote scenes on set, and improvised in the
rehearsals, but they couldn't afford to change things while the cameras were rolling.[189] As Ganz
and Khatib identify, improvisational acting in feature filmmaking was once a marker of big-budget
production, due to the material cost.[190]

[188] Ibid., 28.
[189] Dave Foley and Bruce McCulloch, Q and A Following Brain Candy, University of Southern California, February 27, 2016.
Such conservatism is no longer a concern when
the drive’s the limit and media can accumulate. With cameras catching both sides of the action,
actors can fire off lines until they get tired.
The increased propensity for improv affected labor practices both before and after
production. The creative control of the writer, while certainly not eliminated, was further reduced
from an already-tenuous position in Hollywood on improv-heavy sets in which the actors
themselves became uncredited writers of many of a show's or film's lines, or of entire scenes.
Afterward, with the massive accumulation of footage, the demand for post-production labor
increased in the need to log and manage all of the pieces, many of them mismatched across the
variety of takes. Much of the comedy shifted to the editing room as well, as any number of takes
were cut together in order to make the best possible whole, sometimes sacrificing natural on-set
performance, flow, and chemistry for a choppier Frankenstein monster of lines – a technique that
critic Mark Harrison called "the comedy chimera."[191]
The propensity for improv as a production
technique went beyond joke-based comedy, with movies like Drinking Buddies (2013) and The
One I Love (2014) both shot on Red digital cinema cameras and both with entirely improvised
dialogue.[192]
While the practice of shooting video often encouraged the adoption of high shooting ratios
and a looser approach to scenes, it is important to point out that such practices were often
actively resisted. Whether out of habit or through pushback by old-school
[190] Ganz and Khatib, "Digital Cinema," 25.
[191] Mark Harrison, "Has Improvisation Harmed Movie Comedy?" Den of Geek (February 15, 2013), http://www.denofgeek.com/movies/24491/has-improvisation-harm.
[192] Maane Khatchatourian, "'Drinking Buddies'…" Variety, August 16, 2013; Geoff Berkshire, "Sundance Film Review…" Variety, January 22, 2014.
practitioners, the ability to shoot more footage didn’t necessarily mean that a crew needed to
adopt the practice. Director Paul Feig, whose affinity for improvised performances is on display
in many of his comedies, lamented that cinematographers frequently refuse to adopt the practice
of shooting with multiple cameras, opting for image integrity over spontaneity.[193]
At the
educational level, in USC film school’s Introduction to Production class, even though students
are shooting on flash memory cards capable of holding multiple hours of footage and of being
quickly dumped, the course places an arbitrary and artificial limit on how many minutes of footage
students can shoot. Mirroring the film school days in which movies were shot on 16mm Bolex
cameras and students were issued reels of film, professors now issue a hard data cutoff. The idea
is that such practices will help students recognize the problem of “overshooting” and encourage
extensive planning over just winging it, even if the latter isn’t as damning a practice as it once
was. The “back in my day” attitude suggests a time in which everyone on set had to be at their
best, nailing everything from sound to performance to framing, as opposed to today’s sets in
which retakes are free. The extent to which such nostalgic practices are viewed through rose-colored
glasses certainly depends on the set, but in practice, such attempts to demonstrate the downside of the
apparent freedom of digital capture reveal how changes in practice often lag behind the adoption of
the new technology itself.
The coupling of drive storage with cheaper and more efficient LCD technologies resulted
in one of the more visible shifts in practice both on professional sets and in the amateur spheres –
the increased reliance on the LCD screen. Cameras in the 90s and early 2000s generally relied on
the viewfinder – the small eyepiece with a built-in monitor for displaying video, similar in design
[193] Peter Caranicas, "'Bridesmaids' Caught Improv on Film," Variety (May 17, 2011).
to the glass viewfinders of film cameras.[194]
The LCD initially gained popularity in the early
2000s on more expensive consumer and prosumer cameras, allowing for operation without
having to press one’s eye to the camera body, and by the late 2000s it became the only means of
viewing in the newer and smaller drive storage cameras. Many television professionals
discouraged the use of the LCD early on because, unlike the viewfinder, the LCD wasn’t a real
television, but a computer monitor. As such, it wasn’t an accurate representation of what the final
picture would look like. As computer monitors and televisions fused together in the digital HD
era, these concerns were mitigated. The LCD screen was just a tiny version of what I’d be
getting on my TV, my computer, and (if I was lucky) the digital projection in a theater.
As matters of practice, the difference between shooting with an LCD screen and a
viewfinder is significant in that it alters the relationship between one’s body and the imaging
technology. With film cameras and analog video cameras alike, the camera bodies are held up to
the eye and, as such, serve as extensions of vision. It is a kino-eye – machine optically fused to
man. As I navigate my environment, I see through the camera, experiencing the scene. Vertov’s
poetics are clearest when they were experienced firsthand, camera pressed up against the eye
socket, the machine providing the vision. Even attempting to move safely on set (or in my home)
requires me to navigate through the limited vision of the camera, trusting its optics to enhance or
even replace my own. The camera’s vision is my priority and my limitation, experienced most
clearly when bumping into objects that are just out of frame. Proper technique for a good
cinematographer is to be a cyborg – one human eye open to scan the environment, the other
viewing the images through the viewfinder. A viewfinder camera operator is a body split – one
[194] Heavy-duty studio TV production cameras, mounted on pedestals, generally had larger TV screens that operated functionally more like LCD screens, but the portability of that practice was only possible in the 2000s.
half in the real world, the other in the film, schizophrenically in two planes of existence
simultaneously.
In the 2000s, kino-eye gave way to kino-hand. The LCD screen flipped open from the
side of my camera, itself resembling a tiny television. No longer seeing through the camera,
cinematography became an act of screen watching, experienced at a distance. A third-person view.
Fused to my hand, the camera appeared more as its own entity, no longer usurping my sight but
an extension of my desire for tactility. I moved hand-first, as with a flashlight. The camera would
turn, and I would need to adjust my vision accordingly. It was distant from my body, always one
step ahead. The experience of having one’s world mediated via the kino-hand was a precedent
for that of mobile phone video. In that light, Heidi Rae Cooley’s discussion of mobile phones
and “fit” is particularly applicable. “Fit” refers to the manner in which the mobile device (or, I
would suggest, the LCD-enabled video camera) “gives way” to the hand, and the two fuse to
allow a more bodily act of seeing. Cooley describes:
Vision that transpires in relation to fit and as filtered through
interface is seeing that is simultaneously distracted and sensual.
Finger, hand and wrist muscles synergistically flex and extend,
abduct and adduct accordingly in order to maintain the integrity of
contour between the hand and [mobile screenic device] and,
thereby, sustain fit. But to the extent that fit happens without
intention, and without assistance from the eyes because the hand is
familiar with the MSD, the eyes are freed from the task at hand and
can look on surrounding scenes (as well as screens) and events.
The acquaintance materializing fit, the spreading of hand and MSD
into one another, expands into interface and permeates the eyes’
engagement with the surroundings. In this way, vision is never
really free of the hand, insofar as it is always infused by fit, but
also because of the analogous geometry of movement between the
eye and the shoulder.[195]

[195] Heidi Rae Cooley, "It's All About the Fit: The Hand, the Mobile Screenic Device, and Tactile Vision," Journal of Visual Culture 3, no. 2 (2004): 145.
Unlike the myopic view through a viewfinder, the use of the LCD enables a greater sensation of
presence within the shooting environment, altering the experience of mediated vision both
theoretically and functionally. When cameras in the early 2000s featured both LCD screens
and viewfinders, the difference in their operation led to complementary practices. Shooting
a landscape? Viewfinder. Get a nice frame, and really nail the focus and exposure. Shooting a
tracking shot? LCD. Forget the perfect framing and keep the movement smooth. The LCD is
safer. As the viewfinder began to disappear on most cameras, all camera usage resembled that of
the Steadicam. More motion, more movement, more frenetic, more tactilely exploratory. This
sense of tactility would later infuse videographic practice to an even greater degree with the
smartphone, as the screen (and, by extension, the representational images themselves) became
objects to be explored haptically.
Even professional digital cinema cameras, which still often feature a viewfinder, lent
themselves more readily to being viewed on screens. They featured outputs and cable feeds
(often wrangled by the DIT) to monitors of all sorts – for the DP and director, but also for
everyone else on set who cared to watch. As larger sets moved to digital video cameras, the low-
resolution approximation of the video assist gave way to an accurate representation of the final
product. And everyone could see it – producers, actors, makeup artists. The captured image had
become a collective one. The image was no longer a privileged view beamed into the brain of the
DP, but was live TV for the on-set masses. Kino-broadcast. Directors often found themselves far
from the set in the video village, as if watching the finished movie from the comfort of their
chairs. Even on smaller indie sets, the LCD screen was always in view to anyone willing to
watch over the operator's shoulder. The live camera image was mixed (often on the
same monitor) with recorded footage and with cut footage from the editor. The effect was a
distancing of everyone – from operator to DP to director – from the physical, material act of
shooting. Moviemaking became a process of watching screens, just as screens were proliferating
in casual culture as well.
As cameras shrunk in size with the switch to drive storage, the LCD screen went from an
expensive possibility in the early digital era to a standard-issue feature. Viewfinders were a small
protrusion on a giant camera, but quite bulky on a very small one. An LCD screen, on the
other hand, was flat, and could fold into the camera body as if it were never there. On a small
scale, it mirrored the difference in shape between a large CRT television and a flat screen. By the
2010s, consumer camcorders shed the viewfinder almost entirely. A camera like the tiny Canon
Vixia had almost no buttons at all – it was primarily just a screen with a battery and card reader
attached. To view that camera is to see the inevitable transition to the camera phone, the device
that largely replaced the personal camcorder and the one that marked the finality in the
conversion of the camera into a screen. Whether it was the video village attached to an Arri
Alexa or an iPhone, the act of shooting and watching had collapsed and become one and the
same.
As the camera became more of a screen in itself, the images it displayed changed with
data storage as well. With most video cameras, what was being viewed was the image as it
would be received by the television or the computer. I could alter the images after the fact, but
the footage I saw in the camera corresponded to that which I would see later in editing. Not so
with contemporary cameras, especially those in the upper-tier professional sector. As digital
color correction has become more elaborate through software like DaVinci Resolve, which can alter
color, shading, and light effects with the precision of Photoshopping a still image, the
material from the camera is less important visually than it once was. This forgoing of the image
appears in its most extreme form with the use of recording formats like “log video” that
intentionally capture an image such that it is more susceptible to alteration. A camera shooting in
“log” records an image that is rather unpleasing to the eye – very low in contrast, washed out,
lacking color and definition. This thin-looking image, though, is precisely what the computer
needs to have the most latitude in color correction. Such a feature is often unavailable on cheaper
cameras (and is likely not desired in many cases, given how aesthetically unpleasing the image is),
again serving as an example of the potential for data exploitation demarcating spheres of
production. For a DP, the goal is no longer to capture the best-looking camera image, but to
capture the image that is the most manipulable. Because any optical effects (e.g. lens filtering)
will potentially “distress” the image, trading aestheticization for data loss, standard practice is to
simply shoot the image as cleanly as possible for manipulation later.[196]
Professional
cinematography has further become an act of data gathering – now visually – as the qualitative
quality of the image is made in post. John Mateer notes the pushback among DPs against the
characterization (or actualization) of their job as being reduced to mere “product acquisition,” a
literal commodification of their images as data to be sold.[197]
Many dramatic examples of the image-making-in-post process exist in marketing “color
reels,” like a viral example from colorist Taylre Jones. As an act of self-promotion for his
postproduction services, Jones posted a series of side-by-side videos that showed his color
grading work, comparing the log footage his company received from a production with his final
color grades, and the difference is (obviously intentionally) stunning. The low-contrast, washed
out and, by all modern standards, ugly images become vibrant, striking, dramatic, effective. The
[196] Terry Flaxton, "The Technologies, Aesthetics, Philosophy, and Politics of High Definition Video," Millennium Film Journal 52 (Winter 2009/10): 51.
[197] Mateer, "Digital Cinematography," 11.
DP has created the framings, and lit the scene, but it is the colorist’s job to fix (in both senses of
the word) the final images with software. Rather than create the stark visual contrast of a noir
film on set, the DP can get the scene fairly contrasty, and perfect the scene in post with the
colorist.
So as not to revert to the approximation of images that marks video assist shooting with
film cameras, the digital cinema camera’s unsightly log image is aided (or further confounded)
by on-set software, applied through the camera or sometimes by the DIT to footage he or she is
sending out to be viewed. The software counters the log image’s washed-out appearance by
applying a look-up table (or LUT) – a filter that simulates the final look of the film, making it
more pleasing to the eye, more beneficial to the DP in trying to set the mood for scenes, and
more useful for the on-set personnel who benefit from the setting of that mood. The LUT's
adjustments are not "burned in." That is, they are a digital filter over the screen, rather than the lens,
providing one possible view of the footage being captured; they are a computer-generated
preview of what the final footage might look like.
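Conceptually, a LUT is just a precomputed table: every possible input pixel value is mapped, ahead of time, to a display value, and applying the "filter" is then a cheap per-pixel lookup. The following sketch is a hypothetical toy illustration (a simple gamma-style contrast curve in Python), not any camera manufacturer's actual transform:

```python
# Toy illustration of a 1D look-up table (LUT). A hypothetical contrast
# curve is precomputed once for all 256 possible 8-bit pixel values; the
# curve here is an arbitrary example, not a real camera's transform.

def build_contrast_lut(gamma=0.45):
    # Precompute the display output for every possible input value.
    return [round(255 * ((v / 255) ** gamma)) for v in range(256)]

def apply_lut(pixels, lut):
    # Applying the LUT is just a per-pixel table lookup -- cheap enough
    # to run live over a monitor feed without touching recorded footage.
    return [lut[p] for p in pixels]

lut = build_contrast_lut()
flat_log_pixels = [10, 60, 120, 180]   # washed-out, "log"-style values
print(apply_lut(flat_log_pixels, lut)) # brighter, higher-contrast preview
```

Because the table is applied only to the preview feed, the flat log values written to the card remain untouched, which is precisely the sense in which the LUT is not "burned in."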
The use of on-set software integration represents a significant change in the DP’s role on
set and, as with other elements of drive-based video, its adoption has similarly changed the
relationship between practitioners and their footage. In addition to the many practical and
logistical shifts that have been discussed in relation to the process of incorporating digital
cameras into once-analog production labor, the concept of cinematography has changed
theoretically as well, in what it means to shoot footage and in what it means to be a
cinematographer. The film DP was the on-set alchemist with privileged vision and corresponding
authority, tasked with working in a process of absolute finality, in which the invisible registration of
light onto film was the ultimate in “burning in” the image. The digital DP works in possibilities,
in gathering building blocks, in taking steps toward finality but never producing it herself. Of
course, on the surface, a DP does the same things, more or less. She lights the scene, plans the
framing and the camera blocking, directs the camera operator or operates herself in shooting the
scene. But materially, as the image falls under the regime of digital mutability, the act of burning
has given way to an act of sketching. Perhaps Wally Pfister’s “oil painting” analogy is accurate
beyond its hierarchical boasting, and in regards to practice and the respect of that practice. As the
camera has continued to function more and more like a computer, through its interface and its
data processing, in the need to gather data to suit its computational demands (never mind the
images), cinematography has begun to operate as an extension of post-production. Pfister’s
anxieties are a more practical manifestation of those of many digital theorists – he’s worried
about being subsumed into the computer.
The Sensor-ized Camera
As the practice of cinematography is increasingly computerized, it is striking how much
cameras of all sizes have come to look like computers. Early digital video cameras resembled
film and video cameras of the analog era. With some professional exceptions, they were all-in-
one housings, with a body, lens, grips, and ergonomic design. Amateur camcorders were
shrunken versions of the same, but the pieces of the camera were still clear and identifiable. They
looked like cameras. Twenty years later, the Red Epic is a $30,000 cube. For a professional
purchasing that or some equivalent, it comes without any external features. That is, LCD screens,
proper handles and grips, and various other accoutrements are all separate. The "camera" is just a
camera body – a black block, not unlike a desktop computer. The camera has largely been
reduced to just a sensor, as if a modern digital version of the pinhole camera obscura.
This conversion is clearest not in the high-end cameras, which
inevitably are assembled into more functional camera-looking devices, but in the professional
application of smartphone cameras. DPs using an iPhone as the imaging device have built
comically complex rigs for shooting video – metal rods surround the phone in a cage, wires
protrude every which way, a giant 35mm lens sits in front of an adaptor affixed to the phone’s
tiny glass lens input, handles swing out for hand-held manipulation, a fluid head allows for
defined moves atop a tripod, its features are controlled remotely via an iPad. Amongst all this
mess, somewhere, is the “camera” – the five-ounce rectangle. Nothing but a sensor.
When tape housings were first removed, many camera engineers preserved the size,
shape, and ergonomics of previous generation cameras. Some, like the HVX, actually retained
the tape housing in addition to the drive storage, allowing for both tape and drive recording
within a frame that was even bulkier than previous era models. Even within these similarly
designed models, drive storage presented functional benefits. Because tape decks required
considerable power from the battery to drive the reels, their removal significantly extended
battery life on cameras, which meant the potential for either comparably-sized batteries with
extended performance, or smaller batteries with comparable performance. With the former, the
benefit for nonfiction filming – especially for productions in remote locations or those
requiring long-duration recording – was that a single camera battery
could sustain a shoot, or several, without needing to be changed. The bigger impact came in the
form of smaller batteries, which in turn allowed for smaller cameras. Shrunken camcorders like
the Canon Vixia allowed for home video shooting on a device the size of a candy bar. As image
qualities increased, the small, lightweight cameras could utilize drive storage to shoot in full HD,
and didn’t require any additional materials like tapes or multiple batteries. One could throw the
camera in their jacket pocket and be off. Qualitative quality in the palm of one’s hand. Or,
strapped to one’s head.
In addition to the decreased size and portability made possible by digital storage, camera
shapes altered in ways even more dramatic than the black cubes at the professional end. For one,
the DSLR would have been impossible if video were still shot on tape. The mechanisms of
the digital still camera and the video camera were always drastically different. Both operated on
a system of lenses and photosensors, but the storage media of the digital still camera was always
in drive form, as the file sizes were much smaller. Once drive storage was engineered to handle
HD video and the tape housing was removed, the recording mechanisms of digital still and video
cameras began to function much more similarly – so much so that DSLRs were able to shoot
video in the exact same manner as drive-based video cameras. The difference between the
camera bodies lay in a feature that was always present in SLR still cameras – interchangeable lenses.
While amateur video cameras shrunk and their lenses shrunk with them, the DSLR
offered an alternative, though the strange shape was rather awkward to work with when shooting
video. DSLRs weren’t designed for prolonged handheld shooting; their LCDs were designed to
show stills, not continuous video; they were prone to overheating during longer takes; and their
file structure system was not designed to handle large clips. All of these problems, initially hand-
waved by engineers, came to the forefront as videographers adopted the DSLR's secondary add-on
feature as the camera's raison d'être. They were using the cameras "wrong," and yet, the
image quality and image control was such that these strangely shaped and awkward cameras
became the standard shooting device for many low-budget and amateur projects, as well as big-
budget ones that wanted a great image in a small package (“Let’s strap the 5D to Iron Man’s
head!”). DSLR manufacturers eventually caught up and redesigned some of the features with
video in mind, but the still camera shape remained, as these cameras still had to exist in the
midway point between photography and video – two formerly separate practices that
increasingly became joined in machine and in practice. DPs using a DSLR could shoot video,
and then fire off a still photo as a continuity reference. An artist could shift effortlessly between a
professional video device and a professional still device – a tradeoff that had serious benefits for
a newsgathering operation, as two devices and two jobs became one (and likely were paid as
such). Other specialized video cameras modeled themselves after the DSLR’s system of large
sensors and interchangeable lenses, but featured more traditional camera bodies, and the design
of some, like the Digital Bolex, was modeled as a callback to the shape of vintage 8mm or 16mm
film cameras, as a strange homage to the analog era using state-of-the-art digital technology.
Amid these varied designs, the low price point (under $1,000) of the many competing, mass-
produced DSLRs made them the low-budget standard in the 2010s despite their nontraditional
shape.
Video reached new places and new heights with the rising popularity of “action cameras”
like the GoPro – a camera that likely deserves an entire book in itself. These cameras were
designed to make use of the shrinking drives, albeit to an extreme, but also of the
computerization of the camera, through onboard computer image stabilization. Action cameras
were both small and stable enough to be mounted in a variety of unexpected places for a variety
of reasons. The GoPro was noteworthy originally as being paired with sport recording, especially
so-called “extreme” sports, from surfing to parachuting to wing suiting. The pairing of
videography and extreme sports has a long history, exemplified in the rich lineage of
skateboarding videos. Because of the spontaneity and tendency to require multiple takes (and
sometimes multiple cameras), the advantages of video (especially cheapness) made it a popular
choice among amateur skate videographers. The GoPro was an additional tool in their arsenal,
though, as the tiny cameras were designed to take a beating, shake about with little impact on the
image, and go underwater, if necessary. They allowed for a practice of “sadistic videography,”
similarly pushed in the devices’ advertising, as GoPro flaunted their cameras’ ability to take the
abuse of any practitioners, no matter how extreme. The camera body itself was a tiny box –
smaller than a pack of cigarettes and lacking any major image controls, in favor of portability
and durability. It was cheaper than a DSLR and was able to capture content with minimal user
input, like the camcorders of previous eras. It was designed to be idiot-proof to an extreme –
content capture was king. What was important was not the imaging features of the camera nor
the ability to control the image (especially as image control became increasingly the terrain of
postproduction), but the ability to film anywhere, from anywhere, and easily. Those three
priorities, previewed to an extreme in the GoPro, can be seen to a lesser degree as the
support pillars of all current consumer camcorders, above all the smartphone.
Beyond increased physical functionality, much of the GoPro’s novelty came in its
aforementioned ability to capture views that were rarely seen before – cameras mounted to limbs
of surfers, cameras capturing jumps from space, the POV of wing suiters soaring past cliffs. In
addition to the novelty, though, the GoPro allowed amateurs the ability to film images that had
been seen before, but not without a significant capital investment. Take underwater videography,
for example. The ability to film high-quality video underwater was absurdly expensive in
previous eras, with underwater housings for cameras often costing more than the cameras
themselves. The underwater view was one that was effectively off-limits to amateurs and low-budget
professionals out of sheer cost.
While an underwater view might be an obscure example, the ability to mount a GoPro or
some equivalent to a drone was considerably more widely applicable (and likely represented an even
greater cost discrepancy). Aerial shots, whether captured with a crane or from a helicopter, were very
difficult for a low-budget videomaker, and largely impossible for a home video user. Both were
expensive and, on screen, were a visible marker of production value (even if it was just a single
shot, reflecting a single day or weekend of crane rental). Such shots would frequently draw
“oohs” and “ahhs” from crowds at a lower-budget film festivals, with the practitioner-heavy
audiences themselves aware of the difficulty in achieving them. With the drone-enabled action
camera, the sky was literally the limit. While still not cheap, a videographer could acquire the
ability to produce not just crane shots, but helicopter shots for less than the price it would cost to
rent a crane (forget a helicopter). Just as higher-quality images spread from the bigger-budget
spheres to the lower-budget ones with cinematic video, allowing both the illusion of competition
and a muddling of modes, so too did once-exclusive angles and points of view.
The spreading of elite views to smaller productions yielded a greater accessibility to the
visual conquering of space, a point that was greatly celebrated in the trade press.[198]
On the
surface, this may seem a trivial act, and as far as surpassing cultural gatekeepers goes, it likely
was. If anything, the ability to attain more eye-catching and logistically complex shots across the
board only raised the bar for producing material with a high production value. Drone shots are
still something of a rarity for many video users – the purchase price, technical ability ceiling, and
flight restrictions in many areas currently limit the activity to the most ambitious moviemakers,
allowing something of a return to the hierarchy of image making that existed in the late 90s, if
only in one specific area. That said, it should not be underestimated how much the newfound
ability to have one’s images soar was a point of pride for those who could afford it. Doing something that was once restricted by capitalistic boundaries was a novel and rewarding concept regardless of one’s ability to capitalize off of it oneself.

[198] Erik Fritts, “Drone Buyers’ Guide,” Videomaker 30, no. 6 (December 2015): 22-27.
That last concept of conquering space, along with the earlier-discussed notions of
conquering time, leads to the final discussion of this dissertation. Along with the digitizing and
file-izing of video, the third and most crucial component in the development of the digital video
camera’s functionality lies in the combining of all the elements of videographic practice into a
single device – the shortening of the chain between production and distribution. In coupling the
video camera with the now-ubiquitous mobile phone, video cameras were literally in the hands
of every smartphone user in the world. In the case of image quality, I suggested that its potentially revolutionary increase was tempered in part by the video camera’s
proliferation. But was video’s proliferation itself revolutionary? The switch from analog index to
digital code made it possible to “computerize” one’s video. This was a noteworthy technological
step, but its cultural potential was suppressed by the still-complex nature of the process by which
video was brought to a computer system. A user gains all the potential of digital freedom, but
only if they possess the complex skillset required to get video from camera to computer and
engage in its manipulation. Later, after video shifted from tape to drive storage, a user could
much more easily transmit files. The chain between production and distribution had been made
shorter, and the resulting inclusion of computer processes in the camera itself had a considerable impact on videography in the realms of ontology, aesthetics, and labor, across all sectors. Still,
the results of this impact were only felt if a user participated in the act of videography – taking
the camera, as a unique device, shooting video, taking the footage to one’s computer,
manipulating it, and then distributing it. The file-izing of video affected videographers, and
videography. As the chain of production was collapsed into a single device – the smartphone –
and as that device became ubiquitous, the practice of videography experienced its greatest shift
yet, in that it effectively ceased to exist. As more and more people are able to shoot video, and at
all times, the act of videography is no longer specialized. In short, the modern era is one in which
videography is dying.
The Smartphone and the Death of Videography
In a 2011 episode of the sci-fi anthology Black Mirror entitled “The Entire History of
You,” humans in the near future have small chips called “grains” implanted behind their ear that
allow the recording and playback of anything that they see. Entire lives are permanently recorded
for access at any point. Childhood memories are videos. Last week’s mundane day at the office
is a video. Everything is a video. It’s Vertov’s kino-eye in the most literal sense, though perhaps
in reverse. For Vertov, cinematography involved seeing through the machine, abandoning our
own senses and conceptions of reality in favor of that which the camera wanted to see. In Black
Mirror, it’s the opposite – it is a machine preserving and materializing what the body had
internalized mentally. Memory is fixed. The episode’s protagonist, Liam, is obsessed with his
wife’s recorded memories of a former lover and is intent on deleting them – removing their
potential for playback. His philosophy is not unlike that of the Las Vegas Board of Tourism – if
it isn’t recorded, it didn’t happen. Liam’s nightmare of constant videography comes to an end
(spoilers) with the intentional removal of his own grain. Risking blindness and brain damage, he
opts for a life in which he can no longer interface with his past beyond the subjectivity of
personal memory.
This particular dystopian vision of future society is one in which videography is dead. As
everyone can record everything that they have ever seen, the act of intentionally recording
something using a separate device is largely redundant and unnecessary. And, like all the best
science fiction, it has its roots in contemporary anxieties about technology. This particular
episode of Black Mirror bites hard as our own society becomes one that is increasingly reliant on
the videographic image, inching ever closer to a reality in which videography is dead. “Dead”
and “dying” are, of course, intentionally hyperbolic for the purposes of rhetoric. Even in Liam’s
dark future, in which eyes are cameras and everything is recorded, it is likely that cinema and
television still exist as before, and that those artistic practices require cameras beyond the
cinematographer’s eyes. The “death of videography” is my practice-based equivalent to claims
of the “death of cinema” in an earlier age – metaphorically indicative of a seemingly inevitable
and overwhelming sense of change, but in theory more than in actuality. Videography surely still
exists, but in a form that is fundamentally different from that which came before; videography is
dead, long live videography. In Black Mirror, the greater cultural change has come in the fact
that, through this new technology, 1) everyone can record anything at any time; 2) everyone can
watch anything that they’ve recorded at any time; 3) the act of videography has become
normalized, quotidian, de-specialized, invisible, constant, and encouraged. With the
popularization of the smartphone in the 2010s and the collapse of the chain of image production
into a single device, the “grain” has been planted for us as well.
Kino-Computers
I have argued thus far in this dissertation’s second section that, over the course of digital
video’s existence in the various realms of production, video cameras have become increasingly
computerized and the logic of videography as a practice has been likewise subsumed under that
of the computer, a process that has been accompanied by numerous and varied changes in the
relationship between practitioner, camera, and footage. A digital camera like a DSLR has been
computerized in the sense that it is still very much a camera, but it has adopted features that were
common and necessary to the logic of computing, as has its operator with her practices. The
smartphone tips the scales of the camera/computer relationship in the opposite direction – it is a
computer that has been camera-ized. Kino-computer. The computing power of a DSLR or a
digital cinema camera is minimal. My smartphone, on the other hand, has a processor that can
run many of the same applications that I can run on my laptop or desktop computer. It is web-
enabled. It has considerable drive space. Its processor increases in power with every iteration. It
is a computer that happens to have a lens attached. The camera is just one of its many features.
The kino-computer is the imaging device of the modern era. As such, if one of the separating
factors amongst the spheres of production was the extent to which practitioners could exploit
their data, the smartphone might be seen as the functional equivalent of the “cinematic,” and
with equivalent revolutionary potential, though no less mythological.
The update in functional utility that comes with the smartphone continues the progression
of practical development in which videographers capitalize off of the features that have long
made video unique and popular – cheapness, transmission, and flexibility. As a camera, the
smartphone certainly retains the benefits of having a small, portable frame. Shrinking beyond the
pocket-sized cameras of the previous drive era, the iPhone was the most portable camera one
could purchase aside from a GoPro. As far as cost goes, there were other cameras that were
cheaper. Consumer cameras were available in HD for only a couple hundred dollars. Smartphone
cameras would be available for a few hundred dollars more – about the price of a cheap DSLR.
The remarkable thing about smartphones was not their cheapness nor their portability in
themselves, as they did not offer any noteworthy advancements in either regard. What the
smartphone offered was consolidation. The Canon Vixia was a tiny camera, but it was still a
camera. It was a separate object to carry around. It was no PortaPak, but bringing it with you still
required a separate action, a separate pocket, a separate mental note. It was its own object. When
the camera was fused with the phone, its necessity was fused with that of the phone as well. If
someone left the house to go to the amusement park and forgot their video camera, they may turn
257
around, they may not. Forgetting one’s phone was a lost lifeline – to other people, to the massive
scope of the internet, to all the software and functionality contained therein. The video camera
now just happened to be attached to that unforgettable object. If I forgot my camera, I also forgot
my internet. As such, the digital video camera became as common on one’s person as their house
keys.
The same fusion benefitted cost as well. As household technological devices go, the
phone has always been much higher on the list of necessities than a video camera. Both a video
camcorder and a phone in the 2000s would have only been a few hundred dollars each, but if one
could only afford one device, it was the phone. Individuals who would never opt to buy a
camcorder got one by extension when a phone was purchased. In the 2010s, there is almost no
way not to have an HD video camera on your phone. Further, as smartphones became more
ubiquitous to the point of being apparent social necessities, they were available for purchase not
just as items in themselves, but as extensions of phone and data plans. Shelling out $800 for a
camcorder was a luxury, and one that a person could opt into. Paying $40 a month and getting
the phone (and the camera) for “free,” for something deemed necessary by society anyway,
meant that camcorder ownership skyrocketed in the smartphone era.
An additional financial factor lay in the fusion of camera with brand, as well. Obviously
all manufactured cameras have been branded across the digital era and well before, and amateurs
and hobbyists no doubt showcase brand devotion to camera manufacturers like Canon or Sony,
but those allegiances pale in comparison to those of a company like Apple. Noting the mythic,
borderline religious adherence to Apple products by its devotees (supported and perpetuated by
that same company), Ryan Alexander Diduck speaks to the manner in which iPhone use reflects
a sense of shared community, camaraderie, and belonging amongst its users.[199] With the fusion
of phone and video camera, such social engagement extended to camera use, marking the act of
videography simultaneously as an act of utopian community-building-through-brand-support and
one of dystopian capitalistic devotion and brand adherence governing the participation in
videography. As videographic communication became more common, that communicative
practice itself became branded. While a home video user might not even recall what brand their
prior camcorder was, all were surely well aware and even actively vocal about the fact that their
videography was iPhone-branded videography.
The fusion of phone and camera meant that more people had cameras, and
more people had cameras with them at all times. If the videography advertisements in the 1990s
and early 2000s were to be believed, the use of the video camera remained largely in the domain
of white males. It was a specialized, masculinized practice. With the ubiquity of the smartphone
camera, videography came under the domain of anyone with a phone. If the GoPro offered novel
views spatially and optically, the smartphone offered them socially. If anything, the current
prevailing stereotype of contemporary smartphone selfie culture is a sexist and an ageist one,
having shifted the coding of home videography into much more feminized terrain. The
disparaging image of a young woman constantly photographing or videotaping herself is one in a
long line of stereotypes aimed at shaming women into self-control, though in this case the
impetus is born not just out of criticism of vanity but out of the technology as well – it is the very
ability to shoot so freely enabled by the smartphone’s ubiquity that is apparently the driving and
dangerous force in selfie culture. In a famous (and totally fake) meme shared in 2013, a pair of
photographs juxtaposes two crowds – one in 2005 and one in 2013, both purportedly for the inauguration of the respective popes – Benedict and Francis.[200] In the 2005 photo, all the crowd members stand, watching intently. In the 2013 photo, the crowd is naught but a sea of camera phones, all capturing the event. Regardless of the integrity of the meme, it remains an effective demonstration of the climate of personal imaging in the smartphone era. We interact with the world via the video camera.

[199] Ryan Alexander Diduck, “Reach Out and Touch Something (That Touches You Back): The iPhone, Mobility, and Magick,” Canadian Journal of Film Studies 20, no. 2 (Fall 2011): 58.
Earlier in my life, I had the strange experience of being a subject in one of Michael
Apted’s “Up” documentaries. I was filmed by a documentary crew at age seven, and then again
at 14 and 21, each version a snapshot of my life at the time. Looking back at the three films, the
strangeness as a subject came both in being able to return to these snapshots as representations of
an age, but also to see my selfhood change as juxtaposed across all three snapshots. Due to
funding issues, Granada TV did not make an installment for 28 or 35. I remember thinking in
response, “That’s ok – I’m friends with all the other subjects on Facebook. I can follow their
lives there.” I realized at that moment that nearly everyone has an “Up” documentary now. The
last decade of my life exists on my smartphone in video and photographs far more than it ever
could have in an installment of the documentary. The snapshot is constant, curated by myself and
my social circle.
In tracking the major thrust of camera advertisements across the 20th century, Brooke
Wendt marks a trend from the earlier advertisements’ sale of “preservation of moments” to one
of rampant and compulsive collection of them in unlimited cloud storage and later public presentation.[201]

[200] The photos themselves are undoctored, though the 2005 photo is actually of Pope John Paul II’s funeral, a more somber event for which photographs or video might have been deemed in poor taste.
What is being sold is the notion that “we can become better versions of ourselves
through self-quantification and image-sharing.”[202]
Beyond a preservation of moments, the
smartphone video camera allows a renewed form of the media mummification and “preservation
through representation” as discussed by Bazin – not just in individual photographs, but in our
existence in a wider public data network.[203] “Digital security” takes on a whole new meaning
when representational immortality is tied to the cloud. Even if our cameras enable us to self-
preserve, our susceptibility to data loss makes us dependent on Facebook’s servers.
This current moment in which my video camera is on my person at all times represents
the latest stage of the video camera’s life as a time machine. Just as the potential for digital
nonlinear editing changed the videographic object into a thing to be edited, the ubiquity of the
camera changed my reality into something to be recorded – something to be instantly made into
that past-present fusion that the drive camera first enabled. My friend is now my-friend-to-be-
recorded. My dog is now my-dog-to-be-recorded. Space and environment have been
transformed, as now there is “no distinction between the private and the public. Both the public
and private spaces have become above all filmed spaces, spaces where filmmaking can and does
occur.”[204]
The present is a button-press away from becoming a past object, a part of my curated
history to be conjured up on the same device at my choosing. Or, worse, at my device’s
choosing, as Facebook’s “On This Day” feature resurrects my dead dog, showing me the video
of her running at the park. I see her post just as I am about to shoot a video of my new dog. I
shake off the awkwardness and shoot the video anyway, knowing it’ll be a hit on the internet in a few hours. I put on a performance while shooting. Like many other video camera users, I interact with the present as if the video is already on the internet.[205] The video camera has become an integral part of my present interactions, and my present interactions with the past, and my future interaction with past videos I’m shooting in the present.

[201] Wendt, The Allure of the Selfie, 16.
[202] Ibid.
[203] Andre Bazin, “The Ontology of the Photographic Image,” Film Quarterly 13, no. 4 (Summer 1960): 4-9.
[204] Ganz and Khatib, “Digital Cinema,” 25.
My digital camera’s status as a time travel device is due in part to its ubiquity, but also
because it is a camera that is fused to the full processing power of the computer. It still seems
odd that the thing I carry with me, that is at once a telecommunications device, a video game
console, a data processor, a word processor, and a camera receives the categorizing label
“phone” – possibly the function on the device that gets used the least. As time goes on, the
connotation of that branding title itself will likely change (if it hasn’t fully already), but until
then, it can serve as a distraction in technological writing. The digital video camera is less a
“camera phone” and more a full power “camera computer.” Thus, in addition to going
everywhere that the phone does, the video camera also has nearly all the features that a home
computer might have. The scenario requires a rethink of that question of hierarchical separation
first raised when video became digital – what can you do with your data? A whole heck of a lot
more now.
The kino-computer’s functionality reveals itself in the manner in which the camera is
operated. In order to shoot video, the camera must be activated through the use of software. We
interface with the camera like we would Microsoft Word or Internet Explorer, as opposed to the
haptics of a previous era. A DP could load a 16mm camera in complete darkness, hands shoved
into the light-proof bag, feeling the edges and curves and gears, threading the film through. The
camera was a physical object to be manipulated.

[205] John Del Signore, “Police Seek Three Men…” Gothamist, November 16, 2011.

The smartphone – the camera-screen – is just a gateway to the camera software, where the real manipulation occurs. On the iPhone, the standard
app for camera access is called simply that – Camera. Camera has become software as much as it
is hardware, and is only accessible as such. There is no glass viewfinder, no large protruding
lens, no raised buttons. It is a camera whose functionality exists almost solely as a computerized
interface. This might be clearest in the fact that when I activate the Camera app, I have to wait
for my “camera” to load.
The camera interface itself is a dual still photo/video app. This means that still photos and
video are fused into a single device, a single piece of software, a single digital medium. There is
no photo camera or video camera, only “Camera.” Like other digital cameras, my photo-to-be-
taken exists as digital video before it exists as a photo; the LCD preview is in full video mode,
showing me all the potential frames for capture before I press the button and make my selection.
As both photography and video are coupled within the phone, both have become fused into the
domain of software. The iPhone’s Camera app and others like it simulate the functions of an
actual camera in varying complexity, with digital zoom, focus, and aperture. The camera shoots
video like any other, and like the other more professional incarnations, the video is under the
domain of a nonlinear interface – a file system. My “Camera” app is digitally fused with the
“Photos” app, allowing me to seamlessly jump between shooting video and reviewing it, and
manipulating it through a variety of post-production processes, not least of all editing the video I
have shot. More elaborate apps allow for a full suite of post-production services. As much as the
smartphone represents a fusion of camera and computer, it is at all times both. I can shoot video
with nearly as much functionality as any other consumer camcorder and then edit it with nearly
as much functionality as any consumer editing software. It is not just that I have my camera with
me at all times, but also my editing suite, my color grading suite, and my digital effects suite.
Computerized Practices
While the practices of traditional amateur videography still play a role in how the
smartphone functions as a video camera, the kino-computer has led to a number of altogether
new forms of production, post-production, and distribution. The processes themselves are not
necessarily new, as many were available when video was first digitized. The major change
occurred when the obstacles that obfuscated the potential of the processes for amateur users were
mitigated or eliminated. Getting video to my computer with tape was challenging. With SD
cards, it was much easier. With a smartphone, it is a non-issue. A non-process. The video is on
my computer upon filming. The collapse of the chain is clearest in the fact that the practices
(production and post-production) don’t just exist on the same device, but they actually exist
within the same app.
With a program like Snapchat, the camera is used to record video as it would be with a
traditional video camera. The primary difference between it and my standard Camera app is in
the ability to apply various digital effects to the images themselves. These “filters” come in
myriad flavors from more traditional visual accents like color grading or sepia tinting, to
complex visual effects. In one popular example, I can shoot video of a friend as Snapchat
overlays a digital rendering of dog ears onto his head. When my friend opens his mouth, the
software renders a long tongue protruding from his mouth. Here, the video camera is coupled
with the potential of digital software such that I record my friend, the software analyzes the
image in real time, recognizing my friend’s face based on its algorithm, and then renders the
digital effects. Snapchat capitalizes off of digital video’s medium-specific potential for liveness
and its mutability to create a new digital object – a video, yes, but one that has been run through
a postproduction suite. It is not merely live video, but live postproduction. Live finality. Not
unlike the LUTs used in professional digital cinematography, software like Snapchat provides an
image that is fully computerized, fully processed.
The medium-specific property that enables live transmission of video remains, but now
incorporates the liveness of computing, sitting somewhere in between the liveness of viewing
and the “volitional mobility” discussed by Tara McPherson in her phenomenology of web
surfing.[206] We see the image but alter its texture as a digital object; we observe images moving as
a representation of the profilmic event in reality, but are equally aware of their on-screen
mutability and our ability to alter them through the use of the software and the unseen
exploitation of its data. If the practice of Vertov’s film-camera-as-kino-eye was philosophically
and practically driven by the machine’s desire to electronically “see,” then our use of the kino-
computer is driven by this sense of videographic-computational liveness – providing the machine
not just with the images it wants to see, but with those it wants to touch.
Our own sense of touch is activated as well. If the LCD-enabled camera represented a
“kino-hand,” allowing for a more tactile sense of visuality, the haptic smartphone screen allows
for the tactility of the image itself – I focus and alter lighting by touching the image. I zoom into
it by pinching it. My phone’s “haptic response” provides live feedback – live tactility, wherein
the image’s texture is no longer perceived by my eyes alone, but by my hands as well.
To my haptic senses, the image has a real texture as it is being captured. As Ryan Alexander
Diduck argues, our interaction with the photographic smartphone image via the touch screen
(and, I would suggest, surely the videographic image by extension) has become not one of “looking upon” as opposed to “looking through,” but looking through with the body rather than the eyes alone. The process represents the extension of the tactility enabled by the LCD-equipped camera, wherein we are not just led by the hand, but by the hand’s desire to touch and alter. If Cooley’s notion of “fit” represented a fusion of hand and vision, the haptic image of the smartphone fuses both with the profilmic event, an image to be haptically explored in real time.

[206] Tara McPherson, “Reload: Liveness, Mobility and the Web,” in The Visual Culture Reader, 2nd Edition, ed. Nicholas Mirzoeff (New York: Routledge, 2002), 462.
Our use of the kino-computer is beyond the optical, engaging new machine-enhanced
senses, drawing us to alter faces, catch Pokémon, augment reality. While the difference in image-
making between analog and digital was observable in theory in the early digital era, in the kino-
computer that difference is manifested fully as the cartoon tongue blows a digital raspberry to the
Luddites and the romantics. As was the case with the metaphorical “death of cinema,” to say that
videography is dead is to call attention to this new computerized logic, the altered medium, and
the evolving practices.
The image, whether analog or digital, was always a processed one, always a mediated
one, and it is important to recognize that history in the face of digital technologies that allow
seamless and casual mutability. The lack of “truth” in the Snapchat image is startling, and yet,
Snapchat is still primarily a communicative platform. The act of manipulation, even extreme as it
is, becomes fused with the image itself over time, just as once-novel manipulations like shallow
focus have. Our ability to easily forget that processing over time in traditional analog
videography, and apply the truth claim to video such that we believe in the integrity of the
image, has implications for contemporary technologies just as it did with those of the past.
Novelty is only novel as long as it is recognized as such, and the dystopian potential of
technology is only applicable as much as it is realized in practice.
The fears of simulation associated with the lack of indexicality in the early digital age
were effectively challenged by Tom Gunning, who reminded us that the truth claim of the image is still
a claim – not inherent in the image, but applied by its users based on their understanding of its
properties.[207] Our acceptance of contemporary kino-computer images as truthful or not,
simulated or not, requires a constant reassessment of their “truthfulness,” in relation to other
contemporary media and that of the past. This assessment was key to accepting images as
truthful both with analog photography and film, as with digital imagery, as Gunning reminds, “A
host of psychological and perceptual processes intervene here which cannot be reduced to the
indexical process.”[208] If anything, the initial fears of being inevitably and unknowingly
subsumed in a wave of digital simulation have flipped on their head as digital mutability has
evolved in practice, as the current dystopian anxiety is not one in which we are duped, but that
we are overly skeptical – even “truthful” images are dismissed as “fake news.” Again, turning to
practice as videography continues to evolve can help temper these extremes, in better
understanding not the utopian/dystopian potentials of kino-computers, but the discourse surrounding their use in practice, as that is where faith in the image is ultimately generated, more than in the existence or imagined use of the technology.
Beyond the application of software to the act of videography, the fusion of those media
has brought consumer videography further into the realm of “video watching” – practices that I
initially identified with industrial video and surveillance. In these, the video itself isn’t
necessarily meant to function as an archival object; it is a tool for extending vision, not memory.
Such a shift in practice further fuels the fire of my rhetorical calls for the death of videography,
as the etymology of that term’s suffix, “-graphy,” denotes writing.

[207] Gunning, “What’s the Point of an Index?” 42.
[208] Ibid., 41.

In many contemporary
practices, the act of writing is absent entirely. Nowhere is this clearer than in the use of the front-
facing camera – a video mirror to the self. While I might have used a camcorder’s articulating screen to record myself in a previous era, I now use it to check my teeth after a meal. The act is not
about recording – in fact, it is the opposite of recording, ensuring that something which I
currently see will never be seen by any other. Beyond the mirroring, functional use of the video
camera is central to many of the smartphone’s software applications. Using the Yelp app, I hold
my camera up to a city street and an on-screen display shows me the ratings of all the restaurants
in frame. I use it to spy Pokémon, invisible to the naked eye but rendered in real time by my
Pokémon Go software. I frame a sign in Mandarin and my phone provides a real-time
translation. I cover the video camera with my finger and the phone measures my heartbeat. In as
much as the video camera has always been a tool of looking, it is now a tool of processed
looking – looking coupled with data gathering. As Chris Chesher describes it, “The image is only
a disposable, intermediate step on the way to information.”[209]
That which the camera sees, and that which we see, is ultimately translated into digital
code (as image and otherwise), collected and stored, intermingled with other data, processed,
analyzed, uploaded, and re-engineered. Our looking is participatory in a wider network of digital
processes. If fears of invasion of privacy proliferated along with analog video cameras in the
1980s, such fears are compounded as the act of looking is paired with invisible data mining.
Here, we can return to that same question – “What can you do with your data?” – and expand it –
“What can someone else do with your data?”

[209] Chris Chesher, “Between Image and Information: The iPhone Camera in the History of Photography,” in Studying Mobile Media: Cultural Technologies, Mobile Media, and the Phone, ed. Hjorth et al. (New York: Routledge, 2012): 110.

My use of the kino-computer, in its act of imaging-through-data-gathering, is partly defined by my manipulation of that data and, again, the
separation amongst practices and spheres of practice is partly defined on that level of
exploitation. But it is further defined by how private that exploitation is. In using apps to interact
with the world through video, all of the imagery being created is subject to collection, voluntarily
or involuntarily. This is, of course, not unique to video. Digital data collection is constant and
largely invisible, though it is manifested most clearly in data breaches, whether through financial
institutions, major retailers, or Facebook. What is unique about the video camera, though, is that
the data collection comes from my computerized vision. The more I use the camera to collect
data for myself, for the creation of images, to communicate with friends, to archive my life, to
make art, to make a profit, to look at myself in the virtual mirror, the more those actions
themselves are no longer just my own. Taken to the extreme, as in Black Mirror, our eyes
themselves have been incorporated into capital.
The act of using a video camera has become ubiquitous, but differently. It is not your
father’s videography. As a computer, and as a complex communications device, the video
camera is intermixed within the network of processes that occur within the phone and those that
benefit from its use. As Chris Chesher outlines,
The iPhone Universe extends well beyond its technical parts, to
include dynamically interconnected and extended social, mental
and abstract components: material and energy components;
semiotic, diagrammatic and algorithmic components (plans,
formulas, equations and calculations that lead to the fabrication of
the machine); components of organs, influx and humors of the
human body; individual and collective mental representations and
information.[210]

[210] Ibid., 99.
The video camera is no longer an isolated object to be used for the process of videography.
While always a communication device, it has been directly injected into a microcosm of societal
processes manifested within the smartphone. As such, the smartphone video camera can no
longer be viewed in isolation, no longer be viewed as a tool of traditional videographic
processes, but instead as a conduit for social, mental, and emotional ones in ways that were far
more distantly removed from its use in the past. The collapse of the chain between camera and
computer has resulted in a fusion not just of technical processes, but of social-communicative
ones. I check myself in the video mirror. Record my rant. Upload it to Facebook. Give and
receive likes and comments. Make the video public. Make new friends. Receive death threats.
Revolutions in Image Quantity
You know, the courts might not work anymore, but as long
as everybody is videotaping everyone else, justice will be
done!
- Marge Simpson
In 1948, film theorist Alexandre Astruc published his discussion on the concept of the
“Camera-Stylo” – the camera pen. In contrast to Vertov’s kino-eye (or perhaps a logical
progression of it), Astruc’s conception of the Camera-Stylo brought cinema away from the world
of the theatrical exhibition and into the realm of direct communication. He wrote, “The cinema
will gradually break free from the tyranny of what is visual, from the image for its own sake,
from the immediate and concrete demands of the narrative, to become a means of writing just as
flexible and subtle as written language.”[211]

[211] Alexandre Astruc, “The Birth of a New Avant-Garde: La Camera-Stylo,” L'Écran français, March 30, 1948.

With the growing popularity of smaller gauge film/projectors and video, the act of motion
picture presentation would cease to be a specialized act and become a daily one, expanding our
intake of visual communication to the point that the
motion picture would be as quotidian as writing with a pen. Rather than rigid roles of
screenwriter and director, an individual would produce motion picture images of reality free of
the bounds of commercial cinema’s grammar.[212] While Astruc saw such potential in the 40s, the
required level of access to moving image technology is only being approached now, as the video
camera is as common (or perhaps more so) than the pen. By the time I was 18 years old, I had
used a video camera twice in my life. In contrast, my son had used a video camera more than
twice before he was 18 months old (his videos weren’t very good, but still). The ability to
produce and distribute video content is in the hands of everyone with a phone. Not everyone full
stop, but enough to tip the scale.
When video became digital, I argued, the potential for exploitation of the image-as-code
was activated immediately, like an on/off switch. However, the dissemination of access to the
technology and the means to exploit its digitality was not as binary. Through advancements in
drive storage and consolidation of the chain of production into a single communication device,
access to videography was granted to a far greater number of people. In my discussion of image
quality, I outlined how an image coded with cultural power and aesthetic capital was
disseminated into far more hands across the digital era. The ability to wield that image for
varying ends was a major way in which the act of videography could hold power, but only in one
specific area. As the cinematic image spread to all spheres of production, it ultimately only
reinforced the financial and cultural hierarchy that was already there. A revolution in camera
utility – the ability to easily shoot and distribute video – offers potential in a different form. It is
less about aping the higher ups, less about standing shoulder to shoulder with them, and more
[212] Ibid.
about creating an alternative – videographic literacy. Rather than the potential for influence
being contained within the image, it was contained within images – image quantity over image
quality. The majority of the U.S. adult population has a video camera on their person at nearly all
times. The number of smartphone users globally is predicted to eclipse 6 billion by 2020.[213] It is
not merely that more people have cameras, but that more people have fully digital-capable
cameras. File-ized cameras. Kino-computers. The increase in image quality that occurred across
the digital era granted to everyone something that had been held only by those with capital. That is, it was a
power that already existed. With the proliferation of the smartphone and the spread of
videomaking access, the resulting widespread videographic literacy is something entirely new.
Therein lies the revolutionary potential of the video camera in the 21st century.
The 2010s are an era of image saturation, but one in which individuals readily participate.
The one-way flow of media is not gone – television and cinema are still largely uncracked
bastions of media influence. But in addition to these major pipelines, tiny tributaries have
sprouted. Not strictly for the purpose of reversing or circumventing the heavier flows, but for
branching out in all directions. In short, the proliferation of video cameras has made media flows
less singular. As is the nature of digital technologies, media communication is less bound by
linearity, less restrained by real time, less tethered to the cable and the broadcast. An individual
Skypes with another across the globe. A musician gives guitar lessons for free via YouTube.
Moviemakers find an audience. A social media star is born. A Trump supporter is “destroyed.”
I would suggest that in this era of the Camera-Stylo, of the kino-computer, the hype of
digital revolution is not entirely hype, but the change in power dynamics that is occurring is not
strictly a coup, but an obsolescence. It is not a reversing of tides, but a rendering of them less
[213] Ingrid Lunden, “6.1B Smartphone Users Globally,” TechCrunch, June 2, 2015.
impactful. Not a severing of the cable, but the building of a network around it and apart from it
and away from it. With a revolution in video camera utility, the mere act of communication-as-
power possessed by media companies becomes less powerful when we all become able to
communicate as widely. Revolution in the sense of a wider videographic literacy might not go as
well on a t-shirt, but it reflects a turn to the use of cameras rather than the utopian idea that the
major studios will shutter their doors as moviemaking is democratized. Marvel’s Avengers:
Infinity War just opened and it did pretty well. I definitely cannot make that on my smartphone.
That said, I do have some solid videos of my dog to share, and I do believe that counts for
something.
It is difficult to separate the camera as an object of image creation from the broader
concept of “internet video” or “social media video,” precisely because of the shortening of the
chain of production within the imaging device. Turning a video on one’s smartphone into an
“internet video” merely requires another quick tap of a button once shooting is complete (or not
even that, if one is livestreaming). Perhaps separating the two is a fallacy to begin with for that
very reason, though it is one that I acknowledge ahead of this final discussion. Discussing the
significance of internet video and the myriad texts that exist in that distribution network is
obviously beyond the scope of this section, and many examples of the success of videographic
activism in the digital era lie in the adoption of existing techniques into a wider chain of internet
distribution. Instead, I want to consider how the potential for public sharing affects the practice
of operating the kino-computer beyond distribution alone, as smartphone videography expands
beyond traditional practice to become an act of interpersonal or even mass communication.
In a 2015 Mother Jones article, Jaeah Lee and AJ Vicens discuss the increase in videos of
police shootings in the wake of the high-profile killing of Michael Brown in Ferguson, MO.
Breaking down thirteen such videos from the previous year, most of which involved the
deaths of unarmed people of color, the article makes two points – one sobering and the other
optimistic, and both speak to the perils and potential of video as a communicative medium in the
digital era.[214] The first discouraging point is that despite clear video from a variety of sources
(smartphone cameras, body cameras, surveillance footage), only three of the 24 officers involved
in any of the shootings have been charged. A similar article, published by the New York Times
two years later, provides an extended cataloging of officer-involved shootings, suggesting little
change on that front.[215] The second, more optimistic point that Lee and Vicens make, though, is
that the increasingly vast collection of citizen-captured videos has resulted in a “tectonic
shift in public awareness” of the deaths of people of color at the hands of police.[216] No justice,
but awareness.
For many activists battling police brutality, the momentary optimism that justice might be
done given the events captured on camera by citizen bystanders quickly gives way to the
realization of the bleakness of the situation – despite being caught on camera, no change is made
at the administrative level. The thought that the brutality, so often observed and experienced and
reported by people of color, was given some evidentiary clout by the video camera was short-
lived. Even clear on-screen evidence was not enough to overcome institutional racism, the
power structures already in place, the imbalance in the courts, and ingrained prejudice among
institutions and the general public. In Mary Angela Bock’s sobering words, “If the
[214] Jaeah Lee and AJ Vicens, “Here are 13 Killings…” Mother Jones, May 20, 2015.
[215] Sarah Almukhtar et al., “Black Lives Upended by Policing,” New York Times, April 19, 2018.
[216] Lee and Vicens, “13 Killings.”
success of [filming the police] were measured only according to sanctions against police officers,
it would be a remarkable failure.”[217]
It seems that Marge Simpson’s satirical jab in this section’s epigraph was darkly funny
for the wrong reasons. In the late 1980s, with video cameras beginning to spread into public
hands, the satire in Marge’s joke lay in the fact that we risked giving up our privacy in favor of
apparent safety and “justice.” It was a joke about citizen surveillance. In reality, the darker joke
is that the trade-off was never there to begin with. As these officer-involved shootings
suggest, even if everyone films everyone else, justice is far more fleeting than Marge believed.
Homer’s line to wrap up the episode is even more apt for that reason. “Marge, my friend, I
haven’t learned a thing.”
The ability to film everything and anything, to create content and distribute it, is indeed a
manifestation of cultural power. One can reach an audience, leverage their influence, and even
make money. Citizens can surveil the surveillors, keeping cops and politicians (or anyone else,
for that matter) accountable for their actions. The disempowered have a means of establishing
fact and credibility. Whole groups whose existence within the media was restricted to those
corporations’ portrayals can challenge those representations through the daily filming of
individuals by individuals. As Bock argues, even the mere act of holding a camera in the
presence of a surveillor has a panoptic effect in its potential for disciplinary counter-surveillance
and a weaponizing of bodily presence.[218]
Yet, as was the case with image quality, many of these revolutionary acts run into the
cultural and economic systems already in place, and those designed to effectively maintain the
[217] Mary Angela Bock, “Film the Police! Cop-Watching and Its Embedded Narratives,” Journal of Communication 66 (2016): 26.
[218] Ibid., 26-27.
media hierarchy and separation in degree of cultural influence. When Antony and Thomas tout
the success of activists getting video of the 2009 shooting of Oscar Grant at Fruitvale Station
onto the evening news as an act “that compensates for an otherwise lax mainstream media
response, which in turn precipitates a reversal of the traditional agenda-setting prerogative of the
guard-dog news media,”[219] it is worth further acknowledging that such success requires
dependency on that same news media. It reflects a tendency in video activism, one that, as Chris
Robé notes, historically “both challenges certain aspects of capitalism while also being a
beneficiary of it.”[220] For all the momentary victories in the video revolution, network TV still
retains millions of viewers. Of the top 100 most-viewed YouTube channels, the overwhelming
majority are owned by major media companies. Cops escape indictment and prosecution.
Cultural elites caught on camera can “grab ‘em by the pussy” and still be president.
In examples like these, the manifestation of power was temporary, ineffectual, and
apparently illusory. The ability to wield the image, and the cinematic image specifically,
frequently and readily is a reality. But the notion that the democratization of moving image-
making might be a renewed opportunity to slay the media giant (or any other institutional giant)
is as mythological as any of its previous incarnations. Again, the fault comes partly from the
myth’s confrontational, David vs. Goliath ideology. It’s not that the democratization of
videography itself was overstated – that account is true. But the notion that democratization will
necessarily lead to power, or at least power in the direct-conflict, pitchfork-wielding
revolutionary sense, has proven to be utopian in ways that no amount of cameras and
user-generated content will ever make manifest. As much should have been clear with the
Rodney King case in the 1990s.

[219] Mary Grace Antony and Ryan J. Thomas, “‘This is Citizen Journalism at Its Finest:’ YouTube and the Public Sphere in the Oscar Grant Shooting Incident,” New Media and Society 12, no. 8 (2010): 1291.
[220] Chris Robé, “Criminalizing Dissent: Western State Repression, Video Activism, and Counter-Summit Protests,” Framework 57, no. 2 (Fall 2016): 164.
Unlike that fateful day in Los Angeles, though, video is now on a computerized device,
capable of nearly instantaneous upload and live streaming. Videography is not what it once was.
Here we can situate Lee and Vicens’s optimism about filming police and the “tectonic shift in
public awareness.” Despite the fact that battling elites hasn’t led to change in those elites, nor in
the systems, there has been a shift in the way that video cameras are used in the creation of and
engagement with online communities. While video of Philando Castile’s death might not have
resulted in a conviction, the fact that it was shot, distributed, shared, and hashtagged was key in
further galvanizing movements like Black Lives Matter. It brought into the public consciousness
something that was once unseen, in a way that was mostly free of media influence.[221] The use of
the camera produced and distributed a document in a form that generated and allowed
commentary. It spread through that camera’s place in a wider network of cameras and screens
and people. On the evening news, yes, but also person-to-person, to individuals that don’t watch
the evening news, or who actively distrust the evening news, acquiring connotations that speak
against the ones picked up in the major media flows – not working against media flows, but apart
from them. The video went viral, as did video reactions, hot takes, social commentary, pain and
rage, all via the smartphone camera. They all spread across a community, and the spread
solidified community, and extended community, and generated community. Even if these acts of
videographic practice don’t represent challenges to the institutions directly, even if they are
oftentimes in service of the institutions financially (through the monetization of videos and ads),
[221] But, of course, not completely free, as it was dependent on Facebook, YouTube, and Twitter for its hosting and shareability.
the tectonic shift in awareness is tied directly to the tectonic shift in videographic literacy, in the
expanded use of the video camera as a tool of communication.
I am personally more comfortable attaching the word “revolution” to the proliferation of
the kino-computer than with any other scenario in which that word is applied by the trade
press, practitioners, or scholars, or as it has been analyzed in this dissertation. I say this if only
because it’s the one that stands up the most in this turn to practice. The utilization of the kino-
computer to expand traditional videographic practices and the demonstration of that practice
from a wider sector of the population has been the point of most significant cultural impact
across the video camera’s development in the digital era. Expanded videographic literacy,
evidenced in the increased frequency of direct and indirect interpersonal communication via the
video camera, and the community-building that has accompanied it, all hinge on the exploitation
of video’s digitality. It has been manifested specifically through the coupling of the camera with
a data network – a shortening of the chain between production and distribution. With only the
camera, our conveniently captured videos would live on our phones and within our restrictive
circle; they might never leave arm’s length and, if they did, only at the mercy of cultural
gatekeepers. With only the internet, communication and virality are possible, but without on-
screen representation. The kino-computer allows the combination of capture and transmission.
The result is a qualitative quality of a different sort. The “video look” of the previous era existed
through comparison to the “cinematic look” of the cultural elites. The “video look” was the look
of amateur video. Videography now is no longer an amateur practice – it is a popular one. It is
not done for love, but as a part of daily existence. Our communities exist through video, whether
they be familial groups or church groups or Dungeons & Dragons groups. That is what we do
with our data. The qualitative quality of video now is no longer drawn from its texture, but from
its ubiquity. Forget the cinematic. The “video look” is the look of the people. And it’s
everywhere.
Post-Roll: Concluding Thoughts
In order to “roll out the tape,” I want to briefly return to the discussion of practice-based
theory from two different perspectives – one concerning the present state of videography, and
one concerning the future.
Optimism and Revolution
After a recent presentation on the contents of this dissertation, a colleague asked me a
question about my characterization of the digital video revolution(s). Because both of my
arguments regarding image quality and utility suggested that the revolutionary potential in the
digital video camera was largely illusory, mythological, and ideological rather than one that
might effect shifts in power and cultural influence, she noted that my philosophy was rather
pessimistic. She asked: how did I reconcile that pessimism with my teaching? How did such a
lack of enthusiasm affect my production classes? How did I frame the act of making media given
my thoughts about its apparent lack of revolutionary potential?
After some thought, I gave a response that I hoped offered a bit more optimism,
though in a somewhat different area. In teaching my production courses, it is not my intention to
tell my students that their work is largely ineffectual, their cultural representations will mostly
remain unseen, and that videography is all pointless because they’ll never overpower Disney.
That would be an overly binary and reductive discussion of cultural media influence, and would
likewise be somewhat pedagogically suspect (though surely not any more suspect or reductive
than telling them that cinema was dead). To that point, one of my self-stated goals in this
dissertation’s introduction was to find a middle ground between the dystopian rhetoric of
theorists and the overly utopian rhetoric of the trade press and the practitioners. There’s a reason
I’m not “Doing What I Can’t” besides the fact that I don’t own a Samsung phone. Gaining a
better understanding of the obstacles that stand in the way of media practitioners at all levels,
beyond the tools themselves and beyond what the tools can seemingly “allow” their users, is not
to discourage media-makers but, at the risk of sounding utopian myself, to empower them. I want
my production students to better understand their practice as it is situated amongst broader flows
of media such that they might be able to better exploit their station and their technology. I want
them to understand the technological differences between their gear and that of the other spheres
of production, how images are read differently and coded specifically across the hierarchy of
image-making, and how they might challenge those conventions with a realistic expectation of
results and success that they can further leverage for the advantage of themselves or those whom
they choose to represent. I have heard my fair share of production teachers tell their students
(myself included) that their work is going to change the world, and I think we need more critical
analysis within the production classroom of why changing the world with media is
so difficult to do.
Further, practice-based theory is not just an isolated act of research; like everything else,
research findings grow in depth and richness as they are incorporated into the classroom.
Additionally, it is not merely my findings that I wish to bring into the classroom, but my
methodologies. As much as I think it is necessary in this particular moment to turn scholarship
toward videographic practice, I want to bridge that gap in the classroom as well. Thus, when I
look at the revolutionary potential of the video camera, and I see the mythological and
ideological status of the revolutions in quality and utility, I want my students to better understand
and question these dominant ideologies of practice – to consider how media-making is embedded
with a theory of its own, and how the video camera might have revolutionary potential beyond
the one touted by film festivals that thrive off their submissions, or tech companies that push
their “game-changing” products, or film schools that survive and thrive off their tuition dollars,
or even other practitioners and trade press authors, who themselves may not have the opportunity
to step back and critique their own position, as is our privilege in academia. The act of media-
making needs to be one of study and experimentation, a probing of its embedded theories and a
development of new ones, discovering the revolutionary potential yet to be exploited. I want
production students to apply a critical, analytical lens to their own production work, and likewise
encourage media scholars to understand and embrace the practical realities of their academic
object of study. And, ideally, those two groups of students are one and the same.
Again, without wanting to slip into an overly utopian mindset myself, I see turning a
critical eye to practice within the classroom as potentially impactful in this particular moment,
given the state of what I likewise view as the most potentially revolutionary aspect of the digital
video camera – its ubiquity. We exist in a strange media landscape in which more and more
people each year are able to write, but without ever having learned to read; videographic practice
occurs amongst wider groups and more diverse individuals, the overwhelming majority of whom
have never applied a critical lens to media. I’m not holding my breath for media study and
production to be standard curricula anytime soon (English is itself barely hanging on), but as a
crucial first step, we need to make an effort at the university level to address practice more than
we have, as we have been content to let it live in the other building without much intervention.
Production students have been making media without critically analyzing their practice and
unpacking its embedded theories (at least not formally) for long enough, and I don’t think that
separation is doing anyone any good. Now that we’re all production students, the stakes have
been raised.
“Media literacy” is a loaded term and a can of worms that I would rather not open (and
certainly not in my conclusion), though its dystopian manifestation may be what best
characterizes the scenario we are currently careening toward – an unbridled media literacy, a
Wild West of production. danah boyd’s recent piece on media literacy speaks to the dangers
inherent in teaching a hypercritical approach to media (one I would agree with) and also to the
limitations of teaching media production: “Developing media making skills doesn’t guarantee
that someone will use them for good,” a point that I would agree with as a matter of fact, but not
as a discouragement of such teaching altogether.[222] boyd’s characterization of the historical
endgame of media literacy is mostly one of students being critical in reading media;[223] it is that
which we do in our departments and classrooms every day. I would suggest
that we additionally turn our students’ attention to being critical of making media – the same
proposition I am making for us as scholars. Teaching media production is fine, though students
are already shooting video as a daily act of practice. We need to teach them to be critical of the
act that they are already doing. If there is potential in the video revolution at present, it can be
found in the classroom.
A Future of Practice
In May of 2018, I attended the Virtual Reality Los Angeles Expo (VRLA), one of the
largest of its kind. Situated among the many immersive software, hardware, and content
[222] danah boyd, “You Think You Want Media Literacy… Do You?” Medium, https://points.datasociety.net/you-think-you-want-media-literacy-do-you-7cad6af18ec2
[223] Ibid.
demonstrations was a booth run by Radiant Images, a self-dubbed “digital cinema innovator”
operating in Los Angeles. A combined research and development firm and equipment rental
house, Radiant had on display a wide collection of digital video cameras and camera rigs, all for
the purpose of shooting immersive video: Red digital cameras with giant fisheye lenses;
stereoscopic 3-D cameras, with side-by-side optics; a five-foot arc of action cameras attached to
a Steadicam harness for shooting in 360 “bullet time.” By far the most theoretically interesting
was a camera array designed by Radiant in which an operator wore a vest supporting a metal
collar that held four separate DSLR cameras, each shooting in a different cardinal direction. Set
atop a mannequin, it looked like a man whose head had been replaced with a camera.
Then there were the really weird ones – cameras of all shapes and sizes, meant to shoot
spherical video (many spherical themselves). They all resembled droids or aliens more than
traditional cameras. The Nokia Ozo looked like Q*bert. The Insta360 was a dead ringer for
BB-8. The Kandao Obsidian did a solid HAL 9000. The Samsung Gear 360 could have played
Ghost in a live-action Destiny reboot. Some of these sentient-looking imaging machines sat atop
robotic “rovers,” which themselves might as well have been styled after Johnny 5 (“Kino-
computer is ALIVE!”). Forget strapping them to an operator’s head – they’ve got their own
bodies now. Walking through that collection of cameras, with their eyes – so many eyes – staring
me down, I had two thoughts. The first was the obligatory dystopian vision of a spreading self-
awareness and a “rise of the machines” scenario, for what I assume are obvious reasons. The
second, more relevant, and perhaps even more terrifying thought, was: “Is my dissertation dated
already?”
Like most exhibits at VRLA, the Radiant Images booth was designed to evoke futuristic
sensations, replete with laser lights, LEDs illuminating the tech, digital projections adorning
every surface, geodesic domes, atmospheric soundscapes, and immersive imagery. It all stated
the message, loud and clear, “The future of imaging… today!” If this was the future of imaging,
it represented a combination and explosion of both of the concepts on which I based my
argument: the cinematic and the kino-computer. The quantitative/qualitative cinematic was an
industry standard of image quality for decades. In varying forms, for almost a century. It was that
which low-budget videographers desired and what big-budget cinematographers longed to
preserve. It was what helped make our everyday images so aesthetically pleasing as our everyday
selves became digitally presentational. It represented the hierarchical separation when it was
elusive and expensive, it was usurped and rebelliously used to elevate low-budget practitioners’
work, and it ultimately represented a homogeneity in image quality when it was available to all.
It was the texture of a partially failed revolution. And here, in this futurescape, it was nowhere to
be found. Aspect ratio? There is none. It’s a full sphere. There is no frame. The fundamental
building block of cinema doesn’t exist. Frame rate? It needs to be very high. 24 fps doesn’t work.
The juddery motion makes people sick. It needs to be higher: 60, 120, 240, and beyond.
Resolution? It’s 8k. Per eye.
Likewise with utility. I had gotten used to the strangeness and novelty of a camera in a
cube, or in a smartphone body, but these cameras were otherworldly. If one of the markers of the
video camera was its ability, thanks to Moore’s Law, to shoot imagery that a film camera could
not, these cameras represented that exponential growth to a visual extreme. While some
noteworthy spherical film projections have been developed in the past, as with anti-aircraft
gunnery trainers and planetarium domes, the full-spherical capabilities in a handheld camera no
larger than a softball represented a physical impossibility for filmic media (and for anything
resembling a traditional camera, for that matter). Not only can the camera’s all-seeing eye
capture in all directions, its built-in software can stitch the imagery of its various lenses together
with zero overlap, creating a seamless presentation of an immersive environment. Many of the
cameras are linked with other technology like motion and depth trackers, such that the cameras
can be used for data gathering in applications like photogrammetry, in which the real-world
environment is mapped onto a virtual computer space. Not only does the camera make images, it
makes them into computerized environments, for applications like traditional video gaming or
VR. Such applications make the imaging capability of these devices secondary to pure data
gathering. If anything, the image gathering itself is only for the purpose of its use as data; the
image is meant to be utilized by a computer program. In instances like these, the camera serves
less as the thing to be supported, and is increasingly reduced to “input” status. The meaning of
“camera utility” is quite different when the camera itself is not the principal element, not the be-
all-end-all, but one input of many in a larger computational process.
The incorporation of the camera as a data-gathering device into larger and more complex
systems serves as yet another point of division between what technically savvy, big-budget producers can do with their data and what the home video user can. Further, spherical
expansion of the frame similarly threatens to exacerbate the divisions in the hierarchy of image-
making, as the “cinematic” standard of quality becomes further normalized by the existence of
newer high-quality and high-stakes standards. For low-budget and amateur producers, the
practices that accompany immersive video visualization are so strange and different, so
computer-based and hardware-dependent, it again appears that a one-way flow is opening up.
It would be wrong to think that immersive video or immersive video cameras are in any
way a replacement of 2D media, whether cinema, television, or any of the many incarnations of
videography. The fallacy there is evident in the sheer number of more traditional cameras at
VRLA, documenting the event, providing news coverage, and live streaming content, which is to
say nothing of the hundreds or thousands of phones shooting video in the hands of attendees. The
question is not one of obsolescence, just as it was not (or, at least, should not have been) in the
face of the digitization of video. Following my approach to that development, though, I suggest
we apply to these new immersive technologies the same questions that were critical to
digitization (but often left unasked). As much as the development and proliferation of new
moving image technologies force a rethink of what cinema is and can be, they similarly call for a
consideration of who is able to wield them and to what ends. Not just, “What is cinema now?”
but also, “Whose cinema is it?”
I think the existence of this new sort of video camera and the nascent development of the
practices that utilize it mark a significant moment in technological history and, crucially, for the
turn to practice. These immersive technologies were barely on the horizon when I started my
research of video cameras, but their rapid adoption over the course of my studies has situated my
work at a moment of videographic emergence, whereas I initially took it as a moment of stability
(if such a thing is even possible). The late 2010s seemed like a fitting time for reflection, as the
waters of technological change had apparently settled, and were due for the sifting that they had largely not received in academia, perhaps because of their turbulence, but more likely through simple oversight. Instead, I am writing on the cusp of an exciting moment of expansion in visual culture,
and one that, at the risk of overstatement, feels as significant as the one in which our central
medium of study first proliferated and matured over a century ago. And it is one in which the
very concepts that drove the technological development of the video camera, and my study, seem
to be almost obsolete in comparison.
Or… at least that’s what I thought. But then I heard a curious exchange behind me, while
I was marveling at the wonders of a glorious optical future. A representative from Radiant
Images was demonstrating a spherical camera attached to a hand-held mount for some “content
creators":
So as you can see here, you just attach the spherical lens to the
mount and, just like that, you can create a really nice cinematic
camera movement for your spherical video.
The emphasis is obviously mine, and for obvious reasons. “Cinematic movement.” I had to
laugh. Among the term’s many uses, “cinematic movement” was not one I had heard often. It
seemed especially egregious and ambiguous. My anti-“cinematic” colleagues would have surely
bristled. What was especially funny was the fact that this term was being used in relation to a
device that, to me, seemed so un-cinematic, so post-cinematic, so far-removed from the devices
that I studied, despite sharing the same imaging mechanics. What exactly was happening here? A
techno-verbal slippage? A poor application of an already poorly-applied term? The ambiguity
needed further analysis.
Like other uses of the term, it was not without semantic meaning. In this particular
instance, it again seemed coded as a marker of higher production value and professionalism,
aesthetic achievement, and a visual separation of one's work from the rest. Compared with prior uses of the term, as in the trade press, it didn't have as clear a quantitative basis. All of those "cinematic" standards were largely inapplicable. Yet this representative wanted to tie practice to a qualitative notion of quality, and again lacked the terminology. One could easily view this as an example of a
practitioner in a new medium, lacking a new set of terms, resorting to those from an earlier
medium. Instead, or at least in addition, I would suggest that this use of the term “cinematic,” as
has been the case in the analog and earlier digital era, stems from the continual separation
between theory and practice as approaches to the study of moving images, and marks a critical
point for the intervention of practice-based theory.
The use of the term “cinema” for this practitioner seemed to reflect his approach to a
medium that largely lacks standards, a sentiment that is echoed in many discussions of VR by
practitioners themselves. The phrase, "No one knows…" comes up a lot. "Virtual reality takes after the wild west – no one knows what is right and what is wrong."224 Or, "No one knows what type of content will work best."225 Or, more to the point, "There are no rules anymore."226 It is a
moment in time in which practitioners are admittedly struggling to come to grips with a
videographic practice that is not new altogether, but likely new for them. That is, it is a moment
in which video practitioners are themselves studying practice. I suggest that we, as scholars,
follow their lead. The video camera as a tool of digital immersion is obviously fertile theoretical ground, and I can easily see the conversation being
pulled into abstraction once again, just as it was with the proliferation of digitality, and in the
exact same directions. What is the nature of simulation? What is the nature of reality? How do
we come to terms with these new digital objects? The specter of imbalance in discourse rises
anew.
Over the course of this dissertation, I hoped to call attention to crucial elements of the
digital cinema debate that were largely missing from academic discourse. By turning to the act of
operating a camera and the theory embedded both within that practice and also in the discourse
224 "Virtual reality + video360 + empahty [sic]," 4Experience, http://4experience.co/virtual-reality-video-360-empahty/
225 Signe Brewster, "In a VR World," TechCrunch, https://techcrunch.com/2016/06/03/in-a-vr-world/
226 Bryn Mooser, founder of RYOT Films, quoted in Robert Hernandez, "Virtual reality: The shift from storytelling to 'storyliving' is real," Medium, https://medium.com/journalism360/virtual-reality-the-shift-from-storytelling-to-storyliving-is-real-ff465c220cc3
of practitioners themselves, I attempted to reframe the academic discussion of digital cinema toward the practical implications of the digitization and development of the medium and the cultural causes and effects it has had in the various communities of practice. Discussions of image quality and the "video look"; the cultural significance of the "cinematic"; the technological and practical
potential of the digital video image for all spheres of production; the nature of the revolutions in
quantity and in quality; the theoretical ramifications of video as a time travel device; the cultural
value of the exploitation of data; the nature of the kino-computer as a tool of videography – these
were all born from the meeting point of academic scholarship and practice. All were meant to not
simply enhance the digital cinema debate, but to demonstrate the fruits of such a methodology.
In looking at the current era in which digital videography has become one injected with a logic
of presence and immersion, in which practice itself is a mix of technical and creative struggle,
play and commerce, fumbling and success, I am reminded of the fumblings of those early
filmmakers, performing their “film experiments,” learning how the medium worked
technologically, aesthetically, financially, and culturally. What scholars wouldn’t give to be there
for the productions and the screenings of Lumière or Méliès. How precious are the writings of
those who were, and especially of those who bridged that gap between the practical and the
academic. Let this serve as my final, romantic push – it is critical to turn to the practice of
videography now, lest this moment of remarkable change in imagery, image-making, and image-
making tools be confined only to history.
Bibliography
Almukhtar, Sarah, et al. "Black Lives Upended by Policing." New York Times. April 19, 2018.
Anthony, Mary Grace, and Thomas, Ryan J. “’This is Citizen Journalism at Its Finest:’ YouTube
and the Public Sphere in the Oscar Grant Shooting Incident.” New Media and Society 12,
no. 8 (2010): 1280-1296.
Armes, Roy. On Video. London: Routledge, 1988.
Astruc, Alexandre. "The Birth of a New Avant-Garde: La Camera-Stylo." L'Écran français. March 30, 1948.
Baltruschat, Doris, and Erickson, Mary, eds. Independent Filmmaking Around the Globe.
Toronto: University of Toronto Press, 2015.
Bankston, Douglas. “All the Rage: Anthony Dod Mantle, DFF Injects the Apocalyptic ‘28 Days
Later’ with a Strain of Digital Video,” American Cinematographer 84, no. 7 (July 2003):
83-84.
Barthes, Roland. Image, Music, Text. New York: Hill and Wang, 1977.
Bazin, Andre. What is Cinema? Berkeley: University of California Press, 2005.
Belton, John. “Digital 3D Cinema: Digital Cinema’s Missing Novelty Phase.” Film History 24,
no. 2 (2012): 187-195.
Belton, John. “Digital Cinema: A False Revolution.” October 100 (Spring 2002): 98-114.
Berkshire, Geoff. “Sundance Film Review…” Variety. January 22, 2014.
Binkley, Timothy. “Camera Fantasia: Computed Visions of Virtual Realities.” Millennium Film
Journal 20/21 (Fall 1988/Winter 1989): 6-43.
Bock, Mary Angela. “Film the Police! Cop-Watching and Its Embedded Narratives.” Journal of
Communication 66 (2016): 13-34.
Bordwell, David. Pandora’s Digital Box: Films, Files, and the Future of Movies. Madison: The
Irvington Way Institute Press, 2012.
boyd, danah. “You Think You Want Media Literacy… Do You?” Medium.
https://points.datasociety.net/you-think-you-want-media-literacy-do-you-7cad6af18ec2
Brakhage, Stan. “In Defense of Amateur” in Essential Brakhage: Selected Writings by Stan
Brakhage. New York: McPherson and Company, 2001.
Brewster, Signe. "In a VR World." TechCrunch. https://techcrunch.com/2016/06/03/in-a-vr-
world/.
Buckingham, David, et al, eds. Home Truths? Video Production and Domestic Life. Ann Arbor:
University of Michigan Press, 2011.
Buonanno, Milly. The Age of Television: Experiences and Theories. Chicago: The University of Chicago Press, 2007.
Burch, Elizabeth. "Getting Close to Folk TV Production: Nontraditional Uses of Video in the U.S. and Other Cultures." Journal of Film and Video 49, no. 4 (Winter 1997): 18-29.
Burke, Andrew. "Put it on my card." Videomaker 21, no. 3 (September 2006): 21-22.
Caldwell, John. Production Culture: Industrial Reflexivity and Critical Practice in Film and
Television. Durham: Duke University Press, 2008.
Caldwell, John. Televisuality: Style, Crisis, and Authority in American Television. New
Brunswick: Rutgers University Press, 1995.
Caranicas, Peter. “’Bridesmaids’ Caught Improv on Film.” Variety. May 17, 2011.
Carroll, Noel. Theorizing the Moving Image. Cambridge: Cambridge University Press, 1996.
Cavell, Stanley. The World Viewed: Reflections on the Ontology of Film. Cambridge: Harvard
University Press, 1979.
Chamley, Santorri. “New Nollywood Cinema: From Home-Video Productions Back to the Big
Screen.” Cineaste 37, no. 3: 21-23.
Chesher, Chris. “Between Image and Information: The iPhone Camera in the History of
Photography.” In Studying Mobile Media: Cultural Technologies, Mobile Media, and the
Phone, edited by Hjorth et al., 98-117. New York: Routledge, 2012.
Chin, Daryl. “Transmissable Evidence: Is This the End of Film?” PAJ: A Journal of
Performance Art 24, no. 1 (January 2002): 44-51.
Cooley, Heidi Rae. “It’s All About the Fit: The Hand, The Mobile Screenic Device, and Tactile
Vision.” Journal of Visual Culture 3, no. 2 (2004): 133-155.
Coykendall, Bruce. “Tech Support.” Videomaker (August 2001): 8.
Craven, Ian. Movies on Home Ground: Explorations in Amateur Cinema. Newcastle upon Tyne:
Cambridge Scholars, 2009.
Cubitt, Sean. Videography: Video Media as Art and Culture. New York: St. Martins Press, 1993.
Del Signore, John. “Police Seek Three Men…” Gothamist. November 16, 2011.
Deren, Maya, “Amateur vs. Professional” in Film Culture 39 (Winter, 1965): 45-46.
Doane, Mary Ann. "The Indexical and the Concept of Medium Specificity." differences 18, no.
1 (2007): 128-152.
Eco, Umberto. The Role of the Reader. Bloomington: Indiana University Press, 1979.
Edgerton, David. The Shock of the Old: Technology and Global History Since 1900. London:
Profile Books, 2006.
Elsaesser, Thomas, and Hoffmann, Kay, eds. Cinema Futures: Cain, Abel, or Cable? The Screen
Arts in the Digital Age. Amsterdam: Amsterdam University Press, 1998.
Elsaesser, Thomas. “Digital Cinema and the Apparatus: Archaeologies, Epistemologies,
Ontologies.” In Cinema and Technology: Cultures, Theories, Practices, edited by
Bennett et al, 226-240. Basingstoke: Palgrave Macmillan, 2008.
Ellis, John. Visible Fictions. London: Routledge, 1982.
Elwes, Catherine. “Visible Scan Lines: On the Transition from Analog Film and Video to Digital
Moving Image." Millennium Film Journal 58 (Fall 2013): 58-63.
Emery, Chris. “Anchors Can’t Hide Anything in HD.” The Los Angeles Times. December 25,
2007.
Feuer, Jane. “The Concept of Live Television: Ontology as Ideology.” Regarding Television:
Critical Approaches – An Anthology, edited by Patricia Zimmermann, 12-21. New
Brunswick: Rutgers University Press, 1983.
Fisher, Bob, and Rhea, Marji. “Interview: Doug Trumbull and Richard Yuricich.” American
Cinematographer 75, no. 8 (August, 1994): 55-66.
Flaxton, Terry. “The Technologies, Aesthetics, Philosophy, and Politics of High Definition
Video." Millennium Film Journal 52 (Winter 2009/10): 44-55.
Flood, Kathleen. “The Premiere of Alejandro G. Iñárritu’s Short Film.” Vice (October 26, 2012).
https://creators.vice.com/en_us/article/3dpwxw/exclusive-the-premiere-of-alejandro-g-
i%C3%B1%C3%A1rritus-short-film-inaran-jai.
Franks, Eric D. “Is 24p For Me?” Videomaker (August 2003): 44-47.
Fritts, Erik. “Drone Buyers’ Guide.” Videomaker 30, no. 6 (December 2015): 22-27.
Fox, Jesse David. "What the Critics are Saying about The Hobbit's High Frame Rate." vulture.com. December 12, 2014.
Friedberg, Anne. “The End of Cinema: Multimedia and Technological Change.” In Reinventing
Film Studies, edited by Christine Gledhill and Linda Williams, 438-452. London: Oxford
University Press, 2000.
Ganz, Adam, and Khatib, Lina. “Digital Cinema: The Transformation of Film Practice and
Aesthetics." New Cinemas: Journal of Contemporary Film 4, no. 1 (May 2006): 21-36.
Gaudreault, Andre, and Marion, Philippe. The End of Cinema? A Medium in Crisis in the
Digital Age. New York: Columbia University Press, 2015.
Geuens, Jean-Pierre. digital filemaking. https://rethinkcinema.com/digitalfilemaking/
Geuens, Jean-Pierre. “The Digital World Picture.” Film Quarterly 55, no. 4 (Summer 2002): 16-
27.
Geuens, Jean-Pierre. “Through the Looking Glass: From the Camera Obscura to Video Assist.”
Film Quarterly 49, no. 3 (Spring 1996): 16-26.
Gladstone, Steven. “7 Expert Digital Imaging Technicians…” B and H (2015).
https://www.bhphotovideo.com/explora/video/features/7-expert-digital-imaging-
technicians-dits-discuss-their-role-film-set.
Gray, Simon. “An Unlikely Hero,” American Cinematographer 94, no.1 (January 2013): 50-65.
Guerrero, Ed. “Be Black and Buy.” In American Independent Cinema: A Sight and Sound
Reader, edited by Jim Hillier. London: BFI Publishing, 2001.
Gunning, Tom. “What’s the point of an index? Or, Faking Photographs.” In Still/Moving:
Between Cinema and Photography, edited by Karen Beckman and Jean Ma, 23-40.
Durham: Duke University Press, 2008.
Gunning, Tom. “Moving Away From the Index: Cinema and the Impression of Reality.”
differences 18, no. 1 (Spring 2007): 29-52.
Hanhardt, John, ed. Video Culture: A Critical Investigation. Layton: Peregrine Smith Books,
1986.
Hanson, Jarice. Understanding Video: Applications, Impact, and Theory. Newbury Park: SAGE
Publication, Inc., 1987.
Harrison, Mark. “Has Improvisation Harmed Movie Comedy?” Den of Geek (February 15, 2013)
http://www.denofgeek.com/movies/24491/has-improvisation-harm.
Haynes, Jonathan, and Okome, Onookome. “Evolving Popular Media: Nigerian Video Films.”
Research in African Literatures 29, no. 3 (Autumn 1998): 106-128.
Hernandez, Robert. “Virtual reality: The shift from storytelling to ‘storyliving’ is real.” Medium.
https://medium.com/journalism360/virtual-reality-the-shift-from-storytelling-to-
storyliving-is-real-ff465c220cc3
Hetrick, Judi. “Amateur Video Must Not Be Overlooked,” The Moving Image 6, no. 1 (Spring
2006): 66-81.
Hillier, Jim. American Independent Cinema: A Sight and Sound Reader. London: BFI
Publishing, 2001.
Holla, Aravinda. “How Multi Channel Networks…” Vidooly. May 30, 2016.
http://vidooly.com/blog/multi-channel-networks-youtube-superstars.
Holm, D.K. Independent Cinema. Harpenden: Kamera Books, 2007.
Horak, Jan-Christopher. "Editor's Foreword," The Moving Image 2, no. 1 (Spring): vi.
Howson, Simon. “Douglas Trumbull: The Quest for Immersion.” Metro Media and Education
Magazine 169 (2011): 112-114.
Jenkins, Henry. Convergence Culture: Where Old and New Media Collide. New York: New
York University Press, 2006.
Khatchatourian, Maane. “’Drinking Buddies’…” Variety. August 16, 2013.
Kim, Ji Hoon. “The Post-Medium Condition and the Explosion of Cinema.” Screen 50, no. 1
(Spring 2009): 114-123.
King, Geoff. American Independent Cinema. Bloomington: Indiana University Press, 2005.
Kiwitt, Peter. “What is Cinema in the Digital Age? Divergent Definitions from a Production
Perspective.” Journal of Film and Video 64, no. 4 (Winter 2012): 3-22.
Kleinhans, Chuck. “The Change From Film to Video Pornography: Implications for Analysis.”
In Pornography: Film and Culture, edited by Peter Lehman, 154-167. New Brunswick:
Rutgers University Press, 2006.
Krauss, Rosalind. “Video: The Aesthetics of Narcissism.” October 1 (Spring 1976): 50-64.
Lancaster, Kurt. DSLR Cinema: Crafting the Film Look with Video. Oxford: Focal Press, 2011.
Lang, Patrick. “24p For You and Me.” Videomaker (April 2003): 14-15.
Lee, Jaeah, and Vicens, AJ. “Here are 13 Killings…” Mother Jones. May 20, 2015.
Lunden, Ingrid. “6.1B Smartphone Users Globally.” TechCrunch. June 2, 2015.
Lynch, John. “Meet the YouTube Millionaires.” Business Insider. December 8, 2017.
Mangolte, Babette. “Afterward: A Matter of Time.” In Camera Obscura, Camera Lucida, edited
by Richard Allen and Malcolm Turvey, 261-274. Amsterdam: Amsterdam University
Press, 2003.
Manovich, Lev. The Language of New Media. Cambridge: MIT Press, 2001.
Marks, Laura U. “How Electrons Remember.” Millennium Film Journal 34 (Fall, 1999): 66-80.
Marx, Leo. “Technology: The Emergence of a Hazardous Concept.” Technology and Culture 51,
no. 3 (July 2010): 561-577.
Mateer, John. “Digital Cinematography: Evolution of Craft or Revolution in Production?”
Journal of Film and Video 66, no. 2 (Summer 2014): 3-14.
Mayer, Vicki. Below the Line: Producers and Production Studies in the New Television
Economy. Durham: Duke University Press, 2011.
McCarthy, Todd. “Sundance spectrum broad, often digital.” Variety. December 12, 2000.
McLuhan, Marshall. Understanding Media: The Extensions of Man. New York: McGraw-Hill,
1964.
McKernan, Brian. Digital Cinema: The Revolution in Cinematography, Post-Production, and
Distribution. New York: McGraw-Hill, 2005.
McPherson, Tara. “Reload: Liveness, Mobility and the Web” in The Visual Culture Reader, 2nd
Edition, edited by Nicholas Mirzoeff, 458-472. New York: Routledge, 2002.
Moran, James. There’s No Place Like Home Video. Minneapolis: University of Minnesota Press,
2002.
Morgan, Daniel. “Rethinking Bazin: Ontology and Realist Aesthetics.” Critical Inquiry 32
(Spring 2006): 443-481.
Mulvey, Laura. Death 24x a Second: Stillness and the Moving Image. London: Reaktion Books,
2006.
Munsterberg, Hugo. Hugo Munsterberg on Film: The Photoplay: A Psychological Study and
Other Writings. New York: Routledge, 2002.
Myer, Clive, ed. Critical Cinema: Beyond the Theory of Practice. New York: Columbia
University Press, 2012.
Newman, Michael. Video Revolutions: On the History of a Medium. New York: Columbia
University Press, 2014.
Newton, Dale, and Gaspard, John. Digital Filmmaking 101. Studio City: Michael Wiese
Productions, 2001.
Nichols, Bill. Representing Reality: Issues and Concepts in Documentary. Bloomington: Indiana
University Press, 1991.
O’Kane, Sean. “Watch a Gorgeous Timelapse…” The Verge. January 15, 2016.
https://www.theverge.com/tldr/2016/1/15/10776386/hateful-eight-70-mm-film-time-
lapse-video
Pearson, Rebecca. “The Ugly Truth...” The Telegraph. November 9, 2015.
Perez, Gilberto. The Material Ghost: Films and Their Medium. Baltimore: Johns Hopkins
University Press, 1998.
Peterson, Brian. “Getting that Film Look: Shooting Video to Look Like Film.” Videomaker (July
2008): 37-40.
Petit, Chris. “Pictures by Numbers.” Film Comment 37, no. 2 (March/April 2001): 38-43.
Petty, Sheila. "Digital Video Films as 'Independent' African Cinema." In Independent Filmmaking Around the Globe, edited by Doris Baltruschat and Mary Erickson, 255-269. Toronto: University of Toronto Press, 2015.
Prince, Stephen. “True Lies: Perceptual Realism, Digital Images, and Film Theory.” Film
Quarterly 49:3 (Spring 1996): 27-37.
Quezada-Dardon, Roberto. “State of the Art: HDSLRs in 2010.” Filmmaker 18, 4 (Summer
2010): 66-79.
Rabinovitz, Lauren, and Geil, Abraham, eds., Memory Bytes: History, Technology, and Digital
Culture. Durham: Duke University Press, 2004.
Rascaroli, Laura, et al, eds., Amateur Filmmaking: The Home Movie, the Archive, the Web. New
York: Bloomsbury Academic, 2014.
Reff, Michael. “Duck, Duck? Goose?? Decoying Your Video to Look Like Film.” Videomaker
(July 2006): 49-52.
Renov, Michael, and Suderburg, Erika, eds. Resolutions: Contemporary Video Practices. Minneapolis: University of Minnesota Press, 1996.
Robé, Chris. "Criminalizing Dissent: Western State Repression, Video Activism, and Counter-Summit Protests." Framework 57, no. 2 (Fall 2016): 161-188.
Robertson, Barbara. “Tricks: Next-Gen High-Def.” Film and Video, May 1, 2005.
Robinson, Joanna. “Game of Thrones,” Vanity Fair, September 26, 2017.
Rodowick, David Norman. The Virtual Life of Film. Cambridge: Harvard University Press, 2007.
Rogers, Ariel. Cinematic Appeals: The Experience of New Movie Technologies. New York:
Columbia University Press, 2013.
Rombes, Nicholas. Cinema in the Digital Age. London: Wallflower, 2009.
Saunders, Scott. “Wares.” Filmmaker (Summer 2002).
Sepinwall, Alan. “How It’s Always Sunny…” Uproxx. February 4, 2015.
Shamberg, Michael, and the Raindance Corporation. Guerrilla Television. New York: Holt,
Rinehart and Winston, 1971.
Shaviro, Steven. "Emotion Capture: Affect in Digital Film." Projections: The Journal for Movies and Mind 1, no. 2 (Winter 2007): 37-55.
Sheckter, Alan. “DV Music Video Duo.” Videomaker 16, no. 6 (December 2001): 15.
Sim, Gerald. “When and Where is the Revolution in Digital Cinematography?” Projections 6,
no. 1 (Summer 2012): 79-100.
Smith, Gavin. “Straight to Video.” Film Comment 33, no. 4 (July/August 1997): 54-55.
Smith, Merritt Roe, and Marx, Leo, eds., Does Technology Drive History? The Dilemma of
Technological Determinism. Cambridge: MIT Press, 1994.
Smith, Murray. “My Dinner with Noël; or, Can We Forget the Medium?” Film Studies 8
(Summer 2006): 140-148.
Snickars, Pelle, and Vonderau, Patrick, eds. The YouTube Reader. London: Wallflower, 2010.
Spector, Mike et al. “Can Bankruptcy Filing Save Kodak?” The Wall Street Journal. January 20,
2012.
Spielmann, Yvonne. Video: The Reflexive Medium. Cambridge: MIT Press, 2008.
Sterne, Jonathan. MP3: The Meaning of a Format. Durham: Duke University Press, 2012.
Streible, Dan. “Moving Image History and the F-Word; or, Digital Film is an Oxymoron.” Film
History 25, no. 1-2 (2013): 227-235.
Strickland, J.R. "Make Scenes Cinematic: What Does it Take to Make Your Work Cinematic
and Feel Like a Movie?” Videomaker 29, no. 9 (March 2015): 40-44.
Studlar, Gaylin. “Trumbull on Technology.” Spectator 3, no. 1 (Fall, 1983): 7.
Südderman, Martin. "Wares." Filmmaker 11, no. 2 (2003).
Susik, Abigail. “Sky Projectors, PortaPaks, and Projection Bombing: The Rise of a Portable
Projection Medium.” Journal of Film and Video 64, no. 1-2 (Spring/Summer 2012): 79-
92.
Thompson, Anne. “Fest Circuit: Sundance Film Festival.” Filmmaker 11, no. 3 (Spring 2003):
28-31.
Turnock, Julie. “Removing the Pane of Glass: The Hobbit, 3D High Frame Rate Filmmaking,
and the Rhetoric of Digital Convergence.” Film Criticism 32, no. 2 (March 2013): 30-59.
Vachon, Christine. Shooting to Kill: How an Independent Producer Blasts Through the Barriers
to Make Movies That Matter. New York: Avon Books, 1998.
Vertov, Dziga. Kino Eye: The Writings of Dziga Vertov. Berkeley: University of California
Press, 1984.
Vonderau, Patrick, ed. Behind the Screen: Inside European Production Cultures. New York:
Palgrave McMillan, 2013.
Walsh, Sinéad. “Digital Filmmaking.” Film Ireland 72 (August/September 1999): 18-23.
Wang, Yiman. “The Amateur’s Lightning Rod: DV Documentaries in Postsocialist China.” Film
Quarterly 58, no. 4: 16–26.
Wayne, Mike. Theorising Video Practice. London: Lawrence and Wishart Limited, 1997.
Wendt, Brooke. The Allure of the Selfie: Instagram and the New Self Portrait. Amsterdam:
Network Notebooks, 2014.
Willis, Holly. New Digital Cinema: Reinventing the Moving Image. London: Wallflower, 2005.
Winner, Langdon. “Do Artifacts Have Politics?” Daedalus 109, no. 1 (Winter 1980): 121-136.
Winston, Brian. Media Technology and Society: A History from the Telegraph to the Internet.
New York: Routledge, 1998.
Winston, Brian. Technologies of Seeing: Photography, Cinema, and Television. London: British
Film Institute, 1996.
Wong, Cindy Hing-Yuk. Film Festivals: Culture, People, and Power on the Global Screen. New Brunswick: Rutgers University Press, 2011.
Zettl, Herbert. Sight, Sound, Motion: Applied Media Aesthetics. Belmont: Wadsworth Publishing, 1999.
Zimmermann, Patricia. Reel Families: A Social History of Amateur Film. Bloomington: Indiana University Press, 1995.
Zinman, Gregory. "Analog Circuit Palettes, Cathode Ray Canvases: Digital's Analog,
Experimental Past.” Film History 24, no. 2 (2012): 135-157.
Zoller Seitz, Matt, and Wade, Chris. "What Does 'Cinematic TV' Really Mean?" Vulture. October 21, 2015. http://www.vulture.com/2015/10/cinematic-tv-what-does-it-really-mean.html.
Abstract
The central aim of my study is to track the evolution of video camera technology from the mid-1990s to the present across diverse communities of use and examine that technology's reciprocal relationship with culture – how the development of video camera technology has affected artistic, industrial, and communicative practice and, in turn, how society and culture have shaped the imaging technology that they have produced.

In the first short section ("Pre-Roll: Digital Cinema and Production-Based Theory") I outline the nature of my intervention in the digital cinema debate and the stakes of that intervention by demonstrating the practical elements that the debate has long been lacking and situating my work to fill that absence, as a middle point between its opposing poles: a fixation on the indexical and, conversely, a medium eliminativism.

In the second section ("Video Camera Image Quality") I argue that the engineering of the video image has been carried out in relation to the industry standards set by the use of photochemical film – a set of standards that I call the "cinematic," following the use of that term within communities of videomaking practice. Across this section, I argue that the initial revolutionary potential of digital image manipulation was tempered through the democratization of the cinematic look, shifting markers of production quality to areas beyond image texture, and to ones more tied directly to capital and the complex manipulation of data.

In the third section ("Video Camera Utility"), I argue that the development of the video camera across the digital age occurred concurrently and in concert with advances in modern computing, such that the camera itself underwent a process of computerization. Across this section, I argue that the apparent democratization of moving image-making exists in practice more as a self-perpetuating myth than a fully realized act of cultural revolution. The ability to manage, secure, and manipulate video's existence as data becomes a dividing line among spheres of production, especially as image quality becomes homogenized.

At the ground level and across the various spheres of production, I suggest that the digital revolution (surely still in progress) has been realized less as a direct uprising to slay the "media giants" and more as a slow-burning growth in videographic literacy. High-quality image-making has been democratized in that it falls under the control of more communities of practice, but the liberating power of this democratization is still held in check by established power structures that continue to separate these same communities. Revolutionary potential lies less in the reversal of the singular flows – challenging Hollywood and the major media companies on their turf – and more in the abundance of alternate networks of videographic communication and community building.
Asset Metadata
Creator: LaRocco, Michael (author)
Core Title: Video camera technology in the digital age: Industry standards and the culture of videography
School: School of Cinematic Arts
Degree: Doctor of Philosophy
Degree Program: Cinematic Arts (Critical Studies)
Publication Date: 07/27/2018
Defense Date: 05/24/2018
Publisher: University of Southern California (original); University of Southern California. Libraries (digital)
Tag: camcorder, camera, cinematic, Cinematography, digital, digital camera, digital cinema, digital video, film, film theory, OAI-PMH Harvest, practice-based theory, production studies, Technology, Video, video camera, videography
Format: application/pdf (imt)
Language: English
Contributor: Electronically uploaded by the author (provenance)
Advisor: McPherson, Tara (committee chair); Govil, Nitin (committee member); Jaikumar, Priya (committee member)
Creator Email: michaelalarocco@gmail.com, mlarocco@usc.edu
Permanent Link (DOI): https://doi.org/10.25549/usctheses-c89-36892
Unique Identifier: UC11671534
Identifier: etd-LaRoccoMic-6544.pdf (filename); usctheses-c89-36892 (legacy record id)
Legacy Identifier: etd-LaRoccoMic-6544.pdf
Dmrecord: 36892
Document Type: Dissertation
Rights: LaRocco, Michael
Type: texts
Source: University of Southern California (contributing entity); University of Southern California Dissertations and Theses (collection)
Access Conditions: The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the a...
Repository Name: University of Southern California Digital Library
Repository Location: USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA