WORLDIZING DATA:
EMBODIMENT, ABSTRACTION, AND DISTORTION
IN ART-SCIENCE PRAXIS
by
Brian Cantrell
A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
CINEMATIC ARTS (MEDIA ARTS AND PRACTICE)
December 2020
Copyright 2020 Brian Cantrell
Acknowledgements

I would like to express my deepest gratitude to my committee chair, Dr. Andreas Kratky, for
pointing me in some very fruitful directions over the past several years and for displaying
great patience and guidance when it was most needed. Professor Kratky's influence and example
have guided me through the complexities of this project and, in many ways, have helped to mitigate
the inevitable uncertainties that arise from exploring interdisciplinary territory. Professor Kratky
gave me wide latitude to explore not only what would become the main theme of this work, but
also some of its more tributary lines of investigation, and for that I am extremely grateful.
I would also like to express great fondness and appreciation for the rest of my committee,
Drs. Holly Willis and Scott Fisher, each of whom personifies a very different but equally inspiring
variation of the theorist/practitioner and whose feedback has, at various points, been both
challenging and thought provoking. Whether or not they are aware, their input and the living
examples they set have given me the encouragement and the provocation needed to do the work I
find most exciting. Special thanks are also due to Drs. Vicki Callahan, Steve Anderson, and Jeff
Watson for their willingness and enthusiasm to step in when asked and for putting me in touch
with some of the deeper philosophical roots of this work long before I had a clear picture of what
it would turn out to be.
My own habits as a theorist/practitioner tend toward the hermetic and the autodidactic, so I
am certain that many of those listed above remain unaware of just how influential they have been
on my progression. Early on in my studies, Dr. Steve Anderson displayed an enthusiasm for my
ideas that helped me embrace the interdisciplinary nature of the work I care about, although at
the time I hadn't the slightest clue how to go about accomplishing it. Even in the most casual
conversations, Dr. Anderson has not only given me his genuine interest and support, but has also
handed me some absolute gems regarding cultures of technological representation that
undoubtedly crop up here and there in the pages to follow.
As department chair during my first few years at USC, Dr. Holly Willis helped facilitate a
deeply experimental, inspiring, and supportive atmosphere within iMAP that made all of us in
the program feel secure in taking both academic and aesthetic risks. As a professor, she has also
helped me to limber up my writing and find inspiration in post-cinematic media trends. She has a
knack for asking questions that cut directly to the core of my relationship to my own work. It’s
no exaggeration to say that it was one of her more astute lines of questioning that helped me
separate the wheat from the chaff with respect to the various ideas I attempt to reconcile in this
work and to finally cut away those topics for which I had more passing curiosity than genuine
passion.
Dr. Scott Fisher is another whose periodic questioning of my work has had a positive
influence on the direction I’ve taken as a maker. As unfortunately limited as our interactions have
been over the years (a situation I was remiss in allowing to continue for so long), Dr. Fisher’s
input always either sets me on a productive line of inquiry or puts me on the trail of some artist
or new technology from which to draw inspiration. Most importantly, however, the example he
sets as a researcher and lab director is one that has helped me to see both the practicalities and
the benefits for transitioning out of the familiar but isolating environment of the art studio and
into a more collaborative design practice where one regularly discovers new ways of thinking,
doing, and solving problems.
Dr. Jeff Watson has also been a source of much inspiration over the past few years, both as a
friend and a colleague. Observing Dr. Watson’s approach to teaching during my time as his
teaching assistant helped me to see a broader and more experimental set of possibilities for
pedagogical work. His weaving of philosophical and socio-political content throughout his
lectures on media and his proclivity for building up to some larger humanistic truth is always
thought-provoking, but it is also plainly driven by an honest and deep-seated sense of empathy,
which is perhaps the larger lesson I've taken away from our time teaching together. The example he
sets in this regard has been immeasurable.
There are a number of others who have helped to inform both the present work and my
thinking more broadly who also deserve my gratitude and recognition. Professor Alex
McDowell, with whom I have worked in close collaboration over the past few years, has been
both friend and mentor and has helped me to scale up my practice in ways that I didn't realize
were possible. Professor McDowell has a knack for showing "impossible" to be a word we use
simply to make ourselves comfortable with our perceived limitations. While his concept of
worldbuilding only makes a limited appearance in the final chapter of this work, I would be
remiss in failing to acknowledge its influence on the general methodology I employ. The
systems-thinking philosophy at the core of worldbuilding, while not specifically conceptualized
as a means for conducting scholarly research, helped me to see the need for a methodological
framework capable of providing both a variety of perspectives upon my objects of study and a
means for weaving these together in a holistic fashion. It has also helped me think through the
philosophical and practical value of shifting between the “top-down,” systems view of things and
the “first person,” partial perspective of embodied agents among the things that make up a world.
Dr. Sergei Gepshtein, with whom I’ve only recently begun to collaborate, pointed me in
some extremely fruitful directions just as I began engaging the stickier philosophical issues in
this work, particularly those related to phenomenology. Dr. Gepshtein and I have spent many
hours over lunch and, during quarantine, over Skype discussing all manner of art-science related
topics, always coming back to the tension between the manifest image and the scientific image.
It would be an understatement to say that he helped me cut through some of the deepest weeds
that gather around this dichotomy and for that I would like to express endless gratitude. Suffice it
to say that my own views of art-science praxis dovetail neatly with Dr. Gepshtein’s, even in those
few areas where there are slight disagreements, and I look forward to further collaboration and to
countless more hours of friendly and engaging conversation.
Lastly and most importantly, I would like to acknowledge the limitless support I have
received from Dr. Allison de Fren over the past ten years both before and during my studies at
the University of Southern California. It is no exaggeration to say that if not for the care she took
in easing my transition to Los Angeles and her encouragement to live more expansively in the
world, I most likely would never have begun, much less completed, this work. Throughout this
process, she has been at various times a sounding board, a muse, a colleague, an advisor, a
collaborator, and a fellow spelunker of the deepest and strangest depths of both art and the
history of science. Most crucially, she has been a stabilizing force for me even in those moments
when she was facing her own struggles and for that I am deeply indebted.
Table of Contents

Acknowledgements
Table of Contents
List of Figures
Abstract

1 Introduction
1.1 Worldizing
1.2 Big Science, Big Data
1.3 Navigation
1.4 Theory, Practice, and Representation
1.5 The Postphenomenological Perspective
1.6 The Ontological Perspective
1.7 Distortional Tactics and the Synoptic Perspective

2 Iconographies of Art and Science
2.1 About this Chapter
2.2 The Two Orders
2.3 Science and Reality
2.4 Objectivity
2.5 Inference
2.6 Art and Jurisprudence
2.7 Art, Understanding, and the Noise of Experience

3 Waveforms
3.1 The Water or the Wave?
3.2 About this Chapter
3.3 Taking Signal For a Walk
3.4 WaveForm
3.5 Ethereal Bodies, Liminal States, and Aeolian Music
3.6 Aeolian WiFi
3.7 Noise as the Nature of Things
3.8 Notation, Variation, and the Noise/Signal Binary
3.9 Sonde: Field Perception and Aeolian Theater
3.10 Embodiment and Abstraction

4 Clouds, Storms, and Space Weather
4.1 Turner and Ikeda: Two Phenomenological Interludes
4.2 About this Chapter
4.3 The Origin[s] of Linear Perspective
4.4 Workshop Practices and a Phenomenology of Doing
4.5 Perspective Devices and the Truth of Distortion
4.6 Fragmented Vision and Distortional Dialectics
4.7 TEC: Points, Clouds, and Electron Drift
4.8 Clouds, Signs, and Space
4.9 Notation, Variation, and the Distortional Attitude

5 World in a Cell
5.1 About this Chapter
5.2 The City and the Cell: Beginnings
5.3 The Cell & the City: Motives and Methodology
5.4 The Trading Zone
5.5 The Abstract Vector of Science
5.6 Where Intuition Fails
5.7 Language Construction: The Distortional Effects of Metaphor
5.8 Shaping the Interlanguage: Dual Codings and Pidgin Terminology
5.9 Cell & the City: Preliminary Findings and Lessons Learned
5.10 World in a Cell: From Interlanguage to Form Language
5.11 World in a Cell: The Limitations of Representation
5.12 World in a Cell: Origami, Scale, and the Base Unit
5.13 Saccades: GLP1R_extracellular_domain

Conclusion
Bibliography
List of Figures

Figure 1: NASA image of the Crab Nebula.
Figure 2: Luis Hernan, Digital Ethereal.
Figure 3: Etienne-Jules Marey, Walking Man.
Figure 4: Leonardo da Vinci, Turbulent flow.
Figure 5: Etienne-Jules Marey, Sphygmograph.
Figure 6: Shadowgraphs of diffraction patterns in a ripple tank.
Figure 7: Brunelleschi's perspective device.
Figure 8: Hans Holbein the Younger, The Ambassadors.
Figure 9: Lippo Memmi, The Last Supper.
Figure 10: Still from ionSandBox.
Figure 11: Still from TEC_1_17_13.
Figure 12: Antonio da Correggio, Assumption of the Virgin.
Figure 13: Jan Brueghel the Elder and Peter Paul Rubens, Sight.
Figure 14: Fritz Kahn, The Seven Functions of the Nose.
Figure 15: Diagram of the signaling pathways of the pancreatic beta cell.
Figure 16: Whiteboard from Cell and the City pedagogical session.
Figure 17: Diagram produced from Cell and the City whiteboard.
Figure 18: "Ball and stick," "spacefilling," and "ribbon" diagrams.
Figure 19: Base unit and scale relations between the "constituents" for World in a Cell.
Figure 20: Still from the World in a Cell VR experience.
Figure 21: X-ray crystallography diffraction pattern.
Figure 22: Eye tracking equipment used by Yarbus.
Figure 23: Recordings of saccadic movements.
Abstract
This work suggests a potential direction for hybrid art-science investigation, focusing
specifically upon practices of data display. Beginning with a critique of the “two cultures” and
the caricatures of art and science most commonly held by practitioners on opposite sides of a
perceived epistemological divide, the present study locates the source of these tensions in
misperceptions that follow from what Wilfrid Sellars terms the “manifest” and “scientific”
images. Bridging the gap from abstraction on one side of this dichotomy to embodiment on the
other, and drawing upon various historical precedents in the histories of art and science, this
work makes a case for an aesthetics of data display that foregrounds materiality and complexity.
From this point of view, immersive and experiential works reconfigure the signal/noise binary,
upon which most sensing technologies and task-based data displays are based, such that noise is
viewed as “the nature of things,” always teeming with signal. Immersive or experiential data
displays therefore present opportunities for “worldizing” data—for retrieving the original
complexity out of which signal is abstracted—and are capable of producing not only novel
aesthetic experiences, but also potential directions for scientific investigation that might
otherwise go unnoticed. This work argues for a "distortional" mode of artistic intervention into
scientific representation and, in the last chapter, provides a practical example of how such a
mode might benefit art-science collaboration.
1 Introduction

1.1 Worldizing
The term “worldizing” denotes the practice of recording sound effects in the acoustically
neutral space of a sound studio, then playing them back over a loudspeaker “on location” and re-
recording them in order to capture the acoustic complexity of the space in which the sound is
understood to be occurring within the diegesis. The technique is generally accepted to have
originated with Walter Murch, who made the practice standard in audio production for film after
his work on American Graffiti. What is striking about the process is that it captures something
powerful yet subtle about the nature of our perceptual encounters with auditory media, viz. the
ease with which a sound, no matter how “dry,” can be convincingly placed back into the world.
Worldizing interleaves the wider complexity of a material world space with a sound captured
elsewhere, recovering the noise that speaks of an “out-thereness.” It places us bodily within the
auditory scene where we resonate along with the other surfaces and objects of the environment.
What is even more conceptually evocative than the manner in which worldizing plays upon
our spatial intuition of sound is the fact that it is needed in the first place. Sound effects recorded
in the studio are mostly agnostic with respect to their environments; they are without context and
impoverished when compared to the rich sonic ecologies that make up everyday auditory
experience. They are data produced at maximum signal, artificially extracted from the noisy
fabric of the sonic world. We typically do not think of sound recording as data capture save,
perhaps, in the context of sound’s digitization and storage, but this is exactly what recording is.
Just as a seismograph extracts data from the complex dimensionality of the earthquake in the
form of a line scrawled in two dimensions across a page or screen, so too does the recording of
sound, especially within controlled studio conditions, excise the sonic event from the multi-
dimensionality of its physical entanglements, plotting amplitude over time in a linear flow
inscribed into magnetic oxide or captured as a series of floating point numbers at 44,100 samples per second. To worldize sound is already to worldize data.
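Though worldizing as Murch practiced it is a physical, on-location procedure, its signal logic has a well-known digital analogue: convolving a dry recording with the measured impulse response of a space, the technique underlying convolution reverb. The sketch below (Python; the file names are hypothetical placeholders) is a minimal illustration of that analogue, not a reconstruction of Murch's method.

```python
# A minimal sketch of worldizing's digital analogue: convolving a dry recording
# with a room impulse response. Assumes mono WAV files; names are hypothetical.
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

rate, dry = wavfile.read("dry_effect.wav")      # studio recording: maximum signal
ir_rate, ir = wavfile.read("room_impulse.wav")  # acoustic fingerprint of a space
assert rate == ir_rate, "resample first so both files share one sample rate"

# Convolution re-entangles the dry signal with the space's reflections and
# resonances, recovering a measure of its "out-thereness."
worldized = fftconvolve(dry.astype(np.float64), ir.astype(np.float64))

# Normalize to prevent clipping, then write 16-bit PCM back to disk.
worldized /= np.max(np.abs(worldized))
wavfile.write("worldized_effect.wav", rate, (worldized * 32767).astype(np.int16))
```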
1.2 Big Science, Big Data

Of course, sound recordings are only one type of data among many feeding a constant hum of
paranoia and commerce in this present era of “Big Data”—a term that has come to denote the
ever-expanding number of databases stuffed with personal user information collected by tech
companies, social media “services,” and advertising agencies. Big Data is facilitated by our
buying and browsing habits, our “likes,” the amount of time we pause on a particular image
while scrolling, our listening preferences or—for the particularly paranoid—the hot mics in our
smartphones. Understood thus, Big Data is a vast and dynamic resource that helps us to help
companies to help us by making their services a little more agile, a little more precise, and life
itself (so we are told) just a little more frictionless. What is more, the abstractions of Big Data
enjoy a profound autonomy from their physical referents. What is captured is amplified,
submitted to the logic of the algorithm, and has an asymmetrical effect at the output stage. Big
Data, understood in this way, is generative in the extreme.
For many critics, the convenience provided by the feedback loop between data collection and
proffered services is merely the lubricant for the gears of a dystopian capitalist machine; elite
cadres of technocrats and company board members, it is reasonably argued, are increasingly
afforded unheard-of power to shape the lives and realities of consumers. A crowded field of
scholars have come to direct a critical eye upon this dystopian aspect of Big Data, investigating
and analyzing its potential as a tool for social control and manipulation. Scholars working across
a panoply of disciplines ranging from media and cultural studies to pop journalism have given
Big Data a well-deserved dose of serious critique and theorization. Meanwhile, other impacts of
the Big Data revolution—less spooky, perhaps, but just as disruptive—have largely gone under-
theorized save by a few practitioners working in highly specialized fields. While much analysis
and ideological debate is being focused upon the social and ethical implications of Big Data’s
terrible perfection of command and control, the experimental sciences have been struggling to
keep pace with their own expanding data stockpiles. In what we might characterize as a
“singularity” of empirical investigation, advances in computation and instrument design have
produced systems that collect data at such speeds and in such quantities that they present serious
challenges to any individual scientist’s ability to fully analyze, test hypotheses, or glean insight
from said data. In short, without algorithms to identify and sort pattern, such massive stores of
data become oceans of noise.
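A toy example, assumed purely for illustration rather than drawn from any instrument discussed here, makes the point concrete: a faint periodic signal buried in noise ten times louder is invisible sample by sample, yet a pattern-finding algorithm (here, a Fourier transform) resolves it immediately.

```python
# A toy illustration: a weak 50 Hz pattern hidden under far louder noise.
import numpy as np

rate = 1000                                   # samples per second
t = np.arange(0, 10, 1 / rate)                # ten seconds of "instrument" time
signal = 0.1 * np.sin(2 * np.pi * 50.0 * t)   # faint periodic signal
noise = np.random.normal(0.0, 1.0, t.size)    # noise an order of magnitude louder
data = signal + noise                         # sample by sample, an ocean of noise

# The FFT sorts pattern from randomness: the 50 Hz component stands out as a
# sharp spectral peak despite being invisible in the raw samples.
spectrum = np.abs(np.fft.rfft(data))
freqs = np.fft.rfftfreq(t.size, 1 / rate)
print(f"dominant frequency: {freqs[1:][np.argmax(spectrum[1:])]:.1f} Hz")  # ~50.0
```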
What this situation has in common with the social problematics of Big Data is the degree to
which it reveals an unsettling truth about the limits of human cognition. Large-scale social
experiments like that used by Cambridge Analytica for unprecedented behavior modification
prior to the 2016 U.S. presidential election are unnerving because they suggest a human
subjectivity that is mostly automated, calculable, predictable and ultimately, manipulable.
Similarly, mountains of machine-gathered experimental data reveal the limits to the only
cultural/biological devices we know of that are capable of interrupting our tendencies toward
self-automation: reason, introspection, and conscious attending.
It could be argued that Big (Experimental) Data are returning science to its empiricist roots.
Data (in the monolithic, singular sense of the word) has re-ascended the throne it occupied in the
earliest years of modern scientific inquiry. For many, scientific theories must keep up or fall by
the wayside. Increasingly it seems, all science is becoming data science—a turn that requires
new skills, new training, and new sensibilities. Where previously the problems of scientific
inquiry were those of taxonomy, theoretical self-consistency, robustness of explanatory models,
and effective instrument and experiment design, we now see a proliferation of issues around
access to data, statistical analysis and, most importantly, how to extract insight from the patterns
in data and make them perceivable and communicable to others. As is made evident by the rising
number of artists and designers engaged in the aesthetics and practice of data visualization, many
within the Arts have kept one eye upon these changes in both culture and the Sciences and are
adjusting their work accordingly.
To some artists, Big Data do not simply represent a vast accretion of ones and zeros ready to
be mined for profit or scientific insight; taken together, they are digital clay—a material that has
come to necessitate forms of expressive practice that are both very new and very old. These
practices are undertaken individually and in collaboration with scientists for whom the
expanding stores of digital material have resulted in a desire to make porous the boundaries
between specialized disciplines. Today, scientists are much more likely than they were at any
time in the recent past to express a willingness to collaborate with colleagues who employ very
different technical/theoretical languages and methodologies. If late-nineteenth and early
twentieth-century science was marked by a great divergence of proliferating scientific specialties,
the data revolution of the early twenty-first century is heralding a great convergence. The result of
this, for better and for worse, has been an uptick of interest in art-science praxis, which has, in
turn, created a carnival of hybrid approaches and qualitative results for both the Arts and the
Sciences. How do these collaborations benefit the collaborators? What is their aesthetic and
scientific potential? What are their limitations or pitfalls and how does one avoid them?
The most important question for the present work is: “What is gained, epistemologically or
aesthetically, by worldizing data?” We might say that it is enough to produce visualizations or
sonifications that in some way retrieve the original context of the data, thereby uncovering
insights about the patterns we produce that are precluded by the original scope of the data
collection. We might investigate the methods used to produce data and probe the margins for
error. We might even bring multiple datasets into conjunction and investigate the possible
systems or experiences that might be designed to make sense of their collisions. For myself, the
most evocative and frustrating approach is the use of data to create embodied, immersive, or
“enworldened” encounters—evocative because the practice raises a number of questions about
how and whether the embodied encounter might produce knowledge or augment experience, and
frustrating because many of the datasets collected by scientific instruments describe processes
(phosphorylation, ionization, quantum spin) or entities (electrons, neutrinos, molecules) that are
radically imperceivable to humans and thus resist the metaphors derived from everyday sensorial
experience. Such a practice thus requires sustained engagement with abstraction and theoretical
knowledge as much as with the materials and technologies of lived experience and artistic
production.
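The first and simplest of the options above, a direct parameter-mapping sonification, can be sketched in a few lines; the data file below is a hypothetical stand-in for any one-dimensional dataset. Each value becomes a brief tone whose pitch rises with the value. It is precisely this kind of context-free mapping that the embodied, "enworldened" encounters just described attempt to move beyond.

```python
# A minimal parameter-mapping sonification, assuming a hypothetical CSV of
# one-dimensional readings: each datum becomes a quarter-second sine tone,
# with values mapped linearly onto three octaves above 220 Hz.
import numpy as np
from scipy.io import wavfile

data = np.loadtxt("readings.csv", delimiter=",")
lo, hi = data.min(), data.max()
pitches = 220.0 * 2 ** (3 * (data - lo) / (hi - lo))  # value -> pitch (Hz)

rate, dur = 44100, 0.25
t = np.arange(0, dur, 1 / rate)
tones = [np.sin(2 * np.pi * f * t) for f in pitches]  # one tone per datum
audio = np.concatenate(tones) * 0.5                   # headroom against clipping
wavfile.write("sonified.wav", rate, (audio * 32767).astype(np.int16))
```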
Where materiality becomes most interwoven with theory is arguably in the design and use of
scientific instruments. On the one hand, instruments are commonly understood to augment our
perceptions; on the other, they achieve something much more profound than the extension of the
biological senses into greater or smaller spatial distances. Many scientific instruments today
circumvent the senses almost entirely. I say “almost” because clearly they must communicate
their mathematical results in some sensible form. Phenomenologist Don Ihde has described the
instrument readouts, charts, graphs, dials, heads-up displays, and other such means for receiving
instrumentally produced data as “hermeneutic interfaces,” emphasizing the role of interpretation
in their construction. Unfortunately, this interpretive layer has the potential to complicate
attempts at leveraging the objects of science for the purpose of creating phenomenal experiences.
Nevertheless, as we will see, our overfamiliarity with particular interpretations creates the very
need for new perspectives. The manner in which the phenomena made tangible by instruments
retreat from our embodied intuitions therefore calls not only for new ways of thinking about our
aesthetic languages of sound and image, but also sustained engagement between scientists and
artists in pursuit of deepening the discourse around those slices of reality that are exterior to
perception, the material constructions that put us in contact with them, and how these might best
be addressed through sensorial or aesthetic means.

1.3 Navigation
During the opening remarks of a recent art-science symposium, the introductory speaker
drew attention to the fact that use of the compound word “artscience” often makes as much sense
as saying “flourwater.” In the case of the latter, what one really means to say is “cake.”
Artscience, he argued, is not a simple conjunction of two ingredients; it is cake. This assertion
speaks to the formation of a new domain, or “third culture,” between the arts and sciences, which
is emergent and radically distinct from its constituent disciplines—a suggestion that is often
made in art-science discourse. While inspiring, what this observation nearly always fails to
address are the processes by which disciplinary boundaries are produced and sustained and how
these processes might find a common language and operational logic that produces the hybridity
we are presumably seeking. Everyone wants to talk about cake; almost no one wants to think
seriously about the recipe.
At its broadest, the goal of the present work is to reflect upon and problematize both art-
science praxis and the rhetoric around it, paying special consideration to those practices,
concepts, or material products termed “boundary objects,” which Peter Galison examines in his
essay, “Trading with the Enemy” (Galison 46). What are the objects or concepts germane to the
goals of each order, which we might characterize as the products of high certainty, but which
themselves have the effect of stretching certainty to its limit? What are the philosophical postures
of practitioners within each domain with respect to ordering schemas, material production,
questions of the real, and the phenomena that are exteriorized by ordering processes? What is
required for the hybrid theorist/practitioner to productively maneuver within the gradients of
uncertainty that interleave in the space between orders? Most importantly, what tactics do we
require to navigate these spaces successfully?
I will begin my analysis of these questions in the next chapter by examining what I will call
the “iconographies of Art and Science,” which can be understood in simplest terms as the
caricatures we in either field tacitly or explicitly perpetuate of the fields of our counterparts and
bring with ourselves into interdisciplinary collaboration or praxis. These iconographies are
constructed primarily, I will argue, around misunderstandings and crosstalk regarding knowledge
production within each domain and how such knowledge is understood to relate to “real things.”
From there, we will hopefully be able to get a sufficient bearing in relation to the sticky problems
of domain knowledge without necessarily needing to solve them. Equipped with a sufficiently
nuanced perspective on these epistemological differences, I will argue, we can turn our attention
to more evocative philosophical and methodological foundations upon which to build.
More specifically, my goal is to excavate and analyze potentialities for the noisy and
embodied/experiential/perceptual encounter in art-science praxis. Exploring the disjunctures and
linkages between what Wilfrid Sellars terms "the manifest image" and "the scientific
image" (Sellars 373), I will consider how artists, who make the objects of vision, hearing, touch,
and proprioception the center of expertise, might productively engage the abstractions of science
for which the senses are typically understood to present an obstacle. As I hope to show,
interventions of the sensual into the objects of abstract knowledge are particularly evocative
when they address imperceivable entities, raising questions about not only the limits of the
senses, but also the limits of abstraction. In response to these limits, I will suggest an aesthetics
that is equipped to interrogate exteriority, the noise of which clings to the boundaries of sense-
making, and argue that the multiple points of view that open the curtain onto otherness are best
produced by drawing attention to the artifice in our representational schemas.
Herein, I will use the concept of the penumbral—a word whose meaning is derived from the
partial shadow between areas of full darkness and full illumination—to help scan the gradient
morphology of various types of schemas (disciplines, knowledge domains, sense-making
practices) as well as the shape of the complex and noisy field between the orders that our
schemas rationalize, produce, and sustain. Penumbral spaces are marked, I will argue, by an
uncertainty with which the hybrid practitioner working between disciplinary orders must not
only become comfortable, but which is indispensable for the creation of evocative outcomes.
Due to the uncertain nature of hybridity, one can quickly and easily become lost in the spaces
it describes and be tempted to retreat to the bright centers of well-rationalized praxes. Let us
therefore outfit ourselves with some navigational tools. A navigational metaphor is fitting, I
think, as both the Sciences and the Arts comprise multiple subdomains, each having their own
commitments, methodologies and languages, forming not so much monolithic silos as they do
archipelagos of knowledge. To navigate effectively between them, we require a metaphor that
accommodates their shifting boundary conditions. Maps are most likely of little use in these
spaces, but even the compass has its limitations, despite its ubiquity as an operational metaphor.
The reasons for this are plain: while compasses respond dynamically to lines of magnetic force,
orienting us in the wilderness without schematizing or encoding the terrain, maps rationalize
dynamic phenomena, ordering them into easily understood abstractions. Compasses are data-
poor but rich in information. Maps are rich with data but overlay a grid of misleading certainty.
While, at times, they remain the best tool for the job, maps often fail to take into account
contingencies, changing circumstances, and the countless random events that can befall us in
situ.
Ultimately, I suspect that compasses, like maps, are of limited use in the penumbral spaces I
propose to investigate. Simply put, each order has its own magnetic north, which might exert
influence in directions orthogonal to those of other orders. Put still more plainly, interdisciplinary
practices—at least those comprising such ostensibly different domains as Science and the Arts—
are liable to get drawn into the orbits of one or more of their constituent fields or become over-
coded by the specific commitments of these fields. As Levi Bryant has observed, institutions and
concepts have the potential to exert gravity, which “bends the space-time movement and
becoming of another entity” (Bryant, Onto-Cartography 188). As we will see, this truism
especially holds with respect to philosophical disciplines. What is needed here is not so much a
map or even a compass, but more of a sextant—a device which orients the navigator by
providing a means for sighting the positions of two heavenly bodies against the Earth's horizon.
Similarly, the synoptic approach I wish to construct herein is one that will aid navigation by
means of a perpetual sighting of our position among the intersecting boundaries of various
established orders (both theoretical and practical) and gauging them against the horizon of
materiality and embodied experience.
Talk of navigational tools might imply the desire to colonize new territories, yet, while it
might be inevitable that making connections between long-standing disciplinary fields will result
in the creation of "third orders," for myself, making this the teleological focus of
interdisciplinary practice is somewhat misguided. Our work in these spaces only benefits, I think,
from viewing the evolution of new forms of praxis as residual and secondary—a sometimes
fortunate but nearly always unpredictable side-effect of sustained practical engagement. In
chapter three, I will begin outlining what I believe to be a more productive attitude or motivation
for art-science than that of forming “third cultures,” and turn to it more explicitly in chapter four.
Drawing on Jacques Attali’s notion of composition, I name this approach to interdisciplinary
practice “the distortional attitude” and argue for its potential to affect changes in our
representational schemas, challenge our philosophical assumptions, and generate new
perspectives by testing the flexibility of material and theoretical frameworks. The distortional
attitude exposes our disciplinary practices to the noise they either tacitly or explicitly exclude
and seeks mutation through an embrace of alterity. It attempts to de-center and problematize
human subjectivity by foregrounding material agency and thrives upon grounded and tactical
engagement with both the certainties of abstraction and the uncertainties of lived experience.
As we will see, even the wildest improvisational approaches require “hard points,” nodes of
certainty and stability, against which to push—not only in order to “make sense,” so to speak, but
also to provide leverage and propulsion for the doer/thinker moving between orders. Moreover,
the penumbral gradients between orders are not areas of full darkness; they are produced by
crosshatched or interleaved waves of relative uncertainty—what Barad, drawing upon Haraway
in her essay “Posthuman Performativity,” calls “diffraction patterns” (Barad 803)—that are
produced between disparate knowledge domains. One reason for periodically sighting our
position in relation to these domains is so we might better decode the possibilities of their
diffractions for improvisational maneuvers. However, where Barad and Haraway argue for a
“reading” of one discipline “through” another in the space of diffraction, I will suggest a less
hermeneutic approach—one in which our improvisations are performed amid the uncertainties of
material production and are likely to fail, but which remain no less evocative or (to use a dirty
word) useful for that fact. With this in mind, I will now turn to the specific philosophical
frameworks that I will hybridize for our navigational sextant.
1.4 Theory, Practice, and Representation
In this work, practices of worldizing raise a mostly open philosophical question of
representation—a problem at least as old as the Greek conception of logos and one that persists
in some form or another in myriad philosophical debates. Wittgenstein sums up the problem
succinctly: “A picture held us captive. And we could not get outside it, for it lay in our language
and language seemed to repeat it to us inexorably” (Wittgenstein 48e). In the philosophy of
science, this quandary re-emerges as rationalism versus empiricism. As we will see, it even rears
its head in the debates around painting, with champions of linear perspective appealing to an
almost pious commitment to a rationalization of space that is concomitant with a privileged,
Cartesian subjectivity, while distorting practices like anamorphosis throw this privileging into
question.
Today, I find no example of the tension between representational ordering schemas and
perceptual experience more striking or ripe for analysis than that we find in the production,
visualization, and sense-making of scientific data. Visualization of natural phenomena, from the
precise reproduction of the visible to the transduction and material inscriptions of the invisible,
has a long history—one that is contingent on rationalized (and rationalizing) technologies of
representation—that can be mined for analysis. To be clear, however, in placing the
representational products of these technologies in conversation with the problem of
representationalism, I do not intend to conflate the latter term as deployed in philosophies of
mind with “representation” as commonly understood within the arts or in critical theory. While,
at times, these disciplinary understandings of representation will converge, by
“representationalism,” I am predominantly speaking of the philosophical discourse concerned
with mind and its abstract (mostly linguistic) contents as a “mirror of nature,” whereas
“representation” is mostly deployed in the usual sense of likenesses or mimesis. I will do my best
in this analysis to make this distinction clear where necessary.
Wittgenstein’s representational problematic not only troubles my objects of study in this
work, but is also a bug in nearly any philosophical program we might use to make sense of these
objects. Theory, it seems, inevitably over-codes practice. Why should this be? A foundational
premise herein is that the tendency of philosophical frameworks (and theory by extension) to
over-code their objects of study is a by-product not of their relation to truth claims or to
knowledge, but rather of their capacity to decrease uncertainty among those who espouse them.
In his book, The Quest for Certainty, John Dewey makes a convincing case that Western
philosophy as a whole is rooted in the all-too understandable desire to create islands of stability
among the uncertainties of lived experience. In order to terraform these islands, Western thought
has, according to Dewey, erected an ontological hierarchy in which the representational products
of Mind are privileged above practical activity. Why? Because, “[p]ractical activity deals with
individualized and unique situations which are never exactly duplicable and about which,
accordingly, no complete assurance is possible” (Dewey 10).
The realm of Universal Truth, or Being, is cast as the realm of the mental while “the realm of
the practical is the region of change, and change is always contingent; it has in it an element of
chance that cannot be eliminated. If a thing changes, its alteration is convincing evidence of its
lack of true or complete Being” (Dewey 22). Even Dewey’s predecessor in philosophical
pragmatism, Charles Sanders Peirce, an experimental scientist and champion of empiricism,
tended toward an operationalist view that privileged the construction of semantic models and
abstraction over embodied or tacit knowledge as central to the practical activities of science:
…it is a fact that I see an inkstand before me; but, before I can say that, I am obliged to have impressions of
sense into which no idea of an inkstand, or of any separate object, or of an "I," or of seeing, enter at all; and
it is true that my judging that I see an inkstand before me is the product of mental operations upon these
impressions of sense. But it is only when the cognition has become worked up into a proposition, or
judgment of a fact, that I can exercise any direct control over the process; and it is idle to discuss the
"legitimacy" of that which cannot be controlled (Peirce 150, emphasis added).
Peirce’s aside about the idleness of discussing the “legitimacy” of what cannot be controlled
reflects an all-too common epistemological attitude: As theorist/practitioners we want, on the one
hand, to legitimize practical activity as a form of knowledge production, but practical activity
brings with it all manner of random events, unconscious reactions, noisy disruptions, and
unpredictable affects. So, on the other hand, as thinkers about practice we cannot exorcise the
rationalist impulse toward what W. V. O. Quine refers to as "semantic ascent"—"a shift from talk
of objects to talk of words as debate progresses from existence of wombats and unicorns to
existence of points, miles, classes, and the rest” (Quine 280-281). It is inevitable that this
philosophical truism will hold in these pages as well, so rather than making an attempt to resist
it, my intention is to argue for systems of praxis that give abstraction and materiality equal
footing and to draw out their inconsistencies and contradictions where possible so that they may
be turned to productive purposes.
To the extent that this work seeks to construct an experiential account of both abstract
encodings and the uncertainties of practice, it could be understood as another entry in the
ongoing debate around the legitimization of Arts-based research (ABR). Patricia Leavy’s
Research Design: Quantitative, Qualitative, Mixed Methods, Arts-based, and Community-based
Participatory Research Approaches (2017) builds upon her earlier introduction to arts-based
research, Method Meets Art (first published in 2009; now in its third edition), and situates arts-based research
among more traditional research methodologies, drawing out various distinctions. Other concepts
of ABR, like those found in Liamputtong and Rumbold's edited volume, Knowing Differently
(2008)—a work published around the same time as Method Meets Art—center upon alternative
forms of knowing itself, as opposed to hybrid or arts-based approaches to knowledge production.
Ultimately, however, the overall shape of this debate holds little interest for me. I have never
been fully convinced that a model of “object-” or “poetically-centered” generalizable knowledge
—one that, epistemologically speaking, has full autonomy from written discourse—is even
possible in principle. Again, we find ourselves at loggerheads with the problematic Dewey
describes. This is not to say that artworks or methodologies cannot or do not participate in
research and knowledge production; quite the contrary. My position in this regard aligns with
Leavy’s. Clearly, both knowledge and research can be produced from artistic activities and can
even act to support them in an evidentiary capacity, but this is not as simple a formulation as it
ostensibly seems.
Ultimately, my contention is that artworks are not easily extracted from the networked
interplay of theoretical contexts and linguistic argumentation with which we are already familiar
and within which we have long since learned to conduct discourse. This, I argue, is precisely
why we need the logic of the sextant, which requires theoretical understanding, a material
construction, and an experiential horizon to orient us in relation to established orders of
knowledge, especially those that exert strong “gravitational force” as do both Art and Science.
Thus, rather than overstate the case for practical activity as a “legitimate” form of knowledge
production, I wish to view both the abstractions derived from theory (scientific or otherwise) and
enworldened, embodied engagements as interrelated forms of ordering and to see what comes of
them when they are exposed to the noise and uncertainty extant within the experiential field.
For the present undertaking, the “theory-ladenness” of the sextant is part of its aesthetic
appeal. As a metaphor, it presents us with a systems-oriented navigation of praxis in which
textual analysis of experiential objects (film, installations, games, sculptures, visualizations,
sonifications, instrumental readouts, etc.) is one node in a network of material/discursive flows,
exchanging information with mathematics and other modes of scientific theory. Like much
systems-thinking applied to cultural theory, this approach owes much to Bruno Latour’s Actor-
Network Theory (ANT) and to similar ontological constructions it has inspired, namely Levi
Bryant's Machine-Oriented Ontology (MOO). I have also borrowed heavily from feminist new
materialisms, phenomenologies of the posthuman, and (as previously mentioned) Barad's
expansion of Haraway’s concept of diffraction. What I hope to construct from these resources is
a metatheoretical framework in which each of its parts is considered modular, adaptable,
exchangeable, and even, when circumstances necessitate it, dispensable.
Prima facie, all of this taken together might seem like theoretical anarchy, and closer
inspection is bound to uncover contradictions, but I suspect this to be an inevitable byproduct of
attempting to ground the uncertainties of material practice with abstract schemas and vice versa.
Ultimately, I am neither interested in resolving the open questions or internal contradictions that
bedevil any one philosophical tradition, nor am I interested in addressing them head-on from
within these traditions. My primary concern is not with self-consistency, but with drawing out the
potential in these very breakages for evoking possible avenues of a robust experiential praxis.
The first breakage is arguably the one toward which I have already gestured: that any
discourse centered on the experiential, but undertaken in writing, runs afoul of Wittgenstein’s
dilemma. We might turn to that philosopher's Tractatus for a way out—to show, as it were, the
things that cannot be put into words; certainly, much practice-based research runs exactly along
these lines. In many respects, the impetus for the present work is a deeply-held sympathy with
Wittgenstein’s concern for the limits of language and the relationship between words and their
referents—a concern that, moreover, lies at the heart of much of the philosophy of science to
which I will turn in the next few chapters. My purpose, however, is not to over-prove the validity
of demonstrating the ineffable or to argue explicitly in favor of alternative “ways of knowing”;
rather, it is to examine the manner in which noise and uncertainty not only cling parasitically to
sense-making frameworks, but present opportunities for bringing productive distortions to bear
upon our most tacitly-held philosophical perspectives. The question is how to best make use of
this potential for the creation of experiences that leverage the body, the affective, and the poetry
of materiality to evoke intuitions in relation to the phenomena referred to in our abstractions.

1.5 The Postphenomenological Perspective
A theory of “worldizing” requires a suitable account of “worlds” to help ground it. The
concept of “world” has been defined in multiple ways across multiple disciplines but, for our
purposes, there are two distinct perspectives that I believe create an opening for a productive
philosophical approach. On the one hand, a world implies a mostly closed system populated with
real things and agents interacting in various ways to produce dynamic flows of matter and
meaning as well as temporary orderings. This facet of “worlds” is mostly the subject of ontology,
to which I will turn in the next section. On the other hand, “worlds” can also be understood in the
sense of Jakob von Uexküll’s “umwelten”—the situated life-worlds within which individual
biological agents live and with which they interact through the filters of their specific biological
situations. For phenomenology, this second facet of “worlds” mostly concerns the issue of
individual experience—of “what it is like” to be in the world—and requires a rigorous method
for analyzing the manifest image and the structure of experience. As my use of Dewey as a
means for opening this work’s main problematic might telegraph, the third philosophical
approach to “worlds”—one that I believe to be highly useful for present purposes—is
pragmatism, which is well suited to hybridization with phenomenology.
In many respects, artists are street phenomenologists. Those with a little academic training
are even likely to use the word “phenomenological” on occasion in their statements of practice,
although typically as a casual substitution for the words “perceptual” or “experiential.” While
few conduct rigorous experiential analyses, perform the phenomenological reduction (suspension
of preconceptions), or fully bracket the verboten “natural attitude” (appeals to realism or
scientific explanations) as do trained philosophers, they tend to excel in performing what Don
Ihde, drawing upon Husserl, describes in Experimental Phenomenology as “phenomenological
variations” (Ihde 123). Both of these terms describe a mental/perceptual experimentation—a
series of imagined transformations of an object, or an intentional shift in sense perception
accomplished by, for example, squinting or a change of head position. In traditional
phenomenology, the variation is conducted as a means for extracting the essential features or
“multistabilities” of the thing perceived. Many artists perform phenomenological variations as a
means for gauging the perceptual effects of each progressive decision as a work is being
produced. Indeed, such techniques are taught early on as embodied tactics or “rules of thumb” in
the curricula of many traditional art and design schools.
As I will later argue, artists also perform their own version of phenomenological bracketing.
Whereas for phenomenology, bracketing, or “suspension of judgement,” is meant to scrub the
philosopher’s mind of the everyday assumptions we make about naturalistic or scientific
causality, for artists, it is a method for seeing past the cognitive bias of professional distortions—
the tendency to view the world through the narrow lens of one’s expertise. At each stage of
critique, it is beneficial for the artist to place herself in the shoes of the “cold viewer” and ask
what such laypersons, naive of both the material and cultural processes of Art, will experience in
their raw encounter with the work. To ground such speculation into another’s subjectivity
requires a radical suspension of one’s discipline-specific background knowledge and trained
judgement. It also requires the artist to put on the metaphorical lab coat of the behavioral
psychologist, particularly if the work in question is immersive, interactive, or is meant to guide
the viewer’s or user’s actions in some way.
Both variants of the methodological tools described above are highly useful as embodied
tactics and they are not the only ones at artists’ disposal. Most often, a facility for embodied
tactics is honed through years of practice, becoming tacit and unexamined with time such that,
for many artists, while they might be central to the work they do, they often remain somewhat
ineffable. Indeed, this speaks to the need for a more thorough account of "embodied
knowledge," which I will lay out in chapter four. Even with such an account, however, we are
led to one of the deepest problems of phenomenological report as a method of analysis: It places
the experiential at the center of its investigations yet, like the products of all (or most) discourse,
those investigations must be reported in words. Even the phenomenological concept of
intentionality (directed-ness of conscious attention toward its objects) and its relationship to
verbal report resonates with Peirce’s axiom that only by working up cognitive encounters into
propositions and exercising “control over the process” constitutes legitimate knowledge
production.
As we will see, this aspect of phenomenology as traditionally understood potentially elides
entire vistas of somatic affect and experiences that are only “half-sensed” or describable only in
the roughest of linguistic sketches. There have been attempts to rescue phenomenology (and
philosophy more broadly) from its strict reliance on language and to argue for material/non-
linguistic practices as “legitimate” means for phenomenological investigation. For example, Ian
Bogost’s concept of philosophical carpentry, which he outlines in Alien Phenomenology,
champions the construction of (and encounters with) objects as discursive acts in themselves
(Bogost 85). Nevertheless, I am certain that what follows from phenomenology’s requirement for
a rigorous, linguistic description is that its works must, at least in part, be characterized as about
the experiential, even when augmented by embodied or practice-based methodologies that
operate through the experiential.
Phenomenology’s primary concern is with the structure of experience and even in
foundational works like those of Merleau-Ponty, which break with Husserl’s tendencies toward
idealism and place focus upon the individual’s embodied situation in the world, the philosophical
bottom line, so to speak, concerns the nature of consciousness and subjectivity. American
pragmatism, and especially the work of John Dewey, is also concerned with the organism’s
relationship to "world," but emphasizes the experimental nature of actions performed "out there"
and judges the success of such actions, as well as that of concepts and tools of abstraction, by
their efficaciousness amid the uncertainties of practice. Others before me, Don Ihde in particular,
have argued for a mutual, two-way augmentation between phenomenology and pragmatism. In
Postphenomenology and Technoscience, Ihde writes,
The enrichment of pragmatism includes its recognition that “consciousness” is an abstraction, that
experience in its deeper and broader sense entails its embeddedness in both the physical or material world
and its cultural-social dimensions. […] The reverse enrichment from phenomenology includes its more
rigorous style of analysis that develops variational theory, recognizes the role of embodiment, and situates
this in a lifeworld particular to different epochs and locations (Ihde 19).
The present work puts to use a similar, hybridized postphenomenology; however, while certain
postphenomenological concepts will be important for the synoptic and textual analyses of the
objects under study, the use of the rigorous “phenomenological reduction” will be limited. I will
occasionally deploy such a reduction in silhouette as a methodology for textual analysis, but
mostly, to the extent that I will take a phenomenological tack, it will be toward the outermost
edges of phenomenology’s boundary conditions, where it most requires supplementation with
both pragmatism and natural science.
There are two important reasons for taking a meta-perspective regarding phenomenological
report. First, it is of limited use in accounting for experiences that philosopher David Roden
characterizes as the "phenomenologically dark"—a term that describes phenomena which
confer no intuitive understanding of their nature on the experiencer (Roden 85). Importantly,
such phenomena might be ineffable at one sense register, but contain patterns which resolve at
another. Dark phenomena are very often the “aperceptual” phenomena that concern us herein and
my presumption is that to do any kind of experiential analysis of phenomena that retreat from
experience requires 1) appeals to technological mediation, for which Ihde makes a strong case in
his own postphenomenology, and 2) a healthy dose of grounded speculation.
Dark phenomena are rich soil to be tilled for the design and construction of works that
leverage a kind of “perceptual engineering” as a means for facilitating flows of affect,
particularly via computational or other electronic media. This is true for artists working squarely
within the confines of the arts, but especially for those working within art-science. As David
Roden has suggested (Roden 83), to mine the aesthetic potential of dark phenomena requires us
to break with traditional phenomenology’s commitment to the epoché, thereby rescuing appeals
to natural science. The reciprocal effect we might uncover for such an approach is a method for
facilitating the intuitions of both scientists and the lay public with respect to those entities or
principles that slip the nets of perception and cognition.
The second reason for taking up a meta-perspective with respect to phenomenology is that it
increases the action potential for experiential engagements with scientific theories and
abstractions themselves, which, to my knowledge, have not been much theorized outside of
traditions that follow Husserlian phenomenologies of mathematics. To hold the scientific image
and the manifest image in the same perspectival view requires at least a baseline understanding
of how abstractions and the scientific image—what Deleuze and Guattari, in What is
Philosophy?, refer to as “the plane of reference” (Deleuze & Guattari 118)—come to structure
and inform our disciplinary work, our tacit assumptions, and our everyday experiences.
Undoubtedly, a thorough phenomenological account of this problematic is well outside of the
current scope, but we will find hints or potential signposts pointing toward it in various
descriptions of praxis.
By keeping the scientific image and the manifest image in stereo focus, we are better
equipped for peering into and developing works around the phenomenologically dark. Yet, in
fact, where we are likely to find these images already bleeding together in daily life and thought
is in expressions of what I will call, following the cues of scholars working in science and
technology studies, the scientific imaginary. Sean Miller, for example, argues for a definition of
the scientific imaginary that hinges upon its function as an intermediary between the discourses
of mathematics and Science on the one side and the Arts (specifically literature) on the other.
Focusing specifically upon the degree to which string theory has penetrated the public
imagination, Miller states, “If on the one hand, string theory offers up a body of scientific
knowledge expressed exclusively in terms of mathematical formalisms, while on the other hand,
a literary text’s ‘imaginative response’ occurs within a distinct, non-mathematical language, then,
assuming that, in general, literary authors are not professionally trained string theorists, a third
discourse must mediate the exchange between the technical and the literary” (Miller 30).
Miller is correct in his identification of a third discourse that serves a hermeneutic function
with respect to bringing the objects of mathematics out of their disciplinary contexts and into the
contexts of other disciplines. Indeed, one could argue for this as the central function for many
works of art-science. I would argue, however, that if this definition is made too strict, it risks
placing unnecessary, disciplinary boundaries around the concept of the scientific imaginary
itself. To my thinking, what is notable about the exporting of ideas out of their contexts within
scientific discourse is not only the manner in which they are translated and expressed within
works of Art but, much more generally, the degree to which they structure predominant beliefs
and actions in the lived experiences of ordinary people. Because Miller’s focus is the production
of literary texts, he requires the scientific imaginary to serve a hermeneutic function with respect
to mathematical literature and imaginative literature. Herein, I view the imaginary as spread
across any number of cultural phenomena, from theater, television, and product design, to
spiritual beliefs, conspiracy theories, and everyday speech, all of which, taken together, require a
synoptic account. In short, for my purposes, the scientific imaginary is witnessed in whatever background attitudes, conditioned on popular notions of scientific concepts or technologies, viewers/listeners/users bring with them into their encounters with art-science. As we will see, it is not only the certainty-producing objects of disciplinary orders
that art-science seeks to defamiliarize, but also, and with much more difficulty, the mercurial
certainties that the scientific imaginary helps to produce in our everyday experiences.

1.6 The Ontological Perspective
The second dimension of a concept of “worlds” to concern us is that of their ontological
nature—which is to say, how the things that produce them are ordered categorically. Because
scientific thought and experimental instrumentation play no small role in the praxis under study,
and bearing as these do upon our understanding of mind-independent, physical reality, I find it
necessary to additionally argue for a modest but sufficient ontological realism. This suggestion
could also be considered as following directly from the requirement of a break with the
phenomenological epoché. To be clear, however, I do not wish to construct an ontology (or
indeed any “-ology”) out of whole cloth, as such an undertaking is far outside of the present
scope. I do, however, believe that certain trends stemming from the “ontological turn” are useful
for thinking through problems of scientific representation and the relationship between abstract
schemas and their physical referents generally. To that end, a modest realism with respect to
science also dovetails cleanly with the postphenomenological perspective spelled out above.
Any argument for realism requires a sufficient definition of “real” in order to avoid some of
the more predictable confusions and contradictions that can arise in ontological discussions.
What I wish to avoid is appealing to a so-called “naive realism”—one which leans on untenable
claims to the “veridicality” of perception or which leads inexorably to an unfortunate dualism of
mind/world that holds the images “in here” as perfect reflections of the logos of objects “out
there.” The ontological tack I wish to take up in this work borrows from, or is sympathetic with,
the ontologies of a number of thinkers. Perhaps the most germane realist ontology stemming
from philosophy of Science is Roy Bhaskar’s “transcendental realism,” which seeks to
investigate what the world must be like in order for science to be possible in the first place
(Bhaskar 18). The ontology that squares most fully with the tensions between the scientific and
manifest images is Deleuze and Guattari’s process ontology, particularly the ontological
distinction these authors make between “the plane of immanence” and the “plane of
reference” (Deleuze & Guattari 118-119).
Karen Barad, meanwhile, makes a strong case for a materialist ontology she refers to as
agential realism, a way of thinking through subjects, objects, means of representation—indeed,
nearly anything and everything—as the products of “intra-actions” between entangled agencies.
Useful for my purposes is her frequent turning to scientific instruments as a means for
demonstrating entanglements—a natural move given her previous life as a physicist. Drawing
from the insights of quantum physics, Barad seeks in Meeting the Universe Halfway to escape
the representationalist obsession with the mirror of nature by placing the mirror, so to speak,
back in the picture itself (Barad 85). This approach bears a striking aesthetic similarity to
Latour’s Actor Network Theory (Latour, Reassembling the Social 46), which views almost any
phenomenon, but especially those implicated in the production of knowledge, as the result of
differential interactions between multiple human and non-human agencies. Following a similar
line of thinking, Levi Bryant has suggested a Machine-Oriented Ontology (MOO), which not only
borrows heavily from nearly all of the ontologies I have already listed, but goes the furthest, I
would argue, in attempting to account for how the “machines” of different ontological orders
actually process and transmit information between themselves. Of particular use for the present
work is Bryant’s differentiation between what he terms “corporeal machines” and “incorporeal
machines” (Bryant, Onto-cartography 112), a distinction which helps clarify and unpick the
entangled relations between nature and nurture.
Since the interdisciplinary praxis I am attempting to theorize sits squarely between scientific
and posthumanities discourses, ontologies of entanglement, machines, or distributed agencies
are, prima facie, highly useful for thinking through the hybrid practitioner’s role within those
spaces, but we must exercise caution. Of all the various branches of philosophy one could call
upon for this purpose, ontology is the most likely to over-code or essentialize cosmic alterity by
means of schematization. Such taming of otherness insulates thought from the potential for both
awe and truly novel insights, which are afforded by our fleeting glimpses into the dynamic and
often ineffable field we exteriorize with every abstraction.
As much as I find the ontologies of Bryant, Barad, Latour, Deleuze & Guattari, and other
materialist thinkers with a penchant for systems useful for unpacking relational structures, as
previously stated with respect to phenomenology, they all have a tendency to elide the
pragmatics of lived experience, particularly those experiences that present serious challenges to
their explanatory frameworks. Later, when we turn to deeper discussions of noise, I will note
where I find these limitations problematic and attempt to make use of the uncertainties they
produce. For now, I will simply draw attention to the fact that, while many ontologies profess an
interest in boundary conditions, many of these are also liable to subject the alien phenomena that
lie beyond the boundaries of their frameworks to the inner logic of those frameworks themselves—a habit of philosophical analysis to which I will almost certainly fall prey, but one which I
intend to face head on in such an eventuality.
The variant of realism deployed herein is a noisy one. Like Deleuze, Guattari, and Bryant, the ontology I wish to synthesize is one that affirms the reality of multiple orders of “things.” Such an ontology holds electrons, mountains, unicycles, and unicorns to all be real, but
differentially so and within adjacent ontological strata. The perspective I believe to be of most
use to art-science is one that broadens the rubrics for “real” to include not only those that
determine the mind-independent status of physical objects and forces, but also those that
determine the potential for mind-dependent “objects” to afford action within both physical and
social orders. The importance of establishing these “hard points” herein is that they help us to
trace the potential of one structured flow to become the noise that clings to and disrupts other
structured flows. Such disruptions and parasitical relations, as I will later argue, are not just
matters of complexity and context; while they often yield insight when viewed as such, they also
hold the potential for mutation, hybridity, and importantly, for operationalizing the liminal edges
of sense-making—for retrieving those affects that are evoked in the face of cosmic alterity.
On a more pragmatic level, as incomplete or contradictory as any particular realism is likely
to be—as unnecessary as it is for aesthetic analysis—I will argue that what is necessary is for
hybrid practitioners (particularly those with Arts and Humanities backgrounds) seeking to
engage productively with scientists to become acquainted with at least a modest ontological
realism even if they are disinclined to adhere to one outright. Certainly, the philosophy of science
has its own commitments and agendas when it comes to realist claims, represented by naive
realists at one end of the spectrum and anti-realists (admittedly very few in the strong sense)
at the other. Complicating matters is that the realism vs. anti-realism debate within philosophy of
science comes in a jumbo pack of flavors. I will therefore limit the scope of my discussion to
those realisms like that espoused by Nicholas Rescher, which underscores the need for a
background realism in pragmatic activity (specifically communication and investigation), and
Ian Hacking’s “entity realism” (ER), which foregrounds instrumental experimentation as the linchpin for our knowledge of imperceivable entities. A familiarity with these particular forms of the realist debate as they have played out in the philosophy of science will not only help to oil the gears of art-science collaboration, it will also help to ground the noisy, ontological
perspective I propose to take up.
1.7 Distortional Tactics and the Synoptic Perspective
We are surrounded by oceans of experimental data. These oceans grow by the second and are
prompting a great convergence of disciplines as scientists find new approaches for sounding and
drawing insights from their depths. Directly or indirectly, artists are increasingly caught up in
this convergence and are undertaking hybrid practices that range from those about or inspired by
science (either as critique or what I will qualify as science advocacy) to those that access and
perceptualize data directly through computational practices. Generally speaking, the purpose of
this work is to suggest a plausible mode of navigation through the penumbral space between the
Sciences and the Arts and, more specifically, to provide a synoptic analysis, supported by works
culled from a range of scientific, philosophical, historical, and literary sources, of those topics
most germane to my own art-science investigations, viz. perception, mediation, noise,
uncertainty, and the tensions between ordering schemas and dynamic phenomena. Equipped with
our metaphorical sextant, we will examine the linkages between instrumental data and sense
data; the manner in which artists “hack” both abstractions and material technologies through
what I will term “distortional tactics,” which throw the connection between physical reality and
representation into question; and the affordances for such practices in art-science collaboration.
Most importantly, we will attempt to face and make use of the noise that both Science and Art
inevitably exteriorize by dint of the processes that generate the order of each field.
By now, it has most likely become apparent that, even with all my talk of noise, I have
provided no specific definition for the term. Part of the reasoning for taking up a synoptic
analysis herein is that definitions of “noise” have a tendency to mutate in relation to whatever
discourse or context in which they are deployed. For Pythagoras, noise is the “fifth hammer”
ringing out discordantly from the forge, disrupting the rational order of the harmonic series he
heard produced between the tones of the other four hammers striking the anvil. For Claude
Shannon, it is any disturbance in the channel that is extraneous to the intended signal. For
designers of scientific instruments, noise is an unfortunate material reality of the circuits and
thermodynamic principles that make the instruments possible, requiring a notion of “signal-to-noise ratio” as a means for increasing certainty with respect to measurement. For musicians and
audiophiles, noise is simply unwanted or incidental sound—a problem that emerges even within
genres that develop around aesthetics of noise. For data analysts, noise is unwanted or incidental
data, which skew our methods of pattern recognition.
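To make the instrument designer’s and data analyst’s usage concrete, here is a minimal sketch of how a signal-to-noise ratio might be estimated from repeated measurements. The code, the function name snr_db, and the synthetic data are my own hypothetical illustration in Python, not drawn from any instrument or work discussed herein:

    import numpy as np

    def snr_db(trials):
        """Estimate the signal-to-noise ratio, in decibels, from repeated
        measurements of the same phenomenon (rows = trials, columns = samples)."""
        signal = trials.mean(axis=0)      # what repeats across trials is treated as signal
        noise = trials - signal           # what varies around that mean is treated as noise
        p_signal = np.mean(signal ** 2)   # mean signal power
        p_noise = np.mean(noise ** 2)     # mean noise power
        return 10 * np.log10(p_signal / p_noise)

    # e.g. a 1 Hz sine "measured" over 20 noisy trials
    t = np.linspace(0, 1, 500)
    trials = np.sin(2 * np.pi * t) + 0.3 * np.random.randn(20, 500)
    print(f"estimated SNR: {snr_db(trials):.1f} dB")

Even so small a sketch preserves the definitional point at issue: “noise” is simply whatever the averaging procedure declines to keep.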
As slippery as definitions of noise can be, perhaps the most widely-applicable
characterizations position it as the interfering complexities, unmapped entanglements, and
contextual relations of any phenomenon singled out for investigation. In short, following
information theorists like Dennis Gabor and Abraham Moles, we might say that noise is
dependent upon the intentions of the observing subject. Yet, even this definition leads to a
certainty that noise has the potential to destabilize. To “worldize” is therefore conceived as a means
for recovering or operationalizing the productive qualities of noise (in whatever form they might
take) without taking these qualities to be essential to a definition. In the context of art-science
more broadly, what I would like to suggest is the development of situated tactics that allow us
to “surf” the noise extant within the penumbral space between domains, never quite settling into
the comfortable certainties that would characterize the creation of a new order.
The tactics themselves are simple enough; they are three in number and are variations upon
tools that are deployed within a number of practices and discourses, from traditional
phenomenology and hermeneutics to the everyday activities of art-making. While I will focus
mainly on one per chapter, all of them will appear to some degree or other throughout. In chapter
three, I will focus primarily upon the tactic of association, which, in its simplest form, might be understood as “drawing together” or making connections across ideas, boundaries, and domains. This tactic, I will argue, goes far in helping us understand Art’s epistemic potential in the arena of
art-science. The tactic of variation will appear in chapter four and is cut nearly whole from the ideas of “eidetic variation” and “free imaginative variation” found in Husserlian
phenomenology. Variation concerns the embodied relations of artists to their materials, the
manner in which artists intentionally shift or alter their perceptual experience as a means for
sussing out “stabilities” and “invariances” in their objects of study, and the methodical
interventions that follow from this process in both technologies and the abstractions that
underpin them. Finally, the tactic of notation concerns the creation of new languages in both
individual art-science projects and art-science collaborations. Notation might be thought as the
development between collaborators of what Galison calls “interlanguages,” however, it might
also be understood as a method for pinning down the more elusive aspects of a phenomenon that
are perceptually “dark” at one register, but the patterns of which become clear at another.
All of the tactics above, I will argue, de-emphasize the importance of establishing discipline-
specific certainties, and afford a greater freedom of movement along the knife edge between
intuition and those phenomena that outrun it (e.g. alterity of scale, mathematics, perception, or
organizational principle). How can art-science best walk this vertiginous edge? What
understandings can such hybrid activities produce? What does an aesthetics of noise and
distortion have to offer rational investigation? In order to begin suggesting answers to these
questions we must first dispense with what I will call “the iconographies of Art and Science” and
get a clearer sighting of how each order has historically produced and maintained its boundary
conditions, most importantly with respect to knowledge production. For the knowledge question and its concomitant problematics regarding Science’s privileged access to “the real” have been the most reliable disruptors of the potential for art-science to effect positive change in our understanding of both constituent orders and their relationships to the experiential.
2 Iconographies of Art and Science
2.1 About this Chapter
This is a work about data, their “disembodied” condition, and the strategies we employ to
make the referents of data legible and intuitive as real things in the world. Specifically, this is a
work about scientific data and those works of art-science that make them the focus of praxis.
Accordingly, the type of data that concern us here are not those that have become the focus for
discourse around social media, mass surveillance, or online shopping habits; rather, these are
data produced via radio telescopes, x-ray crystallography, ionosondes, and other scientific
instruments. What nearly all of these instruments have in common is that they probe and measure
phenomena that fall outside the scope of biological perception, pointing to a universe comprising
a great deal more than what is given to our senses, accounted for in our theoretical constructions,
or intuitive to embodied minds. The data afforded by scientific instruments give us a wider,
empirical window upon physical reality, but it is a window glazed with the filters of abstraction.
The dynamic flows of energies and material we study are pinned down to the scientific grid, each
pin forming another vertex of a mathematical figure that enjoys a clarifying autonomy from the
noise and uncertainty of existence. To abstract a thing is to pull it from the complexity of its
situated contexts and to fix its universal properties.
The present work is driven by the question of what is gained by recovering and representing
noise and complexity in our visualizations and sonifications of data. It is an attempt to
problematize the familiar hand-waving we encounter regarding “artistic modes” of data
visualization, which are often classed differently from those visualizations that make pattern
recognition their primary concern. Such works are appreciated for their superficial beauty, but
are almost never analyzed for their “usefulness,” for the affective responses they evoke, or for the
potential role these responses might play in developing intuitions of the phenomena underlying
the data. Broadly speaking, this question gestures toward the more general roles for artists in the
arena of art-science praxis. If we truly desire art-science to be a space of radical hybridity, what
do artists, whose methodologies are heterogeneous in the extreme, bring to the table? How might artists benefit from their exposure to scientific inquiry? What do scientists gain beyond
the satisfaction of making scientific concepts accessible to a lay public (important though such
accessibility might be)?
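To fix a baseline for what “perceptualizing data” can mean at its most elementary, consider a deliberately minimal sketch of parameter-mapping sonification, in which a series of instrument readings is scaled onto a pitch range and rendered as audible tones. The code, the function name sonify, and the output file are my own hypothetical illustration in Python, not a description of any work analyzed herein:

    import numpy as np
    from scipy.io import wavfile

    def sonify(values, duration=0.25, f_lo=220.0, f_hi=880.0, rate=44100):
        """Map each data value onto a pitch between f_lo and f_hi and render
        the sequence as a single mono waveform (parameter mapping)."""
        values = np.asarray(values, dtype=float)
        span = np.ptp(values) or 1.0                # guard against flat data
        norm = (values - values.min()) / span       # rescale readings to 0..1
        tones = []
        for x in norm:
            freq = f_lo + x * (f_hi - f_lo)         # linear pitch mapping
            t = np.linspace(0, duration, int(rate * duration), endpoint=False)
            tone = np.sin(2 * np.pi * freq * t)
            tone *= np.hanning(tone.size)           # soften clicks between tones
            tones.append(tone)
        return np.concatenate(tones)

    # e.g. render a slowly wandering reading as a sequence of pitches
    data = np.cumsum(np.random.randn(40))
    wavfile.write("sonification.wav", 44100, (sonify(data) * 32767).astype(np.int16))

Every choice in such a mapping (the pitch range, the scaling, the envelope) is itself an act of abstraction of precisely the kind this work seeks to interrogate.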
Before drilling into these questions in earnest, we must first take a sighting of the two orders
we wish to hybridize and focus specifically on those habits of discourse that stand out as the
most common stumbling blocks for productive art-science. These are emergent from what I will
call the iconographies of Art and Science (I capitalize these terms as a way of denoting their
status as knowledge domains, in contradistinction from their workaday products). I wish to call
out the iconographies for two reasons: 1) They do not provide a complete picture of the real
differences between the Arts and Sciences (of which there are many), but instead sketch only
stick figures produced from unwarranted certainties made tacit over years of institutional,
cultural, and discipline-specific learning; 2) The iconographies are predominantly grounded in a
culture of critique that has historically pitted each order against the other in conflicts regarding
the nature of reality itself. On its face, this reasoning might seem to be making mountains of
molehills, and that each knowledge domain can only stand to benefit, or at least be kept honest,
from spirited critique. I am unconvinced by this argument, not because I believe skeptical
critique is unproductive or because I adhere to a verboten naive realism, but because, given the
depth to which these critical habits have penetrated thought on either side, and the degree to
which critics often gird themselves against counterarguments, it remains unclear whether such
critiques are capable of producing new insights. What is very clear is that much good faith has
been squandered in order to maintain the debate. Science, after all, makes the discovery and
rationalization of new phenomena and new chains of causality the center of its operations, and all
of these amount to very little if their very reality is explained away as idealist products of mind
or, alternatively, of social activity.
Already, I can hear the high-pitched whirring of reflexive and deeply-felt rejoinders spinning up in the heads of those with a taste for post-Kantian metaphysics, but before snap judgements are made, I will say that, at the very least, it is unclear how to proceed in theorizing an art-science
praxis—one which, by definition, concerns itself with hybrid experimentation with real things
having real effects in a real world—without first rethinking some or all of the iconographies that
have made cartoons of both scientific and artistic knowledge production. The effect these straw
people have had upon the images we hold of one another’s disciplines de-tethers the knowledge
we produce from the ground-shaking realness we sense in our encounters with awe—that quality
of experience around which both Art and Science organize many of their respective activities.
Such encounters are qualitatively different from (one might say antithetical to) a distanced,
skeptical attitude with respect to real things, and it is my contention that to formulate evocative,
art-science approaches to abstraction—approaches that recover and operationalize the
complexity out of which abstraction is culled—we must place ourselves back into a universe
populated by real things, most of which retreat from our biological senses, but which nonetheless
hold as much potential for awe as do seascapes, aurorae, and solar eclipses.

2.2 The Two Orders
So, what are the iconographies of Art and Science? First, let us consider that, despite the
frustrations both artists and scientists are likely to express at the most commonly held
misconceptions of their fields, they are just as likely to find themselves rehearsing similar
misconceptions of the fields of their collaborators as they enter art-science discourse. This small
hypocrisy should not surprise us; while the disjuncture between the Sciences and the Arts has
arguably begun to close since C. P. Snow’s 1959 Rede lecture “The Two Cultures and the
Scientific Revolution,” many of the tacit assumptions that produce it persist nonetheless. To be
sure, much cross-pollination has brought the Arts and Sciences into more cozy relations since
Snow’s lecture and these hybrid practices seem only to have increased in number and quality
since the turn of the millennium, giving us good reason to be optimistic about the “third culture”
Snow anticipated. Indeed, two recently published books, Hybrid Practices: Art in Collaboration
with Science and Technology in the Long 1960s and Gyorgy Kepes: Undreaming the Bauhaus,
each do an excellent job of unpicking the historical threads that weave the structural fabric for
contemporary developments in art-science. Nevertheless, despite these histories and a rising
interest in hybridity and collaboration, the iconographies each culture has internalized of its
perceived opposite continue to go unexamined and to disrupt collaborative potential.
As for the iconographies themselves, the most commonplace caricatures of the Arts and
Sciences place the two in binary opposition in some very crucial and problematic ways, and even
the most earnest attempts at mapping a third culture have tended to re-inscribe these same
binaries. Art is understood as thoroughly subjective; Science, objective; Art is understood as
concerned with emotions and pleasure; Science, with facts about the universe; Science is viewed
as serious, cold, calculating, and its knowledge generalizable, while Art is argued as the more
“humanist” endeavor (a most ambiguous qualifier), a product of leisure, and its very definition as
always already generalized—an imprecise validation of any personal choice or taste that falls
under the umbrella of Art’s “do whatever” slogan. Most of these iconographies are relatively
benign, but many can lead to ideological posturing that can wreak unintended havoc on hybrid
collaborations or, more subjectively, damage the effective potential of individual art-science
interventions.
While I hope to dismantle the most troubling iconographies in the work to follow, I equally
hope to nuance some of the more common platitudes aimed at showing how art and science
“aren’t so different after all.” What I wish to illustrate is that what disrupts perceived art-science
binaries is not simply how each side reflects within itself some image of its counterpart in
miniature, but how each established order of theory/practice within this supposed binary
comprises a multiplicity of approaches to certainty and materiality whose exclusions form a
productive, phenomenal field to be investigated. This field is best recognized at the penumbral
boundaries produced between orders where the gradients of uncertainty are most interleaved with
the gradients of high certainty, material production, and sense-making. To collaboratively and
productively explore penumbral space, where our objects of study are most defamiliarized,
requires a stereoscopic approach capable of holding the manifest image and the scientific image
within the same philosophical field of view. It also requires a healthy amount of good faith
between theorist/practitioners with respect to the methodologies each domain employs to
produce knowledge of and through their objects—a good faith which, as I have already
suggested, is eroded by the iconographies I seek to challenge.
Let us re-examine some of the iconographies I have listed above. As noted, there is much
theory/practice stemming from today’s art-science projects and collaborations that would seem to
challenge many of our preconceptions of the two cultures. However, in these same projects, we
are also likely to find observations that reproduce the very stereotypes they seek to undo. I would
suggest that this is due in large part to a failure on both sides to examine the epistemic
foundations of each field. Many such projects rush to prove that art and science “aren’t so
different after all,” yet while the boundaries of art and science often do intersect, our enthusiasm
for hybridizing these two fields should not obscure the very real differences between them or the
benefits such differences afford. Put into conversation, these differences might produce a
dialectical synthesis that is otherwise unachievable. Importantly, real differences—those that are
more than caricature—require careful unpacking if we are to prevent the straw persons we burn
in haste from rising out of the ashes in unforeseeable ways. I will begin by suggesting that the
most problematic iconographies follow from a widely recognized issue in academic discourse
that I will dub the “epistemic divide.” The knowledge of Science on one side of the divide is
understood as generalizable and oriented around universal truths while, on the other side, the
knowledge of the Arts (if it can even be said to count as knowledge per se) is submitted to
wholly different epistemological criteria and determinable only by a rubric of personal taste,
technical virtuosity, or alternatively, by social function.
The characterization above is only a rough sketch, but before I begin to more thoroughly
flesh out the contours of the epistemic divide, I should offer a few caveats. First, whereas the
discourse since Snow’s lecture has focused mainly upon the disjuncture between the Sciences
and the Humanities, I have modified this formulation slightly by choosing to focus on the divide
as it pertains specifically to the Arts. Moreover, I consider design and its conflation with Art to
present problems and opportunities for art-science that are distinct from those presented by the
iconographies of Art as a more general field. In many respects design is much easier to square
with the knowledge of Science and so it will wait in the wings for later discussion. Second,
because I am only a visiting ambassador within the domain of the Sciences, my strategy for
unpacking and troubling its epistemic iconographies will differ from that I will apply to the Arts.
In the case of the latter, my experience as a theorist/practitioner will inform a much closer
reading and analysis. As for the former, what will follow in this chapter is an extremely
abbreviated historiographic synopsis of Science’s epistemological issues drawn mostly from
philosophy of science—a field distinct from that of everyday scientific practice. I will
undoubtedly omit or gloss over much in this short journey, but I will hopefully touch upon
enough of this history’s philosophical high points to help dismantle or restructure the most
troubling iconographies of scientific knowledge and provide some footholds for more practice-
oriented analyses in the chapters to follow.

2.3 Science and Reality
In my estimation, it was the so-called “science wars” of the mid-to-late twentieth century that
widened the epistemic breach seemingly beyond repair. The period between the late sixties and
the early aughts saw Humanities scholars and social scientists launching multi-front assaults on
“scientism,” positivist thinking, and foundational appeals to objectivity as a way of pushing back
against the perceived inevitability of scientific progress and its truth claims. Many sought—to
use a common refrain—“to show how things could be otherwise.” Defenders of Science in turn
often dismissed such attacks out of hand as mere relativistic or, worse still, solipsistic thinking.
To echo Steven Weinberg, the effect of these salvos was that scientists working in the trenches of
everyday practice came to view philosophy of science as “about as useful to scientists as
ornithology is to birds” (qtd. in Haack 21).
Despite the worst pitched battles of the science wars, the knowledge of science continues to be
held as generalizable in the strong sense, regardless (or perhaps because) of its historicity and
social character. Philosopher Susan Haack likens the knowledge of Science to entries in a
gigantic crossword puzzle, admitting that, “progress in the sciences is ragged and uneven, and
each step, like each crossword entry, is fallible and reversible. But,” she says, “each genuine
advance potentially enables others, as a robust crossword entry does […]” (Haack 25). That this
puzzle can most likely never be finished is beside the point; a puzzle without defined edges
presents no fewer localized solutions. As Haack suggests, what makes the knowledge of Science
generalizable is the manner in which it is produced by systems of evidence and inquiry that are
rarefied versions of those we use to conduct investigation in our everyday lives regardless of our
predilections or expertise. “[A]s for its method,” says Haack, “it is what historians or detectives
or investigative journalists or the rest of us do when we really want to find something out: make
an informed conjecture about the possible explanation of a puzzling phenomenon, check how it
stands up to the best evidence we can get, and then use our judgement whether to accept it […],
modify, refine, or replace it” (Haack 24).
In contrast, it remains equally difficult, for reasons I will later explore, to make the case for
Arts methodologies as capable of producing generalizable knowledge in the same way as do
those of the Sciences. The rub, of course, lies in that tricky phrase “in the same way.” It is
commonly suggested that Art and Science concern themselves with the same universal themes
(life, death, existence, etc.); they just get to their answers via different routes. For certain, the
characterization of the Arts and the Sciences as “separate but equal” with regard to the horizon of
experience is comforting; it is also, I would suggest, a red herring. More to the point, it simply
papers over many of the crucial differences we caricature in our iconographies and subsequently
reify in practice. In many respects, these differences are resultant from a widening of the fissures
created by what Gaston Bachelard in The Formation of the Scientific Mind characterizes as the
ruptures required for scientists to overcome “epistemological obstacles” (e.g. appeals to intuition,
previous knowledge, or sensorial experience) (Bachelard 24-25) We can also, however, think of
it as following from the manner in which post-Kantian critical epistemology turned upon itself,
creating an inward regression that positions radical exteriority, or “out there-ness” forever out of
reach.
We will arrive at the metaphysical conditions for the divide very soon. For present purposes,
however, it might prove more helpful to begin with the knowledge of Science, understood in
simplest terms, as propositional knowledge: statements of fact characterized by justified, true
belief. Speaking just as generally, the Arts and Humanities could be said to deal mostly with
situated or culturally contextual knowledge—an epistemological position in which knowledge(s)
of objects is/are inextricably linked to the subject positions making claim to it/them. Neither of
these characterizations is exact, but for now they will serve to anchor the iconographies I am
seeking to trouble. With respect to the Arts, Humanities, and social sciences, many theorists of
the mid-twentieth century and up to the present day argue in favor of alternative “knowledges” to
stand in place of propositional knowledge, often viewing methodologies of the latter as
irredeemably stained with colonialist, sexist, and racist histories that have elided or erased the
situated knowledges of indigenous cultures, women, people of color, or indeed any group
“othered” by asymmetrical power relations. The most radical views along these ideological lines
seek to discredit the epistemological foundations of objectivity and facts themselves, while softer
versions take aim at the cultural biases of Science and undermine any characterization of its
methodologies as “value free.”
Yet, while critiques of cultural bias in the Sciences are very often justified in their application
to morality, ethics, and social justice, they also risk leaning too heavily upon one of the more
intractable epistemological iconographies of Science: that the production of scientific facts is a
social practice and that Science’s truth claims are therefore no better founded than those of any
social activity. What follows from this assumption is that scientific knowledge and
methodologies may be viewed as culturally relative or simply as tokens in games of power. The
first thesis—that Science comprises social practices like any other—could be read as the weaker
of the two. Indeed, it is obvious that Science is a deeply social enterprise, much more so, in fact,
than many Arts practices. However, the second thesis, which holds Science’s factual claims as no
better founded than those of other social activities, gets us into deeper and more dangerous
waters almost immediately: deeper because it raises very sticky questions about warrant, and
dangerous because of the epistemological nihilism that it can (and has) engender(ed).
There are many reasons to believe that the activities of Science are epistemologically distinct
from (not universally “better” than) other socio-cultural knowledge practices, and these will
become clear as we move along. For now, I would suggest a reading of the two theses above that
holds them as being more than products of a generalized anti-Science bias (although this bias is
often evident); the impulse driving them is often commensurate with the degree to which
“situated knowledge” and “generalizable knowledge” are understood to be at odds, and rather
than trace the limitations of each and forge a productive and amicable truce between them, many
find it more comfortable to simply reduce one understanding of knowledge to the other. In what
follows, I hope to suggest ways for the hybrid practitioner to think through and past the supposed
incompatibility of situated and generalizable knowledge, but to round out this segue into our
little historiography, let us first examine how the epistemic divide can bubble up to the surface of
discourse.
Accompanying the defense of situated knowledges, we often find “the lovely and nasty tools
of semiology and deconstruction” (Haraway 577) brought to bear upon the “true” (read
universally true) in “justified, true belief” in an attempt to redefine knowledge per se as radically
situated or contingent. Objectivity, reason, and other epistemological workhorses of Science are
given a liberal dose of scare quotes in many of these works—a signifier of skeptical criticism
that is a sure indicator of what Susan Haack calls the “passes-for fallacy” (Haack 27-28). More
nuanced critiques of objectivity attempt to show the impossibility of a “view from nowhere” or,
alternatively, a God’s Eye View (GEV)—a disembodied perspective that affords a view of
“truth” undistorted by bias. The impulse to take a position against objectivity appealed to in this
way is understandable; early Enlightenment thinking, borrowing heavily from Platonist idealism
and still in the early stages of its break with religious thought, was wont to organize the world
into immutable classes, put in their respective places and set in motion by the Creator Himself.
Carl Linnaeus’s Systema Naturae, published in 1735, was the first work to fully break from
Aristotle’s “great chain of being” and give name to these classes, but its formulation was
nonetheless interpreted as descriptive of God’s logic. No deeper analysis was needed beyond the
division of phenomena into their respective classes as designated by the Creator. Moreover,
scientists of this period had an unfortunate tendency to explain away or alter their representations
of outliers that did not fall neatly into God’s rational ordering scheme—a habit belonging to the “epistemic virtue” Daston and Galison term “truth to nature” (Daston & Galison 109-113).
Who would not want to buck against such a stifling, patriarchal view of the universe—one
which only serves the status quo—especially when it can be shown to be so unscientific?
Assertions that ring of Enlightenment rhetoric are still voiced on occasion in casual scientific conversation. In the arena of practice, I have often had my sensibilities tested by
scientists who push back against proffered artistic interventions or design solutions with
outbursts like, “that’s just not the way that particular protein looks!” or “we have to do justice to
the truth of the phenomenon!” What on Earth could justify such certainty around the universal
“truth” or “look” of entities that are not only radically imperceivable, thus slipping the net of our
most baseline empirical apparatuses, but whose existence in a changing, evolutionary continuum
also presents challenges to any theory of universals? There are a number of practical and
philosophical answers to this question. John Bigelow, for example, has offered what he terms a
“physicalist account of mathematics” that attempts, albeit parenthetically, to illustrate the
material/physical bases for abstraction into universals (Bigelow 22), but any account along these
lines is riddled with uncertainty and must go to great lengths to avoid the pitfalls of early
Enlightenment thinking. If today such casual and unreflective appeals to naive realism still crop
up from time to time, they were without a doubt much more common when the first shots of the
science wars were fired.
In the face of scientific hubris or unwarranted deference to a naive realism, we can certainly
support a push to legitimize subjectivity, interpretation, and situated knowledge. It would be
prudent, however, to keep two points in mind: first, giving a legitimized voice to subjectivity
need not be contingent upon ill-fitting critiques or a wholesale rejection of a modest and
thoroughly-considered scientific realism, the latter of which has already been shown (e.g. in the
works of a number of materialist thinkers) to dovetail quite neatly with the aims of many in the
(post)Humanities; such a move can in fact be self-defeating for a number of praxical aims,
particularly those of art-science. Second, my suspicion is that many of today’s working scientists
are likely to slip into the language of naive realism more as a linguistic shortcut than as a voicing
of staunch philosophical positions. When hard-pressed about what she really means by such
words as “real” or “fact” or “truth,” the scientist is likely to express a more nuanced view of
Science’s negotiations of the probabilities and certainties of hard-won scientific claims. It is
therefore prudent for theorist/practitioners from the Arts or Humanities to check their own
presumptions regarding the philosophical positions of their scientific counterparts; a thoroughly
Socratic approach might reveal more subtlety than the daily grind of scientific practice provides
the opportunity to express. More plainly put, the impulse to trouble “facts” or appeals to a naive
realism often risks regression into naive criticism. A tenable alternative to the former naiveté is
not necessarily found in the latter.
Many critiques of the GEV or of scientific truth claims in general argue from some form of
social constructionism. Social constructionism itself, however, comprises an extremely diverse
and often contradictory set of approaches and ideas. It is therefore not my intention to clumsily
paint them all with the same brush; but neither is it my intention to parse every tributary
constructionist argument. For present purposes we need only examine what I will characterize as
appeals to a generalized “strong form” of constructionism on the part of anti-science combatants—a form that, unsurprisingly, is often deployed by its supporters and detractors alike in a manner
that flattens the heterogeneity of constructionist thought into its own iconography. What I would
emphasize is that even while contemporary theorists have turned their critical tools elsewhere,
many of the most problematic assertions of the strong form (or at least its iconographic
silhouette) have seeped into everyday argumentation and continue to manifest in the trenches of
art-science praxis, chipping at the edges of the epistemic divide.
In order to get a grip on what the strong form is, let us consider constructionism’s “weaker”
forms as they are often applied to Science. These versions of social constructionism seek to
illustrate the contingency of scientific progress on social/historical events or activities. Therein,
we find various observations of the manner in which Science has, historically, chosen this branch
of exploration over that, not as a fixed, teleological trajectory, but by dint of numerous social
mechanisms of which well-defined scientific activity is but one component. Alternatively,
scientific progress is decoupled entirely from the physically real and made the sole product of
social activity. Thomas Kuhn, whose highly influential The Structure of Scientific Revolutions
marked a sea change in post-positivist scientific thought, differentiates between “normal
science,” defined as “research firmly based upon one or more past scientific achievements,
achievements that some scientific community acknowledges for a time as supplying the
foundation for its further practice” (Kuhn 10), and the leaps and bounds made by scientific
progress, which Kuhn characterizes as a series of shifts in “paradigms”—the works or
discoveries that help structure the “legitimate problems and methods of research,” because “their
achievement was sufficiently unprecedented to attract an enduring group of adherents away from
competing modes of scientific activity [and was] sufficiently open-ended to leave all sorts of
problems […] to resolve” (Kuhn 10-11).
Science’s socially mediated course corrections, Kuhn argues, lead to specific discoveries that
are periodically incommensurable with previous paradigms and are ultimately what determine
the trajectories of Science. For Kuhn, this fact challenges the view of Science as a linear
progression toward teleological ends and his critique of scientific advancement helped to shape
many similar historiographic and constructionist works to follow. What is of note in these weaker
forms of constructionist thought is that, while they show the social and historical contingency of
scientific activity (and thus how things could be otherwise), they do not debunk the knowledge
or even the methodologies of Science outright. It would be more accurate to say that they
problematize the notion of scientific progress as driven “forward” solely by the workhorses of
rationality, evidence, and objectivity. In this way, many constructionist views could be said to
present a more realistic picture of scientific activity by accounting for the mediating role played
by its social/historical facets in a wider complex of driving factors.
On the other hand, the iconographic strong form of social construction goes further. As
previously stated, its foundational premise is not just that science is a social activity like any
other, but also that its insights are no better founded than any other. The most rigid adherents to
the strong form, following the work of Berger and Luckmann or the tradition these authors
helped to further, would hold that facts and reality themselves are radically constructed by human
social interaction. “How is it possible,” ask Berger and Luckmann, “that subjective meanings
become objective facticities? […] How is it possible that human activity […] should produce a
world of things […]? In other words, an adequate understanding of the 'reality sui generis' of
society requires an inquiry into the manner in which this reality is constructed” (Berger and
Luckmann 30). Such questions and the framework provided in answer are easily interpreted as a
privileging of social activity and specifically language as the means by which facts are worked
up from subjective encounters, but of course it does not follow from Berger’s and Luckmann’s
inquiry into the sociology of knowledge that the universe exterior to human life does not exist
until socially constructed or that some methods for conducting inquiry aren’t better founded than
others. Neither, I would argue, did Berger and Luckmann intend to promulgate such a view.
To reiterate, for our purposes it does not matter if the iconographic strong form continues to
be explicitly adhered to or whether it is based upon misinterpretations of the literature; what is at
issue is the extent to which it tacitly underpins the iconographies of science so often brought into
and critiqued within art-science discourse. My suspicion is that such critiques trade upon a
slippage between what John Searle has termed “epistemological subjectivity” and “ontological
subjectivity.” “Some statements,” says Searle, “can be known to be true or false independently of
any prejudices or attitudes on the part of observers. They are objective in the epistemic
sense” (Searle 113). He extrapolates this view, stating,
[I]f I say, ‘Van Gogh died in Auvers-sur-Oise, France,’ that statement is epistemically objective. Its truth
has nothing to do with anyone's personal prejudices or preferences. But if I say, for example, ‘Van Gogh
was a better painter than Renoir,’ that statement is epistemically subjective. Its truth or falsity is a matter at
least in part of the attitudes and preferences of observers. In addition to this sense of the objective-
subjective distinction, there is an ontological sense. Some entities, mountains for example, have an
existence which is objective in the sense that it does not depend on any subject. Others, pain for example,
are subjective in that their existence depends on being felt by a subject. They have a first-person or
subjective ontology (Searle 113-114).
Strong constructionist views that perform what I will call, following Searle, the “ontological
slip” have been hotly debated in the science wars. Bruno Latour’s and Steve Woolgar’s highly
influential Laboratory Life: The Construction of Scientific Facts has been something of a
Rorschach test at the center of many such debates. Therein, the authors set about showing how
the everyday social activities of researchers at the Salk Institute “constructed” the fact of the
peptide hormone, somatostatin. This work was widely criticized by defenders of Science and
championed by strong constructionists who sought to undermine Science’s unique
methodological access to the real. Ian Hacking has argued, however, that while as a philosopher
he is discomforted by much of Latour’s and Woolgar’s language in that work, what was at stake
in their investigation was not reality in itself, but something quite different:
Latour and Woolgar briefly emphasized etymology. The word “fact” comes from the Latin factum, a noun derived from the past participle of facere, to do, or to make. Facts, they said, are made. Since made things exist, Latour and Woolgar […] did “not wish to say that facts do not exist nor that there is no such thing as reality.” Their point was “that ‘out-there-ness’ is the consequence of scientific work rather than its cause.” And: “‘reality’ cannot be used to explain why a statement becomes a fact.” (Hacking 81)
To the extent that Hacking’s analysis is accurate, it reveals a general lack of nuance in how
Latour’s and Woolgar’s work has been both critiqued as nonsense and praised as gospel. Other
philosophers, like Susan Haack, are not so generous. In Defending Science - Within Reason, she
argues,
There are three interpretations in which ‘the’ social constructivist thesis is true:
1. Social institutions, etc., are socially constructed insofar as they are constituted by people’s beliefs
and intentions.
2. Laboratory artifacts are brought into being by means of scientists’ physical manipulations.
3. Scientific concepts are brought into being by the intellectual work of scientists.
But, explicit in Latour and Woolgar […] is a much stronger thesis:
4. Not only scientific concepts, but the objects to which they apply—not only the concept of gene or
electron, but actual genes and electrons are brought into being by scientists’ intellectual
activities (Haack 190-191).
This stronger claim, argues Haack, “amounts to nothing less than an outright linguistic/
conceptual idealism” (Haack 190). Without a doubt, such language as, “We are not arguing that
somatostatin does not exist, nor that it does not work, but that it cannot jump out of the very
network of social practice which makes possible its existence,” (Latour & Woolgar 183,
emphasis added) feeds such accusations. Does “its existence” refer to the epistemological status
of the “fact” or the ontological status of the thing itself? If the latter, are we truly expected to
believe that somatostatin popped into physical existence (as opposed to discursive or
representational existence) because of the social activities of those observing it? Surely not… or
are we? Latour’s and Woolgar’s language is notoriously slippery, allowing for such
(mis)interpretations.
Whether intentional or not, it is the “linguistic/conceptual idealism” sensed at the heart of
many constructionist attacks on scientific claims, in addition to the relativist attitude re Science’s
investigative affordances, that I find to be a consistent and reliable disruptor of art-science praxis
today. My contention is not that there are no aspects of scientific practice that are socially
constructed or that these should be verboten topics within the discourse, but that, historically, the
overcorrection we find in the iconographic strong form of social construction produced a genie
of cynicism that further widens the epistemic divide and hinders productive discussion. This
genie has, moreover, proven difficult to return to its bottle and has arguably fed the
epistemological nihilism we find in today’s wider political debates. For the staunchest defenders
of the Humanities in these latter-day brushfire conflicts, appeals to realism, facts, or objectivity
continue to represent naive, conservative thinking. Conversely, for defenders of Science, tacit
idealism and the abuse of scientific knowledge by critical theory and strong-form
constructionism are simply “fashionable nonsense” (Sokal & Bricmont 4-6).
Thankfully, in the heightened anxiety of current debates and mass disinformation campaigns
re such issues as climate research and vaccines, many within the Humanities and social sciences
have back-pedaled from hard-line constructionist (read relativist) positions. Bruno Latour’s 2004
essay, “Why has critique run out of steam? From matters of fact to matters of concern,” reads
(perhaps only slightly) as one such mea culpa. In those pages, Latour certainly appears to be
giving new marching orders and to suggest, as the title indicates, a retreat from critique bent on
undermining scientific facts and a rallying of troops around more social and ethical problematics:
[…] the critical mind, if it is to renew itself and be relevant again, is to be found in the cultivation of a
stubbornly realist attitude […] but a realism dealing with what I will call matters of concern, not matters of
fact […] Reality is not defined by matters of fact. Matters of fact are not all that is given in experience.
Matters of fact are only very partial and, I would argue, very polemical, very political renderings of matters
of concern and only a subset of what could also be called states of affairs. It is this second empiricism, this
return to the realist attitude, that I’d like to offer as the next task for the critically minded (Latour 231-232).
Others, however, have been evincing a discomfort with epistemological relativism from
within the ranks of the Humanities since long before Latour’s essay was published. Donna Haraway, writing in the decade before the science wars saw what is arguably their most savage counter-attack in Alan Sokal’s famous hoax, lamented in her essay, “Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective,”
So much for those of us who would still like to talk about reality with more confidence than we allow to the
Christian Right when they discuss the Second Coming and their being raptured out of the final destruction
of the world. We would like to think our appeals to real worlds are more than a desperate lurch away from
cynicism and an act of faith like any other cult's, no matter how much space we generously give to all the
rich and always historically specific mediations through which we and everybody else must know the world
(Haraway 577).
What unsettles Haraway in the passage above seems evident: In the rush to trouble appeals to the
“natural” and its concomitant realist foundations, social critics risk disqualifying some of the
most effective epistemic tools for supporting their own arguments, even among their own peers
and allies, resulting in what she refers to as a “self-induced multiple personality
disorder” (Haraway 578).
Given observations like Haraway’s, the “turns” of Humanities discourse toward posthumanist
trends like ontology and affect, and Latour’s suggested regrouping around “matters of concern,”
does strong-form social constructionism continue to be the self-appointed boogeyman haunting
the nightmares of today’s closet deferentialists? As Ian Hacking has noted, philosophies that
characterize mind-independent reality as radically socially constructed have always been so rare
as to be effectively non-existent (Hacking, The Social Construction of What? 24-25) and it is
arguable that to the extent that early constructionist works like Laboratory Life are limited to
showing the activities of science to be social endeavors, they reveal little more than is already
obvious to science practitioners. If this is true—if strong-form constructionism is so rare—why
does it continue to rear its head so often within art-science discourse? My suspicion is that this is
due to a general metaphysical attitude that has become so ambient that it is rarely faced directly
save by scholars who make it their intellectual bread and butter.
The trouble lies in the stickiness of “real things” to the social/mental procedures and
negotiations we employ to communicate them. This stickiness, which has helped cement the
epistemic bedrock for much post-Kantian critical thought, has been dubbed “correlationism” by
Quentin Meillassoux and been heavily critiqued by other materialist thinkers, particularly from
within the ranks of the Speculative Realists. “By ‘correlationism,’” says Meillassoux, “we mean
the idea according to which we only ever have access to the correlation between thinking and
being, and never to either term considered apart from the other” (Meillassoux 5). Building on
Meillassoux’s definition, Levi Bryant has demonstrated that, while many anti-humanist
movements (e.g. structuralism, post-structuralism, etc.) have de-privileged the role of individual
human minds in the correlation of being, they have simply replaced it with other anthropocentric
concepts such that being is made correlative to “language, power, or social relations” (Bryant,
The Democracy of Objects 39).
Correlationism, or indeed metaphysical formulations in general, might be viewed by some
workaday scientists as “too philosophical” to be of use for the work they do, but even
philosophers of an analytical bent are wont to skirt correlationist problematics. For example, Ian
Hacking, who considers himself an anti-realist with respect to scientific theories but not
“experimental entities” (Hacking, Representing and Intervening 22-23) has no problem
whatsoever in making a distinction between the aggregation of knowledge in itself and the
diverse social activities in which we engage to produce said aggregation. This is a view that
dovetails with my own, but only insofar as the reality of the things accounted for in this
aggregation refers to physical objects or forces having absolute autonomy from human thought.
Cultural artifacts (ideas, beliefs, institutions, and especially art) are real at a different register; they
can have physical effects in the world (e.g. the effect of policy on the world’s climate), but their
reality is clearly predicated on human thought and activity.
Complicating matters is the fact that not only do these two facets of “the real” affect one
another in non-trivial ways, but philosophers, theorists, artists, and even a few scientists display a
tendency to unwittingly slip between categories. In his work Onto-Cartography, Levi Bryant
makes a task of facing this slippage head-on as he charts the flows of reciprocal influence
between what he terms “incorporeal” and “corporeal machines”—material and immaterial
“objects” that send, receive, and process information amid the networks they create: “A
corporeal machine is any machine that is made of matter, that occupies a discrete time and place,
and that exists for a duration […] Incorporeal machines, by contrast, are defined by iterability,
potential eternity, and the capacity to manifest themselves in a variety of different spatial and
temporal locations at once while retaining their identity” (Bryant 26). Importantly, Bryant sees
these two “machines” as being radically dependent upon one another. His ontological proclivities
are particularly useful for art-science as they attempt to make distinctions that support a total
picture of reality that gives nature and nurture equal weight. Still, we must be careful. As I said in
the introduction to this work, ontologies are where we are most likely to find radical alterity or
exteriority over-coded (and here I am using Bryant’s own term to problematize his approach) by
theoretical schemas.
To take the penumbral view is to understand the ongoing debate between nurture and nature
as mostly driven by rhetorical postures in which certainties are produced prematurely, each side
struggling to remap the line of demarcation as though it were sharply drawn and not itself a
space of diffraction, an ecotone, or a crosshatching that structures daily experience. Like Bryant
and a number of contemporary realists, the view I am taking herein is this: while abstraction is
produced from our engagements with the recurrent patterns of the physical world—while it
affords us the ability to intervene in that world—there is nothing to indicate that the physical
world itself is wholly dependent upon abstract schemas (or social behavior, or power
differentials, etc.) for its existence. What follows from this view is that, while strong
constructionist statements like Latour’s and Woolgar’s (examined by Hacking in the above
quotation) might be said to escape charges of anti-realism with the authors’ insistence that
constructed objects are real, they suffer nonetheless from a chronic case of ontological slip: They
conflate the real, epistemologically objective models of somatostatin with the real, ontologically
objective substance we give the label somatostatin.
To the extent that scientists engage with metaphysics at all, they explicitly center their praxes
on questions of how the world is—questions of ontology—while adherents to a tacit,
correlationist metaphysics must always fall back upon questions of access, which is to say
epistemology. On its face, it is tempting to think that the split in focus between ontology and
epistemology might easily be corrected by simply drawing attention to the distinctions created
between these two philosophical investigations or by cobbling the two together as does Barad
with her “onto-epistemology.” Sadly, this is not the case. As both Bhaskar (Bhaskar 26) and
Bryant (Bryant, The Democracy of Objects 52) have suggested, for the correlationist,
epistemology is always first philosophy. From this perspective, ontological questions are
impossible to address without a suitable epistemic grounding, which, alas, is forever beyond the
reach of beings held captive by the distorting influence of their internal representations and
subjectivities. The correlationist thus commits what philosopher Roy Bhaskar has termed the
“epistemic fallacy,” which,
consists in the view that statements about being can be reduced to or analysed in terms of statements about
knowledge; i.e. that ontological questions can always be transposed into epistemological terms. The idea
that being can always be analysed in terms of our knowledge of being, that it is sufficient for philosophy to
‘treat only of the network, and not what the network describes’, results in the systematic dissolution of the
idea of a world […] independent of but investigated by science (Bhaskar 26-27).
As should be clear from this statement, for Bhaskar, the most important question elided by
correlationism concerns the transcendental conditions for science — that is, the question of what
the world must be like in order for science to be possible (Bhaskar 19).
To be clear, Bhaskar’s ontology does not predicate the term “must” upon one scientific theory
or another, but upon those qualities of the universe that make the products of scientific
investigation legible in the first place, not only in terms of scientific schemas but also in terms of
practical activity. With respect to our ability to predicate the quality of realness for the electron,
“if you can spray them,” says Ian Hacking, “they are real” (Hacking, Representing and
Intervening 23). Of course, Hacking’s quip and Bhaskar’s use of the term “world” speaks to
reality narrowly defined as physical reality exterior to the products of mind. For the
correlationist, the exile of thought from this exterior space is not simply a matter of the
incompleteness of our knowledge; it is rather our primary condition. This, however, simply will
not do. For one, it attributes to mind the privileged status of standing apart from the flows of
noise and information that structure behavior in non-trivial ways. Not only is conscious attention,
or intentionality, a temporary state most of the time (think, for example, of the number of times
the reader’s mind has wandered while driving, turning the body into an automaton responding to
road and vehicle alike quite independently of intentional guidance), it would also stand to
reason, given the best evidence available, that minds supervene upon physical reality and not the
other way around. Moreover, the range of conscious thoughts I enjoy today are not those I had
when I was six or fifteen or thirty-five, and what determines the changes in between are not
cultural factors alone, but also material agencies, embodiment, and importantly the physical
environment and its law-like qualities in relation to which I develop.
Yet, we need not predicate our realism upon such a metaphysics viewed at thirty-thousand
feet. While many anti-realists focus their sights upon the incompleteness of knowledge or the
mercurial nature of causal explanations as proof-positive of the need for adopting the anti-realist
attitude, we might just as easily deploy these same limitations in the service of realism. Nicholas
Rescher sums up our situation succinctly: “Objective reality outruns experience” (Rescher 3).
“Real things,” he says, “are cognitively opaque; we cannot see to the bottom of them; our
knowledge about them can thus become more extensive without thereby becoming more
complete” (Rescher 4), a statement that dovetails cleanly with the view of reality as a puzzle that
can only ever be partially solved. Arguably, the same cannot be said of the products of mind,
whose abstract nature renders them portable, generalizable, and potentially exhaustible by the
operations of mind. Thus, for Rescher and for my purposes herein, a tenable realism might be
found not in our ability to make positive claims about the nature of things, but in our very
ignorance of the phenomena we try to explain. For myself at least, reality is located in
uncertainty, noise, and the half-sensed. Conversely, the protective circle correlationism draws
around its “subjectobjects,” and which naive realism draws around its positive assertions,
attenuates the transformative potential of the noisy, the alien, the radically exterior, and the
unexpected—any yet-to-be rationalized thing that approaches its perimeter. Where both
epistemes place dragons at the edges of their maps, these dragons are tamed, defanged, and
rendered in the guises of lions, tigers, or bears. Simply put, both correlationism and naive realism
filter the awe of the unknown through a set of known qualities and produce the same or similar
insights ad infinitum.
So, how should we proceed when the episteme of correlationism runs so deeply throughout
contemporary critical thought—so deeply, in fact, that it would take far more unpacking than
would be possible within the limited space of the present work? For the purposes of art-science,
and particularly for those theorist/practitioners engaged in interdisciplinary collaboration, it is
best to focus upon those reasons given by Rescher for adopting the realist attitude in social
engagements: “The assumption of a mind-independent reality is essential to the whole of our
standard conceptual scheme relating to inquiry and communication. Without it, both actual
conduct and the rational legitimation of our communicative and investigative (evidential)
practice would be destroyed. To be evidentially meaningful, experience has to be experience of
something” (Rescher 17). If one finds this overtly pragmatic reasoning to be less inspiring than
metaphysical paradoxes, Quentin Meillassoux, in After Finitude, provides what I find to be one
of the most evocative problematizations of realism with respect to Science: “This is the enigma
which we must confront: mathematics’ ability to discourse about the great outdoors; to discourse
about a past where both humanity and life are absent. Or to say the same thing in the form of a
paradox […]: how can a being manifest being’s anteriority to manifestation” (Meillassoux 26).
With this paradox as a starting point, we can begin to build toward a realist position, complete
with tools and tactics, that productively engages the penumbral boundaries between the
complexities of the wider, physical world and the abstractions with which we construct our small
islands of certainty.
2.4 Objectivity
Where does the account above leave us with respect to that other favorite target of critique,
objectivity? Even-handed critics and philosophers of science (many of whom predate the science
wars) rightly note the manner in which objectivity has historically been abused or misapplied,
particularly in the field of cultural anthropology, but such reasonable critiques are not what we
find in the common iconographies of scientific objectivity. What many Humanities scholars and
artists carry into art-science discourse remains a view of objectivity as the untenable appeal to a
GEV, which continues to be a target of critique by theorists seeking to undermine the notion of
Science as a value-free epistemic perspective. Is this understanding of objectivity accurate? And,
if not, what stands in its place?
In The View from Nowhere, Thomas Nagel describes the GEV as an imaginary window onto
a world that “is not just centerless,” but, “also in a sense featureless. While the things in it have
properties, none of these properties are perceptual aspects […] Whatever it contains can be
apprehended by a general rational consciousness through whichever perceptual point of view it
happens to view the world from” (Nagel 14-15). For Nagel, the problem with this “bleached out
physical conception of objectivity” (Nagel 15) is not necessarily the impossibility of its
achievement, but rather that it fails to account for the “perceptions and specific viewpoints that
were left behind as irrelevant to physics but which seem to exist nonetheless […] as well as the
mental activity of forming an objective conception of the physical world, which seems not itself
capable of physical analysis” (Nagel 15). Nagel is speaking, in other words, of a fettering of
objectivity enacted by the bifurcation of the scientific and manifest images—an objectivity
which denies the inferences of biological perception while attributing to itself a detached, top-
down perspective of reality.
For myself, the work that best helps to dismantle notions of objectivity as GEV is found in
Lorraine Daston’s and Peter Galison’s appropriately named book Objectivity. Therein, the
authors trace the birth and development of objectivity’s role in scientific image production from
the enlightenment to the present age and flesh out the many “epistemic virtues” objectivity has
represented over time. I have briefly discussed one already: the “truth to nature” sought by
scientists of the mid-nineteenth century. What Daston and Galison call “blind sight” (Daston &
Galison 125) is perhaps the closest scientific objectivity comes to the GEV and is of greatest
concern during the period in which “mechanical objectivity” (the perspective of scientific
instrumentation) comes to take precedence over “truth to nature” (Daston & Galison 44-45). We
will revisit the relationship between blind sight and mechanical objectivity in later chapters
where it will help to center much of our discussion around contemporary issues of data and
scientific instruments, but for present purposes I would like to examine the role played by
objectivity in the formation of what the authors term “the scientific self.”
Daston and Galison define the scientific self in contradistinction from self as subjectivity, self
“imagined as a polity of mental faculties” and self as “an archaeological site of conscious,
subconscious, and unconscious levels” (Daston & Galison 44). The scientific self described by
the authors is a set of iconographies that evolved over time from “sage” and “indefatigable
worker” to the “intuitive expert” (Daston & Galison 44). Importantly, what all of these
iconographic selves have in common is that they are marked by a kind of mental discipline
(memory, training, judgement, etc.). Following a similar train of thought, Thomas Nagel holds
that a tenable objectivity should be framed not as an ideal perspective, but as a way of thinking—
a conception that squares with the mental discipline that characterizes Daston’s and Galison’s
scientific self:
To acquire a more objective understanding of some aspect of the world, we step back from our view of it
and form a new conception which has that view and its relation to the world as its object. In other words,
we place ourselves in the world that is to be understood. The old view then comes to be regarded as an
appearance, more subjective than the new view, and correctable or confirmable by reference to it. The
process can be repeated, yielding a still more objective conception (Nagel, The Limits of Objectivity 77).
By way of caveat, Nagel describes how the iterative process of shifting from old to new views
might go astray, yielding results that lead to ill-considered inferences. “Not everything is better
understood,” he says, “the more objectively it is viewed” (Nagel, The Limits of Objectivity 78).
Objectivity is therefore not an escape from cultural values, but is itself a value of discipline—not
a GEV, but a cognitive method for incrementally increasing information about events or entities
proximal to subjectivity. In this way, objectivity becomes a ratcheting mechanism for decreasing
uncertainty, but one whose very effectiveness for rationalizing sublimity, evocative fields of
cosmic noise, and the rich fecundity of social and biological life requires a cautious and ethical
hand at the controls.
For Daston and Galison, a commitment of the scientific self to objectivity (particularly
mechanical objectivity) requires a non-trivial sacrifice of personal subjectivity on the part of its
adherents. Whether we are able to fully commit this sacrifice or whether a total blind sight can
ever be truly achieved is beside the point; almost certainly it cannot. It does not follow from a
failure to achieve the ideal in full, however, that a commitment to it does not significantly
improve our understanding of objects under study—no less in fact than a commitment to
diversity or democracy, though rarely achieved in full, significantly improves the lives of
individuals within the polity. My contention is that such an understanding of objectivity replaces
the iconographic “view from nowhere,” with a multiplicity of localized, subjective viewpoints—
intersubjective, situated, and social viewpoints as Helen Longino has argued (Longino 62-63),
and those that aggregate within the mind of the individual. “The moral is simple:” says Haraway,
“only partial perspective promises objective vision […] These are lessons that I learned in part
walking with my dogs and wondering how the world looks without a fovea […] It is a lesson
available from photographs of how the world looks to the compound eyes of an insect or even
from the camera eye of a spy satellite or the digitally transmitted signals of space probe-
perceived differences ‘near’ Jupiter that have been transformed into coffee table color
photographs” (Haraway 583).
As theorist/practitioners, once we have rethought objectivity in this sense, we may rightly ask
what subjective perspectives are being excluded or disqualified in the aggregation of
perspectives overall, but such questions address an ethics of implementation, not objectivity as a
process in itself. To address the former, I would suggest, is to strengthen objective methodologies
by identifying their boundary conditions. Operating within these boundaries, which transcend
those of specific disciplines, a modest and active objectivity—even one of a speculative bent,
which explores the strangest reaches of the virtual space of possibility—helps us to productively
engage the stabilities and coherences between the physical and cultural phenomena we
investigate and goes far in helping to bridge the epistemic divide. What we will see in due time,
however, is that for the artist who is permitted to increase uncertainty rather than decrease it,
objectivity itself might be turned to subversive purposes as a means for reconfiguring the
epistemic and phenomenological tools of praxis.
2.5 Inference
For the Sciences, objectivity as a self-discipline of practice is hobbled without the critical and
rectifying influence of collective objectivity. While we might say that both of these have largely
replaced the antiquated GEV, the latter in particular must contend with the ambiguities of
language that Science has struggled to overcome. Arguably, this is where the certainties of
abstraction chafe most uncomfortably against the uncertainties of the physical world. For, as the
set of entities scientists observe and analyze expands to include those that lie outside the spectral
bounds of human biology’s perceptual windows, abstractions are afforded more weight and their
physical referents, being imperceptible, are easily dismissed by the anti-realist. The
counterintuitive nature of many of Science’s objects has led to a subsequent iconography of
Science that follows directly from the epistemic divide: that the abstract representations scientists use
to communicate the qualities, structures, or relations of imperceivable entities are arbitrary or,
more strongly, that scientists simply shoe-horn the (almost) ineffable discoveries of embodied
and social practice into pre-conceived theoretical metaphors (or Kuhnian paradigms more
broadly). Science’s language problem extends beyond preconceived notions of theory versus
practice and down into the requirement for communication itself. From roughly the 1930s to the
1960s, the logical empiricists, or Positivists, many of whom were ardent anti-realists with respect
to scientific theories, held to an observation-based philosophy of science that de-privileges
theory and places the primacy and verifiability of linguistic description at the center of their
rubric for establishing scientific truths. Rudolf Carnap and others elevated the direct experience
of the scientist to a prime position in their philosophy of knowledge and offered (hesitantly, in
the case of Carnap) the “protocol sentence” as a method for linking direct experience to
theoretical constructions.
Carnap differentiates between his version of protocol systems and Otto Neurath’s. Carnap’s
systems are held to be exterior to the language of the systems they describe and deploy “special
rules […] for translating protocol sentences into system sentences” (Carnap et al. 458). In the
latter case, “protocol sentences are found inside the language of our system; here the form of
protocol sentences is not arbitrary, but rather bound to the syntax of our system language. There
are no special translation rules here” (Carnap et al. 458). We need not unpack the notational
syntax and other details of the manner in which these sentences are constructed, but only note the
primary concern of their advocates, which, simply put, is the identification of a linguistic
structure that can have “the last word” with respect to whether a theory squares with empirical
observation. In short, Carnap and others sought an abstract schema that could closely track the
phenomena under study with maximum certainty and with the fewest a priori assumptions.
Yet, protocol sentences suffer from deep philosophical problems, which were identified
almost immediately after they were first formulated. The most damaging critiques hang upon the
question of whether it can reasonably be argued that any observational statement is truly free of
theory—an idea that led Kuhn, Quine, Feyerabend, and others to argue in various ways that all
observational language is “theory-laden;” that is, they held that there can be no privileged
language with respect to observation that does not already come loaded with a number of
theoretical assumptions. As we will see in later chapters, phenomenological report suffers from a
similar problem. For the present, it is worth taking a moment to consider the extent to which
charges of theory-laden-ness actually trouble claims to realism or, more importantly, to
knowledge of facts.
How can we truly know that the imperceivable entities we discover at biologically-
inaccessible registers really are the entities we represent in mathematics or theoretical language?
Questions like these proliferated in scientific discourse after the discovery of the atom in the late
19th century, all but consuming it after the formalization of quantum mechanics. For some
thinkers, like Hacking, accusations of theory-laden-ness like those we find in the work of Paul
Feyerabend, trade on a definition of “theory” that makes the objection little more than pointing
out a truism of communication: “Feyerabend […] used the word ‘theory’ to denote all sorts of
inchoate, implicit, or imputed beliefs” (Hacking, Representing and Intervening 175). Here,
Hacking quotes Feyerabend as using the case of our saying aloud that a table is brown when we
view it under “normal circumstances,” thus assuming that our senses are veridical, as evidence of
the “theory-ladenness” of everyday speech. This, Hacking argues politely, is “rather hastily
said” (Hacking 175). Indeed, such a watered-down understanding of theory-laden-ness hardly
presents a challenge since we must (and do) live and communicate in a world of changing
conditions and can change our observations accordingly.
Still, it might be suspected here that to speak of chairs and the color blue is to place ourselves
on a comfortable, Newtonian stage where you and I experience a spectrum of mostly
intersubjective perceptions of the objects and forces that populate it. It stands to reason that our
language develops around this intersubjectivity. Such intuitions and perceptions, however, are
not available to us at the nano- or quantum scale. At these scales, objects are smaller than the
very wavelengths of light that interact with them and with the eye, and which set the synaptic
machinery in motion to make visual experience possible. As previously observed, the behavior
and relations of phenomena at such scales are wholly alien to even the widest spectra of
everyday biological perception. Yet, as Hacking has argued at length, we are nevertheless quite
capable of exercising hard-won control over these phenomena, of using them to construct new
instruments, and of leveraging these to uncover and measure still more previously unknown
phenomena.
For myself, this raises two important questions: one pertaining to the primacy we attribute to
biological perception in our determinations of the real and one centering upon the importance of
abstraction and theory-laden-ness for instrumental investigation, the latter of which,
characterized more positively, is simply a constraint of measuring processes. Regarding the first
question, it is clear that perception’s primacy has long been destabilized as the linchpin for
empirical knowledge, a fact that I will unpack more fully later on. The second question is one
that centers on our ability to make inferences from the patterns we perceive through whatever
vector they might be communicated. A physical phenomenon might be measured via any number
of instrumental, non-perceptual processes, many (I would dare say most) of which will produce
results that comport with one another quite well. This characterization falls in line with Kuhn’s
description of “normal science” as conducted in daily practice. But, charges of theory-laden-ness,
when deployed in the service of anti-realism, are yoked with the burden of accounting for why
any theory should be capable of guiding precise action with respect to phenomena that, after
all, are under no obligation to reveal their patterns to human beings, unlocking the potential for
further action and discovery. Following Hacking, at the end of the day, I would argue that the test
of a theory is not in its ability to accurately describe the world, but in the affordances it presents
for reducing uncertainty and increasing precision in our practical activities. As we will see in the
next chapters, while there is a cost to be paid for such precision, not only can this understanding
of “theory” be applied across disciplines, it raises interesting problems for artists seeking to
engage the abstractions of scientific theory through the “blocs of affect” (Deleuze & Guattari,
What is Philosophy? 164) around which our own disciplinary theories are structured.
Many practicing scientists simply brush away the tensions between observational activity and
the abstractions of theory unconcernedly. For them, it is of little import whether instruments are,
in the words of Gaston Bachelard, “theories materialized” (Bachelard, The New Scientific Spirit
13) or whether written papers are the primary means for scientific communication—both are
simply the broadly accepted requirements of collaborative knowledge creation. Furthermore, ask
these scientists how theories operate and you are likely to get some version of the following: A
theory should a) make predictions; b) have its predictions held up by experiments that are
reproducible; c) be falsifiable; and d) be replaced or modified in the event that it is indeed
falsified. Of course, novel entities can and do emerge from experimental practice, but these are
mostly contested until they can be shown to be repeatable under the same conditions or
accounted for in a particular theory. Only rarely do new entities upset the apple cart by requiring
major changes to established theories or the creation of new ones. Mostly, in everyday scientific
practice, the quality of realness, whether attributed to an entity, causal processes, or a theory, is
determined through prediction, experiment, and review. Full stop.
One will notice that rules a and b mostly square with positivist thinking; we need only
suspend judgement of the theoretical assertion until observation and subsequent protocol
statements prove it out. Rules c and d, however, run counter to the verificationist strategies of the
logical empiricists and were popularized by the philosophy of Karl Popper, a contemporary of
the Vienna Circle (the most famous cadre of empiricists) and whose ideas around falsifiability
and falsification form two pillars of today’s scientific thinking. The trouble Popper identifies
with verificationist approaches is that they far too often predicate their truth claims on inductive
inference:
…it is far from obvious, from a logical point of view, that we are justified in inferring universal statements
from singular ones, no matter how numerous; for any conclusion drawn in this way turns out to be false: no
matter how many instances of white swans we have observed, this does not justify the claim that all swans
are white (Popper 4).
For Popper, the acceptability of an inductive inference would require a principle of induction that
is capable of validating the universality of truth claims emergent from specific observations. But,
…if we try to regard [the truth of a principle of induction] as known from experience, then the very same
problems which occasioned its introduction will arise all over again. To justify it, we should have to
employ inductive inferences; and to justify these we should have to assume an inductive principle of a
higher order; and so on (Popper 5).
Inductive inference thus traps us in an upward spiral of justification, never allowing us to
move on; only a deductive method of testing is strong enough to support the truth claims of a
hypothesis and only “after it has been advanced” (Popper 7). I will not attempt to unpack every
detail of Popper’s falsificationism, but I will note here that even though it has become part of the
epistemic bedrock of scientific thought, it is not without its limitations. In practice, hypotheses
are undoubtedly falsified all the time; theories, however, are another matter. One explanation for
this potentially lies in the very power of falsification to weed out faulty hypotheses before they
can lead to a substantive theory. Whatever the case, while theories today are rarely falsified
outright within the sciences, the related epistemic pillar of falsifiability, which Popper held as
distinct from falsification, has become one of the most important rubrics for determining whether
a hypothesis or theory is scientific in nature. If a claim is unfalsifiable, it is, for the Popperian at
least, simply not a scientific claim. In this respect, falsifiability remains an important tool for
demarcating science from pseudo-science.
As powerful as it can be for philosophies of science, falsification unfortunately leaves us
wanting in its account of scientific progress as it pertains to practical activity. There are two
problems that concern us here: Firstly, all of us, including scientists, seem to rely in our daily
lives upon all sorts of casual inductive inferences and hold them to be true (see, for example, the
above discussion of chairs and the color blue) regardless of our philosophical positions. A
reasonable rejoinder is that Science employs far more rigorous standards than those employed in
workaday “street epistemology.” This is true, but it remains doubtful that the clean divisions
between inductive and deductive reasoning ever really survive the messy conditions of practical,
scientific activity. The knowledge of science, particularly for working scientists, just seems to be
more than a growing pile of logically discarded or falsified theories and is treated as such
regardless of the commitments of any one philosophical framework. But most troubling of all
for falsification is that it isn’t clear that a “purely” deductive argument can be made. Colin
McGinn points out that, while Popper suggested falsification as an antidote to inductive
inference, falsifying statements do themselves rely upon inductive inferences:
Suppose we are testing the hypothesis that all swans are white: we come across what we think is a
falsifying instance - an apparent black swan. In order to use this instance to reject the generalization we
need to be convinced that we are indeed confronted by a genuine black swan. But this means that we need
to verify that a black swan is really before us. This is an act of verification, not falsification, and it requires
that we make an inductive inference, since the hypothesis that this animal is a swan itself implies all sorts
of things about its anatomy, evolutionary history, and future behavior (McGinn section III).
McGinn’s objection points to a smuggling of induction into what was ostensibly a deductive
argument—a criticism, in fact, that Popper tries to address proleptically:
[…] the attempt might be made to turn against me my own criticism of the inductivist criterion for
demarcation; for it might seem that objections can be raised against falsifiability as a criterion of
demarcation similar to those which I myself raised against verifiability […] This attack would not disturb
me. My proposal is based upon an asymmetry between verifiability and falsifiability (Popper 19).
The asymmetry Popper identifies here is perhaps one reason why falsification has been widely
accepted within scientific theory and practice, but it remains debatable whether falsification has
replaced verificationist strategies entirely.
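To make Popper’s asymmetry concrete, a toy sketch in Python may help (the example is my own illustration, drawn from Popper only in spirit): no finite loop of confirming observations can prove the universal claim “all swans are white,” while a single counter-instance settles its refutation. McGinn’s caveat above still applies, of course, since classifying a sighting as a genuine black swan already smuggles induction into the falsifying check.

    # A toy sketch (illustrative only) of the asymmetry between verification
    # and falsification for the universal claim "all swans are white."

    def survives(observations):
        # No finite run of confirming instances proves the universal claim;
        # passing this check only means the claim has not yet been refuted.
        return all(color == "white" for color in observations)

    def falsified(observations):
        # A single counter-instance suffices to refute the universal claim.
        return any(color != "white" for color in observations)

    sightings = ["white"] * 10_000
    print(survives(sightings))               # True, yet nothing is settled
    print(falsified(sightings + ["black"]))  # True: one observation refutes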
Whether or not they square with what we perceive to be more rigorous forms of reasoning, it
is doubtful that inductive inferences can ever be fully exorcised from empirical activity. While
deductive inferences are much more logically sound and provide us with maximum certainty,
scientists find relatively few opportunities to employ them in the observational practices of
“normal science,” save for those cases in which falsification curtails a faulty hypothesis before it
leads to a dead end or in which falsifiability is used as a rubric of demarcation. These are not,
however, the only inferential means at our disposal. Thus, we arrive at the epistemic approach
that I will argue most accurately comports with the practices and attitudes of the working
scientists in today’s data-driven world: probabilistic inference.
There are many forms of probabilistic inference, but without a doubt the most popular is
what has come to be known as “Bayesianism.” One particular marker of Bayesianism’s
popularity is that the term itself has become a byword for more general forms of probabilistic
thinking, with the result that even scientists well aware of the differences call nearly all
probabilistic approaches to knowledge “Bayesian epistemology.” For the sake of precision,
however, I will make a distinction between strict Bayesianism and its alternatives, particularly
the “softer” or more casual forms of probabilistic inference adopted in daily practice. In the case
of the former, there are highly specific mathematical constructions employed to determine
probabilistic outcomes based upon prior assumptions. The use of “Bayesianism” as a convenient
shorthand for the latter is a slippage of language that can only confuse matters, so for present
purposes I will identify that slippage where necessary as “Informal Bayesianism” (henceforth
IB).
Named for the statistician Thomas Bayes whose conditional probability theorem underpins
its methodology, Bayesianism is a statistical approach to determining the rationality of a belief,
hypothesis, or theory in which our prior assumptions are assigned probability distributions and
then calculated to produce “posterior” probabilities. These posteriors are often (although not
always) plugged back into the calculation as prior assumptions in turn—a fact that points to one of
Bayesianism’s shortcomings, viz. the potential this iterative process holds for regress. The
general swing toward epistemic attitudes characterized by probabilistic inference in the Sciences
is important because it not only tracks closely with the methodologies deployed for
instrumental data production and analysis, but in many respects runs directly counter
to the more common iconographies of Science that hang upon critiques of the GEV and positivist
formulations generally. Indeed, a quick scan of any reputable scientific journal will show that
strong claims to “facts” or “truth” will only be made when certainty is exceedingly high—that is,
when the author feels there to be sufficient evidence to provide warrant beyond a reasonable
doubt for a particular belief. Otherwise, nearly every finding of an experiment will be
communicated in the language of probability in which terms and phrases like “most likely,”
“strongly correlated,” and “within the standard deviation” abound.
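For readers who prefer to see the machinery at work, the cycle can be sketched in a few lines of Python (a minimal illustration; the probabilities are invented and attach to no particular scientific case). Bayes’ theorem gives the posterior as P(H|E) = P(E|H)P(H) / P(E), and the loop below literalizes the iterative feeding of posteriors back in as priors described above:

    # A minimal sketch of strict Bayesian updating for a binary hypothesis H.
    # Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E), where
    # P(E) = P(E|H) * P(H) + P(E|~H) * P(~H). All numbers are invented.

    def update(prior, p_e_given_h, p_e_given_not_h):
        evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
        return p_e_given_h * prior / evidence  # the posterior P(H|E)

    belief = 0.5  # an agnostic prior
    for _ in range(3):  # each posterior becomes the next round's prior
        belief = update(belief, p_e_given_h=0.8, p_e_given_not_h=0.3)
    print(round(belief, 3))  # ~0.95: confidence ratchets up, never to 1.0

The loop also makes visible the shortcoming flagged above: whatever distortions enter the priors are carried forward into every subsequent posterior.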
None of this is to say, however, that probabilistic inference has entirely replaced other modes
of inference within the Sciences. I would argue, in fact, that while philosophical trends of
scientific thinking come and go, the practical activity of Science operates according to an ethos
of “transcend and include.” Unless a methodology or an epistemic attitude fails outright, has
logical faults, or produces more problems than it solves, the likelihood is that it will simply be
modified or filed away to be used only when circumstances require it. In an interview centering
on how scientific methodology operates today, artificial intelligence researcher and philosopher
of science, Kevin B. Korb, describes the relationship between falsificationism and IB thusly,
It’s fair to say that in philosophy of science, as of a few decades ago, [falsificationism] pretty well died out
[…] I see falsificationism as being subsumed by Bayesianism […] The basic idea is just that you look for
alternative theories, which give varying degrees of probability to some evidence you might be able to
accumulate through an experiment. If they have highly variable likelihoods then the likelihood ratio is
going to be either very large or very small, meaning we either get high confirmation or high
disconfirmation… In that way you get a confirmation theory, which Popper couldn’t give. You get a theory
of how it is that hypotheses are justifiable—confidence in some hypotheses increases and [confidence in]
others decreases. (Korb)
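Korb’s point is compact enough to state in the odds form of Bayes’ theorem, where posterior odds equal prior odds multiplied by the likelihood ratio P(E|H1)/P(E|H2). A brief sketch (numbers invented for illustration) shows how “highly variable likelihoods” translate into high confirmation or disconfirmation:

    # The odds form of Bayes' theorem: two rival hypotheses assign very
    # different probabilities to the same experimental evidence E.
    p_e_given_h1 = 0.90  # E strongly expected under H1
    p_e_given_h2 = 0.05  # E very surprising under H2

    prior_odds = 1.0  # H1 and H2 begin equally credible
    likelihood_ratio = p_e_given_h1 / p_e_given_h2  # 18.0

    posterior_odds = prior_odds * likelihood_ratio
    print(posterior_odds)  # 18.0: strong confirmation of H1 over H2
    # Were the ratio small instead (say 0.05 / 0.90), H1 would be strongly
    # disconfirmed, with Popperian refutation as the limiting case.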
According to this view, we might argue that probabilistic inference helps to stitch together
verificationist and falsificationist approaches. Most troubling for the iconographies of Science
that concern us here, talk of “high confirmation” and “high disconfirmation” is a far cry from the
strong appeals to a “view from nowhere” or even to the fuzzy Popperian distinctions between
inductive and deductive reasoning. Probabilistic inference weakens the “justified” in “justified
true belief” and, undoubtedly for some, this fact proportionally weakens any claim we might
stake upon it as a legitimate epistemological approach. Nevertheless, probabilistic
understandings of knowledge are not only more tenable alternatives to the “view from nowhere”
but are also more reflective of the attitudes evinced by working scientists. In the Sciences today,
strong claims to “truth,” or even premature appeals to “facts” are, in practice and theory, simply
not well-received. As I have already observed, the language one hears most often outside of high
school science textbooks or pop-science media is the language of probability, statistics, “best
explanation to date,” or “inference to the most likely explanation.” Bruno de Finetti, one of the
foundational thinkers of contemporary probabilism, describes the need for a transition from
GEVs to subjective probabilistic thinking thusly,
[…] no science will permit us to say: this fact will come about, it will be thus and so because it follows from a
certain law, and that law is an absolute truth. Still less will it lead us to conclude skeptically: the absolute
truth does not exist, and so this fact might or might not come about, it may go like this or in a totally
different way, I know nothing about it. What we can say is this: I foresee that such a fact will come about,
and that it will happen in such a way, because past experience and its scientific elaboration by human
thought make this forecast seem reasonable to me (de Finetti 170).
He goes on to describe what is bought by this epistemic transition in highly poetic terms:
Once the cold marble idol has fallen in pieces, the idol of perfect, eternal and universal science that we can
only keep trying to know better, we see in its place, beside us, a living creature, the science which our
thought freely creates. A living creature: flesh of our flesh, fruit of our torment, companion in our struggle
and guide to the conquest (de Finetti 169-170).
When we dovetail this notion of probabilistic inference with the account of objectivity
previously given, Science, both exalted and critiqued as a monolithic tower of revelatory or
constructed factual knowledge, turns out to be more akin to a vast collaborative terraforming of
islands and archipelagos of relative (un)certainty amid living oceans of physical events, causal
laws, and the discordant noise of experience.
2.6 Art and Jurisprudence
Workaday scientists understand their domain to be oriented around the knowledge of real
things (entities, processes, causes, etc.). They also hold that some practices are more effective for
the investigation of real things than others. In contradistinction, the Arts evince a great disunity
of methodology and have no central objective around which to swarm their collective efforts.
The objects that transcend to the venerated status of “Art” run the cultural gamut and what
methodologies exist are as idiosyncratic as the practitioners who deploy them. In comparison to
Science, the Arts comprise a Wild West of means and ends.
The “anything goes” attitude that characterizes much contemporary Art ostensibly precludes
any characterization of Art as capable of producing propositional knowledge. It is relatively
uncontroversial, for example, to argue that the Arts deal in “practical,” “experiential,” or “tacit
knowledge” as defined by various discourses, but arguments in favor of Art’s capacity to produce
propositional knowledge are likely to be met with at best a bemused skepticism. This fact helps
to deepen the iconographic contours that bind the knowledge of the Arts to issues of subjective
taste and widens the epistemic divide further still. Yet, the problem is not that the iconography is
simply wrong but that, like most caricatures, it is incomplete; the reason for its persistence is not
that generalizable knowledge has no place within the Arts but that, in their current state, the Arts
are capable of manifesting and validating any and all of the aforementioned knowledge
categories simultaneously—a quality that I would characterize as, pace Bakhtin, a carnivalesque
or polyphonic epistemology.
The leisurely conjunction of knowledge categories within the arts is marked by a colorful
heterogeneity. I use the term “leisurely” to underscore the deep-seated inclination we evince to
set the Arts apart from the “serious” activities of survival. In Experience and Nature, John
Dewey traces this inclination to Biblical origins: “Because of this primeval rebellion against
God, men toil amid thorns to gain an uncertain livelihood […] Festivity is spontaneous; labor
needs to be accounted for […] Yet, in fact, it was not enjoyment of the apple but the enforced
penalty of labor that made man as the gods, knowing good and evil instead of just having and
enjoying them” (Dewey 121). Dewey continues in this vein, “While leisure is the mother of
drama, sport and literary spell-binding, necessity is the mother of invention, discovery and
consecutive reflection” (Dewey 121-122).
Dewey’s historical/philosophical account goes far in explaining why the Arts have become
one of the few fields in which practitioners are very much respected for their expertise while
being simultaneously denied it. But, the air of entertainment and leisure around the Arts is only
one possible explanation for this denial. Another is the lay public’s general assumption that an
artist’s knowledge is found only in embodied or tacit skill (an old assumption that led to the
common historical epithet, “stupid as a painter”). In this contemporary age, in which pickled
sharks and 4’33” of silence are as venerated as Rembrandts and Bach within the hallowed space
of the white cube, what “knowledge” can be left for the artist in the eyes of a lay public?
Of course, it is a well-worn truism that the disjuncture between the respective “knowledges”
of Art and Science has not always existed and that the two orders were at one point nearly
indistinguishable from one another. Yet, its very status as a truism implies that this sentiment
almost never penetrates very deeply below the surface of discourse. We might trace a number of
forks in the historical path that sent Art and Science on their separate ways. For the purposes of
troubling the iconography at hand, however, we need only consider the moment in 1917, when
Duchamp’s Fountain (a urinal turned on its back and signed with the pseudonym, R. Mutt)
caused a stir by being first accepted into, and subsequently omitted from, the inaugural
exhibition of the Society of Independent Artists. Duchamp’s urinal and his other readymades
signal a productive dissolution of the boundaries demarcating “legitimate” Art objects and
practices from everyday things, actions, and judgements. Yet, while these dissolutions are often
liberating for working artists and theorists, they simultaneously feed iconographies of Art that
strip art-making-as-praxis from its epistemic roots and reduce its tangled ontologies to a rubric of
personal choice or taste. This flattened interpretation of Duchamp’s intervention with Fountain
and the motto “anything goes” that it inspired misses an important subtlety that, when taken into
account, points us in more fruitful directions: Duchamp was, himself, an organizer and dues-
paying member of the Society and, as such, would have been well aware of the various
arguments and hypocrisies around the Society’s ethos, “no jury, no prizes.”
For art historian Thierry de Duve, there is little doubt that Duchamp, the consummate chess
player, would have known that Fountain was destined to be sidelined by someone else on the
board of Independents, even if the work had technically been allowed into the show (de Duve,
Kant After Duchamp, 268-269). If this interpretation of Duchamp’s strategic thinking is correct,
his intervention is not simply a performative manifesto of aesthetic practice; it is a highly
contextual discursive act in which Duchamp exposes the contradiction lurking within the
Society’s idealistic posturing with respect to the motto of “no jury, no prizes”—a scenario which
squares more fully with Duchamp’s dadaist, prankster roots. “No prizes” means no judgement of
quality, but “no jury” means no jurisprudence whatsoever with respect to what “counts” as Art
or, perhaps more importantly, who “counts” as an artist. More than a hundred years later, whether
we agree or disagree with the ideas Duchamp inspired, what is clear is that they are simply
comfortable aesthetic platitudes outside of the social/historical/material contexts from which they
were produced.
On its face, “anything goes” continues to liberate artists from the fetters of overly-
prescriptive instruction, whether from the academy, from viewers, or from market demands, but
where are its boundary conditions? If subjective determinations of beauty are the only standard
of Art—if anything one chooses can be art and anyone can be an artist no matter their training or
sensibilities—why, then, can the scientist-cum-artist not take it upon herself to mount
particularly attractive microscopic images onto supports and circulate them among galleries and
collectors? Of course, she is perfectly free to do so, and perfectly free to call such objects “Art”
if she so wishes. But, would such a move, predicated almost exclusively on individual aesthetic
judgement, be accepted as aesthetically or discursively evocative in the wider networks of
cultural jurisprudence that determine Art’s boundaries? More importantly, how would such
objects function outside of their original contexts or within the contexts of other orders? If these
questions seem rhetorical, or if it seems that I am myself erecting a straw person with which to quibble, I can
only say that I have fielded exactly this argument with scientists on multiple occasions in the
arena of art-science collaboration.
Our situation is complicated by the fact that universal definitions of Art or its methodologies
are all but impossible to maintain. How are we to sight our position with respect to the strange
attractors of Art and Science when one of them always drifts stubbornly out of focus? One way
forward might be to refuse any prescriptive definition of Art (or the Arts more broadly) that
hinges upon some essential quality of discrete objects or practices. To do so, however, would not
allow an escape from ordering processes, but would reveal two facts: 1) that the Arts, like the
Sciences, are constituted by the interactions between what Levi Bryant has, again, termed
“incorporeal” and “corporeal machines”; and 2) that “Art” as a culturally-mediated signifier for
these networked “machines” is always imputed to them ex post facto.
There are cultural/historical vectors by which the everyday objects of “art” (children’s
drawings, TV shows, games, performances, action films, genre paintings) transcend to “Art” and
almost none of these are predicated exclusively upon essential qualities of the objects or
experiences they elevate. For certain, quality is determined under multiple rubrics, but any
determination along those lines participates in a much wider discursive network along with
methodologies, practitioners, individual cultures, institutions, and capital that form the islands of
certainty within Art’s domain. Understood thusly, an individual methodology or object cannot
be understood as belonging to a domain of “Art” outside of the wider contexts that produce
Art in the first place. We might enjoy an art object as we would any other object without taking
these contexts into account, but as practitioners, our domain is shot through with tributary
discourses and tangled engagements between the corporeal and incorporeal that condition
whatever knowledge upon which we might lay claim.
On its face, this would seem to be an argument for exactly the type of strong social
construction I have troubled with respect to the Sciences, and this observation would have a
certain degree of merit, but it is not quite accurate. There are important differences to take into
account and these are mostly between the epistemological bent of social construction and the
ontological account I have just given. In the case of the latter, knowledge is not determined
exclusively by social activity, but also structured by the manner in which embodied activity
(including tacit and unconscious decision making), flows of material (Bryant’s corporeal
machines), and biological perception are placed in network with social activity. In other words,
the epistemic component of the Arts only acquires its importance through the manner in which it
comes into conjunction with all the other flows of the ontological construction.
As we have seen in the case of Science, theoretical constructions, semantic representations,
social activities, and abstractions are not so easily separated from the material products and
physical constraints that interact to make a knowledge domain possible. What is of note,
particularly with respect to Art’s processes of boundary making, is the manner in which these
entangled relations produce asymmetries within themselves. Strange attractors appear as one
particular region of the network becomes, for some reason or other, more stable than others. In
the Sciences, the phenomena that produce these stabilities are more evident and are made
explicit. For the Arts, it is often unclear why and how the qualities or commitments of one region
of activity get pulled in the direction of others. Whatever the case, while beauty and
sociopolitical relevance might be factors in the production of these asymmetries, the latter cannot
be reduced to the former. Of course, an objection might be raised that such an understanding of
Art as “assemblage,” or “network” does not escape the trap of essentialization; it simply
essentializes at a different register. Perhaps Art isn’t derived from some inherent quality of
certain kinds of objects (e.g. Walter Benjamin’s “aura”), but are we not simply substituting these
with an essential nature defined as contextual relations? Such is the danger of ontologies (and of
correlationism generally). But, instead of attempting to counter this objection with a
metaphysical argument, I will draw attention to what viewing Art as a systemic, networked, and
emergent phenomenon affords us in practice.
Firstly, and most obviously, the knowledge domain of Art, understood as a networked
phenomenon, squares with that quality of praxis and material circulation that is dynamic and
responsive to changes in culture and technoscience. But secondly, and more importantly for my
purposes, it helps us to feel out the contours of Art’s boundary conditions. We are better placed to
argue for the knowledge and expertise of the “art” in art-science when we are capable of tracing
the flow of ideas, material, and experiences out to the edges of Art’s domain where we can better
face those phenomena that Art exteriorizes and, ultimately, colonizes. The primary node in this
network—the one which serves as the strongest attractor at Art’s center—turns out to be the one
that most troubles any strong claim to generalizable knowledge within the Arts: a process that
Thierry de Duve characterizes in Kant After Duchamp as transmission of jurisprudence (de Duve
37-46).
According to de Duve, the credo “anything goes” represents only the most recent move in the
long dance between what the author characterizes as “tradition and betrayal.” In reality, most
artists never get to decide, at least in any profound sense outside of highly localized conditions,
whether and when the things they produce “count” as Art. This is as true today, where we find
works of Art judged by their relative degrees of political engagement and commentary, as it was
when they were judged (in the case of painting) by their capacity to “strike the eye.” According
to de Duve, Art as a designation is always contested and mutates through variegated processes of
historical jurisprudence. It is therefore nearly impossible to prescribe “on the ground,” by dint of
individual judgements from within the network. “What tradition transmits, translates, and
betrays” says de Duve, “are first of all the things called art. In preserving them in museums of
art, in gathering them together in the name of art, what tradition also transmits, translates, and
inevitably betrays, is the name ‘art’” (de Duve 67).
We can therefore train as artists, but in many respects we cannot train to make Art. We can
hope to produce and can sincerely believe that we have produced Art, but we can never be certain
that the wider, pseudo-Darwinian cultural/material networks that have made it (and us) possible
will validate our work as Art (or us as artists). While we certainly participate in these processes
of judgement, as individuals, we exert only so much influence upon the strange attractors of
cultural jurisprudence in the larger scheme. Our position and status in this regard is thus marked
by a radical uncertainty as judgements and objects alike are pulled into stronger orbits by capital,
the academy, the movements of history, and by the power of colonizing forces.
The notion that the designation “Art” is always attributed after the fact and through inherited
judgement creates the fertile conditions for the iconographies of Art to flourish because it
produces a tacitly felt sense of ambiguity around the relationship between “art” as a signifier of
everyday practice and “Art” as an order of knowledge, cultural value, and consumption. At the
end of the day, the liberating motto of the contemporary artist, “anything goes,” is given various
certainty-producing conditionals by cultural jurisprudence: “as long as it sells,” “as long as it is
politically engaged,” “as long as it isn’t passé,” or “as long as the critics give it a stamp of
approval.” From the Medicis and Giorgio Vasari to Clement Greenberg and today’s blue chip
gallery system, the history of Art is filled with examples of how these cultural forces can
periodically attain superposition in the influence of a single person, institution, or movement.
Mostly, however, the flows of jurisprudence are smeared out across daily life in diffuse
conceptual and sensorial excess. If the Arts constitute a domain of knowledge, it is a domain
whose boundary conditions are penumbral in the extreme.
What I would stress here, and what I would argue points us in a more productive direction, is
that in order for a work of art to transcend to the status of Art, processes of cultural jurisprudence
must extract these works from their lived contexts within the noise and excess of daily life. In
this respect, Art’s ordering processes resonate with the etymological roots of the term “science,”
which, as noted in the introduction, trace back to various words meaning “to cut” or “to
separate.” Yet, if de Duve is correct, the “cutting” enacted in the becoming of Art must always
take place glacially, unevenly, and, for the most part, invisibly—operating autonomously from
any one privileged perspective. This gap between individual labor and the larger machinations of
jurisprudence is one that echoes that between Kuhn’s “everyday science” and the
commitments of a scientific paradigm. Yet there are key differences between these sets of
relations that help flesh out our two iconographies. Perhaps the most glaring is this: While Art’s
various systems of jurisprudence inscribe Art’s boundaries as well as the paths of ascent within
those boundaries, there is no central goal for those judgements such as we find in the Sciences,
viz. inquiry into physical reality and the most proficient means for conducting it. This, I would
argue, is the reason for Art’s heterogeneity of methodology with respect to knowledge
production. Simply put, no precisely determinate relationship exists between the knowledge
creation of individuals and the rubrics for ascent to the status of Art within Art’s domain.
While the scientific methodologies taken up by individual researchers are (mostly) shaped
and guided directly by the central epistemic virtues of Science as a knowledge domain, the motto
“anything goes” creates the methodological anarchy we find in the Arts, the products of which
are pruned back according to a set of wider, invisible logics. But, even if the Arts today provide
no strict set of methodological standards for their practitioners, what they do provide is a space for
the production of tactics for engaging both the complex networks of Arts discourses and the
noise of the outside world. In these tactics, I would argue, is where we find the knowledge of Art
assembled at local scales. To begin unpacking these, we might first say that, generally speaking,
the knowledge of Art has two primary facets: it has a discursive face, in which transmission of
Art’s theories and judgements is brought into conjunction with wider, cultural networks,
allowing for a two-way flow of information; and it has an affective face, which we might
characterize as “pre-subjective,” “embodied,” and largely a matter of sustained engagement with
the “stuff” of the physical world. It would be safe to assume that most artists engage knowledge
at both registers to varying degrees. As we have seen, the engine of praxis that drives activity
with respect to both of these faces of Art’s knowledge—and the one most caricatured in the
iconography—is choice. Yet, as the very example of Duchamp’s mischief-making with Fountain
shows, little is revealed in considering choice alone; to the contrary, we gain much by analyzing
the question, “choice in relation to what?”
According to the iconography, it is in relation to personal taste or aesthetic impulse that the
choices of the artist are made. It is easy to see how choice understood in this way connects
directly with the artist’s sustained engagement with the objects of sensory perception. In the
spirit of phenomenological discourse, I will use the term variation to describe how artists
experiment with and leverage perception as a means for guiding their choices in their material
engagements. Indeed, we will see how such perceptual principles are codified and subsequently
reified in technological constructions, providing the opportunity for the artist to practice
variation with respect to technologies of vision and sound.
Fountain, however, when considered in its historical context, reveals the extent to which
antecedent processes of cultural jurisprudence also guide subjective judgement in nontrivial
ways. It was not only Duchamp’s inherited a priori assumptions (aesthetic or otherwise) that
prompted his specific choices, but also the socio-epistemological “assemblages” of highly
localized events. Choice thereby becomes a more explicitly discursive action—a product of
social, epistemic, and material dialectic. In order to capture this feature of choice as a tactic of
art-making, I will refer to it henceforth as association, defined as explicit connection-making
between various nodes in our affective and discursive encounters. The artist’s facility with
association—connection-making between seemingly disparate ideas, properties, materials, etc.—
combined with a facility for variation is where we might begin to define a more precise
understanding of Art’s potential for knowledge production that goes well beyond the
iconographies of Art, and which dovetails most cleanly with the operations of Science.
2.7 Art, Understanding, and the Noise of Experience
Even if we fully embrace the idea that multiple vectors of cultural jurisprudence play a
substantial role with respect to whether and how the products of our creative labor transcend to
the status of Art, we are still left without a sufficient account for the knowledge we create, which
might satisfy the stricter epistemological rubrics of our scientist-collaborators. In truth, outside of
the academy (and often inside as well) most of us are never required to provide much of an
epistemic basis for our work; we might speak to our intentions, but mostly we do what we are
compelled to do (whatever that might be) regardless of what language is used to describe our
activities or whether the activities themselves are culturally validated. Certainly, this is a
legitimate way of working. Alternatively, we do what we are paid to do. As designers, we “solve
problems” for clients and, as “craftspeople,” we execute the designs of others. This is also a
legitimate way of working. In fact, these two situations, I would argue, are a fair characterization
of the creative labor undertaken in many successful projects labeled “art-science,” but I would
also argue that without a sufficient understanding of how artists’ practices of “association”
produce knowledge, such projects are often best characterized as art-advocacy or patronage on
the part of science—a relationship not so different from the patronage of painters on the part of
the Catholic Church, whose cardinals and bishops guided much of the content of works adorning
Renaissance cathedrals.
For the purposes of art-science, there are two accounts of knowledge in the Arts that hold
much potential. First, we have “modal knowledge,” which Dustin Stokes argues consists of
epistemic propositions that the Arts are not only capable of producing, but at which they excel.
Modal knowledge is commonly understood as knowledge of possibility, or “counteractuals” and
its propositions take the formal construction of “it is (im)possible that p” (Stokes 68). For Stokes,
any claim we might make upon Art’s production of modal knowledge requires said knowledge to
be non-epistemic (non-relative to individual perspective) and non-nomological (not bound to
what is scientifically (im)possible). “As is standard in these discussions,” he says, “this notion of
possibility involves an appeal to talk of possible worlds; call it […] metaphysical possibility. It is
metaphysically possible that p if there is some possible world where p obtains or where p is true”
(Stokes 69).
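In the standard possible-worlds notation of modal logic (a gloss I add here for clarity; Stokes himself states the condition in prose), this definition can be written as
\[
\Diamond p \iff \exists w \in W \,( p \text{ is true at } w )
\]
where W is the set of metaphysically possible worlds and the diamond operator reads “it is possible that.”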
We can immediately see how Art’s capacity to engage or produce modal knowledge has great
value for art-science, most conspicuously in its potential application to thought experiments,
which are themselves a workhorse of scientific inquiry. As Stokes puts it,
[…] in attempting an explanation of a newly discovered physical phenomenon, it might prove important
that a physicist not restrict herself to nomological possibilities – to the physical laws as currently
understood – since the phenomenon might not be explainable in such terms. The explanation might, in
other words, require a new set of laws (Stokes 78).
If we imagine Stokes’s hypothetical physicist brought into collaboration with a hypothetical,
science-literate (or at least science-curious) artist, what emerges is more than a one-way
intervention of creative thinking into a problem of Science. Indeed, thought experiments can
often require a mutual and non-trivial bracketing of inherited or over-coded judgements and a
willingness on the part of each conversant to turn away from the certainties of discipline in order
to face those disciplines’ limitations. For artists, developing intuitions for scientific concepts can
be as de-centering as counteractual thinking can be for scientists. While often discomforting, this
state of uncertainty is to be embraced if our collaborations are to allow new associations and
evoke new intuitions or knowledge.
Another account of Art’s epistemological potential does not concern knowledge creation
directly (at least not in the usual sense), but rather how works of Art assist or help to “ratchet up”
knowledge in other contexts. The theory of aesthetic cognitivism is an account of Art’s capacities
in this respect that, according to Christoph Baumberger, consists of two separate theses: one
aesthetic, and one epistemic. The epistemic thesis is relatively self-evident and states simply that
“artworks have cognitive functions” (Baumberger 1). The aesthetic thesis holds that these
cognitive functions “partly determine [the artwork’s] aesthetic value” (Baumberger 1). The cornerstone
for Baumberger’s take on aesthetic cognitivism is a revision of epistemology that centers not
upon the truth of propositional statements, but rather on “understanding,” which he describes as
“cognitive achievement.” For my purposes, the key feature of this view is the manner in which
understanding is always positioned in relation both to its own statements as well as to those that
network together within the wider knowledge domain: “Understanding,” says Baumberger, “is
primarily related to a fairly comprehensive body of information. The understanding expressed in
individual propositions derives from an understanding related to larger bodies of information that
include those propositions” (Baumberger 3).
Baumberger helpfully breaks down five primary ways in which understanding or cognitive
achievement might be facilitated. These are, in brief:
1. the structural rearrangement of a domain’s categories.
2. the production of new models, diagrams, visualizations, etc.
3. asking new questions or clarifying old ones.
4. facilitation of the experiential in both its sensorial and emotional facets.
5. facilitation of thought experiments. (Baumberger 4-6)
We have already examined the value of number five for art-science, but the value of the
preceding four should also be readily apparent. In later chapters, we will examine all of these
methods for cognitive achievement to some degree, but it is particularly numbers two and four
and the tensions between them that are of greatest concern. Data visualization—arguably one of
the most common fulcrums for art-science collaboration—nearly always places its emphasis
upon the production of new ways of seeing or extracting pattern. The affordances of artistic
works that address a broader range of experiential registers, however, remain unclear, especially
for scientists of a mind with Gaston Bachelard regarding the importance of what that author, in
The Formation of the Scientific Mind, calls a scientific concept’s “vector of
abstraction” (Bachelard 26). Running tangent to the abstract vector is what Baumberger terms
“phenomenal knowledge,” which “[broadens] our experience in encompassing things we might
never otherwise have undergone or felt […] In these cases, the works may lead to propositional
knowledge. But the phenomenal knowledge gained by the works does not reduce to
it” (Baumberger 13-14).
Most of Baumberger’s examples of phenomenal knowledge are drawn from painting and
literature and importantly, especially with respect to examples of the latter, his theory of
phenomenal knowledge does not conflate “experiential” or “phenomenal” with a strict notion of
“the given.” As we have seen, the idea of experience as what is given to the senses occupies a
contradictory philosophical position within scientific discourse in that it can be both validated as
the central axis for observation and de-privileged as a hindrance to it. For Bachelard, what is
given to the senses is a serious epistemological obstacle requiring an epistemological break. “The
philosophy we oppose,” he says, “is one that rests on more or less unequivocal, more or less
romanticised sensualism, and that claims its lessons come directly from a clear, distinct, reliable,
and constant given which is always offered to an always open mind” (Bachelard 33). In contrast,
Nelson Goodman, in his defense of Carnap’s Aufbau, asserts that the de-privileging of the given
“remains uncontested so long as we are dominated by the tradition that there is a sharp
dichotomy between the given and the interpretation put upon it […]” (Goodman 547). In this
way of thinking, the assumption that the senses must always give faulty information follows
from a disconnect between the primacy of experience and its funneling into subsequent cognitive
order:
For the question in what units experience is actually given seems to amount to the question what is the real
organization of experience before any cognitive organization takes place; and this, in turn, seems to ask for
a description of cognitively unorganized experience. But any description itself effects, so to speak, a
cognitive organization; and apart from a description, it is hard to see what organization can be. (Goodman
547-548).
Again, arguments like Goodman’s will later re-emerge to haunt our investigation of
phenomenological methodology, particularly with respect to the concept of affect. For now,
however, it should be clear how arguments around the primacy of perception and the obstacle
presented by its funneling into abstraction become all too entangled once we shift our conceptual
focus onto the realities of matter and forces that are knowable only by dint of their interactions
and material traces with and upon the screens, lenses, and antennae of scientific instruments.
Following Bachelard’s logic, scientists who set aside the frequently misleading inferences of
their senses, deferring to the instrumental readout and the sense-making that follows, often
struggle to find the value in exploring the potential for experiential works that go beyond the
placing of mathematical data upon a two-dimensional plot or three-dimensional Newtonian stage.
For them, both knowledge and instruments are shot through with abstraction.
Baumberger’s notion of phenomenal knowledge presents an opportunity for easing the
misgivings evinced around works that leverage the experiential, enworldened encounter by way
of its expansion of the phenomenal to include fictional narrative, thought experiments, and
speculation upon what it is like to be/feel/experience x. Such tactics, which affirm the manner in
which abstraction is already interleaved with the phenomenal, have potential for jostling both
scientists and artists out of their disciplinary somnambulism. What I would emphasize here is
that this speculative facet of knowledge rethought as understanding—an understanding which
subtends the forking paths of abstraction and embodiment—is highly germane to the topic of
worldizing data captured from aperceptual phenomena. In its further unpacking, we might
uncover the components of an important tool with which to bring both the scientific image and
the manifest image into stereo focus within the same epistemological frame. Ideally, what this
tool would facilitate is a radical defamiliarizing of the methodologies and objects of study for
both scientists and artists, the latter of whom are just as prone to robbing the phenomenal of its
productive potential by leaning upon the certainties of cognitive schemas.
Within the Arts, investigations into the phenomenal periodically emerge and re-emerge as we
see in such varied historical movements as impressionism, light and space art, dada, the work of
John Cage, and relational aesthetics. All of these artists and movements sought in one form or
another to reconnect the mediated work to the “immediacy” (a term that I will further scrutinize
in the next chapter) of experience. In my estimation, it was Cage who best succeeded at
producing work that mined the phenomenological potential of noise. Yet, even his work, like all
of the others mentioned, became assimilated by means of cultural/historical jurisprudence into
the safe and known quantities of Art, regardless of (or perhaps because of) his attempts to recover
the raw, enworldened, sonic encounter.
John Cage encouraged a radical listening that, for him, affords a reconnection of the listener
to the auditory qualities of the world exterior to the schemas of musical order. Thus, his strategy
points to the experience of an excess that threatens to overspill the rationalized boundaries of Art
from the outside in. If, as Elizabeth Grosz has argued, creativity always produces an excess—one
that is unconstrained by the requirements of mere survival (Grosz 7)—it is an excess that
overspills Art’s boundaries from the inside out, helping produce interferences of relative
uncertainty as it crosses the gradients of other disciplines. Processes of cultural jurisprudence, in
their validation and translation of such excess, have the capacity to defang it and to damage its
disruptive potential by first distilling it to a known unknown and then to a known known,
absorbing it into Art’s most rational core identity and over-coding our creative actions.
Otherwise put, much like Science, Art maintains the perimeters of its domain by iteratively
extracting and abstracting signal from both internal noise and the noise of an exteriorized field of
complexity. This process of demarcation and digestion produces new tradition, its concomitant
betrayal, and thus new noise to be rationalized. Unlike scientists, we artists—especially those of
us shaped by institutional instruction—struggle to free ourselves from the ordering schemas of
our respective domains. We struggle, in short, for an authenticity of action that resists the over-
coding influence of traditions, but find our practices no less over-coded when we act to betray
said traditions. How, then, might we recover the noisy potential of liminal events and lived
experience if our objects are so liable to become weighed down by, or otherwise trapped within,
the cycle of Art’s ever-adapting ordering processes? This is an extremely knotty question, but it
is one with a history.
For myself, Jacques Attali’s notion of “composition,” which he suggests in Noise: The
Political Economy of Music, points us in what I would argue to be the most fruitful direction. We
will turn to Attali and composition in the coming pages. For now, suffice it to say that the
organizational principles of music, according to Attali, are wont to mutate and adapt to
spontaneous noise under conditions of repetition (Attali 32)—a cycle of aesthetic colonization
similar to that described by de Duve and which we have already examined at length. What Attali
terms composition is an escape hatch presented to musicians who wish to remain connected to
the productive potential of noise while taking advantage of the guiding abstractions they have
absorbed in their training. Attali’s notion of composition is the starting point for the distortional
attitude I will later unpack and place alongside “association” as an important tactic for artistic
intervention. What this affords is the ability to treat abstraction and ordering schemas as the
“hard points” that help to tacitly structure our perceptions, behaviors, and expertise, but which
also position us always at their penumbral edges, facing and acting in relation to the noise they
exteriorize.
Tactical maneuvering, whether through distortion, association, or variation, should be
capable of facilitating activity in relation to exteriority or “out-thereness” as an always-constant
presence within interiority—a mark of the interleaved nature of the penumbral. It is the otherness
that inheres within the certainties of established identity that we encounter in our experiences of
the uncanny, the hypnogogic, and the liminal workings of our half-understood impulses. The
“noise of the other” clings to signal like the static or station bleed of radio transmissions. The
tactical capabilities afforded to the artist by both education and practical experience equips her to
turn this noise to productive potential, but it also equips her to act as noise in relation to
established schemas. This situation is in plain view with respect to artistic interventions into
material and social configurations, but becomes particularly interesting when abstractions
themselves become artistic material as we see in such practices as anamorphosis.
In tactically maneuvering in relation to “out-thereness,” the “out there in here,” and the
recombinant potential of the abstractions we use to make sense of both, we are afforded the
opportunity to defamiliarize the known knowns and known unknowns of our disciplinary silos
and to engage the objects of our activities—both artistic and scientific—with keener sensibilities.
What it requires (at least in the arena of art-science) is a modified, posthuman phenomenology
that helps us better account for what Art does with respect to the enworldened body and
abstractions of mind beyond the epistemological strictures that so often become the focus of art-
science discourse. Such a reframing of Art’s potential contribution to art-science bears upon
questions of knowledge without over-coding them and I suspect that it is here at the boundary
conditions of understanding and the experiential rather than those of epistemology as
traditionally understood that we will be able to build the most durable bridges across the
epistemic divide.
3 Waveforms
3.1 The Water or the Wave?
“Utram bibis? Aquam an undam” (“What are you drinking? The water or the wave?”), asks John
Fowles’s reclusive antagonist, Maurice Conchis, of the young poet Nicholas Urfe in The Magus.
With this question, Conchis invites Urfe to reflect upon his daily acts of imbibing the world.
Which is more valuable, our raw encounters with things in and for themselves or the aesthetic
schemas with which we organize them? The question is not one of binary opposition; it is more a
question of values and perspective. Urfe (and the reader by proxy) is asked to ruminate upon
how the seductive and crystalline beauty of the “wave” comes to take aesthetic precedence over
an appreciation for its action—over what Hume would call its “hidden powers.” Conchis’s
philosophized wave is both matter shaped by forces that act upon and within it and a metaphor
for how such physical becomings are fixed in representation. Whether as poetic images of
breaking waves, or as a set of coordinates inscribed upon the Deleuzian plane of reference, either
construction sticks in the imagination, filtering our experiences, long after the “thing-in-itself”
has rolled back into the sea.
In his film Bilder der Welt und Inschrift des Krieges (Images of the World and the Inscription of War), Harun Farocki illustrates the manner in
which abstraction as the product of a rationalized, technologically-afforded visuality produces
blind spots—gaps in understanding—with potentially dehumanizing consequences for the human
subject. The “top-down” images produced by Allied spy planes during WWII, he shows us, fail
to produce legible signifiers of the inhuman conditions and monstrous activities that unfolded in
the buildings below. In top orthographic view, the stuff of the world becomes a flatland of
rectangles, circles, and lines that speak only to the high-level concerns of architecture,
infrastructure, and industry, represented in this case by the Allies’ myopic interest in the IG
Farben industrial plant and their concomitant blindness with respect to the plant’s neighboring
facility at Auschwitz. Farocki traces a path from the perspectival projection of architecture to the
militarily-enforced, photographic identification of women villagers, many of whom had never
been seen in public without the veil, during the French occupation of Algeria. At each stop along
this path, Farocki invites us to question the price of visually mapping our worlds—that is, of
privileging the capture of analytic data over the situated conditions of sense data.
As compelling as all of Farocki’s examples are, the visual motif he deploys with the least
exegesis is the one I find to be most evocative and mysterious: a massive wave pool used for
hydrodynamic research. Here, Farocki’s perspective remains consistent; we are invited to muse
upon a rationalization of vision that subjects the ceaseless energy and motion of the aquatic wave
to the petrifying operations of the scientific grid (and of the camera itself). We are asked to intuit
how the instrumental address of a wave yields precise measurements at the potential cost of what
makes a wave live and breathe in our fleeting experience of it. In Deleuzian terms, the wave
itself is a set of virtualities made actual; it is a continual movement throughout the “plane of
immanence” that is “slowed down” by the operations of science, its coordinates selected and
placed upon the “plane of reference.” This slowing down is how the abstractions of science are
actualized, but at the cost of a rupture or hole produced in the infinite (Deleuze & Guattari 118).
For myself, however, the wave in all its facets—as abstraction and material, as virtuality and
actuality, as ephemeral and rule-governed, as both fluid and structure—is a useful metaphor we
might expand upon to further nuance Farocki’s (and indeed Deleuze’s and Guattari’s)
problematic(s) and begin to make peace between “unmediated” encounters and the abstractions
we employ for the purposes of rationalization and control.
3.2 About this Chapter
In this chapter, I will use three different works I have developed over the better part of the last
eight years as practical footholds for discussing the instrumental address of phenomena that
retreat from our perceptual fields of view, focusing specifically upon the issue of what I will call
“electromagnetic weather.” While the works in this chapter do not address data per se, at least as
previously defined, they are concerned nonetheless with many of the analog technologies
involved in data capture and instrumentation and, therefore, point to some of the stickier
philosophical issues around the production of abstraction. The first “electromagnetic encounter,”
under discussion was completed in 2011 and is entitled WaveForm. It is a work I often describe
(in homage to artist Paul Klee) as an experiment in “taking a signal for a walk.” WaveForm was
the first of a number of radio experiments inspired by the proliferation of cell phones, Bluetooth,
and WiFi in public spaces and it is arguably the most (physically) complex of the three works
discussed in this chapter. The following discussion of it will focus specifically on the work’s
relationship to the basic ideas of signal transduction, noise defined as unwanted complexity (or
unselected signal), materiality of signal, and the potential for leveraging the scientific imaginary
in art-science praxis.
The second electromagnetic encounter, Aeolian Wifi, will be used to further many of the
themes discussed in relation to WaveForm, but will present an opportunity to wander through
scientific paradigms that precede the rationalizing schemas that were constructed to parcel out
the electromagnetic spectrum for the purposes of commerce. Of specific interest are the concepts
of ether (or “aether”), defined as an undifferentiated “fluid” tying the human body and mind to a
universal energy, and the aeolian—particularly the aeolian harp—which has for centuries served
as an auditory philosophical tool, tying the cosmic machinations of the ethereal plane to the
human sensory apparatus. With the third encounter, entitled Sonde, we will trouble the signal/
noise binary and take a turn toward strategies for expressing the imperceivable that rely more
directly upon a highly-interpreted, poetic mode of sonification—a form I will dub “aeolian
theater”—in which the features of an analytic signal are used to affect the parameters of
synthesis models designed to “narrativize” the phenomena driving them.
As something of a break with the norms of textual analysis, before giving a full description
of each of these electromagnetic encounters, I would first like to take a sighting of those
theoretical, scientific, and social issues that weave throughout, informing the shape of each work.
The most obvious of these issues is the observation that we are awash in a torrent of human-
generated electromagnetic radiation, most of which falls outside the spectrum of visible light.
Thus, no direct perceptual contact can ever be made with the phenomena in question, yet, despite
this fact, radio and microwaves structure our behavior in non-trivial ways; to witness a person
search at random for a strong WiFi or cell signal, waving their device above their head while
staring at the screen, hoping for a change in the number of bars, is to witness one of the many
ways in which the scientific image bleeds into the manifest image. There are, of course, natural
sources of radio and microwave radiation, but mostly, in today’s wireless world, the ocean of
invisible waves that proliferate around us is produced and sustained by communication
technologies and the theories that have made them possible, both of which have had countless
effects in our manifest worlds.
The imperceptible nature of electromagnetic weather raises a number of evocative questions:
What is its morphology? How do its patterns differ from those of “natural” electromagnetic
weather? Are there “storms,” so to speak, produced by cycles of human activity? To what degree
do its constituent waves interact with materials other than those involved in their transmission
and reception? For myself, the overarching question—the one driving much of the present
investigation—centers upon the possibilities for creating novel, immersive mediations of these
imperceivable phenomena without leaning too heavily upon the abstractions that color their
forms in the contemporary mind. To be sure, arriving at sufficiently evocative, scientifically
grounded responses to any of the questions above is simply impossible without sustained
engagement with the abstract schemas that anchor our knowledge of the phenomenon. The trick
lies in finding new ways of producing an embodied or more expansive understanding of those
aspects of the phenomena to which the abstractions of theory (diagrams, notation, equations,
etc.) gesture without re-asserting these as the primary semiotic channel.
However long one may struggle with the mathematics, pore over electrical diagrams, and
develop proficiency with concepts of wave propagation, the question of experience—of “what it
might be like to hear (see, touch, feel)” the waves in question—will be continually deferred or
relegated to informed speculation without material experimentation. Still, it is contestable
whether rough and ready experiments with amateur radio, such as those I conducted for the
projects in this chapter, are sufficient in themselves for providing phenomenological insight into
the electromagnetic imaginary. On the one hand, what both science and everyday experience tells
us is that the speculation around “direct” sensorial contact with radio or microwaves is mostly
idle; our particular evolutionary channels simply did not outfit us with the biology that would
help to predicate a notion of “direct” in this case. On the other hand, whatever doubts I have that
run along these lines are outweighed by even heavier doubts about whether any informed, modal
speculation of the form “what it might be like to experience x,” where x is defined as a physical
if imperceivable entity, must necessarily be conditioned on the requirement for immediacy. In the
final balance, all sensory experience is arguably mediated to the extent that the body itself is a
medium.
From the perspective of molecular biology and neuroscience, the senses emerge from
constructive and iterative material processes, particularly the stochastic processes expressed
between molecules as they combine to form the more complex molecular machines we label
peptides, proteins, and nucleic acids. As complexity theory has helped to show, the stochasticity
of systems like those we find in cell signaling is capable of facilitating highly efficient self-
assembly of noisy constituents into increasingly higher orders, eventually giving rise to an
emergent determinism. In the case of peptides and proteins, when self-organized at “higher”
scales, these become organs: eyes, ears, skin, etc. What is key to the success of these operations
are processes of signaling. Each biological process both emerges from signals passed between its
constituents and acts to pass signals on to other processes. Of course, as artists, the types of signals
that tend to concern us most are those involved in sensory activity, which are passed to the brain
to help facilitate the percepts that are constructed into a legible “scene” or, in the language of
phenomenology, a perspective upon “the given.”
The fact that stochastic signals can give rise to deterministic ones provides a physicalist
provocation for why we retain a myth of “immediacy.” Another provocation might be found in
everyday, phenomenal experience. Unless consciously attended to, the noise produced by the
macro effects of our material wetware and its conjunction with external stimuli retreats from the
forefront of conscious attention. Thus, our over-familiarity with the body’s materiality marks our
encounters with those objects and forces to which we have access with an imagined authenticity
and transparency. What reveals the physical and stochastic nature of sensory signals to ordinary
experience is the manner in which their self-oscillations or interferences (their “glitches,” so to
speak) cross the invisible thresholds we erect between body and world. The material and
mediating nature of the body jumps back into the picture when our senses are either deprived of
external stimuli (as John Cage discovered during his experiences with anechoic chambers) or
otherwise begin to degrade and fail. It is difficult to ignore, for example, the tinnitus that adds a
constant whine and whooshing to the experience of one’s favorite musical recording. With
training, one can learn to live with or otherwise block out the self-noise generated by the body’s
wetware, but its infringement upon our clear “pictures” of reality is a reminder of the degree to
which our bodies are biological transducers for the physical signaling events our brains
subsequently parse into the certainties of meaning. Bodies are therefore more than complicit in
the transduction of signal; they are radically generative. Thus, the question of immediacy
becomes one regarding the shifting rubrics for authenticity we deploy, which demarcate
biologically mediated encounters from those that are mediated through “exterior” technological
apparatuses.
The demand for immediacy, however easy to trouble either scientifically or ontologically, is
no less stubborn for that fact. As we saw in the last chapter, it appears time and again in
philosophy of science as both linchpin and gremlin with respect to empirical observation. We
cling to spurious demands for immediacy even while evincing a tendency to frame our
experiences with the noisy (the liminal, the other, the alien) in the vernacular of those
experiences for which we have long since developed abstract schemas, disciplinary languages,
and most importantly embodied intuitions. We do so tacitly, as we find in the phenomenon of
pareidolia—a condition in which the perceiver incorrectly extracts patterns from noise—and
explicitly in our resistance to the alien interruption and the uncertainties of the
phenomenologically dark.
In Phenomenology of the Alien, Bernhard Waldenfels asserts that where we find no appeal to
familiar experiences or to their corresponding vernaculars in our encounters with alterity, new
ones must be formed: “The transgression of the sphere of an intentional or rule-governed sense,”
he says, “takes place in responding to an alien demand that does not have sense and does not
follow rule, but which interrupts the familiar formations of sense and rule, thus provoking the
creation of new ones.” (Waldenfels 36). Electromagnetic encounters have the potential to trouble
“the familiar formations of sense and rule” in precisely this way. Our experience of radio and
microwaves is as indirect as our experience of natural light; it is made possible only through the
interaction of said waves with the material world we can see, touch, and hear.
By means of their physical interactions, electromagnetic waves facilitate the most pedestrian
of daily activities, yet their intangibility and their withdrawal from sense can also produce
shadows in the public imagination, creating the conditions for the paranoia we see expressed in
cases of so-called “electromagnetic hypersensitivity,” or “WiFi allergy”—a photonegative of the
“healing” magnetic experiments conducted by Anton Mesmer, which captured and entranced
eighteenth century imaginations. Due to the absence of sensorial “immediacy” with respect to
radio and microwaves, the tangible presence of their indirect effects, and a generalized but
passing familiarity with their scientific rudiments, an uncertainty is produced around
electromagnetic weather that imagination attempts to stabilize. Electromagnetic waves thus
present a tangle of physical, social, and representational phenomena that is both rich in aesthetic/
discursive potential and, due to their lack of perceptual footholds, elusive as an artistic medium.
3.3 Taking Signal For a Walk
Despite the problematics illustrated above, there are methods for detecting and expressing
electromagnetic weather that communicate through what I will call “luminocentric” modalities.
Perhaps the longest-standing of these are found in radio telescopy, the two-dimensional
projections of which have, since the nineteen-sixties, depicted a wide range of electromagnetic
waves emitted from celestial bodies through a visual modality that renders them easily
understood as near neighbors to visible light on the electromagnetic spectrum (fig. 1). On a more
terrestrial scale, there have been numerous published research projects over the past decade that
investigate or operationalize technologically-produced electromagnetic weather in similar
luminocentric ways. In 2013, for example, Fadel Adib and Dina Katabi, working from a
laboratory at MIT, were able to track individuals through solid walls by measuring WiFi signals
reflected from their bodies (a project that will no doubt be seized upon and interpreted through
some of the darker lenses of the public imagination) (Adib & Katabi). As intriguing and
innovative as projects like this can be, they nonetheless fall very much in line with the “rule-
governed sense” created by our familiar encounters with visible light.
Projects by other artists, undertaken both before and after my own works described in this
chapter, have engaged the propagation of electromagnetic weather more explicitly in terms of
what Anthony Dunne names “Hertzian Space,” which he describes as an “‘electroclimate’ defined
by wavelength, frequency, and field strength arising from interaction with the natural and
artificial landscape” (Dunne 104-105). Since the late 1970s, Christina Kubisch has been
facilitating “electrical walks” via headphones outfitted with induction coils and custom
electronics that tap into the electromagnetic auras emitted by various pieces of electrical
infrastructure in public spaces.
Fig. 1. Crab Nebula, NASA, ESA, G. Dubner (IAFE, CONICET-University of Buenos Aires) et
al.; A. Loll et al.; T. Temim et al.; F. Seward et al.; VLA/NRAO/AUI/NSF; Chandra/CXC;
Spitzer/JPL-Caltech; XMM-Newton/ESA; and Hubble/STScI. Jet Propulsion Laboratory,
https://www.jpl.nasa.gov/spaceimages/details.php?id=PIA21474. Accessed 31 March 2020.
Taking a more luminocentric approach, projects like Foulab’s
Project Cogsworth and Luis Hernan’s Digital Ethereal rely on systems of localized spatial
mapping (Foulab). Experiments like these have been mainly achieved in one of two ways: 1) the
construction of custom-made radio receivers attached to robotic servos that “slice” up a given
space into a rationalized grid; or, 2) a much less precise, manual positioning of hand-held,
light-emitting radio receivers that act as signal gauges at various points within a room. With
strategies of this second type, the color of a light-emitting diode, which changes according to
frequency or signal strength at each spatial position, is photographed in a series of multiple
exposures such that the resulting image is an aggregation of discrete readings (fig. 2). With
strategies of the first type, the images are produced in a manner extremely similar to the
sampling strategy of radio telescopic arrays described by Woodruff Sullivan in his history of the
radio telescope (Sullivan 118-123) and often make use of the “false color” so often used by
scientists to maximize clarity through contrast.
Fig. 2. Hernan, Luis. Digital Ethereal. In Alderson, Rob. “Luis Hernan’s work makes WiFi
signals visible in astonishing images,” Itsnicethat, 28 Ju. 2014, https://www.itsnicethat.com/
articles/luis-hernan-digital-ethereal. Accessed 1 April 2020.
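To make the second, manual strategy concrete, the following minimal sketch (my own illustration, not Hernan’s or Foulab’s actual pipeline; the readings, room dimensions, and colormap are hypothetical stand-ins) aggregates discrete, hand-gathered signal-strength samples into a single false-color raster:

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical (x, y, strength) readings taken at hand-held positions in a
# 3 m x 3 m room; x and y in meters, strength in arbitrary normalized units.
readings = [(0.5, 0.5, 0.9), (1.5, 0.5, 0.7), (2.5, 1.5, 0.3), (0.5, 2.5, 0.6)]

grid = np.full((3, 3), np.nan)      # cells never sampled stay blank
for x, y, s in readings:
    grid[int(y), int(x)] = s        # each cell keeps its discrete reading

plt.imshow(grid, origin="lower", cmap="inferno")  # "false color" via colormap
plt.colorbar(label="relative signal strength")
plt.title("Aggregated discrete readings")
plt.show()

Like the multiple-exposure photographs, the result is an aggregation of isolated samples rather than a continuous field; everything between the sampled positions remains unknown.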
For myself, the strategy of using secondary luminous sources to map selected nodes within
Hertzian space particularly brings to mind the technique of “partial photography” invented by
Etienne-Jules Marey in the late nineteenth century as a method for capturing, through multiple
photographic exposures, the traces of skeletal joints and other key physiognomical markers of an
animal in motion. To cull the particular signals he was after, Marey first photographed his
subjects against a black background and draped black cloth over their bodies so that all of the
animal’s surface detail was obscured. This helped to minimize the effects of ambient light.
Marey then attached small strips of white or reflective material to the exterior of the cloth at
those coordinates he found most salient for kinetic study. In doing so, he was able to suppress the
noise represented by both unwanted light and the complexity of the animal’s surface attributes,
thereby maximizing the analytic signal, which, in this case, was mediated by means of abstract
markers that left measurable traces upon the photographic plate over regular time intervals (fig.
3). In many respects, Marey’s proto-cinematic approaches to maximizing the analytic signal were
a foreshadowing of today’s motion capture technologies, yet for myself at least, they also
foreshadow a much more fundamental technology of data capture: digital sampling.
Marey was frustrated by what he identified as the “temporal gaps” produced by his
technique. Too much uncertainty was introduced in the lag between exposures and this drove him
to explore ever more precise means for achieving mechanical objectivity. Similarly, one of the
foundational mathematical theorems of digital audio—the so-called “Nyquist theorem”—states
that the sample rate (the number of “exposures” per second) must be at least twice the highest
frequency one wishes to capture in order to avoid introducing “artifacts” into the data. This is
due to the fact that the analytic signal is represented as a swing of amplitude values between +1
and -1 with respect to a baseline of 0 (the so-called “zero crossing”). Let us imagine that the
amplitude of a “slice” of continuous sound is sampled at +1, then proceeds to decrease with
subsequent slices through +.9, +.3, -.6, all the way down to -1, then back up to +1, traveling
through all the previous values. Just like Marey’s multiple exposures, if the sample rate isn’t fast
enough to capture these values as presented by the incoming analog signal, the device is likely to
register the first +1 and then, say, another +1 directly after. The uncertainty of the machine
becomes new noise (the unwanted glitch) and the signal is temporarily lost. What is interesting is
that artist/researchers who apply a manual approach with respect to electromagnetic weather
inadvertently create the very same randomized spatiotemporal gaps that so frustrated Marey and
drove him toward faster and more precise methods for achieving mechanical objectivity.
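A small numerical sketch makes this artifact concrete (the frequencies here are hypothetical and chosen only to restate the Nyquist constraint; this is not code from any of the works discussed):

import numpy as np

f_signal = 1000.0              # Hz, the frequency we wish to capture
fs = 1100.0                    # Hz, an undersampled rate (below 2 * f_signal)

n = np.arange(12)              # twelve successive "exposures"
undersampled = np.sin(2 * np.pi * f_signal * n / fs)
# A phase-inverted 100 Hz sine sampled at the same instants:
alias = -np.sin(2 * np.pi * (fs - f_signal) * n / fs)

# The two sequences agree to machine precision; the sampler cannot tell a
# 1 kHz tone from its 100 Hz alias. The missed cycles have become artifact.
print(np.allclose(undersampled, alias))   # prints: True

The “temporal gaps” between samples, in other words, do not merely lose the signal; they manufacture a phantom one.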
Fig. 3. Marey, Étienne-Jules. Walking Man. 1884. In Jensenius, Alexander Refsum. “Some
Video Abstraction Techniques for Displaying Body Movement in Analysis and
Performance.” Leonardo, vol. 46, no. 1, 2013, pp. 53–60., doi:10.1162/leon_a_00485.
Yet another difference between Marey and the artist/researchers investigating
electromagnetic weather is that Marey’s animal subjects display phenomena that, while very
often too fast or too subtle to be captured by the unaided human eye, nonetheless fall squarely
within the same optical domain addressed by the technology built to compensate for this fact. In
other words, the Deleuzian “slowing down” facilitated by Marey’s devices is a true “freezing” or
crystallization of a phenomenon that is no less visual for the fact of its outpacing the everyday
analytical gaze; no translation across sense registers is needed. As such, Marey’s experiments
with these phenomena could be said to belong to an empirical tradition like that we find in Da
Vinci’s sketches of hydraulic action (fig. 4). Da Vinci attempted to capture the eddies and swirls
of running water using the then cutting-edge technology of chalk and portable sketchbooks, the
latter of which were made cheap and widely available during the book printing revolution of the
fifteenth century.
Fig. 4. Da Vinci, Leonardo. Turbulent flow and free water jet issuing from a square hole into a
pool. “Leonardo da Vinci and Physics of Fluid.” Sersol, 5 May 2017, http://sersol.weebly.com/
workshop/leonardo-da-vinci-and-physics-of-fluid. Accessed 31 March 2020.
As Marey would later find with respect to the objects of his own experiments, though visible,
the sheer speed and ephemerality of hydraulic action would no doubt have made Da Vinci’s
observations extremely hard-won and riddled with uncertainty. The coordination between eye,
hand, and trained judgement was the primary tactic hybrid artist/scientists like Da Vinci had at
their disposal to capture the gesture and overall gestalt of their objects, but the nebulous and
ephemeral character of phenomena like storms, clouds, waves, and running water quickly tests the
limitations of this tactic. Marey’s partial photographs were a means for pinning down such
optical fluidity in his animal subjects—for chasing out uncertainty by means of a precise and
mechanical objectivity.
On the other hand, Marey’s investigations into the action of the circulatory system more
explicitly revealed the limits of human perception itself as the baseline for scientific “seeing.”
The problem of rationalizing an understanding of cardiopulmonary action precedes Marey by some
centuries. It was a problem that also troubled Da Vinci, who is well known for his dissection of
cadavers as a method for studying human anatomy. Unfortunately for Da Vinci, while a corpse
can afford precise observation of the structure of the cardiovascular system, it is by definition
capable of providing only limited insight into its action. While, according to UK heart surgeon
Francis Wells, Da Vinci was able, despite all limitations, to deduce much of the function of the
circulatory system from observations of its structure (Wells, The Heart of Leonardo 173), his
particular mode of making visible, one which requires the subject to be inert under the analytical
eye, nonetheless places the live and subtle operations of the dynamic systems it investigates quite
beyond its field of view. It was a problem of diagnosis and examination with which doctors and
surgeons were still grappling well up to Marey’s time. According to Marey biographer François
Dagognet, the primary solution to this problem during the Victorian period was auditory in
nature and came in the form of René Laennec’s stethoscope. Yet, while it provided auditory
access into the body’s interior, the stethoscope turned the incidental noises of the body into a
noisy orchestra, requiring doctors and surgeons to develop a sensitivity to its movements and a
vernacular for selecting and interpreting individual “voices.” For Marey, such methods were far
too noisy and relied too heavily upon the trained judgement of the scientist. Quoting Marey’s
own writings on the topic, Dagognet states in A Passion for the Trace,
René Laennec learned to hear and interpret [mostly cardiopulmonary tremors, murmurs, and noises] but
Marey […] qualified his approach. What is more complex than a movement? It was necessary to know ‘the
amplitude, force, duration, regularity and shape. And if the force of this movement is not enough for us to
be able to perceive it, if its duration is too short for us to have the time to analyze the other features…’
Neither direct, violent experimentation nor observation would be enough (Dagognet 18).
What Marey required was the means for seeking the phenomenon’s “‘signature,’ so that it would
surrender its rhythms and variations in the form of graphic lines” (Dagognet 16). In short, what
he needed was the means for “slowing down” the phenomenon and for fixing its coordinates as
data on the plane of reference.
With both luminocentric and auditory approaches off the table, Marey sought to construct
new instruments whose mechanical objectivity could cull the signal for which he was searching
from the noise of the subject’s body, providing maximum certainty that the human in the loop
was not reading phantom signals from the contextual noise. For myself, what is notable about
such instruments as Marey’s sphygmograph (the technological forebear of the EKG) (fig. 5.) is
the manner in which their inscriptions are facilitated by a series of physical linkages that are
themselves made visible in the construction. Each linkage leads back to a well-calibrated sensing
device and its coupling with the body of the subject. Yet, the analytical signal is not untouched
by its walk through these linkages; indeed, even while their delicate operations transduce the
signal smoothly and reliably, excising it from the noise of fleshy, enworldened complexity, the
linkages themselves afford the possibility for new noises, new sources of uncertainty, and new
methods for reducing both.
Fig. 5. Illustration of Étienne-Jules Marey’s sphygmograph. In Fonseca, Lucas José Sá Da, et
al. “Radial Applanation Tonometry as an Adjuvant Tool in the Noninvasive Arterial Stiffness
and Blood Pressure Assessment.” World Journal of Cardiovascular Diseases, vol. 04, no. 05,
2014, pp. 225–235., doi:10.4236/wjcd.2014.45030.
The physical and tactile nature of the linkages needed to produce analytic signals from
enworldened complexity and the sculptural potential these qualities represent is what drove my
initial research interests for WaveForm. Yet, where Marey’s desire was to fix cardiopulmonary
signals on the Deleuzian plane of reference, my desire was to throw electromagnetic signals back
into the world—to have them resonate through registers of matter and space at the scale of the
human body and its sense modalities. This goal is made much easier to achieve by the fact that
the mechanical objectivity so sought after by Marey is today built into all manner of
technologies. What is left for the artist is to make the appropriate associations between
technological means and perceptual ends. Both sonifying and luminocentric approaches are used
in the construction of WaveForm, but in a way that speaks to the cross-modal, techno-
phenomenological pipelines like those developed by Marey for physiognomic investigation.
Most important for my purposes was the use of these pipelines to defamiliarize the functional,
indirect, everyday encounters with the phenomenon in question and to draw out the
phenomenon’s physical presence.
3.4 WaveForm
As we have seen, our contact with electromagnetic weather occurs by means of technological
conveniences. Importantly, as Marey’s experiments illustrate, signals can be made portable
across any number of media. It is fascinating, for example, that while radio is a close neighbor to
visible light, hearing remained the sense modality through which we made the most contact with
it until the graphs and images of radio telescopes and the more recent proliferation of Bluetooth
and WiFi for data transfer. In reality, there was nothing inevitable about the translation of
radio waves into auditory phenomena; Heinrich Hertz’s verification of James Clerk Maxwell’s
field theories of electromagnetism came to him in the form of electrical sparks that were as
visible as they were audible. Lord Kelvin even referred to Hertz’s “resonator” (the device that
detected the wave emitted by Hertz’s spark transmitter) as an “electric eye.”
Furthermore, according to Sungook Hong, parallels between Hertzian wave transmission and
visual signaling were frequently drawn during the period when wireless technologies were in
their embryonic stages. “Even when [scientists] imagined signaling by means of Hertzian waves,
an optical rather than a telegraphic analogy dominated,” says Hong, “It was partly because the
similarity of the Hertzian apparatus to light signaling devices (such as a lighthouse or a
heliograph) was more conspicuous than any similarity to telegraphic technologies” (Hong,
Wireless 7). In the end, however, wireless telegraphy won out in these discursive struggles over
how to best make use of Hertzian waves, whether because of telegraphy’s familiarity as a means
for communication, by force of the arguments of its proponents, or, what is more likely, because
the applications for military purposes quickly became evident. For myself, what this episode in
the history of wireless technology reveals is the extent to which imperceivable phenomena
simultaneously retreat from familiar sense modalities while displaying an evocative fluidity with
respect to the technologies we deploy to detect, express, and make use of them. Both of these
observations were pivotal for beginning to identify the particular material configurations through
which to mediate WaveForm’s detected signals.
I began my experimentation by first constructing a number of wide-band receivers (six in
total for the final work) that were capable of detecting not only WiFi signals, but also the
frequencies emitted by a number of cellular phones that were in common use at the time. These
latter signals were somewhat unpredictable as a source of EM radiation; cellular service
providers tend to be protective of both their technologies and the bandwidths they use, so, at the
time, information about them was scarce. By attaching a small speaker to the outputs of my
hand-built receivers, one can readily listen in on the crackle, whine, hiss, pops, periodic bursts,
and continuous pulses that are driven by the modulation of waves in the environment. Again,
while sound is a convenient analog for electromagnetic waves, it is more a matter of convention
than a necessity. With this in mind, I split the signal path for each receiver, creating a fork
between the speaker and a line output, routing the latter to an analog-to-digital converter. Once
digitized, the incoming signals can be algorithmically manipulated and expressed via any
number of media.
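By way of illustration, this digitization stage can be sketched in a few lines of C++ using the PortAudio library, treating an ordinary computer audio interface as the analog-to-digital converter for a receiver's line output. This is a minimal stand-in rather than the installation's actual code; the sample rate, block size, and loop length are placeholder values, and error handling is omitted.

```cpp
// Minimal capture sketch: read blocks of digitized receiver signal from an
// audio interface standing in for the ADC.
#include <portaudio.h>
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    Pa_Initialize();
    PaStream* stream = nullptr;
    // One input channel, no outputs, 32-bit float samples at 44.1 kHz.
    Pa_OpenDefaultStream(&stream, 1, 0, paFloat32, 44100.0, 256, nullptr, nullptr);
    Pa_StartStream(stream);

    std::vector<float> block(256);
    for (int i = 0; i < 500; ++i) {
        // Each read yields one block of samples ready for manipulation.
        Pa_ReadStream(stream, block.data(), block.size());
        float peak = 0.0f;
        for (float s : block) peak = std::max(peak, std::abs(s));
        std::printf("block %d: peak amplitude %.4f\n", i, peak);
    }
    Pa_StopStream(stream);
    Pa_CloseStream(stream);
    Pa_Terminate();
}
```

Once each block of samples is in memory, it can be filtered, analyzed, or routed onward to whatever perceptual register a work requires.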
Before implementing any specific computational strategy, however, it was important to think
through what, exactly, the final destination of this walk would be. Taking Farocki as an
inspiration, I began researching experimental methods for analyzing wave dynamics, which
eventually led to descriptions of the ripple tank, a common apparatus used to demonstrate wave
diffraction. In experimental practice, the ripple tank is often placed in conjunction with a
shadowgraph (fig. 6), producing an assemblage of technologies in which light is shone through
the container of water and reflected onto a wall by means of a mirror and lens. This results in a
clear pattern of wave diffraction rendered in light and shadow. To produce diffractions of this
sort as a visual output for WaveForm required some mechanism for translating the incoming
signals into ripples on the surface of water. To achieve this, I constructed a sound system in
which the incoming signals are analyzed according to frequency and amplitude, both of which
are plugged into the parameters of sound generators that output frequencies roughly matching the
resonant frequencies of both the tray and the water inside it.
The hardware comprises six speakers arranged (cones upward) in a hexagonal pattern around
an approximately twelve-inch diameter hole in the top of a cabinet containing the computer, a
monitor built into one side for the display of informational graphics, a six-channel amplifier, and
an overhead projector (fig. 7). During the installation process, I employed a spatial mapping
technique similar to those outlined above: I walked the space of the gallery,
lifting each receiver high and low in order to locate those positions in the room with the strongest
signals. When a suitable location was discovered, I would either attach the receiver to the wall or
suspend it between the floor and ceiling using thin-gauge aircraft cable pulled taut such that
the receiver would not easily swing about as people interacted with it. The signal processing
software was built in Max/MSP, a node-based graphical programming environment, and the
strategy employed involved filtering and splitting the incoming signals into six discrete
frequency ranges and using the peak amplitudes of each to effect changes in six different FM
(frequency modulation) synthesis voices.
Fig. 6. Shadowgraphs of diffraction patterns in a ripple tank. Harvard Natural Sciences Lecture Demonstrations, https://sites.fas.harvard.edu/~scidemos/OscillationsWaves/RippleTank/RippleTank08.jpg. Accessed 30 March 2020.
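The strategy just described can be transliterated in miniature in C++. The installation's logic lived in a Max/MSP patch, so everything below, including the band centers, Q values, voice tunings, and scalings, is an invented approximation of the approach rather than a reconstruction of the patch itself.

```cpp
// Sketch of the WaveForm signal strategy: split an input signal into six
// bands, follow each band's peak amplitude, and use that envelope as the
// modulation index of a corresponding FM voice.
#include <algorithm>
#include <array>
#include <cmath>
#include <vector>

const double kPi = 3.14159265358979323846;
const double kSampleRate = 44100.0;

// Second-order bandpass filter (RBJ biquad cookbook coefficients).
struct Bandpass {
    double b0, b2, a1, a2;               // normalized coefficients (b1 = 0)
    double x1 = 0, x2 = 0, y1 = 0, y2 = 0;
    Bandpass(double centerHz, double q) {
        double w = 2.0 * kPi * centerHz / kSampleRate;
        double alpha = std::sin(w) / (2.0 * q);
        double a0 = 1.0 + alpha;
        b0 = alpha / a0;
        b2 = -alpha / a0;
        a1 = -2.0 * std::cos(w) / a0;
        a2 = (1.0 - alpha) / a0;
    }
    double process(double x) {
        double y = b0 * x + b2 * x2 - a1 * y1 - a2 * y2;
        x2 = x1; x1 = x; y2 = y1; y1 = y;
        return y;
    }
};

// Two-operator FM voice: carrier phase-modulated by a single modulator.
struct FMVoice {
    double carrierHz, modHz;
    double phaseC = 0, phaseM = 0;
    FMVoice(double c, double m) : carrierHz(c), modHz(m) {}
    double render(double index) {        // index = modulation depth
        phaseM += 2.0 * kPi * modHz / kSampleRate;
        phaseC += 2.0 * kPi * carrierHz / kSampleRate;
        return std::sin(phaseC + index * std::sin(phaseM));
    }
};

int main() {
    std::array<double, 6> centers = {100, 250, 600, 1500, 3500, 8000};
    std::vector<Bandpass> bands;
    std::vector<FMVoice> voices;
    for (double c : centers) {
        bands.push_back(Bandpass(c, 8.0));
        voices.push_back(FMVoice(c * 0.5, c * 0.25)); // placeholder tunings
    }
    std::array<double, 6> peak = {};     // per-band envelope followers

    std::vector<double> input(44100, 0.0);   // stand-in for receiver input
    std::vector<double> output(input.size(), 0.0);
    for (size_t n = 0; n < input.size(); ++n) {
        for (int k = 0; k < 6; ++k) {
            double band = bands[k].process(input[n]);
            // Peak follower with slow decay; the envelope drives the voice.
            peak[k] = std::max(std::abs(band), peak[k] * 0.9995);
            output[n] += voices[k].render(peak[k] * 4.0) / 6.0;
        }
    }
}
```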
The signals from each voice were routed to the speaker array upon which rested the acrylic
tray of water. Diffraction patterns appear on the surface of the water as soon as the carrier
frequency is amplified by incoming static. Further diffraction patterns appear as different signals
are picked up by the receivers. All of these are projected onto the wall in the form of a
shadowgraph. Unlike Marey’s graphical traces, the patterns of WaveForm’s shadowgraph are
fleeting, ephemeral, and affected by not only the signals driving them, but also by the geometry
and resonant properties of the water tray and the speakers; the self-noise produced by receivers
with bad ground connections or physically touched by curious visitors; children dipping their
fingers in the water; or the occasional bumping of viewers against the cabinet enclosure. At the
time of writing, the reader can visit https://vimeo.com/23552133 to view the results.
While WaveForm was presented squarely within the context of an art exhibition, what became
apparent during the period in which it was installed and running is that, for many, the work was barely
legible as a work of Art. For some visitors, this fact was what made the experience more
interesting; for others, who were perhaps expecting something far more poetic in the context of
an Art exhibition, WaveForm was at best a source of conceptual frustration. Given the
provocations laid out in chapter two, it would be fair to question whether WaveForm is a product
of art-science. Suffice it to say that a work’s illegibility as a product of either Art or Science
alone does not immediately place it in an art-science context. Does the work, for example,
operate as a thought experiment? Does it generate understanding by drawing connections
between multiple domains of knowledge? Does it truly prompt us to imagine what it would be
like to see, feel, hear an imperceivable, physical entity, x? My aim was to address the latter two
questions in particular. However, what became apparent as I approached installation of the final
prototype was that my research into electromagnetic theory and the technology required for the
work’s construction effectively constituted the adoption of a more discipline-specific language—
one deeply unfamiliar to the “cold” viewer.
Recognizing early on that all but scientists or the scientifically literate were liable to
misunderstand the subject of the work or, at best, overlook the interactive potential of the
sensors, I developed a system of informational graphics to be integrated throughout the
experience. Some of the graphics were simply intended to illustrate the possible types of
interaction with the sensors. The graphics displayed on the flat screen attached to the projection
system, however, were meant to provide background information about electromagnetic theory
and to make the operational logic of the work relatively transparent. These visual guideposts
helped make a linkage for the viewer between what they were seeing and hearing and what they
were to understand as its root cause. In many respects, the reintroduction of familiar abstractions
of the electromagnetic spectrum into a work that was ostensibly meant to be a purely material/
sensorial encounter could be said to undercut the primary goal. Yet, there is a provocation to be
found in this convolution of diagrammatic explanation with the experientially raw that speaks to
a wider problem of art-science praxis. Simply put, without these semiotic guideposts, some
viewers were deeply uncertain about what they were experiencing; they were to some extent
captivated by the dynamic, aesthetic quality of the work, but its content remained impenetrable
in the absence of didactic information. This problem is one that crops up time and again in works
of art-science.
In the case of WaveForm’s earlier iterations, informational graphics alone did little if
anything to prevent some of the electromagnetic imaginary's more paranoid speculations from
stepping in to fill conceptual gaps. For some, there was a suspicion that their data was being read
or stolen; for others, the mention of “radiation” provoked a sense of existential threat.
WaveForm’s demonstration of the physicality of radio and microwaves created something of a
material Rorschach test for these imaginings, which even the introduction of mathematical and
diagrammatic explanation did little to resolve. In some respects, this speaks to the
efficaciousness of defamiliarizing the pedestrian encounter with electromagnetic waves, or
indeed any such imperceptible phenomenon, regardless of the presence of didactic information.
Yet, what these viewer responses illustrate is the degree to which the tensions we have created
between the scientific image and the manifest image are the products of a mostly academic
binary. In lived encounters like the one I have just described, it is the penumbral space of the
scientific imaginary that structures behavior and speculation in non-trivial ways. In the case of
WaveForm, the question therefore ceased to be one of how best to generate understanding by
means of the embodied encounter and became one regarding the best way to use the encounter to
reshape the scientific imaginary in such a way as to evoke curiosity for further investigation.
Mostly, the answer to this question was found in my iterations over the work’s visual design.
WaveForm’s style language departs radically from the rectilinear forms that tend to mark out
objects (outside of minimal or abstract works) as neutral but necessary infrastructure within the
space of the gallery (e.g. pedestals, bases, display cases, frames, etc.). The kiosk, for example, is
trapezoidal in form, and is meant to call up the consoles and appliances one finds in retro science
fiction. I designed and constructed the acrylic tray that served as the ripple tank in the form of a
hexagon in order to not only match the six speaker array, but also to echo the most common
shapes of cell networks. The circular projection is meant to evoke the iconographic viewports of
microscopes and telescopes. The transparent, vacuum-formed enclosures for the receivers bring
to mind alembics, beakers, and other glass laboratory instruments. The overall aesthetic is
heightened by the audible, synthesized sounds driving the projected diffractions, which create a
general sonic ambience with the flavor of nineteen-sixties science fiction. The resonances between
theoretical, physical, and aesthetic considerations help to embed the work in a narrative context
that activates a more positive imaginative potential with respect to the imperceivable while
helping to defuse whatever resistance more curious visitors might evince toward didactic
scientific information.
As WaveForm is the earliest work in which I attempted to tackle the subject of
electromagnetic weather, there were a number of unforeseen outcomes, some of which were
simply the results of oversights on my part (e.g. the water periodically evaporating due to the
heat of the projector), but many of which helped to shape the present discussion. First, while the
spatial mapping technique was effective for creating arrangements of receivers that produce peak
signals, these arrangements were optimal only over very short periods of time. After a few hours,
some receivers would detect only weak signals from the WiFi, while others would become more
sensitive to bursts from cell phones. Second, the audible portion of the work could not be
contained within the walls of the gallery. Up and down the hallways of the building, none of
which are carpeted or have acoustic treatment, one could hear the beeps and buzzes of
WaveForm reverberating off of walls and ceiling; the sound, which had a context and function in
the gallery, became a sometimes unwelcome noise pervading the hallways and rooms around the
gallery. This situation first became apparent during the initial setup and testing, which took place
a number of days before the opening, but rather than attempt to prevent the sound from
overspilling the space of the gallery, I instead placed small, circular vinyl stickers bearing WiFi
and Bluetooth logos, as well as a small iconographic sine wave, on windows, doors, and walls
throughout the building. My goal was to create visual anchors for the sound, hopefully reminding
listeners who were aware of the work’s content of the degree to which electromagnetic radiation
traverses spherically through physical space, irrespective of most physical boundaries.
Lastly, and perhaps more interestingly, each receiver’s wide-band reception meant that, in the
evening hours, they picked up a substantial amount of bleed-over from local AM radio stations.
This issue can be rectified by the design and construction of receivers that are more finely tuned
to each desired band (other media works, like Ryuichi Sakamoto’s and Daito Manabe’s Sensing
Streams, have implemented exactly this solution), but what these noisy interferences pointed me
toward was a deeper reflection upon the undifferentiated nature of electromagnetic weather itself.
It is only through the historical processes of socio-economic, military, and techno-scientific
discourse that the vast ocean of what was once called ether comes to be rationalized, parceled
out, and sold. Station bleed-over, which is often the result of over-powered signals from radio
stations that accidentally, or sometimes willfully, run afoul of FCC guidelines, is a reminder that
the clean divisions we inscribe in our diagrams of the electromagnetic spectrum are much more
gradient in the physical world.
Furthermore, the very fact that the word “electromagnetic” imputes to the phenomenon it
describes two constituent forces, viz. “electricity” and “magnetism,” means that the gradient
boundaries of those parceled territories are not only incurred upon by the bleed-over from
neighboring territories within the schema, but also by the effects from a number of sources from
without—sources that we exclude from our abstraction. At local and architectural scales, the EM
fields of electrical wiring and high current appliances are the most easily recognized sources of
undesired signal, but at Earth scale, lightning and electrical storms stand as the most common
sources of noise that have worldizing effects upon the tools we make of electromagnetic signals.
“From the perspective of electronic circuitry,” says Daniel Rothbart, “there is no difference
between a desired signal and noise; both are explained as variations of voltage with
time” (Rothbart, Philosophical Instruments 86). In the case of WaveForm, the poorly isolated
components of the construction—the heat of the projector, the bad ground wires, the bleed from
AM radio, the EM fields emitted by the amplifier and gallery lighting, the grounding effects of
viewers’ bodies—are all summed with cell phone, Bluetooth, and WiFi signals in the work’s final
expression.
3.5 Ethereal Bodies, Liminal States, and Aeolian Music
In the book Earth Sound, Earth Signal, Douglas Kahn traces the history of electromagnetic
inspiration in modern Arts practices, paying special attention to the role played by “earthly
energies” in early communication technologies as well as many “aelectrosonic” works of art
created during the mid-twentieth century. Of note in this historiography is Kahn’s account of the
earliest days of the telephone in which Thomas Watson, Alexander Graham Bell’s assistant,
heard the sounds of electromagnetic weather leaching into the signals of Bell's test set. The
infrastructure created for the earliest telephone systems made use of galvanized metal wires that
were poorly insulated from electromagnetic induction with the result that those wishing to make
telephone calls often picked up their handsets only to hear a mysterious whistling, crackling, and
popping that provided fuel to the nineteenth century imagination:
Those listening certainly knew what Morse code, music, and voices sounded like, so when they heard them
when no one was sending them from the other end of the telephone, they knew they had unexpected
company from other lines […] However, unlike Morse code, when people listening on the telephone heard
natural radio they could not know what it was, so its sounds were a matter of mystery and
speculation (Kahn 26).
The Earth’s electric circuit of lightning storms, sprites (electrical discharges in the upper
atmosphere), and ionospheric activity made its arcane language heard through the material
constructs of the new telephony to such an extent that it is easy to forgive romantic
interpretations of this phenomenon as material evidence of so-called ether—the “universal fluid”
that was the subject of much philosophical thought in pre-Enlightenment scientific paradigms.
Much speculation prompted by these strange noises centered, as one might expect, on the
possibility of disembodied spirits drifting in the ether. As Jeffrey Sconce has demonstrated, in the
years before the telephone, the Spiritualists of the nineteenth century viewed the ethereal plane
as populated with the souls of the departed waiting (thanks to the “technology” of mediumship)
to be contacted. In Haunted Media, Sconce tracks the co-development of the telegraph and the
Spiritualist movement across the United States, drawing attention to the many ways in which the
electromagnetic imaginary invited equivalencies between “far communications” conducted
spatially and those conducted inter-dimensionally. “As Spiritualist belief crossed the country
along with the unreeling cables of the telegraph companies,” says Sconce, “Spiritualist books
such as The Celestial Telegraph and periodicals such as The Spiritual Messenger frequently
invoked popular knowledge of Morse’s electromagnetic telegraph to explain their model of
spiritual contact” (Sconce 24-25).
Spiritualist speculation around disembodiment was rooted in earlier speculation around the
degree to which ether interpenetrates the human body. For the Romantic poets and philosophers
of the late eighteenth and early nineteenth century, who were responding to the scientific
advances of the Enlightenment, the concept of ether was entangled with the sensitivity of the
body to invisible forces. In the wake of Mary Shelley’s Frankenstein and the bio-electric
experiments of Luigi Galvani from which the novel took its inspiration, the body came to be
viewed as a material conduit for ethereal energies. One half of the electromagnetic imaginary
was thus accounted for in the mapping of the Earth’s network of electric forces onto the human
nervous system. But, if the works of Galvani and Shelley fueled Romantic imaginings of
electrified bodies, it was Anton Mesmer who did so with the magnetic body. Mesmer’s theatrical
demonstrations of “animal magnetism”—a force driven by so-called “magnetic fluid”—tightly
coupled in the minds of Mesmer’s spectators the mysterious attractive force one observes in
metals and minerals with the flow of spiritual energy through the human body. Waving an iron
wand about their bodies, Mesmer “cured” his participants of any number of psychosomatic
illnesses. Yet, for many, including the king of France, a question lingered regarding the
relationship between the effects on display in Mesmer’s demonstrations and the aesthetic context
in which these latter were conducted. Mesmer’s “laboratory” was a far cry from those of
respected scientists. In his parlor, for example,
[d]elicate perfumes floated in the air to mingle with the magnetic fluid pulsing through the atmosphere.
Thick carpets, heavy curtains, and ornate furnishings graced the dimly lit chamber in which patients
gripped the iron rods extending from the baquet and awaited the onset of the pivotal magnetic "crisis." On
the walls of the room hung large gleaming mirrors which, according to Mesmer's precepts, reflected the
fluid and intensified its strength. Soft music played on the pianoforte or glass harmonica—on occasion by
Mesmer himself—kept the fluid in steady circulation (Tatar, “From Mesmer to Freud” 14).
Such a theatrical setting lent itself not only to a skepticism with respect to Mesmer’s claims, but
also to suspicions of improper behavior. It is no small irony that Benjamin Franklin, one of the
pivotal figures in the story of the taming of electricity, was among those chosen for the royal
commission appointed by Louis XVI to investigate Mesmer’s claims. Franklin and his
colleagues made short work of debunking “animal magnetism” as a real, physical substance and
are credited with the first “blind study,” so named due to their blindfolding of Mesmeric patients
in order to determine whether it was truly an invisible force or the immersive theatricality of
performance that created the observed effects of the “magnetic crisis.”
In the interrelated strains of Romantic speculation represented by Galvanism on the one hand
and Mesmerism and the Spiritualist movement on the other, we find both halves of the
electromagnetic principle intersecting with lived experience, shaping belief and behavior, by way
of a popular imaginary that places humans in profound relation to the invisible, abiotic forces of
nature flowing around and through human bodies. The question that emerges from this picture is
one regarding the degree of autonomy we actually enjoy with respect to the forces that animate
us. As illustrated by works like Frankenstein, the very concept of Mesmerism (a word
synonymous with hypnotic suggestion), and the apparent submission to spiritual forces we see in
the practice of mediumship, the etheric imagination held the human body as wholly subject to the
logic of forces much greater than human will. In the etheric imagination, the body was woven
into the fabric of a luminiferous field that must, it was thought, contain a cosmic logos, but
which presented as a cosmic noise that the rational mind was unable to comprehend. In Ether: The
Nothing that Connects Everything, Joe Milutis draws attention to the fact that even Isaac
Newton, in his Opticks, speculated “on the existence of a vibrational plane where humans are
played like aeolian harps by an ether wind” (Milutis 7). Of course, Newton’s musing need not be
interpreted through the lens of automatism, but what is clear is that, once the affective connection
between body and ether has been made, such imaginings around the logos of exterior forces as
determinative of the body's responses follow quite naturally. The aeolian harp, to which Newton
referred, was the perfect philosophical object for contemplation of these ideas.
First described by Athanasius Kircher in his Phonurgia Nova, the aeolian harp had long been,
until the mid-nineteenth century, an object of speculation around the logos or “intelligence”
extant within the invisible flows of energy (including wind and visible light), which the ethereal
plane was understood to comprise. The harps themselves are typically constructed of wood and
strung with gut, each string being tuned to the same note so that when “brushed” by gusts of
wind of various strengths and durations they produce a series of harmonic intervals that “appear”
and “disappear” in an undulating drone. The strings thus produce a ghostly “music” (although the
appropriateness of the term has been much disputed), and, as Henry Thorowgood states in his
lyrical description of the harp, circa 1754, “tho’ untouch’d, can’st rapt’rous Charms impart, Of
rich, of genuine Nature, free from Art!” (Thorowgood 2). That is, free from human judgement,
creativity, or intervention (or so it was thought), nature is by means of the aeolian harp capable
of transcribing the inherent logic of its cosmic, stolidly imperceivable energies into the language
of sensory experience.
As both Mesmer’s theatrical “experiments” and Newton’s ethereal speculations demonstrate,
the “logic” of “Nature free from Art,” was not a detached and intellectual one; rather, it placed
the sensitive human body at the center of its ethereal operations. “Historically and formally,
then,” says Timothy Morton, “Aeolian harps belong to an age of sensibility, in which empiricism
constructed a world where the vibrations of matter were perceived by human nerves much like
the wind on a wind harp” (Morton, “Of Matter and Meter” 314). The harp creates an ambient
sonic experience that suggests a music latent in the clouds, wind, light, and shadow; the human
sensory apparatus is but another resonant node in this ambient network of vibrational energies.
Yet another example of this tendency of the Romantics toward ethereal speculation and affectivity
is found in Carl Engel’s 1882 recounting of the legendary “subterranean music” reportedly
emanating from beneath the cliffs of Bergen in Norway.
In his essay “Aeolian Music,” Engel describes a letter sent to Johann Mattheson, a friend of
Handel’s, by General von Bertuch, the governor of Fort Aggerhuus in Norway, regarding “a
subterranean cliff concert and the musical accomplishments of mountain dwarfs” (Engel 433).
The letter tells of the time when Bertuch had been apprentice to the leader of a band in Bergen,
which was practicing some festival music one Christmas Eve when a farmer led Bertuch, the
maestro, the organist, and the cantor to some cliffs near his house from which he claimed a
beautiful and mysterious music could be heard emanating. “First a chord was struck; then a
single tone was sounded, apparently for the purpose of tuning the instruments; then commenced
a prelude on the organ; and directly afterwards we heard a number of voices accompanied by
cornets, trombones, violins, and other instruments without being able to see any
performer” (Engel 433). In this case, we see that the ethereal energies at play are tied directly to
the Earth itself, but their terrestrial origins do not prevent their resonating through the bodies of
the listeners and overwhelming their senses: “In a moment the concert ceased; but the organist
fell down as if he had had a stroke, his mouth and nose foaming” (Engel 433). Interestingly, at
the close of von Bertuch’s account, he includes a few bars of musical notation, which he
transcribed based upon his memories of the melodic progression. Whereas, for the organist, the
sonic event was the source of catalepsy-inducing affective flows that override the senses and
place the body in-network with a cosmic noise impenetrable to rational thought, for von Bertuch,
it was a complexity to be tamed through notation—a dynamic, physical event to be mapped upon
the plane of reference.
For myself, what all of these windows into the history of aeolian music, ether, and
electromagnetism reveal is the degree to which the penumbral edges of the scientific and
manifest images already bleed into each other in daily life, not only in whatever real encounters
are described by the scientific imaginary, but within and by dint of the imaginary itself. What is
given to experience, particularly those experiences of half-sensed, unmapped complexity or the
phenomenologically dark, becomes the object of a quite natural desire for certainty and those
means for measurement that tend to accompany it. Yet, what is revealed in empirical,
technologically-afforded investigation is often transcribed into the language, or viewed as
evidence, of those tacit conclusions and inferences already drawn from cultural encounters and
previous sensory experience alike. In the Age of Sensibility, the general air of which was
inspired by the bio-electrical experiments of Luigi Galvani, conclusions of this sort placed the
body in-network with an excess of ethereal flows, electromagnetic energies, and the abiotic,
material substrates through which their vibrations are transmitted. For the mid-nineteenth century
imagination, electromagnetic media and human mediumship were coextensive with the same
universal principles. In both cases, the investigations and discoveries of Science were dovetailed
with wider structures of belief. The aeolian harp and aeolian music more generally became the
philosophical objects around which these beliefs could gather.
It is not only old, but also new technologies that may become philosophical objects, which
accrue multiple layers of meaning—physical, imaginary, symbolic—that help to structure our
everyday experiences in non-trivial ways. Timothy Morton updates the techno-philosophical
status of aeolian instruments when he claims them to be, “nearer to modern sensors and
seismographs […] than to the idea of an ‘art object’, something in a frame, in an exhibition space
or in an anthology, discrete, separated, isolated, and focused” (Morton 313). By making the “Art
object” functionally sensitive to the energies and effects of the ambient environment—a
worldizing tactic—one expands the functional boundaries of not just the Art object, but Art itself.
Much post- and transhuman speculation has, in fact, centered its own scientific imaginings
on a similar deployment of electronic sensors as philosophical objects with respect to the human
body, viewing biologically-embedded sensor technology as capable of pushing the very
definition of human across a liminal boundary, redefining it through a profound interconnection
with a much more heterogeneous flow of information within the wider environment. Similarly,
aeolian works, as philosophical objects, point outward and past the liminal boundaries of Art;
they gesture toward “Nature, free from Art,” or at least “blocs of affect,” as Deleuze and Guattari
would have it, unconstrained by the preconceptions we find in Art’s cycles of tradition and
betrayal.
A problematic similar to the one just described has been the subject of much contention in the
discourse around generative and algorithmic art, in which aeolian music has at best occupied an
ambiguous position. Are aeolian instruments (and more contemporary generative constructions
by extension) truly "musical" if no human judgement intervenes in their (seemingly)
spontaneous operations? If we dispense with the ethereal imaginary and the logos it imputes to
nature, are these instruments simply novelties of craft akin to wind or water chimes? These
ambiguities are, for myself, precisely the draw of constructing philosophical instruments. On the
one hand, it is true that no judgement is exercised in the aeolian or generative instrument’s real-
time performance; on the other, it is clear that human judgement is involved in selecting those
constraints that shape the final results. As Morton describes it, “Aeolian harps […] organize
sound without human input: as in some modern music, the owner’s task is simply to set up
parameters – to purchase the harp and to place it in a window frame” (Morton 313). In the case
of the aeolian harp specifically, the underlying principle, Morton claims, involves, “modulat[ion
of] one wave […] in terms of another wave (sound waves produced by the strings)”—an
operational logic he classifies as “ecomimetic” (Morton 313).
While I am in agreement with this account of transduction, for myself, the characterization of
it as "mimetic" misses an important feature of both mimesis and the objects under study.
Mimesis is derived from the Greek word for “imitation,” a term which implies a somewhat
distanced, hermeneutic relationship between the “thing” doing the imitating and the “thing”
being imitated. In the case of the aeolian instrument, we find no such distance in the process of
unfolding; rather, in that moment, we find a series of physical causalities through which the
patterns of one phenomenon are not simply communicated through, but drive the behaviors of
another. For myself at least, the term “ecomimesis” somewhat undercuts Morton’s comparison of
aeolian instruments to sensors and seismographs by de-emphasizing their physical logics in favor
of a representational logic. This is not to say, however, that mimesis plays no role among the
different strata of meanings we find in the ethereal imaginary. Indeed, the very association
between the play of wind across gut strings and the ethereal operations of electromagnetic
energies is built upon the logic of appearances as much as upon a tacit appeal to "rule-bound
sense.”
I would argue that despite our tendency to fall back upon “known knowns and unknowns,”
the multiple strata of meaning and affect we find in the ethereal imagination points us outward
toward the penumbral edges of everyday sense and sensibility. There is perhaps no better
example of this than that we find in Henry David Thoreau’s journal entries in which he muses
upon “the telegraph-wire vibrating like an aeolian harp” (Thoreau, Writings 496). “It reminded
me,” he says, “with a certain pathetic moderation, of what finer and deeper stirrings I was
susceptible […],” continuing, “It told me by the faintest strain that a human ear can hear […] that
there were higher, infinitely higher, planes of life which it behooved me never to
forget” (Thoreau, Writings 497). The “higher planes” toward which the aeolian telegraph
gestures might be characterized in two ways, both of which speak to what lies beyond the reach
of the senses until brought within a technological field of view. In one sense, these “higher
planes of life” are expressed through the logos of aeolian music, in another, they are expressed
through a noise that overspills and clings to the crystalline frameworks of rational order. In the
ethereal imaginary, there is a razor thin line that is walked between cosmic alterity and the logic
of rule-bound sense.
3.6 Aeolian WiFi
In the account I have just given, the notion of “taking a signal for a walk” could be
understood as expressive of the capacity of physical objects and material assemblages for the
transduction of information from one domain to another. In the ethereal imaginary, the very real
affordances of material constructions to facilitate "tele" or "far" communications were interpreted
through a number of popular speculations around just how far such communication could travel.
If a message could be sent from New York by means of electricity and received almost instantly
in Philadelphia, why could the same not be possible between the physical and ethereal planes?
We will see how such speculation around the crossing of liminal boundaries appears time and
again in the scientific imaginary, particularly with respect to electronic instruments and data, but
for now I will turn to a work that, in my own praxis, was the most direct response to the ethereal
imaginary described above.
The second electromagnetic encounter of this chapter, entitled Aeolian WiFi, makes use of a
number of the same wide-band radios and computational strategies that were used in the
construction of WaveForm with the addition of an antenna tuned specifically to 2.4GHz, which
was, at the time of construction, the standard frequency for wireless ethernet communication. A
wooden aeolian harp, whose strings are driven by the incoming signals, serves as the work’s
perceptualizing “display.” Aeolian WiFi represents the first prototype in this body of work
through which I examined the imaginative potential for ambient sound alone as a method for
worldizing electromagnetic data. The principal question investigated in the work concerned
what, if any, cultural/perceptual association could be produced between electromagnetic weather
and the physical environment by taking electromagnetic signals for a walk through electro-
acoustic instruments strategically placed throughout an interior, architectural space.
While the same wide-band receiver used for WaveForm was also used for Aeolian WiFi, the
latter work presented technical challenges that required additional circuitry as well as more
precise digital signal processing strategies, both of which were constraints that followed from the
nature of the harp’s construction. As earlier described, traditional aeolian harps are made of wood
and strung with gut string (or nylon in the case of more contemporary harps). Only strings made
of these materials are sensitive enough to the “Kármán vortex street effect,” the phenomenon of
fluid dynamics that makes possible the oscillation of the strings in response to gusts of wind. In
the case of Aeolian WiFi, the “wind” in question is electromagnetic rather than fluid in nature, a
situation that required alternative methods for driving the strings. Taking inspiration from the
“Ebow,” a small, hand-held electronic device used to resonate the strings of an electric guitar, I
fitted the wooden harp with electromagnetic coils, matching one coil to each of three strings tuned
together in a minor triad. The overall dimensions of the harp roughly match those of the harps
described by Kircher and other authors: approximately 28” in length, 3” in depth, and 10” in
width. For the purposes of prototyping, a thin, cabinet-grade plywood was chosen over tone
wood for its construction.
As stated above, the electronic hardware comprises an input stage of a tuned, 2.4GHz
antenna and a wide-band receiver. However, the signals detected by this receiver are far too
modulated and spread too wide across the radio spectrum to be capable of producing the stable
resonance frequencies required to drive the strings of the harp. To solve this issue, I constructed a
patch in Max/MSP that splits the incoming signal into three paths, sending each path through a
narrow bandpass filter. I experimented with the settings for each filter until maximum peak
levels were detected at three separate frequencies and set the “Q factor” (the ratio of the center
frequency to the overall bandwidth) high enough to ensure that mostly signals of the selected
frequency were allowed to pass. As the peak levels of each signal path are detected, a boolean
value (1 for above the threshold and 0 for below) assigned to each path is set and communicated
via OSC (Open Sound Control) over a serial port.
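A minimal sketch of this detection-to-message step might look as follows, assuming the three band envelopes have already been extracted by filters like those described. The addresses and threshold are placeholders, and the framing used to carry OSC packets over a serial line is omitted; what remains shows the boolean reduction and the byte layout of a bare OSC message.

```cpp
// Sketch of the threshold-to-OSC step: reduce three band envelopes to
// booleans and pack each as a minimal OSC message (address, type tag ",i",
// big-endian int32). The bytes are simply printed rather than sent.
#include <cstdint>
#include <cstdio>
#include <string>
#include <vector>

// Pack one OSC message carrying a single int32 argument. OSC strings are
// null-terminated and padded to 4-byte boundaries; integers are big-endian.
std::vector<uint8_t> oscMessage(const std::string& address, int32_t value) {
    std::vector<uint8_t> out;
    auto padString = [&out](const std::string& s) {
        out.insert(out.end(), s.begin(), s.end());
        out.push_back(0);
        while (out.size() % 4 != 0) out.push_back(0);
    };
    padString(address);
    padString(",i");
    for (int shift = 24; shift >= 0; shift -= 8)
        out.push_back(static_cast<uint8_t>((value >> shift) & 0xFF));
    return out;
}

int main() {
    // Stand-in peak levels for the three filtered paths; in the work these
    // came from the live receiver via the bandpass/peak-detection stage.
    const double peaks[3] = {0.02, 0.31, 0.75};
    const double threshold = 0.1;                  // placeholder value
    const char* addresses[3] = {"/string/1", "/string/2", "/string/3"};

    for (int k = 0; k < 3; ++k) {
        int32_t gate = (peaks[k] > threshold) ? 1 : 0;
        std::vector<uint8_t> msg = oscMessage(addresses[k], gate);
        std::printf("%s -> %d (%zu bytes)\n", addresses[k], gate, msg.size());
    }
}
```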
The secondary hardware stage of Aeolian WiFi comprises an Arduino microcontroller and
three square wave generators tuned to the frequencies of each string. The Arduino was used to
receive the incoming boolean values and pass these to one of three analog outputs according to
the OSC address. The resulting voltages from each output were used to toggle gating transistors
assigned to each square wave generator in rapid succession. The overall result is that the square
waves drive the coils of the harp according to the peak levels of the three selected incoming
frequencies. At the time of writing, the results can be heard at https://soundcloud.com/briancantrell/aeolianwifi-mix-01.
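The microcontroller stage can be reconstructed in outline as an Arduino sketch. The two-byte serial protocol, pin numbers, and use of simple digital outputs below are stand-ins of my own for the OSC-addressed outputs described above, offered to show the gating logic rather than to reproduce the original firmware.

```cpp
// Arduino-side gating logic, reconstructed in outline. Each received pair
// of bytes names a path (0-2) and a gate state; the matching output pin
// switches the transistor that passes that string's square wave to its coil.
const int gatePins[3] = {9, 10, 11};   // placeholder output pins

void setup() {
  Serial.begin(115200);
  for (int k = 0; k < 3; k++) pinMode(gatePins[k], OUTPUT);
}

void loop() {
  if (Serial.available() >= 2) {
    int path = Serial.read();          // which of the three paths (0-2)
    int gate = Serial.read();          // 1 = above threshold, 0 = below
    if (path >= 0 && path < 3) {
      digitalWrite(gatePins[path], gate ? HIGH : LOW);
    }
  }
}
```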
Initial tests of Aeolian WiFi yielded mixed and surprising results. Firstly, because the
prototype was constructed with plywood rather than tone wood, the sound of the harp is
relatively faint. The binary switching of the electromagnets by means of square wave generators,
however, was highly successful and, when tested against an ordinary receiver, mostly displayed a
synchronous relationship with the bursts of WiFi noise heard over the speaker. Secondly, and
perhaps more surprisingly, the sound produced by the harp, while relatively pleasant in tuning, is
extremely rhythmic and staccato. My initial prediction in this regard was that the resonance of
the strings would most likely have relatively long attack and decay times due to the time required
for the electromagnets to produce sufficiently strong oscillations. This would effectively produce
something more akin to a slowly evolving drone. However, quite the opposite turned out to be
true; the rhythmic beating of the strings tended to match the quick modulated pulses transmitted
from nearby WiFi hubs. Indeed, the beating of the strings is erratic enough to run counter to the
ambient potential of the instrument. It is somewhat reminiscent of a hammered dulcimer being
struck randomly by means of some mechanical process.
Further iterations of Aeolian WiFi would require a number of changes to both its
technological infrastructure and material construction. For one, the introduction of more strings,
and indeed more harps, would help to spread the sound both spatially and temporally such that a
more psychoacoustically balanced environment—one in which the sound is mostly pushed to the
auditory background—is created. Helping this purpose along would be the use of a more
appropriate tone wood and damping material for the construction of the harp so that the volume
level can be controlled more precisely. Technological changes would involve replacing the
hardware with smaller microcontrollers and the porting of the software to a lower-level,
embedded language so that a full-scale PC is made unnecessary.
One of the primary questions that continues to dog these electromagnetic works centers upon
the issue of viewer (or listener) identification of the source of the phenomenon. Most viewer/
listeners are bound to have little experiential grounding for making any sort of inference about
the underlying mechanism driving the sound of the work, much less a familiarity with the history
and function of the aeolian harp. For the “cold listener,” the sound of Aeolian WiFi and even the
visual connection made to the harp and its technological appendages differs only slightly from
the more familiar algorithmic or generative musical works whose logic is predominantly a matter
of formal aesthetics. Whereas, in the case of WaveForm, the solution to this problem was found
in the development of a form language that speaks to the scientific imaginary, with Aeolian WiFi,
the solution was found to be much simpler: the use of the antenna, which, along with the
harp, serves as a primary visual signifier.
In the production of nearly all of these works, I have found that there are few visual signifiers
more effective for evoking transmission and reception, or for pointing the viewer/listener toward
the imperceivable flows of electromagnetic weather, than the antenna. At present, the necessary
hardware for the prototype creates a certain "Brechtian" aesthetic. In future iterations, the
antenna, along with the harp, will remain as the only visual components of the work. The rest
will be hidden from the field of view. This is important because, phenomenologically, everything
presented to a viewer in the creation of an experience takes on a meaning (whether semantic,
aesthetic, or affective) that unfolds in the encounter. By creating a spatiotemporal boundary
around an experience—a boundary that marks an Experience apart from just any experience—
the creator draws a magic circle around a set of objects, events, or phenomena that imputes to
them a significance and what takes on significance becomes a matter of association. As we have
seen with Cage and, indeed, as I have argued already, the purpose of drawing this magic circle
and making these associations might be to orient attention outward, toward the penumbral edges
of the work, but the bounded experience nonetheless serves as the nodal point or nexus
from which to take up a particular perspective and with which to make the necessary, guiding
associations.
3.7 Noise as the Nature of Things
When we turn to discussion of the third and last electromagnetic encounter in this chapter, we
will revisit the issue of signifiers and their relation to the phenomenological encounter of a work.
At present, however, let us return to the issue of Aeolian WiFi’s sonic qualities and specifically to
the question of noise. There is an irony in the fact that, despite my overriding concern in nearly
all of these works with orienting the viewer/listener toward experiences of a noisy,
undifferentiated field, the initial results of Aeolian WiFi presented a noise that I found to be
unsuitable for the purpose. Thus, in that work, my interest in ontological noise runs directly
tangent to noise defined as unwanted signal. However, even while the contradiction between
these two understandings of noise troubles the aesthetic philosophy of any artist with designs on
the disruptive potential of noise—a tension that Jacques Attali explores at length and Paul
Hegarty notes as one of the problematic features of noise aesthetics (Hegarty 133)—the tension
between noise as complexity and noise as disruption can also be found in the very operational
and philosophical foundations of scientific instruments.
As I have already argued with respect to the senses, the physical nature of our scientific
instruments forecloses upon any appeal we might make to the “immediacy” of the signals
produced. Again, it is the physicality or, alternatively, materiality of the linkages that carry
signals which produce both self-noise and, by dint of their particular material contexts, a noise
incoming from those slices of the world to which the device is sensitive. With respect to self-
noise, Abraham Moles states in Information Theory and Esthetic Perception, "[…] Einstein showed
that, in the last analysis, background noise is due to the agitation of electrons in conductors; he
established that this noise is thus inherent in the nature of things and proportional to the absolute
temperature—just like the molecular agitation causing Brownian movement—and to the
frequency band considered” (Moles 84). With respect to extrinsic noise, one of the more
interesting and intractable problems faced by the investigator is one that is emergent from the
very process of amplification, which underpins the function of scientific instruments generally.
According to Moles, during the period in which electronic instrumentation began to outpace
the mechanical devices of the nineteenth century, “[a]mplification was opening an a priori
limitless view of the infinitely small; it appeared that every phenomenon, no matter how small,
would become measurable. If one amplifier has a gain of 100, two connected amplifiers will
have a total gain of 100
2
= 10,000, three amplifiers of 100
3
= 1,000,000, etc. In the limit, small
phenomena are no more, and there is no theoretical reason not to hear plants grow, not to hear an
airplane 1,000 miles away” (Moles 83-84). The wide-band receivers used for all of the aesthetic
works in this chapter make use of a number of amplification stages to bring detected
electromagnetic signals into either an audible range or one suitable for transistor switching and
control of mechanical effects. However, each stage of amplification increases not only the
presence of desired signals but also that of the noise in which they are embedded.
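A toy numerical model makes the point concrete. The figures below are invented for illustration and describe no particular receiver; they simply show that each stage multiplies the noise already present and contributes self-noise of its own, so the signal-to-noise ratio can only degrade through the chain.

```cpp
// Toy model of cascaded amplification: each stage multiplies both signal
// and incoming noise and adds self-noise, so SNR falls stage by stage.
// All quantities are arbitrary illustrative powers, not measurements.
#include <cmath>
#include <cstdio>

int main() {
    double signal = 1e-6;           // desired signal power entering stage 1
    double noise = 1e-8;            // ambient noise power entering stage 1
    const double gain = 100.0;      // per-stage gain (Moles's example)
    const double selfNoise = 1e-7;  // invented per-stage added noise

    for (int stage = 1; stage <= 3; ++stage) {
        signal *= gain;
        noise = noise * gain + selfNoise;  // old noise amplified + new noise
        std::printf("stage %d: total gain %.0f, SNR %.1f dB\n",
                    stage, std::pow(gain, stage),
                    10.0 * std::log10(signal / noise));
    }
}
```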
The primary tactic for extracting the desired signals out of noise is bandpass filtering, which
sets threshold values on either side of a desired frequency spectrum and allows only those values
that fall within these boundaries to "pass through." It is tempting, due in no small part to our
contemporary, technological understandings of the terms “amplification” and “filtering,” to think
of these processes as idiosyncratic to electronic instrumentation, yet we also find their principles
in some of our oldest scientific technologies. As Moles suggests (83-84), the diffraction limits of
optical microscopes and telescopes are the visual equivalents to the so-called “noise floor” of
electronic signal processing. As the ‘gain’ of any desired signal is increased, whether by the
addition of high-resolution lenses or through implementation of electronic op-amps, so too is the
‘noise’ represented by optical diffraction or the spread of unwanted amplitudes across unselected
frequency bands.
Importantly, it is not only noise understood as unwanted signal that is sacrificed in the
process of filtering, but also a certain percentage of the so-called ‘sidebands’—the incidental
frequencies that cling to the analytic signal and through which the latter’s particular complexity
and idiosyncrasies are produced. In short, while necessary for opening a window onto the
infinitesimally small or extremely distant, filtering also risks stripping away those enworldened
qualities that are integral to the phenomenon under study. It is notable that this filtering process
is a fair example of how invariants are abstracted from the world mechanically rather than
through eidetic reduction. The stripping away of what is “incidental” or determined as overly
variable is a first step toward identifying “universals”—those invariant properties that are shared
between similar or identical phenomena—but this comes at a cost. “The information gained in
one direction,” says Moles, “is lost in another. What is gained in sensitivity is lost in the variety
of elements. Here emerges an uncertainty principle which stems from the very nature of
things” (Moles 84).
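Moles's principle has a standard formalization in modern signal theory, which I offer here only as a gloss: the product of a measurement's temporal and spectral resolution is bounded from below (the Gabor limit), Δt · Δf ≥ 1/(4π). Narrowing the frequency window, and thereby gaining certainty about a signal's spectral content, necessarily lengthens the window of time required for observation, and vice versa.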
Of course, the best scientific instruments available are more than capable of taking both the
“fundamentals” and “sidebands” of a signal into account, yet, they often do so by simply
increasing the number of filters, producing a more complex map—a crystalline order—that rides
a vertiginous line between complexity and what Michel Chion, in Sound: An Acoulogical
Treatise, calls the “impoverishment” of a signal (Chion 156). For Moles, this fact does not
trouble his uncertainty principle, but only reconfigures it by substituting one dimension for
another. Whatever certainty we acquire by increasing the number of filters, and thus the signal’s
content, is sacrificed with respect to time:
[…] with the aid of a filter, we can go draw a phenomenon as weak as we like from the surrounding
background noise which masks it, and, with the aid of a sufficient number of these filters, describe its exact
form, its spectral composition […] However, we can extract the signal only on the condition that we have
an increasing amount of time, an amount tending to infinity as the phenomenon is more deeply drowned in
noise […] When the delay necessary for observation increases greatly, the chance increases that the
phenomenon considered has changed before we realize it (Moles 87).
The trajectory of this line of thinking points us to the fact that few, if any, strictly periodic signals
are encountered in lived experience. As Moles asserts, noise, formulated not only in terms of
undesired signal, but also as the product of incessant change, is the very nature of things. When
we consider Moles’s uncertainty principle in a broader context, it points us toward deeper
questions pertaining to the potential tactics for worldizing the “impoverished” signal or the data
that represents it. What effect, for example, might the introduction of artificially generated noise
have upon the signal in question? Might it help to retrieve (e.g. by means of stochastic
resonance) a modicum of the information lost in the original capture? Perhaps more important
for the aesthetic potential of noise, what new affective experiences might it evoke? Perhaps the
most evocative implication of Moles’s account of noise as the nature of things is that it helps us
to rethink the binary opposition we have constructed between signal and noise. At the end of the
day, this binary arises exclusively out of the intentions of the observer and not from any primary
ontological status.
3.8 Notation, Variation, and the Noise/Signal Binary
Thus far, I have been slipping between definitions of signal and noise and placing them into
binary opposition as they tend to be used in everyday speech. For the purposes of this section,
however, let us problematize this binary and attempt to fix a better definition for “signal” in
place, accepting the caveat that, having done so, we will most likely find it to be in need of
further problematization. For Claude Shannon, a signal is the entirety of what a transmitter sends
over a communication channel, the latter of which we may think of as the medium. "In oral
speech,” says Shannon, “the information source is the brain, the transmitter is the voice
mechanism producing the varying sound pressure (the signal) which is transmitted through the
air (the channel). In radio, the channel is simply space (or the aether, if anyone still prefers that
antiquated and misleading word), and the signal is the electromagnetic wave which is
transmitted” (Shannon & Weaver 7). Stated more simply, “signal,” in its Shannonite formulation,
is a sequence of perturbations within the channel or medium in question. Thus, signals might be
noisy, but are not necessarily pitted against noise in a strict binary sense. Perturbations are
perturbations. It is the sender and receiver who discern intended messages.
In keeping with the mathematical nature of his original treatise, Shannon defines noise as, “a
chance variable […]. In general it may be represented by a suitable stochastic process” (Shannon
19). What these stochastic processes serve to disrupt is the transmission of information, which
Shannon defines (somewhat counterintuitively to our everyday understanding) as a measure of
surprise. Here, Shannon borrows the term “entropy” from statistical mechanics as a means for
underscoring the probabilistic nature of his concept of information. The entropy of a signal can
be measured according to the likelihood of the next arriving “bit” in a transmission being one
specific value among others in a finite set. For example, if I am waiting for a series of letters to
arrive over a channel, the first (let us assume that it is the letter “N”) will have the maximum
amount of information assuming that I am not already aware of the contents of the message.
Assuming the message is not encrypted and that I speak the language of the sender, information
(or entropy) will subsequently decrease as I receive each new letter. In our present example, if I
receive the series “O,” “I,” “S,” the last letter “E,” when received, will have extremely low
information as our background knowledge of the prior letters (N - O - I - S) will present us with a
low probability for the next arriving letter being any letter other than ‘E.’
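This decrease can be made concrete with a short calculation. The conditional probabilities below are invented for illustration rather than derived from English-letter statistics, but they show how the surprisal of each arriving letter, −log₂(p), shrinks as the message becomes predictable.

```cpp
// Surprisal of each successive letter of "NOISE" under assumed (invented)
// conditional probabilities: -log2(p) bits per symbol.
#include <cmath>
#include <cstdio>

int main() {
    const char letters[5] = {'N', 'O', 'I', 'S', 'E'};
    // Assumed probability of each letter given everything received so far.
    const double p[5] = {1.0 / 26.0, 0.10, 0.15, 0.40, 0.95};

    for (int k = 0; k < 5; ++k)
        std::printf("'%c' (p = %.3f): %.2f bits\n",
                    letters[k], p[k], -std::log2(p[k]));
}
```

The final "E," being nearly certain, carries only a fraction of a bit.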
The reader might sense already that Shannon’s definition of information presents us with an
interesting conundrum with respect to noise. Information defined in proportion to entropy would
seem to suggest that maximum noise is equivalent to maximum information. For, not only do we
tend to view maximum entropy (the highest likelihood, for example, of finding a given molecule
at any random spatial point within a closed system) as congruent with our everyday
understanding of noise, but mathematical white noise is calculated and produced by distributing
just such a probability across the frequencies within a chosen spectrum (in digital signal
processing, we say that each value is “uncorrelated” with the next). Shannon sets us straight
regarding this confusion, however, stating,
This is a situation which beautifully illustrates the semantic trap into which one can fall if he does not
remember that "information" is used here with a special meaning that measures freedom of choice and
hence uncertainty as to what choice has been made. It is therefore possible for the word information to have
either good or bad connotations. Uncertainty which arises by virtue of freedom of choice on the part of the
sender is desirable uncertainty. Uncertainty which arises because of errors or because of the influence of
noise is undesirable uncertainty (Shannon & Weaver 19).
Noise, for Shannon, is any uncertainty-increasing, stochastic process that is destructive to the
intended message in a communication channel, which is to say such processes are capable of
degrading or changing the ‘bits’ of a message in undesirable ways. Instantly, we can see that,
with respect to scientific instruments, such a definition of noise, while effective for a theory of
communication, quickly becomes far more complicated when “intention” is considered in the
context of empirical investigation.
What we might call the “intended” message of a scientific instrument would be predicated
upon those phenomena with which we are already familiar or which are otherwise predicted by
theory. After all, we need to have at least some sense of what we are investigating to even begin
our search. At first blush, this relationship of theory to inquiry would seem to work against our
ability to uncover new phenomena (e.g. phenomena to which an instrument might, incidentally,
be sensitive, but of which the experimenter has no prior knowledge). This problematic can
arguably be found at the heart of many charges of theory-ladenness. It is easily demonstrated,
however, that the constrained relationship between instrument,
experimenter, and phenomenon that this form of critique supposes is regularly challenged in
scientific practice. While optical telescopes and microscopes, for example, leverage a small
sliver of electromagnetic radiation (the one with which we are most familiar), they do not simply
augment our vision with respect to what we already see imperfectly with the naked eye, but
widen our view upon a vast (albeit limited) field of phenomena. Furthermore, a scientific
instrument might “stitch” together fields of different scales or perceptual registers—a function
that is arguably characteristic of most modern scientific instruments—by leveraging the physical
linkages between perceivable and imperceivable phenomena. In Representing and Intervening,
Ian Hacking traces the intellectual history around such instruments to arch-empiricist Francis
Bacon, who termed them “evoking devices,”
Bacon […] knows the difference between what is directly perceptible and those invisible events which can
only be ‘evoked’. The distinction is, for Bacon, both obvious and unimportant. There is some evidence that
it really matters only after 1800, when the very concept of ‘seeing’ undergoes something of a
transformation. After 1800, to see is to see the opaque surface of things, and all knowledge must be derived
from this avenue. This is the starting point for both positivism and phenomenology […] To positivism we
owe the need to distinguish sharply between inference and seeing with the naked eye (or other unaided
senses) (Hacking 168-169).
Hacking’s observation of phenomenology’s co-development with positivism in relation to
post-seventeenth century critiques of the senses and their epistemic roles should not escape us
even if, for him, it is positivism that is of greater concern. In many respects, the skepticism about
the ability of Science to make truth claims about what Hacking calls imperceptible “entities” is
born out of the general episteme he describes. While Bacon might have considered the role of
“evoking devices” both “obvious and unimportant” in pursuit of such knowledge, I would
suggest that, when considered in the context of phenomenology, Bacon’s assertions to the effect
of “it is obvious that air and spirit, and like bodies, which in their entire substance are rare and
subtle, can neither be seen nor touched. Therefore in the investigation of bodies of this kind it is
altogether necessary to resort to reductions” (Bacon 195) place our epistemic sensibilities back
into the realm of the aeolian, where phenomena that retreat from the senses “speak” through their
embodied relations with phenomena that do not.
For phenomenology, in which we find not anti-realist skepticism but an injunction to suspend
our judgements of the real, “the opaque surface of things” becomes, pace Ihde, a hermeneutic
interface communicating an entanglement of matter and causes, much wider than our “field of
view.” Such interfaces are reliant upon our inferences for both epistemic and aesthetic effect.
Theory, from this perspective, is a working of those invariants inferred from our encounters with
the world into symbolic systems (notation) that decrease uncertainty and increase our precision
in predicting and intervening. The dance between these modes of investigation is one in which
neither partner leads for very long and raises two additional points regarding signal and noise
that deserve parsing, and which bear directly upon the discussion at hand.
First, the two terms of the noise/signal binary, at least with respect to scientific inquiry, enjoy
a relationship similar to that between figure and ground as theorized in Gestalt psychology. It is
arguably the certainties we produce by culling figure out of ground (or signal out of noise) that
provide the footholds required for navigating our world. Yet, in our embodied practices, no less
than in our abstract operations of thought, it is at times beneficial to attend to ground rather than
figure. With respect to the investigations of Science, it is our tendency to focus upon the latter at
the expense of the former that can foreclose upon opportunities for discovery. For example, when
Penzias and Wilson powered up the radio telescope at their Bell Labs facility in 1964, they
detected a constant and intractable noise roughly equivalent to 4 degrees Kelvin, which persisted
even after ambient radio emissions and other sources of interference were eradicated. Only after
Arno Penzias contacted Robert H. Dicke at Princeton, who promptly sent him a copy of his and
his colleagues’ still-unpublished paper predicting exactly this temperature as a residue of the Big
Bang, did the scientists realize the implications of what they had initially interpreted as noise. In
1978, Penzias and Wilson were awarded the Nobel Prize for physics for their discovery of the
cosmic background radiation—an honor typically reserved for theoreticians. Not only is this
story a classic example of the dance of theory and practice in the sciences, it also illustrates the
degree to which a myopic focus upon signal can cause the investigator to overlook a much wider
and more complex set of cosmic relations.
Yet, as compelling as this particular version of the signal/noise “gestalt switch” certainly is,
such monumental achievements are few and far between. The reality is that such switches are far
more commonplace in practice. For a more everyday, “normal science” perspective, we might
consider the continuing work by engineers and scientists at the Jet Propulsion Laboratory with
the incoming signals of Voyagers I and II. In this case, not only are researchers continually
tracking the signals sent to and from the two spacecraft, both of which, at the time of writing, are
well beyond the heliosphere and traveling through interstellar space, but they are also carefully
studying the noise affecting these signals. In an interview with Popular Science magazine,
Michael Levesque, the operations manager of JPL’s Deep Space Network, explains simply that,
“[w]hen a craft is moving through something interesting, noise data is going to be what we want
[…] the noise in the signal turns out to be science” (qtd. Bradley 69). Just as we saw with
Thomas Watson’s experiences with the inductive properties of early telephone lines, the
technologies in question cannot be reduced to the transmission and reception of those slivers of
data most germane to human intention. They are resonant conduits for a much wider range of
phenomena in their environment—media for the expansive fields of entangled forces acting upon
material bodies and pointing back to a multiplicity of causes, most of which retreat from human
experience. The signal, as previously defined, is acted upon by forces that affect its perturbations
in observable ways, thus speaking in the smallest of electronic voices to the wider contexts of
magnetic fields, solar storms, and the more mysterious “stuff” populating our universe. The
researchers of JPL have positioned themselves to leverage this fact and have adjusted their field
perception accordingly.
The second point we might consider, following Hacking’s distinction between representing
and intervening, is how the uncertainties of practical activity often themselves become content
for representation, both mathematically and, more importantly, in an embodied sense. Shannon’s
formulation of noise as stochastic processes is largely agnostic to the material/physical/embodied
nature of such processes. We might go so far as to say that Shannon’s representations are
“noisiness stripped of noise.” If we take a sighting of a similar issue in the more critical corners
of the Humanities, where it is imperative to make one’s normative positions plain, we find
suggestions like those made by Johanna Drucker in “Graphical Approaches to the Digital
Humanities,” which underscore the perceived need for a visual semiotics whose aesthetic
qualities should reflect the “constructed,” “interpreted,” and uncertain nature of knowledge
production in the Humanities. “Rethinking graphical display in humanistic terms,” says Drucker,
“would involve designing point-of-view systems, partial knowledge representation, scale shifts,
ambiguity, uncertainty, and observer dependence into our visualizations and interface. These
could be custom-built boutique projects, but it would be better to develop conventions designed
to engage and expose principles of cultural conditions, hegemonies, and power
structures” (Drucker 248).
In many respects, Drucker’s push for graphical conventions in data display that make the
normative assumptions of the designer explicit would seem to comport with the aims and tactics
I am suggesting for worldizing data. There are, however, a number of philosophical distinctions
to be made between my approach and Drucker’s, a full account of which would be beyond the
present scope. What I would insist, however, is that Drucker’s design philosophy, concerned as it
is with characterizing the knowledge of the Humanities as “constructed,” displays a skepticism
with respect to what she has elsewhere termed “mathesis” that is unsuitable for what I believe to
be more fruitful approaches for art-science intervention. I make this observation not to take up a
polemical stance against Drucker’s ideology since, ultimately, I believe her aims to simply be
different from my own. Drucker herself spells out the differences thusly:
From a functionalist point of view, the directive to digital humanists is to learn the basic language of
graphics and use it in accord with the professional guidelines developed by statisticians. From a critical
point of view, however, the message is more skeptical and suggests a radical rethinking of the
epistemological assumptions that the statisticians have bequeathed us. The fault is not with the source,
since it is the borrowing for humanistic projects that is problematic, not the statistical graphics themselves.
They work just fine for statistical matters (Drucker 242).
This distinction is an important one, and it presents us with a problem. Drucker’s critical eye is
directed toward an untenable application of scientific rhetoric to graphical displays of cultural
data. Thus, she affirms the fact that statistical rubrics work for the purposes of scientific
investigation while remaining skeptical of their suitability within the Humanities. What she
offers in their place is an aesthetics of data that foregrounds uncertainty in the visual display.
It is fair to ask what new graphical conventions might be more suited to the Humanities, but,
to my mind at least, it makes little sense to ground such an inquiry in the same old binaries we
have inherited from the science wars. Indeed, this move presents the risk of overlooking features
of scientific notation and its underlying discursive frameworks that already account for noise and
uncertainty in quantifying rubrics like probability functions. Of course, Drucker’s point is that
such probabilistic methods are not part of the Humanities episteme, but this is exactly why a
blurring of boundaries is needed. For art-science, the question is not one of what conventions of
data-display “belong” to one episteme or the other, but what distortions we might introduce into
the frameworks already in place that might yield new perspectives.

3.9 Sonde: Field Perception and Aeolian Theater
Earlier, we saw the degree to which my approach to the radio technology for WaveForm and
Aeolian WiFi was colored by a bias for culling signal out of noise. The aesthetic quality of each
work is largely determined by regular patterns that emerge and are stripped of noise by filtering
processes. Any content of the original signals that does not fall into pre-selected ranges is
entirely lost. For the third and final work in this chapter, entitled Sonde, my aim was to turn the
noise produced by both radio receivers and environment alike to more productive, aesthetic
purposes while simultaneously defamiliarizing its sonic qualities. Rather than construct an
explicitly physical/sculptural system as was done with Aeolian WiFi and WaveForm, the
approach I took for Sonde was overtly computational, auditory, and centered on the production of
what I will call “aeolian theater.”
Sonde comprises a quadraphonic speaker system with a listening area of roughly 100 square
feet, two radio receivers placed at the midpoints between speakers on two facing sides, and a
computer hosting custom procedural audio software. In keeping with the lessons learned in the
production of Aeolian WiFi, the receivers for Sonde are housed in plain black enclosures with
telescopic antennas sprouting from their tops. Each receiver is attached to a modified music
stand, which elevates the device roughly four feet above the ground. In a break from the
Brechtian aesthetics of Aeolian WiFi, in which nearly every component of the electronics (wires,
antennas, circuit boards, etc.) is exposed, the speakers, antennas, and receivers used for Sonde
are the only components of the system that are visible to the viewer/listener. The gallery lighting
for its exhibition is kept relatively low and spotlights the antennas, making these the only visual
signifiers suggesting the underlying phenomenon driving the work.
Sonde’s software operates in two stages: the first stage is an iteration of the same logic used
in both WaveForm and Aeolian WiFi. Once digitized, the signals from the radio receivers are sent
to the audio input of the main program, split into three channels, and passed through a series of
six bandpass filters, three of which provide the peak values of the most prominent frequencies
and three of which reject these frequencies, allowing the noise surrounding them to pass through.
Amplitude variations of the summed noise signal are then extracted by an envelope follower. The
second stage consists of a sonification model that responds in real time to detected peak values
linked to its parameters. The sonification scheme itself is quite different from more familiar
parameter-based strategies like those I have used in other works. In my experiments with EEG
data, for example, the numerical values in the time series are mapped to separately stored MIDI
values equivalent to an E Minor Aeolian scale. The values in the EEG data are constrained to
particular note ranges such that, when cycled, they produce quasi-musical phrases played out by a
very basic FM synthesizer. Strategies like this rely upon a strict association between scalar data
and synthesis parameters like pitch and, in many respects, call to mind the use of false color in
scientific visualization. Thomas Hermann has defined model-based sonification, like that used
for Sonde, in contradistinction from both parameter-based sonification and audification (direct
translation of data into sound) thusly: “Model-Based Sonification mediates between data and
sound by means of a dynamic model. The data neither determine the sound signal (as in
audification) nor features of the sound (as in parameter mapping sonification), but instead they
determine the architecture of a ’dynamic’ model which in turn generates sound” (Hermann et al.
404). Sonde does not comport precisely with Hermann and colleagues’ definition of model-
based sonification, sitting as it does somewhere between model- and parameter-based strategies.
Yet, where Sonde deviates from model-based sonification, it does so because of design
considerations that follow from its responsiveness to a real-time signal rather than a stored
dataset.
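To make the first stage concrete, the following is a minimal, offline sketch in Python (assuming the numpy and scipy libraries); the band frequencies, filter orders, and the stored one-second buffer standing in for Sonde’s live audio stream are hypothetical placeholders rather than the work’s actual parameters:

import numpy as np
from scipy import signal

SR = 48000  # sample rate in Hz

def bandpass(x, lo, hi):
    # Isolate one prominent band of the received signal.
    sos = signal.butter(4, [lo, hi], btype="bandpass", fs=SR, output="sos")
    return signal.sosfilt(sos, x)

def bandstop(x, lo, hi):
    # Reject that band, letting the surrounding noise pass through.
    sos = signal.butter(4, [lo, hi], btype="bandstop", fs=SR, output="sos")
    return signal.sosfilt(sos, x)

def envelope(x, cutoff=30.0):
    # Rectify, then low-pass: a basic envelope follower tracking
    # amplitude variations in the summed noise signal.
    sos = signal.butter(2, cutoff, btype="lowpass", fs=SR, output="sos")
    return signal.sosfilt(sos, np.abs(x))

bands = [(300, 600), (1200, 1800), (4000, 6000)]  # hypothetical bands
x = np.random.randn(SR)  # stand-in for one second of digitized radio input

peaks = [np.abs(bandpass(x, lo, hi)).max() for lo, hi in bands]  # "signal"
noise = sum(bandstop(x, lo, hi) for lo, hi in bands)             # "noise"
noise_env = envelope(noise)  # control data for the second-stage model

The peak values and the noise envelope produced by a chain of this kind are precisely the control data that the second stage consumes.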
The architecture of Sonde’s sonification model is designed to create an immersive sound
environment rather than a discrete sounding object such as a plucked string or stretched drum
head. The goal was to generate an ambient space that augments and intermingles with the
physical exhibition space in a manner which evokes the dynamic and immersive quality of
electromagnetic fields and resists a mode of representation that reduces the underlying
phenomenon to a set of spatiotemporal coordinates mapped upon a screen. The model has two
main compositional elements that vary in amplitude, type, and spatial location according to the
peak values of both noise and signal. The first, driven by the incoming noise, forms the ambient
sound bed that glues the environment’s transient elements together. Amplitude fluctuations
within the noise determine both the density of the ambient bed and its movement within the
sound image. The sonic properties of the ambience were created using a modified version of the
granular synthesis techniques described by Curtis Roads in Microsound (Roads 86-118) and
Andy Farnell in Designing Sound (Farnell 305), which involve storing previously recorded
sound samples in computer memory and procedurally “chopping” these into smaller “grains,”
whose pitch, duration, amplitude, and spatial positioning are parameterized and thus controllable
according to input. For Sonde, the “input” is both the peak levels of the noise and the relative
strength of signal between the two receivers rather than, for example, the notes played upon a
keyboard. The samples used for the synthesis engine are field recordings taken from various
electromagnetic sources. When “chopped up” by granular techniques and subsequently
spatialized, these result in a complex “grain cloud,” creating a sonic environment populated with
knots and braids of drifting static, phase shifts, and subtle gradients of crackle and hum.
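The granular technique itself can be gestured at in code. The sketch below, again in Python with numpy, is a bare-bones approximation of the approach Roads and Farnell describe; the grain durations, density mapping, and pitch range are illustrative assumptions rather than Sonde’s actual values:

import numpy as np

SR = 48000  # sample rate in Hz

def make_grain(source, start, dur_s, amp, pitch=1.0):
    # Cut a segment from a stored field recording, resample it to shift
    # pitch, and shape it with a Hann window to avoid clicks at the edges.
    n = int(dur_s * SR)
    seg = source[start:start + int(n * pitch)]
    idx = np.linspace(0, len(seg) - 1, n)
    return amp * np.interp(idx, np.arange(len(seg)), seg) * np.hanning(n)

def grain_cloud(source, control, out_dur_s=5.0):
    # Scatter grains across an output buffer. "source" is a mono numpy
    # array of a field recording; "control" (0..1) stands in for the peak
    # noise level, governing grain density and loudness.
    out = np.zeros(int(out_dur_s * SR))
    n_grains = int(50 + 450 * control)  # denser cloud for stronger input
    rng = np.random.default_rng()
    for _ in range(n_grains):
        dur = rng.uniform(0.01, 0.08)  # 10-80 ms grains
        start = rng.integers(0, len(source) - int(0.2 * SR))
        pos = rng.integers(0, len(out) - int(dur * SR) - 1)
        g = make_grain(source, start, dur, amp=control,
                       pitch=rng.uniform(0.5, 2.0))
        out[pos:pos + len(g)] += g
    return out

Spatialization is omitted here; in practice each grain would also receive per-speaker gains, scattering the cloud across the quadraphonic image.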
The audio software makes additional use of WiFi, Bluetooth, and other signal sources to
control the appearance, position, pitch and amplitude of procedurally generated
“earcons” (Hermann et al. 339-358). These earcons suggest rather than replicate
the “pings” produced by sonar and are meant to evoke a sense of searching and communication
within the ambient, electromagnetic field. There are additional sounds produced by the model
that are situated somewhere between earcon and purely non-referential sound design, which blur
the line between organic sounds and those of electronic communication, and function to seat the
recognizable sonic elements more “naturally” into the soundscape. Like the ambient bed, these
intermittent sounds are controlled by the character and behavior of incoming signals, but affected
parameters like spatial positioning are much more pronounced, creating more dramatic
movement in the quadraphonic image.
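As a rough illustration of this kind of control mapping, the hypothetical Python fragment below derives a ping-like earcon’s pitch and quadraphonic position from a normalized signal strength; the panning law and all parameter ranges are placeholders of my own rather than Sonde’s:

import numpy as np

SR = 48000  # sample rate in Hz

def quad_pan(azimuth):
    # Crude equal-power gains for four speakers at 45/135/225/315 degrees.
    speakers = np.deg2rad([45.0, 135.0, 225.0, 315.0])
    g = np.clip(np.cos(azimuth - speakers), 0.0, None)
    return g / (np.linalg.norm(g) + 1e-9)

def earcon(strength, azimuth, base_hz=880.0, dur_s=0.25):
    # A brief decaying "ping" whose pitch rises with detected signal
    # strength (0..1) and whose position follows the given azimuth.
    t = np.arange(int(dur_s * SR)) / SR
    ping = np.sin(2 * np.pi * base_hz * (1 + strength) * t) * np.exp(-8 * t)
    return np.outer(quad_pan(azimuth), strength * ping)  # (4, samples)

A detected Bluetooth beacon might, for instance, call earcon(0.6, np.pi / 3), raising the ping’s pitch and placing it between the two speakers nearest that bearing.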
Overall, Sonde is an approach to sonification that uses fictionalized soundscape as a means
for expressing the dynamic character and complexity of electromagnetic weather. The sounds
produced are driven by constructed isomorphisms between the phenomena and the parameters of
the sonification model, but the arbitrary nature of these mappings results in a form of
representation without mimesis—or at least insofar as one could appeal to mimesis, it is one
predicated on behavior rather than appearances. I have dubbed this strategy “aeolian theater” as
a means for linking the sonic narrative to an underlying, physical phenomenon, but the term
“acousmatic” might also be used as a means for underscoring the degree to which the
phenomena themselves withdraw from perception.
“Acousmatic” is a word derived from the name given to the students of Pythagoras, who
would listen to their teacher lecture from behind a curtain. Pythagoras spoke to his pupils in this
eccentric way as a means for de-emphasizing his visual presence and placing focus entirely upon
his voice and words. “Acousmatic” has since come to denote the perception of sound for which
there is no discernible source, and, while the term is broadly applicable to much of the sound
design we hear in film, television, games, and theater, it places us in something of a conceptual
bind when dovetailed with the aeolian. On the one hand, the “source” of an aeolian sound might
be said to reside in whatever instrument produces it, but on the other, such an understanding
strips the aeolian of its identifying character. Thus, in the case of aeolian “music,” the acousmatic
quality is predicated on an invisible musician rather than an invisible instrument, yet it is fair to
question, especially in works like Sonde, whether the musical logic of the aeolian can be said to
reside in this “invisible hand” or, rather, in the mind of the instrument builder.
If we take a sighting from the sciences, we see that this problematic is one that also dogs
relationship between many scientific instruments and the phenomena their builders purport to
make perceivable. It is a problematic residing at the very heart of charges of “theory-ladenness”
and the question it raises is one centering on the degree to which the invariant properties of
physical phenomena can be made commensurable with the invariant impressions of perception,
especially in cases of those instruments Peter Galison has characterized as “logic devices.” Both
“image devices” and “logic devices” have their shortcomings in this regard, argues Galison,
stating,
On the one hand, the image devices provided detail but often were vulnerable to the charge that ‘anything
can happen once’; some unexplained fluke might have occurred in the device that could fool the
experimenter. On the other hand, the electronic logic devices typically produced ample statistics but
remained open to the objection that they recorded only a very partial description of any single subatomic
process. Unlike the image devices, the electronic experiment, being ‘blind,’ might miss some crucial feature
of the phenomenon being described (Galison 437).
Again, for Husserl, whose phenomenology seeks an experiential basis for mathematics,
problematics like the one posed by Galison must be bracketed. What we require, from this
perspective, is a sufficient methodology for investigating sensory impressions—a methodology
that phenomenology is meant to provide. Yet, Galison’s investigation reveals the extent to which
investigations of the imperceivable are bound up in perception.
In some respects, aeolian works like Sonde are well-placed to make explicit the linkage
between phenomenological and instrumental address, but what I have termed “aeolian theater”
leads us into much more penumbral territory in its production of sensory excess, cultural
associations, and scientific imaginaries—all set into motion by the action of imperceivable
phenomena upon the sensitivities of technological instruments. In this regard, the aeolian might
be said to nest within a broader acousmatic aesthetics—one which, as Pierre Schaeffer argues,
“marks the perceptive reality of sound as such, as distinguished from the modes of its production
and transmission” (Schaeffer 77, emphasis added). Like Abraham Moles, who was exploring
similar ideas at almost precisely the same time, Schaeffer was concerned with separating analysis
of the acoustic signal as a physical process from analysis of the subjective, aesthetic experience
of the signal. But the separate approaches were viewed by Schaeffer as being equally predicated
on invariant properties that are knowable and communicable, even while requiring address
through different methodologies:
[A]coustics and acousmatics are not opposed to each other like the objective and the subjective. If the first
approach, starting with physics, must go as far as the "reactions of the subject" and thereby integrate, in the
end, the psychological elements, the second approach must in effect be unaware of the measures and
experiences that are applicable only to the physical object, the "signal" of acousticians. But for all that, its
investigations, turned toward the subject, cannot abandon its claim to an objectivity that is proper to it: If
what it studies were reduced to the changing impressions of each listener, all communication would
become impossible; Pythagoras' disciples would have to give up naming, describing, and understanding
what they were hearing in common; a particular listener would even have to give up understanding himself
from one moment to the next. The question, in this case, would be how to rediscover, through confronting
subjectivities, something several experimenters might agree on (Schaeffer 77).
In a single paragraph we find precedent for both the post-GEV view of objectivity given in
chapter two and the same “self-induced multiple personality disorder” that troubled Haraway in
the mid-80s. But, we might also infer from Schaeffer’s question concerning “confronting
subjectivities” that the analytical path for acousmatic phenomenology is not one that leads
toward fully rationalized physical causes. Acousmatic thinking brackets “real sources” from
those aspects of a phenomenon that are perceived through signal. It places its analytical lens
upon the structure of auditory effects and affects as given to a subject. A coupling of the aeolian
with the acousmatic retrieves physical causes and leverages acousmatic subjectivity to (re)embed
them in a web of auditory meanings.
There is a danger, however, that the word “retrieving” suggests representation of physical
causes as a form of mimesis as we saw earlier in Timothy Morton’s comparison of aeolian
instruments to sensing devices. I would make a distinction, however, between mimesis and what
Ian Hacking calls “likenesses,” the latter of which I would argue is a more appropriate
characterization of the work that concerns us. Ostensibly, there is not much to choose between
the terms; both “mimesis” and “likeness” seem to indicate a fidelity to an original. The definition
of “likeness” given by Hacking in Representing and Intervening, however, frees representation
from such fidelity: “We know likeness and representation,” says Hacking, “even when we cannot
answer, likeness to what? […] Likeness stands alone. It is not a relation. It creates the terms in a
relation” (Hacking 138). Understood thusly, likenesses might be said to communicate more than
just visual appearances; they speak as much to behaviors, to atmospheres, and to intuitions. They
open upon a metaphorical repertoire much more diverse than the rubrics of mimesis would allow.
Many of the world’s phenomena, being under no obligation to communicate to our senses from
behind the veil, nevertheless propagate their signals through matter and forces that respond in
sympathy. The trick for any “director” of aeolian theater lies in arranging these sympathetic
connections such that they evoke “likenesses” that resonate with theoretical understanding
without appealing to either the logos of notation or a strict representational isomorphism as
would be implied by mimesis.

3.10 Embodiment and Abstraction
Ostensibly, Schaeffer’s concern for an experiential framework of auditory communication
seems to gloss over the thornier issues of language. Yet, what seems clear is that such issues as
those that might be raised by a rarefied and detached semiotics would be so much fluff without
sufficient investigation into our intersubjective capacities for apprehending patterns within sonic
structure—a capacity which, for Schaeffer, creates the transcendental conditions necessary for
language. The acousmatic “theatricality” of works like Sonde can be thought of accordingly as
employing “scripts”—in a decidedly non-linguistic sense of the term—as a programmatic means
for arranging blocs of affect. From the perspective I have just outlined, meaning is produced not
only in the abstractions of notation, but also in perceptual experiences that precede systems of
signification. In his Guide to Sound Objects, Michel Chion quotes Schaeffer directly in
addressing this view, stating,
If we postulate […] that musical organization cannot be something that comes entirely from the dictates of
the mind, but that it must rely on the properties of the natural perceptual field of the ear […] Then the
problem of making an experimental music which still has a “meaning” can be stated in new terms […] This
music, rather than being the interplay of “differential structures” within a melodic-harmonic code of
reference (which allows us to go beyond the stage of sound to build up a “musical language”), would be an
architecture constructed on the logic of the material itself, with its meaning in its “internal
proportions” (Chion 93).
Schaeffer’s detachment of meaning from the abstractions of notation is coextensive with the
proto-electronic musical experiments he gave the name “musique concrète,” a form of
experimentation in which the newly-invented tape recorder was valued for its potential to render
sounds themselves into compositional units, producing an embodied mode of musical
arrangement that contrasts starkly with the more mathematical/semiotic bent of symbolic
notation. Agnostic as they are to the morphology of the captured sound, tape recorders, for
Schaeffer, allow the composer to work around the limitations of musical notation, for which
auditory phenomena like timbral noise are far too uncertain (and thus unrepeatable) to be given a
place in the compositional schema. In Schaeffer’s view, musique concrète facilitates arrangement
whose “internal proportions” and their effects upon the listener become the bases for embodied
modes of both measurement and meaning. By engaging directly with microphone, tape recorder,
and splicing kit, the composer can exercise relatively precise control (in near-real time) over a
sonic work’s affective potential—a potential that arguably arises from the technology’s ability to
fix the noises banished from notation into place, rendering them repeatable and mobile.
I can think of no better example of how abstraction (in its broader, mathematical sense)
comes together with the embodied practice of tape splicing than we find in the work of Delia
Derbyshire, whose background as a mathematician allowed her to create wonderfully precise and
disturbing arrangements of musique concrète for the BBC Radiophonic Workshop in the 1960s.
In Derbyshire’s works, we see how, for example, temporal measurements (e.g. the duration of a
note) are transcribed into spatial measurements (the length of tape required)—an experimental
act that emerged from her variational experiments with the material, but which was guided by
her mathematical training and notational thinking. Derbyshire’s works also illustrate the degree
to which such experiments are capable of organizing entirely new blocs of affect, which orient us
toward wholly alien perceptual experiences. Her arrangement of Ron Grainer’s original theme
song for Doctor Who is but one example of the tendency to associate experimental, electronic,
and electro-acoustic music with encounters of the unknown. It should not escape notice,
however, that with Derbyshire’s work these encounters with alien sensibilities are facilitated not
by wholesale “anything goes” improvisation, but by the interplay between notational and
material experimentation—a fact that imbues the alien outcome with a structural familiarity, a
calculating intelligence, and, ultimately, an uncanniness appropriate to Derbyshire’s subject
matter.
For Schaeffer, “abstraction” was understood in exactly the discipline-specific context we find
in Derbyshire, and held in antonymous relation to “concrete”; what was at stake was not abstract,
representational schemas per se, but specifically the limitations musical notation (as traditionally
formulated) places on the composer with respect to the available sonic repertoire—a limitation
Derbyshire, who happily submitted any and all noise to the logic of mathematics and technology,
was able to circumvent by means of the same embodied practices Schaeffer championed.
According to the logic of notation, noise—whether produced by the coughs and shuffling of the
audience or the timbral character of the instrument—is simply an unfortunate fact of auditory
life; it is an inconvenience to be tolerated, but never to be accounted for within the schema. This
limitation is what led Cage to throw open the doors of the concert hall, allowing traffic noise to
intermingle with the codified tones of the composition, and to compose a work entirely out of
notational rests.
Cage’s well-known 4’33” is perhaps where the relationship of noise to notation is made
most explicit. The revolutionary aspect of the work resulted from Cage’s observation that
notational rests do not instruct or dictate what noise is to be inserted into the silence they
represent; they thus open the opportunity for the stochastic nature of the extra-notational
soundscape to intervene. “If you listen to Beethoven or Mozart,” says Cage, “you see they are
always the same. But if you listen to traffic, you see it's always different" (qtd. Tuttle). In the
final balance, what is at issue is the degree to which notation affords control over expression—a
fact that arguably underpins the tight connection between musical notation and the broader
abstractions we find in computational logic and software design. Indeed, it could be argued that
such a remapping of musical language to the logic of software has long since become tacit
in music technology.
Works like Sonde, while constructed with Schaeffer’s phenomenological morphologies of
sound in mind, are executed almost entirely in silico, and thus might be more appropriately
identified with historical antecedents like the granular synthesis work of Iannis Xenakis than
with Cage’s experiments. Furthermore, while it certainly cannot be said that computers have
chased embodiment out of musical practice entirely (indeed, computation has helped facilitate
entirely new gestural paradigms for musical expression), software, particularly the digital signal
processing algorithms of the past few decades, does seem to free sound itself from far
more physical constraints than did previous technologies. For example, unlike the “slices” of
sound made available by the tape recorder in early works of musique concrète, the durations of
which were limited by the structural stability of magnetic and adhesive tape, the length of the
digital sonic grain can be as short as the programmer wishes, dipping well below the so-called
“Gabor threshold,” and duplicated as many times as computational overhead will allow.
The certainty-out-of-abstraction we are afforded by software is near magical in comparison
to the uncertainty and risk of material experiment, in which mylar strips demagnetize, become
entangled in reels, and mechanical parts collect grot and fail. This apparent freedom from the
material constraints of a sounding source deepens the concept of the acousmatic and can lend the
phenomenal result of computational works a complexity that pushes against the boundaries of
sense-making—a quality that can, in turn, create a magical aura around code itself, producing
yet another imaginary to concern us. In Snow Crash, Neal Stephenson makes this imaginary
explicit by drawing a straight line between mystical incantations and programming languages.
Similarly, Wendy Chun has coined the term “sourcery”—a portmanteau of “source code” and
“sorcery”—as a means for capturing the same connection, underscoring and critiquing the
rhetoric around the capabilities of code to instantiate what it describes (Chun, Programmed
Visions 19-54).
Field Perception and Pattern Recognition Machines
We will dive more deeply into the abstractions of software and their relationship to embodied
practices in the next chapter. For now, I would like to close the present discussion by gesturing
toward some theoretical directions that I believe follow from what has been presented thus far
and to firm up the dialectic we have been examining. First, from the perspective of those who
argue for clarity and legibility in data perceptualization, the capacity for a work to overspill or
distract from cognitive tasks presents a risk of the work itself becoming a dark phenomenon,
shifting the object of analysis from the phenomenon under study to the aesthetic qualities of the
medium. Framing this potential in the negative is exactly what drives the perceived disconnect
between the analytic goals of task-oriented sonification and the experiential bent of sound and
installation art. While on the one hand, the potential for these “artistic” works to obfuscate
analytic insights is very real, on the other, we might also view the creative excesses driving them
as holding a concomitant potential for incidentally revealing patterns unaccounted for by
discipline-specific approaches. The distinction between mimetic (or alternatively, symbolic/
notational) representation and Hacking’s notion of “likenesses,” which we examined above,
points us in an interesting direction in this regard. As we have seen, “likenesses” are under no
obligation with respect to fidelity and can be identified in the absence of a referent. In a manner
that is highly consonant with both the aeolian and acousmatic listening, likenesses can evoke
behaviors and “internal proportions” that point our sensibilities toward uncanny and emergent
phenomena, both familiar and unknown.
The primary question raised here is one concerning how strict we should be in mapping the
same rubrics for clarity applied to task-oriented perceptualization onto data-driven, experiential
artworks. Following from this first question is yet another concerning how “useful” these
artworks can possibly be if we loosen those rubrics. There is no simple answer to either question
but, for present purposes, I would suggest two ways of addressing them that relate to what I
would argue to be a productive art-science. As previously observed, works of both Art and
Science are—despite the formal implications of modernist accounts that tempt us down the
slippery slope of “art for art’s sake”—produced within a network of discursive contexts. It stands
to reason that data-driven or hybrid works of art-science are no exception; they simply speak to
and within a discourse that (thankfully) has yet to be fully rationalized. Moreover, because the
penumbral space between orders comprises the dissonances and resonances created by the
intermingling of disciplinary knowledge, it also stands to reason that one’s expected audience is
a constraint that jumps to the forefront of any art-science design methodology. The effectiveness
of most pattern recognition strategies for data displays is dependent upon a wealth of user
background knowledge. When brought out of the computer lab and into the gallery, science
museum, or any other context involving a lay-public, the same strategies that increase our
capacity for pattern-recognition risk falling into the trap of oversimplification, thereby creating
faulty impressions of the science, the phenomena under study, or both—a situation we will
examine more closely in chapter five.
Thus, on the one side of this dialectic (that of the Arts), it is worth questioning whether art-
science practitioners should hold pattern recognition and “insight mining” to be the primary
goals of data perceptualization. Specialists working in the bright centers of data science evince a
tendency to “reduce [the human] to a mere pattern-recognition machine” (Bjørnsten 2015), and
this tendency has, moreover, influenced the popular imaginary, giving rise in the Arts to what
Charles Stankievech has called “forensic aesthetics” (Stankievech, “Exhibit A” 43) and James
Frieze has called, in alternating fashion, “the forensic” and “the diagnostic turn” (Frieze, “Naked
Truth” 148-149). In the arena of data-driven artworks, it is specifically the word “diagnostic,”
whose Greek root means “to distinguish,” that best characterizes the normative stance in most
critiques of perceptualization. Such critiques are heavily influenced by the standards of legibility
and mechanical objectivity that have long been the workhorses of scientific data analysis and
display, like those underpinning Edward Tufte’s concept of the “data-ink ratio” (Tufte, The Visual
Display of Quantitative Information 93) or Marey’s partial photographs. Moreover, following
Stankievech and Frieze, what we might call “the diagnostic imaginary”—an imaginary fed upon
an endless stream of cop dramas and variations upon the Sherlock mythos, almost all of which
deploy the trope of the computer nerd at the console of a beeping, chirping, holographic data
display—reduces even a more general aesthetics of data to the evidentiary rubrics of
discernibility and pattern recognition.
With this in mind, immersive approaches to data display like aeolian theater might be thought
of as bearing more than a passing resemblance to works of participatory, immersive theater like
Punchdrunk’s Sleep No More, a retelling of Macbeth in which the audience intermingles with
the action and is allowed to explore the many floors and rooms of the building in which the play
is staged. Making an explicit connection to the forensic, Colette Gordon observes that “the
reading of the rules in SNM implies rights in the space. Audience members duly proceed as if
issued with a search warrant” (Gordon 4). No one would argue that a particular SNM audience
member should be capable of culling new insights into the murder of King Duncan by being
issued such a warrant. To the contrary, SNM’s participatory and exploratory mode of
spectatorship affords a bracketing of linear and observable causal sequences and encourages
what we might characterize as a dérive throughout the world space of the play. In its address of
“users’” embodied situations, this mode of exploration stands in radical contradistinction from
the task-oriented mantra, “overview first, zoom and filter, then details-on-demand,” suggested by
Shneiderman in “The Eyes Have It” as an effective method for exploration of complex datasets
(Shneiderman 337).
On the other side of this dialectic, however, I would argue that there is no less potential for
associating the invariants derived from noisy, aeolian, or ambient works with features in the
underlying data than there is in associating numerical values with graphical lines and shapes. As
the concept of likenesses suggests, while the members of a lay public might lack the background
knowledge required for discipline-specific, task-oriented pattern recognition, they are more than
capable of organizing the affective flows of even the most unfamiliar experiences into new
perceptions and of sussing out the invariant properties therein. We might go further here and
consider that, even when the potential invariants of “external” phenomena remain dark, there
persists at least one primary invariant: human embodiment. From this perspective and that of
likenesses, the embodied user/viewer/listener is always already unconsciously organizing even
the noisiest of phenomena into structured experience. Aeolian, ambient, and acousmatic
perceptualizations can thus be said to place the logic of the body’s affective responses in-network
with a much wider range of phenomena than is typically perceived.
Of course, the point above might seem a non sequitur to those concerned with task-oriented
perceptualization. Certainly, the fact that humans perpetually organize the noise of the world
into pattern is the very basis of strategies for maximizing pattern recognition. In the field of
visualization, entire tomes have been written with the goal of helping specialists sharpen this
capacity to a fine point. Yet, by making the observation, I mean neither to suggest that our tacit
processes of pattern recognition are enough to ground any old perceptualization scheme, nor that
ambient, data-driven, experiences cannot benefit from strategies that better facilitate pattern
recognition. What I mean to suggest is that data-driven experiences might benefit from
examining the most basic informational relationships between organisms (the user/viewer/
listener in this case) and their complex and noisy environments.
As I observed in the introduction to this work, our environments are increasingly augmented
and reconfigured by technology, the volume and complexity of data afforded by this technology,
and, as we have just seen, by our capacity for flooding our surroundings with imperceivable
phenomena (e.g. electromagnetic radiation, chemicals, bio-engineered organisms, etc.). The vast
networks and entangled relations between data, phenomena, instrument, and perceiver I would
characterize as forming living ecosystems of information, noise, and material contexts that test
the boundaries of sensibility and judgement. As we navigate and experiment within these
ecosystems, it is worth exploring not only the task-specific signals and patterns that emerge from
within them, but also their ambiences, their “textures,” their “moods,” and the possibility for new
blocs of affect, which might be organized into even newer blocs of perception. By doing so, we
might re-capture the enworldened-ness of the original phenomena as we worldize the data
produced from them.
4 Chapter Four: Clouds, Storms, and Space Weather

Most of the historically important functions of the human eye are being supplanted by practices in which
visual images no longer have any reference to the position of an observer in a "real," optically perceived
world. If these images can be said to refer anything, it is to millions of bits of electronic mathematical data.
Increasingly, visuality will be situated on a cybernetic and electromagnetic terrain where abstract visual and
linguistic elements coincide and are consumed, circulated, and exchanged globally (Crary 2).
The slogan of the hacker class is not the workers of the world united, but the workings of the world untied
(Wark [006]).

4.1 Turner and Ikeda: Two Phenomenological Interludes
There is a painting in London’s National Gallery that is capable of altering one’s
understanding of what representational realism can do. It is a large painting dominated by an
angry sky filled with a swirling mass of dark-gray and blue clouds. Slashes of red and orange
sunlight streak through these, illuminating the center of an oncoming storm. The artist has
pushed the horizon very near to the bottom of the canvas, where the masts of a few ships and the
zig-zagging crests of ocean waves can be seen jutting a few inches above the lip of the frame—
itself one of those gilded and ornate affairs in which early to mid-nineteenth century paintings
are commonly displayed. Whether by virtue of its size or some other factor, this particular
painting creates a phenomenon that readers familiar with the works of William Turner might find
unsurprising, but which, if one is lucky enough to experience first-hand, one might find
revelatory.
If the occasion arises, engage the painting by first taking it in at a distance—a viewing tactic
that provides a “read” of the entire composition, and which makes the frame a key part of the
experience. Move slowly forward, and let the eyes scan the two-dimensional surface, searching
for those volumes that provide an optical “porthole” to the illusory space. In other words, search
for the forms that provide the realistic hooks, which pull even the most impressionistically-
rendered portions of a work together into a coherent, three-dimensional scene. Attempt a few
phenomenological variations: squint and slightly cross the eyes, for example, to get a better sense
of the tonal structure.
Finally, relax one’s vision and stand very near the work such that the frame expands as much
as possible into one’s peripheral view. The longer one looks, the less solid the forms (the clouds,
water, and ships) will become. Eventually, and in fits and starts, one’s experience of static
volumes gives way to the roiling of the clouds, the spray of the rain, and the shifting of the light
until, suddenly, one “steps” through the ornate frame and looks up. The embodied sensation of
these actions is very distinct—much as one might experience when scanning the clouds above
one’s head in the midst of an actual, oncoming storm. This is not to say that the image comes to
life in the manner of a film or immersive video game; the feeling is less one of watching a
phenomenon in motion and more one of having one’s perception set in motion.
The encounter described in my interlude above is one I have experienced personally and it
continues to raise interesting questions for me regarding both the nature of immersive encounters
and the various strategies used by the artists who create them. Of course, the feeling of passing
through a virtual window and into pictorial space was neither unfamiliar to me at the time, nor
will it be unfamiliar to anyone who has experienced a particularly engaging video game, virtual
reality demonstration, or 3D film, even if they are rarely so affected by paintings. These more
contemporary media I would characterize, as others have before me, as the technological
descendants of the trompe l’oeil paintings of the Renaissance, which leverage (typically single-
point) linear perspective to create convincing illusions of “extended” interior space or sometimes
sculptural surface details on the walls and ceilings of the architecture in which they are painted.
The experience of the “realist” effects of Turner’s work described above, however, is
something quite apart from a calm stroll through rational, perspectival space. What is rendered in
the painting is not a set of perfect objects fixed statically upon a Cartesian stage, but actions,
behaviors, and forces that are affectively instantiated by the representation. Turner is, of course,
well-known for his depictions of clouds and storms, which he often employed as visual motifs to
represent the emotional tenor of events, places, ships, battles, etc. found in the fore- and mid-
ground of his paintings—a practice that places him squarely in the stylistic company of his
Romantic contemporaries. Paintings like the one above, however, stand out as proto-
impressionist, phenomenological meditations that, while not quite breaking from romantic
symbolism, go further—via the tension they exhibit between active surfaces and pictorial space
—toward the conjuration of the tacit, visual, and proprioceptive memory our bodies hold of light,
form, movement, and scale. For Deleuze, this is precisely the role of Art writ large. “In art and in
painting as in music,” he says, “it is not a matter of reproducing or inventing forms, but of
capturing forces. For this reason no art is figurative […] The task of painting is defined as the
attempt to render visible forces that are not themselves visible” (Deleuze, Logic of Sensation 56).
Turner’s storm is, accordingly, not a fixed volume of clouds, water, and light upon a picture
plane; it is a visual catalyst for the conjuration of invisible forces in the bodies of its viewers.
In 2012, Montreal's DHC/ART Foundation for Contemporary Art exhibited a collection of
media artist Ryoji Ikeda's data-driven artworks. Among the smaller, sculptural objects and still
images were displayed a number of selections from his Datamatics series. These were installed
in a space adjacent to the main gallery and a number of the projections from Datamatics shared
the same room. If one were to ignore the placards, there would seem to be no distinction between
one work and another. Such is arguably the case. While Datamatics comprises a number of
discrete entries, each of these individual works clearly relates to all of the others, forming an
almost seamless albeit modular whole.
Indeed, one could argue that very few of Ikeda's works overall stand out in any significant
way from any of his others. His composition, "A [for 100 Cars]," which he staged in Los Angeles
in 2017 might be one contender, but mostly Ikeda's works are marked by a singular, minimal
aesthetic that is highly influenced by the graphical conventions of diagnostic or analytic data
display: alphanumeric figures, points, lines, and rectangles—rendered in white with the
occasional red or blue accent—crawl downward or across a black background in the manner of
text scrolling down a command line interface. Novel arrangements of line graphs, charts, and
what look like decision trees or mind maps spring to life, animate across the screen, and often
disappear before the viewer has time to make sense of whatever they might represent. Point
clouds appear and begin to rotate; their accompanying annotations give us hints that these are
astronomical data, but soon the points, lines, and text begin to aggregate and overlap, blotting out
the information. In a few of these works, dense grids of numbers appear, swapping the figure/
ground and forming a tapestry of visual noise that only reveals its mathematical content on
extremely close viewing. The accompanying sound composition makes effective and affective
use of a low and thrumming bass that addresses the body more than the ears. These low tones are
punctuated by an occasional high-pitched and neurologically caustic sine tone.
Ikeda's works place one in the midst of a storm of audiovisual signifiers where one will strain
to bring whatever complex relationships and underlying patterns are being expressed under the
control of the diagnostic mind. One will inevitably fail. Aside from the sheer speed at which the
graphics appear, disappear, scroll, blink, and dissolve, there is no legend, no disciplinary
background information, and very little semiotic transparency to aid us in "reading" the data
displayed. To criticize the work for this fact, as practitioners of task-oriented perceptualization
might be tempted to do, would be question-begging in the extreme; aside from the few glimpses
we get at recognizable terms from astrophysics, etc., there is little to indicate that what is being
expressed is, in fact, scientific or analytic data. Perhaps there is a treasure trove of instrumental
data operating behind the scenes. But, perhaps not. Ikeda himself is notoriously silent about this
issue. Indeed, the very ambiguity produced by the "data" in Datamatics seems to be half the
point. The other half I locate in my rumbling chest, my overstimulated optic nerves, and my
reflection upon the degree to which data and our many strategies for displaying it have become
crucial component of what Merleau-Ponty terms our "intentional arcs.”
The word often used to describe the effects produced by works like Ikeda's and Turner’s is
“sublime,” and while I am certain that this term remains capable of performing some conceptual
heavy lifting, I would argue that it has mostly devolved into an unhelpful shorthand meant to
describe a wide variety works that leverage the often terrifying beauty of the great outdoors (in
the case of the Romantics) or the near-infinite perceptual drift we enjoy in our immersive
encounters with light and space art (in the case of late modernism). Such interpretations of the
sublime too often hold its meaning as inferred from a specific category of visual and symbolic
representations (e.g. mountain ranges, seascapes, open skies, or giant blue squares). Our
tendency, in other words, is to make reference to the sublime—to reduce it to a system of signifiers
—rather than affirm it as a state to be evoked.
To my mind, whatever sublimity we might identify in the phenomenological effects I have
described above derives not so much from visual referents as it does from the capacity of these
effects to induce a sense of the infinite as well as a hallucinatory slippage of perception. The
edges of Turner’s frames dissolve. His clouds actually appear to roil. Again, there is the distinct,
embodied sensation of looking “up” into some vast expanse, and the felt time that is marked by
this virtual storm does not quite square with the measurements produced by my smart phone or
wrist watch. Works like Datamatics produce similar effects via wholly different means. They
engulf the viewer/listener with stimuli, generating spaces that speak to infinite mathematical
possibility rather than the surfaces of the world made familiar through experience. In doing so,
they overload our faculties for pattern-recognition even while leveraging the body's potential as
an affective resonator.
As simulation strategies increase in sophistication, due in no small part to advancements in
computational speed, we have become capable of pinning an increasing number of the very same
chaotic forces evoked by Turner to Deleuze's plane of reference. It turns out that, insofar as
complex objects like storms are concerned, the noisiness that works against our proclivities for
the analytic only seems noisy to everyday human sensibility. Our simulations and reconstructions
in silico reveal structure and order in the most chaotic of phenomena and, as is suggested by Jonathan Crary's quotation in the epigraph for this chapter, this order can be and is made
perceivable by means of technological devices that "see" across a multiplicity of physical
registers. Of course, none of these technologically-afforded "views" open a privileged window
onto the reality of the phenomena toward which they are directed but, when aggregated, their
representations assemble vectors, velocities, pressure, and temperature gradients into "objects"
that, borrowing from Bruno Latour, we might characterize as “immutably mobile." The
invariants that emerge from this aggregation of sensibilities afford us a greatly reduced
uncertainty with respect to knowable qualities and quantities of the imperceptible, but also, and
perhaps more importantly, with respect to our future interventions. Turner’s painted
“simulations,” by contrast, instantiate the feel, the motion, and the embodied effects of a storm as
apprehended by an always partial and situated view. They leverage the noise and uncertainty of
both the tempest and our perception in order to address an altogether different mode of knowing
—a mode characterized by "what-it-is-like-ness."
Subtending the forking paths of mathematical abstraction and phenomenal encounter, works
like Ryoji Ikeda's, which are emergent from a somewhat Borgesian practice whereby maps are
constructed as inhabitable territories, illustrate the extent to which our sensibilities are always
under construction. Ikeda's work explores the threshold of these sensibilities, inviting
interpretations of itself as commentary on our data-saturated culture. While this is a perfectly
reasonable reading, for myself, interpretations along these lines miss a decidedly posthuman
aspect of the work—an aspect which speaks on one level to the mutations of vision that so
interest Crary, and on another to how the seemingly exponential advancement of technological
sensing might quickly outstrip our changing, but limited, cognitive faculties. Where the analytic
mind might fail to keep pace with this flood of data, Ikeda seems to suggest that the information
processing capabilities of the raw optic nerve, cochlea, and our bodies' autonomic responses
might take up the slack. Viewed in this way, the graphical and auditory language of aesthetic
perceptualization, such as that we find in Ikeda’s and other data-driven artworks, speaks
simultaneously to the phenomenal aspects of signification, in which noise and complexity often
become a defining quality, and the signifying potential of material agencies, which we leverage to tame uncertainty.

4.2 About this Chapter
The purpose of this chapter is to provide an account of media praxis that tracks with the
radically interleaved relationship between what I have been calling "variational" and "notational"
modes of production. While my focus here will be mostly upon philosophical issues pertaining to
the embodied practices of artists and designers, there are precedents in the history and
philosophy of Science to which I will turn for insight. The primary question that needs
addressing considers notation (i.e. symbolic abstraction) in terms of its various material
instantiations and concerns how the logics of these become disrupted and reconfigured by the
activities of creative practitioners. We have seen one already—Cage’s 4’33"—but there are
others that stand out in the history of Art. In addition to accounts like that given for 4'33", which
place focus upon a quite literal concept of notation, we will investigate what Bachelard terms the
'reification' of theory in the material construction of technological instruments. The second and
overarching question we will address concerns how the practices that disrupt and reconfigure the
logics of notation alter our sensibilities, expanding our repertoires and orienting us toward
previously unperceived (and imperceivable) phenomena. To begin addressing either question
requires at least a partial synoptic look into the relationship between embodied doing and
abstract schemas, which helps lay the groundwork for a post-phenomenological account of
praxis, and a commensurable aesthetics capable of bridging the diagnostic tendencies we
examined in the last chapter with an account of affective or embodied knowledge.
The “post” in “post-phenomenological” signals a modification of phenomenological analysis
for the purposes of better accounting for our engagements with physical phenomena through
abstraction and technological mediation. Another way of saying this is that post-phenomenology
(or at least the variant that concerns us herein) is meant to provide a method for analyzing the
means by which imperceptible phenomena become part of our lived experiences via their
material effects upon the phenomena with which we are already familiar. Post-phenomenology
places itself between a “pure” phenomenology, complete with injunction to bracket naturalistic
explanation, and a philosophy of science that would seem to contradict that very injunction.
As I argued in the previous chapter, the idea that the body’s various signal flows somehow
provide “unmediated” access to the world’s phenomena is deeply suspect. Yet, this only
underlines the central sticking point that both traditional phenomenology and post-
phenomenology are meant to address, viz. the degree to which mental representations (“high-
level” phenomena as distinct from the material networks of neural signals) can be said to form
the basis for our experiences in the world. Husserl’s concept of intentionality, for example,
would hold that mental representations are, indeed, a key component to our experience, while
later approaches like J. J. Gibson’s, Merleau-Ponty’s and Dreyfus’s hold just the opposite. These
controversies are important for the issues at hand since “mental representations” and the objects
of conscious attending are closely related to symbolic representation. If our post-phenomenology
is capable of providing a methodology of praxis in which the creative adoption of symbolic (read
notational) systems as a means for embodied doing becomes, with time and practice, more a
matter of tacit skill than of intellectual focus, then we need at least the beginnings of an account
for how notational tactics become entangled with our embodied, variational repertoires.
What Don Ihde has termed Post-Phenomenology and Techno-science (henceforth PPTS), and
particularly that author's hermeneutics of scientific interfaces, proves extremely helpful for
tracing productive paths between embodied knowledge as it is understood in the Arts and the
instrumental bent of the Sciences. Inroads have also been made in this direction by such thinkers
as Katherine Hayles and Andy Clark, both of whom will later make an appearance. The
considerations raised by these thinkers, as well as those of PPTS, will serve as a means for
rethinking both the uncertainties of practice and the extent to which notational and variational
practices fold into one another in computational art. Here, noise and uncertainty are recast as the
phenomenologically-driven disruptions, such as those we find in anamorphosis, that are the
products of a radical interweaving of abstraction with material constructs. These disruptions, I
will argue, serve to drive not only aesthetics, but also Science, in productive and evocative
directions.
Along the way, my own perceptualizations of ionosphere data, along with Ryoji Ikeda's Datamatics and Ryoichi Kurokawa's data-driven, art-science collaboration Unfold, will serve to
ground these considerations within the context of an aesthetics of worldized data. One of the
more recent entries into this discourse is Orit Halpern's extensive historiography of data
visualization, Beautiful Data, wherein, Halpern identifies those sensibilities that seemingly mend
the split between abstraction and embodiment as the effects of data perceptualization itself,
arguing that they are constitutive of what she terms "affective rationality." Practices of affective
rationality, says Halpern, are those in which sense is "configured to be both affective and logical"
(Halpern 133). Halpern further argues that these reforged sensibilities are a cultural by-product
of our encounters with works like Gyorgy Kepes's kinetic media sculptures and Charles and Ray
Eames's IBM pavilion at the 1964-1965 World's Fair. For Halpern, these works "force us to
consider commonly articulated beliefs often ingrained […] in our theories and histories of media
that separate objectivity, science, and rationality from embodiment, sensory affect, and
emotion" (Halpern 133) "As interior and exterior were reconceived in terms of communication,"
she continues, "the sensual, perceptual, and cognitive became part of a single order, a rational
and algorithmic set of processes or logical patterns that could be studied, built, modulated […] It
is the reformulation of abstraction as material, of perception as cognition" (Halpern 133). For
myself, the very same affective rationality Halpern identifies as emergent from interactive and
data-driven works of high modernism can just as easily be found in many of the representational
practices of earlier centuries—from the Quattrocento through the seventeenth-century archival project of Cassiano dal Pozzo, who sought to visually catalog all of the world's curiosities in his Museo Cartaceo.
Finally, Hayles proves particularly helpful in thinking through our embodied relationships to
abstract notational systems—an account that is needed if we are to square the uncertainty-riddled
abstractions born out of embodied practice and eidetic reduction with theoretical constructions
(e.g. mathematics and other symbolic systems) that increase our certainty and precision. Before
we arrive at these considerations of how abstract knowledge is entangled with embodied
knowledge, however, we require an account of embodied knowledge itself. To that end, linear
perspective presents us with a provocative object lesson. As something of a provocation then, let
us briefly consider where, when, and with whom we should locate its origins.

4.3 The Origin[s] of Linear Perspective
Should we locate the origin and originator of perspective in the early Quattrocento with the
practical experiments of Filippo Brunelleschi, or in the theoretical, geometric constructions found in Alberti's Della Pittura, published only a few years later? Rephrasing the question, does
embodied practice get the credit or does semantic ascent? Ostensibly, this question might seem
overly-reductive, and indeed, we would be justified in answering, "both and for different
reasons." However, the question does encourage us to pause and consider Dewey’s problematic,
with which I began this work, in a much more tangible way. On the one hand, Brunelleschi's
experiments were clearly conducted a number of years prior to Alberti's publication and thus, if
we apply a strictly chronological definition of "origin," Brunelleschi's practice is where we
should look. On the other hand, as Michael Kubovy argues in The Psychology of Perspective in
Renaissance Art, it was Alberti and not Brunelleschi who "invented perspective as a
communicable set of practical procedures that can be used by artists" (Kubovy 26).
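To make tangible what such a "communicable set of practical procedures" encodes, the central invariant of the construction can be restated in modern coordinates. This restatement is offered purely for illustration; it is not Alberti's own formulation. A point in the scene at coordinates (x, y, z), viewed from an eye at the origin through a picture plane erected at distance d, projects to

\[ x' = \frac{d\,x}{z}, \qquad y' = \frac{d\,y}{z}. \]

The familiar procedures of the perspectival workshop, from orthogonals converging on a single vanishing point to the proportional diminution of a tiled pavement, follow from this one relation, and it is precisely this kind of compression into rule that renders the technique repeatable and teachable.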
Thus, for Kubovy, knowledge of perspective only comes into being when it is made broadly
communicable to other people. Such a rubric locates the origins of linear perspective at the point
at which it becomes untethered from the idiosyncrasies and uncertainties of material embodiment
and is rendered certain, repeatable, and communicable in symbolic form. Importantly, this origin
underscores the movement of perspective from embodied practice to notational practice to a
symbolic representation with much deeper, philosophical, cultural, and theological implications.
As Panofsky states, quoting Ernst Cassirer, in Perspective as Symbolic Form, perspective may be
“characterized as […] one of those ‘symbolic forms’ in which ‘spiritual meaning is attached to a
concrete, material sign and intrinsically given to this sign’” (Panofsky 40-41). Certainly, the
effects of schematized perspective upon culture and thought are reason enough to locate its origins
in Alberti’s publication. I would argue, however, that predicating linear perspective's origins
upon the moment its status as embodied practice becomes "legitimized" by its notational
transcription smuggles a normative stance into what is ostensibly a simple characterization of
historical trajectory. For Kubovy, there are two linear perspectives—one invented as a set of
intuitive practices and one as a collection of theoretical constructions that justify the intuitions.
Importantly, whether for Kubovy himself or for the visual science he investigates, it is the latter,
and not the former, that counts as the origin of perspective.
As one might expect, the long answer is far more complicated, but cuts to the heart of the matter
at hand. Kubovy suggests (in a manner that speculates heavily upon Brunelleschi's psychology)
that Brunelleschi kept secret both his methods for perspectival construction and those for raising
domes, not because he was protective of his intellectual property, but because he lacked the
theoretical language to justify his work:
Twice Brunelleschi did not give a theoretical account of a major achievement of his. Perhaps he knew how
to erect the cupola but could not explain why this method was correct, just as he knew how to paint
startlingly realistic and perspectivally correct panels without knowing the rules of the construzione
legittima. When Brunelleschi invented perspective and when he sought the commission for the erection of
the cupola […], he may have invented a trick to paint pictures in perspective without having developed the
underlying geometric theory, and he may have come up with methods to erect a tall cupola without having
a rigorous rationale to offer (Kubovy 26).
Following Michael Polanyi’s characterization of tacit knowledge, we might say that Brunelleschi
“knew” a great deal more than he was able to say or explain. None of this is particularly
surprising and, indeed, describes a schism between doing and reporting that will be familiar to
any of my readers who identify more as practitioners than as theorists. But, from this general
description of Brunelleschi's (possible) block with respect to theory, Kubovy draws a much
more, as he admits, speculative conclusion viz. that Brunelleschi's embarrassment about his
theoretical limitations was what led to his hesitancy to challenge Alberti's claim as the
"originator" of the linear perspective. Kubovy states, “[Brunelleschi] was […] a man deeply
concerned with disguising the nature of his creativity, afraid that he would not be held in high
esteem unless he was thought to possess abstract theoretical knowledge" (Kubovy 26).
If Kubovy's interpretation is correct, it reveals the extent to which the bias that holds
theoretical abstraction as the legitimizing factor for embodied practices penetrates our history of
thinking and doing. Kubovy's bifurcation of perspective into the symbolic/geometric on the one
side and the "intuitive" on the other comports with our instincts about the dualistic relationship
between theory and practice, but the question it raises is whether such distinctions have any
substantive ontological/epistemological force or whether, as Dewey believed, they are residual
from normative positions to which we have simply become habituated. The very same question
arises as often in the Sciences as it does in the context of Arts, as we will soon see. For now, let
us consider that Kubovy's characterization of Brunelleschi's approach as "intuitive" (a
characterization that resonates with Peirce's dismissal of the "legitimacy" of what cannot be
controlled) should at the very least raise an eyebrow or two. What we should avoid, however, are
“chicken and egg” rejoinders that simply swap the positions of abstraction and embodied
practice in their formulation. Certainly, the fact that Brunelleschi’s techniques functioned just as
well in the absence of theoretical constructions should encourage us to reassess the legitimacy of
workaday practices and “intuitions,” but Brunelleschi’s use of his own, custom-made machinery
for the execution or demonstration of these techniques should also prompt us to reassess what we
mean by “theory.” This reassessment becomes particularly necessary as we consider the ever-
expanding role of technology in praxis and the degree to which it extends our reach into
unknown territories.

4.4 Workshop Practices and a Phenomenology of Doing
Let us take seriously Orit Halpern's observation of how works of affective rationality force us
to (re)consider our appeals to embodiment in media theory and turn to the question of what is
meant by “embodied” knowledge. If Brunelleschi was as shy of theory as Kubovy claims, and if
practical activity is shot through with uncertainty, as Dewey would have it, then what accounts
for the functional and aesthetic success of Brunelleschi’s designs? How did his embodied or
“intuitive” approaches enable him to buck Dewey’s axiom? To be clear, my intention here is not
to embark upon a detailed examination of Brunelleschi’s inventions, but only to take a brief
sighting of these and establish one point to be measured against others for the purpose of better
triangulating the embodied knowledge we have been discussing thus far.
Brunelleschi is as famous for his construction of machines to accomplish seemingly
impossible architectural feats as he is for his invention of linear perspective. The question of
whether many of these inventions did in fact originate with Brunelleschi was, according to
Prager and Scaglia in their examination of Brunelleschi’s devices, the subject of art-historical
controversy in the middle of the last century (Prager and Scaglia 2). Yet, whether the
machines sprang from Brunelleschi’s head fully formed or were simply recovered by him from
earlier building traditions is largely beside the point. What is at issue is that, as was true of many
artisans of the time, Brunelleschi seems to have thought through material agency and
experimentation rather than through notational or symbolic systems. Even his famous
demonstration of linear perspective at the Florence Cathedral Baptistery was executed by means
of an extremely clever construction (fig. 7) that married traditional painting techniques with a
peephole, somewhat foreshadowing the camera obscura, and not one, but two reflective surfaces,
which face one another, creating the effect of a dynamic sky behind the highly accurate, perspectival rendering of the baptistery.

Fig. 7. Brunelleschi’s perspective device. “Brunelleschi and the Re-discovery of Linear Perspective,” Maitaly, https://maitaly.wordpress.com/2011/04/28/brunelleschi-and-the-re-discovery-of-linear-perspective/. Accessed 18 Feb 2020.

For us, the question raised by Brunelleschi's practical
approach to devices such as this is, “What does it mean to think through material agency?”
This is where I believe a rehabilitation of Husserl’s eidetic reduction becomes highly useful. I
have been speaking thus far of a phenomenon’s “invariant” properties, which I have indirectly
borrowed from Husserl as a means for referring to those aspects of phenomena that form
perceptual “stabilities.” I should note here, however, that Husserl understood these stabilities to
be an object’s transcendental “essence” (the very etymological root of the term “eidetic”), and
this term has been a matter of debate. For Husserl, “[t]he essence (Eidos) is a new sort of object.
Just as the datum of individual or experiencing intuition is an individual object, so the datum of
eidetic intuition is a pure essence” (Husserl, Ideas 9). Under question is not so much the stronger
definition of essences that holds these “objects” to be fixed and eternal archetypes; rather,
essences are, for Husserl, much more quotidian and are simply understood as those qualities
without which a phenomenon would be indistinguishable as that particular phenomenon among
others. Indeed, Husserl acknowledges that, “[i]ndividual existence of every sort is, quite
universally speaking, ‘contingent.’ It is thus; in respect of its essence it could be
otherwise” (Husserl, Ideas 7). For Husserl, essences are subject to change, but can be observed
to remain stable as phenomena given in the present.
Viewed in the light of Husserl’s broader scientific and mathematical interests, what he termed
“transcendental essences” might just as well be called “recurrent features” or, more precisely,
what John Bigelow, calling upon recurrences, terms “universals” (Bigelow, The Reality of
Numbers 12). Mathematics, at the end of the day, is the study of patterns—the repetition of
features, objects, entities, forces, etc.—that underpin our predictive behaviors and thought
processes and which constitute the invariance of certain properties. These patterns might appear
as what John Locke defined as “primary qualities”—measurable qualities like volume, length,
height, and other geometric properties—or what he termed “secondary qualities,” such as color,
which are phenomenally unstable under differing ambient conditions. Here, more skeptical
readers might suspect that the very instability of secondary qualities troubles the conceptual
foundations for invariants, but this assumes that the distinction between primary and secondary
qualities is a meaningful one. So-called secondary qualities can, in practice, always be recast in
terms of primary qualities. As Bigelow puts it, “if […] you believe material objects do really
have secondary qualities, then it is arguable that these secondary qualities must be identical with
more or less complex combinations of primary qualities. Warmth, for instance, is listed as a
secondary quality, and it is arguable that warmth is nothing over and above the average kinetic
energy of the molecules in the body” (Bigelow 14).
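Bigelow's example admits of a worked form. In the kinetic theory of gases, a standard textbook relation offered here only to concretize the reduction Bigelow describes, the "secondary" quality of warmth resolves into a "primary," measurable magnitude:

\[ \langle E_k \rangle = \tfrac{3}{2}\, k_B T, \]

where \langle E_k \rangle is the average translational kinetic energy of a molecule, k_B is Boltzmann's constant, and T is the absolute temperature. On this account, felt warmth is nothing over and above a statistical pattern of molecular motion, an invariant recoverable by measurement.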
It would be tedious to rehearse the lines of debate that spin out in various directions from the
concept of primary and secondary qualities and, at any rate, such an exercise would take us too
far afield. For our purposes, the importance of primary qualities lies in their relationship to
measurement and the question of whether the abstractions we derive from measurement reflect
real properties of real things, but we are not quite there yet. For now, I will argue that eidetic
reduction is simply a more rigorous, philosophical framing of our much more commonplace
practical engagements with the world—engagements which, importantly, often involve tacit
decisions made independently of or prior to the intentional thought required by eidetic reduction.
Therefore, to get from the more metaphysical analysis of invariant properties and essences to
ordinary embodied knowledge and practice, Husserl’s concept of intentionality—the directedness of mind toward its objects—requires some modification.
Martin Heidegger was the first to make adjustments to Husserl’s phenomenology with his
concept of “being-in-the-world,” and this was followed shortly thereafter by the perceptual
philosophy of Maurice Merleau-Ponty. The particular issue taken by both thinkers, as well as the
one that most concerns us here, is that, in Husserl’s framework, we find the rational mind placed
on one side of perception and “things in themselves” on the other. Understood thusly, Husserl’s
intentionality has an uncomfortable dualism lurking beneath its formulation. For Heidegger, in
contrast, the perceiver is not locked away “in here” like some tiny homunculus analyzing a world
“out there,” but is always already embedded amidst the other things of the world; there is a
continuity of body, mind, action, and object that precedes intentionality and which, upon
reflection, is readily observable in our everyday activities, even if the very act of observation
takes us out of the continuity. Heidegger’s favorite example—the one with which most people
with a philosophical streak are likely to be familiar—is our use of tools. For Heidegger, tools
become “ready-to-hand” during our use of them; that is, they become an extension of the user’s
body—a sort of temporary augmentation in which both tool and activity “withdraw” from the
conscious mind. Intentionality only comes into the picture when a tool becomes “present-at-
hand”—when it breaks, for example, or otherwise becomes a hindrance to fluid action.
Otherwise, “the less we just stare at the hammer-Thing, and the more we seize hold of it and use
it, the more primordial does our relationship to it become, and the more unveiledly is it
encountered as that which it is - as equipment. The hammering itself uncovers the specific
'manipulability' [‘Handlichkeit’] of the hammer” (Heidegger, Being and Time 98).
Heidegger’s examination of tool use as a means for distinguishing between present-at-hand
and ready-to-hand can in many respects be read as an extremely simple and effective, “culled
from life” investigation of those very tensions between the situated-ness of embodied encounters
and their transcriptions onto the Deleuzian plane of reference, which we examined in the
previous chapter. In words that very much resonate with these tensions, Heidegger states,
‘Practical’ behaviour is not ‘atheoretical’ in the sense of ‘sightlessness.’ The way it differs from theoretical
behaviour does not lie simply in the fact that in theoretical behaviour one observes, while in practical
behaviour one acts [gehandelt wird], and that action must employ theoretical cognition if it is not to remain
blind; for the fact that observation is a kind of concern is just as primordial as the fact that action has its
own kind of sight. Theoretical behaviour is just looking, without circumspection (Heidegger, Being and
Time 99).
Heidegger goes further by extending the epistemological reach of ready-to-hand such that it
relates not only to the proximal and situated objects of experience, but also a broader
metaphysics of nature itself,
In equipment that is used, 'Nature' is discovered along with it by that use—the 'Nature' we find in natural
products. Here, however, “Nature" is not to be understood as that which is just present-at-hand, nor as the
power of Nature. The wood is a forest of timber, the mountain a quarry of rock; the river is water-power,
the wind is wind 'in the sails'. As the 'environment' is discovered, the 'Nature' thus discovered is
encountered too. If its kind of Being as ready-to-hand is disregarded, this 'Nature' itself can be discovered
and defined simply in its pure presence-at-hand. But when this happens, the Nature which 'stirs and strives',
which assails us and enthralls us as landscape, remains hidden. The botanist's plants are not the flowers of
the hedgerow; the 'source' which the geographer establishes for a river is not the 'springhead in the
dale’ (Heidegger, Being and Time 100).
Thus, in terms that comport with the concerns we examined in the previous chapter,
Heidegger locates the fullness of nature in the manner in which “nature itself” overspills
conscious attending to be caught within the tacit knowledge produced through practical
engagement. Interestingly, Heidegger has very little to say about the role of the embodied senses
in his account of being-in-the-world. It would be Maurice Merleau-Ponty (henceforth, MP) who
would break more radically with Husserl in such a way and who would explore the body’s
situated and “enfleshed” condition, providing an account of intelligent action that does not lean
upon mental representations. Getting back to the matter at hand, Hubert Dreyfus’s development
of MP’s “intentional arc” will help to shed some light on our consideration of workshop practices
like Brunelleschi’s as well as the phenomeno-epistemological status of his devices. In Dreyfus's
interpretation of MP, human beings attain a “maximal grip” with respect to the world’s
uncertainties not through an internal dialog or a logical sorting of mental tokens, but through
sustained and repeated embodied engagement with objects, spaces, materials, and forces. “As we
cope,” says Dreyfus, “we experience ourselves to be getting a better or worse grip on the
situation. Such coping has satisfaction conditions but it does not have success conditions. Rather,
it has what one might call conditions of improvement. Its satisfaction conditions are normative
rather than descriptive” (Dreyfus 118). The distinction made here between “normative” and
“descriptive” is an important one. For, it loosens whatever constraints of certainty we might be
(and often are) tempted to borrow from notational practices and apply to practical activity. The
space delineated by this loosening, Dreyfus dubs “the space of motivations,” positioning it
between “the space of causes,” and “the space of reasons.” By way of illustration, and calling
upon a situation that very much echoes Heidegger’s “present-at-hand,” Dreyfus states,
Animals, paralinguistic infants, and everyday experts like us all live in this space. Of course, unlike infants
and animals, we can deliberate. When a master has to deliberate in chess or in any skill domain, it’s because
there has been some sort of disturbance that has disrupted his intuitive response. Perhaps the situation is so
unusual that no immediate response is called forth. Or several responses are solicited with equal pull […]
Fortunately, the expert usually does not need to calculate. If he has had enough experience and stays
involved, he will find himself responding in a masterful way before he has time to think […] repertoire—
the ability to respond to subtle differences in the appearance of perhaps hundreds of thousands of situations
—but it requires no conceptual repertoire at all (Dreyfus 118-119).
As compelling as this account of maximal grip might be, it leaves us in something of a
quandary with respect to theoretical knowledge. Surely, it might be argued, theoretical
knowledge must be involved at some point in the process. Here, however, we must be careful
with the term, “theoretical.” Of course, following Polanyi, we might also take care with the term
“knowledge.” As I hinted earlier, for Polanyi, knowledge can be characterized as much by its
ineffability as by its communicability. Polanyi explains this by splitting the acquiring of such
ineffable knowledge into proximal and distal effects. Drawing upon psychological research in
which subjects were given a shock whenever they uttered certain syllables, Polanyi argues that,
very often, we come to “know” a proximal object or phenomenon only through tacitly predicted
effects on a distal object or phenomenon (Polanyi 8-9). The unfortunate subjects of the
psychological experiment in question, Polanyi observes, began avoiding the use of trigger
syllables even though they were unable to report exactly what these syllables were. But, in the
case of explicit rather than tacit learning, we might object, theoretical knowledge must be a
necessary factor. As Dreyfus illustrates in “Intelligence without Representation” (Dreyfus
368-370), learning a skill almost always involves focused intentionality and conscious attending
in its earliest stages as one struggles with novel bodily movements, unfamiliar equipment, or new
concepts. With time, this heightened intentionality dissipates as maximal grip is achieved, giving
way to more intuitive response as one navigates more confidently within the “space of
motivations.” While the process might involve or be initialized by theoretical learning, at no
point along this path is theory, as commonly understood, a necessary component.
What Dreyfus calls “skillful coping,” as indicated by the quotation above, can be attained by
infants and animals alike, as well as woodworkers, ballroom dancers, Humanities scholars, and
scientists. What is key is that this process ratchets up our ability to engage the uncertainties of
the world by expanding our behavioral repertoires. To illustrate the reciprocal relationship
between our perceptions of the world and our attainment of maximal grip, Dreyfus calls upon
“the intentional arc” which, put most simply, denotes, “the way our successful coping continually
enriches the way things in the world show up” (Dreyfus, Skillful Coping 107). Describing the
arc's relationship with maximal grip, Dreyfus elsewhere explains that, "[t]he intentional arc
names the tight connection between the agent and the world, viz. that, as the agent acquires
skills, those skills are 'stored,' not as representations in the mind, but as dispositions to respond to
the solicitations of situations in the world. Maximal grip names the body’s tendency to respond
to these solicitations in such a way as to bring the current situation closer to the agent’s sense of
an optimal gestalt" (Dreyfus, “Intelligence” 367). Thus, from this perspective, as we achieve
maximal grip in whatever context, our perceptions and sensibilities are quite literally broadened
to accommodate an ever-unfolding array of the world’s phenomena. Most importantly for the
issue at hand, MP understood this array as containing a virtual dimension in addition to the
physical:
[…] an adapted behavior demands something more: each point of the concrete expanse currently seen must
possess not only a present localization but also a series of virtual localizations which will situate it with
respect to my body when my body moves, in such a way, for example, that I thrust my left arm without
hesitation into the sleeve which was on my right when the coat was placed in front of me. In other words, it
is not sufficient that fragments of concrete expanse, circumscribed by the limits of my visual field and each
one of which would have a spatial structure of its own, appear one after the other in the course of my
movements. It is necessary that each point of one of these perspectives be put into correspondence and
identified with those which represent it in the others (Merleau-Ponty, Structure of Behavior 89).
In a similar vein, Lyle Massey has characterized MP's concept of virtuality as “the horizon of
possibilities” (Massey, Picturing Space, Displacing Bodies 74).
With maximal grip, the intentional arc, and the virtual layer in mind, let us consider that the
workshop practices that anticipated and influenced the aesthetic, technological, and scientific
upheavals of the sixteenth and seventeenth centuries might not have been guided in any profound
way by theoretical considerations. To say such knowledge was not in play to some extent would
be highly suspect, but it is a reasonable assumption that Brunelleschi’s machines, including his
perspective peephole device, evolved out of embodied practices that held as their primary
motivation the same “satisfaction conditions” that Dreyfus describes. Indeed, this assumption
squares with the account of pre-industrial design practice given by Bryan Lawson in the book
How Designers Think. “In the past,” Lawson says, “many objects have been consistently made
to very sophisticated designs with a similar lack of understanding of the theoretical background.
This procedure is often referred to as ‘blacksmith design’ after the craftsman who traditionally
designed objects as he made them, working to undrawn traditional patterns handed down from
generation to generation” (Lawson 11-12). Samuel Edgerton, in The Mirror, the Window, and the
Telescope, applies a similar analysis directly to the craftsmen of Brunelleschi’s age, stating,
“Prior to Brunelleschi’s demonstrations, mechanical apparatuses of whatever sort were never
constructed from scale drawings. Although pictures were sometimes used by medieval artisan
engineers, they were only to suggest the general purpose of a machine to be constructed […] Any
skilled craftsman of that time who already knew how to build such devices would only glance at
such an image merely as a reminder of his job” (Edgerton 168-169).
What all of this provides us is an account of embodied knowledge and practice that both
strengthens the case for “doing as knowing” and loosens our requirements for certainty. Yet, as
we will see, it goes further in placing uncertainty squarely within its picture. This uncertainty is a
key feature of the space of possibility—one which drives the need for reflection and theoretical
signification. As MP puts it,
Through its “sensory fields” and its whole organization the body is, so to speak, predestined to model itself
on the natural aspects of the world. But as an active body, active insofar as it is capable of gestures, of
expression, and finally of language, it turns back on the world to signify it. As the observation of apraxics
shows, there is in man, superimposed upon actual space with its self-identical points, a “virtual space” in
which the spatial values that a point would receive (for any other position of our corporeal coordinates) are
also recognized. A system of correspondence is established between our spatial situation and that of others,
and each one comes to symbolize all the others" (Merleau-Ponty, The Merleau-Ponty Reader 287).
Such an account of embodied knowledge better positions us to hold Brunelleschi’s peepshow
and Alberti’s window together as equals within the same epistemological frame. Where
Brunelleschi’s knowledge is perhaps best characterized by its emergence from practical
engagement—an engagement in which the intentional arc expands to accommodate the
unfolding of material agencies, relationships, and possibilities—Alberti’s theoretical framing
transcribes the emergent invariants into a system of notational heuristics, locking down their
logical structure, and thus rendering the processes of linear perspective more certain and more
broadly communicable to the public. Subtending both approaches is the uncertain and
probabilistic space of the virtual. Yet, even if we are satisfied with this account of embodied
knowledge, virtuality, and transcription, it still leaves us with a number of lingering questions.
First, if we define embodied knowledge and practice in such a way, its boundary conditions
seem readily apparent, particularly with respect to those entities made part of our intentional arcs
by scientific activity, and which are only perceivable through technological mediation. My
example in the previous chapter, which draws upon our common experience of cell phones, is
particularly relevant here. There is an important difference between knowing how to search for a
signal or make a call with a device and knowing how to produce electromagnetic effects from
first principles. A strictly phenomenological approach would simply re-absorb the behavior back
into its account of skillful coping but, true to the spirit of epoché, would have little to say
about our warrant for claiming knowledge with respect to the invisible forces that make the
behavior possible in the first place. This raises the additional question of where the knowledge of
embodied practice truly lies. Does it reside solely in the practitioner or is it "reified" in the
instrument, as Bachelard suggests? Still another question concerns how the embodied knowledge
of the practitioner relates to the embodied encounters between a cold audience and a given work.
For myself, all of these questions turn upon the degree to which both variational and notational
practices become interleaved within our intentional arcs, creating a horizon of possibility in
which embodied states are imputed a virtual layer and abstract schemas facilitate ever-more
rarefied phenomenological effects.

4.5 Perspective Devices and the Truth of Distortion
As previously stated, if Kubovy's speculative reading of Brunelleschi's theoretical
understanding (or lack thereof) is accurate, we can safely assume that Brunelleschi's machines
and his experiments with linear perspective emerged from workshop practices that fall very
much in line with the account of embodied doing given above. Whether and how theoretical
knowledge enters this picture is important not because such understanding was a necessary
condition for the invention of linear perspective, but because 1) it helped to make the technique
communicable to a wider number of people and 2) its explicitly mathematical schematization led
paradoxically to a material/philosophical challenge to the authority granted linear perspective as
the method for representing visual "truth." This disruption was concomitant with the invention of
anamorphosis—a somewhat short-lived variational practice in which intentional distortions (e.g.
the stretching of figures across a linear or curved axis of the picture plane) could be used as a
means for rendering subjects that appeared as little more than smears when viewed from the primary viewing angle, but which would resolve as three-dimensional forms when viewed from
oblique angles. By “hacking” the rules of perspective in such a way, artists could create
secondary images and, therefore, secondary meanings concealed “beneath” the primary subject
matter of their works.
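The geometry of this "hack" is, in its simplest linear case, surprisingly economical. If a picture surface is to be viewed at a grazing angle \theta rather than frontally, foreshortening compresses the image along the viewing direction by a factor of \sin\theta, so the painter pre-stretches the hidden figure along that axis by the reciprocal:

\[ x_{anamorphic} = \frac{x}{\sin\theta}, \qquad y_{anamorphic} = y. \]

This first-order sketch is my own gloss and ignores the finite viewing distance that fuller constructions, such as Niceron's, take into account. At the intended oblique station point, compression and pre-stretch cancel and the figure resolves; from every other position it remains a smear.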
There were a number of factors that influenced the development of anamorphosis and not all
of them were linked directly to perspective devices. Indeed, the earliest anamorphic principles
can be traced to the intuitive adjustments made by artists to the distortions already emergent from
the application of perspectival principles. Chief among these is the "inverse size-distance law," which, as Kim Veltman notes in “Perspective, Anamorphosis and Vision,” affects
only those objects rendered parallel to the image plane. "These objects," says Veltman, "diminish
without distortion. When objects are not parallel to the picture plane each point on their surface
is at a different distance from this plane with the result that they diminish with
distortion" (Veltman 96). In addition to the inverse size-distance law, both Michael Kubovy
(Kubovy 63) and Lyle Massey (Massey, Picturing Space 43) draw attention to the distortions
present at the periphery of sufficiently large perspectival works.
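Veltman's distinction can be stated compactly; the algebraic gloss is mine, not Veltman's. For an object of size s held parallel to the picture plane at distance z from the eye, with the plane itself at distance d, the projected size is

\[ s' = \frac{d}{z}\, s, \]

so doubling the distance halves the image while preserving its proportions: diminution without distortion. Once the object tilts out of parallel, each of its points lies at a different z and is scaled by a different factor, and distortion enters.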
What is underscored by perspectival distortion and the methods developed to address it is the
degree to which linear perspective is, as Massey puts it, "parasitic on the situated embodied
viewer" (Massey, Picturing Space 19-20). Thus, the corporeal presence of the embodied eye is
both a critical component of the phenomenon and a hindrance to it. "To [perspectivists]," says
Massey, "the embodied view was both the sine qua non of perspective and the most intractable
impediment to any project that aimed to rationalize the viewpoint" (Massey, Picturing Space 20).
As artists added to their repertoires for addressing each perceptual distortion, it became
increasingly obvious that the "ideal viewpoint" represented in linear perspective was largely a
mythical construction related to the status granted to perspective as a dispositive of vision.
Indeed, the potential for confronting this myth inhered within the practices of perspective from
the very start. Leonardo Da Vinci himself was one of the first to emphasize that a painting should
be constructed such that it could accommodate multiple points of view.
Closer to the present, and taking a more psychological tack, Michael Kubovy suggests that
the "ideal viewpoint" is already challenged in our encounters with perspectival images by the
extent to which our perception attempts to correct the distortions we find in them. Dubbing this
phenomenon the "robustness of perspective," Kubovy illustrates how what we have imagined to
be an ideal point is, in fact, a region of the viewing space in which the illusion created by
perspective continues to hold, despite our changes of position. “We have seen that the scene
represented in a painting does not appear to undergo distortions when a spectator moves in front
of it,” says Kubovy, “and that the robustness of perspective implies that the spectator is able to
infer the location of the center of projection of a perspective picture, to compensate for the that
[sic] the picture plane undergoes during the spectator’s movement, and to see the picture as it
would be seen from the center of projection” (Kubovy 80). Nevertheless, the region of space
Kubovy describes has limits outside of which the illusion of perspective simply will not hold.
These were noted by Da Vinci, Piero della Francesca and others in the late fifteenth and early sixteenth centuries as artists began incorporating various intentional distortions in their
paintings as a means for correcting the unintentional distortions created by viewing the works
from oblique points of view.
Very quickly after publication of Alberti’s De Pictura, what begins as a diagrammatic or
geometric theorization of workshop practices eventually becomes reified in the gridded screens
and sighting devices of the draftsman’s aids we find in illustrations published by Albrecht Dürer
alongside his theoretical accounts. These devices began to proliferate in the century and a half
after the publication of De Pictura and “encapsulate,” as Davis Baird would put it (Baird, Thing
Knowledge 68), the theory of linear perspective. Yet, we would do well to consider that the
principles instantiated in many of these devices were not strictly derived from optical theories;
what they “encapsulate” is as much the embodied practices of artisans as it is Renaissance
understandings of vision. Interestingly, however, many of these instruments were most likely
intended more as "pedagogic devices" than as practical tools (Massey, Picturing Space 88). In
words that resonate with our earlier discussion of Heidegger's "present at hand," Massey states
that, “as a shortcut or aid to the artist, [Dürer's instrument] provides little assistance. In fact it is
so labor intensive that it could substantially impede an efficient working habit" (Massey,
Picturing Space 86). Thus, some (but not all) of these devices occupy a somewhat ambiguous
status. On the one hand, many could clearly be, and often were, deployed in professional activity
for the purposes of chasing the noise of the body and the uncertainties of hand-work out of
perspective construction but, on the other hand, the fact that so many of these devices served
pedagogical or illustrative roles speaks to a decidedly discursive/philosophical trajectory.
Important for the present account is that the adaptation of the workshop techniques
developed by Leonardo and others, first into heuristic notational schemes and the language of
geometry, then subsequently into the construction of perspective devices and mechanical aids,
precipitated a radical shift in philosophical understandings of what it means to see in an embodied sense. Tracing this trajectory, Dieter Mersch argues "that from its very beginning anamorphosis
formed a mode of reflection within the mathematics of perspective, as the examples of Leonardo,
Michaelangelo Buonarroti and Dürer […] show. At first it was explored as an experimental
method of balancing out extreme angles of vision, but as early as the sixteenth century
methodical instructions such as Daniele Barbaro’s 1569 Practica della Prospettiva followed,
which used perspective apparatuses to divide the picture up into squares in order to transfer the
strange images, anamorphotic extremes and cones and cylindrical reflections exactly. They
rationalized the phenomenon in the same way as the often broken reflection systems of the
baroque which sought to make the non-visible visible and the hidden representable" (Mersch, Representation and Distortion 31-32).
While most artisans undoubtedly got by quite well with their embodied intuition and a
working knowledge of perspectival rules of thumb, just as most continue to do even today, for
some, the circulation of theoretical accounts of perspective, along with their associated
mechanical aids for perspectival construction, helped to usher in practices that erode the
philosophical links between the newly formulated, ideal Cartesian subject and the ideal
perspectival point of view. Suddenly, there was no one point or region from which to apprehend
the pictorial “truth,” but multiple points from which different facets of a pictorial “reality” would
reveal themselves. Indeed, anamorphic images throw the very notion of representational truths
into question. It should come as no surprise, then, that some of the most dedicated investigations
along these lines came out of the practices and writings of Christian friars.
Two of the earliest written works to mine the depths of anamorphosis were La Perspective
Curieuse (Curious Perspectives) by Minim Father Jean-François Niceron and Perspectiva
Horaria by Niceron’s fellow friar and colleague, Emmanuel Maignan. Interestingly, both men
seem to have been well-acquainted with the philosophy of Descartes, with whom they were
contemporaries, and according to Massey, went to some length to distance perspectival point of
view from Cartesian disembodiment (Massey, Picturing Space 21). Maignan in particular held
anamorphosis to be symbolic of the boundary conditions of embodied perception and the truth of
“spiritual vision,” and made distinctions between the two modes of understanding. However, as
Massey notes, “Maignan opposed Descartes on epistemological rather than theological grounds.
While Descartes grounded his epistemology in a metaphysical conception of human mind,
Maignan relegated metaphysics to the realm of the divine and then developed a sensationalist
account of human knowledge […] He maintained that all knowledge of nature comes from
sensory contact of the world” (Massey, Picturing Space 103).
For Maignan, Niceron, and a number of other artists and thinkers, anamorphosis was the
perfect philosophical instrument through which to ruminate upon the embodied and, above all,
limited character of perception. Anamorphic images force us out of perspectival depth and back
into our own embodied situations and, in doing so, they draw attention to their own artifice via
smears, visual “glitches,” and distorted forms that traverse the two-dimensional image plane. Our
eyes must trace the contours of these distortions over frescoed or oil painted surfaces as we
search for that vantage point from which we will see the hidden image resolve. This reminder of
the materiality of the body and its situated condition is given existential weight in the content of
many anamorphic works of memento mori, such as Hans Holbein the Younger’s well-known
painting, The Ambassadors (fig. 8). This double portrait features two wealthy French diplomats
dressed in extremely formal attire, posed at either side of a shelf cluttered with an assortment of
astronomical and cartographic instruments, including a polyhedral sundial and a medieval
astrolabe. Among the objects on the bottom row of this shelf is a lute—potentially a direct
reference to the lute depicted in Albrecht Dürer’s illustration of his draftsman’s aid from the 1525
edition of his Treatise on Measurement—and a book of mathematics. The scientific instruments
so often depicted in works of memento mori invite interpretation of these works as a warning to
the viewer (often a wealthy and cultured patron) that, while science and technology create a more
precise picture of the world and help us to gain mastery over its uncertainties, the world itself, as
well as our embodied apprehensions of it, are illusory and temporary.
In the Ambassadors, this reminder comes to us in the form of a curious smear that runs
diagonally from the very bottom, near-left, to just below the right bottom edge of the shelf.

Fig. 8. Holbein, Hans the Younger. The Ambassadors, 1533. National Gallery, https://www.nationalgallery.org.uk/paintings/hans-holbein-the-younger-the-ambassadors. Accessed 20 Feb 2020.

If one stands very near and to the right of the painting and looks obliquely down the painting’s surface,
the image of an extremely realistic skull will come into focus. This skull, along with a number of
other signifiers in the work (a few inaccurate details in the scientific instruments, the page to
which the hymn book is turned, and the broken lute string) has long been interpreted as
symbolizing the disharmony of the church during the Tudor period. However, given the
popularity of memento mori paintings throughout the sixteenth and seventeenth centuries, and
given that a number of these employ anamorphosis, it is difficult not to interpret this skull and its
“encryption” within a secondary perspectival view as a reminder that the scientific image and the
instruments that make it possible are, for all the certainty and control they afford us, a product of
enfleshed and ultimately mortal beings that must soon face a much more profound reality beyond
the thresholds of perception and sense making.
Anamorphosis thus stands out, along with the aeolian instruments examined in the last
chapter, as an excellent example of the philosophical instrument. The transcription of the embodied
practices of Quattrocento artisans into first a heuristic geometry, then a mathematical theory, and
finally into the physical design of drafting tools, created the conditions for both the conceptual
linkage of the Cartesian subject to the singular, perspectival point of view and the eventual
eroding of this link. If the artisans who built aeolian instruments understood themselves to be
giving voice to an intelligence in nature that withdraws from our “immediate” sensitivities, artist/
philosophers like Niceron and Maignan adopt a critical skepticism bolstered by the plasticity
inherent to both our embodied perceptions and the representational schemas used to address
them. These theorist/practitioners leveraged the uncertainties of both embodiment and
abstraction to emphasize the materiality and artifice of visual representation and to relegate the
“truth” of nature’s intelligence (in their view a truth that belongs to God) to its own separate
realm. For myself at least, and despite the theological bent of these theorist/practitioners, both
Niceron and Maignan can be characterized as the hackers of their day. In A Hacker Manifesto,
McKenzie Wark predicates the term “hacker” on a practitioner’s facility with abstraction.
“[…]where education teaches what one may produce with an abstraction,” she says, “the
knowledge most useful for the hacker class is of how abstractions are themselves
produced” (Wark [007]).
Following Wark, what Niceron and Maignan helped to facilitate was an “untying” of the
workings of their visual world in both its abstract and embodied dimensions. As I will later
argue, works of data and scientific visualization are prime for similar distortional interventions.
Like aeolian instruments, works of anamorphosis exhibit interesting conceptual resonances with
more modern scientific instrumentation. In the abstractions and conventions of data display, as
much as in the physical construction of instruments, there lies the same potential for injecting
noise into the schema—a noise born of embodied doing coupled with theoretical learning. Such
noise effects a distortion within the more plastic components of rationalizing frameworks
without breaking them entirely, producing a fragmentation of perspectives and spurring a critical
reflection upon whatever cultural/philosophical norms have been lurking therein.

4.6 Fragmented Vision and Distortional Dialectics
Linear perspective was a radical break from the stylistics of the Trecento, which was still, in
the century or so before Brunelleschi, under no small amount of influence from Byzantine and
Romanesque understandings of physical and pictorial space. As Samuel Edgerton points out in
The Renaissance Rediscovery of Linear Perspective, in the years after antiquity, perspective was "vestigial" in the practice of painting, and "really only 'space-signifying' and not 'space-enclosing'" (Edgerton 158). The concept of 'space-signifying' will be particularly important for the
analysis to follow but, for now, suffice it to say that before linear perspective, the "space"
depicted within European painting was constructed from manifold points of view within which
flattened iconographic figures and objects were arranged with solely narrative or descriptive
purposes in mind. For example, the table in Lippo Memmi's Last Supper (fig. 9) is tilted far more
"forward" than is congruent with the rest of the painting's spatial construction. As is standard in
“pre-perspectival” art, few of the orthogonals within the image can be traced to a shared
vanishing point. Despite these perspectival inconsistencies, however, it would be presumptuous to assume that Memmi did not have at least a tacit intuition for the visual effects of linear perspective, even a century before the circulation of Alberti's De Pictura. According to Edgerton, there is evidence that vanishing point systems were well known to even the artisans of ancient Greece (Edgerton, Renaissance Rediscovery 71).

Fig. 9. Memmi, Lippo. The Last Supper. 14th century. Wikimedia Commons, https://commons.wikimedia.org/wiki/File:SG_NT_The_Last_Supper_Lippo_Memmi.JPG. Accessed 20 Feb 2020.
So, why does Memmi's table, or indeed any of the tables rendered thusly in Gothic art, tilt
impossibly toward us as though about to spill its contents onto the laps of those sitting “nearest”
to the picture frame? If we understand the table to be in correct perspective, but not the diners,
then why are each of the figures positioned at our eye line, which itself jumps up and down the
vertical axis of the picture? Why do we not see the tops of the figures' heads? A reasonable
speculation here would be that what was important for Memmi was not the realistic depiction of
a rational space, but the semantic contents of that space. If Memmi had drawn his table in proper
perspective, as did Da Vinci when constructing his own Last Supper approximately a century and
a half later, not only would many of the objects on the table have been occluded, but even the
diners on the opposite side of the table would have been obscured by the figures sitting with their
backs to us (indeed, Da Vinci placed all of his diners on the far side of the table as a solution to
this problem).
Furthermore, if a mostly "top-down" perspective had been used in congruence with that of
the table as rendered, we would see only the flat discs of various halos facing us (save, of course,
in the case of Judas). Thus, just as Picasso and Braque would (re)discover in the early twentieth century, Memmi and his contemporaries found that much could be
communicated through an aggregation of multiple and sometimes contradictory viewpoints. The
purpose of these painters was to tell a story and the picture plane was treated as a surface upon
which to arrange a system of visual signifiers for that purpose. In perspectival images, where the
spatial situations of objects are given as much attention as their denotations, the opportunities for
occlusion abound. Indeed, occlusion itself is a fundamental principle underpinning the
construction of perspectival imagery. In some respects, anamorphosis recovers the fragmentation
of perspectives already present in painting before the Quattrocento. Importantly, however, it does
so not through a flat out negation of such principles of object placement as occlusion, but
through direct manipulation of the placement of the viewer. Through this manipulation, a
different understanding of occlusion is brought into play. As we change our point of view, new
information reveals itself—information which, before our change of perspective, is read as little
more than visual noise.
Yet, stripped of its Cartesian overtones, linear perspective itself has helped produce its own
fragmentation of vision. While it could be argued that the technique no longer enjoys the
conceptual gravity with which it once pulled metaphysicians into its orbit, the effect it continues
to have on visual culture is, in many respects, much more immediate and profound than the
artists of the Quattrocento could have expected. In “Visualisation and Cognition: Drawing
Things Together,” Bruno Latour argues that it was not only the Gutenberg press that instigated a
revolution in the exchange of data and information, but also the representational affordances of
linear perspective. These affordances had non-trivial implications for colonialism in particular,
which resulted from the technique's ability to transcribe the objects of the world into what Latour
calls “immutable mobiles” (Latour, “Visualisation and Cognition” 21). Equipped with linear
perspective, an artist in the colonies could transcribe any object, no matter its scale, and transport
it from its situated context back to the homeland in a collectable form. Unlike Turner’s
dissolving windows onto dynamic phenomena characterized by motion, noise, and the vast
proportions of natural events, the immutable mobile results from the transcription of an object's
three-dimensional properties into the small, portable, and perfectly rational two-dimensional
space of the printed page. Importantly, the term “immutable” speaks to the fact that the
mathematization afforded by linear perspective allows the image-maker to render an object
translatable across its three axes of rotation. In the language I have been using thus far, these
geometric translations are a given object’s invariant properties, abstracted and recorded upon the
picture plane. Drawn in linear perspective, an object may be depicted from any angle but, if
effectively done, its representation is of this object in particular as opposed to any of its type. An
image of a turnip, for example, is no longer an iconographic signifier for turnips in general, but
denotes this turnip in particular.
A perfect example of the effects of the immutable mobile can be found in the archival
project, undertaken by a contemporary of Galileo, that stands out as an early precedent for
today’s visual databases. In the mid-seventeenth century, antiquities dealer and natural
philosopher, Cassiano dal Pozzo, commissioned a great number of European artists to produce
drawings of… everything. Or almost everything. Among the roughly seven thousand drawings in
dal Pozzo’s Museo di Cartaceo (paper museum), we find images of natural specimens like
vegetables, flowers, coral, geological specimens, mammals; the list goes on. As dal Pozzo held
an abiding interest in ornithology, his commissions feature a disproportionate number of birds
and often multiple of the same species, each of which display subtle variations of their
phenotypes (Freedberg, Eye of the Lynx 8). In many respects, dal Pozzo’s paper museum is the
two-dimensional equivalent of the Wunderkammers assembled by wealthy collectors and
amateur scientists who were often as interested in natural curiosities as they were in cultural
artworks.
What is notable about dal Pozzo’s project is that it took full advantage of the by-then widely
understood techniques for naturalistic rendering for the purposes of gathering a vast array of the
world’s objects into one container for “data.” While it might be argued that the cultural effects of
this application of the immutable mobile are not as dire as those Latour might identify in the
contexts of colonization, the effects for science practice and the scientific image were profound.
According to David Freedberg, included among the drawings of birds, coral, and Roman armor,
are some of the very first drawings of phenomena observed under a microscope (Freedberg, Eye
of the Lynx 8). Cassiano’s paper museum was, for its time, an unmatched exercise in rendering
the data of a wide array of natural objects communicable and under near-complete visual control.
It is no coincidence, then, that the scientific society of which both Galileo and dal Pozzo were
members was dubbed the Accademia dei Lincei (Academy of the Lynx-eyed), so-named for that
animal’s incredibly sharp visual acuity. Yet, whereas philosophical understandings of linear
perspective had imagined the all-seeing eye to be a fixed, disembodied, and objective point, dal
Pozzo’s paper museum, as well as the academy itself, established scientific vision as a compound
eye, produced through the aggregation of multiple perspectives taken up by multiple
investigators.
It is important to note that the Museo di Cartaceo’s visual database was facilitated not by ornithologists, botanists, or geologists, but by working artists, many of whom, like Nicolas
Poussin, were (and are) quite well known. One could argue that this was, in fact, a boon for the
project in terms of observational accuracy as much as for cultural appeal. For example, many of
the drawings depict the sorts of morphological outliers (e.g. a gargantuan head of broccoli) that
would otherwise be omitted from or altered in the production of scientific illustrations. David
Freedberg observes,
A fair number of drawings clearly showed anomalous specimens; but was it only the elements of gigantism
and monstrosity they displayed that inspired their production? This question seemed especially pressing in
the case of one of the largest single groups of drawings, namely the spectacular series of drawings of citrus
fruit. Here were oranges, lemons, and citrons in great abundance. Many seemed ordinary enough
specimens; but then there were hybrids and monstrosities, elephantine citrons with phallic growths, wrinkly
and rugged lemons, and oranges with tuberous and tumorous excrescences […] Some did not look like
members of the citrus family at all, while others seemed sooner to belong in a museum of marvels. […] The
general classificatory thrust of these drawings could not have been clearer—and whatever system of
classification there was seemed less encumbered with irrelevance than the rambling natural historical
textbooks […] (Freedberg, The Eye of the Lynx 27-30).
One could argue that what made the liveliness and accuracy in these drawings possible was that
many of the artists commissioned by dal Pozzo were not bound by what Daston and Galison call
“truth to nature”—an understanding of objectivity in which the scientist/illustrator distorts their
representations of a specimen’s observed features such that they are more congruent with a
predetermined abstract schema. Free from such strictures, dal Pozzo’s artists simply drew what
they saw, warts and all.
As Daston and Galison observe, the distortions and biases that crept into scientific
visualization throughout the seventeenth and eighteenth centuries were due in no small part to
early understandings of truth and objectivity, and were ultimately the primary drivers for the
introduction of the kinds of mechanical sensors and measuring devices observed previously in
the work of Etienne-Jules Marey. If we circle back and place “truth to nature” in conversation
with our earlier account of anamorphosis, there arises from this narrative what we might call a
“dialectics of distortion.” Ostensibly, the later devices of mechanical objectivity resonate with
the very same concerns for chasing out the uncertainty and noise of embodied action as those we
saw in the design of draftsman’s aids and perspective machines. Yet, as we have also observed,
anamorphosis and its machines emerged out of the intentional distortions introduced by Da
Vinci, della Francesca and others in order to correct for the unintentional distortions that occur
when a painting is viewed at oblique angles. In short, the distortional practices of anamorphic
artists, which included the use of the machines, were rooted in embodied effects. By contrast, the
distortions of illustration practices guided by “truth to nature” were rooted in an adherence to a
priori assumptions that submitted each specimen to a fixed and eternal archetype within the
natural order. Two “truths” were thus at stake in the interplay between these very different
distortional strategies—one predicated on our uncertain, situated, and enfleshed condition, and
the other on the rigid certainties of mind.

4.7 TEC: Points, Clouds, and Electron Drift
From its very beginnings, there were phenomena that created trouble for the logos of linear
perspective. At stake was the question of whether certain phenomena could be represented
“truthfully” by perspectival means. Clouds in particular were viewed as problematic subjects and
Filippo Brunelleschi went so far as to find a technological workaround to avoid depicting them
by means of his newly-minted representational strategy. This technological “hack” involved
replacing the portion of sky in his painting of the Baptistry with a layer of highly reflective
silver. When a user of his perspective device peeks through the hole in the painting in order to
view the Baptistry's reflection in the attached mirror, what she sees in place of a painted sky is
yet another reflection—one of the real sky and whatever clouds might occupy it. In this respect,
Brunelleschi’s invention foreshadows not only the camera obscura and eventually cinema, but
also the mixed reality experiences of today. In short, reliant as it was upon the solidity of (mostly
rectilinear) objects, Brunelleschi's perspectival technique struggles at best to make the
amorphous and gradient nature of clouds sensible within its internal logic. While clouds can
certainly be shown to obey the visual laws of atmospheric perspective, their lack of clearly
defined edges or scale makes them extremely difficult to submit to perspective’s rationalizing
methods. Even at its very beginnings, linear perspective’s all-seeing eye was not so
encompassing as it ostensibly seemed.
The problem of representing clouds becomes more pronounced when we consider that there
are, in fact, clouds above our heads that are invisible to the naked eye. These clouds are not those
produced by condensation, but are formed from gradient densities of electrons stripped from
gasses in the upper atmosphere as they are ionized by solar radiation. Known as the ionosphere,
the shell of electron densities encompassing the planet is measured by a number of methods,
including satellite sensing and ground-based ionosondes (radio transmitters that bounce signals
off of the ionosphere at a given height). Acquiring accurate measurements and maps of the
ionosphere is important due to the negative effects of high-electron densities on satellite and
radio communications.
For myself, however, the ionosphere became interesting as a subject of artistic practice for
very different reasons. It first caught my notice during the initial research for WaveForm and
then, later, during my investigations for Aeolian WiFi. Particularly fascinating are the number of
conspiracy theories that have evolved from speculations around the “true” nature of the HAARP
(High-frequency Active Auroral Research Program) facility in Alaska. HAARP makes use of an
enormous bank of phased-array antennas for the purposes of ionospheric research and, perhaps due to its sheer size and its alien, science fiction presence marring the pristine white of Alaska’s natural environment, this antenna array has become the center of a number of conspiracy theories that interpret it as a means for “programming” weather or even human minds. Other theories put the array at the very center of global climate change and even the devastating 2010 earthquake in Haiti (Butter et al., Conspiracy Theories 74). For the most part, I have avoided tackling this imaginary directly in my work and so will not go further in this direction here. Nonetheless, the sort of unreflective skepticism it represents stands out as an evocative example of the reach of the scientific imaginary into popular discourse and the need for measured cultural responses.
As stated previously, my suspicion about such imaginaries is that they are produced by a
deep disjuncture between our everyday intuitions and the images we acquire through Science of
wholly imperceptible phenomena. This disjuncture, placed in the context of general political
turmoil and a deep distrust of the military-scientific-industrial complex, all but begs for
conspiracy theories to abound. For myself, what is more fascinating about the HAARP array
(and indeed any scientific sensing array of its scale) is not untethered speculation pertaining to
whatever effects it might have on the weather or human minds, but the actual effects it has on our
capacity to “see” and understand. Such effects should not tempt us into simply placing the
compound eye of Science in that passive and detached position where the Cartesian perspective
once enjoyed epistemological authority. To the contrary, I will repeat that the basic premise
!211
herein is that the devices and representational schemas of Science are deeply entangled with the
material logics of their objects. For myself, however, this view of material complexity only
underscores the partial and plastic nature of technological sensing and hermeneutic display, thus
requiring further, more diversified aggregations of perspective as well as further critical
reflection.
My first set of experiments with ionospheric data are largely congruent with conventional
methods of scientific visualization. The ionospheric data used for these experiments are made freely
available by NASA and were acquired from NASA’s Reverb | ECHO web portal, which, at the
time of writing, is no longer a live site (the repositories once found on ECHO have since been
relocated to NASA’s Archive of Space Geodesy Data: http://cddis.nasa.gov). The dataset
employed for my initial experiments is a recording of total electron content (TEC) in the upper
atmosphere taken by satellite over a twenty-four hour period on January 17th, 2013. These data
have no particular importance, conceptually, and were chosen at random during this early stage
as a means for acclimatizing myself to the structure of the data files. After spending a fair
amount of time analyzing the file format, I wrote an algorithm in Python to strip the longitude
and latitude values along with their associated sensor readings out of the original file. Next, the
script collates these values and produces a new document arranged in a much more user-friendly
structure. The visualization software is written in Processing, a graphics-oriented framework
built upon the Java programming language. My first approach to visualizing the data was to
construct a false color animation of the ion cloud’s movements across an underlying map of the
planet. The primary challenge here was that of finding a suitable world map and aligning it with
the longitude and latitude values recorded in the data. The false color scheme was constructed by
mapping the TEC values at each time slice and spatial interval to an array of color values culled
from a pre-made “.png” graphic. The result of this initial experiment can be viewed at http://brianacantrell.com/ionviz.html.
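To make the shape of this pipeline concrete, what follows is a minimal Python sketch of the two steps just described: stripping and collating the coordinate/reading triples, and performing a false color lookup. The record layout (an epoch marker followed by “lat lon tec” triples) and every name here are illustrative assumptions rather than the actual NASA format; likewise, the original lookup indexed into the pixels of a “.png” gradient in Processing rather than into a Python list.

    # Sketch only: the "EPOCH" marker and "lat lon tec" layout are assumptions,
    # not the actual structure of the NASA files.
    import csv

    def strip_tec_records(in_path, out_path):
        """Collate (epoch, lat, lon, tec) rows into a friendlier CSV."""
        epoch = None
        with open(in_path) as src, open(out_path, "w", newline="") as dst:
            writer = csv.writer(dst)
            writer.writerow(["epoch", "lat", "lon", "tec"])
            for line in src:
                parts = line.split()
                if not parts:
                    continue
                if parts[0] == "EPOCH":            # hypothetical epoch marker
                    epoch = parts[1]
                elif len(parts) == 3 and epoch is not None:
                    lat, lon, tec = map(float, parts)
                    writer.writerow([epoch, lat, lon, tec])

    def false_color(tec, tec_min, tec_max, swatch):
        """Map a TEC value to an (r, g, b) tuple culled from a color swatch."""
        t = (tec - tec_min) / ((tec_max - tec_min) or 1.0)
        return swatch[min(int(t * (len(swatch) - 1)), len(swatch) - 1)]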
It could be argued that scientific false color maps of the type used in the first visual prototype
for TEC_1_17_13 bring us into Don Ihde’s “hermeneutic relations” with both their own physical
properties and with the phenomena to which they refer. Ihde explains such relations thusly,
“Instrument panels remain ‘referential,’ but perceptually they display dials, gauges, or other
‘readable technologies’ into the human-world relationship. And while, referentially, one ‘reads
through’ the artifact, bodily-perceptually, it is what is read” (Ihde, Postphenomenology 43). Ihde
thus accounts for the manner in which imperceptible phenomena might become part of our
intentional arcs by situating hermeneutic technologies, including graphic displays, as
translational, material agencies operating between our senses and the phenomena represented.
Speaking to the use of false color images specifically, Ihde, in Technology and the Lifeworld
states, “To transform a number pattern through the artifice of perceptual patterning enhanced by
false color […] is a productive mode of imaging” (Ihde 186), and elsewhere, “To ‘read’ the
images calls for an explicit awareness of the transforming and translation processes. The false
colors reveal, but reveal in a distinctly mediating mode. I must take account of my perceptions
within a hermeneutic process” (Ihde, Experimental Phenomenology 143).
Yet, while I mostly agree with this account and believe it goes far in helping to square
intentionality with embodied encounters, for myself, it glosses over some important nuances. For
one, it isn’t clear that we are, in fact, always aware of the translation process. False color
visualizations are of course highly interpreted, but for those who rely upon them on a near-daily
basis, particularly researchers engaged in task-oriented contexts like monitoring the state of a
system, the interpretations are deeply dependent upon background knowledge that becomes tacit
along with the development of expertise. Indeed, in The Tacit Dimension, Polanyi indicates the
manner in which such interpretive modes of perception become tacit by appealing to the famous
example of a blind man with a stick (Polanyi 12). As the man probes the ground with his stick,
he is attending to neither stick nor ground directly; neither is he attending to background
knowledge. He attends to the vibrations sent through the stick (proximal effects), which conduct
information about the ground and the potential and/or consequences for action (distal effects). So
long as the distal effects continue to produce satisfactory conditions for further action or lead to
satisfactory inferences, his attending to the proximal effects will become more and more tacit. As
we saw in our earlier account of embodied knowledge and workshop practices, the objects and
information will, with sustained engagement, “withdraw” from the forefront of the mind. I would
argue that the visualizations, readouts, and displays are often subject to a similar
phenomenological trajectory.
On the one hand, false color schemes in scientific visualization remain popular and useful
due to their leveraging of optical contrast for the purposes of making more pronounced the subtle
distinctions in the sampled regions of a phenomenon. It could also be argued, however, that as
effective as they are, their ontological and phenomenological simplicity flattens their referents,
eliding a great deal of physical complexity. Such is the trade-off between legibility and accuracy.
In the case of my early experiments with ionospheric data, this flattening was quite literal. I
noticed that the color itself creates an impression of the data that imputes to them a granularity
and dimensionality that quite simply is not there. By way of analogy, if we were to understand
these datasets as optical camera images, they would be visually equivalent to the highly-
pixelated videos that proliferated online in the mid- to late nineties. Moreover, the two-dimensionality of these images obscures the fact that each sampled point within the data represents not a two-, but a three-dimensional region best described as a vertical column of electrons stretching between the orbital distance of the satellite and the surface of the Earth. After these initial experiments, I began considering how this higher-dimensional aspect of the electron drift might be visualized.
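Stated in conventional terms (an aside drawn from standard definitions rather than from the dataset’s own documentation), the quantity recorded at each sampled point is a line integral through such a column:

    \mathrm{TEC} = \int_{\text{ground}}^{\text{satellite}} n_e(s)\, ds

where n_e(s) is the electron number density along the vertical path, and TEC is conventionally reported in TEC units (1 TECU = 10^16 electrons per square meter). Each “pixel” of the map thus compresses an entire column of three-dimensional structure into a single scalar.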
Next, I set out to construct a tool that would allow the user to switch between a number of
exploratory visualization types. Only one of these was the false color array used for the initial
test, and all of them were mapped to a three dimensional sphere representing the Earth rather
than to a two dimensional map. Nothing particularly intriguing came out of this exercise except
for the use of point clouds to communicate the three dimensional reality of the sampled electron
content regions (fig. 10); however, the “bump map” strategy I employed to add this third
dimension creates a terrain-like topography that gives a false impression of the phenomenon. My
first hypothetical solution to this problem was to create a point cloud in the form of a thick
“shell” around the sphere, using the data as baseline density values to control the number of
points in each region. Another solution would have involved using isosurfaces, which allow the
designer to visualize volumetric regions more faithfully. After a few quick tests of the first
solution, I found that far more vertices would be needed than my processor would be able to
handle. Future attempts in this direction might be made using GPU shaders rather than the
approach taken here. My experiments with the second strategy quite simply did not produce
results that would have been satisfactory for either analytical or aesthetic purposes. They did,
however, point me in a fruitful direction.
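For the record, a minimal sketch of that first, density-driven “shell” strategy might look like the following in Python (my actual tests were written in Processing, and the jitter amounts, band thickness, and scaling constant here are assumptions for illustration):

    # Sketch of the density-driven point "shell": each (lat, lon) cell spawns
    # a number of points proportional to its TEC value, scattered through a
    # thin radial band above a unit sphere. All constants are illustrative.
    import math, random

    def shell_points(samples, tec_max, r_inner=1.05, thickness=0.1, k=50):
        """samples: iterable of (lat_deg, lon_deg, tec). Returns xyz tuples."""
        points = []
        for lat, lon, tec in samples:
            n = int(k * tec / (tec_max or 1.0))    # density drives point count
            for _ in range(n):
                # jitter within the cell and within the radial band
                phi = math.radians(90 - lat + random.uniform(-1, 1))
                theta = math.radians(lon + random.uniform(-1, 1))
                r = r_inner + random.uniform(0, thickness)
                points.append((r * math.sin(phi) * math.cos(theta),
                               r * math.sin(phi) * math.sin(theta),
                               r * math.cos(phi)))
        return points

Even with a modest scaling constant, the vertex count balloons quickly with the number of sampled cells, which is precisely the processing bottleneck noted above.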
The final work in this series of experiments, TEC_1_17_13, is linear rather than interactive
and certain details of its production come closest to the distortional dialectics described above in
relation to anamorphic practices. Like the work that preceded it, the first iteration of TEC
comprised a number of exploratory visualizations. The video itself cuts between different visual
experiments at varying intervals. One visualization in particular, however, stands out as most
successful, particularly when viewed with the accompanying sonification. Indeed, it is the tight
coupling between the sonification and the visualization to which I attribute the work’s relative success.
Fig. 10. Still from ionSandBox, 2016.
The central motif of TEC is a two-dimensional, black and white animation built upon the
same code used to produce the original false color experiment. Rather than map the pixel values
from a pre-made false color swatch, however, I simply mapped the density values to intensities
of gray. In other words, the relative range of intensities from the data are fed to a function that
maps, for example, 0 density to 255 on the RGB scale and the highest density to 0 on the RGB
scale. This strategy resulted in a simplified, black and white variant of the original display. Next,
drawing upon the same motivation that drove my point cloud experiments, I created a noise
function, which produces pseudo-random, correlated values filling a two dimensional array. The
result of this function is multiplied by the values in the data producing a convoluted visualization
that breathes with what reads as static (fig. 11). This strategy is very similar to those employed in perception experiments investigating what is known as “stochastic resonance.” Studies like that conducted by Kosko and Mitaim have shown that the interleaving of noise within visual gratings has the effect of pushing barely discernible visual patterns over a particular neuronal threshold, increasing contrast and legibility, albeit with less significance as noise increases (Kosko and Mitaim, “Stochastic Resonance in Noisy Threshold Neurons” 761). In short, the introduction of noise, rather than obfuscating pattern, can have the effect of making pattern more discernible.

Fig. 11. Still from TEC_1_17_13.
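For illustration, a minimal numpy sketch of this grayscale-plus-noise strategy follows, assuming one time slice of TEC values held in a two-dimensional array; the original animation was written in Processing, and the noise depth and correlation parameters are assumptions:

    # Invert-map TEC to intensities (0 density -> 255, highest -> 0), then
    # multiply by a spatially correlated noise field centered on 1.0.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def gray_with_noise(tec, noise_depth=0.5, correlation=2.0, rng=None):
        rng = rng or np.random.default_rng()
        t = (tec - tec.min()) / (np.ptp(tec) or 1.0)
        gray = 255.0 * (1.0 - t)                   # high density reads dark
        noise = gaussian_filter(rng.standard_normal(tec.shape), correlation)
        noise = 1.0 + noise_depth * noise / (np.abs(noise).max() or 1.0)
        return np.clip(gray * noise, 0, 255).astype(np.uint8)

The Gaussian smoothing stands in for whatever produces the correlation between neighboring noise values; it keeps the static “breathing” rather than flickering independently per pixel.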
Beyond this, the aesthetic effect created by the introduction of noise addresses precisely those
same concerns that drove my experiments with 3D point clouds. The noise effectively helps to
recover the kinetic “life” in the phenomenon under study. In many respects, it also, somewhat
counterintuitively, recovers the three-dimensionality of the phenomenon. More importantly,
however, it works to instantiate, rather than signify electron drift and density. This potential for
instantiation has led Stuart and Nersessian to define such visualizations as “exemplars.” In their
article “Peeking Inside the Black Box,” the authors argue, “Because an exemplar instantiates
features of a target, it provides the user with access to those features. This is epistemologically
relevant because access to features of interest are necessary (if not sufficient) for some kinds of
knowledge and understanding” (Stuart and Nersessian 88). It is notable that all of the effects
introduced by the noise function remain tethered to the structural logic of the data. The direction
in which this strategy gestures, however, is one in which the function is also tethered to the noise
of the phenomenon. Such “exemplars” effectively help to shake the static boundaries of each
sampled point and to make perceptible the uncertainty inherent to the quantum nature of the
phenomenon. It reasserts each sampled point as a gradient region whose delineation into contour
is a product of the analytic mind.
The final visualization strategy for TEC also points us in the direction of particularly knotty
issues that have haunted the history of scientific visualization and continue to crop up today. In
the study of observational rendering, there is an axiomatic saying taught to beginners that
states, “value change equals form change.” The point is that gradations of value in images are,
nearly without exception, visually interpreted as changes in three dimensional form. Yet, despite
the luminocentric tenor of my overarching theme of the penumbral, gradients are a key feature of
any number of phenomena. Using the example of microscopy, James Elkins observes the trouble
inherent in our mapping of gradients according to the habits of visuality, stating, “[T]he
manufacture of contrast is a problem specific to microscopy. In one sense it has been solved: in
optical microscopy it is possible to achieve convincing relief, apparent shadows, apparent bulk,
and density. But with the enhanced contrast comes a trade-off […] What appear to be shadows
may not be; what seems to be a light source may be an artifact of the optics; what looks solid
may be empty medium” (Elkins, Six Stories from the End of Representation 118).
Complicating the situation Elkins outlines is that some sensing devices, like ultraviolet and
infrared detectors, or even the radio transmitters observed in the last chapter, invite optical
interpretations due to their sensitivities to electromagnetic radiation. In fact, such instruments are
often used by NASA and other scientific research agencies to detect the “air glow” created by
ionization in the upper atmosphere. By contrast, instruments like ionosondes “sound” the
densities of free electrons in a manner that is more congruent with sonar. It is reasonable to
assume that similarities observed between phenomena, which might follow from aesthetic similarities between modes of visualization, are mitigated by training and background knowledge.
For the lay public, however, such an assumption would be unreasonable in the extreme. For
myself, this conundrum speaks more directly to the role of pixels and points as signifiers, which I
will table for the present and examine in more depth below.
Despite the deeper issues I have identified with such visualization strategies as that used in
the final iteration of TEC, both their effects and their structure comport with the final approach
taken for the sonification. In my original experiments in this direction, I constructed a
computational synthesis algorithm in Max/MSP that produces a separate sine wave for each
sampled value in the data. The pitch and amplitude of each of these voices is mapped,
respectively, to the density value and global position of each data point. Out of curiosity more so
than out of strategic thinking, I implemented a simple, one-to-one mapping of the values rather
than constructing a table of note values as is common in most sonifications of this sort. Not only
was this strategy extremely computationally expensive, but the effect it created also displayed little
connection to the data or the underlying phenomenon. The second and more fruitful approach
involved an in-depth engagement with the structure of the data. First, I identified those values
that correspond to the samples taken along the prime meridian, or zero longitude. I then created a
graphical overlay on the animation that places a static, vertical stripe along this line of longitude.
In the sonification, only those values that pass “under” this line are sonified. Constructed in this
way, the synthesis model implements only as many voices as correspond to the fixed number of
geographical locations along this strip. Functionally, the line operates as something of a “playhead”: as the animation progresses, each point along the center line is given a corresponding pitch in the sonification, which alters as the values increase with the progression of the sun. Higher pitches are correlated to higher total electron content.
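The synthesis itself runs in real time in Max/MSP, but the playhead mapping can be sketched offline in Python as follows; the pitch range and frame duration are assumptions for illustration:

    # Offline sketch of the "playhead" mapping: only the samples along the
    # chosen meridian are voiced, one sine per latitude, pitched by TEC.
    import numpy as np

    def meridian_voices(frame, lon_index, f_lo=110.0, f_hi=1760.0,
                        sr=44100, dur=0.1):
        """frame: 2D (lat x lon) array of TEC for one time slice.
        Returns dur seconds of audio mixing one sine voice per latitude."""
        column = frame[:, lon_index]               # values "under" the line
        t = (column - column.min()) / (np.ptp(column) or 1.0)
        freqs = f_lo + t * (f_hi - f_lo)           # higher TEC -> higher pitch
        time = np.arange(int(sr * dur)) / sr
        voices = np.sin(2 * np.pi * freqs[:, None] * time)
        return voices.sum(axis=0) / len(freqs)     # mix and normalize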
As a means for further separating the pitch values, I created one additive synthesis generator
as a replacement for the separate sine wave voices. This generator takes the lowest sampled
value as a parameter for establishing a fundamental tone and produces harmonic partials from all
of the other values. Interestingly, the pitch rises with the ionization in the upper atmosphere in a
manner that is quite fluid, producing a crescendo appropriate to a phenomenon produced by the
movements of heavenly bodies. Moreover, the partials and overtones in the sonification bear
further aesthetic resemblance to the relations between areas of high value density and low value
density of the pixels or points within the moving image. Thus, the “clouds” of electrons are given
movement, voice, and visual signifiers that approach, at least as much as the perceptualization
scheme will allow, their dynamic and boundary-less nature. Like the painting by Turner, with
which we began this phenomenological journey, TEC is an attempt to instantiate the
phenomenon under study, only by means of abstract signifiers set into motion by an embodied
approach to computational logic rather than by the embodied gestures of mark-making.

4.8 Clouds, Signs, and Space
Toward the beginning of his semiotic analysis of Correggio’s painted cupolas in A Theory of /Cloud/, Hubert Damisch argues that, in these works, the cloud image becomes a signifier that,
“thanks to the textural effects to which it lends itself, contradicts the very idea of outline and
delineation and through its relative insubstantiality constitutes a negation of the solidity,
permanence, and identity that define shape, in the classic sense of the term” (Damisch 15).
Emphasizing the ambiguous status that clouds occupy with respect to representation, Damisch
argues that /cloud/ (which he places between slashes to emphasize its status as signifier) "does
not play a solely pictorial or decorative role; it also serves to designate a space” (Damisch 17).
For Correggio, the space to be marked was heavenly and infinite—a quality which the
architectonic forms of the cupola only served to contain. The human figures he depicted on the
interior of the Parma Cathedral's dome (fig. 12) spiral upward into the space of a trompe l'oeil
sky and toward the light of God, which shines through the center of a vortex of human bodies.
The figures rest upon and, in places, are intertwined with cottony billows of white cloud, which
themselves "spiral" upward.
Fig. 12. Da Correggio, Antonio. Assumption of the Virgin. 1526-1530. Piazza Duomo
Parma, https://www.piazzaduomoparma.com/en/cattedrale/. Accessed 1 April 2020.
As Damisch argues, the clouds transform our spatial reading of the figures: “Bodies entwined
in clouds defy the laws of gravity and likewise the principles of linear perspective, and they lend
themselves to the most arbitrary of positions, to foreshortenings, deformations, divisions,
magnifications, and fanciful nonsense” (Damisch 15). The bodies, more so than the clouds, act as
a basic unit of spatial measurement, but the space they delineate is one populated with a visual
complexity that complicates the typical perspectival framing of rational space. Having no
invariant property of shape, the clouds in Correggio's cupola cannot be perspectivally tamed, yet,
for Damisch, they serve to mark an infinite space that spills over and outside of the cupola's
geometry. Placed in other visual contexts, Correggio's clouds would likely be read as stones or
lumps of clay rather than as varying densities of condensation—a fact that underscores their
status as signifiers. Accordingly, if we follow a Saussurean approach to signification, their
meaning is produced when they are placed in relation to other signifiers within a wider system of
signification. We should be careful, however, of going too far down this path. It is not simply that
Correggio’s /clouds/ serve the same narrative/semiotic role as the objects on Memmi's table; as
Damisch suggests, in serving to designate a space, Correggio's clouds operate to transform our
perception of the cupola, as well as the space of representation.
In today’s audiovisual aesthetic, one is more likely to encounter the cloud as a swarm of
points or pixels than as Correggio’s billowing masses. Point clouds have become one of the most
ubiquitous tools in the toolkits of many data and scientific visualization practitioners. Each point
is a Damisch /cloud/ in miniature, shrunken to the space of a few pixels, thus invoking the
abstraction of the geometric point, marking an infinitely small coordinate with no dimension.
Point clouds set into motion a system of signs that are imputed a neutrality with respect to their
referents. In many respects, the point has become the base unit for the measurement of any
number of phenomena, from starling murmurations to the joints of an actor on a MoCap stage.
Yet, if one cuts through the modernist stylistics and unpacks what lies beneath the points
themselves, one finds that each point within the cloud designates not an infinitely small vector, but
a region. These might be spatial regions or regions of what Barad terms the “intra-actions”
between material agencies (e.g. electrons, molecules, proteins, forces, etc.). John Bigelow sets
this understanding of a point against “point-instants” as a means for unpacking the long-
standing problem of recurrence in mathematics:
An interval of time is real enough, but an instant of time, a moment, something with no duration whatever,
may well seem a mere abstraction. Similarly, a region of space is real enough, but a point of space,
something with no size at all, may well seem a non-existent but merely imagined limit of a nested sequence
of progressively smaller and smaller finite regions. On such a view, all that exist are regions of space-time,
and there are no point-instants. Hence there can be nothing which does not occupy more than one location
at any given time (Bigelow 21).
Confronting this problematic directly, Bigelow argues that, in order to ground mathematics in the
relations and properties of things and forces in the physical world, we must, even if it is to an
admittedly limited extent, be able to locate objects and their properties within identifiable
regions, thereby enabling us to make claim to the recurrences and universals they represent
(Bigelow 18-27). There are, as Bigelow admits, many problems with the idea of recurrence
within regions. Center of gravity, for example, is often depicted as a point, but is not a “thing” in
the traditional sense. It is a property of matter, which is to say, a property of relations between
things, and thus the “point” implied by the term “center” is not a locatable thing.
With these problems in mind, we can see that the idea of each point in a point cloud as
referring to a “thing out there”—a thing which is represented by an infinitely small and abstract
vector with no dimensionality—is similarly fraught. Total electron content, for example, is not a thing, but a complex and ephemeral phenomenon produced from the interactions between things. There are, however, given that we loosen our rubric for what the point signifies, identifiable regions
in which the recurrences that emerge from these interactions might be given a quantity and a
signifying marker. Following this logic, each point in a point cloud can, like Correggio’s clouds,
be characterized as signifying neither “things” nor non-existent point-instants, but spatial
regions. Importantly, in the case of point clouds, it is how the space between regions is
articulated that lends the representation both its particular aesthetic and its usefulness.
Accordingly, point clouds, when set into motion algorithmically, might best be described as
signifying and instantiating the rules between sets of spatial relations. Just as with the notational
rest in music, it is not the points, but the space between them and how it is expressed that creates
the structure. Given that, in the work of artists like Ikeda and Kurokawa, these markers and their
interrelations are computationally articulated within the representational logic of linear
perspective, just like Damisch’s /cloud/s, they signify as much or more about the space of
representation itself as they do the properties of natural phenomena.

4.9 Notation, Variation, and the Distortional Attitude
The works of both Ryoji Ikeda, which we examined in the interlude at the start of this
chapter, and media artist, Ryoichi Kurokawa, make extensive use of point clouds within their
respective design languages. Indeed, point clouds have become a key term in the analytic
vernacular of a great number of scientific and aesthetic visualizations, along with novel
variations on scatter plots, bar charts, mind maps, and the ubiquitous signifiers for networks
examined by Patrick Jagoda in his book, Network Aesthetics. Whether these works are influenced
by, or have specific designs toward, scientific visualization, the particular visual rhetoric they
deploy often intersects directly with the concerns of computational aesthetics. Most germane are
those aspects of the latter that address “questions such as how to define numerically determined
rules for the analysis, codification, and prediction of the world; how to account for digitally
interfaced modes of sensing; and how to theorize new spatio-temporally distributed and
networked prospects for cognition” (Fazi & Fuller, “Computational Aesthetics” 283).
Alexander Galloway, in The Interface Effect, provides what is perhaps the best description of
the meeting place of computational and informational aesthetics when he observes that data have
no necessary visual form. “Data,” claims Galloway, “reduced to their purest form of
mathematical values, exist first and foremost as number and, as number, data’s primary mode of
existence is not a visual one. Thus to say ‘no necessary’ means that any visualization of data
requires a contingent leap from the mode of the mathematical to the mode of the
visual” (Galloway 82-83). Data visualization is thus understood as “first and foremost a
visualization of the conversion rules themselves, and only secondarily a visualization of the raw
data… any visualization of data must invent an artificial set of translation rules that convert
abstract number to semiotic sign” (Galloway 83). Accepting this statement as a fair
characterization, how are we to understand the epistemic/aesthetic nature of these rules and their
relationship to the phenomena to which they refer? Kurokawa’s work is of particular interest here
due to its sometimes more explicit art-science motivations.
In 2016, Kurokawa partnered with astrophysicist Vincent Minier of the appropriately-named
Institute of Research into the Fundamental Laws of the Universe to produce a media experience,
entitled Unfold, which is driven by astrophysics data. It is a fair assumption that the “conversion
rules” used by Kurokawa to make artwork out of the data collected from instrumental readings of
star formation were a matter of some discussion with his scientific collaborator. Yet, whether or
not the scientists involved in the design of Unfold’s visualization schema were satisfied with the
approach taken, it would be difficult to argue that its “conversion rules” are, for lay audiences at
least, as apparent as Galloway’s rubric would have it.
As one might expect, Unfold employs the ubiquitous point cloud among its other minimalist
signifiers for quantitative display, placing these in context with optical and radio telescope
imagery. The resulting animations “unfold” on an array of stacked LED screens that arc above
the viewers’ heads. While the series of information displays, sensor readings and 3D imagery are
designed in congruence with the analytical/forensic aesthetic previously discussed, these burst
into life, flicker, fizzle, rotate, and fade into one another in a choreographed sequence that resists
any attempt at task-oriented insight mining. For myself, however, the sonification is the data-
driven component of the work that raises the most curiosity. Aesthetically and structurally, the
sound we hear bears more resemblance to Kurokawa’s other works than it does to conventional
sonifications of astrophysics data—a fact that thwarts analysis of both the data and the
translational schema employed.
Despite all of this, Unfold speaks to something more than the logic of mapping schemas or
the discrete tasks of insight mining. Moreover, the effect it achieves is one quite different from
Ikeda’s audiovisual storm of information. Like Correggio’s cupola, Unfold addresses space on no
less than three levels. There is, of course, the fact that the data in question quite literally concern
outer space phenomena; but, Unfold also addresses the embodied space of the viewer/listener as
well as its own space of signification through the expedient of what Laura Marks terms “haptic
visuality.” In The Skin of the Film, Marks explains,
Haptic visuality is distinguished from optical visuality, which sees things from enough distance to perceive
them as distinct forms in deep space: in other words, how we usually conceive of vision. Optical visuality
depends on a separation between the viewing subject and the object. Haptic looking tends to move over the
surface of its object rather than to plunge into illusionistic depth, not to distinguish form so much as to
discern texture (Marks 162).
Kurokawa’s treatment of imagery depicting such extraterrestrial objects as solar plasma jets and
stellar nurseries addresses precisely this mode of looking. As the piece progresses, these objects
often fill the surface of every screen. Our eyes become saturated with detailed topographies of
gas clouds and the surfaces of stars. Only when the “camera” retracts from these surfaces do the
objects to which the surfaces belong begin receding into perspectival space. These moments are
punctuated by rapid successions of the previously mentioned point clouds and other
diagrammatic devices, but the flicker and glitch of these graphics rake across our eyes as they
rotate at various speeds, leaving tracers and after-images etched into our retinae.
Thus, it is not only Kurokawa’s treatment of the phenomena in his source material that
comports with Marks’s haptics of vision, but also his treatment of the apparatus. Like the
anamorphic artists we have already examined, Kurokawa places our attention on the surface of
the two dimensional image plane as much as the surface of his subjects. Drawing attention to this
dual address of the materiality of both data and mediation, Vincent Minier, Kurokawa’s
scientist/collaborator, observes in a promotional interview for Unfold, “What is very interesting
is the way [Kurokawa has] managed to give some kind of life to the pixels… We have the image
pixel, the data pixel…” (FACT). The effect of this pixel/data haptics is bolstered by the manner
in which the screens are conjoined vertically to wrap above the viewers’ heads. This heightens
the immersion of the work, encouraging the very same tilt of the neck one experiences, albeit
virtually, in encounters with Turner’s storm. Within this immersive informational,
representational, and embodied space, the constant glitch, the distortions, and the roll of
horizontal lines speaks to the material agencies that are interleaved with technoscientific vision
at every level.
For myself, Kurokawa’s haptic approach to his subject and his apparatuses raises deeper
questions around computational practice. The most fundamental is this: Is programming a
variational (embodied) or notational (semiotic) practice? At first blush, we have a situation in
which notation appears to play a double role within the creative practices of computational
artists. There is, at one register, an engagement with the syntactical structures of programming
itself, and, at another, the stringing together of visual signifiers for the purposes of (or at least the rhetorical deployment of) analytic data display. Troubling this picture, however, Katherine Hayles,
in How We Became Posthuman, invokes a middle ground between embodiment and signification
with her concept of the “flickering signifier,” which she provides as an alternative to Lacan’s
“floating signifier”:
"Language is not a code," Lacan asserted, because he wanted to deny one-to-one correspondence between
the signifier and the signified. In word processing, however, language is a code. The relation between
machine and compiler languages is specified by a coding arrangement, as is the relation of the compiler
language to the programming commands that the user manipulates. Through these multiple transformations,
some quantity is conserved, but it is not the mechanical energy implicit in a system of levers or the
molecular energy of a thermodynamical system. Rather it is the informational structure that emerges from
the interplay between pattern and randomness (Hayles 30).
For Hayles, the flickering signifier is characterized by “the compounding of signal with
materiality,” (Hayles 29) and she offers the term, “incorporation,” as distinct from “inscription”
as the mechanism by which it comes about. The latter, says Hayles, is “normalized and abstract,
in the sense that it is usually considered as a system of signs operating independently of any
particular manifestation” (Hayles 198). In contrast, “[a]n incorporating practice such as a good-
bye wave cannot be separated from its embodied medium, for it exists as such only when it is
instantiated in a particular hand making a particular kind of gesture” (Hayles 199). Thus, for
Hayles, practices like typing and, presumably, coding have incorporative and inscriptive facets.
It is worth keeping in mind that, at the end of the day, Hayles is a writer. The “material” with which she is most familiar is words. Congruently, flickering signifiers are predicated on what she
calls a “chain” of signification—a chain that stretches down from the symbols on the screen,
through programmatic scripts, and eventually into machine code. Here, however, I would evoke
Friedrich Kittler’s “There is No Software,” in which he reminds us that, “[a]ll code operations,
despite such metaphoric faculties as call or return, come down to absolutely local string
manipulations, that is, I am afraid, to signifiers of voltage differences […] The so-called
philosophy of the so-called computer community tends systematically to obscure hardware with
software, electronic signifiers with interfaces between formal and everyday languages” (Kittler,
Truth 150).
Thus, while the chain of signification to which Hayles refers does indeed stretch down into
machine code, the chain itself, as Kittler suggests, relies heavily upon bridging the meaning-
making of everyday speech with the material/mathematical requirements of functional hardware.
From the Kittlerian perspective, the flickering signifier is much more physically grounded than it
would at first seem. Indeed, M. Beatrice Fazi and Matthew Fuller seem to suggest a similar
weakening of the linguistic/constructionist over-coding of programming practices when they
state that they “understand the construction of computational aesthetics as a process that is ‘internal’ to the notion of computation, and should therefore not be approached from any
particular disciplinary ground” (Fazi & Fuller, “Computational Aesthetics” 284).
For myself, computational media practices, which determine the structure of Hayles’s
material/semiotic chain according to whatever perceptual or aesthetic results they produce, are best
characterized in phenomenological terms that resonate with Brunelleschi’s workshop practices.
Decisions such as using a nested “for loop” or a specific data structure to solve a particular coding problem are often solicited by the same “satisfaction conditions” that pull
embodied action into equilibrium with environmental conditions. The common practice of
cutting and pasting code from user help forums like Stack Overflow speaks to the extent to
which software practices more often resemble building with Lego bricks than they do writing.
Using such tactics, even the greenest of novices are capable of constructing reasonably
sophisticated software, cobbling together blocks of code, the functions of which the programmer
might barely understand. With a modicum of syntactic understanding of how these blocks should
be implemented, much of the underlying theory and structure can be black-boxed but still be
used to meet the satisfaction conditions. The family of graphical, node-based programming
languages like Max/MSP, which I use in the construction of many of my own works, only
deepens this homology.
None of this is to say that wholly unreflective approaches to computational art are preferable
to theoretical or critical approaches; quite the contrary. The work accomplished by Wendy Chun
and other theorists of the computational oriented toward unpacking the ideological biases lurking
within coding practices is, I would argue, highly needed. The case I am making is, therefore, not
for a turn away from signification, but more of a suggestion that the “over-coding” of coding, or
indeed of writing itself, as primarily linguistic/semiotic overlooks the phenomenological
mechanisms that drive many notational practices. Indeed, examples of the sort of distortional
approaches I have observed in anamorphosis abound in modern and postmodern literature. The
surrealist game of Exquisite Corpse and William Burroughs’s cut-up method are only two. Closer
to the present, Mark Z. Danielewski’s House of Leaves stands out as the most evocative example
of how signification and embodiment can be bound up together to such an extent that each term
overspills its primary function, gesturing toward something wholly alien to experience. To read
that work is to get lost, like the protagonist in the eponymous house, in a representational space
that mutates constantly as footnotes stretch on for pages and words wrap up and around the
margins, prompting continual shifts in one’s readerly orientation, both physical and narrational.
The house of leaves in question is thus both the house of the fiction as well as the very book one
is holding.
What I am suggesting is that, on closer inspection, variational and notational practices share
important features that go beyond (or emerge from a synthesis of) the incorporative vs.
inscriptive dialectic. Chief among these is a quality of mutability, which imputes to distortional
practices their transformational potential. As we saw in the case of anamorphosis, this mutability
is often as extant within the most mathematical of systems as it is in material/phenomenological
constructions. In each case, the logic of organization is subject to disruption and transformation
with a deft and well-timed “What if?” Gaston Bachelard gives an example of precisely this sort
of disruption in his arguments for how seemingly contradictory systems of abstraction co-exist
within the scientific image. For Bachelard, formal systems like Euclidian geometry are extended,
rather than overturned, through structural changes to their internal logics.
Non-Euclidean geometry emerges by either relaxing the metric requirement of Euclidean
geometry, or by replacing the parallel postulate. “Non-Euclidean geometry,” says Bachelard in
The New Scientific Spirit, “was not invented in order to contradict Euclidean geometry. It is more
in the nature of an adjunct, which makes possible an extension of the idea of geometry to its
logical conclusion, subsuming Euclidean and non-Euclidean alike in an overarching ‘pan-
geometry’” (Bachelard 8). Bachelard extends this theme to a number of other structural changes
to abstract schemas in the Sciences, including Newtonian and non-Newtonian mechanics and
Maxwellian and non-Maxwellian physics. “Science,” Bachelard says, “is like a half-renovated
city, wherein the new (the non-Euclidean, say) stands side by side with the old (the Euclidean).
Anyone who thinks that such diametrically opposed idioms such as these are mere means of
expression, more or less convenient systems of notation, attaches precious little importance to
the proliferation of new scientific tongues” (Bachelard 7).
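A standard illustration—my gloss here, not Bachelard’s own example—makes the “adjunct” character of this extension concrete. On the surface of a sphere, the Euclidean theorem that a triangle’s interior angles sum to π is not contradicted but generalized:

\alpha + \beta + \gamma = \pi \quad \text{(Euclidean plane)}
\alpha + \beta + \gamma = \pi + \frac{A}{R^{2}} \quad \text{(sphere of radius } R\text{)}

where A is the triangle’s area. As R grows without bound, the spherical case collapses back into the planar one—both subsumed, as Bachelard has it, within a wider “pan-geometry.”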
While both the Sciences and the Arts have seen disruptions to their respective abstract
schemas, it could be argued that the Arts, operating under the slogan, “do whatever,” are much
more susceptible to these influences. It could also be argued that they are proportionately
susceptible to the gravity of institutional forces. If we take a sighting from music criticism,
Jacques Attali, in Noise: The Political Economy of Music, gives a compelling argument for how
distortional practices, which he dubs “composing,” disrupt the cycles of what he calls
“repetition,” defined as the recording and circulation of music as an exchange commodity.
Composing, says Attali, “is a foreshadowing of structural mutations, and farther down the road
of the emergence of a radically new meaning for labor, as well as new relations among people
and between men and commodities” (Attali 135). Important for our purposes is his coupling of
composition directly to embodiment and otherness:
In composition, to produce is first of all to take pleasure in the production of differences. […] For example,
in the language of jazz, to improvise is "to freak freely." A freak is also a monster, a marginal. To
improvise, to compose, is thus related to the idea of the assumption of differences, of the rediscovery and
blossoming of the body. […] Composition ties music to gesture, whose natural support it is; it plugs music
into the noises of life and the body, whose movement it fuels. It is thus laden with risk, disquieting, an
unstable challenging, an anarchic and ominous festival, like a Carnival with an unpredictable outcome
(Attali 142).
If, however, we subscribe to de Duve’s endless cycle of tradition and betrayal in the Arts, then
what follows is that “composing,” or what I have termed distortional practice, is always at risk of
being reabsorbed into the mechanisms of repetition, subsumed under the logos of
representations, or, by way of the “validations” of jurisprudence, made part of the very tradition it
disrupts.
Viewing these matters through a different lens, what is perhaps more troubling than the
homogenizing effects of repetition on aesthetic practices is its effects on scientific visualization.
Bachelard touches precisely upon this habituation of practitioners to particular models, diagrams,
graphical conventions, or modes of visuality, warning that such visual/epistemological habits
tend to dull the “sharp point” of scientific abstraction. As we will see in the next chapter, not
only is this situation commonplace, it has become a matter of urgency for many researchers as
their fields become increasingly siloed, yet inundated with instrumental data. I would argue here
that this urgency is exactly why art-science projects like Unfold are so valuable. Like Turner’s
storms, the stellar nurseries presented upon the screens of Unfold come as close as we can get to
perceptualized instantiations of the phenomena represented—phenomena that are
overwhelming in scale, and a great deal of which lies beyond our sensory thresholds. This is
made possible not through appeals to representational precision, as accurate as the underlying
data or the algorithms may be; it is made possible through the grounding of the experience in the
materiality of the body and the senses as well as that of scientific instruments and media
technologies. The compound eye of science is, in short, given corporeal presence.
Brushing away the worries around noise, uncertainty, and embodiment that drove scientists
away from a perception-based empiricism and toward mechanical objectivity, projects like
Unfold embrace all of these issues and make them the fundamental condition for the phenomenal
encounter. The question is not whether the particular audiovisual language of Unfold or similar
works are more or less accurate, more or less faithful to their referents, or facilitate better
pattern recognition; in many ways, this question presents us with a false dilemma, as it pits
materiality and embodiment against analytical abstraction. In works like Unfold, a better
question would be one of how such haptic and visceral artworks, like their anamorphic
predecessors, might locate an alternate point of view, reconfigure perception and its devices, and
enable us to better apprehend the complexities of a digitized universe.
5 World in a Cell

Although estimates vary, the PDB could triple in size over the next five years. Not only is it likely that the
number of structures will increase dramatically, but the information about each structure is destined to grow
as well. Uniform experimental practices will offer the opportunity to automatically collect more data from
each structure determination. The quality of structures from high-throughput experiments may be more
variable, depending upon such factors as the extent of refinement. Now, in addition to archiving the results
of structural biology projects driven by the need to answer questions arising from the results of biochemical
experiments, we will need to catalogue structures for which little or no functional information is available.
How will the PDB respond to changes in quantity, quality and available functional information in this new
era? (Berman, et. al. 957).

5.1 About this Chapter
In the last two chapters, I have attempted to make a case for an aesthetics and practice of
instrumental data display that I have characterized as “worldizing.” This aesthetic is one that
takes embodiment, material agency, and a multiplicity of perspectives to be qualities that can,
under specific conditions and with due diligence, augment rather than inhibit the analytical
abstractions we produce through mechanical objectivity. It is one that seeks to challenge the
analytical or “forensic” rhetoric of data displays that, in the absence of critical reflection,
presents itself as a “crystal goblet” for the information it references. Important to this
understanding of data and their representation is that each is understood as plastic, albeit
grounded in material constraints. The concept of distortional practice follows directly from this
understanding and underscores the need for equal parts “embodied” and theoretical knowledge as
a means for identifying the extent to which the “hard points” of material and abstract schemas
can be altered, replaced, added to, mutated, or fragmented.
In the present chapter, many of the themes we have examined thus far will be seen in situ,
but more as a general ambiance drifting in and out of picture than as the center of focus. The
primary motifs of this chapter connect most explicitly to the concerns of chapter two, in which
we examined the iconographies of Art and Science and the question of how these might be
disarticulated in the service of forging productive hybrid practices. As we will be attending to
art-science collaboration more directly here, the issues of communication, language, and
cooperative learning will take center stage. The project under discussion in this chapter, The
World in a Cell, presents its collaborators not with an explicitly theoretical or linguistic set of
problems to solve, but with a much knottier set of issues centered on the effectiveness of
representational models. As we will see, designing and developing an approach to these issues
requires a nontrivial rethinking of the disciplinary languages common to the respective fields
involved in the collaboration.
Accordingly, Peter Galison’s historiography of early twentieth century microphysics turns
out to be particularly useful. Galison’s concept of “creole” and “pidgin” languages stands out as
an especially fruitful way of unpacking the sort of radical restructuring of communication
required to undertake such hybrid projects as World in a Cell. Yet, the issues of language bear
upon not only the interpersonal relations and cooperative activity within the laboratory, but also
the “design language” chosen for the representations themselves. The question of what and how
such models should communicate turns out to be much more fraught than it would seem on its
face. In many respects, the concept of “design language,” which is so ubiquitous in the field of
design and which describes the system of formal and communicative features chosen for a
particular product, turns out to be much stickier when applied to the referential objects meant to
communicate phenomena to which we have no direct access and very little embodied intuition.
In such cases, there are two paths for hybrid practitioners. One is toward metaphor as a means
for communicating fundamental and higher order principles; the other is toward the data
produced by the instruments deployed to mediate the objects under study. Each approach has its
advantages and disadvantages and each was taken, with varying qualitative results, during
different stages of World in a Cell’s development.
In what follows, I will examine the motivations for World in a Cell and recount many of the
themes that emerged during its development. My examination of the project’s first phase will
center primarily on the difficulties of developing a shared language between collaborators, the
question of arbitrariness in representation, and the value of modal thinking and metaphor.
Examination of the second phase will mostly concern the direct use of instrumental data for
establishing a design language and will expand upon many of the same issues that follow from
the previous discussion. As I outline the particular problems faced by the team and the solutions
implemented to solve them, I will make the case that the value of the work is not only found in
the analytic or aesthetic function of the final product, but also in the culture that continues to
emerge from its creation, which, in some respects, approximates the “third culture” many have
called for in response to Snow’s Rede Lecture.
In other respects, however, the interdisciplinary approach that has evolved out of this project
is idiosyncratic in ways that trouble its candidacy as an “order,” in the traditional sense. As I
observed at the beginning of this work, everyone wants to talk about art-science as cake; almost
no one wants to talk about how the cake is made. If World in a Cell stands as an example—a so-
called “good object”—for how art-science might be productively undertaken, its primary lesson
is that the recipes for art-science are likely to be as idiosyncratic as the people involved and the
products under discussion. Congruently, the changes to disciplinary thinking for those directly
engaged—at least in cases where the collaboration is successful—will be profound yet difficult
to codify into a general set of rubrics.
As a means of returning, by way of a coda, to the conceptual themes outlined in the
previous two chapters, I will follow my examination of World in a Cell with a discussion of one
of my own works of audiovisual data perceptualization—one which took a great deal of
inspiration from my deep involvement with World in a Cell’s initial research phase and overall
sound design. Entitled GLP1R, this work evolved out of a prototype I developed for a procedural
audio pipeline that uses PDB (Protein Data Bank) files to generate sound design for World in a
Cell’s (phase two) virtual reality experience. As we shift focus to this work, the themes of aeolian
instruments, anamorphic thinking, and distortional practice will return to the foreground.
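While GLP1R itself is discussed in more detail below, the core gesture of the prototype pipeline is simple enough to sketch here. What follows is a minimal, hypothetical illustration in Python—not the production code, and the specific mappings are my own assumptions for this example: atom records are read from the fixed-width columns of a PDB file, and each atom’s coordinates become parameters for a bank of oscillators.

def parse_pdb_atoms(path):
    # Yield (element, x, y, z) from the ATOM/HETATM records of a PDB
    # file; coordinates occupy fixed columns 31-54 of each record.
    with open(path) as f:
        for line in f:
            if line.startswith(("ATOM", "HETATM")):
                x = float(line[30:38])
                y = float(line[38:46])
                z = float(line[46:54])
                element = line[76:78].strip()
                yield element, x, y, z

def atoms_to_oscillators(atoms, f_lo=80.0, f_hi=2000.0):
    # Map each atom's z coordinate to an oscillator frequency on an
    # exponential pitch scale -- an arbitrary mapping chosen purely
    # for illustration.
    atoms = list(atoms)
    zs = [a[3] for a in atoms]
    z_min, z_max = min(zs), max(zs)
    span = (z_max - z_min) or 1.0
    for element, x, y, z in atoms:
        t = (z - z_min) / span
        yield {"element": element,
               "freq": f_lo * (f_hi / f_lo) ** t,
               "pan": x, "depth": y}   # spatialization inputs

In a Max/MSP patch, a parameter list of this kind might drive a bank of oscillator voices; the point of the sketch is only that the structural data themselves, rather than a visual model, supply the raw material of the sound design.

5.2 The City and the Cell: Beginnings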
In the winter of 2016, a meeting was organized between the USC School of Cinematic Arts’
World Building Media Lab, The Bridge Art and Science Alliance, and Dr. Raymond Stevens in
order to discuss what was, to my mind at the time, a moonshot project whose successful outcome
seemed highly uncertain at best. The goal was to build a functional and immersive model of a
human cell using the guiding metaphor of a large and vibrant city. There were two stakeholders
behind this endeavor: the aforementioned Raymond Stevens and his lab at the USC Michelson
Center, and Havas, an advertising and branding firm with expressed interests in exploring the
future of so-called “smart cities.” At this meeting were approximately twenty-five to thirty game
designers, graduate students, scientists, and other experts, most of whom were seated around the
circumference of a large circle of tables, chatting, or else staring at one another in bemusement.
Professor Alex McDowell, head of the WbML, soon called the meeting to order. He began by
introducing himself and the function of the lab and then invited discussion centered upon the
question of how to develop an immersive, functional model of the pancreatic beta cell, using the
city as a guiding metaphor, while staying true to the science. I can only describe the hour or more
that followed as chaotic.
The most outspoken in the group seemed focused upon the innumerable ways such a thing
simply could not be done and, as evidence, they recounted their own experiences designing
similar media projects under similar constraints. Others, nearly as outspoken and drawing
equally upon their own professional experiences, launched into rejoinders describing precisely
how such a thing could be done. The three who seemed most interested in tackling the
practicalities and requirements for getting the project off the ground were Professor McDowell,
Dr. Stevens—both of whom would act as the project’s principal investigators—and Dr. Todd
Richmond, whose background as both chemist and researcher at USC’s Institute for Creative
Technologies affords him a unique and invaluable perspective on such projects. The meeting
ended that evening without much being settled but, despite this fact—and here I am speaking for
myself—there was a sense that if a collaboration came out of this endeavor, it would be highly
unique and potentially paradigm-shifting with respect to the task of representing cellular
structure.
By the following May, a much smaller team of investigators had been assembled, including
myself, Dr. Richmond, three graduate students from the Keck School of Medicine, Kyle
McClary, Kate White, and Jitin Singla, as well as my fellow iMAP colleague, Laura
Cechanowicz. As co-P.I.s, McDowell and Stevens largely delegated the bulk of the project’s
development to this core team and to project manager, Ronni Kimm, checking in periodically for
progress reports and to offer general guidance and direction. My own role during this initial
phase of what was then entitled The Cell & the City was that of design research lead and
principal sound designer, although this latter role would be deferred until we approached the end
of production and development. Over the course of the initial research and discovery phase, a
number of other professionals, including concept designers, screenwriters, and architects were
brought on board to provide insight and expertise.

5.3 The Cell & the City: Motives and Methodology
The research methodology of Alex McDowell’s particular variant of “worldbuilding” is
oriented toward the mapping of the various complex systems that make up a world into
“ecologies,” each comprising a number of “domains.” Domains can be as diverse as architecture,
education, infrastructure, transportation, language, dance, waste disposal, etc., and while
ecologies like structure, energy, and politics remain mostly static across various projects,
domains change often and are dependent upon the particular research context. Research is
typically conducted by means of literature reviews, gathering of insights from domain experts
(often referred to in other design disciplines as “SMEs” or “subject matter experts”), and through
what Joachim Halse has termed “ethnographies of the possible” (Halse, “Ethnographies of the
Possible” 181-182), which, for the WbML, involves the creation of speculative characters who
inhabit the world under construction and who help to provide a first person, “on the ground”
view of the world space and the various intersections of its domains.
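As a schematic gloss only—the grouping of domains under ecologies shown here is my own illustration, not the WbML’s canonical assignment—the methodology’s mapping might be modeled as a simple nested structure:

# Hypothetical sketch of the ecology/domain mapping described above;
# which domains fall under which ecologies varies by project.
world_model = {
    "structure": ["architecture", "infrastructure", "transportation"],
    "energy": ["waste disposal"],
    "politics": ["education", "language"],
}

# A speculative character supplies a first-person view across domains:
character = {"name": "hypothetical inhabitant",
             "traverses": ["transportation", "language", "dance"]}

The value of the ethnographic move is precisely that such characters cut across these partitions, surfacing intersections the taxonomy alone would hide.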
Having conducted interviews and gathered insights, the research team is better informed and
thus better equipped to design and construct the “rules” that lend a fictional world its plausibility.
The grounded speculations afforded by in-depth research are capable of providing further,
actionable insights with respect to each domain and often suggest possible or plausible
interventions into real world problems. To this end, provocations that gesture toward the future
trajectory of the built world are generated through interrogations of the system in the form of
“What if?” questions. The discussions that follow these provocations, particularly in design
scenarios centered on future reality, often resonate with Stuart Candy’s rubric of “possible /
plausible / probable / desirable” (Candy 45-48), with the “What if?” questions oriented
particularly toward “desirable.” More often, however, designers following this methodology are
encouraged to avoid utopian as well as dystopian scenarios as a means for keeping the
speculation grounded in plausibility. While much of the approach I have just outlined was taken
in the earliest days of research for World in a Cell, it soon became apparent that the particular
scientific nature of the project required serious modifications to the methodology. These changes
and the reasons for them planted the conceptual seeds for the present chapter.
For Dr. Stevens and the other scientists involved, there are two primary catalysts for World in
a Cell. The first relates to the perceived complacency of structural biologists with respect to the
current visual models of the cell. As Bachelard observes, this complacency, born out of comfort
and overfamiliarity, can often stifle progress. The second catalyst is the barriers to
communication that have been created within structural biology, biochemistry, and
bioinformatics due to the siloing of research specialists within their respective disciplines. As
each group of specialists works toward solving ever more rarefied problems, they consequently
develop equally rarefied languages, conventions, tools, and representational schemas. Very often,
the data files they produce are reflective of this situation and are formatted according to
protocols completely foreign to their colleagues working in adjacent areas of study. Part of the
problem, however, might come down to perennial issues of funding and recognition as much as
to the inadvertent effects of disciplinary siloing.
In a 2016 editorial for Nature Methods, entitled “Data Sharing Comes to Structural Biology,”
the authors highlight two data repositories, The Structural Biology Data Grid, and EMPIAR, an
electron microscopy archive for the Electron Microscopy Data Bank, as examples of the strides
being made in recent years toward making data and collection methodologies more transparent.
Therein, the authors observe that many of the most pressing issues that are only now being
addressed relate to the sheer volume and size of data being archived:
Fundamentally, raw data sets from both X-ray crystallography and cryo-EM used for high-resolution
macromolecular structure determination consist of series of 2D images. A typical X-ray diffraction data set
is about 5 gigabytes, and a large cryo-EM data set could top out at an astounding 10 terabytes. It is only in
the past few years, especially with the advent of inexpensive cloud storage systems, that hosting and
sharing data sets of such enormity has even been technically realistic (“Data Sharing” 381).
They also note that the steps being taken to tackle these issues are part of a cultural shift within
the sciences with respect to data sharing and the value of peer review, but they provide the caveat
that some have not been so quick to get on board, stating that, “given the competitive nature of
this field, there are understandably dissenters on this position who worry about releasing their
precious data sets into the wild to be potentially exploited by other groups that put no time, effort
or money into their generation” (“Data Sharing” 381).
For Dr. Stevens, despite the fact that correctives to the situation above are beginning to
appear, there remains a great divergence of disciplinary knowledge that serves only to stifle
discovery, which, given the nature of the Pancreatic Beta Cell Consortium’s work identifying
drug targets for diabetes, means that lives are quite literally at stake. Dr. Stevens’s eventual (and
admittedly very long-term) goal is to create an immersive, interactive model of the entire human
body complete with detailed simulations of all its various systems. For now, the pancreatic beta
cell serves as a test case for immersive media designers to discover whether and how the objects
they create are capable of either communicating a sufficient amount of information, or
generating a sufficiently evocative collaborative space, to make the undertaking worthwhile.
Given the complexity of the inner workings of the cell, the radical differences in scale even
between the cell’s component parts, and the fact that so many of the processes involved test the
limits of our everyday intuitions, it is safe to say that realizing this goal is no trivial undertaking;
the testing of design methodologies, due both to these and cultural factors, as well as the design
team’s testing of the boundaries of scientific principles, has fueled the idiosyncratic nature of
World in a Cell’s hybrid research culture.
While the goals I have outlined above were those communicated to the research and design
team by Dr. Stevens, as previously noted, the first phase of the project, The Cell & the City, was
given something of a dual brief. The gist of the request was for the team to ingest knowledge from
structural biologists on one side of the aisle, and architects, city planners, and technologists on
the other, with an eye toward identifying those insights that might be applied across disciplines.
Ostensibly, one might suspect that, given the complexity of the biological subject matter alone,
this brief would be one nearly impossible to meet. From my own perspective, this turns out to be
very near to the truth, but not for reasons of complexity. Again, from my own perspective, the
research conducted by the team through literature reviews, interviews and, as we will see,
informal pedagogical sessions given by our scientist collaborators was sufficient for successful
prototyping in nearly any design context. The trouble with the brief turned out to be of a much
more philosophical nature, which I will unpack in what follows.

5.4 The Trading Zone
In his history of the progress made in detecting and visualizing subatomic particles during
the earliest years of microphysics, Peter Galison posits two concepts that I believe are useful in
describing the hybridity of practice that emerged from The Cell & the City and its follow-up, World
in a Cell. The first is that of the “trading zone,” which Galison describes as “an intermediate
domain in which procedures [can] be coordinated locally even where broader meanings
[clash]” (Galison, Image and Logic 46). Key to this idea is that the trading zone is a space where
“two dissimilar groups can find common ground. They can exchange fish for baskets, enforcing
subtle equations of correspondence between quantity, quality, and type, and yet utterly disagree
on the broader (global) significance of the items exchanged” (Galison, Image and Logic 46). The
second concept is that of “interlanguage,” which Galison characterizes, borrowing from the field
of linguistics, as “creoles” and “pidgins.” Each of these hybrid vernaculars is a means for
communication that develops around the shared problems of adjacent cultures. “Pidgins and
creoles,” says Galison, “are specific to the uses to which they are put and the languages they
connect. As such, they are emphatically not global lingua francas” (Galison, Image and Logic
49).
Galison’s original motivation for developing both concepts was that of reconciling Thomas
Kuhn’s two influential ideas, “paradigm shifts” and “normal science,” both of which we briefly
examined in chapter two. Galison’s argument addresses the fact that, despite the
incommensurability that emerges between new and old scientific paradigms, normal science
seemingly progresses unimpeded. But, how? If Einsteinian physics is ostensibly
incommensurable with Newtonian mechanics, how do scientists, some of whom continue to hold
fast to older paradigms, continue to work and make progress together? Galison’s answer is drawn
from examinations of how the culture of research scientists and that of engineers, working
together to construct the material means for investigating subatomic particles, developed new
modes of doing, understanding, and communicating. Moreover, they did so despite the fact that
their beliefs and languages with respect to the nature of the phenomena differed wildly.
In “Trading Zones and Interactional Expertise,” Collins and his colleagues make a case for
different varieties of trading zone. According to these authors, the type that most concerns
Galison is the “interlanguage trading zone” (Collins et. al. 8). Other types of trading zones
include “enforced trading zones,” in which the adoption of paradigms is driven by institutional
power; “subversive trading zones,” in which the adoption of a colonizing paradigm is driven by a
slow replacement of its alternatives; and, finally, the “fractionated trading zone,” which the
authors divide into two sub-variants: the “boundary object trading zone,” and the “interactional
trading zone.” (Collins et. al. 11-12). The first of these sub-variants focuses on the trade of what
Galison terms “boundary objects”—a name given to any material product that is germane to the
practices and values of both cultures involved in cooperation and which therefore becomes an
object of trade. Notably, the boundary object trading zone can develop in the absence of a shared
language while, in contradistinction, the interactional trading zone develops around language
specifically and often in a complete absence of boundary objects (Collins et. al. 12). Yet,
however distinct each variant might be, the problem of language and communication more
broadly is one these authors place at the very center of the concept, arguing that, “‘trading zones’
[are] locations in which communities with a deep problem of communication manage to
communicate. If there is no problem of communication, there is simply ‘trade,’ not a ‘trading
zone’” (Collins, et. al. 8).
The interdisciplinary space created between designers and scientists during the development
for both The Cell & the City (henceforth, C&tC) and World in a Cell (WiaC) can best be
characterized as a hybrid of all of the forms of trading zones listed by Galison and Collins et. al.
Given the nature of the project and the sources of funding, a great deal of top-down enforcement
of specific paradigms became a subject of much debate during the process. While “enforcement”
is perhaps too strong a word, there were times when the (nearly always tacit) philosophical
positions of the scientists shaped the direction of design discussions in ways that made power
imbalances evident. I will note, however, that this observation describes a complex set of
interactions that are highly subject to interpretation and that what power dynamics did and
continue to exist for WiaC rarely present(ed) serious problems for productive collaboration. The
creative input from designers was and is highly valued, never explicitly stifled, and, I would
argue, is given better toe-holds by the constraints presented by scientist-interlocutors.
I would be remiss, however, in failing to note the presence of the power imbalance itself, as
the subject of funding is a key component of the network of factors that make projects of this sort
possible. Accordingly, the shape of this particular “enforced trading zone” resists the
characterization above that holds enforcement as a “top-down” institutional factor, and more
closely resembles the usual client/designer dynamic. As one of the primary motivations of the
WbML is to rethink design methodology and practice in ways that spur positive changes within
other domains and to reciprocally make positive changes within its own domain by ingesting
knowledge from collaborators, the typical client/designer interplay is usually one to be avoided.
The problem of funding, however, is intractable and pervasive for design labs like the WbML—a
fact that very often brings power imbalances to the fore and makes operating in the preferred
way exceedingly difficult or, at worst, impossible.
All of the other forms of trading zones listed above were constructed in pieces and cobbled
together during various phases of C&tC’s and WiaC’s development. The interlanguage trading
zone and the boundary object stand out as the foundations upon which the rest were constructed.
However tempting it might be to perform a wholesale remapping of Galison’s and Collins’s
respective understandings of trading zones onto the interdisciplinary exchange produced by the
C&tC and WiaC teams, there are factors that trouble this move which deserve unpacking. First,
Galison’s trading zones are posited as a means for making historical sense of the progress made
between engineers and research scientists and, as siloed as these theorists and practitioners
undoubtedly were (and arguably remain), their fields are nevertheless highly adjacent to one
another. Thus, they share deep epistemological foundations that help to shore up whatever
differences in paradigm that separate them. It is therefore reasonable to question whether
productive collaboration would be possible if such were not the case.
To employ a useful bit of familiar reductio ad absurdum, we might ask what trading zone
would be possible between a cosmologist and a fundamentalist creationist. Both take the origin
of the universe to be their shared interest, but their set of epistemic assumptions and constraints
could not be more antithetical. This is not to say that such a trading zone would be impossible,
but it is difficult to imagine the amount of philosophical ground one would have to retread in
order to achieve it. An objection might be made that this example doesn’t exactly fulfill the
trading zone’s prerequisite of shared goals since the creationist has no need to conduct inquiry
(or at least inquiry into causalities). The creationist’s goals, it might be objected, are those of
identifying a tenable morality that ensures a positive destiny for the immortal soul—a goal that,
safe to say, would be entirely irrelevant to the inquiries of cosmologists. In fact, this disjuncture
is precisely what drove Stephen Jay Gould to formulate his (in)famous theory of
“nonoverlapping magisteria,” which, as the name suggests, posits that Science and Religion
occupy two distinct domains that are fully partitioned from one another. “The lack of conflict
between science and religion,” says Gould, “arises from a lack of overlap between their
respective domains of professional expertise—science in the empirical constitution of the
universe, and religion in the search for proper ethical values and the spiritual meaning of our
lives” (Gould 2).
I will not rehearse the many arguments against this claim as it would take us too far from the
matter at hand. But, what they all have in common can be found in the following objection: Even
if they aren’t directly engaged in active inquiry, fundamentalist creationists do make
cosmological claims, which would indicate that these two ostensibly nonoverlapping magisteria
do, in fact, overlap at the point of their greatest incongruity. What follows is that the disjuncture
is one produced by wildly differing epistemic foundations. For our purposes, what is most
notable about Gould’s concept of nonoverlapping magisteria is its familiar tone. It rings precisely
of those epistemological assumptions that follow from the iconographies of Art and Science.
Such hard boundaries between domains, as argued previously, are difficult to maintain. Their
penumbral edges interleave most where the certainties produced by each order are at their
haziest.
What is perhaps more troubling for our account of WiaC as trading zone is that the metaphor
of overlap (or interleaving) is not as straightforward as it seems on its face. Often, what we are
likely to find in the space between those orders we wish to bring together in cooperation is not
the hazy and unordered qualities of Waldenfels’s “twilight,” or that of my own penumbral field,
but, potentially, the territories previously colonized by a wider range of disciplines. In exploring
what was ostensibly thought to be an unmarked space between the expertise of, for example,
media designers and that of structural biologists, we are likely to find ourselves walking blindly
into the well-mapped terrain of cognitive scientists, biologists from neighboring disciplines, or
even game designers. Of course, this problem is one that well-grounded research is meant to
address, but it looms nevertheless over any endeavor that seeks to bring disciplines together in
cooperation that are themselves separated by a vast conceptual distance. For the WiaC team, the
concrete manifestations of this distance are the boundary objects of visual/representational
models, which, taken together with the issues of language and disciplinary terminology (what we
might understand as “the noise of the disciplinary other”) were a subject of much confusion and
debate from the project’s beginnings. Indeed, the issue of representation, as we have seen
previously, is one that is radically entangled with the problem of embodied intuition, theoretical
background knowledge, and the extent to which scientific imaginaries color one’s thinking.

5.5 The Abstract Vector of Science
In many respects, the relationship between the artists and scientists who come together in
today’s art-science collaborations echoes past relationships between artisans and the Church. A
common view of the artists who produced the religious imagery adorning the walls and ceilings
of Medieval, Gothic, and Renaissance churches is one that holds them as lone geniuses from
whose hands and minds sublime works of visual storytelling sprang fully formed. However, not
only did these artists rarely execute their works in isolation, but it stands to reason that the
Church played a much more pivotal role in guiding the content than such an understanding
would suggest. The Church had strict guidelines in place for how biblical stories were to be
depicted; thus, the artist played what might be considered a highly constrained hermeneutic role
in the visual communication of what is almost exclusively narrative content. This role was
highly managed by Church literati; the well-known tension that existed between Michelangelo
and the clergy after the former’s execution of the frescoes in the Sistine Chapel is a testament to
this fact. I would argue that the most common high-profile collaborations we find in art-science
discourse today are born out of similar relationships between artists and scientists. As previously
noted, the flows of capital that keep these collaborations afloat are, moreover, similarly
asymmetrical, with the Sciences supplanting the Church in the role of patron.
What is more striking, however, is that both the Church and the Sciences request the artist to
depict in a language of the senses a set of truths that are extant within a cosmic reality well
beyond the spectral range of the senses. In the case of religious patronage, it was a spiritual
reality the artist was expected to capture despite, as we saw in our discussion of anamorphosis,
the partitioning of such reality from the illusions of biological perception. In the case of the
Sciences, it is very often abstract or instrumentally mediated phenomena that escape everyday
intuition, which the artist must communicate in the language of lived experience. The respective
“realities” that Science and Religion position outside of our biological reach began to collide
around the time of Galileo’s first astronomical observations with his telescope, and the tension
between them is made explicit in the painting, Sight, from Peter Paul Rubens’s and Jan Brueghel
the Elder’s allegorical series, The Five Senses (fig. 13).
Ostensibly, Sight is a fairly straightforward depiction of a “Kunstkammer”—a collection of
paintings, antiquities, and natural curiosities that were popular among the educated elite during
the sixteenth and seventeenth centuries. However, closer examination reveals a complicated
tangle of spiritual and scientific themes centering on the epistemic authority afforded to vision.
Firstly, and most obviously, the image is a classic example of mise en abyme. Not only are there
secondary images depicted within the primary image, but the wreathed Virgin and Child shown
in the lower right suggests a tertiary level of representation—a painting of a painting within a
painting. Thus the two artists suggest in the most general way possible the degree to which
representation can be understood as an infinite hall of mirrors reflecting no primary
reality. Secondly, resting on the frame of the wreathed Virgin and facing the partially obscured
oculus set into the back wall is a red parrot, which we may read as symbolic of mimicry.
The third point of Sight’s triangulation is found in the lower mid-left where we find what is
likely to be one of the earliest depictions of the newly-invented Kepler telescope (Molaro &
Selvelli, “On the Telescopes” 330-331) resting in pride of place between who we may presume
to be Venus and Cupid. The telescope is pointed heavenward, toward the oculus, but neither
Venus nor Cupid seem to be interested in looking through it. They are, instead, gazing at an
image depicting the story of Christ healing the blind man. At Cupid’s feet, we see a monkey—a
classic symbol of humanity’s foolishness—holding a pair of spectacles. Taken together, what all
of this symbolism seems to suggest is that the only “true” vision is a spiritual vision. While
spectacles might correct our poor eyesight and the telescope might extend our vision into the
distance, revealing hitherto unseen worlds, biological vision itself produces only illusions; it is
no more veridical than the imitations performed by the parrot. In the final balance, only Christ
can provide a window upon the truth of things.
Fig. 13. Brueghel, Jan the Elder, and Peter Paul Rubens. Sight. 1617. Museo del Prado,
https://www.museodelprado.es/en/the-collection/art-work/the-sense-of-sight/
494fd4d5-16d2-4857-811b-e0b2a0eb7fc7?searchid=d94c178d-39c4-ace1-0b93-44d4bdcade13.
Accessed 1 April 2020.
Sight’s message is, in fact, one that bears a striking resemblance to many perspectives in
modern neuroscience and cognitive science, which hold the veridicality of perception to be at
most a matter of the predictive and pattern-recognizing function of the brain’s various neuronal
structures. According to this view, incoming sensory signals are constantly checked by these
structures against stored models of past successful actions; having retrieved the appropriate
model, “best guesses” are subsequently made with respect to the most appropriate action to the
given stimulus. The brain reinforces these models when actions are successful and modifies them
when actions fail. “What is the relation […] between what we seem to perceive, and the
probabilistic inner economy?” asks Andy Clark in Surfing Uncertainty, “We seem to perceive a
world of determinate objects and events, a world populated by dogs and dialogues, tables and
tangos. Yet underlying those perceptions […] are encodings of complex intertwined distributions
of probabilities, including estimations of our own sensory uncertainty” (Clark 168). Yet, Clark
warns against an understanding of these encodings that views them as tiny representations
standing between us and the world. In words that very much recall our earlier examination of
embodied doing, Clark states that, “it is the guidance of world-engaging action, not the
production of ‘accurate’ internal representations, that is the real purpose of the prediction error
minimizing routine itself […] Rather than aiming to reveal some kind of action-neutral image of
an objective realm, prediction-driven learning delivers a grip upon affordances: the possibilities
for action and intervention that the environment makes available to a given agent” (Clark
168-171).
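A toy numerical caricature—mine, and far simpler than the formal models Clark surveys—captures the flavor of this machinery: an internal estimate is nudged toward each noisy incoming signal in proportion to the prediction error, and the error shrinks as the "model" improves.

import random

estimate = 0.0                 # the system's current "best guess"
learning_rate = 0.1

for _ in range(200):
    signal = 1.0 + random.gauss(0.0, 0.2)   # noisy input from the "world"
    error = signal - estimate               # prediction error
    estimate += learning_rate * error       # update: minimize the error

# estimate now hovers near 1.0 despite the sensory noise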
Biological perception, as outlined above, is always partial, out of sync with events occurring
in “real time,” and—as we have discussed at length—is only sensitive to a limited range of the
world’s phenomena. This, of course, is well-known to scientists and, indeed, to a very large
cross-section of the lay public. For our purposes, however, there are two problems that follow from
this fact that can be consistently relied upon to trouble any epistemological truce to be made in
art-science collaborations. Even at best, these problems are what arguably create the
epistemological disjunctures that make the trading zone difficult to create and sustain. The first
problem, simply put, is that despite the discursive facet of Arts practices, what we might call the
“proximal” knowledge of artists is often deeply and inextricably linked with sensory experience
—visual, auditory, proprioceptive, etc. Meanwhile, despite the indispensable material agencies
that undergird scientific practice, the physical laws accounted for by scientific theories exhibit,
per Bachelard, “vectors of abstraction” (Bachelard, The Formation of the Scientific Mind 26),
proficiency in which requires considerable time, effort, and study.
More often than not, vectors of abstraction point directly away from the embodied intuitions
that are afforded by sensory perception. As James Elkins states in Six Stories from the End of
Representation,
From a scientific standpoint it probably doesn’t make sense to try stretching your intuition to grasp
something tiny or gigantic, because intuitive understanding rarely helps and can often be misleading. It is a
constant refrain in physics: intuition won’t work—it is irrelevant or useless. In astronomy, intuitive
understanding has severe limits: people talk about the behavior of gases in nebulae by analogy with jets and
bubbles in fluids, but there is nothing in human experience analogous to a neutron star or black
hole (Elkins 94).
What follows is yet a second problem, viz. the difficulties that emerge when developing a
cooperative or creole language with which artists or designers, many of whom might have had
little scientific exposure since their high school years, will be able to communicate with
scientists about counterintuitive phenomena. Of course, intuition can be helped along with
research aids like models, diagrams, and other forms of representation, provided these are not
conflated with the phenomena or principles they communicate. We will examine this topic more
directly in due time. For now, however, let us prime that discussion by identifying the core
challenge it represents: Many of the pedagogical tools used to communicate scientific principles
already come loaded with metaphors that speak to our embodied/experiential intuitions, but
which are, to scientists, profoundly misleading.
The problem outlined above is one side of a Janus-faced issue. The other side of it is that
scientists collaborating with artists often misunderstand the nature of embodied or intuitive
making. During the development of WiaC, it was commonly expressed that many of the
scientists involved, even those who were ecstatic about the products of design practices, found
the opaqueness of Arts methodologies disconcerting. This, I suspect, is one of the side-effects of
semantic ascent: Working, as scientists do, in a field in which embodied labor is funneled into
verbal report for the purposes of methodological transparency and peer review, it stands to
reason that the work of artists, who are very rarely held responsible for such report, would seem
to scientists little more than black magic. This is doubly true for scientists who give credence to
the more insidious iconographies of Art.
5.6 Where Intuition Fails
In order to get better conceptual purchase for examining the challenge of constructing
“better” modes of representation, let us first turn to the problem of the abstract vector’s many
challenges to the intuitions we develop within our manifest images. There are a number of
physical principles that biologists take as a matter of course, but which, it is fair to say, gave
designers of C&tC and WiaC trouble from the start. Two of these stand out as particularly good
examples of how intuition fails in relation to the abstract vector. In order to understand the
challenge they present, however, we need a more precise definition of “intuition.”
As Gore and Sadler-Smith argue in their article “Unpacking Intuition: A Process and
Outcome Framework,” intuition is not a monolithic or unitary concept; there are a number of
psychological processes that produce “gut feelings,” to which we have given the label (Gore &
Sadler-Smith 304-305). Moreover, these mechanisms fall into two separate categories: “domain-
general,” and “domain-specific.” The latter category includes such intuitions that are
commensurable with the development of domain expertise. These intuitions include those that
are applied to task-specific problem solving, but also include socially constructed intuitions like
those involved in moral or ethical decision-making (Gore & Sadler-Smith 305). As the name
suggests, “domain-general” includes those intuitions that are developed affectively and
holistically through our encounters with the world at large.
In their article “Development and Validation of a New Measure of Intuition: The Intuition
Scale,” Pretz et. al. further subdivide (mostly domain-general) intuitions into what they term the
“Types of Intuition Scale.” These, they label “holistic,” “inferential,” and “affective,” defining
them thusly:
Holistic intuitions are judgments based on a qualitatively non-analytical process, decisions made by
integrating multiple, diverse cues into a whole that may or may not be explicit in nature. Inferential
intuitions are judgments based on automated inferences, decision-making processes that were once
analytical but have become intuitive with practice. Affective intuitions are judgments based primarily on
emotional reactions to decision situations (Pretz, et. al. 454).
Other varieties of intuition have been posited, but generally speaking, these three
characterizations are more than sufficient for our purposes. There are, however, two things
of note in these distinctions. First, they comport with our previous account of embodied doing
and tacit knowledge. What becomes tacit, according to this account, becomes a matter of
intuitive decision-making. Second, the differences between each of these varieties of intuition
reveal a latent potential for one type to interfere with the formation of another. For example,
affective or holistic intuitions might interfere with the process by which analytical processes
become those of inferential intuition. Such interference, I would argue, is exactly how
“everyday” intuition often jams our capacity to fully grasp those principles or phenomena
captured by the vector of abstraction.
The first concept that presented a challenge to the inferential intuitions of the C&tC
worldbuilding team is that of agency. During the project’s initial research period, it became
obvious that not only is the term employed differently between scientists and designers, but its
relevance to the scientific subject matter points in a direction precisely away from human-
centered design. As the initial goal was to cull mutually-beneficial insights from biology as well
as architecture and city planning, these near-antithetical definitions of agency presented serious
problems and began to overburden our discussions. For designers of interactive experiences as
well as for those who study architecture and cities, “agency” is understood to be a quality
possessed by human users and human inhabitants. For the interaction design of C&tC, this raised
the question of the role played by the user who enters the VR “city.”
For those of us tasked with designing the functional language of the experience, it raised
much more troubling issues with respect to mapping the logic of cities onto the functions of
proteins and vice versa. The inner workings of the cell are characterized by noise, pressure, and
stochastic motion. Many processes are highly dependent upon the successful outcome of a
number of ancillary processes. Often, the uncertainty of these processes means that success
comes down to probabilistic factors of quantity or saturation. Simply put, the more of a
particular signaling peptide, enzyme, or other substance a process is capable of producing, the
greater the probability that the appropriate “message” will be passed along the signal chain. At
no point during the unfolding of these processes does agency, understood in relation to intention
or free will, play a role of any sort.
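A toy simulation—my own, with arbitrary numbers—makes the coupling of quantity and probability concrete: if each copy of a signaling molecule has only a tiny chance of reaching its receptor, the chance that at least one succeeds, 1 − (1 − p)^n, climbs steeply with copy number.

import random

def message_passed(n_copies, p_single=0.0005):
    # The "message" goes through if at least one copy succeeds;
    # analytically, P = 1 - (1 - p_single) ** n_copies.
    return any(random.random() < p_single for _ in range(n_copies))

trials = 2_000
for n in (100, 1_000, 10_000):
    rate = sum(message_passed(n) for _ in range(trials)) / trials
    print(n, round(rate, 3))   # success probability rises with quantity

Nothing in the loop intends anything; saturation alone does the signaling work.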
Given the highly uncertain, nondeterministic nature of the cell, how does one begin
rethinking city infrastructure through cellular processes without subsuming human agency
into the emergent environment created by blind, algorithmic processes any more than it
already is? One approach explored by the C&tC team was to examine the processes operating in
cities that might be characterized as emergent or autonomic. Traffic jams, for example, have long
been understood as an emergent phenomenon produced out of thousands of complex and automated
processes and interactions. Could there be lessons present in cell signaling that present
opportunities for reimagining one or more of these processes? Could such a reimagined process
be facilitated in the near future by ambient computing, IoT, and smart materials? As the team
moved down this speculative path, we turned to experts like Ben Cerveny, president of the
Foundation for Public Code, and Alvise Simondetti, head of the Arup Explores program, whose
research in biomimicry, machine learning, and digital fabrication investigates precisely these
sorts of possibilities. What we learned from this exercise is that, while the insights gleaned from
biological systems might certainly be applied to the technological side of IoT systems in cities,
accounting for the human factor throws a monkey wrench directly into whatever insights might
flow back in the opposite direction.
Compounding the problem above is that even the cellular processes themselves are often,
despite their relative efficiencies, exceedingly baroque in their “construction”—a fact that chafes
against the “simpler is better” ethos of architecture and city planning. A constant refrain among
the CitC and WiaC scientists is “the cell is an extremely complex Rube Goldberg machine”—one
that, thanks to natural selection, serves a function. In more precise terms, the characterization of
cellular machinery as “Rube Goldberg-like” is meant to underline the fact that, despite the
seemingly unnecessary complexity of these systems, they are actually quite efficient and both
their complexity and their efficiency are the result of evolutionary adaptations. Yet, for some, this
explanation did not clarify why agency might be a problematic concept. The trouble lies in what
Daniel Dennett has called “Darwin’s strange inversion of reasoning.” In short, what Darwin’s
theory teaches us is that natural selection has no prior intention; rather, its solutions are always
applied post hoc to the challenges of the environment. The reproducibility of these products is
perpetually being tested against selection pressures:
When we observe the caddis fly’s impressive food sieve we can see that there are reasons for its features
that are strikingly similar to the reasons for the features of another artifact for harvesting food from water,
the lobster trap. The difference is that the reasons in the former case are not represented anywhere. Not in
the caddis fly’s ‘‘mind’’ or brain, and not in the process of natural selection that ‘‘honored’’ those reasons
by blindly homing in on the best design. These are examples of the ubiquitous ‘‘free-floating rationales’’ of
evolution (Dennett 10063).
As Dennett’s characterization of free-floating rationales suggests, the challenge presented by
Darwin’s strange inversion is one to inferential rather than to affective or holistic intuition. The
challenge is therefore one that is arguably much easier to meet. Indeed, this understanding of
biological function—one which forces a rethinking of agency—presented the C&tC team with
narrative opportunities as much as it did conceptual challenges. One of the more evocative
“What if?” questions that emerged from these considerations centered upon a fictional machine
learning system that uses evolutionary algorithms to “evolve” a city in silico using a combination
of cell and city data for its training models. This became the basic narrative premise for C&tC’s
virtual reality experience prototype. “Visitors” to the fictional and virtual city would enter and
explore it through an equally fictional VR headset, effectively answering the question of “who is
the user?” in a highly self-referential way. The user can thus be placed in any narrative role—
scientist, visitor, architect—and in a way that removes any explicit need for direct interaction. In
resonance with our earlier discussion of the forensic aesthetics of Sleep No More, the user enters
the experience as a “ghost in the machine” who can go anywhere and interrogate any part of the
city’s systems that strikes their interest, while the operations of the systems themselves continue
“on rails” in the immersive environment.
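Though the machine learning system of this premise was fictional, the mechanism it invokes is real and simple to sketch. Below is a minimal, generic evolutionary loop—mine, not the project’s actual design, with a stand-in fitness function: candidate “layouts” are blindly mutated and selected, and the reasons for the winning design are, per Dennett, represented nowhere in the system.

import random

def evolve(fitness, genome_len=8, pop_size=30, generations=60):
    # Blind variation plus selection: no goal is represented anywhere;
    # "good" designs simply survive the fitness test more often.
    pop = [[random.uniform(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]                        # selection
        pop = [[g + random.gauss(0, 0.05) for g in random.choice(survivors)]
               for _ in range(pop_size)]                        # mutation
    return max(pop, key=fitness)

# Stand-in fitness for a toy "layout": prefer genomes whose values sum to 4.
best = evolve(lambda genome: -abs(sum(genome) - 4.0))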
As is standard in the production of any prototype, there were a number of problems that rose
to the surface only after the approach above was implemented. For our scientist-collaborators,
the most obvious relates to the accuracy of the “architectural” designs with respect to their PDB
referents. The most intractable problem that followed from this was that of scale, which, in the
first prototype, was never quite resolved either between the user and the world or between the
various “constituents” that make up the world. Thus, we arrive at the second challenge to
intuition—this one of a sort tied directly to our embodied situations. As suggested by Edgerton’s
observation, our situated, holistic intuitions of scale often interfere with our attempts to grasp the
exceedingly small, large, or distant. And this fact can apply to raw numbers as much as to their
physical referents. When we speak of inches, yards, or kilometers, most of us have some
intuition for the spatial relationships invoked. In the case of measurements at the scale of
nanometers and Ångströms, quite the opposite is true. It was commonly expressed among the
scientists of C&tC that even they and their colleagues, all of whom have had extensive
theoretical training with respect to such incomprehensibly minuscule regions of space, have
difficulty imagining them. In the follow-up research for World in a Cell, both fidelity to the PDB
data and the issue of scale were addressed much more directly and with a great deal more care.
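To give a sense of the numbers involved (approximate, order-of-magnitude figures offered here only for illustration):

1\,\text{Å} = 10^{-10}\,\text{m}, \qquad 1\,\text{nm} = 10\,\text{Å}
\text{membrane protein} \approx 5\,\text{nm}; \qquad \text{beta cell} \approx 10\,\mu\text{m} = 10^{4}\,\text{nm}
\text{ratio} \approx 10^{4} / 5 = 2 \times 10^{3}

Scaled so that a single membrane protein stood one meter tall, the cell containing it would be a structure roughly two kilometers across—a disproportion for which embodied experience offers no ready anchor.

5.7 Language Construction: The Distortional Effects of Metaphor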
One of the primary tensions and potential delights of art-science collaboration—at least those
that operate under such briefs as those given for C&tC and WiaC—is the negotiation of design
constraints that are in agreement with well- or even half-understood scientific principles. For this
to happen, of course, requires all collaborators to be on the same page with respect to the content,
and this means sharing a language with which to organize thought and action with respect to
such counterintuitive principles as those we have been examining. It stands to reason that, for
scientists, whose models and diagrams are solely intended to encapsulate and transmit scientific
information, the aesthetic considerations of immersive or experiential design might often seem at
best irrelevant or, at worst, misleading. The same distortional effect can, however, be produced
by ill-chosen metaphors. In many respects, the difficulty of selecting appropriate metaphors for
certain processes or principles follows directly from the various challenges to intuition we have
examined.
Following from the discussion in the previous chapter, “distortion” is understood as having a twofold potential: its positive potential is developed in relation to one’s understanding of
theoretical “hard points,” while its negative potential acts to inhibit that very understanding. As
mentioned in the opening of this chapter, the earliest research period of C&tC involved a number
of pedagogical sessions conducted by three doctoral researchers, Kyle McClary of the Bridge
Institute for Convergent Biosciences as well as Kate White, and Jitin Singla of Dr. Stevens’s lab.
These sessions proved invaluable for developing the understanding required for C&tC’s design,
but, more importantly, they also serve as interesting examples of how Galison’s creole languages
develop in the trading zone created between practitioners of disciplines with such wildly
differing epistemes. Indeed, it was during these sessions that the most counterintuitive scientific
principles came to be understood as potential barriers for productive collaboration.
During these sessions, McClary, White, and Singla consistently underscored that the
metaphors they were using (e.g. evoking mitochondria as “the powerhouses of the cell”) were
“not quite right.” With each explanation, it seemed, the metaphor employed was given this
particular asterisk. The reason for this is plain: while metaphor is useful for explaining cellular
processes to laypersons in the most rudimentary way, it becomes much more problematic for
communicating those characteristics of the processes for which we have no experiential
reference. In addition to the one just given, there are two more examples to be culled from the
pedagogical sessions conducted for C&tC that illustrate this point. The first is the so-called
“motor protein,” defined in the Dictionary of Developmental Biology and Embryology as,
“proteins that carry cargo (e.g., mitochondria) along tracks (e.g., microtubules) within
cells” (Dictionary “motor proteins”). This very definition gives us our first clue regarding the
nature of the metaphors used most often to explain the function of these proteins: They “carry
cargo.”
Not only are motor proteins understood to transport “goods” along the “tracks” of
microtubules between organelles, the visual appearance of their structures, when determined and
visualized from x-ray crystallography data, triggers a near-automatic, anthropomorphizing
response. This situation is further heightened when one views the accurate-as-possible
animations, produced by Janet Iwasa and others, which illustrate the kinetic movement of the
motor protein as it transports its “cargo.” Two animations stand out as particularly evocative in
this regard. First, XVIVO’s “The Inner Life of the Cell,” produced for Harvard University,
attempts to visualize the cellular environment as accurately as possible, using what was at the
time the most state-of-the-art 3D animation. At approximately the 1:15 mark of the video (found,
at the time of writing, at https://www.youtube.com/watch?v=wJyUtbn0O5Y), we see the
distinctive “walk” of the motor protein along the “track” of the microtubule. Above its “head,”
we see the gigantic “sack” of cargo it pulls along. Its slow and plodding movement lends a
distinctly human interpretation to the story being told, which is difficult to avoid.
The anthropomorphic leanings of “The Inner Life of the Cell” are made more explicit in
Hoogenraad Research Lab’s video “A Day in the Life of a Motor Protein,” which, at the time of
writing, can be found at https://www.youtube.com/watch?v=tMKlPDBRJ1E&t=7s. In this
animation, we see that the motor protein has been given a single blinking eye, a smiling mouth, a
pair of jeans, and actual feet, which are clad in a colorful pair of shoes. We watch as the motor
protein exits through the door of a loading dock and sets off down the sidewalk with its
enormous sack of cargo in tow. It makes its way through the city, “walking” in such a way as to
mimic the plodding movements of the motor proteins in “Inner Life,” but exaggerating them to
cartoonish levels. After passing a number of flesh and blood humans on the sidewalk, it soon
joins a line of other anthropomorphized motor proteins as they approach a row of yet more
loading dock doors. The video ends as the army of motor proteins delivers its many sacks of
cargo. Throughout the work, this whimsical sequence is intercut with “serious” animations that
illustrate the more accurate (and decidedly non-anthropomorphic) scientific view of the process.
Of course, the deployment of these sorts of visual and conceptual metaphors for the purpose
of explaining scientific principles to a lay public is nothing new. Indeed, it is argued by Lakoff
and Johnson in Metaphors We Live By that metaphors penetrate our language “all the way
down,” so to speak, shaping the very means by which we process new information generally.
Lakoff and Johnson’s original outline of conceptual metaphor theory holds this to be as true for
the most abstract scientific concepts as it is for such obviously metaphorical utterances as “life is
a journey”:
So-called purely intellectual concepts, e.g., the concepts in a scientific theory, are often-perhaps always-
based on metaphors that have a physical and/or cultural basis. The high in "high-energy particles" is based
on MORE IS UP. The high in "high-level functions," as in physiological psychology, is based on
RATIONAL IS UP. The low in "low-level phonology" […] is based on MUNDANE REALITY IS DOWN
(as in "down to earth"). The intuitive appeal of a scientific theory has to do with how well its metaphors fit
one's experience. (Lakoff & Johnson 18-19).
While the example above illustrates how the adoption of metaphors might be influenced by our
spatial and embodied situations, Lakoff’s and Johnson’s theories build more generally upon what
the authors describe as the mapping of concepts between “source domains” and “target
domains.” In words that forefront their own metaphorical use of language, the authors state,
“Because concepts are metaphorically structured in a systematic way, e.g., THEORIES ARE
BUILDINGS, it is possible for us to use expressions (construct, foundation) from one domain
(BUILDINGS) to talk about corresponding concepts in the metaphorically defined domain
(THEORIES)” (Lakoff & Johnson 52). Here, we find a powerful means for explaining why the
metaphors that pervade the linguistic communication of cellular functions so often turn to the
source domains of “shipping,” “delivering,” “walking,” etc. Simply put, the processes
themselves bear structural resemblances to those with which we are most familiar in our daily
activities. Indeed, this view resonates deeply with Ian Hacking’s concepts of resemblance we
examined previously. Yet, conceptual metaphor theory makes a stronger and more troubling
assertion: the metaphors penetrate not only explanatory models, but the very language we
construct to produce those models. Therefore, what we understand as “reason in itself” or, by
extension, Science’s vectors of abstraction, are fundamentally rooted in embodied experience
and cannot escape (pace Latour and Woolgar) the network of metaphors that make their
existence possible.
Are science communicators therefore trapped, like the Venus and putto of Allegory of Sight,
within a mise en abyme of representation with no recourse to a method for accurately depicting
cellular activity? My argument is that we are not. Admittedly, Lakoff’s and Johnson’s conclusion
about the situated and embodied origins of metaphor would ostensibly seem to dovetail cleanly
with my arguments for a worldizing aesthetics of data. To my mind, however, the theory rings
too strongly of an untenable correlationism that simply replaces individual minds with
“conceptual metaphors.” In other words, it fails to account for how reality, as Rescher’s axiom
states, outruns experience, and how this outrunning structures scientific intervention. As
powerful as conceptual metaphor theory undoubtedly is for helping us understand the
metaphorical structure of everyday language, its original outline in Metaphors We Live By suffers
from a chronic case of overreach. A full accounting of the evidence for this would span multiple
volumes, but Steven Pinker has provided in The Stuff of Thought what is, for present purposes,
the best illustration of how the reach of the theory could be said to exceed its grasp. He begins by
noting,
[Lakoff] believes there is a nonmetaphorical, physical world, and he believes that a human nature,
embedded in our bodies and interacting with the world, provides universal experiences that ground many
metaphors in ways that are common to humanity. Yet he also believes that many of the metaphors that
ground our reason are specific to a culture, and even his universalism is a kind of species relativism: our
knowledge is nothing but a tool suited to the interests and bodies of Homo sapiens (Pinker 247).
Pinker goes on to trouble this picture by arguing that,
It won’t do to say that scientific metaphors are merely “useful,” unless one stifles all curiosity about why
some metaphors are useful and others are not. The obvious answer is that some metaphors can express
truths about the world. So even if language and thought use metaphors, that doesn’t imply that knowledge
and truth are obsolete (Pinker 247).
Pinker’s rejoinder points us to the fact that, while the phenomena science investigates are
often explained by means of the best available (usually technological) metaphors, some of these
encapsulate more of a particular phenomenon’s complexity than others. For example, to fully
understand the form of Descartes’s intervention into the mind/body problem one must also
understand the influence upon his thinking of the automata that were so popular in his time. For
Descartes, the body was best understood as a mechanical construction of flesh, bone, organs, and
sinew in place of cogs, springs, and wheels. Much later, and returning us to representations of
biological systems, we have Fritz Kahn’s illustrations of the body’s many processes interpreted
as factories, train stations, telegraphs, or other industrialized constructions and activities (fig.
14). While Kahn’s metaphors might strike us as quaint or exceedingly imprecise, it is also
reasonable to say that the brain’s processes, for example, are more like the signals of a telegraph
than they are like the clockwork of an automaton. Following this logic, today’s updated
computational metaphor for the brain’s function is more accurate still.
I would add to Pinker’s rejoinder, however, that achieving higher “accuracy” with respect to
our representations of Science’s more counterintuitive abstractions requires more than simply
identifying better metaphors; it requires a multiplicity of differential representations, embodied
activity, and an attitude of critical reflection that forefronts the incomplete and insufficient nature
of the representations themselves with respect to the domain knowledge they encapsulate.
Scientific models, diagrams, and the products of imaging devices nearly always elide some or
even most aspects of a phenomenon in order to isolate the most salient properties under
investigation. They are not merely isolated objects that carry more or less information but are
produced within a discursive network with multiple epistemic nodes. As the research for C&tC
progressed, McClary’s, White’s, and Singla’s pedagogical sessions became what is perhaps best
described as “information performances” in which the researchers improvised between nodes as
a way to reinforce concepts and to ensure that the undesirably distortional effects of any one
metaphor were minimized.

Fig. 14. Kahn, Fritz. The Seven Functions of the Nose. 1939. “Fritz Kahn: Human Body as an Industrialized World,” by Fosco Lucarelli, Socks, 2012, http://socks-studio.com/2012/08/24/fritz-kahn-human-body-as-an-industrialized-world/. Accessed 1 April 2020.

5.8 Shaping the Interlanguage: Dual Codings and Pidgin Terminology
Jitin Singla, who works in the field of bioinformatics, was particularly adept at the sort of
performative information exchange I have just described. Over a period of roughly four months,
Singla filled whiteboard after whiteboard with diagrams, drawings, lists, and flow-charts while
giving detailed verbal accounts, correcting his own metaphors where necessary, and fielding an
endless stream of questioning. These sessions continued throughout the multiple sprints of the
design phase and right up to the production of the final VR prototype. The primary purpose of
these sessions was to explain to designers the lower-level details of processes depicted within
specific regions of the diagram shown in fig. 15. Yet, as extensive and effective as these sessions
were for that particular purpose, there remained a number of misconceptions among a few of the
designers with respect to both the more counterintuitive principles and the epistemological status
of various representational conventions that have long served as the accepted means for depicting these processes.

Fig. 15. Diagram of the signaling pathways of the pancreatic beta cell.
I have already discussed at length the trouble the concept of agency presented for the team’s
process of mapping metaphors between domains. There were, however, other much more
pedestrian challenges of this sort, which emerged from misunderstandings about the
disciplinary context of certain terms and the use of these same words in everyday language. A list
of these terms includes such words as “transcription,” “binding,” “signal,” and “conformation.”
To one degree or another, all of these words presented opportunities for misinterpretations
akin to those regarding “agency.” For those artists who were brought onto the project much later
in its development, terms like “signal” and “transcription” were particularly problematic. The use
of both terms in everyday speech denotes intentional communication of some sort—a message to
be sent or copied whose content is known to the sender and read by the receiver. In the case of
the various “signaling” and “transcription” processes of the cell, there is no “awareness” of the
message in this sense. Moreover, there is no system of “signs” by which to “transcribe” semantic
content. What we find instead are physical processes and structural changes that cascade down
the line like dominos falling.
In her book The Ontogeny of Information, Susan Oyama discusses at length the manner in
which anthropomorphizing language creeps into the tacit understanding of laypersons and
scientists alike. For example, when we refer to DNA as a “blueprint,” as we so often do, we
sneak an incorporeal form of “information” into the hollow shell of the material body. “Just as
we place a man in the head to receive and interpret sensation and to issue commands to limbs,”
says Oyama, “[…] so do we place a plan in the man that assembles and controls
him” (Oyama 12). She continues,
In our very vocabulary, form and intent, pattern and control are fused. Consider “design” as pattern and as
intention, or “pattern” as noun and verb. It is just this step, short and habitual as it is, between descriptive
and prescriptive rule, between observed regularity and prior plan as guarantor of that regularity, that I think
we should resist. […] Because we are unaccustomed to thinking in other ways, it invites literal
interpretation, and thus the illusion of having given a causal account […] (Oyama 12-13).
For Oyama, DNA “information” only becomes meaningful when it is made so through
phenotypic expression and by the entire developmental system. An intervention similar to
Oyama’s with respect to information and DNA must also be made for many of the other terms
mentioned above as each arises, making the translation of these concepts into experiential
products extremely hard won.
When we speak of “signals” in the cell, for example, our imaginations immediately conjure
electromagnetic waves, telephones, or other technologies that send “information” as packets of
incorporeal signifiers. In reality, so-called “signaling pathways” result from the structural
matching of molecules, physical changes that are affected through binding, and the degree to
which signaling “agents” are concentrated in the cellular environment. In Life’s Ratchet, Peter
Hoffman explains simply that “[o]nce a receptor binds to a chemical target, it undergoes a
conformational change, which releases or binds a control molecule, setting off a cascade of
feedback loops inside the cell, leading to a ‘macroscopic’ response of the entire cell” (Hoffman
235). As our design team learned, the mostly automatic responses elicited by words like
“signaling,” which are the result of habituation to the everyday meanings of words, can distort
one’s thinking with regard to what is known scientifically but, perhaps more importantly, can
actively forestall one’s moving in the directions of what are potentially more interesting designs.
Thanks in no small part to the frequent pedagogical sessions conducted by Singla and his
colleagues, the language of the cell was increasingly revealed to our designers as one descriptive
of form, structure, and chance. Nearly all of the emergent processes the design team needed to
address had these three principles as their conceptual foundations.
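A toy simulation can make the “dominos” picture concrete. The following sketch is my own illustration, not project code: it models a cascade as nothing more than chance encounters and conformational state changes, with no “message” existing anywhere in the system.

import random

# Illustrative sketch only: a signaling "cascade" as chance collisions and
# conformational flips. Each stage activates only if a random encounter
# with an already-active upstream molecule happens to occur.

STAGES = 5          # receptor -> ... -> "macroscopic" cellular response
ENCOUNTER_P = 0.3   # per-step chance that two molecules happen to meet

def step(active):
    for i in range(1, STAGES):
        if active[i - 1] and not active[i] and random.random() < ENCOUNTER_P:
            active[i] = True
    return active

active = [True] + [False] * (STAGES - 1)  # a ligand has bound the receptor
steps = 0
while not active[-1]:
    active = step(active)
    steps += 1
print(f"cascade completed after {steps} random-encounter steps")

Raising ENCOUNTER_P, the stand-in for concentration, shortens the cascade on average; nothing in the loop ever reads or writes a sign.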
As the design language for the work became increasingly modular and geometric, the creole
developed between the scientists and designers became proportionally idiosyncratic. Early in the
design process, Alex McDowell insisted that the team move away from everyday use of the
scientific terminology used to describe the various molecular structures and systems. Processes
like the intake of glucose through the GLUT transporter and the binding of GLP1 to the GLP1R
receptor were relabeled “neighborhoods,” and each were given proper names. GLP1 signaling,
for example, became known as “Catalysia.” The opening and closing of ion channels became
known as “Polaris”—a name that reflects the role of this process in “depolarizing” the
membranes of organelles and the cell itself. The smaller proteins, which were understood as the
“characters” populating the world, were given names like “campers” (cAMP) and “Gab Trio” (G
Protein Complex) and were even provided with psychological profiles. It is important to note
that none of these monikers were intended to stick; it was assumed that more appropriate names
would be chosen as the narrative developed. However, as is perhaps the nature of language
construction, once the team fell into the habit of using the agreed-upon labels, it was extremely
difficult to make changes. The first hybrid team of researchers had identified the main
components of the creole for their interlanguage trading zone and these became entrenched as the
project developed. Importantly, this set of linguistic conventions squares best with what Galison
terms a “pidgin” (Galison 48) due to its extremely small vocabulary and its virtual indecipherability
for practitioners unfamiliar with the project.
If C&tC is any indication, spoken language can only go so far as a communication tool in
interdisciplinary art-science contexts. Singla in particular made as much or more performative
use of diagrams and models as he did the team’s pidgin terminology for the purposes of nuancing
the principles at hand. One of my primary tasks as design research lead was to transcribe Singla’s
visual explanations into simplified diagrammatic tools that could inform the animations
produced by artists and developers. Figures 16 and 17 show, respectively, one of the many
whiteboards produced by Singla during his “pedagogical performances” and the synthesis of the board’s contents into an internally-facing communication tool.

Fig. 16. Whiteboard from one of Jitin Singla’s Cell and the City pedagogical sessions.

Viewed as artifacts, both appeal to
a form of diagrammatic reasoning that bears much in common with the representational schemas
we have observed at length in the present work. Chief among these similarities is that, as
effective as such tools might be for organizing and communicating concepts, much is elided in
their abstractions. Therefore, to use or understand their full potential requires emphasizing their
artifice as well as their original, multimodal contexts.
One way to understand the role of the diagrammatic in multimodal contexts is through the
lens of Gross’s and Harmon’s “enhancement” of Dual Coding Theory (EDCT), which they
outline in their examination of visual rhetoric in Science: From Sight to Insight. One of the
fundamental premises of this theory is that there can be no scientific argument the components of which are exclusively visual (Gross & Harmon 31).

Fig. 17. Diagram produced from the whiteboard shown in figure 16.

Importantly, the key to this position lies in
the careful distinctions between scientific activities like problem-solving, evidence-gathering, or
model-making, and the propositional character of arguments. It is the latter, these authors claim,
that can have no exclusively visual character. Of course, we have been speaking not of argument
per se, but of visual explanations. Insofar as these are concerned, DCT holds that “[i]n texts that
combine the verbal with the visual, meaning is the consequence of the interaction of the verbal
and the visual” (Gross & Harmon 31).
For Gross and Harmon, there are four key features of EDCT that perform the above
interaction. First, the structures and components of visual aids must be perceived as Gestalt
patterns, which is to say they are organized first by pre-conscious processes. Second, these
structures and components are identified and explored through various scanning and matching
regimes, which the authors hold to be perceptual processes but processes that involve
consciously directed attention. Third, the structures and components of visualizations are
interpreted by means of Peirceian semiotics (icon, symbol, index)—a strictly cognitive process.
Finally, the interpretations are integrated into semiotic wholes by means of argumentative and
narrative structures—also a strictly cognitive process (Gross & Harmon 38). What is notable about these four features of EDCT is that they bear a striking similarity to the account given in
previous chapters of the manner in which the fruits of embodied perception become rationalized
into systems of signification. Here, however, this process is rarefied such that the designer
exercises a great deal more reflection and control over which Gestalts to admit into the schema,
and which to leave out. Of particular interest for present purposes is EDCT’s understanding of
the role played by this selection process in the formation of narrative structure.
The most glaring example of what can be lost or elided in the process of selecting and
funneling Gestalts out of complex processes and into the diagrammatic is found in the deceptive
hint of linearity that is present in the diagrams created from Singla’s whiteboard sessions. One
will notice (see fig. 17) that the visual relationship between the “lock” and “key” match between
components provides no indication that the actual relationship to which it refers is anything other
than “one to one.” In other words, in this particular version of the cell signaling narrative, there is
a tacit suggestion of “one key per lock,” which is very far from the reality. As previously stated,
the cell is an environment marked by chaos, noise, and above all, chance. The probability of a
particular “key” finding a particular “lock” increases as the density of multiple copies of that
“key” within the general environment increases. This particular issue of representation laid
dormant and unaddressed within both the design diagrams and their conceptual metaphors until
very early into the project’s second phase.
As Gross and Harmon suggest, objects of scientific study undergo “alterations of semiotic
valence” as they progress through different stages of argumentation or narrative redesign (Gross
& Harmon 125). Often, the objects culled from data begin life squarely within the category of the
indexical (which is to say they have the ontological status and epistemological foundation of the
physical trace). They may subsequently be instantiated as icons and symbols as they are placed
within narrative or explanatory frameworks, moving through iconographic and symbolic
registers and potentially back to the indexical. As we move into discussion of World in a Cell,
this trajectory will become more important as it bears directly upon the issue of whether “Big
Data” require “Big Theory,” or whether they obsolesce theory almost entirely. But, it is worth
noting here that if the pedagogical sessions conducted by Singla, McClary, and White reveal
anything regarding the communication of cellular processes as determined through instrumental
and experimental activity, it is a confirmation of EDCT’s general assumption that images alone
cannot encapsulate the entire explanatory narrative. They must be understood within the larger
discursive contexts of theory which produce a finer net beneath them in order to catch important
nuances that slip through their abstractions.

5.9 Cell & the City: Preliminary Findings and Lessons Learned
The Cell & the City VR prototype, a trailer for which can be viewed, at the time of writing, at
https://vimeo.com/268047125, succeeds in a number of ways, both in its testing of the potential
uses for virtual reality in the field of scientific communication and in its proving of the design
laboratory as a productive art-science trading zone. One of the more promising outcomes was the
“disembodied head” approach to navigation. Due to the problem of agency, the question of the
user’s narrative role, and the lack of explanatory voiceover, users of Cell & the City inhabit a
“ghost’s eye view” and are allowed a radical freedom of movement within the Cell. This follows
in lockstep with our previous discussion of the “search warrant” ethos of Sleep No More and is
achieved through two technical implementations. First, direction of movement is linked to the
camera view; wherever the user looks (i.e., where their head is turned) is the direction in which
they will move. Second, forward and backward movement is linked to the triggers of the right
and left HTC Vive controllers. If the user is moving and the triggers are released, they will ease
to a stop.
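For illustration, the two rules reduce to a single per-frame velocity update. The sketch below is engine-agnostic Python of my own; the function and parameter names are placeholders rather than the project’s actual implementation, which was built against the HTC Vive runtime.

# Sketch of the locomotion rules described above (placeholder names, not
# project code). The gaze vector and trigger values would come from the
# VR runtime each frame.

def update_velocity(velocity, gaze_dir, left_trigger, right_trigger,
                    accel=2.0, damping=3.0, dt=1 / 90):
    """One 90 Hz frame: gaze_dir is the head's forward unit vector; the
    right trigger (0.0-1.0) drives forward along it, the left drives
    backward; with both released, velocity decays ("eases to a stop")."""
    drive = right_trigger - left_trigger
    if drive != 0.0:
        velocity = [v + d * drive * accel * dt
                    for v, d in zip(velocity, gaze_dir)]
    else:
        decay = max(0.0, 1.0 - damping * dt)   # exponential ease-out
        velocity = [v * decay for v in velocity]
    return velocity

# Example frame: looking along +z with the right trigger half-pressed.
v = update_velocity([0.0, 0.0, 0.0], [0.0, 0.0, 1.0], 0.0, 0.5)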
When idling, the user drifts slightly as though suspended in a liquid medium. This choice was
made as a way to underscore the material nature of the cytoplasm without modeling its primary
molecular substance: water. Much thought went into the question of how to represent water and,
indeed, the overall density of the environment generally since the interior of the cell is not
“spatial” in the same way that it is perspectivally rendered in virtual reality. “Molecules in a living body are subject to violent thermal motion,” says Peter Hoffman, “at the elevated temperatures of a living body, atoms rattle, shake, and bump into each other at high speeds” (Hoffman, Life’s Ratchet 64). The inside of a cell is like a bean bag in which every bean is tightly packed against its neighbors and imbued with kinetic motion by the forces Hoffman describes. Any immersive
experience one might create that visually accounts for this feature, however, would run directly
counter to both user comfort and to task-oriented goals. The idling mechanic implemented goes
far in suggesting the noisy quality of the cell without needing to represent it visually. The sound
for C&tC was also designed with these factors in mind; however, I will leave a more thorough
account of my own work with the sound design for later discussion as C&tC’s narrative
constraints shaped the direction of the sound design in ways that would take us too far from
present goals.
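One standard way to produce such drift, sketched here under my own assumptions rather than as the project’s actual code, is a damped random walk tethered to an anchor point: random agitation keeps the idle user gently adrift, while a weak spring and viscous damping prevent the drift from accumulating into travel.

import random

# Sketch of an idle-drift mechanic (assumed technique, not project code):
# thermal-like jitter, a weak spring back toward the anchor, and viscous
# damping, evaluated once per 90 Hz frame.

def idle_drift(position, anchor, velocity, jitter=0.02, spring=0.5,
               damping=1.5, dt=1 / 90):
    new_pos, new_vel = [], []
    for p, a, v in zip(position, anchor, velocity):
        v += random.uniform(-jitter, jitter)   # agitation
        v += (a - p) * spring * dt             # pull back toward anchor
        v *= max(0.0, 1.0 - damping * dt)      # damping, as if in liquid
        new_vel.append(v)
        new_pos.append(p + v * dt)
    return new_pos, new_vel

pos, vel = [0.0, 1.6, 0.0], [0.0, 0.0, 0.0]
for _ in range(900):                           # ten seconds at 90 Hz
    pos, vel = idle_drift(pos, [0.0, 1.6, 0.0], vel)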
While the virtual reality prototype for Cell & the City succeeds in the ways I have outlined
and suggests a number of evocative directions for art-science collaborations, its somewhat
contradictory brief produced conceptual ambiguities that obscured much of the underlying
science. The attempts of our team to produce a world and an overarching narrative from the
complexity described by high-level biological concepts met with substantial difficulties when put
in dialogue with the equally high-level concepts of city planning and architecture. This is not to
say that such a remapping of principles between the two domains is impossible; it is only to say
that, for the WbML team, such a brief called for out-of-the-box, blue sky thinking that arguably
pushed the entire project into a realm considered too alien for the goals of both stakeholders.
The primary issue raised by Dr. Stevens and Dr. Helen Berman, the latter of whom would
become the project’s scientific advisor during the second round, was the radical divergence of the
visual design from what is known, via instrumental imaging and PDB data, about the various
structures of the beta cell’s constituents. As might be expected, this divergence was due in no
small part to the team’s attempts at building “architecture” that met that side of the brief as given
by Havas. The trouble, however, is that the structures of the molecular machines in the cell have
meaning for scientists that is deeply rooted in functionality. In short, what was communicated
among the team through the interlanguage had not been sufficiently reflected in the language of
the design. It was found that without some degree of veracity with respect to the structures in
question, the forms themselves are mostly unrecognizable to scientists.

5.10 World in a Cell: From Interlanguage to Form Language
For World in a Cell, the collaboration saw the influx of a number of new designers and
researchers who, under the guidance of Dr. Helen Berman, co-founder of the Protein Data Bank,
turned more directly to scientific principles and the PDB data as their lodestar for the visual
design and behavior of the VR and its animations. In a number of ways, the direction taken by
this second team is reflective of that aspect of the project I have tentatively characterized as an
“enforced” trading zone—one in which the designers and their decisions were steered according
to very specific “top down” paradigms. Thus, one would be warranted in questioning the extent
to which WiaC continues to qualify as a truly hybrid art-science endeavor. As I have already
noted, WiaC, and to some degree, C&tC, exhibit the features of a more traditional designer/client
relationship. The move toward veracity with respect to better representing those structures in the
data that are already well-established would seem to strengthen this case. In what way then does
WiaC stand out from the usual design-for-hire scenario other than by virtue of its scale and
chosen medium?
There are subtleties that follow from the question above that deserve unpacking. First, our
earlier textual analyses of distortional practices and data-driven art drew upon examples in the
“Fine Arts”—a domain which, as we have seen, is today governed by the slogan, “do anything.”
The mercurial quality this slogan imputes to Arts methodologies makes hybridity a much more
complicated issue (easier to sell in some respects, harder in others), even while it speaks to an
iconography that is extremely easy to destabilize. In the case of design, whose methodologies
tend to be much more rationalized, we are required to adjust our thinking slightly. Without
getting into the weeds with a rehearsal of the many differences between Art and Design as
disciplinary fields, I will begin with the obvious: that the identification of design constraints is
widely accepted as a crucial, preliminary step for most design methodologies.
For designers, constraints are presented as a topology of problems to solve or to otherwise
work around in order to achieve desired outcomes, and (as the truism holds) are the very engine
for creative action rather than a hindrance to it. This observation is not enough, in itself, to
make a convincing case for hybridity, but it does identify one of the mechanisms that drive it. In
its anchoring of systematic approaches to problem-solving in the “hard points” presented by
constraints, design more closely mirrors scientific disciplines (e.g. engineering) than it does
its own neighboring disciplines. The fact that design is so often oriented toward function
serves to bolster this case. But, despite such similarities between design and science, we have
still not quite identified a compelling case for hybridity. To that end, it is perhaps more
instructive to turn to how each order differs from the other with respect to their primary
motivations.
As noted, the main goal for the team of scientists involved in WiaC is the creation of new and
better representations of the data, which might act to synthesize knowledge from adjacent
domains. For the designers, the main goal is not simply to meet the brief as delineated by the
stakeholders, but to do so by creating a compelling and evocative experience of a world. To
achieve this, however, requires a willingness to subordinate the linear, analytic style of
representational address to the complex interactions and surface features that make up a world.
In short, for the WbML, turning to the data for constraints implicitly means worldizing the data;
it means submitting the geometric abstraction as well as its accompanying background theory,
however counterintuitive, to the logic of the embodied encounter. Thus, a dialectic is created
between the analytic goals of scientists and the experiential goals of designers out of which a
synthesis emerges, both in the final product and in the culture developed to achieve it. This, I
would argue, is what points us toward a tenable notion of hybridity. To my mind, it was the
insistence on the part of the WbML’s scientist/collaborators that the project move away from
more untethered modal speculations and toward the anchor points of the data that resulted in the
most evocative features of the final product and in a more productive blurring of disciplinary
boundaries. From this perspective, the interleaving of penumbral contours is better achieved by
the practitioners of each order pushing against the constraints that emerge from the certainties of
their interlocutors’ disciplines.
Nevertheless, it is clear that the asymmetry that characterizes the enforced trading zone
remains present in the collaboration. Yet, equipped with a dialectical understanding of hybridity
as outlined above, we are better placed to rethink this asymmetry apart from its connotations of
“power imbalances.” To work through this productively, it might be helpful to return to Levi
Bryant’s concept of “gravity,” mentioned briefly in the introduction of this work. For Bryant,
gravity as metaphor helps us to better think through the asymmetry in the relations produced
within networks, assemblages, etc. without always placing the human at the center of the
construction, as is done in many explanatory appeals to “power.” Bryant makes a careful
distinction in his ontology between what he calls the “plane of expression,” which includes
human communication, and the “plane of content,” on which we find a human-independent,
physical world. When we address the plane of content through the lens of power, argues Bryant,
we over-code it with human concerns. “Gravity” is Bryant’s way of accounting for the
“enforcement” expressed through certain relations in a manner that affirms non-human agency.
In Onto-Cartography, he argues that,
Not only does power tend to be deployed in an occult fashion to explain social phenomena, but it also
suffers from the drawback of being overly anthropocentric in its connotations. The concept of power draws
our attention to how narratives, signs, discourses, language, and human institutions are formative of social
relations. It’s not that this is wrong, but that […] these explanations […] focus too exclusively on the plane
of expression to the detriment of the plane of content […] The rhetorical advantage of the concept of gravity
over terms like power and force is that it gives an all-purpose term capable of straddling both humans and
non-humans, the social and the natural, so that we avoid fixating on the cultural (Bryant 188).
So, on the one hand, the direction taken for World in a Cell could be described as a lateral
step from the interlanguage trading zone into the enforced trading zone and, therefore, into an
asymmetrical power dynamic that troubles the project’s radically hybrid character. On the other
hand, such an understanding risks placing too much emphasis on the social and institutional
factors in play. While these factors are important—a fact that I have underscored in my account
of the problems faced by this team as we developed our particular interlanguage—appeals to
“power” or social relations alone do little to account for how such problems are negotiated in
relation to the gravity of boundary objects. The constructionist rejoinder here might assert that
the language itself or, more specifically, the social activity present “all the way down,” produces
the gravity of the phenomena under study. Yet, despite the fact that this position would represent
a classic case of ontological slip, it is made further untenable when faced with the problem of
accounting for how such activity affords direct intervention into the structures of the material
objects of study, such as those we find in the creation of drugs that target specific peptide
receptors.
As WiaC transitioned into its second phase, the instrumental data produced from protein
structures became the primary boundary objects around which the lab’s activities were organized.
Clearly, social and institutional factors play an important role in shaping the discourse around
these objects, but, following Bryant’s metaphor, we acquire a better picture of how the social/
epistemological thrust of these factors is reshaped by the gravity exerted by data, instruments,
representations, and phenomena at the center of collaboration. When viewed exclusively through
the lens of social negotiation, representations of proteins such as the ubiquitous “ribbon diagram”
or the related “vine diagram” are reduced to pawns in a larger sociological game—a product of
the scientific dispositive of vision that is constructed discursively and which bears no relation to
“things in themselves.” Contrary to this view, the topography of WiaC’s trading zone is, I would
argue, primarily commensurable with a curiosity on all sides regarding the physical status of the
phenomena in question. This curiosity is guided, at base, by the physical activities and processes
with which we, as Hacking puts it, intervene in order to inquire.
Similar to what we observed in the case of metaphors, while the representations that follow
from instrumental interventions provide differentially better or worse “handholds” on those
patterns which inhere in the data, these handholds are, in principle and by necessity, always
exemplary of a situated and partial view; they are radically dependent upon the perspective one
chooses to occupy within the overarching field of complexity. By intervening into the
representational logic itself, whether through distortional practice or from first principles, artists
and scientists like those working together in WiaC’s trading zone identify a boundary object that
subtends the trajectories of each domain in a concrete way. The negotiations around the boundary
object center upon its relative affordances for new perspectives and, thus, new interventions and
philosophical points of view.
In many respects the conceptual coupling of situated perspective with practical intervention
dovetails cleanly with Karen Barad’s notion of the “agential cut,” which she argues in Meeting
the Universe Halfway is produced through the “intra-actions” of material agencies (Barad 140).
Agential cuts are, for Barad, an account of the production of differences within the entangled
relations of phenomena; they are the means by which “agents” separate from one another,
generating their own autopoietic boundaries. Importantly, both for Barad and for the present
study, the concept is a means for placing the devices of representation (e.g. scientific
instruments) back into the picture of representation itself. This is crucial, in Barad’s view, for
understanding how knowledge can be produced from the regular effects of, for example, x-rays
upon material bodies or from the data recorded through spectrometry without our appealing to
relativism on the one side, or GEVs and a disembodied rationality on the other. Simply put,
Barad views the agent performing the intervention (whether instrument or human or both) as
radically “entangled” with the phenomenon under study, a fact that, for her, collapses the implicit
dualism often found in appeals to mechanical objectivity. “Crucially,” she says “[…] the
apparatus is both causally significant (providing the conditions for enacting a local causal
structure) and the condition for the possibility of the objective description of material
phenomena, pointing toward an important reconciliation of the Cartesian separation of
intelligibility and materiality, and all that follows” (Barad 175).
Barad’s account of material agency and its relation to representation has much to offer the
present study, even if, from my own perspective, it tends to over-code its objects as “relational all
the way down,” thus risking a tumble into infinite regress. Just as troubling, it often runs afoul of
Searle’s ontological slip. Indeed, Barad herself seems to proleptically address this issue by
characterizing the metaphysical framework of agential realism as “ontoepistemological” (Barad
44). My discomfort with both of these issues, however, is one that is born more of a skepticism
with respect to ontologies in general than with agential cuts specifically; which is to say, they
are objections based on philosophical minutiae rather than practicalities and so I will not hammer
the point any further. What is important for present purposes is that the material agency that
concerns Barad as well as myself is one that forms the anchor for a realist and, indeed,
physicalist approach to representation. This is particularly important for any account concerned
with representations of data that have been severed from their original material context and
subsequently archived.
My goal in placing so much emphasis on physicalist interpretations of data and materialist
realisms generally, as well as in making careful distinctions between power and gravity, is that
many of the issues they represent emerged time and again in the design sessions for C&tC. On
more than one occasion during the first phase of the collaboration an artist or designer would
float the relativist suggestion that all representations of the cell are equally valid or, more
strongly, that scientists “don’t really know” if their representations are accurate. This stronger
suggestion, it should go without saying, was met with a great deal of resistance from the
scientists. The former assertion, however, prompted some debate about the definition of
“arbitrary” with respect to how one chooses particular representational schema. Following
Bachelard, it is clear that our overfamiliarity with a conventional representation might foreclose
prematurely upon what are potentially more insightful or appropriate representational paradigms
or otherwise dull a scientific theory’s “sharp point of abstraction.” Indeed, this was one of Dr.
Stevens’s primary motivations for turning to the WbML in the first place. Nevertheless, if we
understand “arbitrary” to mean that one representation is “as good as” another (for example, just
as the word “squeegee” might arbitrarily stand in for the word “tailpipe”) or if we believe that
the choice of one representation over its alternatives is a matter of socio-historical contingencies,
not only does the primary motivation for the brief evaporate, we miss an important feature of the
ontological status of scientific representation itself.
The alternative view I am suggesting is one that, like Barad’s and Hacking’s respective
realisms, underscores the chain of physical causalities that undergird instrumental interventions.
Importantly, it is also one that positions Ihde’s “hermeneutic interface” as another link in this
chain of physical processes. Only by investigating the mechanisms for how we “see through” the
scientific representation to its physical referents, I would argue, can artists bring the distortional
attitude to bear upon the flexibilities of these representations in a productive way. As WiaC
moved into its second phase, such focus on data and its relation to physical referents presented
the core methodology for identifying what would become the project’s primary design language:
Origami.

5.11 World in a Cell: The Limitations of Representation
Before examining the particular breakthrough Origami represented for the WiaC team, let us
first take a sighting of what is at stake for structural biologists specifically in rethinking familiar
modes of scientific representation and providing alternatives. The particular representations the
WiaC team was tasked with reconfiguring were those most familiar to structural and molecular
biologists: the ribbon, vine, “ball-and-stick,” and space-fill diagrams. These models are
exceedingly effective for communicating the inherent structure of particular proteins (e.g. the
relationship between the carbon “backbone” and the amino acid side chains). In many respects,
the comfort scientists evince toward these representations is born of a combination of task-
specific effectiveness and sheer familiarity, and it presents us with a seemingly intractable
problem. Simply put, these diagrams “just work” in the context of the most familiar analytic
tasks; thus, for many, the problem might appear to be no problem at all.
Enhanced Dual Coding Theory helps us to get a handle on how and why the diagrams are so
effective. Representations like those depicted in fig. 18, for example, present excellent visual
anchors for a scanning and matching regime that simplifies the task of identifying particular
structures, assuming that the “reader” is equipped with the requisite background knowledge. The
representations exhibit enough complexity to communicate important details, yet remain abstract
enough to prevent unnecessary information from becoming noise. Yet, as is implied by the
qualifier “unnecessary,” the decision to amplify a particular class of signals over others is a key
feature of the agential cut made by the designer. We have seen time and again that noise itself has
no strict ontological identity. As abstractions, the affordances of each diagram are therefore
limited in different ways. According to research conducted by Harle & Towns and published in
the journal Biochemistry and Molecular Biology Education, a majority of biochemistry students
surveyed noted four general shortcomings of ribbon, space-fill, and vine diagrams: 1) Lack of
interaction; 2) a distracting amount of detail (in the vine diagrams specifically); 3) obscuring of secondary structure (again with respect to vine diagrams); and 4) the obscuring of task-specific details (e.g. the presence of the ion channel in the diagram used) in the case of the space-fill representation (Harle and Towns 355).

Fig. 18. “Ball and stick,” “spacefilling,” and “ribbon” diagrams. PDB-101, https://pdb101.rcsb.org/learn/guide-to-understanding-pdb-data/molecular-graphics-programs. Accessed 1 April 2020.
Interestingly, Harle and Towns conclude from their observations that “correct” interpretation
of any of the diagrams is highly dependent upon the extent of a student’s knowledge of the
background theories in which these models acquire meaning. In their conclusion they state,
“Analysis of the data in our study demonstrated that if the students do not possess the required
prior knowledge then they are stymied in their interpretations of the representations. Further they
retrieve their prior knowledge and attempt to use it even if the representations do not contain that
information” (Harle and Towns 356). The onus is therefore placed by these authors upon
professors and the larger pedagogical context for ensuring that the required background theories
are communicated prior to interpretation of the models. If this picture is accurate, it strengthens
arguments of “theory-ladenness” in scientific representations, but it also suggests that the
representational schemas we deploy for interfacing with data problematize in a very direct and
material way the prediction that data analytics will obsolesce theory entirely.
The notion that Big Data is making theory obsolete has already been taken to task by Rob
Kitchin, who observes that, “[…] rather than testing a theory by analysing relevant data, new
data analytics seek to gain insights ‘born from the data’” (Kitchin 2). He continues, noting that it
is not clear such independence from theory is truly achievable since, “systems are designed to
capture certain kinds of data and the analytics and algorithms used are based on scientific
reasoning and have been refined through scientific testing. As such, an inductive strategy of
identifying patterns within data does not occur in a scientific vacuum and is discursively framed
by previous findings, theories, and training; by speculation that is grounded in experience and
knowledge” (Kitchin 5). It is interesting to note that while Harle’s and Towns’s findings resonate
deeply with the core of Kitchin’s argument, the authors do not once consider how the
representations themselves might be reconfigured in the light of the theoretical/phenomenal
considerations that are obscured or omitted by current formats.
Ultimately, abstractions separate a chosen set of phenomena from a background of undesired
complexity in which further insights might inhere. Today, this notion of complexity applies as
much to collected data and the methods for mining it as it does to physical phenomena and the
instruments used to measure them. Of course, individual models, diagrams, or visualizations are
never expected to communicate the totality of what is known or knowable of a phenomenon.
They are limited both practically by the temporal/perceptual constraints of “filtering,” as we
observed in discussion of Moles and signal processing, and in principle, as Nicholas Rescher’s
fallibilist ontology helps to illustrate. This is, however, precisely why the fragmentation of
perspectives is so important. As the methodologies behind the creation and deployment of
cognitive tools become well-understood and mostly accepted across the particular disciplines in
which they are employed, they become grandfathered in and run the risk of calcifying the
imaginations of the researchers wielding them. Such is the situation Dr. Stevens and the WiaC
team seek to rectify with respect to ribbon, vine, and space-fill diagrams.
5.12 World in a Cell: Origami, Scale, and the Base Unit
The interlanguage of WiaC’s trading zone became expressed most productively through the
design’s form language once the team identified Origami as the metaphor with the most
functional and aesthetic potential. Origami had been raised as a guiding principle early on in the
design discussions for C&tC and had influenced the visual design of the models, but had been
implemented as an exclusively stylistic device. There were specific theoretical considerations
that informed this decision—e.g. protein folding as one of the primary mechanisms for self-
assembly in the cell—but the expression of folding logic in the design language was mostly
absorbed into the visual vernacular of buildings, robots, and other macro-level structures. There
were few indicators in the final iteration of C&tC’s VR experience that the structures one
encounters therein are “constructed” according to a logic of self-assembly and folding. This
obscuring of functional logic was exacerbated by the fact that low-poly models are extremely
common in VR due to their reduction of computational overhead. To my own eyes, as well as
those of VR-savvy user-testers, it was difficult to discern whether the “Origami-like” qualities of
the objects in the world were merely residual effects of technical compromises.
The situation above was rectified in the second phase. In fact, every model from the previous
iteration was scrapped and a team of researchers, modelers, and 3D animators began fresh by
turning directly to the models made publicly available on the PDB website. This process
involved identifying the PDB identification number (typically by consulting with Dr. Berman or
one of the other scientists) for each cellular constituent, downloading the corresponding models,
and using these as templates for each constituent’s origami “shell.” On its face, this would seem
little more than an aesthetic make-over for the models in question, but there was, in fact, a
methodology employed that speaks directly to both theoretical first principles and the project’s
need for a functional metaphor.
Origami provided not only a means for capturing the construction logic of protein folding; it
also provided a base unit with which to incorporate a logic for scaling. The basic building block
chosen for this purpose is the tetrahedron, a form derived from the basic structure of the amino
acid backbone. The backbone consists of a carbon atom, known as the alpha carbon, with four
distinct groups of atoms attached: hydrogen, carboxylate, amino (nitrogen and two hydrogens),
and the R group (or side chain). The ability for amino acids to self-assemble into peptides,
polypeptides, all the way up to organelles and even the cell membrane, comes down to the
chaining capacity of this basic structure, making it a perfect template for WiaC’s base unit. Fig.
19 shows a general schematic of the scale relations between the base unit and the larger
structures built from its various combinations as interpreted by the design through the Origami
metaphor.

Fig. 19. Base unit and scale relations between the “constituents” for World in a Cell.

Equipped with this building logic, the team was able to communicate both the emergent, self-assembling nature of the cell and a unified, cohesive aesthetic that aligns with origami’s sculptural logic.
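The chaining capacity described above is easy to make concrete. The sketch below is my own geometric reconstruction, not the team’s production tooling: it grows a strand of face-sharing regular tetrahedra by mirroring the oldest vertex through the centroid of the newest face, the construction that yields the so-called tetrahelix.

# Geometric sketch (my reconstruction, not project tooling): chain regular
# tetrahedra face-to-face by reflecting the oldest vertex through the
# centroid of the most recent face.

def centroid(points):
    return [sum(axis) / len(points) for axis in zip(*points)]

def chain(seed, n):
    """Grow n face-sharing tetrahedra from one seed unit."""
    verts = list(seed)
    for _ in range(n - 1):
        face = verts[-3:]                       # most recent face
        oldest = verts[-4]                      # vertex to mirror
        g = centroid(face)
        verts.append([2 * gc - oc for gc, oc in zip(g, oldest)])
    return [verts[i:i + 4] for i in range(n)]   # consecutive quadruples

seed = [[0.0, 0.0, 0.8165],      # apex of a unit-edge regular tetrahedron
        [0.5774, 0.0, 0.0],      # base triangle
        [-0.2887, 0.5, 0.0],
        [-0.2887, -0.5, 0.0]]
strand = chain(seed, 4)          # four units sharing successive faces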
Perhaps more importantly, the tetrahedral assembly rules create a structural vernacular for
communicating the vast ranges of scale within the VR experience itself. As observed earlier, one
of the primary shortcomings of the first VR prototype was that differences in scale did not quite
“read” and therefore had little impact on the user. This was mostly due to the ambiguity with
respect to the user’s role in the world and the absence of any identifiable features to provide
clues pertaining to the scale of any one object. Admittedly, much of this remains true for WiaC’s
VR prototype. The user has no defined role in the narrative and nothing in the world is
identifiable in the usual sense. However, as we observed earlier with Schaeffer’s articulation of
“internal proportions,” the various constituents of the VR experience operate together as a formal
whole, creating scale differentials amongst themselves that provide spatial cues for perceptual
distinctions to be made. As the viewer floats above the massive Golgi apparatus, for example,
objects like mitochondria in the mid-distance, or the tiny glucose in the extreme foreground
accentuate the scale without the need for recognizable objects or systems for measurement (fig.
20).
Overall, the WiaC VR experience succeeds in pulling together a large amount of data and
narrativizing what is known regarding the interactions between the objects of the data. Yet, while
the prototype effectively communicates relative scale and is accurate to the structural
relationships represented in the data, there are two principles that remain elusive to the project’s
design. The first is the scale of the cell itself. Again, when we describe spatial regions only a few
Ångströms in size, we are giving descriptions that mean little to our embodied sensibilities. Most
of us are simply incapable of fathoming what the world at such scales must be like. Thus, experientially communicating such inconceivably small regions of space may require falling back upon the vector of abstraction and embedding these
experiences within a larger educational context or within more didactic modes of information
display. The second principle that has gone mostly unaddressed in the VR experience is that of
the Brownian motion driving the frenetic molecular noise characterizing the cell’s ambient
environment. I will turn to this more directly in the next section, but to round out this account of
WiaC’s form language, I would like to address how the logic of the construction rules shapes our
understanding of space within the VR experience, leading us away from exactly those noisy and
chaotic qualities described in scientific theory.
Fig. 20. Still from the World in a Cell VR Experience.
During our initial round of interviews, the team spoke with Dr. Scott Fraser, director of
Science Initiatives at the University of Southern California and provost professor in the Dornsife
College of Letters, Arts, and Sciences. Dr. Fraser’s expertise in imaging techniques and
molecular analysis helped to shape the team’s understanding of the instrumental procedures that
are employed for capturing data from molecular structure and the concomitant problems with
these procedures. What emerged from that discussion was a general agreement on Dr. Fraser’s
part with respect to the necessity for better metaphors and representations of the cell and its
constituents. But, more importantly, Dr. Fraser helped to outline the problem faced by imaging
specialists in a concrete way. During the interview, he compared attempts at understanding
complex interactions through available imaging techniques to hypothetically attempting to
understand football by killing every player where they stand during a game: “[I]f you kill the
whole team and analyze what they're doing at that instant you wouldn't understand football. If
you isolated […] one player, you wouldn't understand football. If you dropped them into a
blender you wouldn't understand football. You know, taking a bad TV movie of football, you
might still not understand it but you'd get closer.”
In other words, imaging techniques like x-ray crystallography are still bound by material constraints in their address of kinetic motion, similar to those with which Étienne-Jules Marey grappled in the late nineteenth century. These instruments “freeze” their objects of
study and lift them out of their complex relations. Indeed, the crystallization of proteins that
facilitates this form of instrumental analysis involves locking the various instances of a protein
into a regimented lattice such that their kinetic potential is lost entirely. For structural biologists
like those with stakes in WiaC, this process is a boon as it isolates precisely those spatial
coordinates one needs in order to understand the morphology of the protein under study. But, if
one wishes to understand the noisy and complex entanglements out of which order arises, these
techniques are severely limited. Much of what is known about protein interactions was therefore
filtered into hand-crafted animations for WiaC. The interactions as translated into the Origami
logic of the world therefore reflect a great deal of the embodied intuitions of the artists. Thanks
to the efforts of the development team (who, I will admit, endured a great deal of cajoling on my
part regarding these matters), Brownian motion is represented in the movements of many of the
smaller objects filling the space, but, due to the restrictions of spatial legibility, this motion is
limited in its speed and complexity.
While the WiaC experience lacks the planar, orthogonal surfaces that one expects from VR’s
cool and rational reification of perspectival space, the logic of perspective is nonetheless
pervasive. This is, perhaps, where the foundational epistemes of the structural biologists and
designers involved most overlap. Each side of this collaboration holds to an analytical aesthetic,
whether for purposes of actual scientific analysis or for those of the raw, experiential encounter,
that privileges linearity and the figure/ground relationship. During his interview, Dr. Fraser
foreshadowed this by stating, “I think that the tools and people's training are very good at
thinking about linear processes and very bad about thinking about distributed interconnected,
weakly coupled processes. If you use those tools to explain how communities work it would
[result in] something more regimented than an army.” It would be a stretch to say that the objects
populating WiaC’s VR experience are “regimented” in the way Dr. Fraser describes, but, at the
time of writing, the logic of the world is one that is a far cry from the noise of the molecular
storm.
5.13 Saccades: GLP1R_extracellular_domain
“If you could listen to a protein, what characteristics would you want to hear?” I posed this
question to Jitin Singla at the end of one of his two-hour pedagogical sessions. Without hesitation,
he replied, “the side-chain molecules.” I was surprised at the speed with which he had arrived at
this answer and, noticing this, he quickly reasoned, “the side-chain molecules are the most
interesting part of the protein because they determine how it interacts with its surroundings. They
are like the interface to the rest of the environment.” Singla went on to give an extra five-minute tutorial on the structure of amino acid chains and how their relative hydrophobicity (the
measure of the degree to which a molecule interacts with water) helps to determine their
respective roles within the cell. We had been over most of this information before, but these
particular details had never, even after nearly two years of research, surfaced as a potential
design consideration. For myself at least, it was a highly evocative and productive five minutes,
and it prompted me to rethink the strategy I was employing to give voice to microscopic
structures.
As I stated earlier, the sound design for C&tC diverged radically from the scientific content
of the collaboration due to narrative and aesthetic considerations. Of course, sound in itself bears
little direct relationship to the biological principles at the heart of the project; however, there existed
a number of potential uses for data sonification within the VR experience that were quickly
shelved as the narrative began to take shape. Thus, while many of the sounds produced for the
first iteration of the project sit quite well within the overarching world aesthetic, they have very
few conceptual linkages to either the science or the data. As the project shifted into its second
phase and the decision was made to utilize the PDB models more directly, I began formulating a
data-driven approach to the sound that was similar to that taken by the modelers and animators.
Due to compressed production timelines and other practical matters, this approach was never
explored at length; neither was it implemented in the production pipeline. Instead, the sound for
WiaC is based almost exclusively upon aesthetic interpretations of processes like binding,
cycling, signaling, and depolarization, as well as synchresis with the visual aesthetic.
Nevertheless, the initial proof of concept I constructed to test the data-driven strategy became the
basis for the last of the personal works I will discuss herein: Saccades:
GLP1R_extracellular_domain.
Taking Jitin Singla’s response to my questioning as a starting point, I quickly identified those
entries in the x-ray crystallography data that correspond to the coordinates for each atom in the
various amino acid side chains as well as those that correspond to the alpha carbon backbone.
These are given in the data as a series of Cartesian (x, y, z) coordinates relative to an origin
established for the overall protein and are labeled with abbreviations for their associated amino
acids. Due to the spatial nature of this particular data, it quickly became apparent that a visual
rather than auditory modality of expression might be called for, whether by means of the usual
3D models or through some other type of visual/spatial display. If the structural data were
presented as a time series (for example, as a record of motion), a sonification schema might have
been more readily apparent. As it stands, the data itself provides few obvious numerical
candidates for the construction of a compelling sonification.
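To make this parsing step concrete, the following minimal sketch separates the alpha carbon backbone from the side-chain atoms. It is written in Python for legibility rather than in the C++ of the actual software, the filename glp1r.pdb is a hypothetical placeholder, and the fixed column positions are those of the standard PDB ATOM record.

# Minimal sketch: separate backbone alpha carbons from side-chain atoms in a
# PDB-format file. Python stands in for the project's C++ here, and
# "glp1r.pdb" is a hypothetical placeholder filename.
BACKBONE_ATOMS = {"N", "CA", "C", "O"}  # standard peptide backbone atom names

backbone, side_chains = [], []
with open("glp1r.pdb") as pdb_file:
    for line in pdb_file:
        if not line.startswith("ATOM"):
            continue
        atom_name = line[12:16].strip()  # e.g. "CA", "CB" (fixed PDB columns)
        residue = line[17:20].strip()    # three-letter amino acid code, e.g. "LEU"
        xyz = (float(line[30:38]), float(line[38:46]), float(line[46:54]))
        if atom_name == "CA":
            backbone.append((residue, atom_name, xyz))     # alpha carbon trace
        elif atom_name not in BACKBONE_ATOMS:
            side_chains.append((residue, atom_name, xyz))  # side-chain atoms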
After a fair amount of digging through the Protein Data Bank web portal’s educational videos
and literature, as well as a few consultations with Singla and Dr. Berman, I hit upon a strategy
that uses the relative hydrophobicity levels of the molecular side chains rather than the spatial
coordinates of their constituent atoms to excite the sonification model. However, yet another
problem followed from this decision, namely that the hydrophobicity values in themselves are
not recorded in the x-ray crystallography data sets. These are, rather, well-established
quantitative values that correspond to the hydrophobic/hydrophilic characteristics of each of the
twenty amino acids and fall mostly along a pre-established scale. In short, these values, while
measurable, make up the constants of a background theory that supports the crystallography data.
With some searching of the available literature, I identified a suitable table of hydrophobicity
values in the documentation for UC San Francisco’s Chimera software—one of the more popular
3D imaging tools for visualizing protein data. Given that there were now two dimensions of data
at my disposal, I decided that the best course of action would be to produce a multimodal data
display rather than one that is exclusively auditory.
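To give a sense of what such a table looks like in practice, the published Kyte-Doolittle hydropathy constants, one of the scales tabulated alongside the Chimera documentation, can be encoded as a simple dictionary. The listing below is illustrative; the keys are the three-letter codes used in the PDB records.

# Kyte-Doolittle hydropathy values for the twenty amino acids; positive
# values are hydrophobic, negative values hydrophilic. Shown for
# illustration as one widely used scale of this kind.
HYDROPHOBICITY = {
    "ILE": 4.5,  "VAL": 4.2,  "LEU": 3.8,  "PHE": 2.8,  "CYS": 2.5,
    "MET": 1.9,  "ALA": 1.8,  "GLY": -0.4, "THR": -0.7, "SER": -0.8,
    "TRP": -0.9, "TYR": -1.3, "PRO": -1.6, "HIS": -3.2, "GLU": -3.5,
    "GLN": -3.5, "ASP": -3.5, "ASN": -3.5, "LYS": -3.9, "ARG": -4.5,
}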
The architecture of Saccades comprises a visualization written in the C++ OpenGL wrapper,
openFrameworks, and a sonification model constructed in the DSP programming environment,
Pure Data. In addition to rendering the display, the visualization software also parses the data for
use in the sonification. The program loops through the coordinate values from beginning to end
over an arbitrary duration, and extracts the label for the current amino acid at each frame of
animation. This label is subsequently checked against a dictionary of hydrophobicity values and,
once the corresponding value is identified, it is sent to the sonification model via the Open Sound
Control protocol. When the sonification software receives the hydrophobicity value, it uses this
number to index a predetermined scale of notes and excites the synthesis model. Thus, higher
and lower hydrophobicity values are expressed as higher and lower pitches respectively. On its
face, this would seem congruent with the strategies used for most parameter mapping
sonifications. However, much of the aesthetic design of the work complicates this categorization
in subtle but crucial ways.
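A minimal sketch of this control path, continuing the Python examples above, might look like the following. The python-osc package stands in for openFrameworks’ OSC sender, and the port number and address pattern are arbitrary illustrative choices; in the actual pipeline, the indexing and excitation happen inside the Pure Data patch.

# Sketch of the control path: look up each residue's hydrophobicity, send the
# value to the sonification model over OSC, and preview the pitch mapping.
# The port (9000) and address ("/saccades/hydro") are illustrative assumptions.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)  # where the Pure Data patch listens

SCALE = [48, 50, 53, 55, 57, 60, 62, 65, 67, 69, 72]  # predetermined MIDI notes

def scale_index(value, lo=-4.5, hi=4.5):
    """Map a hydrophobicity value onto an index into SCALE."""
    t = (value - lo) / (hi - lo)                     # normalize to 0..1
    return min(int(t * len(SCALE)), len(SCALE) - 1)

for residue, _atom, _xyz in side_chains:             # parsed in the earlier sketch
    hydro = HYDROPHOBICITY[residue]
    client.send_message("/saccades/hydro", hydro)    # the patch does the indexing
    print(residue, hydro, SCALE[scale_index(hydro)]) # local preview of the mapping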
As previously discussed, the principle that most escaped the conceptual reach of WiaC was
that of the stochasticity of the cell’s ambient environment. As my intention to reflect this quality
in the sonic character of the VR experience was never fully realized, I took the opportunity to
explore it more thoroughly in the design of Saccades. The approach taken was two-fold. First,
the sonification model was constructed in multiple layers. The pitch of the tonal elements is
affected, as described, by the incoming hydrophobicity values, but there is an additional timbral
layer, created through noise shaping procedures, which is associated with the number of atoms in
the side chain. The envelopes of both of these layers are short and staccato, according to the
speed at which the data is parsed, giving the run of noisy tonal values a slightly chattering or
babbling character. This chatter occurs within a wash of subtly drifting, low-pitched ambience.
Passed through a simple Schroeder reverb algorithm, the sound of the amino acid side chains is
spatialized such that it seemingly emerges from some virtual darkness and rolls around the
auditory scene very briefly before decaying back into the ambient bed.
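For the curious, the reverb stage can be sketched as a classic Schroeder topology: four parallel feedback comb filters feeding two series allpass filters. The delay times and gains below are oft-cited textbook values rather than the constants tuned in the Pure Data patch.

# Minimal Schroeder reverb sketch: four parallel feedback combs summed, then
# two allpass filters in series. Delay times and gains are textbook values,
# not the ones used in the actual patch.
import numpy as np

def comb(x, delay, g):
    y = np.zeros_like(x)
    for n in range(len(x)):
        y[n] = x[n] + (g * y[n - delay] if n >= delay else 0.0)
    return y

def allpass(x, delay, g):
    y = np.zeros_like(x)
    for n in range(len(x)):
        xd = x[n - delay] if n >= delay else 0.0
        yd = y[n - delay] if n >= delay else 0.0
        y[n] = -g * x[n] + xd + g * yd
    return y

def schroeder_reverb(dry, sr=44100):
    """dry: float array of samples in -1..1; returns the wet signal."""
    combs = [(0.0297, 0.773), (0.0371, 0.802), (0.0411, 0.753), (0.0437, 0.733)]
    wet = sum(comb(dry, int(sr * t), g) for t, g in combs) / len(combs)
    for t, g in [(0.005, 0.7), (0.0017, 0.7)]:
        wet = allpass(wet, int(sr * t), g)
    return wet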
The second way in which I addressed the noisy and stochastic character of the cell was
through the visualization, which is produced in a manner that departs radically from the smooth,
near-sculptural quality imputed to the data by the PDB model. In the case of space-fill models
like that depicted in fig. 18, we see what look like smooth orbs rendered at each atomic
coordinate, which fuse into one another forming a bumpy and slightly plastic-looking surface.
The diameters of these orbs are proportional to the atomic radii, which are, yet again, known
values that are not represented directly in the PDB data, but which make up part of the
background theory used to visualize them. The various distances between the nuclei, on the other
hand, are representative of the bond lengths between atoms and are implicit in the data
themselves. For myself, the representation of these measurements imputes a visual quality to the
model that elides the noisy and kinetic matter/wave duality of the electron bond.
In many respects, we see here an iconography similar to the one discussed with respect to
points and regions in the previous chapter. In this case, what the shell of each orb represents is a
region occupied by one or more electrons shared between atoms—a region whose noisy contours
are delineated by the probabilistic nature of electrons themselves. Moreover, the space-fill
diagram pictured communicates a rigidity that is limited in its capacity to express the balance
between structure and elasticity required for the protein to remain stable but also be capable of
undergoing the deformations necessary for shape conformation—a key factor in cell signaling.
However, what I find most striking about the conventional means for expressing PDB data is
that, outside of animations by researchers like Janet Iwasa, the stochastic movements of each of
these structures in the noisy environment of the cell go mostly unaddressed.
Given the limitations of my own domain expertise, my departure from the representational
paradigm above can at best be described as highly speculative, but, just as we saw in my account
of the motivations for TEC, there are also more philosophical considerations at play that center
upon the phenomenal and affective potential of the media used to express the data. My strategy
for visualizing the GLP1R receptor involves isolating the amino acid backbone and displaying it
as a slowly rotating mobile. In the place of the “shell” used in space-fill diagrams to
communicate atomic radii, Saccades renders a single white point, only a few pixels in diameter,
which shakes rapidly as computational noise is injected into the values of its spatial coordinates with every frame. When all points are processed within a frame of animation, the entire backbone appears against a sea of black. The side chains are drawn in red such that they pull the viewer’s focus from region to region wherever the next side chain is found.
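A minimal sketch of this per-frame noise injection, continuing the Python examples above, follows. The jitter magnitude is an arbitrary illustrative value, and the actual drawing of the resulting points (white for the backbone, red for the side chains) is performed by the openFrameworks renderer.

# Per-frame noise injection: the noise is resampled on every call rather than
# accumulated, so each point shakes in place instead of drifting away.
# JITTER is an illustrative magnitude, in model units.
import numpy as np

rng = np.random.default_rng()
JITTER = 0.35

def jittered(points):
    """Return a fresh noisy copy of an (N, 3) coordinate array for this frame."""
    return points + rng.uniform(-JITTER, JITTER, size=points.shape)

# Called once per frame with the coordinates parsed in the earlier sketch:
backbone_xyz = np.array([xyz for _, _, xyz in backbone])
frame_points = jittered(backbone_xyz)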
Also in red is a list of the corresponding abbreviations for the amino acids that unfolds at the upper left-hand portion of the screen. Overall, the choice of points over spheres resonates with the
earlier discussion of points as signifiers, but it is also a direct reference to the means by which x-
ray crystallography data is generated. In simplest terms, the data found in the PDB files is not
produced directly by the x-ray crystallography process; rather, it is produced after the fact
through analysis of the diffraction patterns that result when an x-ray beam is passed through the
crystal lattice. Because this process occurs in predictable ways, the images produced can be used
to backtrace the structure through which the x-ray has passed (fig. 21).
The sequential approach for rendering the side chains was inspired by experiments in
perceptual psychology that investigate the nature of saccades. These are the rapid movements of
the eyes that help facilitate better exploration of a complex visual scene. They were first studied
by the Russian psychologist Yarbus, who used rudimentary equipment (fig. 22) to record the
movements of subject’s eyes when presented with a visual stimulus. Yarbus would subsequently
superimpose the inscriptions produced over the original stimulus image as a means for analysis.
Indeed, the “wireframe” render of both the backbone and the flashing side chains in Saccades
was inspired by recordings of patterns like those in fig. 23.
Fig. 22. Eye tracking equipment used by Yarbus. “Eye Tracking Through History,” by EyeSee, 20 May 2014, Medium, https://medium.com/@eyesee/eye-tracking-through-history-b2e5c7029443. Accessed 3 April 2020.
Fig. 23. Recordings of saccadic movements. In Palmer, Stephen E. Vision Science: Photons to Phenomenology. The MIT Press, 1999, p. 529.
The adoption of saccadic eye movement as an operational metaphor for this work was a means for investigating the odd
disjuncture between the non-linear spatiality of the protein coordinates and the linear nature of
hearing. In a sense, saccades are the manner in which the visual system parses the non-linear
complexity of a visual stimulus into a linear succession of “data.” The question I asked myself
was therefore one that centered on the potential of conjoining this visual “linearity” with the
linear nature of listening. In each case the movement of either the object of perception or the
biological apparatus of perception generates an uncertainty that inheres within the analytic task.
Fig. 21. X-ray crystallography diffraction pattern. “What is X-Ray Crystallography,” by Lorch, Mark, 3 Feb. 2014, The Conversation, https://theconversation.com/explainer-what-is-x-ray-crystallography-22143. Accessed 3 April 2020.
It is this uncertainty that, for my own aesthetic purposes, squares best with the stochastic nature
of the cell itself.
The final aesthetic consideration for Saccades was that of the material presence of the
medium. This is somewhat addressed by the choice of two-dimensional points over illusionistic, three-dimensional spheres for denoting atomic coordinates. In reducing the referenced atom to a
point, the work both calls up the tension between “points” and “regions” and foregrounds the
pixel as the pictorial unit for today’s most common optical modalities of representation. To that
same end, the visualization software for Saccades makes use of two post-processing shaders that
affect the overall image. One adds a layer of slight gaussian blur to the protein model, smearing
each point such that it appears to occupy a less definite region of the screen space. The other
adds a layer of non-correlated white noise to the final pixel array. Both of these techniques are
meant to foreground the artifice and materiality of the medium, but they also call up the material
nature of scientific instruments and their hermeneutic displays. In this sense, there is, in addition
to the raw, haptic visuality imparted by the noise, another scientific imaginary in play that
references the CRT static made familiar by science fiction films, television, or even the visual
artifacts produced from actual medical procedures. At the time of writing, Saccades:
GLP1R_extracellular_domain can be viewed at https://vimeo.com/331730405.
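In array terms, those two post-processing passes amount to something like the following sketch, with Python/NumPy and SciPy standing in for the GLSL shaders actually used; the blur radius and noise amplitude shown are illustrative rather than the tuned values.

# Sketch of the two post-processing passes: a slight Gaussian blur that smears
# each point, followed by a layer of non-correlated white noise over the final
# pixel array. Sigma and noise amplitude are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def post_process(frame, sigma=1.2, noise_amp=0.04):
    """frame: float array of shape (height, width) with values in 0..1."""
    rng = np.random.default_rng()
    blurred = gaussian_filter(frame, sigma=sigma)            # smear each point
    noise = rng.uniform(-noise_amp, noise_amp, frame.shape)  # fresh noise per frame
    return np.clip(blurred + noise, 0.0, 1.0)                # keep displayable range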
Conclusion
Ultimately, the distortional and worldizing aesthetic applied to works like Saccades is
intended to forge a conceptual and material link between the uncertainties of measurement, the
uncertainties of the physical world, and the embodied perceptions of the viewer/listener
communicated through discrete works of computational media. Overall, the distortional attitude
is one that foregrounds the artifice of representation while producing new points of view. It is the aggregation of these perspectives that is, to my mind, the most valuable boundary object for the trading zones of Art and Science. By producing a multiplicity of perspectives out of the artifice of representations, one simultaneously affirms that portion of reality that outruns our
sensibilities and that which overspills our abstract schemas while creating a space for our
representations and their epistemic foundations to be contested. Importantly, in taking seriously
the constraints that follow from the epistemological status of our abstractions, distortional
practices perform this aggregation of new and useful perspectives while also generating the
critical distance needed to ensure that representation is not conflated with its physical referents.
In collaborations like World in a Cell, it is almost assured that distortional approaches will be
met with skepticism on many sides, both from artists and from scientists. Indeed, the social
character of the trading zone all but ensures that nearly any agreement with respect to a
functional aesthetics will be hard won. Nevertheless, there are ways in which the distortional
attitude reveals itself even after much difficult interdisciplinary negotiation, as we see with WiaC
and its adoption of origami as a base metaphor. It could be (indeed, it has been) argued that
WiaC’s metaphor serves a stylistic more than it does a functional purpose. I could not disagree
!308
with this position more. At each stage of the design collaboration, realist constraints, the general
lack of intuitive “handholds” in the theory, and the vast chasm between each side’s respective
epistemes were negotiated to fruitful ends and the current status of the project reflects a
productive hybridity in its aesthetic form as much as in its analytic potential. Here, I have
attempted to identify where I believe the limitations of the approaches explored are most
problematic, but for the purposes of the brief and, moreover, for the purposes of general
education, World in a Cell presents an evocative solution as well as a compelling example of
where Art and Science can productively meet.
For myself, World in a Cell also raises important questions pertaining to whom, exactly, we
hold our respective knowledge to belong. What the formation of the interlanguage serves to
illustrate is that knowledge, grounded in the world, is not entirely contained within our linguistic
habits, our specific and partial representations, or our disciplinary biases. Like our knowledge of
imperceivable entities, such knowledge is rather found in the doing. By doing together, across
multiple registers of material practice and communication, art-science collaboration has the
potential to open new windows on those realities that outrun our discrete sensibilities. In both my
own practice and in future art-science collaborations, my hope is that the distortional attitude will help us better square the embodied knowledge we analyze phenomenologically with the explicit knowledge we gather and produce through abstract schemas. As World in a
Cell shows, scientific data, and indeed scientific instrumentation itself, can serve as evocative
boundary objects around which to organize these efforts and to which the distortional attitude
can be productively applied. The key to achieving this potential, I would argue, lies in our
acknowledgement that the blurring of boundaries does not entail a loss of disciplinary identity.
Indeed, the dialectical nature of art-science collaborations—a dialectic that is anchored in each of
our approaches to physical, material, and theoretical constraints—is the very engine that
synthesizes whatever “third order” might arise from the collaborations themselves.
There are arguably as many methodologies for conducting art-science as there are
practitioners seeking to engage in it. This is at once an advantage and a disadvantage of art-science remaining such an undertheorized and penumbral field. As I hope to have shown, before
we can turn the distortions of discipline that occur in this hybrid space to productive ends, we
must ensure that our particular understanding of our collaborators’ epistemological foundations
does not fall into caricature. If hybrid art-science praxes are to hold, those who engage in them
must move beyond the old polemics and into a sustained engagement with unfamiliar
methodologies, systems of abstraction, and material cultures.
A distortional dialectic of representation is only one way to conduct art-science, but it is a
compelling one. It is where, I would argue, the embodied knowledge of the artist can wriggle
past the epistemic barriers of intuition and offer new perspectives on old problems of perceptual
access. As anamorphosis shows, a distortional dialectic can fracture hegemonic points of view
into partial perspectives, which are subject to aggregation into a “compound eye” that comports
with objectivity as rethought above. If such distortional practices are to be productive rather than
simply iconoclastic, we should commit as much creative energy to systems of notation and
abstraction as we do to our embodied practices. We must also afford as much epistemic validity to the latter means of knowing as we do to the former.
Bibliography
Adib, Fadel, and Dina Katabi. See Through Walls with Wi-Fi! PDF file. https://
people.csail.mit.edu/fadel/papers/wivi-paper.pdf
Attali, Jacques. Noise: the Political Economy of Music. University of Minnesota, 2009.
Bachelard, Gaston. The Formation of the Scientific Mind: a Contribution to a Psychoanalysis of
Objective Knowledge. Translated by Mary McAllester Jones, Clinamen Press, 2002.
Bachelard, Gaston. The New Scientific Spirit. Beacon Pr., 1984.
Bacon, Francis. The Works. Longman, 1858. PDF file.
Baird, Davis. Thing Knowledge: a Philosophy of Scientific Instruments. University of California
Press, 2004.
Barad, Karen. Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter
and Meaning. Duke University Press, 2007.
Barad, Karen. “Posthuman Performativity.” Material Feminisms, Edited by Stacy Alaimo and
Susan Hekman, Indiana University Press, 2008.
Barone, T & Eisner, E 2012, 'What is and what is not arts based research?', in Arts based
research, SAGE Publications, Inc., Thousand Oaks, CA, pp. 1-12, viewed 26 June 2020,
doi: 10.4135/9781452230627.n1.
Baumberger, Christoph. “Art and Understanding: In Defence of Aesthetic Cognitivism.” 2011.
PDF file.
Berger, Peter, and Thomas Luckmann. The Social Construction of Reality: A Treatise in the
Sociology of Knowledge. Penguin, 1991.
Berman, Helen M., et al. “The Protein Data Bank and the Challenge of Structural Genomics.”
Nature Structural Biology, vol. 7, no. 11, Nov. 2000, pp. 957–59. www-nature-
com.libproxy2.usc.edu, doi:10.1038/80734.
Bhaskar, Roy. A Realist Theory of Science. Routledge, 2008.
Bigelow, John. The Reality of Numbers: a Physicalist’s Philosophy of Mathematics. Clarendon
Press, 2001.
Blakinger, John R. György Kepes: Undreaming the Bauhaus. MIT Press, 2019.
Bogost, Ian. Alien Phenomenology, or, What It’s Like to Be a Thing. University of Minnesota
Press, 2012.
Bradley, Ryan. “Static on the Line.” Popular Science, Winter 2019: 66-71. Print.
Bryant, Levi R. Onto-Cartography: an Ontology of Machines and Media. Edinburgh University
Press, 2014.
Bryant, Levi R. The Democracy of Objects. Open Humanities Press, 2011.
Burnham, Jack. Beyond Modern Sculpture: the Effects of Science and Technology on the
Sculpture of This Century. George Braziller, 1975.
Butter, Michael, et al. Conspiracy Theories in the United States and the Middle East: A
Comparative Approach. De Gruyter, Inc., 2014. ProQuest Ebook Central, http://
ebookcentral.proquest.com/lib/socal/detail.action?docID=1317879.
Byerly, Henry C., and Vincent A. Lazara. “Realist Foundations of Measurement.” Philosophy of
Science, vol. 40, no. 1, 1973, pp. 10–28., doi:10.1086/288493.
Candy, Stuart. The Futures of Everyday Life: Politics and the Design of Experiential Scenarios.
2010. Retrieved from ProQuest Dissertations Publishing. (3429722).
Carnap, Rudolf, et al. “On Protocol Sentences.” Noûs, vol. 21, no. 4, 1987, p. 457., doi:
10.2307/2215667.
Cateforis, David, Steven Duval, and Shepherd Steiner, eds. Hybrid Practices: Art in
Collaboration with Science and Technology in the Long 1960s. University of California
Press, 2018.
Chion, Michel. Guide to Sound Objects. Translated by John Dack and Christine North, 2009.
PDF file.
Chion, Michel. Sound: an Acoulogical Treatise. Translated by James A. Steintrager, Duke
University Press, 2016.
Chun, Wendy Hui Kyong. Programmed Visions: Software and Memory. MIT Press, 2013.
Clark, Andy. Surfing Uncertainty: Prediction, Action, and the Embodied Mind. Oxford
University Press, 2016.
Collins, Daniel L. “Anamorphosis and the Eccentric Observer: Inverted Perspective and
Construction of the Gaze.” Leonardo, vol. 25, no. 1, 1992, pp. 73–82. JSTOR, doi:
10.2307/1575625.
Collins, H. M. Tacit and Explicit Knowledge. The University of Chicago Press, 2013.
Collins, Harry, Robert Evans, and Michael E. Gorman. “Trading Zones and Interactional
Expertise.” Trading Zones and Interactional Expertise. The MIT Press, 2010. Web.
Crary, Jonathan. Techniques of the Observer: on Vision and Modernity in the Nineteenth Century.
MIT Press, 1992.
Dagognet François. Etienne-Jules Marey: a Passion for the Trace. Zone Books, 1992.
Damisch, Hubert. A Theory of Cloud: toward a History of Painting. Stanford Univ. Press, 2008.
Daston, Lorraine, and Peter Galison. Objectivity. Zone Books, 2018.
“Data Sharing Comes to Structural Biology.” Nature Methods, vol. 13, no. 5, May 2016, pp.
381–381. www-nature-com.libproxy2.usc.edu, doi:10.1038/nmeth.3862.
A Day in the Life of a Motor Protein. YouTube, https://www.youtube.com/watch?
v=tMKlPDBRJ1E&t=7s. Accessed 2 Mar. 2020.
De Duve, Thierry. Kant after Duchamp. MIT Press, 1999.
Deleuze, Gilles. Francis Bacon: The Logic of Sensation. Continuum, 2003.
Deleuze, Gilles, and Felix Guattari. What is Philosophy? Columbia University Press, 1994.
Dennett, Daniel. “Darwin’s ‘Strange Inversion of Reasoning.’” Proceedings of the National
Academy of Sciences, vol. 106, no. Supplement_1, 2009, pp. 10061–10065., doi:10.1073/
pnas.0904433106.
Dewey, John. Art as Experience. Perigree/Penguin Group, 2005.
Dewey, John. Experience and Nature. Dover Publications, 2018.
Dewey, John. The Quest for Certainty: A Study of the Relation of Knowledge and Action.
Minton, Balsch & Company, 1929.
Digital Ethereal. http://www.digitalethereal.com/. Accessed 1 Mar. 2020.
Dreyfus, Hubert L. “Intelligence without Representation – Merleau-Ponty’s Critique of Mental
Representation The Relevance of Phenomenology to Scientific Explanation.”
Phenomenology and the Cognitive Sciences, vol. 1, no. 4, Dec. 2002, pp. 367–83. Springer
Link, doi:10.1023/A:1021351606209.
Dreyfus, Hubert L. Skillful Coping: Essays on the Phenomenology of Everyday Perception and
Action. Edited by Mark A. Wrathall, Oxford University Press, 2014.
Drucker, Johanna. “Graphical Approaches to the Digital Humanities.” A New Companion to
Digital Humanities, 2015, pp. 238–250., doi:10.1002/9781118680605.ch17.
Dunne, Anthony. Hertzian Tales: Electronic Products, Aesthetic Experience, and Critical Design.
MIT Press, 2005.
Finetti, Bruno de. “Probabilism.” Erkenntnis, vol. 31, no. 2-3, 1989, pp. 169–223., doi:10.1007/
bf01236563.
Edgerton, Samuel Y. The Mirror, the Window, and the Telescope: How Renaissance Linear
Perspective Changed Our Vision of the Universe. Cornell University Press, 2009.
Edgerton, Samuel Y. The Renaissance Rediscovery of Linear Perspective. American Council of
Learned Societies, 2011.
Elkins, James. Six Stories from the End of Representation: Images in Painting, Photography,
Astronomy, Microscopy, Particle Physics, and Quantum Mechanics, 1980-2000. Stanford
University Press, 2008.
Engel, Carl. “Æolian Music.” The Musical Times and Singing Class Circular, vol. 23, no. 474, pp. 432–436. doi:10.2307/3355798.
FACT. Unfold - Ryoichi Kurokawa. 2016. Vimeo, https://vimeo.com/159521082.
Farnell, Andy. Designing Sound. MIT Press, 2010.
Fazi, M. Beatrice, and Matthew Fuller. “Computational Aesthetics.” A Companion to Digital Art,
John Wiley & Sons, Ltd, 2016, pp. 281–96. Wiley Online Library, doi:
10.1002/9781118475249.ch11.
FOULAB/Project-COGSWORTH. 2018. The Montreal Hackerspace, 2020. GitHub, https://
github.com/FOULAB/Project-COGSWORTH.
Fraser, Scott. Personal Interview. 25 May 2017.
Freedberg, David. The Eye of the Lynx: Galileo, His Friends and the Beginnings of Modern
Natural History. The University of Chicago Press, 2004.
Frieze, James. “Naked Truth: Theatrical Performance and the Diagnostic Turn.” Theatre
Research International, vol. 36, no. 2, 2011, pp. 148–162. uosc.primo.exlibrisgroup.com,
doi:10.1017/S0307883311000228.
Frigg, Roman, and Matthew C. Hunter, editors. Beyond Mimesis and Convention Representation
in Art and Science. Springer, 2010.
Galison, Peter Louis. Image and Logic: a Material Culture of Microphysics. University of
Chicago Press, 2005.
Galison, Peter Louis. “Trading with the Enemy.” Trading Zones and Interactional Expertise. The
MIT Press, 2010. Web.
Galloway, Alexander R. The Interface Effect. Polity, 2012.
Goodman, Nelson. “The Significance of Der logische Aufbau der Welt.” The Philosophy of
Rudolph Carnap. Edited by Paul Arthur Schilpp, Open Court, 1963. pp. 545-558.
Gore, Julie, and Eugene Sadler-Smith. “Unpacking Intuition: A Process and Outcome
Framework.” Review of General Psychology, vol. 15, no. 4, Dec. 2011, pp. 304–16. SAGE
Journals, doi:10.1037/a0025069.
Gorman, Michael E. Trading Zones and Interactional Expertise: Creating New Kinds of
Collaboration. MIT Press, 2011.
Gould, Stephen J. "Nonoverlapping Magisteria." Natural History. New York NY, vol. 106, no. 2,
1997, pp. 16-22+. ProQuest, http://libproxy.usc.edu/login?url=https://search-proquest-
com.libproxy2.usc.edu/docview/743711439?accountid=14749.
Gross, Alan G., and Joseph E. Harmon. Science from Sight to Insight: How Scientists Illustrate
Meaning. The University of Chicago Press, 2014.
Grosz, Elizabeth. Chaos, Territory, Art: Deleuze and the Framing of the Earth. Columbia
University Press, 2008.
Haack, Susan. Defending Science - within Reason: between Scientism and Cynicism. Prometheus
Books, 2007.
Halpern, Orit. Beautiful Data: a History of Vision and Reason since 1945. Duke University
Press, 2015.
Halse, Joachim. “Ethnographies of the Possible.” Design Anthropology: Theory and Practice,
Edited by Wendy Gunn, Tom Otto, and Rachel Charlotte Smith, Bloomsbury, 2013, pp.
180-196.
Haraway, Donna. “Situated Knowledges: The Science Question in Feminism and the Privilege of
Partial Perspective.” Feminist Studies, vol. 14, no. 3, Feminist Studies, Inc, 1988, pp. 575–
599. uosc.primo.exlibrisgroup.com, doi:10.2307/3178066.
Harle, Marissa, and Marcy H. Towns. “Students’ Understanding of External Representations of
the Potassium Ion Channel Protein, Part I: Affordances and Limitations of Ribbon
Diagrams, Vines, and Hydrophobic/Polar Representations.” Biochemistry and Molecular
Biology Education, vol. 40, no. 6, May 2012, pp. 349–356., doi:10.1002/bmb.20641.
Hayles, N. Katherine. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature,
and Informatics. University of Chicago Press, 1999.
Hegarty, Paul. “Brace and Embrace: Masochism in Noise Performance.” Sound, Music, Affect,
Edited by Marie Thompson & Ian Biddle, Bloomsbury, 2013, pp. 133-146.
Heidegger, Martin. Being and Time. Blackwell Publishers Ltd, 2001.
Hermann, Thomas, et al. The Sonification Handbook. Logos-Verl., 2011.
Hoffmann, Peter M. Life's Ratchet: How Molecular Machines Extract Order from Chaos. Basic
Books, 2012.
Hong, Sungook. Wireless: From Marconi’s Black-Box to the Audion. MIT Press, 2010.
Husserl, Edmund. Ideas Pertaining to a Pure Phenomenology and to a Phenomenological
Philosophy. Martinus Nijhoff Publishers, 1983.
Ihde, Don. Embodied Technics. Automatic Press, VIP, 2010.
Ihde, Don. Experimental Phenomenology: Multistabilities. Suny Press, 2012.
Ihde, Don. Postphenomenology and Technoscience: the Peking University Lectures. Albany:
SUNY Press, 2009. Print.
Ihde, Don. Technology and the Lifeworld: from Garden to Earth. Indiana University Press, 1990.
The Inner Life of the Cell. YouTube, https://www.youtube.com/watch?v=wJyUtbn0O5Y.
Accessed 2 Mar. 2020.
Jensenius, Alexander Refsum. “Some Video Abstraction Techniques for Displaying Body
Movement in Analysis and Performance.” Leonardo, vol. 46, no. 1, 2013, pp. 53–60., doi:
10.1162/leon_a_00485.
Kahn, Douglas. Earth Sound Earth Signal: Energies and Earth Magnitude in the Arts. University
of California, 2013.
Kitchin, Rob. “Big Data, New Epistemologies and Paradigm Shifts.” Big Data & Society 1.1
(2014): n. pag. Web.
Kittler, Friedrich A. The Truth of the Technological World: Essays on the Genealogy of Presence.
Translated by Hans Ulrich Gumbrecht, Stanford University Press, 2014.
Korb, Kevin. Philosophy of Science, Bayesian Reasoning - YouTube. https://www.youtube.com/
watch?v=t-AbB26oAV8&t=227s. Accessed 1 Mar. 2020.
Kosko, Bart, and Sanya Mitaim. “Stochastic Resonance in Noisy Threshold Neurons.” Neural
Networks, vol. 16, no. 5-6, 2003, pp. 755–761., doi:10.1016/s0893-6080(03)00128-x.
Kubovy, Michael. The Psychology of Perspective and Renaissance Art. Cambridge University
Press, 1986.
Lakoff, George, and Mark Johnson. Metaphors We Live By. University of Chicago Press, 1981.
Latour, Bruno. Reassembling the Social: An Introduction to Actor-Network Theory. Oxford
University Press, 2005.
Latour, Bruno. “Visualisation and Cognition: Drawing Things Together.” Knowledge and Society
Studies in the Sociology of Culture Past and Present, edited by H. Kuklick, Jai Press vol. 6,
1986, pp. 1-40
Latour, Bruno. “Why Has Critique Run out of Steam? From Matters of Fact to Matters of
Concern.” Critical Inquiry, vol. 30, no. 2, 2004, pp. 225–248., doi:10.1086/421123.
Latour, Bruno, and Steve Woolgar. Laboratory Life the Construction of Scientific Facts.
Princeton Univ. Press, 1986.
Lawson, Bryan. How Designers Think (Second Edition). Butterworth-Heinemann, 1990.
Leavy, Patricia ed. Handbook of Arts-Based Research. New York: The Guilford Press, 2018.
Print.
Leavy, Patricia. Method Meets Art: Arts-based Research Practice. Guilford Publications, 2020.
Leavy, Patricia. Research Design: Quantitative, Qualitative, Mixed Methods, Arts-based, and
Community-based Participatory Research Approaches. Guilford Publications, 2017.
Longino, Helen E. Science as Social Knowledge: Values and Objectivity in Scientific Inquiry.
Princeton Univ. Press, 1990.
Liamputtong, Pranee, and Jean Rumbold. Knowing differently: Arts-based and collaborative
research methods. Nova Publishers, 2008.
Marks, Laura U. The Skin of the Film: Intercultural Cinema, Embodiment, and the Senses. Duke
Univ. Press, 2007.
Massey, Lyle. Picturing Space, Displacing Bodies: Anamorphosis in Early Modern Theories of
Perspective. Pennsylvania State University Press, 2007.
McGinn, Colin. “Looking for a Black Swan.” The New York Review of Books, 21 Nov. 2002, https://www.nybooks.com/articles/2002/11/21/looking-for-a-black-swan/.
Meillassoux, Quentin. After Finitude: an Essay on the Necessity of Contingency. Translated by
Ray Brassier, Bloomsbury Academic, 2016.
Merleau-Ponty, Maurice. The Merleau-Ponty Reader. Edited by Ted Toadvine and Leonard
Lawlor, Northwestern University Press, 2007.
Merleau-Ponty, Maurice. The Structure of Behavior. Translated by Alden L. Fisher, Beacon
Press, 1963.
Mersch, Dieter. “Representation and Distortion: On the Construction of Rationality and
Irrationality in Early Modern Modes of Representation.” Volume 2 Instruments in Art and
Science, On the Architectonics of Cultural Boundaries in the 17th Century, edited by
Schramm, Helmar, et al., De Gruyter, 2014. DeGruyter, doi:10.1515/9783110971910.
Miller, Sean. “A Return to the Eleventh Dimension: String Theory as a Scientific Imaginary.”
Strung Together, University of Michigan Press, 2013, pp. 27–58. JSTOR,
www.jstor.org/stable/10.3998/mpub.4999338.5.
Milutis, Joe. Ether: the Nothing That Connects Everything. University of Minnesota Press, 2006.
Molaro, Paolo, and Pierluigi Selvelli. “On the Telescopes in the Paintings of Jan Brueghel the
Elder.” Proceedings of the International Astronomical Union, vol. 5, no. S260, 2009, pp.
327–332., doi:10.1017/s1743921311002481.
Moles, Abraham. Information Theory and Esthetic Perception. Translated by Joel E. Cohen,
Univ. of Illinois Pr., 1968.
Morton, Timothy. “Of Matter and Meter: Environmental Form in Coleridge’s ‘Effusion 35’ and
‘The Eolian Harp.’” Literature Compass, vol. 5, no. 2, 2008, pp. 310–35. Wiley Online
Library, doi:10.1111/j.1741-4113.2007.00520.x.
"motor proteins." Dictionary of Developmental Biology and Embryology, Frank J. Dye, Wiley,
2nd edition, 2012. Credo Reference, https://libproxy.usc.edu/login?url=https://
search.credoreference.com/content/entry/wileydevbio/motor_proteins/0?institutionId=887.
Accessed 29 Feb. 2020.
Nagel, Thomas. “The Limits of Objectivity.” The Tanner Lectures on Human Values 1 (1980):
75-139.
Nagel, Thomas. The View from Nowhere. Oxford University Press, 1986.
Oyama, Susan. The Ontogeny of Information: Developmental Systems and Evolution ; Foreword
by Richard C. Lewontin. Duke University Press, 2000.
Panofsky, Erwin. Perspective as Symbolic Form. Zone Books, 1991.
Peirce, Charles Sanders. Philosophical Writings of Peirce. Edited by Justus Buchler, Dover
Publications, 1955.
Pinker, Steven. The Stuff of Thought. Penguin Publishing Group, 2008. Kindle Edition.
Polanyi, Michael. The Tacit Dimension. Doubleday & Company Inc., 1966.
Popper, Karl. Popper: the Logic of Scientific Discovery. Routledge Classics, 2002.
Prager, Frank D., and Gustina Scaglia. Brunelleschi: Studies of His Technology and Inventions.
The MIT Press, 1970.
Pretz, Jean E., et al. “Development and Validation of a New Measure of Intuition: The Types of
Intuition Scale.” Journal of Behavioral Decision Making, vol. 27, no. 5, 2014, pp. 454–
467., doi:10.1002/bdm.1820.
Quine, W. V. Word and Object. New ed., MIT Press, 2013.
Rescher, Nicholas. Realism and Pragmatic Epistemology. Pittsburgh University Press, 2005.
Roads, Curtis. Microsound. MIT Press, 2001.
Roden, David. Posthuman Life: Philosophy at the Edge of the Human. Routledge, 2015.
Rothbart, Daniel. Philosophical Instruments: Minds and Tools at Work. University of Illinois
Press, 2007.
Schaeffer, Pierre. “Acousmatics.” Audio Culture: Readings in Modern Music, edited by
Christoph Cox and Daniel Warner, Continuum, 2004. pp. 76-81.
Shneiderman, Ben. “The Eyes Have It: A Task by Data Type Taxonomy for Information
Visualizations.” Proceedings of the 1996 IEEE Symposium on Visual Languages, IEEE
Computer Society, 1996, p. 336.
Sconce, Jeffrey. Haunted Media: Electronic Presence from Telegraphy to Television. Duke Univ.
Press, 2000.
Sellars, Wilfrid. In the Space of Reasons: Selected Essays of Wilfrid Sellars. Edited by Kevin B.
Scharp and Robert B. Brandom, Harvard University Press, 2007.
Shaffer, Elinor S., editor. The Third Culture: Literature and Science. Vol. 9, Walter de Gruyter, 2011.
Shannon, C., and W. Weaver. The Mathematical Theory of Communication. Univ. of Illinois Pr.,
1964.
Stankievech, Charles. “Exhibit A: Notes on a Forensic Turn in Contemporary Art.” Afterall: A
Journal of Art, Context and Enquiry, vol. 47, 2019, pp. 42–55., doi:10.1086/704197.
Sokal, Alan and Jean Bricmont. Fashionable Nonsense: Postmodern Intellectuals' Abuse of
Science. Picador, 1998.
Stokes, Dustin. “Art and Modal Knowledge.” Knowing Art: Essays in Aesthetics and
Epistemology, edited by Matthew Kieran and Dominic McIver Lopes, Springer
Netherlands, 2007, pp. 67–81. Springer Link, doi:10.1007/978-1-4020-5265-1_5.
Stuart, Michael T., and Nancy J. Nersessian. “Peeking Inside the Black Box: A New Kind of
Scientific Visualization.” Minds and Machines, vol. 29, no. 1, 2018, pp. 87–107., doi:
10.1007/s11023-018-9484-3.
Sullivan, Woodruff T. “The History of Radio Telescopes, 1945–1990.” Experimental Astronomy,
vol. 25, no. 1, Aug. 2009, pp. 107–24. Springer Link, doi:10.1007/s10686-009-9140-2.
Tatar, Maria M. “From Mesmer to Freud: Animal Magnetism, Hypnosis, and Suggestion.”
Spellbound: Studies on Mesmerism and Literature, Princeton University Press, 1978, pp.
3–44. JSTOR, www.jstor.org/stable/j.ctt13x13wx.5.
Thorowgood, Henry. “A Description of the Æolian-Harp, or Harp of Æolus, from the Earliest
Account to the Present Time, as Approved by the Late Dr. Hales & Jas Oswald, Esqr.
(Some Time Chamber Composer to His Majesty) Which Are Made on the Truest
Mechanical Principles by Henry Thorowgood, Musical Instrument Maker and Musick
Printer at the Violin & Guitar ...” Library of Congress, Washington, D.C. 20540 USA,
https://www.loc.gov/resource/muspre1800.100750/?st=gallery. Accessed 1 Mar. 2020.
Thoreau, Henry D. The Writings of Henry David Thoreau: Journal. Edited by Bradford Torrey,
Houghton Mifflin and Company, 1906. PDF file.
Tufte, Edward R. The Visual Display of Quantitative Information. Graphics Press, 1983.
Tuttle, Harry. “Unspoken Cinema: Sound Is Just Sound (John Cage).” Unspoken Cinema, 4 Sept.
2010, http://unspokencinema.blogspot.com/2010/09/sound-is-just-sound-john-cage.html.
Veltman, Kim H. “Perspective, Anamorphosis and Vision.” Marburger Jahrbuch Für
Kunstwissenschaft, vol. 21, 1986, pp. 93–117. JSTOR, JSTOR, doi:10.2307/1348664.
Waldenfels, Bernhard. Order in the Twilight. Ohio University Press, 1996.
Waldenfels, Bernhard. Phenomenology of the Alien: Basic Concepts. Northwestern University
Press, 2011.
Wark, McKenzie. A Hacker Manifesto. Harvard University Press, 2004.
Wells, Francis. The Heart of Leonardo. Foreword by HRH Prince Charles, The Prince of Wales. 1st ed., Springer London, 2013.
Wittgenstein, Ludwig, et al. Tractatus Logico-Philosophicus. Routledge, 2002.