Media of the Abstract:
Exploring Axioms, Techniques, and Histories of Procedural Content Generation
by Todd Furmanski
Media Arts and Practice Ph.D.
Faculty of the USC Graduate School
Tracy Fullerton
Holly Willis
Scott Fisher
University of Southern California
August 2017
Furmanski 2
Contents
Introduction ..................................................................................................................................... 3
Components of Procedural Content Generation ............................................................................. 9
Randomness .............................................................................................................................. 11
Chaos ......................................................................................................................................... 16
Procedure ................................................................................................................................... 20
Simulation ................................................................................................................................. 25
Play ............................................................................................................................................ 28
Histories ........................................................................................................................................ 32
The Oulipo Group ..................................................................................................................... 34
Youngblood’s Expanded Cinema.............................................................................................. 43
The Demoscene ......................................................................................................................... 48
Case Studies .................................................................................................................................. 75
Perlin Noise ............................................................................................................................... 76
Cellular Automata ..................................................................................................................... 83
Pitfall! ........................................................................................................................................ 89
Telengard ................................................................................................................................... 94
Rogue ...................................................................................................................................... 100
Noctis ...................................................................................................................................... 106
Dwarf Fortress ......................................................................................................................... 112
Other Works ............................................................................................................................ 118
Personal Work ............................................................................................................................. 130
Games of Life .......................................................................................................................... 133
The Night Journey ................................................................................................................... 138
5 Processes (Wu Xing) ............................................................................................................ 145
LocusSwarm ............................................................................................................................ 152
Forska ...................................................................................................................................... 154
New Communities, New Focus .................................................................................................. 168
Conclusions ................................................................................................................................. 173
Bibliography ............................................................................................................................... 178
Introduction
The collision of computation and creativity has produced countless new forms, suggesting
novel approaches to artistry that can be applied to virtually any form of media. In this
dissertation I will be examining the nature of Procedural Content Generation (PCG) through
histories, case studies, and my own work involving the subject. An inherent quality of
procedures is that they can be played with, adjusted, and re-appropriated. My examination of
source code involved in the case studies to follow demonstrates this changeability and illustrates
why authors made particular decisions. I will look at specific structures, constraints, and
algorithms, pulling apart source code in a few places to examine how each process works
towards generating content.
The existence of a digital creative work on a computer already suggests a level of
procedurality, as operating systems, compiled code, user input, and the transformation of data
already create something of a separation of process and content not normally as manifest in other
media. When one observes the inherent aspects of digital media, it can be hard to determine
what might qualify as PCG and what does not. What are realtime 3D graphics but procedurally
generated perspective, a computer mimicking an Italian Renaissance artist?
Still, the works I will explore embrace automatism and algorithmic production in an overt
manner that revels in the systems that already support their existence. As an example, a
computer displaying an image would not qualify as procedurally generated content in and of
itself. Writing a program that generates a new image generally fits the criteria for examination
and would qualify as PCG. For the purposes of discussion, I will define “Procedural Content
Generation” as the creation of systems that, in turn, create media, either entirely on their own or
as an aid to a human creator. While any media that is generated digitally already relies on a
number of systems, data, and processes dependent on code, hardware, and defined characteristics
such as resolution, color depth, power voltage, and so forth, with procedurally generated content
the creative output becomes cybernetic, with portions coming from a space outside of conscious
or unconscious human thought, from systemic prescription or as unforeseen consequences of a
crafted algorithm.
Any study or exposure to the ideas behind PCG will encounter a series of axioms: these
statements, goals, and promises have varying degrees of truth, but their qualities deserve
examination. These axioms, outlined below in bold type, have a large amount of overlap, and
while not all of them are present everywhere, they will come up again and again throughout the
histories and case studies that follow.
Procedural content generation can produce an indefinite amount of novel media, in
that perceived content is just an instance of an effectively infinite body of potential works.
PCG takes the lure of mass production to incredible levels. Even a novice programmer
can make a program that produces thousands of pages of text in seconds, and games that
generate galaxies consisting of billions of stars have become a well-described genre at this point.
A designer using PCG doesn’t define a single chair, they program a machine to produce a
quantity of chairs.
Procedural content generation can generalize and quantify systems, entities, and
other elements in a manner that allows ease of alteration, expansion, digitization, and
quantification for the purpose of interaction.
Increasing complexity in simulation promises parallel increases in the affordances and
agency within a virtual space. Instead of crafting individual interactions, one can generalize
dynamics so that user actions can apply more universally. Sitting on a chair can also apply to
sitting on the floor, tables, etc. One major dream of interactive agency comes not from a
repeatable (and hence ultimately static) cinematic experience, or even a scripted digital sequence,
but rather an open experience in a living space, one that can consistently respond to a human
user. Any digital projects that attempt to produce such an experience will generally explore PCG
as an option, often (inaccurately) seeing it as a panacea for their difficulties.
Procedural content generation can dynamically generate an amount of data that
would not be feasible to store or create through other human-authored means, offering a
profound means of compression.
The reasons for this can be myriad. The sheer scope of a project might require
automation—many examples I will describe involve depictions of entire galaxies, billions of
stars and planets, which would be impossible for any group of people to “hand-author” in their
entirety. On the other end of the spectrum, minuscule details and imperfections can be
automated as well—a platonically perfect teapot has its own sense of the uncanny, and
mathematically adding dust, scratches, chips, and other imperfections could tax the sanity of an
artist, but can be trivial for a computer simulating the natural forces of entropy. Alternatively,
data might need to be compressed, grown from a comparatively small but indefinitely iterative
equation instead of being stored in an intuitively readable but voluminous manner.
Procedural content generation suggests a mastery of codification, a demonstration of
ability, used in a manner both enhanced and handicapped by structural concerns.
Generally speaking, it is easier to paint a picture than build a machine that paints a
picture. One can argue about the quality and utility of such a machine, but an implicit
demonstration of mastery proved to be a surprisingly common thread in many of the works
discussed in this text. The writers of the Oulipo group discussed the “acrobatics” involved in
writing using their literary constraints. The demoscene produced a fascinating body of digital
animation whose primary motive seemed to be showing off coding skills. Commercial games
using PCG, for instance any game with a computationally built galaxy along the lines of No Man’s Sky or Spore, use their technology as a key marketing component, offering billions of worlds as proof of the developers’ ability to create a grand experience.
The effectiveness of this self-promotion might come from society’s continuing struggle
to grasp the potential of the computer (or language, in the case of the Oulipo group), but within
the spectacle and aggrandizement one can find an amazement, a liberation, a way of creativity
that would not have been considered possible (or simply even imaginable) before.
Procedural content generation carries, within its implementation, a suspicion of the
limits of conscious creative effort, either in scope, power, or endurance.
Within procedural content generation, by definition, a programmer surrenders some
aspect of conscious creative decision making to an algorithm, rule, or formula. The
philosophical implications of this surrender have their own allure—John Cage complimented
composer Toshi Ichiyanagi on finding ways to “free his music from the impediment of his
imagination.”[1] This potential method of augmentation is related to the promise of novelty listed above. Having one’s own artwork surprise and in some ways transcend the artist has proven to be a seductive idea.

[1] Kenneth Silverman, Begin Again: A Biography of John Cage (Knopf Doubleday Publishing Group, Kindle Edition, 2010).
Procedural content generation —defined as the design and implementation of
dynamics, or the interplay between systems —becomes its own art form.
That is to say, procedural content generation can be considered a method of expression in
and of itself, independent of however it might manifest in other media.
The craft and artistry involved is not specifically visual, auditory, or cinematic (although
it can manifest as any or all of these), nor are these techniques specifically valued for their
efficiency in the manner of a computer scientist’s evaluation of an algorithm (although technical
qualities like scalability and efficiency do come into the discussion). Procedural techniques can
be their most profound when considered on a purely abstract level, and perhaps even more
profound when considered not individually, but in combination with media and other systems.
With these axioms in mind, this dissertation will continue in a series of sections. First, I
will describe terms frequently used (and misused) in discussions on PCG, such as Randomness,
Chaos, Procedure, and Play. I hope not to prescriptively define such terms but rather to define
them in a manner that enables further discussion. Second, I will examine the histories of two
groups known to make use of procedural methods—the Oulipo writers and the demoscene—
bridged by a short nod to Gene Youngblood’s Expanded Cinema, an interesting work that both
surveys cinematic experiments just before digital computing became widespread and also further
speculates about advancements in society and technology with regards to media. The third
section will include a series of case studies of influential games and techniques, including source
code and design considerations that continue to affect current uses of PCG. In the fourth section
I will take a look at my own work with procedural content generation, showing my own
concerns, goals, and methods in developing these projects. Finally, I will conclude with a brief
note on current trends and concerns in the use of PCG today, with reference to a growing trend
of philosophy behind the techniques as much as the techniques themselves.
Components of Procedural Content Generation
The following definitions of Randomness, Chaos, Procedure, Simulation, and Play can
illuminate various aspects of how and why PCG functions, and they will provide something of an
introduction to various aspects of procedurality and the tools that underlie its functioning. These
principles and tools interact between the creator of a procedural system and the audience of that
system.
One way to frame the tension between the author and the audience is to look at how the
authors wish to express themselves through the work and how the audience will potentially react
to it. The author’s desire for expression can manifest as an obsession with a particular shade
of red, or show itself as a particular rhythm in story structure, or in themes and ideas the author
wishes to provoke the audience to ponder. When interactivity and game design come into play,
Katie Salen and Eric Zimmerman note: “As a game designer, you are never directly designing
for your players. Instead, you are only designing the rules of the system….the goal of successful
game design is meaningful play, but play is something that emerges from the functioning of the
rules…Game designers create experience, but only indirectly.”[2]
In my own efforts experimenting with second-order creation, I look to a wide variety of
sources and precedents for both inspiration and technique. I aspire to have my work retain some
element of animism, an appearance of internal movement. In game design I like to invite
meaningful choices, and to this end I believe a thoughtfully crafted simulation can allow open-
ended interaction, creating a space that allows creative action while maintaining coherency.
Such an approach is not guaranteed, nor easy. Constraints can liberate; too much choice, particularly irrelevant choice, can paradoxically constrict the experience of agency. Systems can erupt like a boiling kettle or deflate like a failed soufflé.

[2] Katie Salen Tekinbaş and Eric Zimmerman, Rules of Play: Game Design Fundamentals (Cambridge, Mass.: MIT Press, 2003), 168.
I will divide my discussion of various elements I have used in the service of creativity into
various sections, providing examples, both practical and historical, in order to help illuminate
their context beyond simple definitions. These examples will provide precedent for how I have
seen concepts like “procedure” and “chaos” used in ways I find applicable to my own work.
Randomness
The last refuge of the novelty-seeker will be the first force I discuss here. Randomness is
the spice of process, with the ability to strengthen a technique or progressively erode its
coherence.
According to John Cage, a number of his contemporaries used randomization in order to
seek the liberation from the self. In Cage’s words, “For at least some of these composers, then,
the final intention is to be free of artistry and taste…It is simply that personal expression, drama,
psychology, and the like are not part of the composer’s initial calculation: they are at best
gratuitous.”[3]
Cage often used the Chinese system of the I Ching, a series of binary triplets classically used for divination, to generate his musical compositions. Its symbols are generated through some random means, such as flipping a series of coins. With HPSCHD, Cage moved from
flipping coins to a computer program, resulting in a spliced-together mix of tones that included
ten thousand unique pieces of sheet music that specified further adjustments to tone, volume, and
other parameters to create a unique performance. Randomness came from process: even the
silent 4’33” was composed using a notecard system he had applied to earlier works, resulting in
three movements whose duration added up to the eponymous time, “…an exquisite conclusion of
his quest for self-erasure.”[4] Cage experienced a room said to be totally sound-proof, yet still heard his blood flowing and the electric hum of his nervous system, and realized there was no such thing as silence, only our inattention to our surroundings. 4’33” can never be truly silent,
even when built from a random process that ignores all instructions of sound, and, more than
other works, it is meant to draw our attention to sound that might otherwise pass our perception.
[3] John Cage, Silence: Lectures and Writings (London: Marion Boyars, 1995), 68.
[4] Kenneth Silverman, Begin Again: A Biography of John Cage (New York: Knopf, 2010), 118.
One of John Cage’s direct influences was Erik Satie, who shared his technique of using
chance and randomness: “Taking the works of Satie chronologically (1886-1925), successive
ones often appear as completely new departures. Two pieces will be so different as not to
suggest that the same person wrote them […] An artist conscientiously moves in a direction
which for some good reason he takes […] but Satie despised art [...] The artist counts: 7, 8, 9,
etc. Satie appears at unpredictable points springing always from zero: 112, 2, 49, no etc.”[5] Cage once staged a production of Satie’s “The Ruse of the Medusa”, a Dadaist play he found in an old rare book from the New York Public Library.[6] Through Satie and Dada we find not only an
attempt to escape the self and its limitations but an attempt to escape a culture as it was.
While the Dada movement followed the route of negation with regards to art, Dadaists also
used randomness to attack the mechanization and destruction they saw as having ruined culture
in the aftermath of World War I. Consider Tristan Tzara’s recipe:
To make a Dadaist poem
Take a newspaper.
Take a pair of scissors.
Choose an article as long as you are planning to make your poem.
Cut out the article.
Then cut out each of the words that make up this article and put them in a bag.
Shake it gently.
Then take out the scraps one after another in the order which they left the bag.
Copy conscientiously.
The poem will be like you.
And here you are a writer, infinitely original and endowed with a sensibility that is
charming though beyond the understanding of the vulgar herd.[7]
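Tzara’s recipe reads as pseudocode already. A minimal sketch of it in Python (the sample article text here is an arbitrary stand-in, not a text Tzara used):

```python
import random

def dadaist_poem(article: str) -> str:
    """Cut out each word, shake the bag, copy conscientiously."""
    words = article.split()    # cut out each of the words
    random.shuffle(words)      # shake it gently
    return " ".join(words)     # copy them in the order they left the bag

article = "The walls of culture are hard to see and harder to see through"
print(dadaist_poem(article))
```

The poem contains exactly the words of the source, only their order is surrendered to chance.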
If all art were in some way propaganda, what motive could be ascertained from a collection
of random words, and how could something be propaganda without a motive? The Dada
movement’s embrace of collage worked in a similar way, as it took fragments of images and text to create a haphazard product. This very act of defiance was political, an assertion that even random acts can have meaning. The artistic movement strained and transformed, and scattered as its stance grew overt. A more specific political agenda developed from Dada’s initial nihilism, for instance in a mock trial of Maurice Barrès, an anarchist-turned-nationalist.[8] The walls of culture are hard to see and harder to see through.

[5] John Cage, Silence: Lectures and Writings (London: Marion Boyars, 1995), 79.
[6] Kenneth Silverman, Begin Again: A Biography of John Cage (New York: Knopf, 2010), 76.
[7] Mary Flanagan, Critical Play: Radical Game Design (Cambridge, Mass.: MIT Press, 2009), 128.
Even when following every reasonable-seeming mode of progression, one might find
themselves at a limited, limiting solution. In the realm of artificial intelligence this is known as a
“local maximum,” the peak of a small hill where every step away is a progression downward. If
a traveler’s goal is to reach the highest peak, simple (or rather simplistic) logic would leave them
stranded at the top of a small hill, even if Mount Everest is a short distance away. Randomness,
then, is the last chance to escape this little hill, whether one’s specific method consists of the I
Ching in musical composition by John Cage, or enforced genetic mutation in simulations known
as Artificial Life, or as a last-ditch effort to escape a culture that had seemed to lose all
justification after a world war, as with Dada. Randomness can be a long shot, but the emergence
of large-scale computing has brought coin-flipping and dice-rolling to sublime levels of
efficiency, and randomness has always had a way of thrilling both the gambler and the artist.
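The escape from the little hill can be sketched as hill climbing with random restarts; the landscape function below is an invented toy terrain, not an algorithm drawn from any of the works discussed:

```python
import math
import random

def hill_climb(f, x, step=0.1, iters=1000):
    """Greedy ascent: accept a move only if it improves f."""
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)
        if f(candidate) > f(x):
            x = candidate
    return x

def landscape(x):
    # Toy terrain: a small hill at x=0 and a much taller peak at x=10.
    return math.exp(-x ** 2) + 5 * math.exp(-((x - 10) ** 2))

# A single greedy climb from x=0 stays stranded on the small hill;
# random restarts give some climbs a chance at the taller peak.
stuck = hill_climb(landscape, 0.0)
best = max((hill_climb(landscape, random.uniform(-5, 15)) for _ in range(50)),
           key=landscape)
```

Here `stuck` ends near 0 every time, while `best` almost always lands near 10: the random restart, not the greedy logic, is what finds Everest.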
Unbounded randomness in art relies all too much on context. 4’33” makes sense in the
context of Cage’s larger body of work and goals, acting as a valid point of progression in his
compositions. Without the context of a concert, or knowledge of Cage, one could be
unknowingly and unceasingly pirating Cage’s piece. Tzara’s true poetry comes not from the
resulting poem but from the act of cutting and copying the work, perhaps more so from Tzara’s
very publication of the instructions. The digital domain, with its own reliance on numbers and the ambivalence of their meaning, can see random values as meaningless noise or as the crucial spine of a lifelike simulation. Randomness need not be extreme, although in the last analysis it might be the most radical of actions.

[8] Claire Bishop, Artificial Hells: Participatory Art and the Politics of Spectatorship (London: Verso, 2012).
One way to corral randomness is to “shuffle”, to take an existing set of symbols or values
and mix them. In game design this can help mitigate (or encourage) misguided ideas on the
nature of probability. Shuffling a balanced card deck can guarantee karmic balance – low cards
early imply high cards later, a type of relationship that one should not assume when betting on
Roulette table, although enough people do that the term “Monte Carlo Fallacy” describes just
that—an assumption that random results will eventually favor “underrepresented” outcomes.
Truly random settings are ambivalent about winning streaks. Shuffling existing works of media
can provide a mix of familiarity and novelty that might otherwise be hard to fabricate.
Modelling random generators after card decks, “weighting” values by stacking virtual “decks,” offers a level of control beyond that of simulating a dice roll.
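The contrast between rolling a die and drawing from a shuffled deck fits in a few lines of Python; the ten-copies-per-face deck is an arbitrary illustration:

```python
import random
from collections import Counter

faces = list(range(1, 7))

# Rolling a die 60 times: each result is independent, so counts drift freely.
rolls = Counter(random.choice(faces) for _ in range(60))

# Drawing through a shuffled "deck" of ten copies of each face: every face
# is guaranteed to appear exactly ten times -- karmic balance by construction.
deck = faces * 10
random.shuffle(deck)
draws = Counter(deck)

# Weighting is just stacking the deck: three extra sixes tilt the odds.
weighted_deck = faces * 10 + [6, 6, 6]
random.shuffle(weighted_deck)

print(rolls, draws)
```

The shuffled deck can produce streaks locally, but its overall distribution is fixed in advance, which is exactly the control a designer wants.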
For one project, a game jam, I found a website called TuneToys,[9] which, among other tools, allows you to shuffle a MIDI music track. I lack training in musical composition, but time
was short (I had 24 hours to create a playable game) and I desired something ambient and
unsettling for my work. Choosing Beethoven’s “Für Elise” and how many parts to divide the
work into, I produced an arrangement that maintained certain rhythms while producing new tonal
relationships, a strange beast that might remind someone of Beethoven’s work but only as a half-
remembered dream. With the press of a button I followed Tzara’s formula, although the context
has changed so much my use of TuneToys can scarcely be considered a similar act. Artists may
borrow and steal, but what do we say of those who shuffle and mangle? Randomness comes from dice, coins, or is simulated by way of iterated arithmetic, but the illusion of random behavior can arise fairly easily through a series of simple, individual, deterministic actors. We know this manifestation as chaos.

[9] Tim Thompson, “Tune Toys,” http://www.nosuch.com/tjt/tunetoys.html
Chaos
Chaos is what happens when Randomness hires a screenwriter. From a distance, an
observer might not see order in a chaotic system, but oddly enough, every individual part of such
a system knows where it is going (at least in the very short term), has a pretty good idea of how it
will behave, and certainly can track how it got to where it is. As more components and variables
come into play, however, a global view of the system can defy any obvious or even reasonable
explanation. Unlike randomness, though, chaos leaves a paper trail. Harnessed properly, chaos
can provide a robust, interesting, dynamic environment that is satisfying to author and audience
alike. On the other hand, chaotic systems can bring the term “Malicious Compliance” to entirely
new levels, causing untold destruction without breaking a single rule given—indeed, by doing
exactly what they are told with aplomb.
The literary technique of “stream of consciousness” echoes this serial aspect of chaos,
much as Dada uses randomness to express a nihilistic worldview. Where Tristan Tzara pulled
article fragments from a hat to compose a poem, surrealists used “The Exquisite Corpse”, having
different people draw one word or image after another, into a stitched, chaotic result. Each artist
only saw the last part of the previous one’s work, and would leave a further “edge” for another
artist or writer to continue. Rather than defying society, Surrealists looked to explore the subconscious through exercises like “The Exquisite Corpse.” Dreams are the result of a life’s chaotic collection of experiences and anxieties—and can therefore be seen as confusing, perhaps, but not random.
The Artificial Life movement that started in the mid-1980s saw animism within chaos.
Participants in the movement saw a way to understand life, even a possibility to create life, using simple rules that somehow produced vibrant dynamics and patterns. Such rules could be used to describe behavior that previously had no easy explanation. Complicated cellular structures could be mimicked through systems called cellular automata, and simple games like
“The Prisoner’s Dilemma” could show ecosystem-like punctuated equilibria, producing periods
of stability followed by surprising spikes in volatility.
Pure chaos fits into one end of a spectrum of systems[10] normally used to describe cellular automata, but it can be productive to apply it to a wider variety of systems in general. “Fixed” lies
on the other end, providing an unchanging, static scene. “Periodic” will cycle between two or
more states. Chaotic falls into the classical definition described above as apparent randomness.
Between “Periodic” and “Chaotic”, however, lies a transitional region where “Complex Systems” exist, providing just enough variance to be interesting and just enough order not to get pulled into the noise. This region of complex
systems is prime hunting ground for finding the ever-elusive “Emergence”. Emergence is the
goal of aspirational game design and artificial life alike. Emergence is about providing a series
of rules, the simpler and more elegant the better, that gives rise to complicated consequences and
strategies. Emergence might be another word for Domesticated Chaos.
One complex experiment simulated an ecosystem populated by agents playing “The
Prisoner’s Dilemma.”[11] This foundational game frames actions in terms of two criminals kept
separate from each other. Should neither confess, the police have little evidence, and both will
only serve a slight prison sentence of one year. Should one confess, he will be released while his
companion gets three years. Should both confess, they both are sentenced to two years. This simple game is a foundational topic in game theory, and at least one survey of the possible game theory-based two-player games shows that it has among the most variable strategies.[12] In this simulation, each agent was given a limited memory of who had betrayed them and who cooperated. Even with a memory of two turns, the graph had no obvious long-term winner, but produced cycles of strategies that would fall to another. With a memory of four turns, long periods of stability would suddenly fall into periods of chaos, with a number of strategies rising and falling in flux, before stabilizing again into a new stable status quo. This simple situation demonstrated the evolutionary theory of “punctuated equilibrium”, and showed that periods of catastrophic “extinction” might occur without an obvious initial cause.

[10] Christopher G. Langton, ed., Artificial Life (Redwood City, Calif.: Addison-Wesley, 1991), 43.
[11] Ibid., 295.
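The payoffs described above (one year if both stay silent, three for the lone holdout, two if both confess) fit in a small dictionary, and an iterated version shows how memory changes outcomes. The strategies below are illustrative stand-ins, not those of the cited experiment:

```python
# Years of prison for (my_move, their_move); True means confess.
SENTENCE = {
    (False, False): 1,   # both stay silent: minor sentence
    (False, True): 3,    # I stay silent, partner confesses: I serve three
    (True, False): 0,    # I confess alone: released
    (True, True): 2,     # both confess: two years each
}

def play(strategy_a, strategy_b, rounds=100):
    """Iterate the dilemma; each strategy sees only the opponent's last move."""
    years_a = years_b = 0
    last_a = last_b = False
    for _ in range(rounds):
        a, b = strategy_a(last_b), strategy_b(last_a)
        years_a += SENTENCE[(a, b)]
        years_b += SENTENCE[(b, a)]
        last_a, last_b = a, b
    return years_a, years_b

def always_confess(opponent_last):
    return True

def always_silent(opponent_last):
    return False

def tit_for_tat(opponent_last):
    return opponent_last  # a single turn of memory

print(play(always_confess, always_silent))  # exploiter vs. trusting partner
print(play(tit_for_tat, always_confess))    # retaliation limits the damage
```

Lower totals are better, since these are years served; even one turn of memory is enough to keep a pure exploiter from running away with the game.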
As I will go into in later chapters, creating “random” values through algorithms is an art
unto itself, but on the computer even the most random or chaotic system is deterministic.
Starting with the same initial values will always give you the same result. The Random Number
Generator (or “Random Number God” in the snarky parlance of game designers and players)
almost always falls under the term “pseudorandom”, as methods for providing random values
come from involved mathematical processes. One Artificial Life paper countered criticism that
this sort of determinism was grounds for disqualifying any sort of digital process from being
classified as “life:”
“As a thought experiment, suppose that we connect a Geiger counter near a radioactive
source to our computer, and use the interval between clicks to determine the values in our
random number generator. The resulting behavior would no longer be deterministic or
repeatable. However, the results would be the same…determinism and repeatability are
irrelevant to emergence and to life. In fact, repeatability is a highly desirable quality of
synthetic life because it facilitates study of life’s properties.”[13]
[12] John H. Miller and Scott E. Page, Complex Adaptive Systems: An Introduction to Computational Models of Social Life (Princeton, N.J.: Princeton University Press, 2007), 187.
[13] Christopher G. Langton, ed., Artificial Life (Redwood City, Calif.: Addison-Wesley, 1991), 396.
For computer scientists, this ability of pseudorandomness to reproduce sequences is a
feature, not a bug, and a pretty profound one at that. For virtual spaces, one can encode a wide
variety of different instances from a single, often simple process or program. Scientists wish they
could do this in the physical world—on a certain level, “true randomness” might be better suited
as a metaphysical question instead of a mathematical one.
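The point is easy to demonstrate. In the minimal Python sketch below, two generators seeded with the same value yield identical sequences, which is exactly the reproducibility described above:

```python
import random

# Two generators seeded with the same value produce identical "random"
# sequences: the entire stream is determined by the seed.
a = random.Random(12345)
b = random.Random(12345)

seq_a = [a.randint(0, 99) for _ in range(5)]
seq_b = [b.randint(0, 99) for _ in range(5)]

print(seq_a == seq_b)  # True: same seed, same sequence
```

Re-seeding a generator is, in effect, rewinding the tape: the "experiment" can be replayed exactly, which is the repeatability the Artificial Life authors prize.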
Some additional vocabulary might help describe the properties and boundaries of emergence.
The term “Edge of Chaos” comes from the work of Christopher Langton, describing the values at
which processes like Cellular Automata move from predictable, stable patterns to noisy,
seemingly random ones.14
Chaotic systems can fall into “attractors”, stable values that the
system seems to gravitate toward, even if other forces try to push the system in different
directions. “Hypercycles” are systems that produce species at each step in a cycle, and allow
information to be passed cyclically. Basic hypercycles have only one attractor,15 but larger forms
can support multiple species, as well as interesting divergent dynamics like parasites and
interspecies competition. I prefer a growing vocabulary of descriptors and dynamics to the
original mystery of “it just happens!” Emergence still feels magical, but like a garden it can
grow best with the right mixture of elements and environment.
14. Ibid., 41.
15. Ibid., 274.
Procedure
Procedure might combat chaos, but a small set of procedures (such as in the three body
problem) can produce chaotic behavior. If a “procedure” is a linear set of instructions, though,
the word can describe practically anything done on a computer.
Even with rigid adherence to a single process, much can be made from a single random
number. The RNG described above in “Chaos” can give a rigid amount of order, if used
carefully and sparingly (often only once). Mathematical “random seed” compression was
classically used in the early years of home computing, when storage memory was at a premium.
The idea of random seeding and procedural generation has had something of a renaissance in
recent years in a variety of areas, from the Independent Games movement to large scale
procedural modeling in Hollywood visual effects. The do-it-yourself, large-content-from-simple-process approach appeals to the small scale operations of independent and experimental gaming.
Paradoxically, the demand for detail and the increased scale of computing require some automation involving chaotic processes on large scale productions as well. Modeling a forest or a city at the fidelity expected of a modern processor (not to say audience) could not feasibly be “hand authored” by even hundreds of specialists. The limiting resource is human time, not
computer time.
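The seed-as-compression idea can be sketched in a few lines: one small integer deterministically expands into an arbitrarily large world, so only the seed needs to be stored or transmitted. The tile scheme below is purely illustrative, not any particular game’s:

```python
import random

def generate_map(seed, width, height):
    """Expand a single integer seed into a large, reproducible tile map.
    (The three tiles here are a hypothetical water/land/mountain scheme.)"""
    rng = random.Random(seed)  # every "random" choice flows from the seed
    tiles = "~.^"
    return [[rng.choice(tiles) for _ in range(width)] for _ in range(height)]

# The same seed always regenerates the identical world...
world = generate_map(20170801, 64, 64)
again = generate_map(20170801, 64, 64)
print(world == again)  # True

# ...while a different seed yields (almost surely) a different instance
# of the same kind of world.
other = generate_map(9, 64, 64)
```

A 4,096-tile map here is "compressed" to one integer plus the program that expands it, which is precisely the trick early home computers exploited.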
Procedures can create a large amount of output relatively quickly, leading to a common
issue with PCG. Kate Compton called the problem “10,000 Bowls of Oatmeal.”16 Quantity does
not guarantee variety, and just about any group of things will blend together in sufficient
quantities. Millions of outputs can technically be different, but particularly when viewed as a
16. Kate Compton, “So You Want to Build a Generator,” http://galaxykate0.tumblr.com/post/139774965871/so-you-want-to-build-a-generator (accessed February 22, 2016).
group will lack saliency. Crafting a process that allows qualities to manifest in meaningful ways is not impossible within PCG, but it can be among the hardest things to accomplish, and it is a crucial element of the art.
Bruce Morrissette humorously points out the frustration and potential obscurity of hidden
structures, particularly in literature:
It’s frequently asked, for example, what would have happened if James Joyce had not
used the title Ulysses…But not only did he use the title Ulysses, he even became
impatient with the critics and he told a number of friends where to look in Homer, in The
Odyssey, for the sources of the various parts and from that moment on, the criticism of
Joycian structures soared. The critics now are finding things, probably, that Joyce
himself didn’t intentionally put in. This has happened with a number of other writers.17
Morrissette immediately follows with the more poignant example of Raymond Roussel, a
novelist who grew frustrated that no one had found evidence of his generative system for writing.
Finally he relented and wrote How I Composed Certain of My Novels, a year before his suicide.
“The critics were first amazed, then incredulous, and now they are constantly finding,
everywhere, new aspects of this principle in Roussel, some of which are very unconvincing.”18
Depending on the circumstances, the overt presence of an underlying structure might be crucial
to a reader’s immediate experience, or as unseemly and distracting as viewing a dancing
skeleton. Formal structure might be easy to find and justify in programmed works, since by
definition programs function through explicit symbols. Thematic structure, though, as always,
may flit away into obscurity only to be hunted by critics and readers trying to make sense of their
reading and experience, and can remain wonderfully mercurial despite the digital’s quantization.
17. Alain Robbe-Grillet, Generative Literature and Generative Art: New Essays (Fredericton, N.B., Canada: York Press, 1983), 29.
18. Ibid.
Procedure includes parameters, the buttons and knobs that can produce countless products
from a single process. Casey Reas in “Form+Code” describes parameterization: “In this
capacity, the designer is no longer making choices about a single, final object but creating a
matrix encompassing an entire population of possible designs…This involves searching for and
exploring a population of designs that meet certain requirements, behave in particular ways, or fit
the desires of the designer.”19 Janet Murray looks at both procedure and narrative archetypes as
a means to reconcile the apparent incompatibility of agency and narrative: “to anticipate all the
twists of the kaleidoscope…and to specify not just the events of the plot but also the rules by
which those events would occur…at first this kaleidoscopic composition seems like a violent
break with tradition, but when we look at how stories have historically developed, we find
techniques of pattern and variation that seem very suggestive for computer-based narrative.”20 I
usually find the author’s voice as much in the instructions they create as in the instances they
might specify.
A lot of the early narrative AI projects like ELIZA and TALESPIN21 seemed to fail on the grounds that they were too locked into a procedure, shattering verisimilitude whenever the procedure showed through.
ELIZA mimics a psychiatrist, and can be very convincing for short periods of time. The source
code, however, reads like a hybrid of a telemarketing script and a mad-lib. ELIZA can be
considered something of a satire of psychiatric techniques, reflecting user questions back in a
vague manner that allows the user to provide their own meanings. A simpler antecedent was
19. Casey Reas, Chandler McWilliams, and Jeroen Barendse, Form+Code in Design, Art, and Architecture (New York: Princeton Architectural Press, 2010), 93.
20. Janet H. Murray, Hamlet on the Holodeck: The Future of Narrative in Cyberspace (Free Press, 1997), 185-186.
21. Noah Wardrip-Fruin, Expressive Processing: Digital Fictions, Computer Games, and Software Studies (Cambridge, Mass.: MIT Press, 2012), 122.
known as the “Yes/No” game,22 a simple conversational game where an inquirer was given
answers to their questions on a scrap of paper. The answers were randomized, and the
conversations were surprisingly complex. On the other hand, TALESPIN had the opposite
problem. In 1977 it was a rather sophisticated model of character desires and behaviors. As
sophisticated as the models were, they were not expansive enough, and the stories told were
often baffling, repetitious, or defied common sense, common sense being the bane of Artificial
Intelligence.23 Expressive Processing compares ELIZA, appearing more complicated than it is,
with TALESPIN, hiding a complicated model under deceptively simple output. Both had
shortcomings in emulating conversation and narrative, in part due to limited and inadequate
assumptions of human behavior, but also following a procedure too rigidly. TALESPIN often
had only two characters and a very limited environment, and ultimately crunched simple
numbers based on characters’ desires and wants. ELIZA had a script a savvy conversant could
deduce in a few minutes. A combination of scope too ambitious, a scale too small, and a model
too simple limited each program.
Procedures and structures are crucial to understanding digital media, and I’ve found
Platform Theory useful in articulating the study of how code might be formed by considerations
of larger cultural and technological forces. Nick Montfort explains: “Platform is the abstraction
level beneath code…connecting the fundamentals of digital media work to the cultures in which
that work was done and in which coding, forms, interfaces and eventual use are layered upon
22. Stephen Ramsay, Reading Machines: Toward an Algorithmic Criticism (Urbana: University of Illinois Press, 2011), 60.
23. Noah Wardrip-Fruin, Expressive Processing: Digital Fictions, Computer Games, and Software Studies (Cambridge, Mass.: MIT Press, 2012), 195.
them.”24 Cultural paradigms shape process; even if output might be similar or identical, the
process might have been drastically shaped to fit these paradigms in order to function.
24. Nick Montfort and Ian Bogost, Racing the Beam: The Atari Video Computer System, Platform Studies (Cambridge, Mass.: MIT Press, 2009), 147.
Simulation
Looking at any particular work, it might be hard to separate the ideas of “Procedure” and
“Simulation”, although the difference is profound. Miguel Cepero’s blog devoted to building
large scale virtual spaces noted that “procedural” methods tend to be valuable by being small and
relatively simple to codify, and while the products they produce may prove incredibly large, the
disparity in size comes from their static nature. Cepero observes: “Simulations are special
because they acknowledge the existence of time, cause and effect. You pick a set of rules, a
starting state and you let things unfold from there. If humans are allowed to participate by
changing the current state at any point in time, the end results could be very surprising.”25 Simulation is iterated procedure, and might have hired the same screenwriter that Randomness
did to produce continuity in Chaos.
The idea of scope defines crucial properties of any simulation, with design questions
revolving around what to leave in and what to leave out. Rules of Play describes the “Immersive
Fallacy,” a false equation between realism and immersion, and the bane of many an ill-constructed simulation.26 Realism does not always correspond to immersion, and the design task
generally becomes one of finding “relevant” things to simulate rather than “more.”
Architecture proper, while nominally coming from procedure, like blueprints, and appearing static (what could be more immovable than a building?), truly shows its resilience when subjected to active processes. Christopher Alexander, in “The Timeless Way of Building”, describes how rooms, buildings, and cities come together through a series of
25. Miguel Cepero, “Procedural Information,” Procedural World, http://procworld.blogspot.com/2014/03/procedural-information.html (accessed March 8, 2014).
26. Katie Salen Tekinbaş and Eric Zimmerman, Rules of Play: Game Design Fundamentals (Cambridge, Mass.: MIT Press, 2003), 450.
interlocking patterns. His “pattern language” corresponds to the idea of procedure, but the
multiplicity and persistence of these patterns strongly resemble what would be called simulation
in the digital space. Alexander prescribes planning a building in a way similar to designing a
simulation: “Each living pattern resolves some system of forces, or allows them to resolve
themselves. Each pattern creates an organization which maintains that portion of the world in
balance.”27 He asserts there are a surprisingly small number of relevant patterns within a house
or even a city, but careful attention to the environment, the passing day and night, the seasons,
and the world at large prove relevant to keep a building alive, and to help a city grow. This idea
of balancing forces goes back to the landscape concepts in Feng Shui as well as the works of
Vitruvius. A single procedure will not be enough to respond to the forces that shape the physical
world, but working in tandem, a series of procedures, interlocking and iterating through time,
might be able to adjust and endure.
Noah Wardrip-Fruin summarizes the fundamental idea of the rhetorical capabilities of
procedure and simulation when he describes what he calls the “SimCity Effect”: “Successful play
requires understanding how initial expectation differs from system operation, incrementally
building a model of the system’s internal processes based on experimentation.”28 In other words,
the game teaches a system, and how one might interact with or manipulate it. SimCity embodies a model of a growing urban landscape. Like all simulations, this system has boundaries. Like all
simulations, SimCity also makes certain ideological assumptions. Ian Bogost notes, “The
relationship or feedback loop between the simulation game and its player are bound up with a set
27. Christopher Alexander, The Timeless Way of Building (New York: Oxford University Press, 1979), 134.
28. Noah Wardrip-Fruin, Expressive Processing: Digital Fictions, Computer Games, and Software Studies (Cambridge, Mass.: MIT Press, 2012), 302.
of values; no simulation can escape some ideological context.”29 While the challenge might be
to communicate one’s ideas clearly, each simulation does have something to say. So it goes with
films, music, and the other arts. Instead of offering visuals, or a single narrative, simulations can
invite the audience to experiment, potentially communicating complicated ideas in ways that can
be experienced, learned, and developed in both the work’s terms and those of its user.
A good simulation invites play.
29. Ibid., 303.
Play
There can be many relationships between the author and audience. “The author speaks to
the audience,” “The author controls the audience,” “The author shows the audience,” but the
phrase “the author plays with the audience” and the reciprocal “the audience plays with the
author” prove most intriguing, not least because of the wide spectrum of the meanings of the
word “play.” Play figures into the motives of the Oulipian writers, Dadaists, and Surrealists,
among many other artistic movements. Play implies freedom, safety, and creativity. After
traveling through other definitions in this dissertation, I also find myself asking what the audience brings to my work, and how I can enable or enhance their experience through what they possess.
Miguel Sicart focuses on the role of ethics within computer games, not in terms of what actions should or should not be simulated, but of how the structure of a game encourages self-examination, raising questions a player might ask themselves in playing the game and how they
relate to it. Categorizing games in terms of “ethically open” and “ethically closed,”30 Sicart
differentiates them by their relationship to apparent player agency. Open ethical games
encourage player choices that then affect the world and the player’s relationship with its
characters. Closed ethical games typically have the player follow a character who behaves
according to rules within the set universe. Sicart does not necessarily privilege one over another.
An open ethical game that promises open agency in moral choices can fall flat on its face if it
does not follow through with them, giving numerical values for deeds good and ill in a way that
the player might ignore in favor of a simple resource choice. A closed ethical game can still
force the player to examine their actions in relation to the character’s. An experience can lose a
30. Miguel Sicart, The Ethics of Computer Games, 1st ed. (The MIT Press, 2009), loc. 2699.
lot of its power by being didactic, but retain an ethical center by choosing what questions to ask.
The context of and safety of play allows a freer (and less stressful) examination of one’s ethics,
assumptions, and biases, and not taking advantage of this space would be a lost opportunity.
Play also applies to the mindset of a creator: a willingness to experiment, indifferent to failure, to satisfy curiosity by action. Another fine vocabulary word to use, coming from a major
component of “play”, is “autotelic”31—an “autotelic experience” is one which is its own reward.
While this may seem like a basic requirement for games, many games have received
considerable criticism for appearing to lack this quality—many “free to play” games offer
monetary incentives to skip parts of them, and more than a few narrative-heavy games might
simply reduce to being “movies held hostage.” Instead of playing through such a game,
someone might gain a more satisfying experience viewing the linear portions separately, say,
posted online and edited into a cinematic experience. Since interactivity, on one level, is about
crafting an experience, such an experience needs to prove engaging on its own, or it might work
better shifting its structure or adapting its concept to a different medium.
Are there systems of procedurality that defeat the creative act? Dadaists certainly thought
so, but on a scale of society and politics. Oulipian writers might claim that procedurality is their
creative art, but they nevertheless built in protocol to break protocol. For my part, I look to see
how procedure can serve a creative act, and my hypothesis is that all creativity can be codified,
on some level, as instructions. If the human mind is not a linear machine but rather a series of
overlapping dynamic systems, a description might help but true prediction may prove
impossible. Karlis Racevskis concludes his essay The Imaginary, the Symbolic, and the Real:
31. Katie Salen Tekinbaş and Eric Zimmerman, Rules of Play: Game Design Fundamentals (Cambridge, Mass.: MIT Press, 2003), 332.
“Instead of taking place in an inspired soul, the artist appears as a function that represents a
nexus in a rhizomatous network of cultural artifacts...Instead of creating, the artist generates, and
the work of art…points to an impossible origin and aims to an exteriority of infinite
possibilities.”32
I find agency a defining component of interactivity, and still a frontier that we are just
beginning to understand in the context of media. However, the literature I’ve read has often
warned of the possibilities and hazards in experimenting with the relationship between the author
and audience. Simply increasing participation is not enough, either in games or in larger
experiments of social agency. I wish I could devote more time to doing Claire Bishop’s
Artificial Hells justice, as in many ways this was the book I was looking for coming into my
studies. I had assumed the lack of attention to games also implied a complete lack of
acknowledgement of the potential for agency, both personal and social. While I still believe that
to a certain extent, Bishop looks at a century of history exploring the overlap of art and social
participation, from the Italian Futurists, Dadaists, and Communists to artist Paul Chan’s social
collaboration revolving around Waiting for Godot and the aftermath of Hurricane Katrina. The
book finds a wide variety of causes, effects, and decidedly mixed results that require a sober (but
still hopeful) approach. A parting piece of advice gives me further material for thought
regarding the establishment of the Internet, with all of its participatory potential. “This new
proximity between spectacle and participation underlines the necessity of sustaining a tension
between artistic and social critiques. The most striking projects that constitute the history of
32. Alain Robbe-Grillet, Generative Literature and Generative Art: New Essays (Fredericton, N.B., Canada: York Press, 1983), 37.
participatory art unseat all of the polarities on which discourse is founded (individual/collective,
author/spectator, active/passive) but not with the goal of collapsing them.”33
33. Claire Bishop, Artificial Hells: Participatory Art and the Politics of Spectatorship (Verso, 2012), loc. 5436.
Histories
A definitive, complete history of procedural content generation would be overwhelming
and contain numerous arguments over where to begin and what topics should be included.
Valid, instructive examples of PCG can be centuries old. Examples include the 13th-century philosopher Ramon Llull, who devised a “machine,” in essence a circular diagram, to combine
phrases, producing theological decrees. Jorge Luis Borges considered this construct to have
more potential in poetry than philosophy.34 In Gulliver’s Travels, Jonathan Swift had the
Academy of Projectors follow the decrees of an “Engine” that would produce random text, a
device for satire.
Gillian Smith in particular has produced a number of studies of the history of procedural
content generation, focusing on game-related works,35 both digital and non-digital. Her own
examination of pre- and non-digital procedural games has examined the role of tabletop
roleplaying games, with their use of logical and random systems to adjudicate and simulate
shared group experiences.36
I’ve chosen to primarily focus on the development of procedural content generation as
digital media began to enter mainstream use, looking at groups and works that seemed to
anticipate, if not outright desire, a change in media production and experience that was
ultimately brought by the computer and digitization. Starting in the 1960s, I will outline goals
and philosophies of the Oulipo group, whose work draws upon the common drives of procedural
content generation. I will also examine Gene Youngblood’s book Expanded Cinema, as well as
34. Jorge Luis Borges and Eliot Weinberger, Selected Non-Fictions (New York: Penguin Books, 2000), 155-159.
35. Gillian Smith, “An Analog History of Procedural Content Generation” (paper presented at Foundations of Digital Games, Monterey, CA, 2015).
36. Gillian Smith, “History of Procedural Content Generation,” http://sokath.com/main/blog/2015/05/23/history-of-procedural-content-generation/ (accessed June 2015).
Lapis, one of the cinematic pieces contained in Youngblood’s survey. Expanded Cinema was
published in 1970, when the ideas behind media were clearly changing but before the idea of the
digital dominated how media would evolve. I will conclude this section with a history the
demoscene, a digital subculture with a strange origin and even stranger consequential body of
work, a movement that parallels the development of the personal computer and the emergence of
networked communities as they moved from bulletin board services to the Internet. The three
historical examples prove instructive in discussing how evolutions in media transformed, and in
many ways failed to transform, ideas and aspirations contained within PCG. Despite an apparent
revolution in technology and communication, the common axioms, promises, and potential of
procedural art seem largely unchanged, but watching these ideas as they are pursued during the
emergence of digital media proves a useful journey to take.
The Oulipo Group
It would be tempting to declare outright that any aspirations or achievements made
by various explorers of procedural content generation have simply been previously articulated,
accomplished or anticipated by the Oulipo group.
The term “OUvroir de LIttérature POtentielle” roughly translates to “The Workroom of
Potential Literature.” Discussions about the meaning and goals of the Oulipo group call into
question each part of the assembled title, but the terms “workroom” and “potential” share
particular qualities with later digital methods adopted as computing grew more accessible to a
wider range of creators. The Oulipo writers themselves anticipated the computer’s own potential
for their mechanical approach to writing.37
The first vocation can be classified as “Automatic Systems,” methods of writing an
original text from a blank piece of paper. Examples include lipograms (eschewing specific
letters) and other specific writing constraints. The second vocation uses automatic
transformation of an existing text, such as the game S+7. S+7 requires a dictionary and a written
work of your choice: simply shift any noun found in the text seven words down in the dictionary,
replacing each person, place or thing with a new one. The third vocation is called Transposition,
or the application of different systems and constraints. A perfect example would be one of my
favorite Oulipian works: Word Matrices, which “multiply” blocks of words together in the same
manner as mathematical matrices. Much like Italo Calvino’s city of Despina,38 where sailors
think of the arid coastal city as one of camels, and desert caravans think of the same city as one
of ships, what can be learned from the word matrices really depends on whether one approaches
37. Warren F. Motte, ed., Oulipo: A Primer of Potential Literature, 1st Dalkey Archive ed., French Literature Series (Normal, Ill.: Dalkey Archive Press, 1998), 17.
38. Italo Calvino, Invisible Cities (Mariner Books, 2013), 17.
the process as a mathematician or a writer. Language translation might qualify as a
transposition, especially when considering additional constraints that might be added to a written
work.
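The S+7 transformation described above mechanizes readily. A toy sketch, using a deliberately tiny stand-in "dictionary" and a hand-supplied noun list rather than a real dictionary and part-of-speech analysis:

```python
# A deliberately tiny, alphabetized stand-in for a real dictionary; a full
# implementation would use an actual dictionary and a part-of-speech tagger.
lexicon = ["ant", "ape", "axe", "bat", "bee", "cat", "cow", "dog", "eel", "elk",
           "fox", "gnu", "hen", "jay", "owl", "pig", "ram", "rat", "sow", "yak"]

def s_plus_7(text, nouns, lexicon):
    """Replace each listed noun with the word seven entries later in the lexicon."""
    def shift(word):
        if word in nouns and word in lexicon:
            return lexicon[(lexicon.index(word) + 7) % len(lexicon)]
        return word
    return " ".join(shift(w) for w in text.split())

print(s_plus_7("the cat saw the dog", {"cat", "dog"}, lexicon))
# the hen saw the owl
```

Which dictionary one feeds the procedure changes every result, so the constraint, like the word matrices, rewards approaching it as both a mathematician and a writer.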
Looking at the “procedure” in Procedural Content Generation, the Oulipo group embraced
procedure and instruction, to the point of defying the element of randomness altogether: “The
members of the Oulipo have never hidden their abhorrence of the aleatory, of bogus
fortunetellers and penny-ante lotteries: ‘The Oulipo is anti-chance,’...Make no mistake about it:
potentiality is uncertain, but not a matter of chance. We know perfectly well everything that can
happen, but we don’t know whether it will happen.”39 Many Oulipian exercises, indeed a central
tenet of the philosophy, revolved around the concept of “exhaustion.” Given a set length, there
are only so many ways a set of letters can be arranged, and far fewer arrangements that make
any sort of sense. Applying a constraint, such as a lipogram that forbids the use of a specific
letter or letters, culls potential meanings even further. Raymond Queneau’s “Hundred Thousand
Billion Poems”, created from the individual lines of ten sonnets, offers a vast number of potential poems, but still a finite amount of meaning, and in a way a process that is definitively “solved”. Finding an
interesting system would be a higher calling than producing a simple “masterpiece”:
We intend to inventory—or to invent—the procedures by which expression becomes
capable of transmuting itself, solely through its verbal craft, into other more or less
numerous expressions. It’s a question of deliberately provoking that which masterpieces
have been secondarily produced—produced into the bargain, as it were—and especially to
render clear in the very treatment of words and phrases what the mysterious alchemy of
masterpieces engenders in the superior spheres of aesthetic meaning and fascination.40
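Queneau's arithmetic can be checked directly: ten interchangeable choices for each of a sonnet's fourteen lines yield 10^14 poems, each addressable by a fourteen-digit index. A sketch, with placeholder strings standing in for Queneau's actual lines:

```python
# Placeholder text standing in for Queneau's actual lines.
sonnets = [[f"sonnet {s}, line {l}" for l in range(14)] for s in range(10)]

total = 10 ** 14   # ten choices for each of fourteen lines
print(total)       # 100000000000000, the "hundred thousand billion"

def poem(index):
    """Decode one poem from its index: decimal digit k of the index selects
    which sonnet supplies line k, so every index below 10**14 names a
    distinct poem."""
    digits = [(index // 10 ** k) % 10 for k in range(14)]
    return [sonnets[d][line] for line, d in enumerate(digits)]

print(poem(0)[0])  # sonnet 0, line 0
print(poem(3)[0])  # sonnet 3, line 0
```

The space is enormous but exhaustible in principle, which is exactly the Oulipian sense in which the work is "solved" before any single poem is read.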
39. Warren F. Motte, ed., Oulipo: A Primer of Potential Literature, 1st Dalkey Archive ed., French Literature Series (Normal, Ill.: Dalkey Archive Press, 1998), 17.
40. Ibid., 49.
Even with strict adherence to rules, Oulipian writers were not above breaking them, if that
breakage were itself in the rules. “Clinamen” refers to the purposeful breaking of a constraint, a
rule to break a rule, so to speak. The term comes from ancient atomist philosophy, describing
the motion of atoms in a way analogous to the modern definitions of chaos. Georges Perec used
the idea of “clinamen”, intentional errors or glitches,41 made conspicuous by the consequences in
the resulting structure. Perec fills his novel Life, a User’s Manual with clinamen, omitting a
single chapter from his chess-puzzle traversal of a 100-space apartment building, or leaving out
a final sentence in a multi-page acrostic hiding letters that spell out “EGO”, in a chapter hinting
that a particular character acts as a stand-in for the author. Clinamen and other sorts of
maneuvers can take the place of randomness, giving a slight shift away from too much
regularity, enough to give character, and their exact placement can betray further authorial intent
and meaning.
Where “dice-rolling” randomness was supposedly rejected, clinamen clearly corresponds
to a chaotic element. And what if one could produce random results through procedure? While
sounding paradoxical, this task is precisely what a digital computer must do. Certain methods
will be detailed in later examples, but Oulipian writers were not above similar activities.
Queneau looked at the distribution of prime numbers because “They imitate chance while
obeying a law.”42 The predilection of digital computers for binary numbers provides a similar
lawful randomness, but that is a story for later.
The focus on systems, constraint, data compression, but above all potential would be familiar from many statements of the role of procedural content generation in a digital computation
41. Kimberly Bohman-Kalaja, Reading Games: An Aesthetics of Play in Flann O’Brien, Samuel Beckett & Georges Perec (Champaign, IL: Dalkey Archive Press, 2007), 161.
42. Warren F. Motte, ed., Oulipo: A Primer of Potential Literature, 1st Dalkey Archive ed., French Literature Series (Normal, Ill.: Dalkey Archive Press, 1998), 18.
setting. The explicit image of a workshop, a group focusing on craft, would be very familiar to the tinkering and experimental groups playing with different types of PCG.
One source of differing opinions between writers of the group came from the role of the
process versus the product:
If the Oulipo is unanimous in promoting the use of formal constraint, there is, however,
some internal debate as to the number of texts resulting from any given constraint.
Queneau […] calls for unicity: once a constraint is elaborated, a few texts are provided to
illustrate it […] Roubaud […] cautions against the proliferation of texts resulting from a
given restraint; for him, ‘the ideal constraint gives rise to one text only.’43
Oulipo writer François Le Lionnais classified any produced texts as “applied Oulipo.”44 No
matter the exact qualities the produced texts might have, members of the group gave distinct
attention to the processes they used.
As I look into later digital algorithms and games, I find similar constraints being
used in multiple places. The adaptable algorithm that produces Perlin noise (covered in the case
studies section later) would be analogous to an Oulipian constraint, and, being modular in a way
only digital constructions can be, is more of an idea that can be applied to imagery, motion, time,
and other concepts. Applied Perlin noise is a different entity than the ideas implied behind its
original form, an algorithm designed to wield a random, yet mathematically continuous set of
numbers. The myriad applications of Perlin noise seem to share the aspirations of various
Oulipian constraints. Finding a tool that can aid a writer, in literary construction or even simply
in thought, is a major, if not primary goal for the Oulipo.
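A minimal one-dimensional sketch in the spirit of Perlin's gradient noise (not his reference implementation, though the fade curve follows his general scheme) shows the relevant properties: seeded, and therefore repeatable, values that nonetheless vary smoothly and continuously:

```python
import random

def make_noise(seed):
    """One-dimensional gradient noise in the spirit of Perlin's algorithm:
    a seeded random gradient at each integer lattice point, blended with
    a 6t^5 - 15t^4 + 10t^3 fade curve."""
    rng = random.Random(seed)
    grads = [rng.uniform(-1.0, 1.0) for _ in range(256)]

    def fade(t):  # zero slope at t = 0 and t = 1, so cells join smoothly
        return t * t * t * (t * (t * 6 - 15) + 10)

    def noise(x):
        i = int(x) if x >= 0 else int(x) - 1   # floor to the lattice cell
        t = x - i
        g0, g1 = grads[i % 256], grads[(i + 1) % 256]
        return (1 - fade(t)) * (g0 * t) + fade(t) * (g1 * (t - 1))

    return noise

n = make_noise(2017)
print(n(3.0))  # 0.0: gradient noise vanishes exactly at lattice points
curve = [n(x / 10) for x in range(30)]  # smooth, continuous, and repeatable
```

Random where sampled, lawful in construction: the same tension Queneau admired in the primes, and the modularity that lets the one idea apply to imagery, motion, and time.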
43. Ibid., 11.
44. Ibid.
Acrobatics
Le Lionnais writes:
Most writers and readers feel (or pretend to feel) that extremely constraining structures
such as the acrostic, spoonerisms, the lipogram, the palindrome, or the holorhyme (to cite
only these five) are mere examples of acrobatics and deserve nothing more than a wry
grin, since they could never help to engender truly valid works of art. Never? Indeed.
People are a little too quick to sneer at acrobatics. Breaking a record in one of the
extremely constraining structures can in itself serve to justify the work; the emotion that
derives from its semantic aspect constitutes a value which should certainly not be
overlooked, but which remains nonetheless secondary.[45]
What could be more acrobatic than writing a novel without using the letter “e,” the most
commonly used letter in either French or English, and then translating it? La Disparition/A Void
contains hints at its unnameable absence within its 25 chapters, and overall the feat is frankly
impressive. A writer would need an exceptional vocabulary, and, more importantly, a sense of
structure bridging two languages, and a creative, open, and adaptable energy to navigate the
winding barriers and passages formed by that taboo vowel. This maze (labyrinth) erects (lifts)
barriers (walls), but also reveals (shows) paths to continue (go forward).
The virtuosity and mastery, that is to say the “acrobatics,” often required of writing
within constraints would be understandable to other groups relying on procedural methods,
working under limitations imposed more by technological and infrastructural constraints. The
“demoscene,” an international, informal group of hackers, coders, artists, and musicians arising
from early home computer communities, had “acrobatics” as a primary motivator for the imagery
they produced in competitions over the decades starting from the earliest widespread presence of
home computers.
[45] Ibid., 30.
Oulipian works were almost never explicitly interactive (there are a few notable
exceptions), but the literature demanded the attention of the reader in much the same way a game
or puzzle might. Indeed the distinction between an Oulipian text and a puzzle may be marginal
at best. Queneau asks a certain amount of effort on the part of his readers, effort he imagines
many crave: “Why shouldn’t one demand a certain effort on the reader’s part? Everything is
always explained to him. He must eventually tire of being treated with such contempt.”[46] The
artifice might be hidden, but the texts seem to increase their meaning when given proper
attention, even if the search by a reader does not reveal the structure, but rather (or especially)
other details. The interplay begs for a digital program for the reader to play with (if one does not
already exist, one would be straightforward to write), and the dynamics of Oulipian systems invite
the reader to take part.
I mentioned PCG’s role of compression. Queneau’s 100 Trillion Poems (Cent mille
milliards de poèmes, also translated as A Hundred Thousand Billion Poems, One Hundred
Million Million Poems, and so on; in short, an incredible number) comes from 10 sonnets, crafted so
that their lines are interchangeable. 100 Trillion Poems comes from this tiny set of sonnets,
barely a pamphlet, due to the methods of combination enabled by the sonnet structure.
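The combinatorial arithmetic behind this compression can be sketched in a few lines. The placeholder strings below are hypothetical stand-ins for Queneau's actual lines; only the structure (10 interchangeable alternatives for each of 14 line positions) is taken from the book.

```python
import random

# Placeholder lines standing in for Queneau's: 10 sonnets, 14 lines each.
sonnets = [[f"sonnet {s}, line {l}" for l in range(14)] for s in range(10)]

def assemble(choices):
    """Build one poem from a 14-digit choice vector (one digit per line)."""
    return [sonnets[c][line] for line, c in enumerate(choices)]

# Ten alternatives for each of fourteen positions: 10^14 possible sonnets.
total = 10 ** 14  # 100,000,000,000,000

poem = assemble([random.randrange(10) for _ in range(14)])
assert len(poem) == 14
```

A pamphlet-sized input thus addresses a space of one hundred trillion outputs, the compression-through-combination the paragraph above points to.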
The promise of an unimaginable number of poems, more than anyone could read in
multiple lifetimes, indeed larger than all the previous literary output of the human race to this
point, might seem familiar to marketing buzz around games relying on PCG—instead of poems,
stars might be the object in question. While quantity might have a quality all its own, the
quantities involved bring in simple economic questions—how much is each sonnet worth, then?
What meaning can be gleaned in a given sonnet that was formed without the conscious portion of
[46] Ibid., 21.
the writer’s mind? And, while vast, the number carries an odd suggestion of the finite nature of
the language itself. By exploiting the structure, Queneau has implicated all sonnets, connecting
them to the 10 exemplars and their myriad offspring.
Exercises in Style
Queneau compiled 99 variants on a short narrative in the book Exercises in Style. The
narrative involves the meeting of a belligerent man on a bus, who is later seen advising someone
to get another button for their overcoat. The story is banal, if not pointless, and this serves to
highlight the goals of the exercises, or at least not get in their way.
While a meditation on language forms, the work proves rather educational; I’d be
tempted to use it in a class teaching ideas like “Litotes” and “Synchysis,” two of
the 99 exercises, which also include examples of Cockney and Spoonerisms.
One might be tempted to claim Exercises in Style isn’t really an “Oulipian” work. Within
the overall plan of altering a text, there’s no math (individual exercises might have plenty,
however). That is to say, the variants are qualitative rather than quantitative. An exercise might
be based on a constraint (haiku for instance) but the collection as a whole, unified by a
deliberately banal story, would be difficult to distill into rules in the way, say, S+7 would be.
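S+7, by contrast, does reduce to a few mechanical rules. A toy sketch follows; the small lexicon is a hypothetical stand-in, since the actual procedure uses a complete dictionary and replaces only the nouns.

```python
# Toy S+7 ("substantive + 7"): replace each word found in the lexicon with
# the word seven entries later. The lexicon here is a hypothetical stand-in;
# the real constraint uses a full dictionary and targets nouns only.
LEXICON = sorted(["apple", "bus", "button", "coat", "hat", "man",
                  "neck", "poem", "street", "text", "vowel", "word"])

def s_plus_7(word):
    if word in LEXICON:
        i = LEXICON.index(word)
        return LEXICON[(i + 7) % len(LEXICON)]  # wrap past the last entry
    return word  # words outside the lexicon pass through unchanged

sentence = "the man on the bus lost a button"
result = " ".join(s_plus_7(w) for w in sentence.split())
# -> "the apple on the street lost a text"
```

The contrast with Exercises in Style is the point: S+7 is exhausted by this handful of lines, while "apply a style" resists any comparably compact rule set.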
The constraints are based on styles, slang, and wordplay. The preface of Barbara Wright’s
translation divides the work into several types of sub-exercises: poetic forms, jargons, rhetorical
forms, etc. The qualitative nature of the overall work does suggest an aspiration of both Oulipian
constraints and procedurality. One fantasy of PCG would be a device that could create
something very much like Exercises in Style: given a skeletal narrative, outline and apply a style
to the narrative the same way one might apply a color or texture to a graphical element. If a
primary aspect of Oulipo is finding and exhausting potential constraints and structures, the
implicit quantitative question for Exercises in Style is: “How many styles are there?” To begin to
even fail at answering this question, one needs to define what a “style” is, and to realize how
inclusive the term “style” can be. The word easily encompasses the 99 examples given,
spreading across language, structure, cultural affectations, and several domains any one author
would certainly miss. Exercises in Style forces contemplation of style in seeing potential, in
reading a work, or directing an author towards paths not yet seen. Queneau summarized as
much: “People see it as an attempt to demolish literature. That was not my intention. In any
case my intention was merely to produce some exercises; the finished product may possibly act
as a kind of rust-remover to literature…”[47]
While my ignorance of French is regrettable, Oulipian works seem to gain in translation,
if only in their own demonstration of acrobatics. Translator Barbara Wright meditates on how
difficult and appropriate the process of translation was:
Queneau told me that the Exercises was one of his books which he would like to be
translated—(he didn’t suggest by whom). At the time I thought he was crazy. I thought
that the book was an experiment with the French language as such, and therefore as
untranslatable as the smell of garlic in the Paris metro. But I was wrong. In the same
way as the story as such doesn’t matter, the particular language it is written in doesn’t
matter as such. Perhaps the book is an exercise in communication patterns, whatever
their linguistic sounds. And it seems to me that Queneau’s attitude of enquiry and
examination can, and perhaps should?—be applied to every language…[48]
Liberation through constraint may be most obvious here. What defines and constrains literature
more than language? Yet I doubt I’d be making much of a stretch in declaring any Oulipian
work that has not been translated is incomplete. Translation reveals the walls of language, their
[47] Raymond Queneau, Exercises in Style, trans. Barbara Wright, 2nd ed. (New York: New Directions, 1981), 15.
[48] Ibid., 16.
inherent cracks and qualities. The task of translating La Disparition was slightly easier than it
might first appear, as “e” appears in many of the same words in both tongues (“le” and “the,” for
example), already providing a large, shared pool of words to avoid while writing. The
constraints seek to liberate in part because they force attention on what a writer might otherwise
take for granted or simply accept. Proscribing “e” makes all vowels important, their
contemplation essential. The multiple types of “style” in Queneau’s “Exercises” tend to exist in
history, culture, and rhetorical structure, their translation making what defines these “styles”
paramount to consider.
The Oulipo group managed to address each of the PCG axioms listed in the introduction,
mostly before computers became the common way to frame such work and techniques. The very
idea of “potential” looks at the incredibly large yet ultimately finite space of language. Mastery
and “acrobatics” look to the idea of writing as an activity in and of itself, and I am tempted to
equate languages like French and English to hardware and software platforms that need to be
learned in order to develop digital works and methods of expression. Compression and
systemization go hand in hand, as well as the open question of how important a given “output”
from an Oulipian constraint might be. These constraints were the object of focus, the medium,
perhaps the true goal of the group, even as the importance of a given poem or novel was debated
among the writers.
Youngblood’s Expanded Cinema
While Oulipo searched for a core of techniques to mold language, other artists explored
similar paths in cinema. Moving images could, paradoxically, be considered quite static.
Cinema classically consists of a series of still photographs, just as animation consists of
unmoving drawings. A theatre-goer cannot turn their head to see what lies beyond the edge of
the frame. Changes in technology, but more importantly changes in thought, were in turn
changing cinema and media in general. A static image promised to become a mutable image, as
well as one that could be shared. Through the 20th century, emerging artistic movements in
visual and auditory media promised to engage other senses and the mind in unprecedented ways.
The 1960s and 1970s came just before the concept of the digital almost completely dominated
discussion of how media was to continue to evolve.
Gene Youngblood’s Expanded Cinema, written in 1970, attempts to come to terms with
this media in transition, often referring to terms like “The Noosphere” (taken from Teilhard de
Chardin) as well as “The Paleocybernetic Age.” Youngblood clearly anticipates concerns of what
could later be called procedural imagery and digital media, writing a few scant years before
computing would be adopted by mainstream culture. In looking to
possible post-celluloid methods of cinematic imagery, he mentions real-time polygonal graphics
on a multi-million dollar machine, alongside case studies that include analog computing and
holography. Expanded Cinema describes a point in time when aesthetics, but more importantly
thoughts on aesthetics, were changing due to emerging technologies. Though not named as such,
the book describes qualities of what would later be termed procedural content generation, as well
as virtual reality and immersive media. Youngblood fixates on television as the technological
infrastructure of choice in illustrating his views on where and how media will evolve. However,
much of what he says about television applies all the more so to what the Internet has become in
the last few decades.
Answering what Youngblood means by “expanded cinema” takes the course of the book,
but a few introductory ideas show that it is a dynamic rather than a static entity:
Expanded cinema isn't a movie at all: like life it's a process of becoming, man's ongoing
historical drive to manifest his consciousness outside of his mind, in front of his eyes.
One no longer can specialize in a single discipline and hope truthfully to express a clear
picture of its relationships in the environment. This is especially true in the case of the
intermedia network of cinema and television, which now functions as nothing less than
the nervous system of mankind.[49]
One subset of expanded cinema is synaesthetic cinema, that is, cinema that plays to a
variety of human senses, often substituting one for another. A temptation would be to equate this
with virtual reality, while further exploration shows that interactive elements and deeper
psychological stimulation can be contributing elements. Provocative terms like “The End of
Drama” and “The End of Fiction” come up within the prose, but these hyperbolic terms are
clarified by the idea that technique in many of the works explored supersedes theme. Artifice
works to encompass rather than separate a viewer. On a stage or film set the picture plane of a
camera disappears, one way or another, and does not communicate fictional contrivances but
internal psychological states. Youngblood saw the intangible qualities of this type of “cinema”
to have transcendental potential:
The fundamental subject of synaesthetic cinema—forces and energies—cannot be
photographed. It's not what we're seeing so much as the process and effect of seeing: that
is, the phenomenon of experience itself, which exists only in the viewer. Synaesthetic
cinema abandons traditional narrative because events in reality do not move in linear
fashion. It abandons common notions of "style" because there is no style in nature. It is
concerned less with facts than with metaphysics, and there is no fact that is not also
metaphysical. One cannot photograph metaphysical forces. One cannot even “represent"
[49] Gene Youngblood, Expanded Cinema, 1st ed. (New York: E. P. Dutton, 1970), 41.
them. One can, however, actually evoke them in the inarticulate conscious of the
viewer.[50]
Among the many examples Youngblood lists in his survey are two films from the brothers
John and James Whitney, Yantra (1957) and Lapis (1966). Both animations depict the abstract
motion of countless points of light, and their titles refer to mystical ideas. Yantra took several
years and was done entirely by hand, punching holes in notecards and projecting the light onto
film. Lapis instead used a repurposed analog computer that had been used for artillery guidance
in World War II. Lapis took only a fraction of the time due to the mechanical aid, and the
depicted motion is very different, more continuous and regular, while producing unique
kaleidoscopic visuals. John Whitney had been looking to create a common platform for abstract
film: “People tried all different techniques of abstract cinema, and it's strange that no one has
really invented anything that another experimental filmmaker can take up and use himself. It's
starting afresh every time.”[51]
John Whitney moved from analog to digital computing, an
innovation that offered a common place to build the techniques he desired. Indeed, the qualities
that make Lapis unique seem to come from the fact that it narrowly predates the arrival of digital
computing in the popular imagination. James Whitney, on the other hand, appears to have been
disillusioned by the mechanistic processes that drove Lapis, and did not make another film for
many years after, instead spending that time working with ceramics.
[50] Ibid., 97.
[51] Ibid., 214.
Fig. 1: James Whitney, Lapis, 1963-1966. From: Ken’s Blog, http://blog.kenperlin.com/?m=200801&paged=2
Fig. 2: Machinery used to construct Lapis. From: Old Siggraph Site,
http://old.siggraph.org/publications/newsletter/v33n3/columns/machover.html
The synaesthetic strives for an asymptotic goal, a Utopian promise where being and
action become one and the same. As Youngblood mentions later, “It is the belief of those who
work in cybernetic art that the computer is the tool that someday will erase the division between
what we feel and what we see.”[52]
In interactive works, the experience can more validly be
[52] Ibid., 189.
considered tactile or emotional rather than visual; you are acting in a simulated space, instead of
simply viewing it. The interactive element in digital works like games captures more than a
simple visual stimulus. Engaging the agency and proprioception of a game player or decision-
making audience member would fall squarely into Youngblood’s phenomena that cannot be
photographed. His near-exclusive focus on linear media becomes fascinating and frustrating in
equal measure, as much of what he says becomes all the more applicable to the purely digital
works that in 1970 were just starting to enter mainstream consciousness.
Youngblood later focused on the potential of digital media to democratize broadcasting,
and the political and social implications at stake. The survey of emerging media types in
Expanded Cinema alludes to a variety of paths, showing how the static media of photography
and classic cinema would take on digital, and ultimately procedural, elements.
Expanded Cinema’s survey, written just before the emergence of dynamic,
computational, and cinematic networked communities, still has much to offer nearly fifty years
later. Somewhat perversely, I will examine one such community that developed a unique
cinematic tradition almost by accident, and has origins in the dawn of home computing and home
software piracy. This “Demoscene” offers a strange but informative narrative incorporating both
cinematic and PCG elements, its evolution running parallel to that of the home computer and its
fusion with video, music, interactivity and connectivity.
The Demoscene
While artists and engineers continued to explore the potential of digital media through the
80s and 90s, a subculture known as the “demoscene” took shape. This movement primarily began as an
international group of teenagers and young adults engaging in questionably legal activities who,
almost by accident, created an incredibly large and storied body of digital work. Youngblood’s
predictions, both accurate and inaccurate, prove strangely instructive when exploring how and
why this connected digital community would create cinema-like artifacts that defy most previous
definitions.
“Demos,” executable, realtime animations designed for viewing on home computers, have a
long history and a storied community, and reveal an interesting facet of digital media. Demos contain
animation, music, even video, and play with color and motion in a manner expected of most linear
media like film. However, programmed dynamic entities, systems that act as both content and form,
serve as a component of what defines a realtime demo and differentiates it from even its pre-rendered
computer animated cousins—in their own way a noun, an adjective, and a verb. These sorts of
dynamic entities exist in other digital media such as games and other programs, but their use for a
specific sort of cinematic purpose, one decades old and still evolving, encourages and rewards a closer
look.
A Definition of Demo
Demos have grown in complexity and scope over decades, and have roots almost as old as
personal computers themselves. Though fascinating to watch, demos have remained enigmatic and
obscure even within digital media criticism. Part of any discussion on demos still needs to help define
what exactly demos are and what they are for. The question of their utility proves puzzling, and seeps into any
lengthy examination of these artifacts. While slang synonyms for demos include “products”, or
“prods”, and many could act as a sort of resume for would-be programming job hunters, coders do not
create demos to sell. Famous demos are shown (and often coded) during gatherings and specific demo
competitions, in a theatre-like setting, while the vast majority of demos are viewed on home computers,
either downloaded from the internet or, in days of yore, a bulletin board service or floppy disk. Demos
are specific digital objects, belonging to a very real subculture that produces and consumes them. With
roots in practices of vanguard computer enthusiasts and software pirates, demos might be considered
something of an underground movement, artwork arising from computerized graffiti and jargon given
dynamic form.
The story of the demoscene offers many parallels to a living ecosystem complete with its own
genealogy and geologic record. Through a continuous evolution, its initial artifacts bear little evident
resemblance to its current incarnations, and the scene has had its share of environmental upheavals
ranging from platform migration to splits in philosophy, branching techniques, and methods of
distribution. Even the demos themselves could be likened more to living creatures than static records.
Many demos “decompress” and unfold like a butterfly from a chrysalis; others perform the rather
dangerous practice of rewriting their own source code while they run. Demos share these types of
activities with the far more malevolent computer viruses, and many virus checkers find the presence of
a demo on a hard drive questionable enough to red flag. Various tropes, effects, and trends within
demos undergo a similar virus-like mutation, escalation, diversification, and sometimes extinction.
Demos fall into a larger category of digital cultural artifacts that critics find fascinating and
tricky to codify. Demos share their reliance on the philosophy of “real time” with machinima, for
instance. Machinima rose to prominence with the arrival of widespread, practical real time 3D in the
mid 1990's, when hackers dissected the source code of popular games and turned it toward their own
cinematic ends. Games such as Id Software's “Doom” and “Quake” offered a good start and handy
toolkit analogous to a stage with props and costumes. Still, the ideology, goals, and technology behind
demo creation diverges from machinima in many ways. The demoscene's avoidance of interactivity
contrasts with the necessity for user interaction with machinima's digital puppetry. A level of
interaction with code and product also separates machinima and demo coding. A large portion of
machinima comes from appropriating existing assets, manipulating objects and virtual characters that
already exist, while demo coding romanticizes starting code from scratch, accessing the computer at a
deeper level. A demo might include narrative or thematic elements, yet its execution implicitly draws
attention to the question of how it was created.
Virtuosity acts as a key component to appreciating a demo. Producing animation and sound
from code combines skills and experience reminiscent of both musicians and magicians. Viewers can
simply enjoy the parade of effects and music in and of themselves, but the primary audience of demos
are computer users and fellow demo programmers. A major component of pleasure is admiration of
the skill necessary to implement a sequence, as well as bewilderment on how such an effect was
accomplished. Legendary demos invoke the question “How did they do that?” as much as the
exclamation “That's amazing!” Quantity, such as more polygons or sprites, could impress, but certain
effects or dynamics could simultaneously animate beautifully while challenging a viewer to figure out
how such an effect could be coded to display in real time.
A demo could also lampoon such a competitive atmosphere. “The Real 40k”, while
legitimately fitting inside the tiny file size of 40 kilobytes, displays all the candor of a snarky magician,
starting by displaying 40 instances of the letter “K”. Technically and aesthetically impressive sections
mingle with further pun-delivered promises, such as a promised screen with “16 planes” that then
shows 16 bitmapped airplanes, as opposed to bitmap or geometric planes.
“The Real 40k” intersperses commentary with spectacle, giving a vaudevillian sort of feel, and
many early demos share this sort of atmosphere. The old saying of “tell them what you're going to do,
do it, then tell them what you did” was not always followed, but since earlier demos were typically
explicit “demonstrations”, the format proved useful. As demos developed through the mid-1990s,
demos might show a prompt describing an effect while it occurred (“Phong shading”, “spline
animation”, etc).
Courtesy limits most demo lengths to around 5-11 minutes. The limit apparently comes not
from technological or commercial restraints, but rather the patience of a viewing audience, and possibly
a desire to avoid escalation in that particular instance. The 45 minute length of a 1991 demo
competition winner, “Odyssey”, might have something to do with this convention.[53] As mentioned
below, making an incredibly long, even indefinitely long demo can be trivial, but given methods of
distribution and a desire for spectacle, demos resemble sprints more than marathons.
Hardware plays an obviously important role in demo coding, both in providing a common focus
for demo communities and in influencing the form the products ultimately take:
Sometimes, the features of a specific platform lead the author to use a specific, "platform-
optimal" means of representation (for instance, preferring vertical scrolling direction on the
Atari 2600, or using simple and inexpensive copper tricks for transitions on the Amiga 500). In
this way, each demoscene platform builds its own platform-specific audiovisual "dialect".
Similarly, size-restricted categories and software platforms also build their own "dialects".[54]
[53] Carsten Cumbrowski. “The History of the Demoscene.” Roy/SAC. http://www.roysac.com/blog/2008/01/the-history-of-the-demoscene/ (Accessed January 23, 2008).
[54] Ville-Matias Heikkilä. “Putting the demoscene in a Context.” Countercomplex. http://www.pelulamu.net/countercomplex/putting-the-demoscene-in-a-context/ (Accessed July 11, 2009).
Demos invite both a direct acknowledgment of their underlying construction as well as a view of their
protean past. Montfort and Bogost's words on Platform Studies, used to explore the methods behind
programming the Atari 2600, prove equally, if not more relevant here:
The detailed analysis of hardware and code connects to the experience of developers who
created software for a platform and users who interacted with and will interact with programs
on that platform. Only serious investigation of computing systems as specific machines can
reveal the relationships between these systems and creativity, design, expression, and culture.[55]
Since coders produced demos specifically for consumption by fellow demo producers, or at least an
audience with a higher than average knowledge of computers, platform analysis can act as one method
of understanding demos as a medium. Still, a technological influence does not equate to a
technological determinism. Coders fight, subvert, and often ignore aspects of a computer to create a
desired effect, and even a cursory look at the history of the demoscene leads to suspicions that the
scene could have evolved in many different ways, with hardware being only one factor.
The Copper (“Coprocessor”) Age of Piracy
The current demoscene product has few obvious ties to its roots in software piracy, but even a
cursory examination betrays a continuity that has solidified into the historical lore. The demo group
Fairlight, founded in 1987, is still associated with both “warez” sites and cutting-edge demos.[56] The
act of “cracking” software mixed puzzle-solving, tinkering instincts, system design, code skill, a
[55] Nick Montfort and Ian Bogost. Racing the Beam: The Atari Video Computer System. (Cambridge, Mass.: MIT Press, 2009), 3.
[56] James Delahunty, “Piracy group resurrects after being raided.” Afterdawn News. http://www.afterdawn.com/news/article.cfm/2004/09/03/piracy_group_resurrects_after_being_raided (Accessed September 3, 2004).
competitive spirit, and audacity, all of which were coincidentally the qualities of a good demo
programmer.
Piracy groups can be traced to the Apple II in the late 1970's. Before the days of broadband
internet were the days of 300 baud modems and Bulletin Board Services. A common use of such
digital communication was the acquisition of free software that was copyable due to some enterprising
altruist “cracking” it. Cracking software involved a knowledge of low-level assembly code and
convoluted copy protection schemes. One method of copy protection added bad sectors to a disk, so
arguably cracking a piece of software involved “fixing” it. On phone-based BBSs, crackers also
preferred compressing the size of the software to allow faster transfers and lower phone bills. Anyone
who could change the way a program was copied could also insert code, and this ability led to crackers
signing their names, or rather aliases, on each program they cracked and uploaded to share. This
activity often ensured preservation in one form or another. Much like the film prints of silent movie
director Georges Méliès, many of the programs that have survived from this era are the pirated
versions, undoubtedly due to their larger proliferation.[57]
Another consequence of gaining intimate
knowledge of a program's code was that certain quantitative values could easily be changed. Cracked
computer games often contained “Trainers”, giving players unlimited lives or other “cheats”, another
aspect of cracking that added to the blend of altruism and ethical compromise that composed the
cracker scene reputation. An atmosphere of never-ending competition to see who would be the first to
crack a new game meant crackers worked fast, and like many criminal enterprises the community was
filled with liars, braggarts, amoral geniuses, and teenagers.
[57] Jason Scott, “Apple II Pirate Lore.” Archive.org. http://www.archive.org/details/Apple-II-Pirate-Lore (Accessed March 29, 2003).
Fig. 3. Crack Screen for the Apple II Donkey Kong, From: Artscene, http://artscene.textfiles.com/intros/APPLEII/
An escalation of embellishing one's signature augmented the great cracking competition. The
original addition of names, groups and numbers of BBSs expanded to include slogans, messages and
taunts to fellow crackers, which eventually led to animations and sound. The Apple II lacked any
reasonable sound support, but the Commodore 64 had its own sound chip, which led to musical
accompaniment to cracking intros, or “cracktros” as they came to be called. Programming teams grew,
and some devoted themselves entirely to crafting animations or composing music. Around this point in
time, the mid to late 1980s, demos that had nothing to do with cracked software, or any other use
beyond digital spectacle, came to exist in their own right. The shift of focus from software cracking to
demo coding might have been interpreted as a scene dying, but for demos themselves this development
compares to multi-cellular life, or an ancient amphibian first crawling onto land on some primordial
shore.
Eras and Evolution
One scandalous instance in the schism between crackers and demo coders came during the 1989
Ikari-Zargon Party. Both a copy and demo party, the organizers changed out the jury and rigged the
results so that they would win their own competition. In the aftermath, frustrated demo coders formed
the group “Federation Against Ikari.”[58]
They released a demo inviting sceners to a new party, which
included a rant describing the situation, and a number to call to join the Federation. Demo coders had
occupied a lower tier of respect than crackers at this point, and from a cracker standpoint, the
jury-rigging was not considered a big deal.[59]
Demos had grown beyond simple embellishments anyway, and
the transition of demo dominance from the Commodore 64 to the Amiga, originally released in 1985,
was well underway.
The Amiga demoscene proved strong in the late 1980s to mid 1990s, as the computer had more
direct audio and graphical support. The Amiga's popularity in Europe helps explain some of the
demoscene's prominence in that region. The iconic red and white checkered “Boing Ball”, eventually a
mascot for the Amiga, came from a demo that displayed the checkered ball spinning and bouncing
across a computer screen in polygonal 3D during the 1984 Winter Consumer Electronics Show.[60] The
demo itself took advantage of the sound chip, color cycling, and various other display hacks of the sort
that fill a typical demo coder's virtual tool kit.
[58] “Federation Against Ikari,” The C-64 Scene Database. http://noname.c64.org/csdb/group/?id=2431&show=current
[59] Tamas Polgar, “The Full History of the Demoscene,” YouTube. https://www.youtube.com/watch?v=8iDr-8odlqo (accessed February 23, 2014).
[60] Gareth Knight, “Boing Back – The Origins of the Boing Ball,” Amiga History Guide. http://www.amigahistory.co.uk/boingball.html
Fig. 4. Amiga “Boing Ball”. Amiga History Guide. http://www.amigahistory.co.uk/boingball.html
The emergence of a real PC presence in the demoscene during the mid to late 1990s probably
came as much from a more widespread use of sound cards as from any graphical evolution. 3D
ultimately proved more feasible on the PC due to programming paradigms. Individual pixels could
easily be accessed, while the same infrastructure that allowed convenient use of palette control and
sprite manipulation on the Amiga complicated the procedures typically used for line and polygon
drawing. Real-time 3D games came into widespread use during the mid-1990s, so pushing 3D graphics
in particular was not only possible but something of an imperative. Crafting a demo was a display of
programming prowess, and with the 3D revolution at hand, knowledge of low-level 3D systems could
lead to fame and/or gainful employment. The original Amiga line ended production in 1996, while the PC had
continued support and a growing market share. Despite the emergence of PC dominance and Amiga
obsolescence, Amiga demos have never really died out, and still win competitions against PC demos
even at the time of this writing.
Among other game-changers, the World Wide Web component of the Internet came to the
forefront, changing the scope of community communication and demo distribution. Each of these
changes had its influence on how demos were coded, distributed, and looked, and on a more subtle
level what they ultimately represented. A debate between technological mastery and design sensibility
grew to prominence.
While Amiga and PC demosceners developed in tandem and shared a community, the Atari
computer demo scene existed in surprising isolation, developing its own contests, conventions, and
even slang.[61] Atari sceners would often distribute several demos, “megademos,” in collections that
would offer a game-like menu interface to access their contents. The Union Demo in 1989 is a classic
example, with a cartoon character navigating a hallway. Behind each door within the hallway is a
different demo “screen” that could either be a demo effect, digital music (something hard to do on the
MIDI-based Atari ST), or a game. A “screen” is somewhat analogous to a portion of a demo in the
Amiga/PC community, but in many Atari demos it could be accessed in this more interactive, non-linear
context.
Another major development and concern was the wide scale adoption of 3D accelerator cards.
Like sound cards before them, 3D cards turned into a necessity for many mainstream PC users. 3D
accelerators offered the Graphics Processing Unit (GPU), the flashy counterpart to the CPU that came
with any home computer. From the outset the cards automatically accomplished what demo
coders spent the bulk of their skill doing: creating fast geometric rendering in real time. The classic
demo effect of the “spinning cube,” for instance, was exactly the sort of thing 3D cards did trivially.
Like many previous technological developments, the fear that coding effects might become “too easy”
seeped into the scene. A lingering preference for MS-DOS echoed the decline of the Amiga's presence.
If demos were to demonstrate cutting-edge coding ability, newer cards and computers compromised
this environment:
So far the demo scene hasn't evolved from concentrating on technical excellence instead of
content and maybe this is one of reasons why the demo is slowly dying away. Most PC demos
are still made for DOS and because of this they don't take fully advantage of today's hardware
(e.g. 3D accelerators)...If the main point of watching demos was to see something "cool" which
wasn't possible to do in games, the point is now gone, because state-of-the-art games for
Windows using cheap 3D accelerator cards blow current demos away.[62]

[61] Tamas Polgar, “The Full History of the Demoscene,” YouTube. https://www.youtube.com/watch?v=8iDr-8odlqo (accessed February 23, 2014).
The scene did adapt with the turn of the millennium, albeit with a share of reluctance and all-too-
human resistance to change. In 1999, the major demo parties known as “Assembly” and “The Party” split their
main demo competitions into accelerated and non-accelerated entries. The winning non-accelerated
demo, “Melrose Space”, differed strikingly from its accelerated fellow winner “Kasparov”, but the
differences were most obvious in style and structure, not technical quality. “Melrose Space” continues
the tradition of a sequence of effects, many of them using 3-dimensional graphics, but also mixes
effects in very eclectic ways. Even some video (made monochromatic) can be seen interspersed with
irreverent commentary and sound samples. “Kasparov” leverages the obvious strengths of a 3D card,
and the demo itself takes place within a series of connected 3D scenes, with animated robots,
landscapes, and factories. Quite a bit of artistry can be seen in the control of the virtual camera,
synched to music, as it travels through the crafted spaces. The winner of Assembly the next year,
“Exceed”, was software-based with no acceleration, a feat that grew ever more impressive and rare
as the age of 3D cards became established. In the following years, however, demo coders came to see
GPUs less as a crutch and more as another device to take apart, subvert, exploit, and use to break new
limits.
[62] Petri Kuittinen, “Computer Demos – The Story So Far,” Media Lab Helsinki. http://mlab.uiah.fi/~eye/demos/ (accessed April 28, 2001).
Fig. 5. threestate, Melrose Space. 1999. From: Pouet.net, http://www.pouet.net/prod.php?which=162
Fig. 6. elitegroup, Kasparov. 1999. From: Pouet.net, http://www.pouet.net/prod.php?which=374
3D cards, after all, were programmable, and provided another piece of technology to artistically
benchmark. Incredible possibilities opened up for small-scale 64k and 4k demos, for instance, since a
lot of the heavy lifting could be decompressed and handled very adeptly by GPUs. Experiments with
shaders – programmatic effects that influence both geometry and pixels directly – also came to the fore
of techniques and disciplines used in the GPU age. In 2000 the group Farbrausch released “fr-08:
.the .product”, widely considered revolutionary in pushing the limits of procedural discipline and
compression. It was a 64k demo “that would have easily won against all the demos in the demo
competition of the event it was presented at; most of those demos occupied several Mbytes of space.”[63]
The term “product” focused on the process in which the programmers created the work. Dierk
Ohlerich (aka “Chaos”),[64] one of the coders, admitted “I didn't want to make a 64k demo, but I wanted
to make the tool. Unfortunately, 64k demos are the only good use for such a tool.”[65]
Small scale
Farbrausch demos, including “.product”, usually contain a lengthy text crawl at the end, using
wandering, editorial rants to use up the last few bytes of space, showing that as impressive as their
demos might be, they still have room to spare. Current 4k demos contain music and cutting edge
graphics, and can run just as long as a demo without such limits, shrinking the arena further.
The last few years have brought elements of the demoscene into comparatively wider exposure.
Will Wright cited inspiration from and hired demo coders in developing the procedurally-driven,
galaxy-simulating game Spore.[66] Sony invited a number of groups to develop demos and similar open-
ended projects for the PS3. Other elements provide uncertainty as computer culture in general changes,
and like many underground movements, many participants feel increased exposure compromises the
subculture. The demo group keWlers retired in 2006 with their demo “1995”, with music containing
the refrain “Bring back the only demo art/Bring back for me only the old times”. Breakpoint, a demo
party celebrated since 2003, advertised closing down in the Farbrausch-authored invitation demo “like
there's no tomorrow.” The demo used a comedic mashup of elements from several previous demos,
[63] Claus Dieter Volko, “The Demoscene – A Short Introduction,” Hugi. http://www.hugi.scene.org/adok/articles/demoscn.htm (accessed 2004).
[64] Dierk Ohlerich, homepage. http://www.xyzw.de/
[65] Claus Dieter Volko, “Making of fr-08: .the.product,” Hugi. http://www.hugi.scene.org/adok/articles/adfr08.htm (accessed 2001).
[66] Dave Kosak, “Will Wright Presents Spore...and a New Way to Think About Games,” GameSpy.com. http://www.gamespy.com/articles/595/595975p1.html (accessed March 14, 2005).
including discoball dancers from “the.popular.demo” and the dissolving cityscape from “debris”, all in
service of depicting an apocalyptic dance party.
Scenic Communities
Examining demo products requires a look at who created and consumed the demos as they
spread from computer to computer and party to party. A science less exact than analyzing frames or
logging code algorithms, delving into the habits and histories of sceners themselves leads into decades
of shady archives and near-forgotten lore. Divided between the “elites” who coded the best, and
multitudes of “lamers” of less skill and little to contribute, the scene sifted through time as best a group
of tinkering computer users might.
My own viewing of these demos began with my introduction to the Internet in my high school
years: a friend introduced me to various demos, then I found CDs with collections, and later sought out
new releases online. As I learned to program, I viewed these works
with my own personal mix of curiosity, ambition, and envy. Geographically, the scene never gained
much of a foothold in the United States as such, and most of the groups I followed were in
Europe, particularly Scandinavia. The demo parties themselves were fabled events I only gradually
learned about.
Demo groups ebbed, split, and flowed like mercury through the years. While the most famous
groups have attained an air of venerability, the vast majority do not last long, and rarely make more
than one demo (if that).[67] Disk magazines and forums were filled with the equivalent of want ads, and
[67] George Borzyskowski, “The Hacker Demo Scene and Its Cultural Artifacts,” Scheib.net. http://www.scheib.net/play/demos/what/borzyskowski/ (accessed October 17, 2000).
part of their purpose was to facilitate communication within the demo community. Any cry of “the
scene is dying!” usually laments the lack of new blood, and the community's inability to continue to
make new content or replenish energy.
George Borzyskowski wrote one of the first major surveys of the demoscene back in 1994,
looking at the people who authored demos and their motivations for doing so. One interview brought
up some justification beyond amateur displays of skill:
"The good thing of democoding is that the coder makes experiences and starts to know the
machine. This is a very positive fact and it's very useful for the future...But the democoding
must not overlap the other facts of life of a guy, because otherwise the guy after some years will
find himself with 3-4 wasted years, and I mean wasted in doing nothing for his life, because you
can be as famous as you want in the scene, but REAL life is completely different."[68]
Borzyskowski also lists an interesting stratification taken from the diskmag “Freedom Crack”, an
unofficial hierarchy starting with famous demo coders and BBS groups at the top, working its way
down to those who simply used demo-making software and users who just wanted to play games. This
bottom rung proved particularly troublesome, as demo parties often turned into giant LAN parties with
many people using the gathering's network to play computer games instead of coding demos. Game
rooms had to be set aside in designated areas, especially as the demoscene grew in size and popularity.
As mentioned above, the demoscene combined a pioneering spirit with juvenile and criminal
elements, communicating through phone lines and conventional mail at first. Disk magazines, or
“diskmags” were another form of mass communication. These executable files worked like demos, but
usually contained articles instead of effects, while including some artwork and musical
accompaniment. Years of diskmags can be currently obtained in online archives and viewed through
emulators. The majority were for the Commodore 64, Amiga, and MS-DOS, hence they are difficult to
68
Ibid.
Furmanski 63
run on most current machines without specific software. Markky Reunanen and Antti Silvast surveyed
several diskmags in looking at home computer adoption and platform shifts within the demoscene:
“The first impression the reader gets from Sex’n’Crime [an example diskmag] is that the Commodore
64 cracker/demo scene of the late 1980s was a hostile environment. Numerous accusations, rumors,
and news about wars between groups appear practically in every issue of the diskmag.”
69
The
competitive aspects of cracking and demomaking, combined with the anonymity that computing over a
wide network provided, certainly made much of the scene far from civil. Of course, this mode of social
interaction proved hardly unique among contemporary and later digitally mediated communities.
Diskmags themselves drifted into obsolescence with the arrival of the internet. A 1993 copy of
the diskmag “Imphobia” talked about a developing series of internet archives to host demos, while
providing physical mailing addresses to send disks “assuming that no one in your demo group has
internet access, otherwise YOU can ftp it to the demo site.”[70] Several archives grew on the internet,
including the venerable but now defunct Hornet.org, as well as Pouet.net and Scene.org. Scene.org
currently serves as a primary hub, detailing parties and other news.
Demos share another aspect with other digital forms, PCG or otherwise: intermittent searches
for validation as “Art”. The subject of spreading demos to a wider audience has cropped up in
editorials and discussions over the years.[71] The rhetoric might sound very familiar to a scholar
studying just about any other aspect of digital media; common questions include whether such
justification is necessary, the potential of academic study, acknowledging (if not approving of) sceners
who like to keep things “underground”, and other issues hardly unique to demos.
[69] Markku Reunanen and Antti Silvast, “Demoscene Platforms: A Case Study on the Adoption of Home Computers,” in History of Nordic Computing 2, ed. John Impagliazzo, Timo Järvi, and Petri Paju, vol. 303 (Berlin, Heidelberg: Springer Berlin Heidelberg, 2009), 289–301.
[70] Pall Bearer and Toxic Zombie, “The Internet Demo Scene,” Imphobia #6, August 1993.
[71] Claus Dieter Volko, “Presenting the Scene to the Public: Good or Bad?” Hugi. http://www.hugi.scene.org/adok/articles/present.htm (accessed 2003).
One reason to expand awareness of the demoscene was not fame or validation, but to find new
blood as older groups left the scene. The classic demo programmer was in their teens, and finding a
job or getting a computer science degree often meant moving on to other things, relegating demo
coding to even more of a hobby if not abandoning it altogether. A sort of survival of the fittest
dynamic came with this phenomenon, as the best demo groups remained longer, even as the scene
suffered bouts of shrinking population.[72] Still, interest in the scene comes in waves, for the different
reasons mentioned above, and the scene has grown again, changed but still with its strange,
continuous pedigree.
Demo Tropes, Components, and Dissections
Demos contain the motion, sound, color, and spatial metaphors associated with cinema and
video, but another model to describe a demo would be to separate out a series of “effects”, the most
common way demos showcase a coder's prowess and a machine's capabilities. Demos have their fair
share of tropes, shared amongst themselves and across the decades. The pressure of competition acts as
an evolutionary guide, along with qualities and quirks of whatever computing system served as the
demo's platform.
Effects defy simple categorization as objects or motions. An effect might grow in scope as
coders improve or change it to suit their needs, and effects can stack or augment one another. “The
Tunnel”, a demoscene blog, described one composite creation:
One of the best tunnels the scene remembers is the one In Lasse Reinbøng by Cubic Team.
Initially it's not a tunnel, it's just a drop of water on the Cubic Team and $een logo. Then the
scene rotates, distorts and gets transformed in the proper tunnel, that is deformed, radially
multiplied and at the end, fades out. With "Lasse Reinbøng", probably, a category of 2d effects
like the tunnel effect reached their climax and disappeared. Or better, they evolved into more
complicated effects.[73]

[72] Tamas Polgar, “The Full History of the Demoscene,” YouTube. https://www.youtube.com/watch?v=8iDr-8odlqo (accessed February 23, 2014).
A coder could classify each effect as a specific “thing”, but many effects could also apply to other
effects, transforming into an “adjective” or even, given an effect’s animated qualities, a “verb”. A
virtual camera could fly over a landscape, or through a tunnel, or through a landscape wrapped onto a
cylinder like a tunnel. These effects also contain classical cinematic or animated components like
framing and motion, but the effects use such components in a profound, overarching relationship, much
like calculus applies to geometry.
“Demo Fire” acts as a good example of a dynamic system, and can help show why these
systems act as units distinct from a frame, motion, or object in other media. Classically, “fire” comes
from a simple algorithm: Make the bottom row of pixels a random set of red or yellow. Shift the row
of pixels upward, blend them slightly together, and then create a new bottom row, and repeat. The
result is a blending conveyor belt of turbulent pixels that billows, branches, and congeals like licks of
actual flame. One of the oldest instances of this effect occurs in “Inconexia” (1993). Starting from
this basic idea, “demo fire” was enhanced, embellished, and often combined with other effects.
“Drift”, a 64k demo, used demo fire to outline a vector shape as its introduction. Images could be “set
on fire”, using a drawn picture as the source of initial pixels instead of the more representative
collection of red and yellow. Demo fire itself belongs in a large category of pixel algorithms that
include plasma, noise, sine wave distortion, and many others. Such 2-dimensional effects might
coexist and filter a 3D scene, or animate as a texture over a 3D model. The computer's quantitative
abilities provide ample opportunities for qualitative exploitation and semantic confusion.
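The classic algorithm described above translates readily into a few lines of modern Python. This is an illustrative sketch of the technique, not code from any actual demo (which would have been hand-tuned assembly writing directly to video memory); single intensity values stand in for the red-to-yellow palette:

```python
import random

random.seed(7)  # fixed seed so the illustration is reproducible

WIDTH, HEIGHT = 40, 12  # a tiny framebuffer for demonstration

def seed_row(width):
    """New bottom row: random hot and cold intensities (0 = black, 255 = hottest)."""
    return [random.choice([0, 128, 255]) for _ in range(width)]

def step(frame):
    """One tick of classic demo fire: every row becomes a blended, slightly
    cooled copy of the row beneath it, then a fresh random bottom row is seeded."""
    new_frame = []
    for y in range(len(frame) - 1):
        below = frame[y + 1]
        row = []
        for x in range(len(below)):
            left = below[max(x - 1, 0)]
            right = below[min(x + 1, len(below) - 1)]
            heat = (left + below[x] + right) // 3  # blend with neighbors
            row.append(max(heat - 4, 0))           # cool as the heat rises
        new_frame.append(row)
    new_frame.append(seed_row(len(frame[-1])))
    return new_frame

frame = [[0] * WIDTH for _ in range(HEIGHT)]
for _ in range(30):
    frame = step(frame)
```

Mapping each intensity through a black-red-yellow-white palette yields the familiar billowing flame; the cooling constant and the blend width are exactly the knobs coders tweaked when enhancing or combining the effect.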
[73] friol, “demoreview#6: zooooooming in............,” The Tunnel. http://datunnel.blogspot.com/2007/10/demoreview-6-scene-zooms.html (accessed April 10, 2007).
Demos usually include one or several still images, illustrations that might stand on their own or
be included in a larger, layered effect. A demo group would include a coder, musician, and potentially
a “graphician”, who would be credited with this image. Creating an image for a demo is an art in itself,
especially on older machines with limited color and resolution. Scanning images was frowned upon, and
older platforms demanded techniques like hand dithering in order to produce an
aesthetically (and technically) pleasing picture. The ZX Spectrum had a 16-color palette, but only 2
colors could share a given 8×8 block of pixels, leading to artifacts called “bleeding” or “attribute clash”.
Creating images that looked like they were not bound by these initially intimidating limitations took
skill, a challenge similar to what an assembly coder faced in creating the larger
demo. Coding practices could trick a machine into producing more colors or pixels than the
manufacturer specified. Digital collections can be found online that simply contain
these images, independent of whatever demo they may have belonged to.
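The two-colors-per-block limit has a concrete basis in the Spectrum's video memory: a single attribute byte governs each 8×8 cell, packing a 3-bit ink (foreground) color, a 3-bit paper (background) color, a brightness flag, and a flash flag. A sketch of decoding such a byte, written in Python for readability (the machines themselves were programmed in Z80 assembly):

```python
def decode_attribute(attr):
    """Unpack one ZX Spectrum attribute byte. A single byte governs an entire
    8x8 pixel cell, which is why only two colors (ink and paper) can coexist
    there -- the root cause of "attribute clash"."""
    ink = attr & 0b111             # bits 0-2: foreground color, 0-7
    paper = (attr >> 3) & 0b111    # bits 3-5: background color, 0-7
    bright = bool(attr & 0x40)     # bit 6: brightness, applied to both colors
    flash = bool(attr & 0x80)      # bit 7: periodically swap ink and paper
    return ink, paper, bright, flash

# bright white ink (7) on blue paper (1), no flash
print(decode_attribute(0b01001111))  # → (7, 1, True, False)
```

Skilled graphicians worked around this byte-per-block economy by aligning color boundaries to the 8×8 grid and dithering within each cell, which is precisely the hand craft the passage above describes.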
Most demos include “greets” in some manner, giving the names of fellow demo groups or
coders within the runtime of a demo. Greets can come in the form of movie credit-type text crawls, or
fancy 3D effects, or entire landscapes spelling out each name in a colossal, fractal spectacle. “Crystal
Dream 2” 3D-renders each name over a star field, but has each fly by at nearly unreadable speeds and
angles with the disclaiming introductory words: “DANGER DO NOT TRY TO READ THIS TEXT”.
The demo “Toasted” simply lists a series of group names in the bottom corner of the screen while a
literal roller coaster finale plays. The difficulty of reading the greets while a showstopping effect is in
progress is not lost on the group, as they intermittently place admonishments like “WATCH THE
EFFECT!!!” The greets obviously act as an acknowledgment of the typical demo party venue.
Ancestors of current greets likely lie in old cracktros and nods from cracking messages. Many greets
greatly resemble the often rambling, dynamic, text-scrolling intros of yore, even if the rest of a demo
lacks such a similarity.
Tracking Audio
Demos all but require a strong audio component, and demo competitions often include a
separate category for audio tracks in and of themselves. Musical culture had an odd mix of heavy
metal influence, perhaps more obvious in artwork and handles than the music itself, and as musical
composition gradually grew practical on home machines, techno came to the fore almost by default.
The popularity of demo parties grew concurrently with the rise of rave culture in Europe, which
provided another influence.
Composition necessitated its own brand of trickery to overcome limited tracks, memory space,
and other hurdles computers had in providing musical experiences. MODs, which were collections of
sound samples sequenced together for melody, were a common format in the Amiga days. File sizes
meant there were few other ways to include audio, and years would pass before the MP3 and its kin
could adequately integrate into more modern demos.
Within the growing PC scene, the rivalry between the demo-supported Gravis Ultrasound
(“GUS”) cards and the more widespread SoundBlaster cards was yet another conflict of platform in the
annals of the demoscene. Gravis gave away boards to famous demo groups, in a move mirroring
Apple's saturation of computers in schools. Issues of the Imphobia diskmag apologized for only
supporting SoundBlaster. The choice to optimize one card over another proved just as relevant as what
computer a coder used.
The interplay between (often abstract) visuals and synchronized music makes a given demo
product superficially resemble a music video. The now iconic 1992 demo “State of the Art” proved
controversial for just such a reason. “State of the Art” featured a vector outline of a
dancing woman captured from a video reference. The silhouette would transform into other shapes and
then back again while music played. The demo now looks like a precursor to the modern Flash
animation paradigm. It proved popular, winning “The Party” competition that year, but sceners also
attacked it on technical and ideological grounds. A contemporary fanzine response said, “[the coder]
should get himself a video camera and make music videos. Nothing is real-time in this prod […] If you
like it, watch MTV, you will get something better.”[74] Ironically, MTV did end up broadcasting a
number of demos, “State of the Art” among them.[75]
Fig. 7. Spaceballs, State of the Art. 1992. From: Pouet.net, http://www.pouet.net/prod.php?which=99
[74] John Thompson, “Top 10 Demos That Will Blow You Away,” PCPlus. http://pcplus.techradar.com/feature/top-10-demos-11-08-10 (accessed August 11, 2010).
[75] Claus Dieter Volko, “The Demoscene – A Short Introduction,” Hugi. http://www.hugi.scene.org/adok/articles/demoscn.htm (accessed 2004).
Music forms an integral, yet potentially separate portion of a demo. A demo connoisseur could
simply collect, play, and create music without caring much for the animated content. Some demos did
not include the music separately but provided a program meant to extract the song from the existing
executable. Internet sites provide MODs and MP3s of demos in the same vein as a movie soundtrack,
and like some films, the soundtrack may be more fondly remembered than the original intro or demo.
Platform Analysis and Medium Specificity
A given computer setup and code paradigm obviously influences how a coder builds a demo.
Computer platforms have a slightly more complicated relationship to the works coders produce,
however. Manufacturers might design chips and software systems for specific purposes, allowing
easier implementation of various components like sound, 3D graphics, or bitmaps, but a large element
of demo virtuosity comes from subverting software as well as pushing it. To paraphrase John F.
Kennedy: “We do not do these things because they are easy, but because they are hard.” The group
Future Crew released “Second Reality”, a groundbreaking PC demo, in 1993. Another group, Smash
Designs, ported it to the far more primitive Commodore 64 in 1997. This adaptation has differences like
lower resolution and color depth, but for the most part replicates each of the effects, including a fully
3D flythrough of a city. Smash Designs even changed a “93” to a “97” within the city geometry.
Fig. 8. Second Reality demo running on an IBM PC in 1993…
Future Crew, Second Reality. 1993. From Pouet.net, http://www.pouet.net/prod.php?which=63
Fig 9. …and a port to the Commodore 64 in 1997
Smash Designs & The Obsessed Maniacs, Second Reality 64. 1997.
Noel Carroll criticizes ideas of medium specificity that cater to a form’s strengths to the
exclusion of other possibilities: “I do not see how, given art as we know it, we can accept principles
that presume the best line of production in art is the line of least resistance or easiest approach.”[76]
Carroll's criticisms apply just as well here as they do to photography, painting, and cinema, even if
perceived limitations might come in the more explicit forms of digital platforms and chip paradigms.
The demoscene lacks any obvious form of “medium purity”, given its chimeric collection of
imagery, evolved code, appropriations of painting, video, and procedural algorithms, and mixtures of
elements that are not quite collage, not quite animation. Potential, not
purity, acts as a key component in the medium of computer demos. The many “deaths” of the scene
have usually come off as growths or mutations, but have not destroyed the scene as a whole. Any new
technological platform has ended up presenting new and ultimately welcome challenges instead of
eliminating them. Virtuosity drives the typical demo more than any specific technology, which is why
both Commodore 64 demos and 4k raytracing GPU demos maintain a level of coexistence.
While the computer offers an immensely compelling platform and incredible potential for
interactivity, the evolution and reasoning behind demos actually does little to embrace the purposes
behind typical user interaction. Ville-Matias Heikkilä ponders the virtual lack of interactivity within
the demoscene, bringing up the centuries-old medium specificity argument in general:
“Interactivity is an example of a dimension that has always been within an easy reach for
demoscene artists but has still remained nearly untouched. While demos experiment very deeply
on the audiovisual nature of various technological platforms, they have barely managed to
scratch the surface of what I consider part of the very essence of the real-time computer art
medium itself.”[77]
[76] Noel Carroll, Theorizing the Moving Image (Cambridge University Press), 10.
[77] Ville-Matias Heikkilä, “Putting the Demoscene in a Context,” Countercomplex. http://www.pelulamu.net/countercomplex/putting-the-demoscene-in-a-context/ (accessed July 11, 2009).
The minor interactive elements that demo coders do employ appear mainly in user interfaces. The
rather involved game-like menus on the Atari megademos discussed earlier provide an intriguing
example, but for the most part demos fall under the umbrella of linear media. Semantics raise one
issue; an “interactive demo” could very easily be considered a “game”, or something else entirely.
Simple definitions aside, if a demo means to demonstrate coding and design ability, showing off
requires very careful planning, analogous to direction in a movie or staging a magic trick. User
interaction typically compromises computer performance and allows less leeway for trickery—severely
limiting and complicating the tasks of a demo coder. The classic venue of a shown demo, the demo
party, also eschews typical user interactivity, resembling a film festival more than an arcade. A variety
of selection pressures, from community, technology, and history have resulted in the form demos take
today, and might further change fundamental components in the future. If the demos had not evolved
beyond the core essence of their original purpose, the community might simply have gained more
efficiently copyable (and stolen) software over the last few decades.
If demos in general lack interactivity, how can they be different from a hand drawn or even pre-
rendered computer animation? Most demos could theoretically have been pre-rendered or “hand
animated” to some extent, but such a hypothetical animator would have many difficulties executing or
even conceiving of many of the effects and layered dynamics found in even the simplest demos.
“Verses” contains a sequence of morphing fractals made up of thousands of dots flowing with
mathematical precision. Given the automation of the home computer, the effect leverages the
recursive, reproducible qualities that fit dynamic code like a glove. Further ideas, such as immense
data compression, could be used for distributing a pre-rendered animation, but for demos compression
represents a direct incarnation of the product's identity, defining a genre and directly affecting how a
coder builds and directs the visual content. The process of coding a demo belongs with the total
experience of the demo, in the same way appreciating a demo's content requires an understanding of
how such a creation works.
Data compression adds a practical side to the question of specificity and purpose. Video on a
home computer was impossible for setups whose hard drives could hardly fit a dozen high-resolution
images, nor could video be practically distributed through a community that communicated through
floppy disks and phone lines. Real-time coding allowed animations that lasted several minutes to fit
through modems and onto floppies. Strangely, time does not act as a physical limitation in a computer
the way it defines the length of a film reel or the quality of a video tape. A two-hour celluloid
movie has twice the number of frames as a one-hour movie, but such a difference between two demos
could simply be a single changed value within the source code. Demos can run indefinitely, and
cracktros typically did, faithfully awaiting the user to start the game or other piece of acquired
software. Complexity might have more of an effect on a demo's size, but fractals can consolidate an
infinite amount of detail into a finite (and usually very small) equation. 64K and 4K demos make
extensive use of such tricks, so a bulk of their size comes from parameters rather than explicit
definitions. M.C. Escher's visions of tessellation and infinity manifest easily in such a realm of thought
and design.
These qualities do not box in or stifle the demo medium, but rather can serve to expand ideas of
motion and structure. Such developments do not have to fall into Carroll's descriptions of misguided
“media purity,” but can reveal and influence other media while they continue to grow in a digital
wilderness. Demos appropriate and share aspects of other digital artifacts like games and machinima,
but a have through multitudes of reasons developed into a recognizably separate entity. With an
author's ego as a driving component, perhaps the “purest” component a medium can have, demos have
survived and evolved over 30 dynamic years.
Demos and their accompanying scene provide an intriguing facet to digital culture, grown not
from a utilitarian need of specific purpose, but from a secondary aspect of piracy and general computer
tinkering. They can be considered to be built not out of static objects of meaning, but living dynamics
and pliable, programmable systems. As computers and demos evolve, perhaps some comfort might be
found in the fact that communities and people do as well. The supposed limitations of technology do
not corral tightly, even (and perhaps especially) in programmed media. The changes, shifts, and growth of the demo as a form do not come solely from computers getting faster. Rather, the aspects
of performance and ego inherent in demos reward cleverness, ingenuity, and “acrobatics,” and the
scene grows from viewers curious enough to ask how such works are possible, and determined enough
to find out.
Case Studies
The case studies I will look at are hardly exhaustive, but come from sources I have used
in learning and developing my own work, through their play, dissection, hacking, and study.
Source code accessibility comes partially with age, partially with the simpler days of
computing. While I can find and analyze pieces of code from 1980s Telengard, gaining access
to the source code for, say, No Man’s Sky would be far more difficult. Even with access to code
from more recent works, programmatic structures have become more complicated as the scope
and sizes of developing teams and projects increase. Dissecting code made by one or two
authors is far easier than code developed by a dozen people on a single work. Many of these
older works are somewhat iconic or have inspired more recent works, and their examinations can
prove revealing in their technical and thematic development of PCG as a digital art form.
These case studies are meant to illustrate how various works play into the PCG axioms
described in the introduction, as well as to show how design goals and algorithms shaped the
work. Often their procedural nature came from specific technological constraints and
considerations, while their stated design goals often sound very familiar to the axioms in terms of
ambition. Be they the product of incredible technical limitations, like Pitfall! or Telengard, or an
ongoing experiment in simulated systems expanding the scope of world-building, like Dwarf
Fortress or Noctis, each case study adds some dimension to how PCG can function and to what
purpose it can be applied.
Perlin Noise
Perlin Noise can be considered an algorithmic technique, a “spice” for an existing
technique, or even a state of mind. Given the proper amount of coding, Perlin Noise can look
like television static, stucco, clouds, marble, woodgrain, or fire. When applied to time, it can
give life to a static object, or character to a moving one.
The implementation earned its inventor, Ken Perlin, an Academy Award in 1997. The
text of the award is as follows:
To Ken Perlin for the development of Perlin Noise, a technique used to produce natural
appearing textures on computer generated surfaces for motion picture visual effects.
The development of Perlin Noise has allowed computer graphics artists to better
represent the complexity of natural phenomena in visual effects for the motion picture
industry.78
Ken Perlin notes in his creation of noise that technological constraints played a large part
in shaping why and how it worked:
Unfortunately (or fortunately, depending on how you look at it) our…computers, while
extremely fast for the time, had very little RAM, so we couldn't fit detailed texture
images into memory. I started to look for other approaches…The first thing I did in 1983
was to create a primitive space-filling signal that would give an impression of
randomness. It needed to have variation that looked random, and yet it needed to be
controllable, so it could be used to design various looks.79
Perlin came upon an age-old issue of computer memory being more limited than
processing power (as opposed to the equally infuriating problem of processing power being more
limited than computer memory. It seems that every computer through the ages has had at least
78 Ken Perlin, "Noise and Turbulence," NYU Media Research Lab, http://mrl.nyu.edu/~perlin/doc/oscar.html
79 Ken Perlin, "Making Noise," https://web.archive.org/web/20160303232627/http://www.noisemachine.com/talk1/ (accessed March 3, 2016).
one of these two problems). This limitation shaped how and why the algorithm would be used,
creating detail from dynamic math instead of stored data.
At its core, Perlin Noise is an averaged value of a series of adjacent random values. The
key difference between Perlin Noise and random numbers is that Perlin Noise is continuous, in
the classical mathematical sense. If you walked along a series of hills defined by basic Perlin
Noise, you might find yourself moving up and down considerably, but you’d never have to use
stairs or rock-climbing gear.
If you zoom into the created numbers far enough, you will see a smooth curve. Increase
the frequency of the numbers, and the pattern becomes harsher. Adding different sets of noise
with different frequencies can result in some usefully chaotic patterns, often used to generate
natural-looking features like mountain ranges.
Messing With the Code
If we take a typical example of basic Perlin noise, we have a smooth, static-like pattern:
Fig. 10. Perlin Noise Pattern.
A key feature of noise is that, as we zoom in, we might lose detail but we can always
assume a smooth transition between any two points:
Fig. 11. Perlin Noise “zoomed in” or with a lower frequency.
As we zoom out, the pattern may repeat, but the detail becomes so regular that it reduces
to a basic grey.
Fig. 12. Perlin Noise “zoomed out” or with a higher frequency.
At the very beginning, Perlin defines two key formulas that form the spine of how his
noise works:
#define s_curve(t) ( t * t * (3. - 2. * t) )
#define lerp(t, a, b) ( a + t * (b - a) )
"Lerp" is an abbreviation of "linear interpolation," a mathematical way of finding a point between two values. Halfway between points a and b becomes a simple average; more generally, the formula returns the value that lies a fraction t of the way from a to b.
The s_curve is a method of smoothing out values as a noise function gets closer to a
specific whole number. Ignoring the s_curve would introduce “sharp” points of transition,
creating artifacts like in figure 13.
Fig. 13. Perlin Noise without “s curve” smoothing.
The s-curve continues the general purpose of Perlin Noise—to keep a “random” value
smooth and continuous with any neighboring value, no matter how small.
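Translated into Python, the two formulas, and a one-dimensional noise function built from them, might look like this. This is a sketch for illustration only: the lattice of random values and the function names are my own, not Perlin's exact gradient-noise implementation.

```python
import random

def s_curve(t):
    # Smoothstep: eases t within [0, 1] so transitions have no sharp corners.
    return t * t * (3.0 - 2.0 * t)

def lerp(t, a, b):
    # Linear interpolation: t=0 gives a, t=1 gives b, t=0.5 the average.
    return a + t * (b - a)

# A table of random values at integer positions, standing in for
# Perlin's gradient table (an assumption made for this sketch).
random.seed(1)
lattice = [random.random() for _ in range(256)]

def noise1d(x):
    # Interpolate smoothly between the two nearest lattice values.
    i = int(x) % 255
    t = x - int(x)
    return lerp(s_curve(t), lattice[i], lattice[i + 1])
```

Walking along noise1d produces exactly the hills described above: nearby inputs always yield nearby outputs, so no stairs are ever needed.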
Adding multiple sets of noise with different frequencies can result in more detailed and
complex patterns; a typical one might create a cloudlike form. This combined noise pattern is
called octave noise, which can be used for larger-scale graphical imagery. Other combinations can make a marble pattern: as shown in figures 14 and 15, subtracting two different sets of octave noise from each other tends to create veins (or canyons, if you treat these greyscale values as heightmaps and convert the 2D image to a 3D landscape):
Fig. 14. Octave Noise before subtraction
Fig. 15. Octave Noise after subtraction.
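The layering described above can be sketched directly: sum the same noise at doubling frequencies and halving amplitudes, then subtract two such fields to leave "veins." This is a self-contained illustration; the value-noise helper, the seed, and the offset between the two fields are my own assumptions.

```python
import random

random.seed(7)
lattice = [random.random() for _ in range(257)]

def smooth_noise(x):
    # Basic value noise: random lattice values, smoothstep-interpolated.
    i, t = int(x) % 256, x - int(x)
    f = t * t * (3.0 - 2.0 * t)
    return lattice[i] + f * (lattice[i + 1] - lattice[i])

def octave_noise(x, octaves=4):
    # Add doubling frequencies at halving amplitudes, then normalize
    # back into [0, 1].
    total, amp, freq, norm = 0.0, 1.0, 1.0, 0.0
    for _ in range(octaves):
        total += amp * smooth_noise(x * freq)
        norm += amp
        amp *= 0.5
        freq *= 2.0
    return total / norm

def veins(x):
    # Subtracting two octave fields leaves dark "veins" wherever the
    # two fields happen to agree.
    return abs(octave_noise(x) - octave_noise(x + 100.0))
```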
Perlin “Wood” comes from other biases, adding in evenly spaced bright and dark bands,
which with the added noise creates a “grain”:
Fig. 16. Octave Noise before pattern subtraction.
Fig. 17. Patterned texture based on sine wave.
Fig. 18. Resulting “wood” pattern.
Perlin noise can exist in multiple dimensions—not just in space but in time. Effects like smoke and fire, even if two-dimensional, take advantage of continuity in time to allow smooth animation if noise is treated as temporal.
Perlin noise has been likened to salt, or a spice.80 On its own it is not much, but it is meant to be added to something. Adding just a tiny amount of randomness or chaos to a regular, definable value, be it an image, animation, shape, or pattern, can augment detail, realism, or character.
80 A. Lagae et al., "State of the Art in Procedural Noise Functions," in Eurographics 2010 - State of the Art Reports (Norrköping, Sweden: The Eurographics Association, 2010).
Cellular Automata
A basic form of simulation simple enough to demonstrate mathematical chaos and its
potential is the cellular automaton. Many of the case studies I looked at rely on mere
randomness, but cellular automata are practically chaos personified. Cellular automata come in
many different forms, but perhaps the most famous is John Horton Conway's "Game of Life," first published in 1970 in Martin Gardner's Mathematical Games column in Scientific American.
The design goals of this particular “game” were as follows, and should sound somewhat
familiar to philosophies of other PCG systems:
The basic idea is to start with a simple configuration of counters (organisms), one
to a cell, then observe how it changes as you apply Conway's "genetic laws" for births,
deaths, and survivals. Conway chose his rules carefully, after a long period of
experimentation, to meet three desiderata:
1. There should be no initial pattern for which there is a simple proof that the
population can grow without limit.
2. There should be initial patterns that apparently do grow without limit.
3. There should be simple initial patterns that grow and change for a considerable
period of time before coming to end in three possible ways: fading away
completely (from overcrowding or becoming too sparse), settling into a stable
configuration that remains unchanged thereafter, or entering an oscillating phase
in which they repeat an endless cycle of two or more periods.
In brief, the rules should be such as to make the behavior of the population
unpredictable.81
Given the 1970 date, the article recommends using graph paper or a checkerboard with flat
pieces in order to play the game yourself. However, the emergent, unpredictable, and complex
81 Martin Gardner, "The Fantastic Combinations of John Conway's New Solitaire Game 'Life,'" Scientific American, no. 223 (October 1970): 120-23.
consequences coming from a simple set of rules make the game tailor-made for a digital computer, to the point where understanding the finer qualities of the game seems to require computational iteration.
The classic version of Conway’s Game of Life consists of looking at a grid populated by
cells or empty spaces. Each turn this grid is evaluated:
1. An empty grid space that has three cells as neighbors will grow a cell in the
next round.
2. A cell that has two or three other cells as neighbors in adjacent grid spaces will
survive to the next round.
3. A cell with four or more neighbors, or fewer than two, dies from overcrowding or
isolation.
These three rules can produce an incredible number of dynamic patterns, fulfilling the three goals outlined earlier. Indeed, the game is "Turing complete," meaning that a large enough grid with the proper cell patterns can theoretically simulate any possible computer program.
Implementing cellular automata usually takes two “boards” to calculate. The first is the
current state of the cellular simulation, which is used as reference to generate the second board.
This simulates instantaneous changes in cell generations, a critical (if implicit) rule in the
dynamics described, and perhaps the trickiest concept to learn and implement for a first-time
programmer.
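The two-board idea can be sketched in a few lines of Python. This is my own minimal illustration: it stores only the live cells as coordinate pairs, and builds each new generation entirely from the old one, so every cell is judged against the same "current" board.

```python
def life_step(board):
    # One generation of Conway's Game of Life. The board is the set of
    # live (x, y) cells; a fresh set is returned so that the old
    # generation stays untouched while it is being read.
    neighbors = {}
    for (x, y) in board:
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx or dy:
                    spot = (x + dx, y + dy)
                    neighbors[spot] = neighbors.get(spot, 0) + 1
    # Birth on exactly three neighbors; survival on two or three.
    return {spot for spot, count in neighbors.items()
            if count == 3 or (count == 2 and spot in board)}

# The iconic "glider" travels one cell diagonally every four generations.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
```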
Simple patterns remain static, while others, like the iconic “glider,” will appear to move
diagonally through the cellular generations. Factories can be built from cells that create such
gliders; large-scale “spaceships” retain their shape as they travel across the grid space.
Fig. 19. Game of Life “Glider” sequence. From: Chinese Language & Etymology,
http://www.chinese-word-roots.org/conway.htm
The patterns and dynamics of the cellular automaton change dramatically when the simulation changes the number of neighbors each cell requires for birth, death, and survival. Other slight variations can generate mazes, complete with "mice" that slide through them. The maze pattern is also reminiscent of a fingerprint, or sand dunes, or other natural shapes.
While a visual depiction of the cells is the most common way to view the "game," the rule set is ultimately an abstraction, and can manifest in a variety of ways. Conway's initial description is of patterns, a desire for an unpredictable yet stable dynamic, and this idea of a pattern might be considered the "medium."
The above patterns come from a two-dimensional version of the game. A one-dimensional game grows like a tree or coral. Instead of a grid, the simulation is visualized line by line, the next generation a line of cells and gaps produced from the line above. Cells do not have eight spaces to compare, but three: the cell directly above, above and to the left, and above and to the right. Three spaces, each either living or dead, form a 3-bit number, giving 2 to the 3rd power, or eight, possible neighborhood states. A shorthand for the rules on whether a cell generates or perishes, one bit per neighborhood state, can be encoded into an 8-bit number, giving 256 possible rule sets. The first bit describes three empty spaces, the second bit a living cell on the right side, and so forth. If a bit is 1, the cell lives; if 0, the cell dies. Every possible rule can thus be described in this single, relatively small number. Oulipan writers would be proud.
Some one-dimensional rules are fairly boring or sparse. Rule 0 generates no living cells whatsoever beyond any the player might place in the first generation. Other rule sets create branching, tree-like shapes, or Art Deco-like grid patterns. Rule 30 looks like a mountainscape, but also appears as a pattern on the shell of the cone snail Conus textile.
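The whole encoding fits in a few lines of Python. In this sketch the rule number's bits are indexed exactly as described above, and cells beyond the row's edges are treated as dead; the row width and generation count are arbitrary choices of mine.

```python
def step_1d(row, rule):
    # One generation of an elementary (one-dimensional) cellular
    # automaton. Each new cell looks at the three cells above it;
    # those three bits form an index into the 8 bits of the rule number.
    new_row = []
    for i in range(len(row)):
        left = row[i - 1] if i > 0 else 0
        center = row[i]
        right = row[i + 1] if i < len(row) - 1 else 0
        index = (left << 2) | (center << 1) | right
        new_row.append((rule >> index) & 1)
    return new_row

# Rule 30, grown from a single living cell in the middle of the row.
row = [0] * 15
row[7] = 1
for _ in range(5):
    row = step_1d(row, 30)
```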
Fig. 20. One-dimensional cellular automaton, Rule 30. From: Wikimedia Commons.
https://upload.wikimedia.org/wikipedia/commons/9/9d/CA_rule30s.png
Fig. 21. Seashell showing Rule 30 patterning.
From: Stephen Coombes, “The Geometry and Pigmentation of Seashells.”
https://www.maths.nottingham.ac.uk/personal/sc/pdfs/Seashells09.pdf
Conway's Game of Life is not the only type of cellular automaton. Diffusion Limited Aggregation, or DLA for short, isn't necessarily a cellular automaton, but can be implemented very easily within one. The rules are simple:
1. If a cell is next to a crystal, it becomes crystallized, that is, unmoving.
2. If a cell is not next to a crystal, it moves in a random direction.
After seeding a simulation with a number of mobile cells, placing a single crystal can produce patterns like those shown in Fig. 22, with the white pixels as crystallized cells and the purple ones wandering randomly:
Fig. 22. Diffusion Limited Aggregation.
Knowing how these patterns are created, and building them, one can suddenly see patterns like them everywhere: frost on a windowpane, lightning, mold growths. I've used DLAs to create mountain ranges on maps.
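A grid-based DLA following the two rules above can be sketched in a dozen or so lines. This is my own illustration, not a canonical implementation: the grid size, walker count, wrap-around edges, and step cap are all assumptions.

```python
import random

def dla(width, height, walkers=200, steps=100_000, seed=1):
    # Diffusion Limited Aggregation on a grid: one seed crystal in the
    # center, plus mobile cells that wander until they touch it.
    rng = random.Random(seed)
    crystal = {(width // 2, height // 2)}
    mobile = {(rng.randrange(width), rng.randrange(height))
              for _ in range(walkers)} - crystal
    for _ in range(steps):
        if not mobile:
            break
        # Rule 1: any cell adjacent to the crystal freezes in place.
        frozen = {(x, y) for (x, y) in mobile
                  if any((x + dx, y + dy) in crystal
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if dx or dy)}
        crystal |= frozen
        # Rule 2: every remaining cell takes one random step, wrapping
        # around the edges of the grid.
        moved = set()
        for (x, y) in mobile - frozen:
            dx, dy = rng.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
            moved.add(((x + dx) % width, (y + dy) % height))
        mobile = moved - crystal
    return crystal
```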
Reaction-diffusion systems, or cyclic cellular automata, come from cells that overcome certain other types of cells. Imagine a living ecosystem of rocks, papers, and scissors: the implied ecosystem is not a food chain so much as a food loop. The cellular structures tend to cycle, and have a tendency to produce spirals or fingerprint-type patterns.
Pitfall!
Pitfall! offers an excellent example of a work where PCG was not only an enabling facet
but a necessary one. Just about any trick would and could have been used in an era where every
bit of information counted.
Pitfall! was released for the Atari 2600 in 1982, and was one of the first “platform”
games, which focus on a player avatar viewed from the side navigating various obstacles.
Gravity tends to be the chief antagonist in the platforming genre. In Pitfall!’s case, other
obstacles include quicksand, pits, scorpions, alligators, and snakes. The player traverses the
jungle screen by screen, attempting to find and collect treasure.
Fig 23. David Crane, Pitfall! 1982. Screenshot from Wikipedia.org, https://en.wikipedia.org/wiki/Pitfall!
David Crane wanted to make a game in a novel setting, and Pitfall!, for the time, was very different from abstracted sports games and space shooters. The concept for the game started with the simple idea of a running person. After a long time with the idea in the back of his
head, the main "design document," including the paths, subterranean sections, and obstacles, was conceived in a 10-to-15-minute sketch session.82
The jungle explorer traverses an obstacle-filled jungle screen-by-screen, some 255 of
them. Pitfall! uses an algorithm to create this series of screens. Crane likens the algorithm to the
methods that were used to generate random numbers. His method, instead of just iterating
forward every time it was run, could be reversed as well, creating a list of pseudo-random
numbers that one could traverse in two directions. Given the byte-based structure of the Atari,
values that would “loop” from the last generated screen back to the first could be generated.
Each number could then be looked at as 8 single binary “switches,” each value determining what
obstacles were present, what background image to use, etc. A large part of the game design,
then, came from finding the best place to start, finding a region in these values where the
encounters of the game could ramp up in difficulty, instead of starting a new player in the middle
of a particularly difficult screen. Starting in a difficult area would not only lead to a frustrating
learning curve on the part of the player, but also leave the easier parts as boring discoveries
should the player manage to continue beyond the initial screen. Being able to explicitly author
the experience, screen by screen, was simply not an option given the limitations of RAM the
Atari 4K cartridges gave. Finding a good place to start the experience was more like location
scouting than set building.
A byte can take 256 values, typically expressed as 0-255. This number can be a value in and of itself, but, when viewed as a binary value, can also act as 8 different true/false values.
Increases in memory over the decades have often made coders ignore the value of the byte and
82 David Crane, "Classic Game Postmortem: Pitfall!," GDC Vault, http://www.gdcvault.com/play/1014632/Classic-Game-Postmortem-PITFALL (accessed 2011).
bit, but when an Atari cartridge generally contained only 4096 bytes, creating a plausible jungle
to explore would need every single one of them. Further operations that proved indispensable in
working with binary values came in the form of Boolean logic.
Boolean logic, a foundation of computer science, can be summarized by the operations AND, OR, and NOT. NOT simply inverts a statement—a value will change from true to false, or from false to true. AND and OR involve the comparison between two true/false values, AND returning true if both these values are true, OR returning true if either (or both) values are true. It's important to distinguish the more common English use of "or," which behaves like an "exclusive OR" (XOR): an operation that returns false if both compared values are true.
While these operations are designed with binary values in mind, there's nothing stopping the computer from applying these commands to our more familiar decimal numbers. As mentioned above, the number 255 can be thought of as 8 "true" values, as 255 = 11111111 in binary. ANDing 255 with itself will yield 255; exclusive ORing 255 with itself will yield 0.
Bit shifting is another operation that can manipulate binary input values. Shifting moves
the binary values either left or right. Since binary is a base-2 number system, this effectively
doubles or halves the number. What happens to digits that “fall off” either end of the byte comes
down to a number of semantic differences. Values might be replaced by zero, or “wrap around”
the byte. Coders might be able to determine what occurs themselves, but handling the edges of
shifted bytes can even vary by hardware platform.
The number 11 (00001011) shifted left one space yields 22 (00010110), and shifted right
produces 5 (00000101). Shifting doubles or halves a number, with any remainder removed. A
computer can shift a number more quickly than it can multiply one, and many optimizations come from "multiplying" by shifting. This feature of shifting and binary systems is one of the
main reasons computers have a predilection for storing values as powers of 2.
“Rotating” a binary number involves shifting the values to the left or right, with the digits
that “fall off” either edge coming back to the other side. The number 10, expressed as a byte
(00001010) rotated right by 2 yields 130 (10000010).
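Python's shift operators make the examples above concrete; rotation has no built-in operator, so a small helper masks the byte and reattaches the fallen bits. This is a sketch assuming 8-bit values throughout.

```python
def rotate_right(value, amount, bits=8):
    # Rotate an n-bit value right: bits that fall off the low end
    # reappear at the high end.
    mask = (1 << bits) - 1
    value &= mask
    return ((value >> amount) | (value << (bits - amount))) & mask

shifted_left  = (11 << 1) & 0xFF    # 00001011 -> 00010110, i.e. 22
shifted_right = 11 >> 1             # 00001011 -> 00000101, i.e. 5
rotated       = rotate_right(10, 2) # 00001010 -> 10000010, i.e. 130
```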
All of this is to say, these operations are very simple for a computer to do, and can
produce some very non-intuitive results very quickly. Since “non-intuitive” can be considered
another word for “random,” these operations are the foundation of many random number
generation algorithms.
The following list of operations serves as the primary method of generating Pitfall!'s 255 screens:83
lda random      ; load the current value into the accumulator
asl             ; shift left; the high bit falls into the carry
eor random      ; exclusive-or the shifted copy with the original
asl             ;
eor random      ; exclusive-or again
asl             ;
asl             ;
eor random      ; and once more
asl             ; the final shift leaves the feedback bit in the carry
rol random      ; rotate "random" left, pulling the carry into its low bit
dex             ; count down the loop register
bpl .loopRandom ; repeat until enough values have been generated
jmp ContRandom  ; continue with the result
83 Sam Trenholme, "A 4096-byte Jungle," Sam Trenholme's Webpage, http://samiam.org/blog/20130606.html
“random” in this case is a simple number, processed by a series of these binary operations. It’s
up to another part of the code to interpret this value, ranging from 0 to 255, as a series of
true/false statements, depicting a given screen of play.
Only a few initial numbers will allow a cycle that goes through all 255 values exactly once (0 will return 0, resulting in a "stuck" state, so avoiding that value is necessary for the whole scheme to work). Different cycles can also require a different number of operations, in turn using more memory, a crucial consideration in a situation where every byte was precious.
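Tracing the shifts and exclusive-ORs in the listing, the routine amounts to a linear feedback shift register: the new low bit is the XOR of four bits of the old value, and everything else shifts left by one. The Python transcription below is my own reading of the assembly, offered as a sketch with an arbitrary nonzero seed.

```python
def next_screen(value):
    # One step of the shift-and-XOR generator: the new low bit is the
    # XOR of bits 3, 4, 5, and 7 of the current value (my reading of
    # the assembly above), and the rest shifts left by one.
    bit = ((value >> 3) ^ (value >> 4) ^ (value >> 5) ^ (value >> 7)) & 1
    return ((value << 1) | bit) & 0xFF

# Walking forward through the cycle of screen values:
screens = []
s = 0xC4  # an arbitrary nonzero seed, for illustration
for _ in range(255):
    screens.append(s)
    s = next_screen(s)
```

Starting from zero yields zero forever, the "stuck" state the text describes; any nonzero start produces a long cycle of distinct values.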
The length of a binary value varies between hardware platforms, and provides a challenge
in updating and “translating” old code, as many results assume a specific number of “digits,” that
is to say binary “bits.” An 8-bit computer would store “1” as 00000001, while a 16-bit computer
would store that same “1” as 0000000000000001, making many of the tricks listed above
produce wildly different results. Recreating older techniques requires special attention to what
the original machine assumed about memory storage.
Along with the use of PCG for compression, the qualities of Pitfall! show how data can be encoded, modified, and quantified through various mathematical methods. Boolean techniques are surprisingly easy for computers to implement, and have continued to be used into the present day in low-level algorithms like random number generation, encryption, and compression.
Telengard
Telengard, a computer role-playing game, was programmed by Daniel Lawrence in 1978,
and the game was ported to several early home computing platforms over the next few years. Its
direct ancestor, dnd, was one of the earliest computer games, from an age when computers filled
entire rooms, and elements from it can be found in most subsequent RPGs, including World of
Warcraft and Diablo.
The game of Telengard itself embodies a classical formula of fantasy role-play, the act of
delving into a dungeon, slaying monsters, and collecting treasure. In this case, the player weighs
the chances of their avatar’s strength (gained through finding items and earning experience)
against the perceived dangers of a virtual dungeon, where every floor gets more dangerous the
deeper one ventures.
Fig. 24. Telengard Screenshot. From: Crafting a Vintage CRPG,
http://www.adamantyr.com/crpg/article_01.htm
Telengard produces a gigantic virtual dungeon—millions of rooms in a dungeon two
hundred cells square and fifty levels deep could be explored by a player—yet the program is less
than one thousand written lines of code and contains no explicit data for the dungeon itself.
Lawrence explains in an interview:84
“When I reproduced DND on the commodore Pet (8k RAM) I certainly had no room for the
dungeon maps stored in the data files back on the main frame. Take your character’s X/Y/Z
position, do some math involving prime numbers, pick out a few internal bits of the result and
there you have a description of the current location!”
Fig. 25. Recreated section of the Telengard map. From: Gregory Bush, "Telengard in Elm: The Dungeon,"
http://elm-telengard.blogspot.com/2014/03/the-dungeon.html
Gregory Bush has been porting Telengard to modern computers, a process requiring considerable analysis of the preexisting code.85 His notes and explanations have greatly aided this case study.
84 Matt Barton, "Interview with Daniel M. Lawrence, CRPG Pioneer and Author of Telengard," Armchair Arcade, http://www.armchairarcade.com/neo/node/1366 (accessed June 22, 2007).
Examining the source code of both the Commodore and Apple versions reveals the meat of the process outlined above:
XO = 1.6915
YO = 1.4278
ZO = 1.2462
W0 = 4694
10010 Q = X * XO + Y * YO + Z * ZO + (X + XO) * (Y + YO) * (Z + ZO)
10020 H% = (Q - INT (Q)) * W0: IF H% / 256 > 5 THEN
H% = H% - INT (H% / 256) * 256
10025 IF INT (H% / 256) > 0 THEN
H% = (INT ((Q * 10 - INT (Q * 10)) * 15 + 1) * 256) + H% - INT (H% / 256) * 256
The number this formula generates is used in many ways as it defines a given dungeon room. As Gregory Bush explains:86
The basic idea behind the rest of the algorithm is that the fractional part of Q will vary in
unpredictable seeming ways as the inputs change.
The lowest order bits determine what the north and west boundaries are (the south and
east boundaries are just the north and west boundaries of the adjacent rooms). The
highest order bits are used to determine if a feature is present in the room.
12 | 11 | 10 | 9 | 8 | 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0
feature? | unused | west | north
00 = Empty
01 = Empty
10 = Door
11 = Wall
If the feature field is zero or > 5, then there is no feature in the room. Otherwise q is used
again to determine what the feature is.
85 Gregory Bush, "Telengard in Elm: The Dungeon," Telengard in Elm, http://elm-telengard.blogspot.com/2014/03/the-dungeon.html (accessed March 24, 2014).
86 Ibid.
Such features include stairways, pits, fountains, and other embellishments to the maze-like setting.
This small section of code creates the skeleton of a game spanning millions of rooms. More code outlines smaller details, handles boundary cases, or depicts the data that these lines generate, but these commands output a variable H%, which the game looks at to determine what sort of hallway or encounter the player will find. The variables XO, YO, ZO, and W0 fulfill a role very similar to a classic random seed, and changing them drastically changes the layout of the maze.
The ability to produce an apparently random, yet nevertheless consistent and repeatable
value from a series of coordinate inputs can be a useful paradigm for procedural environments
and processes. Perlin noise functions using very similar parameters. This process produces a
gigantic dungeon in Telengard, yet one that is persistent, and has the same layout every time the
player starts a new game. The dungeon walls and interiors are as static as a mathematical graph,
but are part of much larger structure than would not have been remotely possible had they been
all explicitly defined and laid out by human input.
While the dungeon itself has no explicit data, the game has a series of “Inns” where a
player can rest. The locations of these inns are determined like the other dungeon interiors, and
their names have a mad-lib-type quality to them: “Bold Tooth Lodge”, “Worthy Wharf
Resthouse", and so on. A website recreates this subprocess of Telengard, if you want a quick inn name.87 The names are chosen from an authored lexicon of words that the game chains together, dependent on math related to the player's position. The player starts underneath an inn,
87 "The Telengard Tavern Generator," The Digital Eel, http://www.digital-eel.com/ttg.htm
providing a safe area to run back to, and beginners will usually stay in sight of it while they
slowly build up experience and strength.
Other elements in Telengard are random, often incredibly so. The game updates every
few seconds whether the player interacts or not – an artifact of the game’s mainframe roots,
where shared computer time was precious, and idle users were apt to be kicked off. This means
that monsters can appear without the player having to move. Other events are the result of a
digital die-roll as well: fountains may always be in the same place, but drinking from them can
produce different results on different playthroughs.
The maze layouts in Telengard tend to lack saliency for all their apparent size. This repetition is offset somewhat by the fact that the player can only see the rooms immediately adjacent, and traveling long distances can be interrupted by any number of encounters or random events. Amazingly, getting lost becomes a secondary concern to mere survival, especially for new players, and even on the more repetitious Atari version the patterns of corridors may take a while
to recognize. Still, contemporary advertisements and reviews (often hard to tell apart) of
Telengard almost always remarked on its “depth” – the presence of 50 levels, large ones at that,
was considered noteworthy enough for marketers and critics alike.88
Telengard shows another example of PCG and its capability for compression, but also
how this compression creates persistence, and how technological constraints changed the work
profoundly from its antecedents. Dissecting a single number for numerous purposes was a
common, often necessary practice, and shows up often in these early cases. The simplicity of
Telengard itself allowed it to be ported among several contemporary machines, and even to be
88 Telengard Fan Site, "Misc. Docs, Reviews, and Ads," http://www.angelfire.com/ny5/telengard/doc2.htm
edited by players who bought the game. I can personally vouch for altering the Apple II version
on the original hardware, and adjusting the numbers to verify the techniques described on the
websites I consulted. The compression tends to invite such activity: since such algorithms are by
necessity small yet have large outputs, they can be entertaining and illuminating to play with.
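Telengard’s own BASIC routines differ in their details, but the trick of dissecting one deterministic number for numerous purposes can be sketched as follows. The hash function, bit fields, and feature names here are illustrative assumptions, not Telengard’s actual code:

```python
import hashlib

def cell_features(x: int, y: int, level: int, world_seed: int = 1984) -> dict:
    """Derive every feature of a dungeon cell from one deterministic number.

    The same (x, y, level) always yields the same features, so the maze is
    persistent without storing a single byte of map data.
    """
    # Hash the coordinates together with a fixed world seed into one integer.
    digest = hashlib.sha256(f"{world_seed}:{x}:{y}:{level}".encode()).digest()
    n = int.from_bytes(digest[:4], "big")

    # "Dissect" that single number: each bit field answers one question.
    return {
        "wall_north": bool(n & 0x1),
        "wall_west":  bool((n >> 1) & 0x1),
        "fountain":   (n >> 2) % 61 == 0,   # rare, but always in the same place
        "stairs":     (n >> 8) % 97 == 0,
    }

# Querying the same cell twice gives identical results: persistence for free.
assert cell_features(10, 4, 3) == cell_features(10, 4, 3)
```

Because nothing is stored, a 50-level maze costs only the few lines above, which is exactly the kind of compression that invites players to open the program and tinker.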
Rogue
The game of Rogue gives a player the opportunity to explore a randomly-created
dungeon, navigating dozens of floors in search of a MacGuffin Amulet, and then to return to the
dungeon’s entrance. The game coined the term “Roguelike,” encompassing a genre of games
that share and differ on many counts, but almost all use procedural content generation in some
form or another. Another quality to a roguelike game is the use of “permadeath,” where the
player has only one chance to play a given session to completion. The combination of a novel
environment on each playthrough and the high stakes forms a core part of Rogue’s experience.
The original game was programmed by Michael Toy. Toy endeavored to write a clone of
a Star Trek game that he originally encountered on a mainframe computer at his father’s
laboratory. Programming a familiar work on a new system to better understand its intricacies
contributed to his developing skills as a programmer—the port might not have proven mastery,
but it at least built familiarity.
During his time as a student at the University of California, Santa Cruz, he started work on
Rogue. Toy’s paradoxical love of both creating and playing games led to a key design goal that
would introduce the near-infinite novelty promised by PCG:
The problem was that the story and puzzles never changed; once he finished a game,
there was no reason to boot it up a second time. Worse, he had no reason to play his own
adventure games—he knew all of the solutions. “The ideal would be to write a computer
game that was fun for me to play, and that was the problem with these adventure games I
loved so much.”[89]
[89] David L. Craddock, Dungeon Hacks: How NetHack, Angband, and Other Roguelikes Changed the Course of Video Games. Edited by Andrew Magrath. (Press Start Press, 2015).
Fig. 26. Rogue Screenshot. From: Dos Games Archive,
http://image.dosgamesarchive.com/screenshots/rogue2.gif
Rogue ended up succeeding at least in challenging Toy by randomly producing a new
world each play session. Toy reiterates a common lament of someone working with PCG:
“It’s still sort of an unsolved problem: how to make interesting and complex spaces that
are as engaging as something an artist designs, but able to spit out as many [pieces of
content] as you need,” Toy admitted. “It’s really hard to figure out, based on the
computing resources you have, how to create worlds, and then design a world that is as
amazing as something you could write or draw with your own hands. It’s like trying to
paint the Mona Lisa with an air cannon and a bucket of paint.”[90]
Toy found a collaborator in Glenn Wichman, a student who had come to UCSC with
aspirations of game design. He describes the basic algorithm to create a new dungeon level: “In
Rogue, every level is a tic-tac-toe board; it’s exactly nine rooms in a three-by-three grid, […] We
basically divided the screen into those nine areas, then put a room in each area.”[91] Each of the
nine sections is only potentially a room, and corridors will snake between them in order to
[90] Ibid.
[91] Ibid.
guarantee a player can navigate to any part of the dungeon level. The relative simplicity of the
floor structures makes such a guarantee easy to fulfill. A virtual world might aspire to be
beautiful or intricate, but it must be possible to navigate.
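Wichman’s tic-tac-toe description can be sketched roughly as follows. This is an illustrative reconstruction in Python, not the original C code; the room dimensions and the spanning-tree corridor step are assumptions that merely guarantee the navigability discussed above:

```python
import random

def generate_level(seed=None):
    """Sketch of a Rogue-style level: nine cells in a 3x3 grid, a room in
    each cell, and corridors guaranteeing every room is reachable."""
    rng = random.Random(seed)
    cells = [(r, c) for r in range(3) for c in range(3)]

    # Place a room in each cell (every cell here; Rogue sometimes left gaps).
    rooms = {cell: {"w": rng.randint(4, 8), "h": rng.randint(3, 5)}
             for cell in cells}

    # Grow a spanning tree over the grid so connectivity is guaranteed:
    # each unvisited cell is joined by a corridor to a visited neighbor.
    visited, corridors = {cells[0]}, []
    frontier = [cells[0]]
    while len(visited) < len(cells):
        r, c = rng.choice(frontier)
        neighbors = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                     if (r + dr, c + dc) in rooms and (r + dr, c + dc) not in visited]
        if not neighbors:
            frontier.remove((r, c))   # this cell can no longer extend the tree
            continue
        nxt = rng.choice(neighbors)
        corridors.append(((r, c), nxt))
        visited.add(nxt)
        frontier.append(nxt)
    return rooms, corridors

rooms, corridors = generate_level(seed=1980)
assert len(corridors) == 8  # a spanning tree over 9 rooms needs exactly 8 corridors
```

The spanning tree is what makes the navigability guarantee trivial to fulfill: a tree over nine nodes touches every node by construction, so no room can be generated unreachable.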
Many of Rogue’s qualities came from Toy and Wichman’s experiences in hacking the
relatively new UNIX operating system. UNIX was and still is a system designed not only to be
shared across a network but also meant to be completely customizable for an end user. Every
command a user can give is, itself, a computer program. This allowed students to experiment
with completely novel ways to use a computer by creating programs that served as components
of larger programs. Ken Arnold, ultimately another contributor to Rogue, wrote one such
tool, curses, a library which could place text anywhere on the screen, acting as a sort of stand-in for
graphics in a text-only display. Toy and Wichman used characters like dots, pound signs, and
the like to depict blueprint-like floor plans, and the now-iconic @ character to stand in for the
player, a convention that still is the norm for the many games descended from Rogue.
Currently less of a prerequisite for a roguelike game is the use of textual characters as
graphical elements. As mentioned, the original Rogue, as well as its many imitators and
descendants, did not use pixel-based images, but rather text to visualize the play space. Hashes,
dashes, and the like denoted walls and corridors, and single letters portrayed enemies, producing
a somewhat humorous abecedarium of antagonists. This use of text to display the state of the
game resulted in an unforeseen potential interaction—given the text-based command-line
systems virtually all computers had at this time, designed to be easy to read on the computer’s
end and simple to input on the human end, could a program be made to survive and thrive in the
programmatic dungeons of Rogue? The shared network provided by the novel UNIX operating
system had already created a living rivalry between the writers of Rogue and students playing the
game. Why let the humans have all the fun?
Rog-O-Matic
While Rogue is seminal enough in PCG history to give its name to an entire genre of
games ("Roguelikes"), its nature has also given rise to a rarer brand of procedurality—the
procedural player. The Rog-O-Matic was a program locked in a sort of arms race, one designed
to beat Rogue's algorithmic dungeons. The problem is a classic one for AI: the space is
unknown and new with each iteration. One cannot simply give a sequence of optimal commands
as one might on a static, repeatable space. Even in games with a large possibility space such as
Chess or Go, you can assume an initial state. Rogue and its kin wallow in randomness and
usually take a perverse interest in unfairness, or at least starting states of varying difficulty from
game to game.
Building a procedural game player highlights many of the qualities, as well as the allure,
of an experience dependent on procedural and indefinite content. What makes things difficult
for an AI can also make things fun and novel to experience and play. An AI designed to do
well at games with deterministic, optimal outcomes (a surprising number) would simply be a
static list of commands, and while all AI can be classified as “automation,” some AI might be
considered more automatic than others.
The Rog-O-Matic AI could be classified as an “Expert System,” a type of reasoning
system combining a database of known elements with systems of inferring the unknown. As the
resulting paper Rog-O-Matic: A Belligerent Expert System observed, the game Rogue provided
an excellent environment to test these dynamics. Indeed, the procedural nature of the space
required the AI to be more adaptable than a typical expert system of the time:
Rog-O-Matic is designed as a knowledge based or "expert" system because a search
based system was completely infeasible. There are as many as 500 possible actions at any
one time; for each action there are several thousand possible next states.
Rog-O-Matic differs from other expert systems in the following respects:
● It solves a dynamic problem rather than a static one.
● It plans against adversaries.
● It plays a game in which some events are determined randomly.
● It plays despite limited information.[92]
The use of text allowed an interesting level of interaction with the Rog-O-Matic. The AI
program did not directly access the game at a code level; instead, it used the textual output of
each turn to indirectly build its own internal game state, interpreting the textual interface in a
way similar to a human user. The code outlined this “sensory” phase in the program’s functions,
and acted on it with a variety of strategies, but the networked Unix system left the Rog-O-Matic
with the same amount of information a human player would have. If Rogue had a
graphical interface, this level of “fair play” would have been harder (perhaps impossible in the
early 1980s) to achieve. The networked, shared programming environment allowed what began
as a diversion to evolve, almost by accident, into a research paper outlining a relatively
advanced AI Expert System.
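This “sensory” phase can be illustrated with a toy version. The character meanings follow Rogue’s display conventions described above, but the function and its state layout are hypothetical, far simpler than Rog-O-Matic’s actual C code:

```python
def sense(screen: list[str]) -> dict:
    """Build an internal game state purely from the textual display, the way
    Rog-O-Matic 'sensed' Rogue: no access to the game's memory, only the
    same characters a human player would see on the terminal."""
    state = {"player": None, "monsters": [], "walls": set(), "items": []}
    for y, row in enumerate(screen):
        for x, ch in enumerate(row):
            if ch == "@":
                state["player"] = (x, y)
            elif ch.isalpha():            # single letters portray monsters
                state["monsters"].append((ch, (x, y)))
            elif ch in "-|#":             # dashes and hashes: walls, corridors
                state["walls"].add((x, y))
            elif ch in "!?*%":            # potions, scrolls, gold, food
                state["items"].append((ch, (x, y)))
    return state

screen = [
    "----------",
    "|........|",
    "|..@...Z.|",   # the player near a letter-Z monster
    "----------",
]
state = sense(screen)
assert state["player"] == (3, 2)
assert ("Z", (7, 2)) in state["monsters"]
```

Because the program reads only the rendered text, it plays by the same rules of limited information a human does, which is precisely the “fair play” the paragraph above describes.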
The mutual pressure between coders and players made Rogue grow in terms of
mechanics—it would combat the capabilities of the Rog-O-Matic by adding features that the
artificial intelligence would not expect or would not be able to properly comprehend and interact
[92] Michael L. Mauldin et al., “ROG-O-MATIC: A Belligerent Expert System” (Conference of the Canadian Society for Computational Studies of Intelligence, London, Ontario, 1984).
with. This produced a rather adversarial but fruitful set of development cycles through the
university communities that played it.
Noctis
Procedural Content Generation can offer billions of instances of…something. In our
everyday experiences we usually do not comprehend, let alone deal with, billions of individual
instances of anything. However, one area where astronomically large numbers of something are
involved is astronomy. More than once, a digital game has focused on a procedural paradigm to
create a cosmos. PCG in this case does something that nothing else really could, and, in creating
such a space, a coder might attempt a sort of understanding of the scale involved—adding an
interactive element can be seen as an attempt to expand perception beyond counting large
numbers and gazing at uncountable points of light.
Fig. 27. Noctis Ringed World. From: Sirpaper’s Gallery. Anynowhere, http://anynowhere.com/bb/posts.php?t=3813
Fig. 28. Forest in Noctis. From: Sirpaper’s Gallery. Anynowhere, http://anynowhere.com/bb/posts.php?t=3813
Fig. 29. Panoramic Screenshot. From: Sirpaper’s Gallery. Anynowhere, http://anynowhere.com/bb/posts.php?t=3813
Noctis, first released in 2000, fits squarely in the "Procedural Universe" genre, a
descendant of Elite and forerunner to Spore and No Man's Sky. Noctis is largely the work of a
single developer, Alessandro Ghignola, although much of the work has been kept alive by a
diligent player community. Despite its cosmic size, the game fits
rather comfortably on a floppy disk. Quite a bit of its code is devoted to rendering 3D geometry
and effects. Its initial release came just before the full adoption of hardware-accelerated 3D
graphics, and its software rendering provides a fascinating variant of what would soon be a fairly
codified set of conventions in 3D graphics. Procedural algorithms define the cosmos, but the
world has its own unique way of portraying it as well.
Noctis is not the first game of its kind, as Elite provides a clear ancestor in many ways to
the "Literally Astronomically Huge Game," but Noctis does offer a rather intriguing set of
qualities, given when it was created and its focus on environment and ambiance. Elite, while
open ended, was a fairly goal-oriented space exploration game, with interplanetary trade,
exploratory survival, dogfights between spaceships, and other science fiction/adventure tropes
that would not be out of place in other games. Noctis instead dwells in the scale of its own
universe. Hazards inherent in exploration are kept to a minimum, and hostile entities are non-
existent—indeed, finding any living creature can be considered a rare achievement. Like Elite,
Noctis algorithmically generates a universe full of stars and planets, but where Elite
limited itself to hundreds of stars, Noctis creates millions of them.
Your avatar in Noctis, a catlike being piloting a starship, can land and explore planets of
varying types. There is no explicit goal, although there could be several implicit ones to
consider. A community-run database catalogs the stars and planets that might be found. While
there are countless worlds, only a few have life on them, and a very few have enigmatic ruins to
explore—finding such a world on one’s own is quite a feat. The catlike Felisians are the only
sentient race, and a player will not encounter one by chance, but only
under specific circumstances outlined below.
Landing on a planet will have the computer generate a chunk of terrain with defined
boundaries. Your avatar cannot travel beyond this generated space, but could re-embark the
spaceship and travel to another part of the planet, and then return to the original region thanks to
the persistent nature of the world generation. The area is certainly big enough to get lost in, and
the addition of a vertical "light geyser", a column of light showing points of interest, primarily
your spaceship, provides a simple way to re-orient and maintain bearings.
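That persistence can be sketched in miniature: derive the region’s random seed deterministically from the planet and landing coordinates, and the same terrain is rebuilt on every return visit without storing anything. The hashing scheme and terrain symbols below are invented for illustration and are not Noctis’s actual algorithm:

```python
import random

def region_terrain(planet_id: int, rx: int, ry: int, size: int = 4):
    """Regenerate a planet region deterministically: landing twice on the
    same coordinates rebuilds identical terrain, so nothing is stored."""
    seed = hash((planet_id, rx, ry)) & 0xFFFFFFFF   # coords -> fixed seed
    rng = random.Random(seed)
    # ' ' flat ground, '.' rocks, '^' hills, '~' water (illustrative symbols)
    return [[rng.choice(" .^~") for _ in range(size)] for _ in range(size)]

first_visit  = region_terrain(planet_id=42, rx=7, ry=-3)
return_visit = region_terrain(planet_id=42, rx=7, ry=-3)
assert first_visit == return_visit   # persistence without saved state
```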
About the only resource to keep track of in Noctis is fuel, which can only be mined from
certain types of stars. Travel between stars takes time, and being stranded simply produces a
long wait as you call for and receive help. Interestingly, this “fail state” is the only opportunity
to see a fellow Felisian, as they stop by to give you spare fuel, thus providing a unique
experience within the game. A forum post notes how this small distinction adds perspective,
comparing it to an interactive simulator called Space Engine:
…I don't think Space Engine and Noctis are quite the same thing at this point. Space
Engine simulates a universe, while Noctis simulates being an explorer in a universe. With
Space Engine, you don't have to gather fuel, or control a ship's functions, or return to a
landing pod, or anything like that, but instead you just move as a sort of ethereal being
wherever you want, whenever you want, without any restrictions. That's cool if you just
want to look around, but lacks a lot of the immersion you get from having more
limitations imposed upon your travel. Plus, when you're at ground level, there isn't really
much to see or interact with up close, even compared to Noctis.[93]
In Noctis, a player can encounter trees, animals, oceans, as well as terrain reflecting
climate and temperature. Noctis also provides a built-in camera for taking screenshots, including
a panoramic mode. With no other artifacts or items, and in the spirit of the old tourism saying
“take only pictures, leave only footprints,” the camera provides something to collect and document with.
The ability to explore effectively without end encourages not only personal wanderlust
but community collaboration. Later versions of Noctis allowed the player to contribute to their
own star map, and eventually to send information to a shared database. While explorers are not able to
meet fellow explorers, they can still share the same cosmos. The ability to find a place no one
else has ever seen, explore it, and even name it for others to find later has continued into future
games. The space exploration game No Man’s Sky made a point to automatically give credit to
the first person to discover a celestial body, allowing the discoverer naming rights. Elite
Dangerous, a direct descendant of the original Elite from the 1980s, gives players high bounties
to find particularly interesting objects like black holes, neutron stars, Earth-like planets and so
forth. Space games as a genre often have the potential for different career paths for players.
Some like fighting, others trading, but PCG allows exploration to be a valid and fulfilling option
within a virtual shared environment. NoctisMapper, a separate application from Noctis, can
import databases of stars to interactively view, and its download includes 2190 documented stars
to start with.
[93] Cryoburner. Posted on Anynowhere message board. http://anynowhere.com/bb/posts.php?t=4887 (Accessed February 22, 2014).
Fig. 30. Screenshot of NoctisMapper, showing the coordinates of the star “Felid.” From: NoctisMapper
Application, by Peek.
Other "Sandbox" type games tend to add dynamics and features. These features also tend
to be the new measure of "content." In Noctis, no one could possibly see every star, but one
might be able to see just about every type of planet, and "do" everything, or every type of thing
there is to do.
Noctis, for all its size, is static. Planets rotate and revolve, and animals might forage
through savannas and forests, but these patterns do not change. Animal movements and
presences, while contextually dependent, are random at the level of player encounter. Their
actions are simple, just enough to give verisimilitude of existence. Planets might revolve, but do
not evolve. The player is merely an observer, a tourist, as mentioned earlier.
Later games similar to Noctis start with a similar static environment, but they also take
into account all of the changes a player might introduce during play. Reconciling the generated
environment with player actions produces several interesting challenges. Massive online
experiences may keep the "world" remote, keeping track of one world (per server) connected to
a multitude of player terminals. A server paradigm can be tempting for
handling worlds, not so much for size as for dynamism. A single-player (that is, egocentric) world
usually focuses on the world immediately close to the player, for performance and memory
limits if nothing else. Trying to simulate the entire world beyond the player's notice might be
technically prohibitive as well as practically wasteful—why simulate something someone will
never see? The "TALESPIN effect" also applies in this instance, where player interactions might
only touch the tip of an incredibly complicated iceberg.
Dwarf Fortress
Dwarf Fortress—a game starting with seven semi-autonomous (that is to say, not always
cooperative) dwarves, a wagon full of supplies, and a goal to dig a subterranean city—offers an
example of a virtual world with history, surprisingly sophisticated and adaptable characters, and
simulation rarely rivaled within digital games. The work shows the potential (and consequences)
of building a game where practically every aspect is generated and simulated, from family trees
to clothing production.
A world in Dwarf Fortress, complete with national histories and genealogies, can take
hours to generate, but gives an extended account of births, deaths, battles, monster rampages
(and slayings), a world history of which a player may only sense a tiny fraction. Having such a
large simulated history provides continuity and verisimilitude, even if every explicit detail may
be obscure. Ruins discovered by the player were once towns or temples, and contain detritus and
hints of what sort of places they once were, instead of being spontaneously generated.
Engravings and artworks have pedigrees, referencing events and knowledge of the past.
One of the challenges of simulations comes from managing increasingly profound
consequences of the systems involved. As an example, Dwarf Fortress has immortal elves, who
on some development versions overcrowded the worlds generated. In many ways the game
might be considered an experiment in putting fantasy genre clichés to the test, seeing the results of a
world filled with immortal elves, monstrous dragons, goblin hordes, and industrious dwarves
play out.
Fig. 31. Dwarf Fortress session underway. From: Ars Technica, https://arstechnica.com/gaming/2013/04/challenge-accepted-dwarf-fortress-pros-show-ars-up-with-insane-10-hour-forts/
Even without coding experience, people wishing to modify the game still have access to
an incredible amount of customization. Players have extensively modified the original “vanilla”
Dwarf Fortress, increasing an already impressive menagerie, changing genre, and continuing to
expand an already expansive game. The qualities of most creatures (including the eponymous
Dwarves) are accessible in text files, and are helpfully given very descriptive tags and easy to
read values. These qualities do not just include physical statistics, but social conventions and
ethics followed by various races – the morality of theft or murder, for instance. Changing a few
values can give the player a race of flying golden giants who find slavery abhorrent but admire
thievery. Dwarf Fortress achieves the ever-so-rarely attained goals of a simulation—you can
place just about any object within the world and it will know how to behave, and other
characters, objects, and simulated forces will know how to interact with it.
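Reading such a bracketed tag file might be sketched as follows. The [TOKEN:VALUE] syntax mimics the style of the raw format, but this particular creature and the minimal parser are invented for illustration (the flying, theft-admiring giant above, in raw-like form):

```python
import re

RAW_SNIPPET = """
[CREATURE:GOLDEN_GIANT]
  [NAME:golden giant:golden giants:golden giant]
  [FLIER]
  [ETHIC:THEFT:ACCEPTABLE]
  [ETHIC:SLAVERY:UNTHINKABLE]
"""

def parse_raw(text: str) -> dict:
    """Parse bracketed [TOKEN:ARG:...] tags in the style of Dwarf Fortress's
    plain-text creature raws into a simple dictionary of qualities."""
    creature = {"flags": set(), "ethics": {}}
    for token in re.findall(r"\[([^\]]+)\]", text):
        parts = token.split(":")
        if parts[0] == "CREATURE":
            creature["id"] = parts[1]
        elif parts[0] == "NAME":
            creature["name"] = parts[1]
        elif parts[0] == "ETHIC":
            creature["ethics"][parts[1]] = parts[2]   # e.g. THEFT -> ACCEPTABLE
        elif len(parts) == 1:
            creature["flags"].add(parts[0])           # bare flags like FLIER
    return creature

giant = parse_raw(RAW_SNIPPET)
assert giant["id"] == "GOLDEN_GIANT"
assert "FLIER" in giant["flags"]
assert giant["ethics"]["SLAVERY"] == "UNTHINKABLE"
```

Because every quality lives in a readable text tag rather than in compiled code, changing a few values is enough to redefine a whole race, which is exactly what makes the modding described above accessible to non-programmers.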
The ASCII text-based art style belies an incredibly detailed physical and psychological
world simulation. The visuals are simple, but the careful observer finds that the game is very
well “animated.” A dragon can smash through a city gate, with townsdwarves fleeing and
guards charging to defend the settlement. While the human instinct to give added motivation to
abstraction clearly aids verisimilitude, the internal reasoning of Dwarf Fortress’s characters does
stand up to considerable scrutiny. A warrior will run to retrieve a dropped weapon, only to flee
when outnumbered. An artist who sees his masterpiece destroyed may become despondent, and
refuse to do work or even eat. The dwarves’ behavior can become so sophisticated that its
shortcomings and oddities seem all the more ridiculous or fascinating—an engraver bringing his
pet cow with him to work at the Mayor’s office, or a sentry holding a door open during a siege to
retrieve a sock. A common slogan for the game is “Losing is Fun,” a phrase implying that even
catastrophic failure can be entertaining to watch and experience. A player can observe the
intricate tragedies that come from a monster attack, a mechanical failure flooding the fortress
with lava, or an internal riot rising out of a family feud that gets out of hand (the last example is
known among the player base as a “tantrum spiral,” a surprisingly common way to lose an
otherwise prosperous fortress). Dwarf Fortress takes kicking over sandcastles to a new league.
Tarn Adams, the game’s creator, encoded behaviors reflective of genre fantasy, with
tropes like industrious dwarves, nature-loving elves, wicked goblins, and so forth. Simulating
such tropes has led to interesting side effects often at odds with an expected, Tolkien-esque
narrative. Immortal elves tend to flood the world with overpopulation, while a goblin tendency
to kidnap other races leads to their civilizations being the most cosmopolitan, with the
descendants of humans and dwarves often making up ruling castes, and few actual goblins.
Constant adjustments have been made over the many released versions to somewhat reconcile
behaviors with tropes, but seeing the consequences of fictional tropes play out could be
considered part of the exercise of the game.
In starting a new game, Dwarf Fortress runs a simulation for a set number of years,
starting with a small initial group of families, and charting their growth over generations,
building civilizations that fight, trade, invade, and possibly fall. Ageless beasts like dragons can
roam the countryside, threatening cities only to be slain by some heroic figure. Such beasts do
not reproduce, weighting Dwarf Fortress’s model of history in a direction where an “Age of
Legend” slowly shifts to an “Age of Heroes” and perhaps to an “Age of Mortals” as the old
mythical beasts become rarer and nearly forgotten. As cities grow, characters can create artwork
referring to these generated events. The history forms a continuity rarely seen in games. If a player
stumbles upon a ruin, that ruin came from somewhere, with its own history and personalities,
even if only fragments and impressions remain.
Dwarf Fortress uses an interesting approach to time, as perceived by the player. A
relatively simple area with few characters can run real time (or faster), but the game is
technically turn based; the simulation simply processes the turns either as fast as it can or
according to user settings. Simulating a thousand virtual characters with an appreciable
knowledge base of their environment can be very taxing even on modern machines. Certain
game strategies are used to avoid computer performance slowing to an unusable speed, as
simulated dwarven cities become victims of their own successful growth. Seeing a thriving city
in motion, however, can be a very pleasurable sight, bringing about the “animation” I mentioned
above and displaying character interaction and motivation rarely seen, even in commercial games
with cutting-edge graphical capabilities.
For all its “sandbox” approach to simulation, Dwarf Fortress has a well-designed arc
informed by game design sensibilities. An implicit goal of building a Dwarven settlement is to
increase population and prosperity. Starting with the requisite seven dwarves, the first goals are
simple survival needs: growing food, building shelter, dealing with local wildlife. The initial
families produce children, and migrants will come to the growing settlement. Traders come each
year, and more goods traded will in turn bring more caravans. Above a certain population, crime
becomes an issue, and the player must start implementing a justice system. Above a certain level
of wealth, nobility will arrive, requiring infrastructure to support their (often aggravating and
arbitrary) desires. Dealing with wildlife while scrounging for food eventually turns into
repelling besieging armies looking to pillage a wealthy, thriving city which is constructing a
throne chamber fit for a monarch.
A game with so many possibilities invites people to invent many ways to play.
Communities of players often play “legacy” games, where the fortress is handed off to a new
player at the end of a simulated year. This high-stakes game of telephone can often result in
disastrous but hilarious narratives. The recounting of one such game on a message board, known
as “Boatmurdered”[94] (one of Dwarf Fortress’s randomly christened names) was how I personally
discovered Dwarf Fortress. Half-built machines, murderous wildlife, uncooperative dwarves
and players alike create an almost absurdist misadventure, where the simulation’s departures
from reality only add to the often macabre humor.
Dwarf Fortress can be considered an auteur work, the result of years of development
(and still under construction) by one primary creator, Tarn Adams, with some input from his
brother Zach. His background is in mathematics, and his income derives in large part from
player donations. The fact that a two-person team of semi-hobbyist programmers could produce
such a complex game illuminates the power of procedurality. More traditional methods would
[94] Various. “Dwarf Fortress – Boatmurdered.” Let’s Play Archive. https://lparchive.org/Dwarf-Fortress-Boatmurdered/ (Accessed April 14, 2007).
have fallen short of creating the signature dynamics required: even a professional team of
fifty people working without PCG could not have produced a work nearly as intricate.
Minecraft creator Markus Persson mentions Dwarf Fortress as a primary inspiration,[95]
and both share an open-ended, simulation-heavy philosophy. Minecraft offers a first person
visualization of a Dwarf Fortress-like space; indeed, both have volumetric 3D spaces with
blocklike cells interacting with each other. Minecraft eschews the history and agent-based
civilizations for the most part, focusing more on immersion and player interaction with the
landscape. Minecraft, in short, includes the simulated physical dynamics but not the history.
The sheer complexity of simulation in Dwarf Fortress can intimidate a starting player. A
mature fortress requires attention to housing, garbage disposal, trade agreements, seasonal food
harvests, steel production, justice systems, the odd vampire infestation…the list goes on.
Acclimatizing to the possibilities even as the game world continues to act requires considerable
time and patience. Nevertheless, the community that has grown around the game has built epic
generational stories, functional computing machines, volumetric works of art, and other creations
using the immense potential offered by a virtual space obsessed with its own detailed dynamics.
[95] Daniel Goldberg and Linus Larsson. “The Amazingly Unlikely Story of How Minecraft Was Born”. Wired.com. https://www.wired.com/2013/11/minecraft-book/ (Accessed November 5, 2013).
Other Works
The following works are important enough in just about any discussion of procedural
content generation that they must be included, although in these shorter summaries I will only
outline some specific aspect of each, which should not be taken as a complete exploration of a
potentially sprawling space. The works include two galaxies, a town with over a century of
history, and a volumetric playground larger than the planet Earth, so for the time being I’ll only
highlight small parts of each experience.
Elite
Games involving procedurally generated galaxies have become common enough to
justify their own genre, and a venerable one at that. Elite was released in 1984, and invites
comparison to a majority of later works. Elite Dangerous, a massive online game, is itself a
direct descendant, and while the two are separated by over three decades, similarities are easily
seen. The inclusion of virtual reality, planets a player can land on, Internet connectivity, and any
number of other features all look like added detail to an existing superstructure nearly as old as
the home computer.
Elite has a “sandbox” structure, where the player is given a spaceship and a large amount
of freedom to do what they want. They might travel from system to system trading commodities,
or make a name for themselves fighting pirates in the open void. The game functions like a
flight simulation (or space simulation) with the player looking out the front of their craft. The
screenshot shows the player trying to enter the mail-slot shaped opening of a rotating space
station.
Fig. 32. Space Station in Elite. From: Wikipedia, https://en.wikipedia.org/wiki/Elite_(video_game)
Fig. 33. For comparison, a similar space station in the 2014 sequel, Elite Dangerous. From: Elite Dangerous Wiki.
http://elite-dangerous.wikia.com/wiki/File:Coriolis_station.jpg
Elite had the same limitations as other games in the early 1980s, but ended up becoming
one of the first home computer games to use realtime 3D graphics, as well as to contain an
amount of content that would normally be considered impossible on a 1984 cassette tape or
floppy drive. Similar to Telengard’s gargantuan dungeon, Elite contained galaxies of planets to
explore, and relied on algorithmic manipulation of random numbers to generate them. Also like
Telengard, these planets were generated in a way that was persistent, and would be in the same
place, with the same qualities, every time the player started a new game.
A considerable amount of information about the original Elite’s workings can be found
on a site devoted to an open-source update to the game, known as Oolite (“Object Oriented
Elite”). Despite boasting higher quality graphics and other added features, the planet names and
qualities remain virtually identical to the original 1984 version.
Elite’s random number generator uses a Fibonacci-like sequence, dividing each number
in the sequence by 65536 and taking the remainder (known as the modulo operation) to keep the
values from growing without bound. Modulo operations also “roll
over” a number, allowing cycles and periodic patterns to form. Elite then does an operation,
known informally as “the twist”,[96] shuffling the bytes that make up the produced number like a
shell game. The shuffled bytes are then added to the original value, resulting in a sequence that
allows 32,768 planets in a galaxy – Elite only uses 256 of them, and allows the player to visit 8
galaxies. The number generated gives each planet its name, government type, galactic
coordinates, and other relevant information.
97
Certain flaws come with this process, like a few
repeated names of planets. Another flaw comes from a result of gameplay mechanics limiting
how far a player might travel from a world—some worlds are created outside of this range, and
certain circumstances could result in the player being stranded on one of these “lost planets” with
no way back.
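The overall scheme can be sketched in a few lines of Python. This is an illustrative reconstruction, not the original routine: the byte "twist" here is a simple rotation, and the seeding constants are stand-ins.

```python
def twist(value: int) -> int:
    """Shuffle the two bytes of a 16-bit value (a stand-in for Elite's 'twist')."""
    return ((value << 8) | (value >> 8)) & 0xFFFF

def planet_seeds(seed: int, count: int) -> list:
    """Fibonacci-style sequence: each number is the sum of the previous two,
    taken modulo 65536 so values 'roll over' instead of growing without bound.
    The twisted bytes are then folded back into the value."""
    a, b = seed & 0xFFFF, (seed >> 16) & 0xFFFF
    seeds = []
    for _ in range(count):
        a, b = b, (a + b) % 65536
        b = (b + twist(b)) % 65536
        seeds.append(b)
    return seeds

# Persistence without storage: the same seed always regenerates the same planets.
assert planet_seeds(0xCAFEBABE, 8) == planet_seeds(0xCAFEBABE, 8)
```

Because the sequence is fully determined by its seed, every planet reappears in the same place with the same qualities on every new game, just as in Telengard.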
Elite has something else in common with Telengard—a lexicon, used to generate
planetary descriptions, has much the same result as Telengard’s Inn names. The descriptions
[96] “Random Number Generator”. The Elite Wiki. http://wiki.alioth.net/index.php/Random_number_generator (Accessed June 6, 2016).
[97] Ibid.
themselves have no direct effect on gameplay or mechanics, but economic conditions and values
of trade goods are also generated, providing each planet with notable qualities in regards to
player activity. Industrial planets might have a high premium on food prices, while agricultural
worlds will pay more for machinery and technological goods. Figuring out profitable trade
routes, one of the potential player goals of the game, can take quite a bit of time—and with the
large number of variables a planet might have, each provides a salient point in the experience of
the would-be space traveler.
Spore
Spore was a game from Maxis studios, in the tradition of the studio’s Sim- games like
SimCity. The player started as a microscopic organism, developed into a sentient creature, then a
civilization, before building a galactic empire. Spore’s planets and galaxies, while impressive,
had precedents in games like Elite. The larger part of Spore’s pioneering work came from its
open-ended, easy-to-use creature creator, where a novice user could build a creature with an
arbitrary number of limbs and still have a feasible walk cycle. Technically, the results were
achieved through a combination of very sophisticated skeletal rigging and a large library of
components designed by human hands.[98] Spore populated its planets with user-created content,
augmenting its procedural creatures, buildings, and environments with the input of an anticipated
user base. As noted previously, demoscene coders and their mentality were part of Spore’s
development, but the game made a specific and deliberate effort to allow novice users to create
fairly complicated digital work that would not require dedicated coding skills. Players of the
[98] Chris Hecker, Bernd Raabe, Ryan W. Enslow, John DeWeese, Jordan Maynard, and Kees van Prooijen. "Real-time motion retargeting to highly varied user-created morphologies." ACM Transactions on Graphics (TOG) 27, no. 3 (2008): 27.
game became their own “black box,” a procedural component to other players, their output
producing novel creatures and artifacts for fellow users to encounter.
Fig. 34. Spore Creature Creator. From: Spore Wiki, http://spore.wikia.com/wiki/Spore_Creature_Creator
No Man’s Sky
No Man’s Sky from Hello Games is a relatively recent addition to the PCG space travel
genre. It allows a player to traverse a number of galaxies, fly in a spaceship, land on a planet,
and walk around that planet, all as a seamless experience. The game streams procedurally
generated data for this continuous traversal, while further using PCG for planets, spaceships,
plants, animals, and intelligent lifeforms the player might encounter. Marketing materials claim
the number of discoverable planets is 18,446,744,073,709,551,616, which seems plausible: the
figure is 2^64, the number of distinct values a 64-bit integer can hold.[99] The limitation comes
more from the maximum values representable on a computing system than from storage space or
even the algorithm as such.
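A toy version of such seed-driven generation might look like the following Python sketch; the property names and ranges are invented for illustration and bear no relation to Hello Games' actual code:

```python
import hashlib

def planet_at(galaxy: int, x: int, y: int, z: int) -> dict:
    """Derive a planet purely from its coordinates: hash them into a 64-bit
    seed, then read properties off the seed. Nothing is ever stored, so the
    full range of 2**64 worlds costs no disk space."""
    key = f"{galaxy}:{x}:{y}:{z}".encode()
    seed = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return {
        "seed": seed,
        "radius_km": 2_000 + seed % 10_000,
        "biome": ("desert", "ocean", "forest", "ice")[(seed >> 32) % 4],
        "has_life": (seed >> 16) % 4 == 0,
    }

# Revisiting the same coordinates regenerates the identical world.
assert planet_at(0, 12, -3, 7) == planet_at(0, 12, -3, 7)
```

The game can stream such data on demand as the player travels, generating only what is in view and discarding it afterward, since the coordinates can always rebuild it.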
[99] Sean Murray. “Exploring the 18,446,744,073,709,551,616 planets of No Man’s Sky”. Playstation.blog. https://blog.eu.playstation.com/2014/08/26/exploring-18446744073709551616-planets-mans-sky/ (Accessed August 26, 2014).
Despite cutting edge graphics and technical PCG, visually No Man’s Sky has a certain
retro aesthetic. No Man’s Sky took sizable inspiration from classical works of science fiction
illustration. Project lead Sean Murray explained in an interview:
[There isn't] one [game] that matches my view of what science fiction is. My view is
what I grew up with, which is book covers—that's what I picture as science fiction. Often
quite colourful, quite vibrant, quite exciting to look at. A mixture of all the authors you
can think of, Heinlein, Clarke, Asimov, but also Chris Foss, the actual artist behind them.
That's how I picture those worlds.[100]
The visuals of No Man’s Sky use considerable calculation and procedural shading in order to
mimic such covers, looking at times like an airbrushed or oil painting that might adorn a
paperback found in a used bookstore.
Fig. 35. No Man’s Sky screenshot.
The development team created tools specifically to show dozens of instances of an
animal, vehicle, or plant that might be generated given a series of parameters.[101] Programmer
[100] Martin Robinson. “The art of No Man’s Sky”. Eurogamer.net. http://www.eurogamer.net/articles/2015-03-20-the-art-of-no-mans-sky (Accessed March 20, 2015).
[101] Grant Duncan. “No Man’s Sky: How I Learned to Love Procedural Art”. GDC Vault. http://www.gdcvault.com/play/1021805/Art-Direction-Bootcamp-How-I (Accessed 2015).
Hazel McKendrick further emphasizes the necessity for each asset to integrate with the rest of
the game: “…nothing exists in isolation, you can’t create a coherent game generating one asset at
a time…it needs to be tied to all of your other systems.”[102] Designers and artists were not
creating a single entity, but a range of them, and tools were required to see a wide range of
consequential results taken from their initial parameters. Working with PCG requires a different
paradigm than more orthodox development strategies. Doubling your team size does not
necessarily double your output. There’s a reason many PCG games have only had one or two
primary authors. Hello Games has a larger (although still small by commercial standards)
development team, necessitating the creation of tools and a defined work pipeline to
accommodate their focus on PCG.
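A tool of that kind reduces, at its core, to sampling many concrete instances from designer-set parameter ranges. The sketch below is hypothetical and uses invented parameter names, but it illustrates the workflow described above:

```python
import random

def sample_creature(ranges: dict, rng: random.Random) -> dict:
    """Pick one concrete creature from the ranges the designer set."""
    return {name: round(rng.uniform(lo, hi), 2) for name, (lo, hi) in ranges.items()}

def preview_sheet(ranges: dict, count: int, seed: int = 0) -> list:
    """Generate a whole sheet of variants: the artist is judging the *range*
    of outcomes a parameter set implies, not any single asset."""
    rng = random.Random(seed)
    return [sample_creature(ranges, rng) for _ in range(count)]

# Invented parameters: the artist sees a dozen creatures, not one.
sheet = preview_sheet({"leg_length": (0.2, 2.0), "neck_length": (0.0, 3.0)}, 12)
```

Tightening or widening a range and regenerating the sheet shows at a glance how the distribution of results shifts, which is the judgment such preview tools are built to support.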
Fig. 36. A screenshot of one of Hello Games’ development tools, showing a variety of potential creatures to
encounter. From: “No Man’s Sky: How I Learned to Love Procedural Art” GDC Vault.
http://www.gdcvault.com/play/1021805/Art-Direction-Bootcamp-How-I
[102] Hazel McKendrick. “Procedural Doesn’t Mean Random: Generating Interesting Content”, ProcJam 2014. https://www.youtube.com/playlist?list=PLxGbBc3OuMgiHnQfqerWtgJRA3fMzasra (Accessed 2014).
Bad News
Bad News is a hybrid digital/live-action game project from the Expressive Intelligence
Studio at the University of California, Santa Cruz. Set in 1979, Bad News has the player, in the
role of a mortician, search for a recently deceased person’s next of kin.[103] After a computer
program generates an entire town’s
history from its founding generations earlier, a live human actor manages the player’s
exploration of the town and their interactions with the townspeople. The interaction is fairly
simple, with the player and actor sitting on opposite sides of a table, similar to an improv
exercise. The added challenge comes from the player trying to find the next of kin, but not
letting on why (as the player’s mark should be the first to know about the bad news). The player
will likely inadvertently learn more about the relationships and history of the town, both tragic
and humorous, and this detail fleshes out the experience of what might at first seem a simple
task.
The origins of Bad News came from an improvised idea that proved more successful than
initially planned. The team realized the game did not have to be fully digital:
The game was originally intended as a live-action prototype that would guide content
creation for a fully digital game. However, as we began to playtest the experience with a
variety of users, we were impressed by how deeply immersed many of them became in
exploring the town and meeting its residents. We were particularly touched by how
emotional players seemed to get when delivering the eponymous bad news to the next of
kin; many of them struggled to search for the right words, with audible hitches in their
voices. Inspired by these performances, we decided to develop Bad News as a standalone
experience.[104]
[103] James Ryan, Ben Samuel and Adam Summerville. Bad News Homepage. https://www.badnewsgame.com/overview/ (Accessed April 30, 2017).
[104] “About Bad News”. Indiecade. http://www.indiecade.com/games/selected/bad-news
Bad News is a cybernetic experience, leveraging the presence of a human actor in order to
filter and make sense of the information the generated town history provides. The presence of a
human element might almost be considered cheating, as that person’s highly adaptable
intelligence with considerable understanding of social mores and context (at least compared to a
computer intelligence) would seem to eliminate a large amount of problems normally
encountered by a simulated experience. The fact that the experience came from a surprisingly
effective prototype underscores how valuable a human presence can be, and shows how much a digital
homunculus remains an aspiration of PCG in general. When your prototype materials (usually
dice, index cards, maybe a game board) for a presumably all-digital game include “a person who
will converse with the player,” your sights are high indeed. From the other direction, providing
an actor with generations of history and a town full of characters creates an intimidating pool of
resources.
Bad News accidentally added another medium to this discussion that PCG can apply to:
theatre. The synthesized town history is the equivalent of a galaxy of improvisational prompts
for an actor. The algorithmic aspect enforces cohesion and continuity; the actor has faith that
family relationships make sense and events follow cause, effect, and order. The human actor
then needs to undertake the monumental task of synthesizing this data into the series of
characters the player will interview as they search for the person to whom they will need to break
the bad news. While it lacks many of the trappings of technical spectacle, Bad News is
undoubtedly cybernetically enhanced theatre, and, due to the inclusion of a digital history
combined with a live human, it offers a level of interactive immersion most purely digital works
still only dream of.
Minecraft
The mainstream success and attention of Minecraft makes either a short summary or a
sprawling exploration intimidating. The promise of a reactive, modular, LEGO-style world with
more square feet than the planet Earth has an allure approaching universal. The networkable
capability of the game, allowing different players to meet and play in a shared space, no doubt
contributes to its popularity. As mentioned, Minecraft owes much to the open aspects of Dwarf
Fortress, while adding a first-person-centric aspect to the experience.
The division of the world of Minecraft into discrete blocks of various types of materials
drives the mechanics of the game. Players collect raw materials that they can move, forge, burn, or
convert into other materials. The ability to craft the landscape has resulted in impressive creations,
such as scale models of cities, fully functioning computers, and in-universe programmable 3-D
printers.[105][106]
The terrain of Minecraft uses a variation of Perlin noise. In fact, external hacks can adjust
the periods and amplitudes of the noise that defines the hills, mountains, and valleys the player
can explore. While blocks can react with each other, this reaction is almost always initiated by
the player (or possibly another creature, like the infamous, exploding creepers). Indeed, a
primary reason the scale of Minecraft works is due to the static nature of the space until the
player interacts and changes a portion of it. Storing the world without the compression PCG can
offer would make the game untenable. At least one player-created mod, called Wedge,[107] allows
[105] “minecraft no command block 3D printer [Tutorial]”. Youtube.com. https://www.youtube.com/watch?v=t-xMnBsKG8k (Accessed January 11, 2016).
[106] “3D Printer With 16 Colors - Minecraft Invention”. Youtube.com. https://www.youtube.com/watch?v=NosYiyNXhzQ (Accessed December 16, 2013).
[107] Minecraft Forum. http://www.minecraftforum.net/forums/mapping-and-modding/minecraft-mods/1281979-1-2-5-wedge-the-worldgen-editor-create-worlds (Accessed March 4, 2012).
users to hack the octaves of the noise, changing the output of the landscape, revealing a crucial
part of how the game builds its terrain.
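The octave mechanism a tool like Wedge exposes can be illustrated with a simplified one-dimensional value noise (a simpler cousin of Perlin noise, used here only to show how periods and amplitudes combine; Minecraft's actual generator is more elaborate):

```python
import math
import random

def lattice(i: int, seed: int) -> float:
    """A fixed pseudo-random value in [-1, 1] at integer position i."""
    return random.Random(i * 1_000_003 + seed).uniform(-1.0, 1.0)

def value_noise(x: float, seed: int = 0) -> float:
    """Smoothly interpolate between lattice values (smoothstep easing)."""
    x0 = math.floor(x)
    t = x - x0
    t = t * t * (3 - 2 * t)
    return lattice(x0, seed) * (1 - t) + lattice(x0 + 1, seed) * t

def terrain_height(x: float, octaves: int = 4, persistence: float = 0.5) -> float:
    """Sum several octaves of noise: each octave doubles the frequency
    (halving the period) and scales the amplitude by `persistence`. These
    are the kinds of knobs a worldgen editor lets players turn."""
    total, amplitude, frequency = 0.0, 1.0, 1.0
    for i in range(octaves):
        total += amplitude * value_noise(x * frequency, seed=i)
        amplitude *= persistence
        frequency *= 2.0
    return total
```

Low octaves contribute broad hills and valleys; high octaves add fine surface detail. Hacking the octave count or amplitudes, as Wedge does, visibly reshapes the landscape without touching any stored data.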
Fig. 37. Minecraft landscape with noise values altered with Wedge. From: Minecraft Forum. http://www.minecraftforum.net/forums/mapping-and-modding/minecraft-mods/1281979-1-2-5-wedge-the-worldgen-editor-create-worlds
Fig. 38. One interface of Wedge, allowing the user to play with noise parameters. From: Minecraft Forum. http://www.minecraftforum.net/forums/mapping-and-modding/minecraft-mods/1281979-1-2-5-wedge-the-worldgen-editor-create-worlds
Minecraft’s use of PCG allows incredibly dynamic interaction with the virtual space, as
well as offering an incredibly vast space to play in to begin with. Its implicit challenge to hack and
adjust it has allowed the game to become a phenomenon above any specific narrative or single
goal it may have had.
Personal Work
The following case studies are from my own work. I’ve found that my understanding of
procedural content generation greatly increased when I began to implement it. The artistry
behind PCG comes from defining a structure and evaluating the dynamic range of consequences.
Trial and error are not the only parts of the development process, but they are present, and part
of the lure of working with PCG is exploration as much as the implementation of a well-thought-out plan.
Games of Life: Two simple implementations of Conway’s Game of Life on the Atari 2600
and the Hololens, two devices separated by some four decades of time. The exercise serves in
part to familiarize myself with both platforms, as well as to meditate on the nature of how their
structures influence Conway’s deceptively simple algorithms.
The Night Journey: A game built on the video artwork of Bill Viola. My focus in this
case study is on the graphical effects used in the real-time 3D imagery, which I developed in part
to mimic the technology artifacts of some of the earliest video equipment. The process amounts
to a real-time digital rotoscoping of a polygonal space, simulating analog qualities while
accommodating video data to composite into a final frame.
5 Processes: An experiment in a self-contained, dynamic virtual space consisting of five
interacting materials of Water, Wood, Fire, Earth, and Metal, inspired by the elemental system of
Wu Xing. Each material is represented by a different cellular automaton, reacting with the other
four. The shifting balance of the elements is visualized as mountains, oceans, and forests that
ebb and flow with the simulation. Animals made of the same elemental materials look for
deficiencies in themselves, seeking to consume materials they lack. The overlapping chaotic
systems end up bringing about emergent patterns, as the cyclical nature of the Wu Xing means a
dominant element eventually creates a surplus of the new material that will undermine it.
LocusSwarm: A thought experiment where I imagined a display aesthetic as a virtual
ecosystem, each picture element (i.e., “pixel”) a simple artificially intelligent agent, wandering
through an image and consuming new or changed data coming from a Kinect camera system.
The environment and the frame from a live camera would be one and the same, but only the
behavior of the agents would reveal the actual space.
Forska: A collection of code, aesthetics, and algorithms flying in close formation,
currently manifesting as a navigable landscape painting. Numerous terrain generation systems,
non-photoreal rendering systems, and a navigation system based on interacting with a still image
while still maintaining a 3-dimensional space all form the core pieces of Forska in its current
state. Forska has become a repository of ideas that can interact together or separately, not only
to store code and PCG ideas, but to build an infrastructure to support more complicated PCG
ideas such as artificial intelligence.
I have learned much, both on theory and through practice, in the last few years. The
reading and exams have given me a further grounding and confidence in my knowledge of the
topics I have explored in an artistic way. I find dynamics more trustworthy than statements – in
many of my works I find that dynamics speak much more eloquently than a rigid idea, and I can
learn from the work as much as any other player or spectator. I tend to value a project more on
what I have learned, or where it has led me, than what I might have originally meant to say with
it.
I look to develop more sophisticated ways of producing dynamically driven virtual
spaces, trying to find the most interesting volume between static spaces and chaotic noise.
Emergent systems look for an “Edge of Chaos” to skip across—I’m looking to codify similar
ideas in terms of world building and game agency. At what point can a system enable creativity,
rather than stifle or overwhelm the muse? This undoubtedly changes from person to person, and
from occupation to occupation. A game designer will find different types of inspiration than a
cinematographer, even if they’re looking at the same source.
Each of these projects has wandered, but all carry common threads between them. The
next step is to bring these pieces, changed in their travels, back together, and with this unity
move forward to new places profound.
Games of Life
These two works are slight, but their inception is instructive. The first work is a
recreation of Conway’s Game of Life implemented on an Atari 2600. The second, bookending
work is a 3-dimensional version of Conway’s simulation system on the Microsoft Hololens.
Variants of Conway’s Game of Life have become something of a “Hello World!” program I use,
in order to understand a hardware or software platform’s capabilities, as well as to verify my own
understanding as I adapt the venerable algorithm.
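For reference, the algorithm itself fits comfortably in a dozen lines of Python, which is part of why it travels so well between platforms:

```python
from collections import Counter

def step(live: set) -> set:
    """One generation of Conway's Game of Life on an unbounded grid.
    `live` holds the (x, y) coordinates of cells that are currently alive."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell lives next generation if it has exactly 3 neighbors,
    # or has exactly 2 neighbors and is already alive.
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The "blinker" oscillates with period 2.
blinker = {(0, -1), (0, 0), (0, 1)}
assert step(step(blinker)) == blinker
```

Porting this to a platform like the Atari 2600 or Hololens means replacing the convenient set and counter with whatever memory and display structures the hardware actually affords, which is exactly what makes the exercise revealing.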
“Hello World” is a term used to describe a bare minimum program, usually used as a
measure of how complicated a program will be on a given software platform. The classic
version of the program would simply output the text “Hello world!” to a computer screen. One
written in BASIC on the Apple II computer would simply be:
10 PRINT “HELLO WORLD!”
The simplicity of the program belies the profundity of its utility—“Hello World” can
reveal how complicated a given hardware infrastructure is to work with, or show how much
access a programmer might have (and need) to build such a program. In x86 assembly under
DOS, “Hello World” can be written as such:[108]
dosseg
.model small
.stack 100h
.data
hello_message db 'Hello, World!',0dh,0ah,'$'
.code
[108] WikiWikiWeb. “Hello World in Many Programming Languages”. http://wiki.c2.com/?HelloWorldInManyProgrammingLanguages (Accessed November 12, 2014).
main proc
mov ax,@data
mov ds,ax
mov ah,9
mov dx,offset hello_message
int 21h
mov ax,4C00h
int 21h
main endp
end main
The program declares the sentence “Hello, World!” as data, sets up the data segment, points the
DX register at the message, and asks DOS to print it via interrupt 21h (function 9), before
explicitly ending the program with the DOS exit call (function 4Ch). A bit more complicated
than the previous BASIC example, but slightly more revealing
of the inner workings of a computer. One might compare manual to automatic transmission in a
car, or instant brownie mix to baking from scratch.
Complexity and control are not the only considerations, merely ones easy to illustrate.
Developers and artists routinely use existing digital platforms for purposes unforeseen by the
platform’s designers and manufacturers. A major hurdle to artistic expression comes from
understanding these platforms, at least enough to use them, and, later, enough to both work to
their strengths and overcome their limitations. I have not quite reached this latter stage with
either the Hololens or Atari, but I simply submit implementations of celebrated cellular automata
as a polite greeting to each.
Fig. 39. Games of Life, both the Atari 2600 and Hololens version in action.
Platform Theory is in part a way to acknowledge this aspect of software development in
digital media when discussing its qualities critically. Ignoring the structures enforced by
hardware considerations in a digital work is akin to discussing a city’s population without
considering its infrastructure. Even the 3-dimensional version of Game of Life is venerable, as
far as software history goes—my main source for finding structures and rulesets to implement in
the Hololens came from a paper published in 1987.[109] The hardware platform might be
considered the medium of a work; the idea of a flower depends considerably on whether one
wants to paint the flower in oils or chisel it out of marble.
The one-dimensional form, serialized vertically, suggests the same vertical qualities that
the television scanline imposes on the Atari. The Atari infamously did not have the memory to
store an entire image at the same time, and aspiring programmers “raced the beam”—that is,
[109] Carter Bays. “Candidates for the Game of Life in Three Dimensions.” Complex Systems 1, no. 3 (1987): 373–400.
timed their code with the electron gun driving the television screen in order to determine what
graphics were displayed.[110] Certain limitations within the hardware were overcome in this
manner: The Atari could only draw 2 sprites (moveable bitmaps) at a time, but “forgot” about
ones it had already drawn once the beam drawing a television frame had moved further down the
monitor. A savvy programmer could use the same 2 sprites multiple times, provided they were
not horizontally aligned—the effect might be similar to someone running behind a panoramic
camera in order to receive a double exposure, producing a photograph that contains both the
person and their own doppelganger. I did not have to resort to such methods in this case, but the
iterative, vertical nature of the evolving simulation (changing color each generation/line) is
reminiscent of one of the Atari’s defining technical qualities.
The Game of Life can trivially expand into 3 dimensions; the problem initially was
finding a computer powerful enough to calculate the behaviors of a volume of cells, each with 26
neighbors instead of the 8 or 3 of the 2- and 1-dimensional variants. This problem was minor
(corrected with Moore’s Law of ever-increasing performance in computation) compared to
productively visualizing the volume of cells in a way that allowed a human to grasp the
interactions between cellular structures. Complicated volumetric shapes hide their complexities
due to simple occlusion, and even the highest resolution monitors do little to rectify this issue on
their own. Placing the 3-dimensional simulation in physical space, using the augmented
reality qualities of the Hololens, makes the cells more analogous to a mobile sculpture, one
that you can literally stick your head inside, and promises a possibility of intuitive
understanding not found on previous platforms.
[110] Nick Montfort, Racing the Beam: The Atari Video Computer System. Platform Studies. (Cambridge, Mass: MIT Press, 2009), 28.
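The growth in neighbor count with dimension is easy to make concrete: a Moore neighborhood in d dimensions contains 3^d - 1 cells (a one-dimensional automaton is often said to consider 3 cells because it counts the cell itself along with its 2 neighbors). A minimal sketch:

```python
from itertools import product

def moore_neighbors(cell: tuple) -> list:
    """All cells differing from `cell` by at most 1 in each coordinate,
    excluding the cell itself: 3**d - 1 neighbors in d dimensions."""
    d = len(cell)
    return [
        tuple(c + offset for c, offset in zip(cell, offsets))
        for offsets in product((-1, 0, 1), repeat=d)
        if any(offsets)  # skip the all-zero offset (the cell itself)
    ]

assert len(moore_neighbors((0,))) == 2        # 1D
assert len(moore_neighbors((0, 0))) == 8      # 2D: the classic Game of Life
assert len(moore_neighbors((0, 0, 0))) == 26  # 3D: the variants Bays surveys
```

The exponential growth of the neighborhood is one reason finding well-behaved 3-dimensional rulesets took a dedicated search rather than a simple transposition of Conway's rules.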
The two devices, separated by 40 years, have their share of similarities. Conway’s
algorithm is simple enough that, given my casual acquaintance with both
machines, I was able to implement both pieces in about two or three afternoons, usually while
waiting for something else. My alternative title for this work was “Fear no Platform.”
The Night Journey
In the mid-00’s I was asked to help implement an interactive work with video artist Bill
Viola. The resulting work, The Night Journey, was a navigable space incorporating both
polygonal 3D graphics as well as video pieces from Viola’s collection of works. Neither the 3-
dimensional elements nor the procedural components within The Night Journey were a foreign
idea to Bill Viola. “I see the technology moving us toward building objects from the inside out
rather than from the outside in ... soon images will be formed out of a system of logic, almost
like a form of philosophy – a way of describing an object based on mathematical codes and
principles rather than freezing its light waves in time.”[111]
The Night Journey’s landscape comes from a set of regions taken from Viola’s archives,
modeled and put together in a chimeric whole. A desert, a forest, an ocean, and mountains all
fuse in a central set of canyons roughly shaped like a mandala. The player starts their experience
falling into this world, given a set of simple prompts explaining how to look, move, and “reflect”
in the world. Reflecting (which occurs when the player presses a button) will composite a video
over the 3D-rendered scene, as well as gradually increase the player’s walking speed. When the
speed has increased to a certain point, flight becomes possible and the player can traverse the
landscape with alacrity.
Different actions the player takes, such as traveling through a given place, will secretly
add a “dream” to a queue of clips. The resulting montage of clips is shown after a given
amount of time, as night gradually creeps into the landscape. Reflecting can temporarily push
this sunset back, but eventually darkness falls. The player then sees a video indirectly edited by
[111] Raymond Bellour, Bill Viola, “An Interview with Bill Viola” in October, vol. 34, Autumn, 1985, 94.
their actions, a sort of procedural dream. The clips generally have some relationship to where
the player has traveled or actions they may have taken, creating a mix of familiarity and
distancing that many dreams have. As days pass, the process for selecting which clips the player
sees gradually changes, focusing in on a final montage.
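The logic of such an indirectly edited dream can be sketched as a simple queue. The event names and clip mapping below are invented placeholders for illustration, not the actual data or code from the project:

```python
from collections import deque

# Hypothetical mapping from in-world actions to dream clips.
CLIP_FOR_EVENT = {
    "visited_desert": "clip_desert_wind",
    "visited_ocean": "clip_waves",
    "reflected_at_forest": "clip_forest_light",
}

def record_action(queue: deque, seen: set, event: str) -> None:
    """Player actions silently enqueue an associated clip for tonight's dream."""
    clip = CLIP_FOR_EVENT.get(event)
    if clip is not None and clip not in seen:
        seen.add(clip)
        queue.append(clip)

def nightly_dream(queue: deque, length: int = 3) -> list:
    """At nightfall, splice the queued clips into a short montage."""
    return [queue.popleft() for _ in range(min(length, len(queue)))]
```

Because the queue is filled by whatever the player happened to do that day, the resulting montage carries the mix of familiarity and distance described above.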
The procedural nature of the dreams comes from the goal of producing a novel, different
experience each time a player experiences a day. The obscurity of the process is designed to
encourage contemplation. Indeed, beyond the simple prompts for movement, looking, and
reflecting, the mechanics of gliding, dreaming, and other elements are meant to be discovered
rather than explained.
Four hermitages the player can encounter within The Night Journey are one part of the
many enigmatic elements within the experience. Within each of the four major areas is a hut
containing a door that the player might enter. These foci are, like most elements, taken directly
from Viola’s footage, and include a ruined forest cabin, a forlorn desert hotel, a trailer stranded
in the sand, and ancient ruins in a mountain side. Entering each of these spaces will bring the
player to a “room” either in total darkness or a solid bright white, with a tiny doorway in the
distance. Voices might be heard, or a heartbeat and breath if the player stands still. As the
player walks forward, the spot of light (or dark) will eventually grow in size.
The nature of apparent size over distance is such that a point will not appear to change
size until the viewer comes very close, in which case the object’s size will expand very quickly.
A few different equations are used in astronomy for figuring out the apparent size of heavenly
bodies, but the apparent size of an object versus its distance from the viewer comes down to an
inverse relationship, a curve that can seem almost like a horizontal line at far distances but
approaches an infinitely steep slope at 0 distance from the eye. The relationship can be summarized
as the “parachutist effect”—a skydiver might not notice much apparent motion simply looking at
the ground when they first jump, but in the last hundred feet objects on the ground get very big
very fast. Viola uses this effect in many of his works, and our development group noticed it in
particular when we saw a performance of Tristan while we were finalizing several visual
elements for The Night Journey.
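The relationship can be written down directly: for an object of a given diameter, the angular (apparent) size is theta = 2 * arctan(diameter / (2 * distance)), which is nearly flat at long range and swells abruptly near zero.

```python
import math

def apparent_size_deg(diameter: float, distance: float) -> float:
    """Angular size in degrees of an object of `diameter` seen from `distance`."""
    return math.degrees(2.0 * math.atan(diameter / (2.0 * distance)))

# The "parachutist effect" for a 1-unit object: almost no change across huge
# far distances, then a dramatic swell in the final approach.
sizes = [apparent_size_deg(1.0, d) for d in (1000.0, 100.0, 10.0, 1.0)]
```

At distance 1000 the object subtends well under a tenth of a degree; at distance 1 it fills roughly 53 degrees of view, which is why the final approach feels so sudden.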
Entering the next “room” in each hermitage, the player will find themselves walking with
a synchronized first-person video, which will fade out when the player ceases to move forward.
Walking forward until the player enters a door (and finishes the accompanying video) will
“complete” this particular hermitage, leaving the decayed remains behind as the player re-enters
the exterior landscape. Completing all four interiors will give a final puzzle, a point of light seen
outside. As the player moves forward, they see the point of light is a doorway like they have
seen on the interior hermitages, and walking directly forward will enter this final door, exiting
the chimeric landscape and completing the experience with a final video sequence.
Given the scope of the project, as well as our graphical limitations, I decided to mimic the
early work of Bill Viola involving tube cameras such as the Vidicon. These were among the
earliest handheld video cameras available: monochromatic, and very sensitive to light. Sudden
changes in brightness leave contrails and phosphor burn-in (sometimes permanently—never aim
this type of camera directly at the sun!) and all told produce a distinct sort of visual that Viola
took advantage of for several of his works.
Fig. 40. Original frame from The Night Journey before realtime postprocessing.
Fig. 41. Image after filter added to mimic early video camera.
Fig. 42. Image during a “reflection” with video footage added to 3-D rendered scene.
In many ways, the process was similar to rotoscoping, or painting on an already exposed
piece of film. Programmatically, I would take an image already rendered, which in and of itself
resembled a typical 3D polygonal scene. Using the data from this image, I would sample pixels
and replace them with a new, larger, fuzzier “pixel” mimicking an analog phosphor. This
process was relatively slow, especially since I had to analyze an image along with all the other
algorithms The Night Journey required within 1/30th of a second. Interlacing was one method of
speeding up the process (halving the amount of work) that was conducive to the way broadcast
video already worked. I also randomly varied the brightness of each pixel slightly, introducing
visual noise similar to that seen on tube camera footage, especially grainy footage taken at night
or low-light settings.
Fig. 43. Magnified area before filter added—the digitally pixellated forest.
Fig. 44. After filter added, “pixels” mimic qualities of the cathode ray tube.
Along with the “painted over” process, I made sure to keep a memory of the previous
frame, an image buffer, to then apply to the current frame. This process is additive, so the
“previous frame” contains gradually diminishing afterimages of other previous frames. This
creates an effect akin to motion blur, but since brighter elements of the frame are privileged over
darker areas, the result is more in line with the tube camera’s qualities of “burn in.” Bright
points can become contrails, and the image in motion might lose focus only to consolidate detail
when stationary once more.
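The frame-memory technique reduces to a few array operations. This sketch (in Python with NumPy, rather than the shader code actually used) keeps a decaying buffer and privileges bright pixels so that highlights linger as contrails; the decay and grain constants are illustrative:

```python
import numpy as np

def tube_frame(frame: np.ndarray, memory: np.ndarray,
               decay: float = 0.85, grain: float = 0.02) -> np.ndarray:
    """Combine the new frame with a fading memory of earlier frames.
    Taking the brighter of the two privileges highlights, approximating
    a tube camera's phosphor 'burn in'. Images are grayscale floats in [0, 1]."""
    persisted = np.maximum(frame, memory * decay)
    # Slight random brightness variation mimics analog noise and grain.
    noisy = persisted + np.random.uniform(-grain, grain, persisted.shape)
    return np.clip(noisy, 0.0, 1.0)
```

Feeding each returned image back in as the next call's memory produces the gradually diminishing afterimages described above: a bright point fades over several frames rather than vanishing at once.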
The image processing both imitates certain aspects of Bill Viola’s work in video and
defines a way of producing virtual images beyond what a graphics card might give an artist.
The sharp lines and edges typical of most 3D rendered spaces soften in this rendition. Trees and
foliage that would look mathematically 2-dimensional are given more volume, and the pseudo-
analog pixels mesh with the video samples that overlay various segments of the experience.
I am very aware that the ongoing progress of computer graphics means that even
(especially) cutting edge visuals using typical techniques and graphical conventions age and date
themselves very quickly. What looks “photo-real” one year looks like sloppy origami the next.
One way to avoid this pitfall is to wrest the aesthetic from the inherent technology and try to
define one’s own style. Advancements in graphical shaders and other techniques over the years
have meant that there are more possibilities—not just more polygons, but a more open method of
accessing how the computer renders a defined space. Knowledge of technique and procedure
allows an artist to define how a viewer or player “sees” the space in ways they might not have
ever imagined, but if the work continues in ignorance, the artist will let this potential fall out of
their control and back to default numbers on a video card.
5 Processes (Wu Xing)
After I had made several world simulators that grew too complicated as I added
numerous elements to simulate, 5 Processes was an attempt at a “complete” world, one simple
enough to fully describe but complicated enough to have dynamic, ever-changing structures that
could provide stimulus to human and AI players alike. An artificial intelligence needs a problem
or environment interesting enough to react to and interact with, and I suppose this might apply to
human intelligence as well.
The Wu Xing, or five processes, is a system often used in feng shui, one that combines
oppositional and cyclical relationships. On a technical level, 5 Processes consists of five
separate cellular automata, each feeding into the others, plus a visualization of their relationships.
Adding entities, be they characters, objects, or materials to a virtual space can be
frustrating, as any new dynamics must be reconciled with all currently existing components in
the project. Adding a chair to a virtual space, for instance, might have to introduce a variety of
affordances (“sitting,” for starters; “lion taming” for the more advanced), and the author will
likely miss something. 5 Processes came, in part, from looking at a virtual space on a chemical,
or perhaps alchemical level. What if everything in the space had to be composed of a finite
variety of elements or materials? What if these materials reacted with each other in specific
ways, constantly, predictably, but in a manner introducing both stability and instability within the
contrived environment? If such a system worked, all questions regarding an object’s (or even
character) behavior could largely be answered using this chemistry analogue. An object could
melt, combust, solidify, or decay without explicit, unique scripting required. How few elements
would be needed, and what would be the best way to organize them? Looking at different
systems, I found the relationships of Wu Xing promising, although part of me was tempted to
make a world defined by the alchemical components of sulfur, mercury and salt, in an attempt to
make an analogue of the terrestrial world as understood by medieval philosophers and charlatans
alike.
Fig. 45. Wu Xing elemental relationships.
The combination of the cyclical and oppositional relationships described in each of the
five elements in Wu Xing gives each element its own period of dominance. I developed a
system where each element was visualized in a rather literal form. The amount of water in the
world was reflected in sea level, for instance, while the amounts of earth and metal formed the
elevation at any given point. Wood and fire were reflected in the presence of trees, “fire”
darkening such trees, implying scrub land and chaparral as opposed to pine forests and jungle.
I also wished to have these reactions occurring throughout the world. Unlike many
games that have a crafting or reaction system that relies on specific player input, I was curious
what a world would look like if a reactive system applied everywhere at once. Minecraft’s
system of combining components to make new items generally comes from player activity, even
if raw materials come from the environment. Indeed, given the size of the Minecraft space,
having each block constantly react with its surrounding blocks would have been technically
infeasible. Even if it were possible, designing a coherent landscape would have been much more
complicated. Minecraft’s landscape is gigantic, but relies on the fact that its initial generation of
mountains, caves, rivers and so forth are ultimately static until a player interacts with them. The
volumetric nature of the game also requires constant optimization of space—it’s impossible to
track every single block. A world 1,000 blocks on each side would be one billion blocks, easily
taking up gigabytes of data, and Minecraft’s space is considerably bigger than these dimensions.
Volumetric numbers—that is, taking numbers to the third power—get big very quickly.
Minecraft and similar works get around this by “chunking” sections of the map: simplifying
portions that contain similar block types, encoding structures using Perlin noise, attending only
to the areas closest to the player or places the player has affected, and forgetting and recreating
everything else as it passes in and out of the player’s field of view. 5 Processes, being among other
things an experiment in a “dynamic, everywhere” virtual space, uses none of these optimizations.
Indeed, the space is designed to run quite happily without player input, even as it can respond to
the player interacting with the mixture of elements that define the space.
Each “cell,” or small component of the grid that defines the qualities of the virtual space,
contains some mixture of the Wu Xing elements of water, wood, fire, earth, and metal. The
literal depiction of the elements as their physical namesakes simplifies a lot of the symbolism
often associated with the ideas behind the elements and their relationships. The names for the
elements might as well have a “-like” added as a suffix, as they apply to ideas such as seasons,
music, personalities, cardinal directions, and many other aspects of space and time. Indeed, on a
certain level, their depiction as a landscape in this work might be considered a symptom of the
larger goal of contemplating the abstract dynamics that govern their combination of
cyclical and oppositional relationships. Were I a musician, I might have learned similar lessons
and understanding working with sound.
Starting with the “Water” element, I have cells with high concentrations of water disperse
rather quickly through the map, and the “sea level” of the world is effectively a visualization of
the average water throughout the entire simulation.
The following elements of “Wood,” “Fire,” and “Earth” work in a very similar fashion,
preferring to spread outward from their original progenitor cells, but with each element doing so
at a different rate. “Earth” is visualized as erosion, with higher concentrations directly affecting
terrain height. “Wood” and “Fire” are visualized as trees with color ranging from green in the
former element to black in the latter.
The “metal” element I associated with mountain ranges, and it uses a type of cellular
automaton known as “diffusion-limited aggregation,” or DLA. Cells wander randomly until
they hit a “crystallized” or static cell, after which they become static themselves. When applied
to a heightmap, they can produce very convincing mountain ranges, with winding crests and
canyons. I had large concentrations of metal crystallize, which makes any randomly traveling
sources of metal stop moving. The resulting pattern forms mountains that grow into plateaus,
and then into the tall walls of labyrinthine canyon systems.
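As a hedged illustration, a minimal diffusion-limited aggregation fits in a few lines of Python; the grid size, walker count, and neighborhood test below are arbitrary choices for demonstration, not values taken from 5 Processes.

```python
import random

def dla_mountains(size=24, walkers=200, seed=5):
    """Diffusion-limited aggregation: mobile cells wander at random until they
    touch a crystallized cell, then freeze in place themselves. Mapped onto a
    heightmap, the frozen aggregate's branches read as winding mountain crests.
    """
    rng = random.Random(seed)
    frozen = {(size // 2, size // 2)}          # the seed crystal
    for _ in range(walkers):
        x, y = rng.randrange(size), rng.randrange(size)
        for _ in range(4 * size * size):       # wander, but give up eventually
            near = any((x + dx, y + dy) in frozen
                       for dx in (-1, 0, 1) for dy in (-1, 0, 1))
            if near:
                frozen.add((x, y))             # crystallize on contact
                break
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x, y = (x + dx) % size, (y + dy) % size
    # Frozen cells become peaks; everything else stays at sea level.
    return [[1.0 if (x, y) in frozen else 0.0 for x in range(size)]
            for y in range(size)]

heights = dla_mountains()
```

Feeding the frozen set into a heightmap, as here, is what turns the aggregate's branching arms into the winding crests and canyon walls described above.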
While an element might find a period of dominance, it creates the elements that will
eventually undermine it, acting as a negative feedback (that is, stabilizing) system. Making sure
the cycles themselves were stable took some trial, error, and arbitrary values, but the
relationships between the elements already suggested this stable cycle as an optimal dynamic.
One key element in making the virtual space variable was altering the rate at which the
elements “react,” that is, create and transform into new elements. In a homogeneous virtual
space where all places reacted or cycled at equal rates, the entire world would settle into a single
element, and the cycle, while predictable and reliable, would occur all at once. A solid mass of trees
would burn away and lead to a single, featureless plateau, and then uniformly flood. Making the
reaction rate slower at the center and faster (and implicitly more chaotic) at the edges of the
space led to local zones of stability and discernible structures like mountain crests that would
gradually change yet seem to have lifespans that could survive the larger scale transient
dominance each element had.
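The generative and overcoming relationships can be sketched as a per-cell update rule. The element tables below follow the traditional Wu Xing cycles; the reaction rate and the 0.5 suppression factor are stand-ins for the trial-and-error constants mentioned above, not values from the actual project.

```python
# Indices into a cell's five-element mixture.
WATER, WOOD, FIRE, EARTH, METAL = range(5)
GENERATES = {WATER: WOOD, WOOD: FIRE, FIRE: EARTH, EARTH: METAL, METAL: WATER}
OVERCOMES = {WOOD: EARTH, EARTH: WATER, WATER: FIRE, FIRE: METAL, METAL: WOOD}

def react(cell, rate=0.1):
    """One reaction step for a single cell's element mixture.

    Each element feeds the element it generates and suppresses the element it
    overcomes, so any dominant element breeds its own eventual undoing -- the
    negative-feedback (stabilizing) cycle described above.
    """
    new = cell[:]
    for e in range(5):
        amount = cell[e] * rate
        new[e] -= amount                       # mass spent on generation...
        new[GENERATES[e]] += amount            # ...feeds the next element
        new[OVERCOMES[e]] -= amount * 0.5      # ...and suppresses a rival
    return [max(0.0, v) for v in new]          # concentrations stay nonnegative

cell = [1.0, 0.0, 0.0, 0.0, 0.0]               # begin with pure water
for _ in range(60):
    cell = react(cell)
```

Because suppression only removes mass while generation conserves it, the total never grows, and no single element can hold permanent dominance: exactly the stabilizing dynamic the text describes.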
Fig. 46. 5 Processes landscape.
Fig. 47. 5 Processes maps transitioning over time, moving from desert to ocean to forest.
Agents inhabiting the space are made up of a mixture of the five elements themselves.
Currently they do little beyond searching for and consuming elements they are deficient in, but
the possibilities of adding more sophistication to their behavior are many. The mixture could
directly affect their internal behaviors, health and physical form, to say nothing of how they
might perceive and navigate the exterior world.
LocusSwarm
LocusSwarm was another thought experiment in visualizing a space. Instead of static
pixels, why not have each “picture element” be an agent, a decision-making entity, seeking light
input the way a creature would seek food? Instead of a single raster, a scanline that produces
pixels line-by-line the way a CRT television does, what if each “pixel” sought color on its own,
moving in multiple directions? The resulting image system would resemble a primitive
ecosystem. Where 5 Processes focused on the environment, LocusSwarm brought attention to
the agents that might live in one.
LocusSwarm uses the infrared element of the Kinect camera system, designed to detect
the presence of people in front of the device. Heat, such as body heat, can be captured in
this part of the system, separate from the visible-light camera. I ended up treating each “warm”
pixel as “food” for a series, or “swarm,” of wandering virtual agents. While the installation is
treated like a mirror for a viewer, the agents are the only part of the image that the user can
actually see. They change color to match the color of a pixel they have most recently “eaten,”
and will fade if no food is eaten. The result is a semi-persistent afterimage based on the presence
of the viewer or viewers within camera range. The latency of the image makes motion an
interesting prospect—the effect combines certain aspects of long-exposure photography with
slow scanline artifacts.
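The agent dynamic can be sketched as follows. This is an illustration of the "living pixel" idea, with an arbitrary grid size, fade rate, and warmth threshold, not the installation's source code.

```python
import random

class PixelAgent:
    """One 'living pixel': it wanders the frame, eats warm pixels, and fades
    when it finds no food."""

    def __init__(self, x, y):
        self.x, self.y = x, y
        self.brightness = 0.0                  # all the viewer ever sees

    def step(self, heat, rng, fade=0.95):
        h, w = len(heat), len(heat[0])
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        self.x, self.y = (self.x + dx) % w, (self.y + dy) % h
        if heat[self.y][self.x] > 0.5:         # a "warm" pixel is food
            self.brightness = heat[self.y][self.x]
            heat[self.y][self.x] = 0.0         # eaten; it must be re-warmed
        else:
            self.brightness *= fade            # hungry agents slowly go dark

rng = random.Random(1)
# A warm horizontal band stands in for a viewer's silhouette on the sensor.
heat = [[1.0 if y < 2 else 0.0 for x in range(8)] for y in range(8)]
swarm = [PixelAgent(rng.randrange(8), rng.randrange(8)) for _ in range(20)]
for _ in range(40):
    for agent in swarm:
        agent.step(heat, rng)
```

The multiplicative fade is what produces the semi-persistent afterimage: an agent that has eaten never snaps to black but dims gradually, much like the long-exposure quality described above.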
Within a gallery setting, a viewer unacquainted with the piece might stop and
contemplate a dark or chaotic screen, wondering at the dynamic of the piece just long enough for
their silhouette to come into shape. The fact that the wandering agents make no distinction
between separate people made this visualization popular with couples.
Fig. 48. Figure interacting with LocusSwarm.
Forska
Forska is not a single work but a representation of accumulated knowledge. I’ve tried to
wield different techniques of procedural content generation in an effort to understand them, to see them “in
motion,” and I hope as I describe Forska I can illustrate how the earlier chapters figure into this practice.
Fig. 49. Screenshot of Forska.
A Sketchbook, Portfolio, Toolbox, and Zoo
Forska (Swedish for “Research”) is meant to be a place where I can implement different
procedural techniques and see how they play together, without having to worry too much about a
specific goal. Specific goals can be achieved later in external projects, where I can transplant code
fragments and modify them to a more specialized end. Within Forska itself, each technique has a
demonstrative effect on the virtual space, and I try to generalize it to make transplantation to a different
project as easy as I can. Having them all in one place can be very handy as well. A lot of my
experimentation with simulations and agent behaviors, for instance, proved slow going until I realized I
needed an interesting enough environment for the agents to inhabit, sense, and react to. Terrains, agent
behaviors, graphical effects, and other dynamics can be hard to develop in a vacuum, so placing them all
into one project seemed like the logical approach. Systems interacting with other systems can yield
results more complicated and interesting in a multiplicative way. As mentioned in its case study, Perlin
noise shows more promise when combined with other systems, and its behaviors are hardly alone in that
regard. Processes often only make sense when applied to different processes, and the interactions
become more than the sum of their parts.
I use Unity as the primary engine, which has helped me port code to a variety of different
platforms, as well as allowing me to view parameters and world states easily. A certain amount of
traffic between the Java-based framework known as Processing and my own venerable C++ codebase also proves
useful as I develop code. Translating algorithms and source code between different computer languages
tends to demand a better understanding of how each process works, and how it might be made to fit
within new technological affordances and constraints.
Navigation
The idea behind Forska’s navigation is quite simple – you click on the image, and you’re
teleported to where you clicked. In many ways this calls back to adventure games like Myst, but instead
of having a limited database of images, I take the appropriate image using a virtual camera and a 3D
space. It’s a simple matter of ray casting from the mouse cursor to the corresponding point in the
landscape. I realize that many VR experiences, particularly on the Vive, have adopted a similar
approach.
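The click-to-teleport idea can be sketched without an engine by marching a ray through a heightfield until it dips below the terrain. This Python fragment stands in for an engine-level ray cast (Forska itself leans on Unity's physics ray casting); the step size, camera position, and flat-ground terrain function are illustrative.

```python
def pick_destination(origin, direction, terrain_height, step=0.1, max_t=100.0):
    """March a ray from the camera through a heightfield and return the first
    point where it dips below the terrain: the spot the player clicked on.

    origin and direction are (x, y, z) tuples with y up; terrain_height(x, z)
    returns the ground height at a horizontal position.
    """
    t = 0.0
    while t < max_t:
        x = origin[0] + direction[0] * t
        y = origin[1] + direction[1] * t
        z = origin[2] + direction[2] * t
        ground = terrain_height(x, z)
        if y <= ground:
            return (x, ground, z)              # teleport the player here
        t += step
    return None                                # the click hit only sky

# Camera ten units up, looking down and ahead over flat ground at y = 0:
hit = pick_destination((0.0, 10.0, 0.0), (0.0, -1.0, 1.0), lambda x, z: 0.0)
```

Whether the click lands one step away or a mile away, the same single query resolves the destination, which is what makes the paradigm equally suited to short and long hops.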
This idea of “click to move” came from a need to quickly explore large virtual spaces, without
slogging back and forth, or rocketing too fast past small destinations. With the paradigm Forska uses,
taking a single step or walking a mile can both be done in one click. This approach has also proven to
work very well with touchscreens and similar interfaces.
I have watched far too many people struggle with a game controller while exploring a space at
24-60 frames per second. My countless hours of gaming in my youth have given me the dexterity to use
a joystick or gamepad, but many people have not had this tacit education. I wanted to experiment with
removing this barrier of entry to exploring a virtual space.
Moving to a more explicitly real-time, animated, and/or cinematic paradigm is possible, even
trivial in many cases, but being able to see a virtual environment frame-by-frame encourages
contemplation of algorithms and processes currently in use. Having the option to exclude motion has
been liberating, and paradoxically has allowed me to make a more dynamic space in many ways.
Simulating hundreds of canny characters in a virtual scene strains what is possible in real time, for instance,
but becomes far more feasible in a space viewed frame-by-frame. I mentioned Dwarf Fortress’s own
interesting methods of playing with time in order to accommodate simulated complexity; I used a
similar, although not identical, strategy for Forska.
Terrain
The foundation of a terrain comes from a digital elevation model, or DEM. DEMs by themselves
usually look like greyscale images, with the intensity of a color equating to a height. Images can be
taken or generated from just about anywhere, and a lot of the artistry comes from selecting a series of
overlapping systems to create interesting patterns of shades of gray.
One incarnation of generating the mountains and valleys comes from randomly choosing from a
set of explicitly different processes. One process favors rough mountain ranges, another rolling hills,
another still crater-like landscapes. All of these processes use some form of Perlin Noise at some point,
but varying frequencies can mean the difference between smooth, idealized shapes and rough, noisy
patterns.
The following section of code gives a set of Alp-like mountains. “CellNoise” is a function that
encapsulates a number of Perlin noise variants, with the idea that they might be called early and often.
This first line samples a variant type of noise that uses “Manhattan” distance. The Manhattan distance
between two points is not a straight line between them, but a combination of horizontal and vertical
distances. As the name implies, it is reminiscent of walking between two locations on the gridded
streets of a city like New York.
newColor = CellNoise.CellNoiseFunc(new Vector3(x * 0.025f, 0f, y * 0.025f), worldSeed,
CellNoise.ManhattanDistanceFunc);
The 0.025 value is the scale at which the noise function is sampled; increasing or decreasing it
makes the mountains larger or smaller. On its own, this initial pass gives a series of very straight-edged,
pyramid-like shapes that cut off abruptly at the edges of the sampled terrain. The following code
takes the distance from the center of the map and dampens the heights toward the edges, giving the
mountains in the center a tendency to be higher. Adding classical Perlin noise gives just enough chaos
to make the generated mountains look something like a volcanic island.
newColor.x *= (centerDist * (Mathf.PerlinNoise((float)(x + noiseOffsetX) * noiseScale,(float)
(y + noiseOffsetY) * noiseScale)) * 2.0f);
One trick, originally used to generate marble-like textures, comes from subtracting a Perlin noise
function from itself. If the subtracted portion comes from a slightly different place on the noise
function, the result is a pattern of veins, which here read as canyons. This allows clusters of
mountains and canyons to appear in the generated shape:
newColor.x -= ((1 - centerDist) * (Mathf.Clamp(Mathf.PerlinNoise((float)(x + noiseOffsetX) *
coastalNoise,(float)(y + noiseOffsetY) * coastalNoise),-0.1f,1f) * 0.25f));
Further scaling and adjusting might be required to fit the mountainscape within the virtual space,
but these end up being the “pinch of salt” to finalize the created terrain.
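The kind of sampling CellNoise performs can be illustrated generically. The Python sketch below implements Worley-style cell noise with Manhattan distance; the function name, grid size, and feature-point count are assumptions for illustration and do not reflect the CellNoise library's actual interface.

```python
import random

def manhattan_cell_noise(x, y, world_seed, grid=8, points_per_map=16):
    """Worley-style cell noise: the value at (x, y) is the distance to the
    nearest of a set of feature points placed deterministically by the seed.
    """
    rng = random.Random(world_seed)
    points = [(rng.random() * grid, rng.random() * grid)
              for _ in range(points_per_map)]
    # Manhattan distance is |dx| + |dy| -- a horizontal plus a vertical leg,
    # like walking gridded city streets -- so the "mountains" it produces
    # have straight, pyramid-like flanks rather than rounded ones.
    return min(abs(px - x) + abs(py - y) for px, py in points)

sample = manhattan_cell_noise(3.0, 4.0, world_seed=42)
```

Because the feature points depend only on the seed, sampling the same coordinates with the same seed always returns the same height, which is the property the world-seed scheme described later relies on.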
Non-Photoreal Rendering
The painted effect I give each rendered image starts with a typical 3D-rendered camera shot,
which then has a heavily modified blur shader applied to it. A separate, “noisy” texture input gives the
blur a series of offset distances – the final result mimics brushstrokes, and it is this image input that
controls stroke size, direction, etc.
I tend to blur more in the midground, keeping the fore and backgrounds relatively detailed, since
a faraway point of interest can be the same relative size on the screen as an object close to the camera,
and faraway features can be obliterated if one simply does a “more distance = more blur” calculation.
Making blurs proportional to the sine of the distance can be helpful.
float depthToSat = clamp(refDepth,0,1);
depthToSat = 1 - depthToSat;
depthToSat -= 0.4;
depthToSat *= 0.5;
depthToSat = clamp(depthToSat,0,.1);
color = lerp(color, lum,depthToSat);
The variable “color” in this case indicates the final pixel color after all of the operations. Taking
the “depth” of the pixel, already stored as part of the process of rendering a 3-dimensional scene, I “flip”
the values so that farther pixels produce larger results than closer ones. A few “magic numbers” like subtracting 0.4
or halving the value come from aesthetic decisions on my part, and are arbitrary in the classic sense—
some value is useful, but a specific one is not inherently necessary. After clamping the value to make
sure it’s within a specific scale, from 0 to 0.1, I interpolate between the original color and a black and
white value. This final command in the shader means that the color will be 100% saturation at its
closest to the camera, and 90% saturation at its farthest, which is just enough to give a sense of depth
even without other factors like fog or weather. Raising the clamp ceiling in the penultimate line, say to
0.5, would let the saturation fall considerably further at the farthest points.
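Translated out of shader code, the desaturation logic reads as follows. This Python sketch assumes a reversed depth convention (1.0 at the camera, 0.0 at the far plane), which is what yields full saturation up close and roughly 90% at distance; the luminance weights are the common Rec. 601 values, an assumption rather than a detail from the original shader.

```python
def desaturate_with_depth(color, ref_depth):
    """Fade an (r, g, b) color toward its own luminance with distance,
    mirroring the shader fragment above."""
    r, g, b = color
    lum = 0.299 * r + 0.587 * g + 0.114 * b   # the black-and-white value
    t = 1.0 - ref_depth                        # flip: distance raises the value
    t = (t - 0.4) * 0.5                        # the "magic number" adjustments
    t = min(max(t, 0.0), 0.1)                  # clamp: at most 10% desaturation
    return tuple(c + (lum - c) * t for c in (r, g, b))

near = desaturate_with_depth((1.0, 0.0, 0.0), 1.0)   # right at the camera
far = desaturate_with_depth((1.0, 0.0, 0.0), 0.0)    # at the far plane
```

Run on a pure red, the near sample comes back unchanged while the far sample drifts slightly toward gray, the subtle depth cue described above.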
In keeping with the sketchbook approach, I’ve developed several methods that mimic styles like
oil paints, pastels, woodcuts, mosaics, and the like. Other procedural elements like terrain generation
and dynamic skies give these shaders good subjects to work from, and can give even identical scenes
their own sense of character.
Fig. 50. A variant shader resembling a Roman Mosaic.
Fig. 51. A variant shader simulating charcoals and pastels.
Oftentimes mimicking an artistic method requires a roundabout method of thinking. Most of the
painterly modes use programs known as Pixel Shaders—simple functions that take a color as an input,
apply a series of mathematical operations to that color, and then output it once more. Shaders can feed
into other shaders, and take multiple colors or data sources as input.
I have played with a lot of these ideas for years, but easy access to depth information
within an image, combined with graphical support for shadows within a 3D scene (a relatively new
development, all things considered), allowed a new level of detail within a scene and the ability to take
advantage of the data such detail implied. Mathematically distorting an image to mimic painting, but without regard
for depth, produces what is known as a “shower door” effect, where a viewer perceives all of the
distortion on the picture plane. Even in a still image this effect can be perceived as a simplistic
operation over an image. Accessing the depth of an image within a shader used to be a fairly
arcane process, but it now serves as a crucial step for producing any number of effects, from virtual
painting to highlighting edges to mimicking a camera’s depth of field.
Seeds and Determinism
As outlined in previous chapters, random number generators typically rely on an initial “seed”
number, and I use this dynamic to attach an ID number to a particular world instance. This allows me to
have access to billions (more, with a little more work) of spaces I can effectively index with single
numbers. Along with collecting methods of rendering, terrain generation, and artificially intelligent
systems, I also collect different methods of random number generation, as well as means of keeping
track of their individual states. Storing the state of the world within a single number can be very fragile,
as it depends entirely, and very sensitively, on the algorithm that follows. Within Forska I instead
manage a series of random generators, each with its own jurisdiction. After a certain
level of complexity, having one source of random/deterministic values becomes unwieldy, as any
change in any part of the complete set of systems could drastically affect everything else. Having one
number generator to manage mountains, another for city maps, another for vegetation, etc. proves to be
useful when adjusting individual processes.
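One way to sketch such "jurisdictions" is to derive each subsystem's generator from the world seed mixed with a stable hash of the subsystem's name. This is an illustration of the idea, not Forska's exact scheme; the names and seed values are arbitrary.

```python
import random
import zlib

def jurisdiction_rng(world_seed, jurisdiction):
    """Derive a reproducible random generator for one subsystem.

    Mixing the world seed with a stable hash of the subsystem's name gives
    mountains, city maps, vegetation, and so on each their own stream, so
    adding a call in one system no longer scrambles every other system's
    output.
    """
    # zlib.crc32 is stable across runs, unlike Python's randomized hash().
    return random.Random(world_seed ^ zlib.crc32(jurisdiction.encode()))

world_seed = 1337
mountain_height = jurisdiction_rng(world_seed, "mountains").random()
city_density = jurisdiction_rng(world_seed, "cities").random()
```

The same world seed always reproduces the same mountains, yet tweaking the city generator leaves the mountain stream untouched, which is exactly the fragility the single-generator approach suffers from.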
One experiment taking advantage of random seeds was generating physical maps of the places
Forska creates. Converting the data into contour maps turned into an automated process itself, and I can
output a series of maps, indexed by their random seed. Given enough paper I could publish the billions
of islands, although even seeing a few dozen allows me to notice trends and commonalities I might
otherwise miss. The process of seeing a physical artifact brings home the odd idea of a single number
encapsulating an entire place, a cartographic echo of Borges’ Library of Babel.
Representation and Platform
Keeping the data accessible and independent of its visualization makes Forska adaptable to
different hardware platforms. The “point and click” nature of the navigation in Forska works
remarkably well with a mobile touchscreen, and Unity’s own support for multiple platforms makes
porting the experience to iOS or Android relatively straightforward. Graphically, many of the shaders
work with minimal alteration, although the complexity of the generated landscapes requires more careful
management of what part of the scene is rendered at what time. Even when viewing the scene as a non-
animated image, a mobile device still requires a sizeable amount of processing time, and I try to keep
this time as small as possible to keep the touch-based navigation responsive.
Fig. 52. Forska on a mobile platform.
Running Forska on a mobile platform requires a number of changes even when the final result is nearly
identical.
Forska is a realtime 3-D space; showing only a single frame at a time is a deliberate choice, not a
necessity. Depicting the non-photoreal space presents a number of decisions, few of them obviously
“correct” or “wrong,” but the added dimension of time makes any real-time variant of Forska a much
more complicated endeavor. I have even tested the experience inside immersive platforms such as
headmounted displays. As it stands, a different process would need to be used to properly support
stereoscopic rendering. Non-photoreal imagery that tends to eschew sharp edges is not incompatible with
stereography, but it does go against common aesthetic decisions meant to strengthen stereo depth cues.
As an example of a non-digital method of presentation, I have developed a script that iterates
through a section of the possible islands Forska can generate and outputs them as printable contour
maps. I admit creating physical artifacts of a normally all-digital work has a visceral gratification, while
the method of output allows me to see the variance (or lack thereof) of the algorithmic methods. Like
the productivity program developed for No Man’s Sky, being able to see procedural products by the
dozen can allow for a much more informed summary of how successful or desirable the current state of
the work might be.
Fig. 53. A generated contour map from one of the islands of Forska.
Agents
The vistas of Forska give complicated algorithms enough space to run and play. I implemented
a version of the famous pathfinding algorithm A* (A-Star). Artificially intelligent agents existing in a
complicated environment require some means of getting from where they are to where they want
to be. A* typically takes a start and end point as input, and sees the intervening space as a series of
costs. A road would be cheaper to traverse than a cliff, for instance. The algorithm is very fast for what
it can accomplish, and is something of a standard by which other pathfinding systems are measured.
Pathfinding can also be used for pathmaking, and using parameters of the A* algorithm I can
create paths between points that believably follow the contours of the hills and valleys they traverse.
The slopes of the hills account for the “cost,” leading to emergent switchback patterns and trails that
look appropriate on a mountainscape or valley.
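A slope-weighted A* can be sketched as follows; the grid representation and the slope penalty of 10 are illustrative choices, not Forska's exact implementation.

```python
import heapq

def a_star(start, goal, height, size, slope_weight=10.0):
    """A* over a square grid where each step's cost grows with the slope it
    climbs, so cheap paths bend around steep ground -- the bias that produces
    the emergent switchbacks described above.
    """
    def estimate(n):                           # admissible: grid distance at min cost
        return abs(n[0] - goal[0]) + abs(n[1] - goal[1])

    frontier = [(estimate(start), 0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dx, node[1] + dy)
            if not (0 <= nxt[0] < size and 0 <= nxt[1] < size):
                continue
            # The "cost" of a step: base distance plus a penalty for slope.
            slope = abs(height[nxt[1]][nxt[0]] - height[node[1]][node[0]])
            new_cost = cost + 1.0 + slope_weight * slope
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                heapq.heappush(frontier,
                               (new_cost + estimate(nxt), new_cost, nxt, path + [nxt]))
    return None                                # no route exists

flat = [[0.0] * 5 for _ in range(5)]
path = a_star((0, 0), (4, 4), flat, size=5)
```

On flat ground the cheapest route is a straight shot; raise a ridge of large height values across the middle and the returned path will detour around or zigzag up it, which is the pathmaking behavior described above.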
I have added agents who will wander the space, and Forska defaults to updating its frame every
few seconds even when no human interaction is forthcoming, so motion might be sensed by a patient
and lucky explorer who finds a character wandering the islands. The semi-non-real-time nature of
Forska means I have a little more freedom in adding sophistication to the internal systems of virtual
characters, or adding a larger quantity without worrying as much about computer performance or time-
intensive work like character animation.
I had always faced a huge hurdle in developing and testing artificial intelligence, as I felt I needed an
adequately interesting environment for a character to inhabit. The modular nature of Forska allows me
to start with a fairly involved virtual space that I can work to make simpler or more complex, but
regardless can act as a stage to try different approaches to simulating such characters.
Introspection
Of course, further study has made me examine both the ways the technological platforms
involved shape how I build this virtual toolbox and my own internal assumptions, trying to
articulate unconscious decisions as conscious modes of thought.
Appropriating a model, particularly scientific models in the case of natural landscapes, leads one to
look at the scope in which said model works, and how important fidelity might be with regard to
technological constraints, to say nothing of the design goals of any particular project.
Appropriating aesthetics comes with its own set of questions and concerns. In the case of
Forska, the stylized effects are meant to show the possibilities inherent within the graphical paradigms of
modern computers. Graphical architecture and hardware encourage a particular kind of modeling,
which often means that digital media can have a tendency to look monolithic and homogenous, but
careful study of the process allows an incredibly expanded view of how a machine might depict novel
visuals.
I mean to continue adding to this menagerie of techniques, as well as methods to explore and
view them. I’ve done a few smaller works using components from Forska, each with a mixture of
handcrafted and procedural components, and I intend to do more. I do not mean to add things like
narrative, puzzles, encounters, or real time graphics to Forska itself, but I certainly plan to develop them
for a variety of the project’s offspring! Forska is not really one work, and committing to a “definitive”
version of Forska would undermine its larger purpose to act as the foundation of ideas and numerous
potential works.
New Communities, New Focus
“And there you are—an infinitely original author of charming sensibility, even though
unappreciated by the vulgar herd.”
~Tristan Tzara, Dada Manifesto on Feeble Love and Bitter Love
This line summarizes and ends Tzara’s Dadaist “algorithm,” and seems appropriate to
initiate concluding statements. Algorithms are not required to have a concluding description or
further context added outside their inner workings, but Tzara courteously offers one here.
Tzara’s sarcastic yet paradoxically valid statement brings in many of the themes involved
in PCG, and simultaneously their shortcomings. In creating a poetic collage, reminiscent of a
clichéd ransom letter, the Dadaist activity does deliver what it promises, but also highlights what
might be necessary to fulfill its own potential. In coding an application to write a poem, the
difference between taking words from a computer program and taking words from a newspaper
is only a matter of degree. The “infinitely charming sensibility” comes from a cultural
construction, extremely visible but very easily ignored. Tzara’s poetic algorithm questions
where our creative muses get their ideas, before rather crassly and definitively answering “the
daily newspaper.” The old proverb “whoever discovered water was not a fish” shows another
angle to the concept—in finding and analyzing systems, we can see what assumptions even the
simplest systems depend on, the forces and institutions we take for granted yet contribute
inescapably to such internal, abstract activities as creative process.
I immersed myself in emergent algorithms and procedural content to find new places to
explore. If I’m truly honest with myself, that’s the primary reason. In a way, my uses of
procedural content generation were not methods of creation but methods of exploration, ways to
find things mechanically that I could not have found otherwise. As knowledge of decades-old,
archaic programs is made new again by people who might never have realized such programs
were a pathway to expression, I find incredible worlds and stories produced by these newly
enabled creators.
What would automating creativity mean? In pursuing the question, the fact that any
creative output is interpretive becomes manifest in a way impossible to ignore—context, both
historical and cultural, requires examination all the more because the idea of authorship comes
into question. A machine that generates novels has to explicitly define what a novel is, and even
when it doesn’t, a judge must do so in order to determine whether a masterpiece was written.
Who is to judge?
Different voices are using the dynamics of PCG to express their own views, needs, and
culture. As the abstract of an article published in 2016 states:
Games now inhabit a space where creativity is no longer centered around human
authorship. The use of procedural content generation has been embraced by industry,
academics and fans as a means for reducing labor cost, providing additional replayable
content for players, investigating computational creativity in a complex and multifaceted
domain and enabling new kinds of playable experiences. This incorporation of
computational creative labor confuses authorship, labor politics and responsibility for
rhetoric embedded in the procedures by complicating the way in which the computer is
portrayed to users, researchers and other developers. We can apply feminist
methodologies attentive to questions of difference and power in systemic structures in
order to better understand each of these questions in turn.[112]
The demoscene has a reputation of being populated almost exclusively by male teenagers,[113]
typically from a privileged enough social standing to afford access to a high-end computer, and a
retrospective on the Oulipo group’s work in recent years examined the male-dominated
[112] Amanda Phillips et al., “Feminism and Procedural Content Generation: Toward a Collaborative Politics of Computational Creativity,” Digital Creativity 27, no. 1 (January 2, 2016): 82–97.
[113] George Borzyskowsky, “The Hacker Demo Scene and It's Cultural Artifacts,” Scheib.net. http://www.scheib.net/play/demos/what/borzyskowski/ (Accessed October 17, 2000).
membership and the prejudiced view of Hervé Le Tellier in particular.[114]
If PCG comes from an
exploration of potential, limiting diversity stifles such potential, and ignores incredible
possibilities.
The time periods I focused on were chosen for accessibility as well as out of a necessity
to narrow focus. Increased interest in PCG in recent years has made the field far more
interesting, but far less capable of being done justice in a single volume. While there have
been many technological developments in recent years, and while I’d also argue that artists have
yet to really take advantage of the decades-old techniques, technology cannot be the whole picture.
More than any technical advance, I’d say the growth of interest in and access to procedural
techniques has been the most exciting development in recent years.
Kate Compton’s “Tracery” is a software library designed to assemble strings of text
from a grammar of symbols and expansion rules. While the easiest way to imagine this library
being used is the creation of, say, a poem or novel, the idea of putting textual symbols together
is the core of computing, and Tracery has been used to generate HTML, Twitter bots, SVG vector
graphics, and source code for other programs. The library in and of itself is not a new idea, but
the work carries a philosophy of inviting people who might not otherwise be familiar with PCG,
or even coding, to engage with it, to show what working at a procedural level might be like.
Compton’s “Casual Creators” blog
talks about creative tools she has worked with or helped design, such as with her work on Spore,
and how certain works invite a more accessible means to invoke creative potential:
I’ve found that one of the best ways to define a genre is by what emotional, psychological
and functional needs it’s providing to the player (not a new idea, certainly). But it’s
particularly useful when looking at these casual creator systems, whether a Rainbow
Loom or the Spore Creature Creator, because it walks so easily around the tangential
[114] Lauren Elkin and Scott Esposito, The End of Oulipo?: An Attempt to Exhaust a Movement (Zero Books, 2013).
issues of age and audience and gender that get bound up with traditional marketing
genres, which construct commercially-useful groupings of target consumers, rather than
intellectually-productive groupings of software.
Casual creator
an authored system or software that:
-privileges enjoyment of the creative process above productivity
-encourages and supports a state of creative flow
-results in the user’s feeling of pride and ownership toward the produced artifact,
and sense of pride in their own creativity.
This definition, at least, is the way to look at a piece of software (or not software!)
and see if it is a casual creator, or, more commonly, see if it should be attempting
to become one, and how well it is succeeding.[115]
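Returning to Tracery: its core operation, recursively expanding grammar symbols until only literal text remains, can be sketched in a few lines. The toy grammar below and the #symbol# markers only loosely imitate Tracery’s actual JSON format:

```python
import random
import re

# A toy grammar in the spirit of Tracery: #symbol# markers expand recursively.
grammar = {
    "origin": ["The #adj# #noun# #verb#."],
    "adj": ["ancient", "glittering", "procedural"],
    "noun": ["island", "archive", "city"],
    "verb": ["sleeps", "unfolds", "remembers"],
}

def expand(symbol: str, grammar: dict) -> str:
    """Pick a random rule for the symbol, then expand any #nested#
    symbols that rule contains, until only plain text remains."""
    rule = random.choice(grammar[symbol])
    return re.sub(r"#(\w+)#", lambda m: expand(m.group(1), grammar), rule)

print(expand("origin", grammar))
```

Because the output is just a string, the same expansion can emit a sentence, an HTML fragment, or source code, which is precisely why the technique travels so widely.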
In recent years, instances of PCG have come together to form something of a
movement in itself. Conferences, workshops, and the focus of specific programs like the
Expressive Processing group at UC Santa Cruz have all manifested within the last few years.
Interest waxes and wanes, but it seems to have grown steadily ever since Will Wright’s
Spore, and attention has dovetailed with the hacker/maker movements.
Since 2014 Michael Cook has hosted ProcJam, a “game jam” where entrants have nine days
(a week plus an extra weekend) to “Make something that makes something.”[116] Speakers kick
off the jam with presentations in a day-long symposium. Most presentations discuss the
presenter’s own work, but also include surveys of works historical and contemporary. During the
inaugural jam, now an annual event, I started to put together the collection of techniques, tricks,
and accumulated knowledge that became Forska.
[115] Kate Compton, “What are Casual Creators?” CasualCreator.com. http://www.casualcreator.com/blog/?p=10 (Accessed March 4, 2014).
[116] Michael Cook, ProcJam.com. http://www.procjam.com/
Michael Cook has also written a post explaining that treating PCG as obscure “black magic”
undermines a large part of its potential:
The old language of procedural generation needs to be done away with, and in its stead
we need a new way of communicating about what we do, and why it’s interesting. We
need to debunk the idea of procedural generation as a dark art, and show people that it is
accessible, understandable and interesting. It might feel scary at first, it might feel like
we’re making our work vulnerable and pointing out all the cracks, but people won’t
mind. They love the cracks. They love the stupid stuff generators do. They don’t expect
all the answers right now, I don’t think, they just expect us to be honest and clear with
them.[117]
PCG involves active, animated processes. These invite hacking, tinkering, and play. Even
without complete knowledge of a given algorithm, PCG might still be used, and in its use
understanding gained. The critical analysis of these techniques has expanded beyond “how” to
look at the “why” and the “what”: not only why such techniques should be used, but what
assumptions we make in building a machine that generates language, a city, or a universe. What
do we leave out; what escapes our own internal models that we’ve built for ourselves? I learned
much about the craft of acting when I took a course in motion capture, and I found an
understanding of morality that I didn’t previously have when I actively contemplated artificial
intelligence for another class. Procedurality brings an artist or designer face-to-face with the
perils of generalization, asking what might be left out and what might be left in, and what is
possible with a finite algorithm in a larger cultural setting.
[117] Michael Cook, “Alien Languages: How We Talk About Procedural Generation,” Games by Angelina. http://www.gamesbyangelina.org/2016/08/procedurallanguage/ (Accessed August 18, 2016).
Conclusions
The axioms I listed in the introduction are not truths or panaceas; they are conjectures,
promises, even myths. Taken at their best they can inspire, and at their worst they can lead to
dead ends, false hopes, and vacuous results. Gillian Smith, in her own research on the evolution
of procedural content generation, identifies a “stagnation of motivation” with regard to procedural
content:
Early motivation and purpose for using PCG in analog games is almost perfectly
mirrored in the commonly-cited contemporary motivations. Technical approaches to
rapid creation of complex content that obeys design guarantees have drastically improved
since the days of random lookup tables…but ultimately PCG research is attempting to
solve the same set of problems as the early systems themselves were: providing
meaningfully replayable experiences, reducing an authoring burden on players and/or
designers, and providing content that adapts to the player’s current skills.[118]
PCG limits the artist if it is simply seen as a way to solve a problem. Smith concludes:
“Viewing [PCG] as a designed system that can be executed by either human or computer, has
helped identify new directions for digital PCG research in terms of finding new motivations.”[119]
Reviewing the axioms introduced at the beginning, with the context of at least some
history and dissection of selected works, we might look at their meaning and viability in more
detail:
Procedural content generation can produce an indefinite amount of novel media, in
that perceived content is just an instance of an effectively infinite body of potential works.
PCG can produce an incredible amount of output, but this quality might be further judged
by the output’s saliency. Avoiding the “10,000 Bowls of Oatmeal” problem comes from careful
design and scope, and often a level of abstraction and expectation of results. Often, augmenting
some other aspect of a work, like player agency, can counter other limitations, such as visual
similarity. Rogue and Dwarf Fortress rely on textual depictions, combined with a large amount
of player agency, to make sure each experience gives the player a different set of decisions to
make. Having multiple systems interacting with each other can further multiply the potential
output space—Forska’s geology, climate, and island shapes are all independent of each other,
leading to a much wider variety of landscapes. PCG is a craft of potential, and finding ways of
increasing this potential is a core technique to learn. Having access to infinite instances of
anything demands examination of what makes any instance unique or notable.
[118] Gillian Smith, “An Analog History of Procedural Content Generation” (paper presented at Foundations of Digital Games, Monterey, CA, 2015).
[119] Ibid.
Procedural content generation can generalize and quantify systems, entities, and
other elements in a manner that allows ease of alteration, expansion, digitization, and
quantification for the purpose of interaction.
Turning the universe into digital numbers would be a crass and limited way to classify
PCG. When procedural systems combine, their potential does not add, but rather multiplies.
Perlin Noise, as I’ve stated before, is not a visual system but the mathematical manifestation of a
design idea. Dwarf Fortress’s levels of simulation, from air temperature to emotional states,
give an animism and coherency to a finely detailed yet expansive world. Cellular automata
invite patterns of incredible complexity on their own, but can further give life to virtual
environments, as in 5 Processes. A system might exist in a vacuum, but the interaction between
various systems and their consequences drives the discovery of PCG’s potential. The fact
that systems need compatibility on some level means that PCG can offer coherency along with
quantity and novelty.
Procedural content generation can dynamically generate an amount of data that
would not be feasible to store or create through other human-authored means, offering a
profound means of compression.
Compression might reduce PCG to a simple technical shortcut, but in the examples above the
results seem to enable as much as they compress. The jungles of Pitfall! and
the galaxies of Elite, Noctis, and their kindred rely on compression not only to overcome
constraints but to highlight possibility. The Atari 2600 was designed to play Pong on a single
screen, but Pitfall! used that same platform to create a virtual space to explore consisting of
hundreds of screens. The galactic exploration genre really can only exist with PCG, because it
offers a way to contemplate and interact on a cosmic scale. Even Queneau’s modular sonnets
show how language can multiply meaning, and how quickly the possibilities of the written word
expand even in short, small stanzas. Compression moves beyond a technical necessity and
enables incredible potential; a difference of degree becomes a difference in kind.
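The compression claim can be made concrete: in the spirit of Elite’s seeded galaxies, a whole catalog of worlds can be re-derived on demand from a single stored number. The attribute lists below are illustrative placeholders of my own invention, not Elite’s actual tables:

```python
import random

def planet(galaxy_seed: int, index: int) -> dict:
    """Derive one planet deterministically from a seed and an index:
    the stored 'data' is nothing but a number plus this algorithm."""
    rng = random.Random(galaxy_seed * 100003 + index)  # same seed -> same planet
    name_length = rng.randint(4, 8)
    # Alternate consonants and vowels for a pronounceable placeholder name.
    name = "".join(rng.choice("aeiou" if i % 2 else "ptkslrmn")
                   for i in range(name_length))
    return {
        "name": name.capitalize(),
        "economy": rng.choice(["agricultural", "industrial", "mining"]),
        "population": rng.randint(1, 9000),
    }

# A whole "galaxy" of worlds, reproducible from one integer:
galaxy = [planet(1982, i) for i in range(256)]
```

Nothing here is stored except the seed; the galaxy is identical on every run, which is the difference of degree becoming a difference in kind.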
Procedural content generation suggests a mastery of codification, a demonstration of
ability used to produce a work in a manner both enhanced and handicapped by structural
concerns.
Coding has traditionally had an air of mastery and secrecy, particularly in the 1980s and
‘90s when even depicting basic 3D geometry was a feat in and of itself. The animations created
by the demoscene were meant to impress, the term “demonstration” betraying their primary goal
as crafted works. The acrobatics need not be merely self-serving, however. Translation, either
in language or between computing platforms, is not only difficult but also illuminating. Working
to overcome limitations and constraints can increase creativity, much as exercise can improve bodily
health. Finding and knowing the constraints allows for a more informed method of development.
Finally, mastery can be an encouraging factor in the creative process. I can certainly vouch for
the pleasure of seeing a program work and building something beautiful and engaging, only to
make adjustments and continue further. To be surprised and surpassed by my own work would
be the paradoxical proof of my mastery.
Procedural content generation carries, within its implementation, a suspicion of the
limits of conscious creative effort, either in scope, power, or endurance.
The more recent case studies and examples of Procedural Content Generation reviewed in
this work promise, and even in some ways deliver, the mortal desires of power, economy, and
novelty. The works themselves rarely seem to offer introspection, at least in the ways many pre-
digital works aspired to or hoped for. The major growth in thought was in technique rather than
themes and concepts. Noctis might be an exception, eschewing prescribed ludic purpose for
existential spectacle. While networked communities offer a potential avenue of discourse, this is
no guarantee. The demoscene could not have existed without bulletin board systems and later
the Internet, but the demonstration of mastery continued to be the primary theme.
Procedural content generation, that is, the design and implementation of dynamics,
the interplay between systems, becomes its own art form.
PCG is the craft of ideas, of dynamics. Perlin Noise is not a graphical entity—it came
from a desire to introduce detail where none exists. This idea of detail (or noise) can apply to
image, sound, and animation, and Perlin’s original algorithm has become one of many sharing
the goal of controlled randomness, applied to media visual, auditory, cinematic, and interactive.
The themes of Conway’s Game of Life come from a desire to produce as much complexity from
as simple a set of instructions as possible—the consequences manifest in a variety of media, or
as components of larger constructions.
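The instruction set really is that small. A sketch of one Life generation on an unbounded grid (the set-based representation is my own choice, not Conway’s original formulation):

```python
from collections import Counter

def life_step(live: set) -> set:
    """One generation of Conway's Game of Life: the entire rule set is
    the two conditions in the final line."""
    # Count live neighbors of every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" oscillates with period two:
blinker = {(0, 0), (1, 0), (2, 0)}
```

Everything else, gliders, guns, even universal computation, emerges from iterating this single function.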
My focus on games, as opposed to visual arts, music, or the other areas to which PCG can apply,
comes from the active nature of games. If Procedural Content Generation relies on the
interaction between a system or systems to create something, games encourage interaction with
these systems while in motion. A long tradition of hacking and dissection encourages closer
inspection of these systems in a way that procedural literature or music might not. Games tend
to carry these systems with them, instead of simply being products of them, making the study of
the systems less obscure.
PCG offers a method of expression with incredible potential, and I find that this avenue
for expression includes its own sources of inspiration, discovery, and growth. My most
successful projects have been the ones where I have learned the most, or found places I would
not have otherwise traveled to. Reviewing the promises of Procedural Content Generation, and
looking at its history, cultures, and techniques, has increased my understanding, and I hope in
showing my own work I have demonstrated and communicated at least a little of that
understanding to others. After traversing the landscape of chaos, agency, procedure, and decades
of history in focus, I feel like I’ve just started to play. More excitement comes from letting
others know they can play, too.
Bibliography
“3D Printer With 16 Colors - Minecraft Invention” Youtube.com.
https://www.youtube.com/watch?v=NosYiyNXhzQ (Accessed December 16, 2013).
“About Bad News”. Indiecade. http://www.indiecade.com/games/selected/bad-news
Alexander, Christopher. The Timeless Way of Building. (New York: Oxford University Press,
1979).
Anynowhere message board. http://anynowhere.com/bb/posts.php?t=4887 (Accessed February
22, 2014).
Barton, Matt. “Interview with Daniel M. Lawrence, CRPG Pioneer and Author of Telengard”.
Armchair Arcade. http://www.armchairarcade.com/neo/node/1366 (Accessed June 22,
2007).
Bays, Carter. “Candidates for the Game of Life in Three Dimensions.” Complex Systems 1, no. 3
(1987).
Bellour, Raymond and Bill Viola, “An Interview with Bill Viola” in October, vol. 34, Autumn,
1985.
Bishop, Claire. Artificial Hells: Participatory Art and the Politics of Spectatorship. (Verso,
2012).
Borges, Jorge Luis and Eliot Weinberger, Selected Non-Fictions (New York: Penguin Books,
2000).
Bohman-Kalaja, Kimberly. Reading Games: An Aesthetics of Play in Flann O’Brien, Samuel
Beckett & Georges Perec. (Champaign, IL: Dalkey Archive Press, 2007).
Borzyskowsky, George. “The Hacker Demo Scene and It's Cultural Artifacts.” Scheib.net.
http://www.scheib.net/play/demos/what/borzyskowski/ (Accessed October 17, 2000).
Bush, Gregory. “Telengard in Elm: The Dungeon”. Telengard in Elm. http://elm-
telengard.blogspot.com/2014/03/the-dungeon.html (Accessed March 24, 2014).
Cage, John. Silence: Lectures and Writings. (London: Marion Boyars, 1995).
Calvino, Italo. Invisible Cities. (Mariner Books, 2013).
Carroll, Noel. Theorizing the Moving Image. (Cambridge University Press, 1996).
Cepero, Miguel. “Procedural Information.” Procedural World.
http://procworld.blogspot.com/2014/03/procedural-information.html (Accessed March 8,
2014).
Compton, Kate. “So You Want to Build a Generator”.
http://galaxykate0.tumblr.com/post/139774965871/so-you-want-to-build-a-generator
(Accessed February 22, 2016).
Compton, Kate. “What are Casual Creators?” CasualCreator.com.
http://www.casualcreator.com/blog/?p=10 (Accessed March 4, 2014).
Cook, Michael. “Alien Languages: How We Talk About Procedural Generation”. Games by
Angelina. http://www.gamesbyangelina.org/2016/08/procedurallanguage/ (Accessed
August 18, 2016).
Cook, Michael. ProcJam.com. http://www.procjam.com/
Craddock, David L. Dungeon Hacks: How NetHack, Angband, and Other Roguelikes Changed
the Course of Video Games. Edited by Andrew Magrath. (Press Start Press, 2015).
Crane, David. “Classic game Postmortem: Pitfall!”. GDC Vault.
http://www.gdcvault.com/play/1014632/Classic-Game-Postmortem-PITFALL (Accessed
2011).
Cumbrowski, Carsten. “The History of the Demoscene.” Roy/SAC.
http://www.roysac.com/blog/2008/01/the-history-of-the-demoscene/ (Accessed January
23, 2008).
Delahunty, James. “Piracy group resurrects after being raided.” Afterdawn News.
http://www.afterdawn.com/news/article.cfm/2004/09/03/piracy_group_resurrects_after_b
eing_raided (Accessed September 3, 2004).
Duncan, Grant. “No Man’s Sky: How I Learned to Love Procedural Art”. GDC Vault.
http://www.gdcvault.com/play/1021805/Art-Direction-Bootcamp-How-I (Accessed
2015).
Elkin, Lauren and Scott Esposito. The End of Oulipo?: An Attempt to Exhaust a Movement.
(Zero Books, 2013).
“Federation Against Ikari” The C-64 Scene Database.
http://noname.c64.org/csdb/group/?id=2431&show=current
Flanagan, Mary. Critical Play: Radical Game Design. (Cambridge, Mass.: The MIT Press,
2009).
friol. “demoreview#6: zooooooming in............” The Tunnel.
http://datunnel.blogspot.com/2007/10/demoreview-6-scene-zooms.html (Accessed April
10, 2007).
Gardner, Martin. “The Fantastic Combinations of John Conway’s New Solitaire Game ‘Life.’”
Scientific American, no. 223 (October 1970).
Goldberg, Daniel and Linus Larsson. “The Amazingly Unlikely Story of How Minecraft Was
Born”. Wired.com. https://www.wired.com/2013/11/minecraft-book/ (Accessed
November 5, 2013).
Hecker, Chris, Bernd Raabe, Ryan W. Enslow, John DeWeese, Jordan Maynard, and Kees van
Prooijen. "Real-time motion retargeting to highly varied user-created morphologies."
ACM Transactions on Graphics (TOG) 27, no. 3 (2008): 27.
Heikkilä, Ville-Matias. “Putting the demoscene in a Context.” Countercomplex.
http://www.pelulamu.net/countercomplex/putting-the-demoscene-in-a-context/
(Accessed July 11, 2009).
Knight, Gareth. "Boing Back – The origins of the Boing Ball”. Amiga History Guide.
http://www.amigahistory.co.uk/boingball.html
Kosak, Dave. “Will Wright Presents Spore...and a New Way to Think About Games.”
Gamespy.com. http://www.gamespy.com/articles/595/595975p1.html (Accessed March
14, 2005).
Kuittinen, Petri. “Computer Demos – The Story So Far.” Media Lab Helsinki.
http://mlab.uiah.fi/~eye/demos/ (Accessed April 28, 2001).
Lagae, A., S. Lefebvre, R. Cook, T. DeRose, G. Drettakis, D. S. Ebert, J. P. Lewis, K. Perlin, and
M. Zwicker. “State of the Art in Procedural Noise Functions.” In Eurographics 2010 -
State of the Art Reports. Norrköping, Sweden: The Eurographics Association, 2010.
Langton, Christopher. Artificial Life Workshop, Artificial Life II: Proceedings of the Workshop
on Artificial Life: Held February 1990 in Santa Fe, New Mexico. Santa Fe Institute
Studies in the Sciences of Complexity Proceedings. Proceedings Volume 10 (Redwood
City, Calif: Addison-Wesley, 1992).
Maudlin, Michael L. et al., “ROG-O-MATIC: A Belligerent Expert System” (Conference of the
Canadian Society for Computational Studies of Intelligence, London, Ontario, 1984).
McKendrick, Hazel. “Procedural Doesn’t Mean Random: Generating Interesting Content”,
ProcJam 2014,
https://www.youtube.com/playlist?list=PLxGbBc3OuMgiHnQfqerWtgJRA3fMzasra
(Accessed 2014).
Miller, John H. and Scott E. Page. Complex Adaptive Systems: An Introduction to Computational
Models of Social Life. (Princeton, N.J: Princeton University Press, 2007).
Minecraft Forum. http://www.minecraftforum.net/forums/mapping-and-modding/minecraft-
mods/1281979-1-2-5-wedge-the-worldgen-editor-create-worlds (Accessed March 4,
2012).
“minecraft no command block 3D printer [Tutorial]”. Youtube.com.
https://www.youtube.com/watch?v=t-xMnBsKG8k (Accessed January 11, 2016).
Montfort, Nick. Racing the Beam: The Atari Video Computer System. Platform Studies.
(Cambridge, Mass: MIT Press, 2009).
Motte, Warren F. ed., Oulipo: A Primer of Potential Literature, 1st Dalkey Archive ed, French
Literature Series (Normal, Ill: Dalkey Archive Press, 1998).
Murray, Janet H. Hamlet on the Holodeck: The Future of Narrative in Cyberspace. (Free Press,
1997).
Murray, Sean. “Exploring the 18,446,744,073,709,551,616 planets of No Man’s Sky”.
Playstation.blog. https://blog.eu.playstation.com/2014/08/26/exploring-
18446744073709551616-planets-mans-sky/ (Accessed August 26, 2014).
Ohlerich, Dierk. Homepage. http://www.xyzw.de/
Pall Bearer, Toxic Zombie. “The Internet Demo Scene”. Imphobia #6. August 1993.
Perlin, Ken. “Making Noise”.
https://web.archive.org/web/20160303232627/http://www.noisemachine.com/talk1/
(Accessed March 3, 2016).
Perlin, Ken. “Noise and Turbulence”. NYU Media Research Lab.
http://mrl.nyu.edu/~perlin/doc/oscar.html
Phillips, Amanda et al., “Feminism and Procedural Content Generation: Toward a Collaborative
Politics of Computational Creativity,” Digital Creativity 27, no. 1 (January 2, 2016).
Polgar, Tamas. “The Full History of the Demoscene”. Youtube.com
https://www.youtube.com/watch?v=8iDr-8odlqo (Accessed February 23, 2014).
Queneau, Raymond. Exercises in Style, trans. Barbara Wright, 2nd edition (New York: New
Directions, 1981).
“Random Number Generator”. The Elite Wiki.
http://wiki.alioth.net/index.php/Random_number_generator (Accessed June 6, 2016).
Ramsay, Stephen. Reading Machines: Toward an Algorithmic Criticism. (Urbana: University of
Illinois Press, 2011).
Reas, Casey, Chandler McWilliams, and Jeroen Barendse. Form+code in Design, Art, and
Architecture. (New York: Princeton Architectural Press, 2010).
Reunanen, Markku and Antti Silvast, “Demoscene Platforms: A Case Study on the Adoption of
Home Computers,” in History of Nordic Computing 2, ed. John Impagliazzo, Timo Järvi,
and Petri Paju, vol. 303 (Berlin, Heidelberg: Springer Berlin Heidelberg, 2009), 289–301.
Robbe-Grillet, Alain. Generative Literature and Generative Art: New Essays. Fredericton, N.B.,
(Canada: York Press, 1983).
Robinson, Martin. “The art of No Man’s Sky”. Eurogamer.net.
http://www.eurogamer.net/articles/2015-03-20-the-art-of-no-mans-sky (Accessed March
20, 2015).
Ryan, James, Ben Samuel and Adam Summerville. Bad News Homepage.
https://www.badnewsgame.com/overview/ (Accessed April 30, 2017).
Scott, Jason. “Apple II Pirate Lore.” Archive.org. http://www.archive.org/details/Apple-II-
Pirate-Lore (Accessed March 29, 2003).
Sicart, Miguel. The Ethics of Computer Games. 1st edition. (The MIT Press, 2009).
Silverman, Kenneth. Begin Again: A Biography of John Cage. (Knopf Doubleday Publishing
Group, Kindle Edition, 2010).
Smith, Gillian. “An Analog History of Procedural Content Generation.” Paper presented at
Foundations of Digital Games, Monterey, CA, 2015.
Smith, Gillian. “History of Procedural Content Generation”.
http://sokath.com/main/blog/2015/05/23/history-of-procedural-content-generation/
(Accessed June 2015).
Tekinbas, Katie Salen and Eric Zimmerman. Rules of Play: Game Design Fundamentals.
(Cambridge, Mass.: MIT Press, 2003).
Telengard Fan Site. “Misc. Docs, Reviews, and Ads”.
http://www.angelfire.com/ny5/telengard/doc2.htm
“The Telengard Tavern Generator”. The Digital Eel. http://www.digital-eel.com/ttg.htm
Thompson, John. “Top 10 Demos That Will Blow You Away”. PCPlus.
http://pcplus.techradar.com/feature/top-10-demos-11-08-10 (Accessed August 11, 2010).
Thompson, Tim. “Tune Toys,” http://www.nosuch.com/tjt/tunetoys.html
Trenholme, Sam. “A 4096-byte Jungle”. Sam Trenholme’s Webpage.
http://samiam.org/blog/20130606.html
Various. “Dwarf Fortress – Boatmurdered.” Let’s Play Archive. https://lparchive.org/Dwarf-Fortress-Boatmurdered/ (Accessed April 14, 2007).
Volko, Claus Dieter. "The Demoscene – A Short Introduction”. Hugi.
http://www.hugi.scene.org/adok/articles/demoscn.htm (Accessed 2004).
Volko, Claus Dieter. “Making of fr-08: .the.product.” Hugi.
http://www.hugi.scene.org/adok/articles/adfr08.htm (Accessed 2001).
Volko, Claus Dieter. “Presenting the Scene to the Public: Good or Bad?” Hugi.
http://www.hugi.scene.org/adok/articles/present.htm (Accessed 2003).
Wardrip-Fruin, Noah. Expressive Processing: Digital Fictions, Computer Games, and Software
Studies. (Cambridge, Mass.; London: MIT Press, 2012).
WikiWikiWeb. “Hello World in Many Programming Languages”.
http://wiki.c2.com/?HelloWorldInManyProgrammingLanguages (Accessed November
12, 2014).
Youngblood, Gene. Expanded Cinema. (New York: E P Dutton, 1970).