MIMI: A Musical Improvisation System That Provides Visual Feedback to the Performer
Alexandre R.J. François*, Elaine Chew† and Dennis Thurmond‡
University of Southern California
{afrancoi,echew,thurmond}@usc.edu
January 2007 - posted April 2007
Abstract
This report describes the design and realization of Mimi, a multi-modal interactive musical improvisation system that explores the potential and powerful impact of visual feedback in performer-machine interaction. Mimi is a performer-centric tool designed for use in performance and teaching. Its key and novel component is its visual interface, designed to provide the performer with instantaneous and continuous information on the state of the system. For human improvisation, in which context and planning are paramount, the relevant state of the system extends to the near future and recent past. The Mimi system, designed and implemented using the SAI framework, successfully integrates symbolic computations and real-time synchronization in a multi-modal interactive setting. Mimi’s visual interface allows for a peculiar blend of raw reflex typically associated with improvisation, and preparation and timing more closely affiliated with score-based reading. Mimi is not only an effective improvisation partner, it has also proven itself to be an invaluable platform through which to interrogate the mental models necessary for successful improvisation.
Keywords: Performer-machine interaction, visualization design, machine improvisation
Figure 1: Aural and visual performer-machine interaction: Dennis Thurmond and Mimi.
*Computer Science Department, Viterbi School of Engineering
†Epstein Department of Industrial and Systems Engineering, Viterbi School of Engineering
‡Thornton School of Music
1 Introduction
This report describes the design and realization of Mimi, shown in Figure 1, a multi-modal interactive musical improvisation system that explores the role of visual feedback in performer-machine interaction. The project results from the playful interactions of a computer scientist/amateur pianist (François), an engineer/concert pianist (Chew), and a classical/improvising keyboard performer/pedagogue (Thurmond).
Mimi is, first and foremost, a performer-centric tool designed for use in performance and teaching. Interactions with the system occur in two distinct phases: preparation and performance. Interaction in the two phases occurs over identical modalities, but under different external constraints. In the preparation phase, a musician lays down the material that will be used by the factor oracle-based system for generating improvisations. This material from the human performer/improviser may result from a spontaneous creation, very similar to aspects of the improvisation process during human-machine interaction, or it may result from a carefully planned process, more akin to composition, which has sometimes been likened to slow improvisation.
The key and novel component of the Mimi system is its visual interface, shown in Figure 2. The screen
image is divided into two halves. The upper half, performance visualization, displays a scrolling piano roll
notation, with a central bar demarcating the present, while to its right are the notes to come, and to its
left are the notes recently sounded. The machine-generated improvisation is colored blue, while the human
improviser’s notes are colored red. This upper display acts as a map that allows the performer to plan ahead,
and to re-visit recent past material. The lower half, oracle visualization, shows the preparatory material
from which the factor oracle is derived, with a red cursor that shows the oracle improviser’s present position
in the preparatory material. This lower display presents a visual aid to the human improviser, helping the
improviser keep a mental note of the structural content and relative location (within the initial material) of
the current machine improvisation. When projected on a large screen in a performance, these visuals may
also assist the audience in perceiving higher level structures and the subtleties of improvisation strategies.
As the initial material is created, Mimi’s visual interface shows the incoming notes, in piano roll repre-
sentation, scrolling from right to left in the upper screen, and collecting cumulatively from left to right in
the lower screen. In the performance phase, the system generates new musical material from the prepared
material (visible on the lower screen), with which the performer improvises. In addition to providing auditory
feedback, the upper half of the visual display shows the interactions between the machine’s improvisations
and the human performer’s improvisations, and the musical material to come, and recently passed, in real
time. The lower half of the visual display documents the current state of the improvisation engine, with
respect to the preparatory material.
From a technical point of view, the system requirements for Mimi span traditionally disjoint sets that are
difficult to reconcile. First, Mimi’s use in live performance demands real-time, interactive and synchronous
synthesis and display of multi-modal output. Second, the symbolic data structure (factor oracle) and algo-
rithms that constitute the core of the improvisation engine operate outside of time, and must be seamlessly
integrated with the real-time performance aspects of the system, without compromising their power and
flexibility, or the quality of the real-time experience. The meeting of these requirements is facilitated by the
use of François’ Software Architecture for Immersipresence (SAI) framework.
This report presents Mimi’s system design and implementation, and findings from initial case studies on
the use and effectiveness of visual feedback in improvisation planning and design. The remainder of this
report is organized as follows. Section 2 places Mimi’s approach in context with past and ongoing work in
interactive machine improvisation. Section 3 describes the Mimi prototype, and outlines the design and inner workings of its main components. Section 4 offers a discussion of the system, and of the principles underlying
the design of the visual interface. Finally, Section 5 offers concluding remarks, and outlines research and
development directions for ongoing and future work.
2 Machine Improvisation
This section relates Mimi to representative past and ongoing work in interactive machine improvisation. The
designs of these systems rely almost exclusively on real-time auditory feedback for the performer(s) to assess
the state of the system, and therefore its possible evolution in the near future.
George Lewis has been creating, and performing with, algorithmic improvisation systems since the 1980s. One example is Voyager, which Lewis describes as a “nonhierarchical, interactive musical environment that privileges improvisation” [10]. In Lewis’ approach, the preparation process consists of the creation of a computer program. During the performance, the program automatically generates responses to the musician’s playing, as well as new material. The performer(s) listens and reacts to the system’s musical output,
Figure 2: Mimi’s visual interface.
closing the exclusively aural feedback loop. The musician has no direct control over the program during
performance. In the scenario developed in the present study, Mimi generates music based entirely on the
preparatory material, and is unaffected by the new human and machine improvisations during performance,
so as to keep the source material manageable for both the improvisation system and the performer (see
discussion in Section 4).
Mimi’s improvisation engine is inspired by that of the OMax system [3, 7, 2]. In OMax, Assayag, Dubnov
et al. introduce the factor oracle approach in the context of machine improvisation (see Section 3.2 for a
brief introduction). In OMax, off-line versions of the learning and generation algorithms are implemented in
OpenMusic [15, 4, 5]. OpenMusic’s functional programming approach does not allow for efficient handling
of real-time (or on-line) events. Consequently, OMax relies on Max/MSP [11] to handle on-line aspects
such as real-time control, MIDI and audio acquisition, and rendering. OpenMusic and Max/MSP adopt
similar visual metaphors (patches), but with different, and incompatible, semantics: as observed by Puckette
in [18], Max patches contain dynamic process information, while OpenMusic patches contain static data.
Therefore, communication between, and coordination of, the two subsystems in OMax requires the use of an
interaction protocol, OpenSound Control [16]. Mimi is designed and implemented in a single formalism (SAI),
which results in a simpler and more scalable system. For example, the integration of complex visualization
functionalities occurred in a natural and seamless way, without compromising system interactivity.
In OMax, the improvising musician interacts with the system based on purely aural feedback, while
another human operator controls the machine improvisation (the improvisation region in the oracle, instru-
mentation, etc.) through a visual interface, during performance. Mimi explores a different approach, with
interaction specifically designed for, and under the sole control of, the improviser. Visual feedback of future
and past musical material, and of high level structural information, provides timely cues for planning the
improvisation during performance. The oracle on which the improvisation is based can be spontaneously
generated, or pre-meditated, and carefully planned.
Factor oracle-based improvisation relies on a stochastic process in which the musical material generated
is a recombination of musical material previously learned. Many improvisation systems make use of various
probabilistic models to learn parameters from large bodies of musical material, then generate new material
with similar statistical properties (akin to style). Examples of machine improvisation systems employing
other probabilistic models include Pachet’s Continuator [17], Thom’s Band-OUT-of-the-Box (BoB) [21, 20],
and Walker’s ImprovisationBuilder [24, 22, 23].
Musical interaction modalities during performance vary from system to system, and range from turn-
taking dialogue to synchronous accompaniment, but in all cases, performer-machine interaction is based
exclusively on aural feedback. Recently, the Haile/Pow humanoid-shaped robotic drummer [25] introduced
gestural interaction for musical improvisation. Mimi’s visual feedback design does not aim to emulate
human gestures, but rather, it seeks to explore different modes and mental spaces for communication in the
interactive music creation process.
3 System Design
This section describes the architecture of the Mimi prototype, and outlines the design and inner workings of
its main components.
3.1 System Architecture
The requirements for Mimi span disjoint sets that are traditionally difficult to reconcile. Live performance
demands real-time interaction, with synchronous synthesis and display of multi-modal output, while the
underlying improvisation engine, with its data structures and algorithms, typically operates outside of time.
The two must be seamlessly integrated without compromising power or flexibility, or the quality of the real-
time experience. These issues pertain to a class of fundamental problems characterized, for example, by Puckette
as a divide between processing and representation paradigms [18], and by Dannenberg as the difficulty of
combining (functional) signal processing and (imperative) event processing [6]. SAI is designed specifically
to address the limitations of traditional approaches in the creation of interactive systems. The relevance of
the framework in the context of interactive music systems has been explored and established over the past
few years through collaborations between François and Chew [9].
The use of François’ SAI architectural framework [8] afforded the efficient design and rapid implemen-
tation of Mimi. Figure 3 shows Mimi’s conceptual level architecture in SAI notation. In the SAI model,
volatile data flows down streams (depicted by thin lines) defined by connections between processing centers
called cells (notated by squares), in a message passing fashion. Repositories called sources (shown as circles)
hold persistent data to which connected cells (connections denoted by thick lines) have shared concurrent
access. The directionality of stream connection between cells expresses dependency relationships.
The boxes in Figure 3 group components into the major subsystems: (1) the improvisation engine; (2) the performance engine; (3) the visualization interface; and (4) MIDI input and output. The design is inherently
concurrent, and the modularity of the SAI style facilitates design evolution. The graph not only presents
an intuitive view of the system, but because of the visual language’s well-defined semantics, the graph also
serves as a formal architectural specification, in this case, at the conceptual level. The Mimi prototype
was implemented in C++ directly from this specification using the Modular Flow Scheduling Middleware
(MFSM) [12], an open source architectural middleware that implements SAI’s architectural abstractions.
The remainder of this section briefly describes each of Mimi’s subsystems.
3.2 Improvisation Engine
At the heart of Mimi’s improvisation engine is a factor oracle, a graph structure introduced by Allauzen et
al. [1] for use in pattern matching applications. Given a string of symbols, p, its factor oracle is a directed
acyclic graph that recognizes at least the factors (substrings) of p, has the fewest states possible, and a linear
number of transitions. Such a graph can be constructed incrementally by an on-line algorithm described by
the authors.
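For concreteness, here is a minimal sketch of that on-line construction over a generic symbol type. The names FactorOracle and addSymbol are ours, and exact symbol comparison (via std::map ordering) stands in for the dissimilarity measure over musical characters that Mimi uses (see below).

```cpp
#include <map>
#include <vector>

// Minimal sketch of the on-line factor oracle construction of Allauzen
// et al. [1]. State 0 is the initial state; appending the i-th symbol
// of the sequence adds state i.
template <typename Symbol>
class FactorOracle {
public:
    FactorOracle() : suffix_(1, -1), delta_(1) {}  // state 0, S(0) = -1

    // Append one symbol, extending the oracle by exactly one state.
    void addSymbol(const Symbol& s) {
        int i = static_cast<int>(delta_.size());   // index of the new state
        delta_.emplace_back();
        suffix_.push_back(0);
        delta_[i - 1][s] = i;                      // transition i-1 --s--> i
        int k = suffix_[i - 1];
        // Walk the suffix links, adding transitions labeled s to the new
        // state, until reaching a state that already has one (or state 0).
        while (k > -1 && delta_[k].count(s) == 0) {
            delta_[k][s] = i;
            k = suffix_[k];
        }
        suffix_[i] = (k == -1) ? 0 : delta_[k][s]; // set the new suffix link
    }

    int size() const { return static_cast<int>(delta_.size()); }
    int suffixLink(int state) const { return suffix_[state]; }

private:
    std::vector<int> suffix_;                      // suffix links S(i)
    std::vector<std::map<Symbol, int>> delta_;     // per-state transitions
};
```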
Assayag, Dubnov et al. [2, 3] noticed that, by exploiting some temporary edges (called suffix links) created by the online construction algorithm, the factor oracle can be used for musical improvisation. A factor oracle
can be created for any sequence of (possibly complex) musical events, given a dissimilarity measure. The
factor oracle always contains the original sequence. If the graph is traversed along that longest path, then
the original sequence can be regenerated. Stochastically following the suffix links leads to the generation of
a sequence that is a recombination of the original sequence without new symbol sequences.
The factor oracle is a persistent data structure, which grows during learning, and remains static oth-
erwise. Mimi’s factor oracle stores musical “characters” that are vectors encoding the energy level of each
Figure 3: Mimi’s conceptual level system architecture in SAI notation.
MIDI note during a time interval. During learning, these characters (volatile data) are continuously and
uniformly sampled; a pulsar triggers the sampling. The Add character cell implements the online oracle
construction algorithm. The dissimilarity measure currently considers only note on/off information and
assumes enharmonic equivalence. During improvisation, the same pulsar triggers the regeneration of the
musical characters, at the same rate, by the Improvise cell, which implements an online stochastic traversal
algorithm. A persistent index stores the current position in the oracle, from which the next character will
be decided. A single parameter sets the recombination rate in the generated material.
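A minimal sketch of such a traversal, building on the FactorOracle sketch above, follows. The report does not specify Mimi's exact jump policy, so the rule here (jump through a suffix link with probability equal to the recombination parameter, otherwise continue along the original sequence) is an illustrative variant rather than the actual Improvise cell.

```cpp
#include <random>
#include <vector>

// Sketch of a stochastic oracle traversal in the spirit of Section 3.2.
template <typename Symbol>
class OracleImproviser {
public:
    OracleImproviser(const FactorOracle<Symbol>& oracle,
                     std::vector<Symbol> original, double recombination)
        : oracle_(oracle), original_(std::move(original)),
          recombination_(recombination), pos_(0),
          rng_(std::random_device{}()) {}

    // Produce the next character: one state advance per pulsar trigger.
    Symbol next() {
        std::uniform_real_distribution<double> coin(0.0, 1.0);
        // With probability `recombination_`, jump through the suffix link
        // before continuing, recombining previously learned material.
        if (coin(rng_) < recombination_ && oracle_.suffixLink(pos_) > 0)
            pos_ = oracle_.suffixLink(pos_);
        if (pos_ + 1 >= oracle_.size()) pos_ = 0;  // wrap at the oracle's end
        ++pos_;                                    // advance along the sequence
        return original_[pos_ - 1];                // state i was reached by p_i
    }

private:
    const FactorOracle<Symbol>& oracle_;
    std::vector<Symbol> original_;   // the learned sequence p_1..p_m
    double recombination_;           // Mimi's single recombination parameter
    int pos_;                        // persistent index: current oracle state
    std::mt19937 rng_;
};
```

With recombination set to zero, the traversal regenerates the original sequence; raising it increases the rate at which the generated material departs from, and recombines, the source.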
3.3 Performance Engine
The performance engine implements real-time data structures for the different elements of the system to
interact in a coherent time frame. For instance, the improvisation engine consists of a data structure and
algorithms that operate outside of time. In order to establish interaction with the performer, they must be
embedded in a common time-frame, the world time (or real-time). Also, to show future musical material in
Mimi’s visualization interface, the music generated by the oracle is not sounded immediately, but delayed
by a specific amount of time; thus, the oracle’s output and the real-time events flow at the same speed but
they are offset by a time interval. Time management and synchronization are implemented in, and handled
by, the performance engine.
Circular buffers called tracks (persistent data) store a fixed number of musical characters. A delay
parameter relates each track’s time horizon to the present. The Push cell implements passage of time, with
or without inserting a specific character. By design of the performance engine, and consistent with the
definition of real-time, the tracks are always in motion, and time cannot be stopped. The Play cell, when
triggered, generates the “present” character in the track. In Mimi, there is no delay for the performer’s
track, so that the pattern of Push and Play cells results in the immediate sounding of a character when it is
pushed into the track. In contrast, characters generated by the oracle are pushed and played with a fixed
delay (set to 10 seconds).
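The following sketch illustrates one way such a track could work; the class shape and names (Track, push, play) are our illustrative assumptions, not MFSM's actual cells. With a zero delay a pushed character sounds immediately, as for the performer's track; with a delay of 10 seconds' worth of pulses, the oracle's output is previewed on screen before it sounds.

```cpp
#include <optional>
#include <vector>

// Sketch of a "track": a fixed-size circular buffer of characters
// advanced by a pulsar, with a delay relating the write position (the
// track's time horizon) to the read position (the "present").
template <typename Character>
class Track {
public:
    Track(int capacity, int delay)
        : buffer_(capacity), head_(0), delay_(delay) {}

    // One pulse: time always advances, with or without a new character.
    void push(std::optional<Character> c = std::nullopt) {
        buffer_[head_] = c;  // write at the track's time horizon
        head_ = (head_ + 1) % static_cast<int>(buffer_.size());
    }

    // The "present" character: whatever was pushed `delay_` pulses ago.
    std::optional<Character> play() const {
        int n = static_cast<int>(buffer_.size());
        int present = ((head_ - 1 - delay_) % n + n) % n;
        return buffer_[present];
    }

private:
    std::vector<std::optional<Character>> buffer_;  // circular storage
    int head_;   // next write position
    int delay_;  // pulses between push and play
};
```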
3.4 Visualization
Mimi is, first and foremost, a performer-centric tool designed for use in performance and teaching. Its primary
users are therefore musicians in the act of performing. Mimi’s visuals adhere to well-known principles
of design for usability and understandability [13], namely to provide a good conceptual model, visibility,
feedback and natural mappings. Figure 2 shows Mimi’s main defining feature: its visual interface. The
purpose of this interface is to provide the performer with instantaneous and continuous information on the
state of the system. For human improvisation, in which context and planning are paramount, the relevant
state of the system extends to the near future and recent past.
Mimi’s visualization subsystem consists of two rendering cells, each of which uses OpenGL [14] to display
a pictorial representation of the corresponding data structures (oracle and tracks) at regular time intervals.
The resulting display is divided horizontally into two halves. The upper half, the performance visual-
ization, provides a scrolling piano roll notation, showing past, present, and future notes, with a central bar
marking the present. The notes float from right to left, and are sounded when they pass the center bar. Both machine and human improvisations are shown on the same panel, distinguished by color codes: blue for
the machine, and red for the human. The future human improvisation is generally not known, and therefore
not shown on the panel. This upper visual display acts as a musical map, allowing the performer to see
the oncoming musical material and to plan ahead, as well as to visualize the recent past interactions between their own improvisations and the machine’s.
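As an illustration of this mapping, the sketch below computes where a note's rectangle falls on the upper panel at a given moment. The Note fields, the normalized coordinate system, and the horizon parameter are our assumptions; the report does not specify Mimi's rendering geometry, and the OpenGL drawing itself is omitted.

```cpp
// Sketch of the piano-roll mapping: time runs along the horizontal axis,
// the present is a central bar at x = 0.5, and a note's rectangle scrolls
// right-to-left as `now` increases.
struct Note {
    double onset;     // seconds, in the shared world time
    double duration;  // seconds
    int pitch;        // MIDI note number, 0..127
};

struct NoteRect { double x0, x1, y0, y1; };  // normalized [0,1] coordinates

// Map a note to the upper panel, showing `horizon` seconds of material on
// each side of the central "present" bar.
NoteRect toScreen(const Note& n, double now, double horizon) {
    double x0 = 0.5 + (n.onset - now) / (2.0 * horizon);
    double x1 = 0.5 + (n.onset + n.duration - now) / (2.0 * horizon);
    double y0 = n.pitch / 128.0;        // pitch on the vertical axis
    double y1 = (n.pitch + 1) / 128.0;
    return {x0, x1, y0, y1};            // clip to [0,1] before drawing
}
```

Future notes (onset later than now) land to the right of the central bar, recently sounded notes to the left, and each pulse of world time shifts every rectangle uniformly leftward.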
The lower half, the oracle visualization, shows the source musical material for the factor oracle, provided
in the preparatory stage. A red vertical line shows the present state of the oracle as a position in the source
material. During improvisation, this cursor maps the traversal of the oracle links in real-time; its rate of
movement reflects the amount of recombination introduced by the oracle. This display presents a visual aid
to the improviser, giving them a high-level structural understanding of the musical material at hand.
The piano roll conceptual model is familiar to musicians. The mapping of time along the horizontal axis
is omnipresent in music notation schemes, and it accommodates both static (oracle) and dynamic (tracks)
representations. Furthermore, the level of precision offered by the piano roll notation is particularly well
suited to improvisation, which involves fast pattern identification and recognition. A more precise graphic
notation, such as score notation, might overwhelm the less-experienced performer. By contrast, the
piano roll notation allows for the association of musical ideas with visual patterns.
3.5 MIDI input and output
Mimi interfaces with MIDI devices for input and output. The I/O subsystems leverage an existing MFSM
module which encapsulates the rtMidi library [19]. In order to provide the improvisation engine with a con-
tinuous stream of uniformly sampled characters, the MIDI Input subsystem maintains a persistent character
(labeled Current state in the graph). The Process event cell updates the state from MIDI events. The Sample
cell, triggered by a pulsar, generates volatile characters. The MIDI Output subsystem performs the reverse
operation to generate MIDI events from a stream of characters, implemented by the Generate MIDI events
cell.
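The state-keeping logic can be illustrated as below. The struct and method names are our assumptions, and the actual subsystem's binding to rtMidi callbacks is omitted; only the persistent-state pattern described above is shown.

```cpp
#include <array>
#include <cstdint>

// Sketch of the MIDI Input pattern: a persistent "current state" vector
// of per-note levels is updated by each incoming MIDI event, and a
// pulsar samples it at a uniform rate to produce volatile characters.
struct CurrentState {
    std::array<std::uint8_t, 128> velocity{};  // 0 = note off

    // Process event: fold a raw MIDI channel-voice message into the state.
    void processEvent(std::uint8_t status, std::uint8_t note, std::uint8_t vel) {
        std::uint8_t type = status & 0xF0;     // strip the channel nibble
        if (type == 0x90) velocity[note] = vel;     // note on (vel 0 = off)
        else if (type == 0x80) velocity[note] = 0;  // note off
    }

    // Sample: triggered by the pulsar, emits one volatile character.
    std::array<std::uint8_t, 128> sample() const { return velocity; }
};
```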
4 Visual Feedback in Improvisation
This section offers a discussion of the system and the motivation behind the design of its visual interface.
Mimi’s design evolved over the course of quasi-weekly play sessions involving the three authors. In the early sessions, Thurmond exercised free license in experimenting with the latest version of the system, and in learning its behavior in improvisation. As the sessions grew in sophistication, they became more structured: the laying down of the preparatory material for each oracle, and the experimentation with its expressive scope, were preceded and followed by question and answer sessions interrogating Thurmond on the interface design, the oracle design, and the performer-machine improvisation process.
Mimi incorporates aspects of raw reflex typically associated with improvisation, and preparation and
timing more closely affiliated with score-based reading. The visual feedback gives the performer foresight
into the future (through the right-hand-side of the scrolling upper panel showing future musical material),
hindsight into the past (through the left-hand-side of the scrolling upper panel), and the current state of the
oracle within the preparatory material. These visual cues were incorporated after the first few sessions of
free play, where it was determined that the performer needed more than simply a scrolling piano roll of the
music’s now and past.
Comparing the importance of the three types of information (future and past material, and oracle state),
Thurmond reports that he pays attention to the future content part of the panel approximately 60% of the
time, and divides the remaining 40% of the time between the past and the oracle state parts of the display.
It is expected that the improviser would find the future musical content of immediate interest for planning
purposes, to be able to foresee what the machine would be playing next ahead of time, and to have the time
to prepare counter material in response.
Quoting Thurmond, “I’m looking at what’s happening right then, and then I’m scanning ahead constantly.
... when I would see a certain kind of dissonant pattern coming up, I would know, because of the way I set
up the oracle itself, what other dissonant intervals I could put with it. And then I would use clusters where
I could see it coming to something I had set up that was open, and I would mix the cluster with it, and I
would repeat the cluster. And sometimes, I could see the silence, which is very important in the original
oracle - to build in silence - then I could place that, and set up an ostinato, and it would hit it, and it was
great. That was very exciting.”
What was surprising was the discovery that the past information was also important. As Thurmond puts it, it is important to look at what’s gone before “because it gives me an idea of what I want to do in the future.
... having a history is very important because if I do something that is not quite right, in improvisation
if you make a mistake, then you don’t jump away from it, you stay with it to make it not a mistake any
more. ... It becomes a feature ... then I bring it back again.” The statement resonates with John Cage’s
quote, “The idea of mistake is beside the point, for once anything happens it authentically is.” When the
unexpected happens, to re-create it, one has to recognize how it happened, and in what context, so as
to be able to repeat a particular pattern in the future. This is where the visual display of the past, of the
machine-generated context (in blue) and the human response (in red) comes into play.
Thurmond uses the oracle state panel in the lower half of the display to contextualize the present musical
content and state of the machine improvisation. “I wanted to see what [Mimi] was thinking. ... it allowed
me a lot more freedom than trying to remember just exactly what kind of pattern I set up. So the structure
that I set up, I am constantly reminded of the structure.” The oracle’s state thus provides the performer
with a quick reference for the structure of the present improvisation material.
5 Conclusion
This report described the design and implementation of Mimi, a musical improvisation system that explores
the potential and powerful impact of visual feedback in performer-machine interaction. In only a few short
months, it has already proven itself to be an invaluable platform through which to investigate the mental
models necessary for successful improvisation. Future research will explore improvisation strategies through the definition of key principles for setting up effective oracles in Mimi’s preparatory stage, and through the system’s application to the teaching of improvisation.
6 Acknowledgments
The authors thank Gérard Assayag for his valuable discussions on, and insights into, the use of factor oracles for machine improvisation in OMax, and Katherine de Sousa (USC Women in Science and Engineering Undergraduate Research Fellow) for her help in videotaping and transcribing the performer-machine improvisation
sessions.
References
[1] Cyril Allauzen, Maxime Crochemore, and Mathieu Raffinot. Factor oracle: A new structure for pattern
matching. In J. Pavelka, G. Tel, and M. Bartosek, editors, Proc. SOFSEM’99, Theory and Practice of
Informatics, pages 291–306, Milovy, Czech Republic, 1999. Springer Verlag Lecture Notes in Computer
Science.
[2] G. Assayag and S. Dubnov. Using factor oracles for machine improvisation. Soft Computing, 8:1–7,
2004.
[3] Gérard Assayag, Georges Bloch, Marc Chemillier, Arshia Cont, and Shlomo Dubnov. OMax brothers: a dynamic topology of agents for improvization learning. In Proc. ACM Workshop on Music and Audio
Computing, Santa Barbara, CA, USA, October 2006.
[4] Gérard Assayag, Camilo Rueda, Mikael Laurson, Carlos Agon, and Olivier Delerue. Computer assisted
composition at IRCAM: PatchWork & OpenMusic. Computer Music Journal, 23(3), 1999.
[5] Jean Bresson, Carlos Agon, and Gérard Assayag. OpenMusic 5: A cross-platform release of the
computer-assisted composition environment. In Proc. 10th Brazilian Symposium on Computer Music,
Belo Horizonte, Brazil, 2005.
[6] Roger B. Dannenberg. A language for interactive audio applications. In Proc. International Computer
Music Conference, San Francisco, CA, 2002.
[7] S. Dubnov and G. Assayag. Improvisation planning and jam session design using concepts of sequence
variation and flow experience. In Proc. International Conference on Sound and Music Computing,
Salerno, Italy, November 2005.
[8] Alexandre R.J. François. A hybrid architectural style for distributed parallel processing of generic
data streams. In Proc. International Conference on Software Engineering, pages 367–376, Edinburgh,
Scotland, UK, May 2004.
[9] Alexandre R.J. François and Elaine Chew. An architectural framework for interactive music systems.
In Proc. International Conference on New Interfaces for Musical Expression, Paris, France, June 2006.
[10] George Lewis. Too many notes: Computers, complexity and culture in Voyager. Leonardo Music Journal, 10:33–39, 2000.
[11] Max/MSP. www.cycling74.com.
[12] Modular Flow Scheduling Middleware. mfsm.sourceForge.net.
[13] Donald A. Norman. The Design of Everyday Things. Basic Books, 2002.
[14] OpenGL. www.OpenGL.org.
[15] OpenMusic. recherche.ircam.fr/equipes/repmus/OpenMusic.
[16] OpenSound Control. www.cnmat.berkeley.edu/OpenSoundControl.
[17] François Pachet. The continuator: Musical interaction with style. Journal of New Music Research,
32(3):333–341, 2003.
[18] Miller S. Puckette. A divide between ‘compositional’ and ‘performative’ aspects of Pd. In Proc. First
International Pd Convention, Graz, Austria, 2004.
[19] rtMidi. www.music.mcgill.ca/~gary/rtmidi.
[20] Belinda Thom. BoB: an interactive improvisational companion. In Proc. International Conference on
Autonomous Agents (Agents-2000), Barcelona, Spain, 2000.
[21] Belinda Thom. Interactive improvisational music companionship: A user-modeling approach. The User
Modeling and User-Adapted Interaction Journal; Special Issue on User Modeling and Intelligent Agents,
Spring 2003.
[22] William Walker and Brian Belet. Applying ImprovisationBuilder to interactive composition with MIDI
piano. In Proc. International Computer Music Conference, Hong Kong, China, 1996.
[23] William Walker, Kurt Hebel, Salvatore Martirano, and Carla Scaletti. ImprovisationBuilder: Impro-
visation as conversation. In Proc. International Computer Music Conference, San Jose, CA, USA,
1992.
[24] William F. Walker. A computer participant in musical improvisation. In Proc. Human Factors in
Computing Systems (CHI), Atlanta, GA, USA, March 1997.
[25] Gil Weinberg and Scott Driscoll. Robot-human interaction with an anthropomorphic percussionist. In
Proc. Human Factors in Computing Systems (CHI), Montreal, Québec, Canada, April 2006.