
Showing papers in "Computer Music Journal in 2000"


Journal ArticleDOI
TL;DR: This article is an attempt to provide feedback to both academic and commercial music software developers by showing how current DSP tools are being used by post-digital composers, affecting both the form and content of contemporary “non-academic” electronic music.
Abstract: Over the past decade, the Internet has helped spawn a new movement in digital music. It is not academically based, and for the most part the composers involved are self-taught. Music journalists occupy themselves inventing names for it, and some have already taken root: glitch, microwave, DSP, sinecore, and microscopic music. These names evolved through a collection of deconstructive audio and visual techniques that allow artists to work beneath the previously impenetrable veil of digital media. The Negroponte epigraph above inspired me to refer to this emergent genre as “post-digital” because the revolutionary period of the digital information age has surely passed. The tendrils of digital technology have in some way touched everyone. With electronic commerce now a natural part of the business fabric of the Western world and Hollywood cranking out digital fluff by the gigabyte, the medium of digital technology holds less fascination for composers in and of itself. In this article, I will emphasize that the medium is no longer the message; rather, specific tools themselves have become the message. The Internet was originally created to accelerate the exchange of ideas and development of research between academic centers, so it is perhaps no surprise that it is responsible for helping give birth to new trends in computer music outside the confines of academic think tanks. A non-academic composer can search the Internet for tutorials and papers on any given aspect of computer music to obtain a good, basic understanding of it. University computer music centers breed developers whose tools are shuttled around the Internet and used to develop new music outside the university. Unfortunately, cultural exchange between nonacademic artists and research centers has been lacking. 
The post-digital music that Max, SMS, AudioSculpt, PD, and other such tools make possible rarely makes it back to the ivory towers, yet these non-academic composers anxiously await new tools to make their way onto a multitude of Web sites. Even in the commercial software industry, the marketing departments of most audio software companies have not yet fully grasped the post-digital aesthetic; as a result, the more unusual tools emanate from developers who use their academic training to respond to personal creative needs. This article is an attempt to provide feedback to both academic and commercial music software developers by showing how current DSP tools are being used by post-digital composers, affecting both the form and content of contemporary “non-academic” electronic music.

372 citations


Journal ArticleDOI
TL;DR: A modular system for the real-time analysis of body movement and gesture, with a particular focus on the understanding of affect and expressive content in gesture, is developed.
Abstract: The goal of the EyesWeb project is to develop a modular system for the real-time analysis of body movement and gesture. Such information can be used to control and generate sound, music, and visual media, and to control actuators (e.g., robots). Another goal of the project is to explore and develop models of interaction by extending music language toward gesture and visual languages, with a particular focus on the understanding of affect and expressive content in gesture. For example, we attempt to distinguish the expressive content from two instances of the same movement

212 citations


Journal ArticleDOI
TL;DR: A complex, ecological-predictive ANN was designed that was able to learn a professional pianist's performance style at the structural micro-level and listeners were able to recognize the intended emotional colorings.
Abstract: This dissertation presents research in the field of automatic music performance, with a special focus on the piano. A system for automatic music performance is proposed, based on artificial neural networks (ANNs). A complex, ecological-predictive ANN was designed that listens to the last played note, predicts the performance of the next note, looks three notes ahead in the score, and plays the current tone. This system was able to learn a professional pianist's performance style at the structural micro-level. In a listening test, performances by the ANN were judged clearly better than deadpan performances and slightly better than performances obtained with generative rules. The behavior of an ANN was compared with that of a symbolic rule system with respect to musical punctuation at the micro-level. The rule system mostly gave better results, but some segmentation principles of an expert musician were only generalized by the ANN. Measurements of professional pianists' performances revealed interesting properties in the articulation of notes marked staccato and legato in the score. Performances were recorded on a grand piano connected to a computer. Staccato was realized by a micropause of about 60% of the inter-onset interval (IOI), while legato was realized by keeping two keys depressed simultaneously; the relative key overlap time depended on the IOI: the larger the IOI, the shorter the relative overlap. The magnitudes of these effects changed with the pianists' coloring of their performances and with the pitch contour. These regularities were modeled in a set of rules for articulation in automatic piano music performance. Emotional coloring of performances was realized by means of macro-rules implemented in the Director Musices performance system. These macro-rules are groups of rules combined such that they reflect previous observations on the musical expression of specific emotions. Six emotions were simulated. A listening test revealed that listeners were able to recognize the intended emotional colorings. In addition, some possible future applications are discussed in the fields of automatic music performance, music education, automatic music analysis, virtual reality, and sound synthesis.
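The staccato and legato measurements translate directly into articulation rules. The sketch below illustrates the idea; the staccato ratio follows the ~60%-of-IOI micropause reported above, while the legato coefficients are illustrative assumptions, not the dissertation's fitted values:

```python
def performed_duration(ioi_ms, articulation):
    """Map a notated articulation to a performed tone duration, given the
    inter-onset interval (IOI) to the next note, in milliseconds."""
    if articulation == "staccato":
        # micropause of about 60% of the IOI -> the tone sounds for ~40% of it
        return 0.40 * ioi_ms
    if articulation == "legato":
        # key overlap past the next onset; the relative overlap shrinks as
        # the IOI grows (the larger the IOI, the shorter the relative overlap)
        rel_overlap = max(0.02, 0.25 - 0.0001 * ioi_ms)
        return ioi_ms * (1.0 + rel_overlap)
    return ioi_ms  # default: release exactly at the next onset
```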

178 citations


Journal ArticleDOI
TL;DR: In this paper, the authors employ a framework based on Bayesian statistics for quantization of natural performances, and demonstrate that some simple quantization schemata can be derived in this framework by simple assumptions about timing deviations.
Abstract: Automatic Music Transcription is the extraction of an acceptable notation from performed music. One important task in this problem is rhythm quantization, which refers to the categorization of note durations. Although quantization of a purely mechanical performance is rather straightforward, the task becomes increasingly difficult in the presence of musical expression, i.e., systematic variations in the timing of notes and in tempo. For quantization of natural performances, we employ a framework based on Bayesian statistics. We demonstrate that some simple quantization schemata can be derived in this framework from simple assumptions about timing deviations. A general quantization method that can be derived in this framework is vector quantization (VQ). The algorithm operates on short groups of onsets; it is thus flexible in capturing the structure of timing deviations between neighboring onsets, and it performs better than simple rounding methods. Finally, we present some results on simple examples.
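The Bayesian idea can be illustrated with a one-onset sketch: score each candidate grid position by a simplicity prior times a Gaussian likelihood of the observed timing deviation, and pick the maximum a posteriori candidate. The prior and the value of sigma below are illustrative assumptions, not the paper's actual model:

```python
import math
from fractions import Fraction

def simplicity_prior(c):
    """Toy prior favoring simple subdivisions: weight 1/denominator of the
    grid fraction, so quarter positions beat triplet positions."""
    return 1.0 / Fraction(c).limit_denominator(8).denominator

def quantize_onset(t, candidates, prior, sigma=0.05):
    """MAP quantization of a single onset time t (in beats): choose the grid
    point c maximizing prior(c) * N(t; c, sigma^2)."""
    def log_posterior(c):
        return math.log(prior(c)) - (t - c) ** 2 / (2 * sigma ** 2)
    return max(candidates, key=log_posterior)

# candidate grid: subdivisions in fourths and thirds over two beats
grid = sorted({i / 4 for i in range(9)} | {i / 3 for i in range(7)})
```

An onset at 0.26 beats snaps to the simpler 0.25 despite 1/3 being a candidate, while 0.34 is close enough to the triplet position to override the prior.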

84 citations


Journal ArticleDOI
TL;DR: Director Musices is a program that transforms notated scores into musical performances and implements the performance rules emerging from research projects at the Royal Institute of Technology (KTH).
Abstract: Director Musices is a program that transforms notated scores into musical performances. It implements the performance rules emerging from research projects at the Royal Institute of Technology (KTH). Rules in the program model performance aspects such as phrasing, articulation, and intonation, and they operate on performance variables such as tone, inter-onset duration, amplitude, and pitch. By manipulating rule parameters, the user can act as a metaperformer controlling different features of the performance, leaving the technical execution to the computer. Different interpretations of the same piece can easily be obtained. Features of Director Musices include MIDI file input and output, rule palettes, graphical display of all performance variables (along with music notation), and user-defined performance rules. The program is implemented in Common Lisp and is available free as a stand-alone application both for Macintosh and Windows platforms. Further information, including music examples, publications, and the software itself, is located online at http:// www.speech.kth.se/music/performance/.
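A rule system of this kind can be sketched in miniature: a rule reads performance variables from each note and writes back deviations scaled by a user-set rule parameter k. The rule below is a hypothetical example in the spirit of the program, not one of the actual KTH rules:

```python
def duration_contrast(notes, k=1.0):
    """Hypothetical DM-style rule: exaggerate duration differences, shortening
    notes below the mean IOI and lengthening those above it. The rule
    parameter k scales the effect; k=0 leaves the performance deadpan."""
    mean_ioi = sum(n["ioi"] for n in notes) / len(notes)
    performed = []
    for n in notes:
        deviation = (n["ioi"] - mean_ioi) / mean_ioi
        performed.append({**n, "ioi": n["ioi"] * (1.0 + 0.1 * k * deviation)})
    return performed
```

Stacking several such rules, each with its own k, is what lets the user act as a metaperformer rather than editing individual notes.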

82 citations


Journal ArticleDOI
TL;DR: This article presents an accurate, efficient, and flexible three-part model for audio signals consisting of sines, transients, and noise by extending spectral modeling synthesis (SMS) with an explicit flexible transient model called transient-modeling synthesis (TMS).
Abstract: Sinusoidal modeling has enjoyed a rich history in both speech and music applications, including sound transformations, compression, denoising, and auditory scene analysis. For such applications, the underlying signal model must efficiently capture salient audio features (Goodwin 1998). In this article, we present an accurate, efficient, and flexible three-part model for audio signals consisting of sines, transients, and noise by extending spectral modeling synthesis (SMS) (Serra and Smith 1990) with an explicit flexible transient model called transient-modeling synthesis (TMS). The sinusoidal transformation system (STS) (McAulay and Quatieri 1986) and SMS find the slowly varying sinusoidal components in a signal using spectral-peak-picking algorithms. Subtracting the synthesized sinusoids from the original signal creates a residual consisting of transients and noise (Serra 1989; George and Smith 1992). However, sinusoids do not model this residual well. Although it is possible to model transients and noise by a sum of sinusoidal signals (as with the Fourier transform), it is neither efficient, because transient and noisy signals require many sinusoids for their description, nor meaningful, because transients are short-lived signals, while the sinusoidal model uses sinusoids that are active on a much larger time scale. In the STS system (generally applied to speech), the transient + noise residual is often masked sufficiently to be ignored (McAulay and Quatieri 1986). In music applications, this residual is often important to the integrity of the signal. The SMS system extends the sinusoidal model by explicitly modeling the residual as slowly filtered white noise. Although this technique has been very successful, transients do not fit well into this model, because transients modeled as filtered noise lose sharpness in their attack and tend to sound dull. Because transients are
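The sines-plus-residual decomposition at the core of SMS can be sketched for a single analysis frame: pick spectral peaks, resynthesize them as stationary sinusoids, and subtract. This is a toy single-frame version under simplifying assumptions; real SMS tracks peaks across frames and refines frequency estimates:

```python
import numpy as np

def sines_plus_residual(frame, sr, n_peaks=3):
    """Single-frame sketch: pick the strongest spectral peaks, resynthesize
    them as stationary sinusoids, and subtract to obtain the residual."""
    n = len(frame)
    win = np.hanning(n)
    spec = np.fft.rfft(frame * win)
    mag = np.abs(spec)
    # local maxima of the magnitude spectrum, strongest first
    peaks = [k for k in range(1, len(mag) - 1)
             if mag[k] > mag[k - 1] and mag[k] > mag[k + 1]]
    peaks = sorted(peaks, key=lambda k: mag[k], reverse=True)[:n_peaks]
    t = np.arange(n) / sr
    sines = np.zeros(n)
    for k in peaks:
        freq = k * sr / n                  # bin center frequency
        amp = 2 * mag[k] / win.sum()       # rough amplitude for a windowed frame
        sines += amp * np.cos(2 * np.pi * freq * t + np.angle(spec[k]))
    return sines, frame - sines
```

For a pure sinusoid at a bin frequency, nearly all the energy lands in the sinusoidal part and the residual is small; for transients, by the argument above, it would not.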

68 citations


Journal ArticleDOI
TL;DR: In this paper, performances are restricted to MIDI recordings of piano music, and the pitch, onset, and duration of every note are clearly defined.
Abstract: A matcher is an algorithm that links events in a musical performance to the corresponding events in a score. Matching is difficult because performers make errors, performers use expressive timing, and scores are frequently underspecified. In this article, two existing matchers are discussed. A general control structure is described in order to respecify these matchers and make them comparable. A new matcher is proposed that uses structural annotations in the score to deal better with the difficulties in matching.
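A minimal matcher can be built on edit-distance alignment of pitch sequences. The sketch below handles inserted, deleted, and substituted notes but ignores expressive timing and structural annotations, which is exactly where the matchers discussed in the article go further:

```python
def match(score, perf):
    """Align performance pitches to score pitches with edit-distance dynamic
    programming; return (score_index, perf_index) pairs for matched notes."""
    n, m = len(score), len(perf)
    cost = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(m + 1):
            if i == 0:
                cost[i][j] = j                      # insertions only
            elif j == 0:
                cost[i][j] = i                      # deletions only
            else:
                sub = cost[i - 1][j - 1] + (0 if score[i - 1] == perf[j - 1] else 1)
                cost[i][j] = min(sub, cost[i - 1][j] + 1, cost[i][j - 1] + 1)
    # backtrack, emitting a pair only where pitches actually agree
    pairs, i, j = [], n, m
    while i > 0 and j > 0:
        if score[i - 1] == perf[j - 1] and cost[i][j] == cost[i - 1][j - 1]:
            pairs.append((i - 1, j - 1)); i -= 1; j -= 1
        elif cost[i][j] == cost[i - 1][j] + 1:
            i -= 1                                  # performer skipped a note
        else:
            j -= 1                                  # performer added a note
    return list(reversed(pairs))
```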

36 citations


Journal ArticleDOI
TL;DR: This article gives an account of a research project conducted during 1995-1997 studying the GENDYN program and music, the software implementation of dynamic stochastic synthesis, a rigorous algorithmic composition procedure conceived by Iannis Xenakis.
Abstract: This article gives an account of a research project conducted during 1995-1997 studying the GENDYN program and music. The GENDYN program is the software implementation of dynamic stochastic synthesis, a rigorous algorithmic composition procedure conceived by Iannis Xenakis (1992). The original program was written in BASIC by the composer himself at CEMAMu, Paris, with the assistance of Marie-Hélène Serra. The program is self-contained in the sense that it does not depend on any other information input than that coded into the program lines. (Xenakis did parameterize some variables in his algorithm, but he did so by hard-coding their values into an auxiliary program called PARAG, read by GENDYN on startup. So, to be precise, the statement is true for the two programs GENDYN and PARAG taken together as a whole.) From an artistic point of view, however, the question is how far Xenakis, working with the program, was actually able to realize his artistic intents in his composition in spite of its sound and structure being entirely generated by probabilities. Trying to understand this, I decided to study the architecture of his program and also to operate it myself in order to practically experience the material conditions of Xenakis's creative work with the
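Dynamic stochastic synthesis itself is compact enough to sketch: each waveform period is a polygon of breakpoints, and between periods every breakpoint's amplitude and segment duration take a random-walk step reflected at fixed barriers. The parameter values below are illustrative assumptions, not Xenakis's:

```python
import random

def gendyn_sketch(n_breakpoints=12, n_periods=100, amp_bound=1.0,
                  dur_lo=2.0, dur_hi=40.0, step=0.1, seed=1):
    """Sketch of dynamic stochastic synthesis: a waveform period is a polygon
    of breakpoints; between periods, each breakpoint's amplitude and segment
    duration perform a random walk, reflected at elastic barriers."""
    rng = random.Random(seed)

    def reflect(v, lo, hi):
        while v < lo or v > hi:
            v = 2 * lo - v if v < lo else 2 * hi - v
        return v

    amps = [0.0] * n_breakpoints
    durs = [10.0] * n_breakpoints           # segment lengths in samples
    out = []
    for _ in range(n_periods):
        for i in range(n_breakpoints):
            amps[i] = reflect(amps[i] + rng.uniform(-step, step), -amp_bound, amp_bound)
            durs[i] = reflect(durs[i] + rng.uniform(-1.0, 1.0), dur_lo, dur_hi)
        prev = amps[-1]
        for i in range(n_breakpoints):      # linear interpolation of the polygon
            seg = int(durs[i])
            out.extend(prev + (amps[i] - prev) * k / seg for k in range(seg))
            prev = amps[i]
    return out
```

Because both pitch (period length) and timbre (polygon shape) drift under the same stochastic mechanism, sound and structure are indeed generated entirely by probabilities once the barriers and step sizes are fixed.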

35 citations


Journal ArticleDOI
TL;DR: Topics include the dynamics of sound diffusion; stereo and multitrack solo-tape diffusion with multispeaker configurations in composition and performance; spatial composing; the diffusion of "tape-plus" pieces; diffusion as drama or "aural cinema".
Abstract: This article belongs to a series of discussions I have been conducting with leading composer-practitioners of electroacoustic and computer music whose contributions to the art and technology of sound diffusion have been key to its development and growing acceptance through the last four decades. These discussions cover the aesthetics of, compositional approaches to, and technological realizations of sound diffusion. Topics include the dynamics of sound diffusion; stereo and multitrack solo-tape diffusion with multispeaker configurations in composition and performance; spatial composing; the diffusion of "tape-plus" pieces; diffusion as drama or "aural cinema"; present and future developments in sound diffusion in loudspeaker orchestras and multispeaker diffusion systems such as the Acousmonium (Desantos 1997), the Gmebaphone (Clozier 1997), and BEAST (Harrison 1998); the Spatialisateur (Jot 1997); advances in the field of ambisonics and Digital Versatile Disc (DVD) multichannel systems (Elen 1998); and specialized multiple-speaker systems (Stockhausen 1996). The present interview with composer Denis Smalley (see Figure 1) took place at City University in London on 2 April 1998. Denis Smalley, composer, is professor and head of the Department of Music at City University, London. He received his first degrees in New Zealand, specializing in composition and performance. He studied with Olivier Messiaen at the Paris Conservatoire and investigated electroacoustic composition with the Groupe de Recherches Musicales in Paris before moving to the UK, where he completed his doctorate at the University of York. He was senior lecturer in music and director of the Electroacoustic Music Studio at the Univer-

26 citations


Journal ArticleDOI
Osvaldo Budón1
TL;DR: Characterized by an unrelenting flow of energy, the textures of his music often display dense streams of very small sound particles evolving at different densities and rates of speed.
Abstract: Computer Music Journal, 24:3, pp. 9–22, Fall 2000 © 2000 Massachusetts Institute of Technology. In the context of electroacoustic and computer music, Horacio Vaggione (see Figure 1) has emerged as one of the most original composers working today. Characterized by an unrelenting flow of energy, the textures of his music often display dense streams of very small sound particles evolving at different densities and rates of speed. He has developed a unique approach to sound composition on multiple time scales. Born in Córdoba, Argentina in 1943, Mr. Vaggione has been involved in the field of electronic music from an early age. He was co-founder of the Experimental Music Center of the University of Córdoba, Argentina. In Spain, he was a member of the ALEA live electronic music group and worked at the Computer Music Project at the University of Madrid. Later he produced music at the Groupe de Musique Expérimentale de Bourges (GMEB), Institut National de l’Audiovisuel/Groupe de Recherches Musicales (INA-GRM), the Institut de Recherche et Coordination Acoustique/Musique (IRCAM), the Institute of Sonology in The Hague, and the Technical University of Berlin. Currently Horacio Vaggione carries on multiple activities as a composer, teacher, and researcher. Since 1978 he has been living in France, where he is currently a professor of music at the Université de Paris VIII and director of the Centre de Recherche Informatique et Création Musicale (CICM) attached to the École Doctorale Esthétique, Sciences et Technologies des Arts of the University. In 1997–1998, I visited the Université de Paris VIII where I attended Mr. Vaggione’s graduate seminar in computer music. The following interview was started then and later continued via electronic mail.

25 citations



Journal ArticleDOI
TL;DR: Years ago, in the founding days of ASCAP, Richard Rodgers, composer of countless wonderful showtunes, is reputed to have cast "serious" composers in the collective role of the research-and-development department of the music industry.
Abstract: Years ago, in the founding days of ASCAP, Richard Rodgers, composer of countless wonderful showtunes, is reputed to have cast "serious" (ASCAP's term to indicate art as opposed to entertainment) composers in the collective role of the research-and-development department of the music industry. And it does seem to be true that in the industry of electronic music, the research-and-development department has influenced the popular music division in different ways. Many pop-culture groups have acknowledged a background in the electronic music classics, many popular-music composers listen seriously to "high-art" electronic music, and many commercially successful ideas and technologies have grown out of the "serious" music world. Sampling is rooted in the tradition of musique concrète, for example, and frequency modulation as a sound-generating technique was a product of computer music research. Lines of influence occasionally seem to point also in the other direction, from popular music to computer music. Some computer music composers have incorporated popular elements such as jazz standards and folk tunes in their music and, far more important, some composers have reinterpreted the dynamics of jazz improvisation into the framework of performance with interactive systems. In whatever direction influence flows, however, it is not surprising that composers of one type of music might take ideas from other types of music. But at this particular moment in the history of computer music, the flow of ideas between high art and popular art seems to have a particular significance. Indeed, the protective parapet that has long kept high art and popular art mutually exclusive seems to be showing signs of vulnerability. It seems that we are about to enter a new cultural architec-

Journal ArticleDOI
TL;DR: Tristan Murail founded the ensemble L’Itinéraire, which became known as the starting point for an aesthetic movement known as spectral composition, its two main proponents being Tristan Murail and Gérard Grisey.
Abstract: Computer Music Journal, 24:1, pp. 11–19, Spring 2000 © 2000 Massachusetts Institute of Technology. Tristan Murail was born in Le Havre, France in 1947. Following university studies in economics, Arabic, and political science, he entered the composition class of Olivier Messiaen at the Conservatoire National Supérieur de Musique de Paris in 1967. Upon graduation in 1971, he was awarded the Prix de Rome. On his return to Paris, he founded the ensemble L’Itinéraire with composers and former Conservatoire classmates Gérard Grisey and Michaël Levinas. L’Itinéraire soon became known as the starting point for an aesthetic movement known as spectral composition, its two main proponents being Tristan Murail and Gérard Grisey. In a nutshell, much of the material in a spectral composition is derived from the frequencies of spectra and their behavior. Tristan Murail has been involved with IRCAM since 1980 as a composer, researcher, and professor. In 1997 he moved to New York City, where he is a professor of composition at Columbia University. The following interview was conducted by telephone on 7 February 1999.

Journal ArticleDOI
Benny Sluchin1
TL;DR: Always interested in the expanded technical possibilities of melodic instruments, Stockhausen wanted to compose polyphonic music for a solo monodic instrument where the soloist is aided by several assistants.
Abstract: In Solo, he turned his attention to the tape recorder, or, more precisely, to the tape loop. For Stockhausen, feedback went beyond the simple "reinjection" of sound into a musical process; it had a more general meaning, as he himself said: "I mean, for example, any kind of feedback between musicians who play in a group, where one musician inserts something, bringing something into context and then listening to what the next musician's doing with it when he's following certain instructions, transforming what he hears" (Cott 1973). Other pieces written with this philosophy include Prozession, Kurzwellen, Spiral, Pole, and Expo. The composer used different symbols to indicate the transformations to be used with a chosen musical event played by either the same or another player. Always interested in the expanded technical possibilities of melodic instruments, Stockhausen wanted to compose polyphonic music for a solo monodic instrument where the soloist is aided by several assistants (Stockhausen 1971):


Journal ArticleDOI
TL;DR: Four parallel trends of digital musicians that have no apparent intersection between them are examined: “live laptop performers,” audio CD designers, “unclassifiable artists,” and techno artists.
Abstract: Computer Music Journal, 24:4, pp. 19–32, Winter 2000 © 2000 Massachusetts Institute of Technology. The 1990s witnessed an explosion in the number of Japanese “digital performers” coming from virtually nowhere—or, more accurately, born directly from digital technology and hardware. Most of them had no musical background. They were graphic designers, programmers, rock musicians, or simply fans of 1970s progressive rock or 1980s alternative rock—Kraftwerk, Yellow Magic Orchestra, Cabaret Voltaire, The Residents. This was a generation not particularly interested in music, but one that enjoyed editing digital files that contain sounds (among other things). For them, the music of diverse cultures and historical periods are all situated on the same ahistorical plane, a flat space that simply crosses the year 2000. The title of a recent techno-lounge-collage CD, The World Shopping with Space Ponch, accurately embodies this attitude: the world and its complexity are reduced to the surface of a discount cultural supermarket (Kishino et al. 2000; Kishino et al. 1999). I will examine four parallel trends of digital musicians that have no apparent intersection between them: “live laptop performers,” audio CD designers, “unclassifiable artists,” and techno artists. They do, however, share numerous traits, either in the technology or the forms of expression used.

Journal ArticleDOI
TL;DR: My composition Tongues of Fire (1994) is a musical work created for the recorded medium using the computer that relies on the computer's signal-processing power to metamorphose one kind of sound material into another, thereby making audible connections between different kinds of music.
Abstract: These are connections I hope the listener will perceive and appreciate, though not necessarily consciously. Hence, much of this article is for the benefit of composers rather than potential listeners. My composition Tongues of Fire (1994) is a musical work created for the recorded medium using the computer. It relies on the computer's signal-processing power to metamorphose one kind of sound material into another, thereby making audible connections between different kinds of

Journal ArticleDOI
TL;DR: In any closely controlled modal, tonal, or atonal style, independent contrapuntal lines that align to form coherent harmonies often appear to be felicitous solutions to a complex set of simultaneous equations: counterpoints are discoveries as much as creations.
Abstract: Sixteenth-century composers such as Palestrina and Lassus and 20th-century composers such as Bartók, Stravinsky, and Schoenberg have constructed counterpoints according to very different, but equally rigorous, melodic and harmonic constraints. Owing to the dual harmonic/melodic constraints on each pitch in any closely controlled modal, tonal, or atonal style, independent contrapuntal lines which align to form coherent harmonies often appear to be felicitous solutions to a complex set of simultaneous equations: discoveries as much as creations.

Journal ArticleDOI
TL;DR: Now it is possible to directly analyze all possible combinations of all allowable modifications of predefined musical themes that satisfy a set of restrictions.
Abstract: For several centuries, imitation has supplied the locomotive force of counterpoint. Composers of different cultures, schools, and styles have constructed musical forms on the basis of relatively short themes passed among different voices with various delays and modifications. However, the problem of the self-compatibility of the theme arises: how does one imitate the theme with modifications of that same theme while maintaining a certain vertical sonority at each instant and avoiding unwanted parallelisms and harmonies? A problem of this kind also arises when one tries to present several themes simultaneously or with a short delay. Few traditional quantified approaches to this problem exist that do not use a computer. One method is to compose the final part of a theme after integrating it into a polyphonic structure. Once we have found a certain combination, we can derive some "child" variants and try to connect them. This way of thinking yields more questions than answers, such as how to implement predefined themes and how to connect the derivatives if possible. On the other hand, the computer provides a means to quickly evaluate the large number of possible combinations. Now it is possible to directly analyze all possible combinations of all allowable modifications of predefined musical themes that satisfy a set of restrictions.
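The combinatorial search can be sketched directly: enumerate allowable modifications of a theme against delayed entries of the original, keeping only combinations whose vertical intervals pass a sonority test. The modification set and the interval whitelist below are illustrative assumptions, not the paper's actual restrictions:

```python
from itertools import product

ACCEPTABLE = {0, 3, 4, 5, 7, 8, 9}   # vertical interval classes allowed (illustrative)

def modifications(theme):
    """Allowable modifications, as an illustration: transpositions within a
    fifth either way, plus inversions about the first note."""
    for shift in range(-7, 8):
        yield tuple(p + shift for p in theme)
        yield tuple(2 * theme[0] + shift - p for p in theme)
    # (retrograde, augmentation, etc. could be added the same way)

def self_compatible(theme, max_delay=4):
    """All (variant, delay) pairs whose overlap with the theme forms only
    acceptable vertical intervals."""
    hits = []
    for variant, delay in product(set(modifications(theme)), range(1, max_delay)):
        overlap = zip(theme[delay:], variant)
        if all(abs(a - b) % 12 in ACCEPTABLE for a, b in overlap):
            hits.append((variant, delay))
    return hits
```

Even this toy version makes the point of the abstract: the machine can exhaustively test every (modification, delay) pair against the restrictions, where a composer working by hand must probe combinations one at a time.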

Journal ArticleDOI
TL;DR: Sculptor is a phase-vocoder-based package of programs that allows users to explore timbral manipulation of sound in real time to perform gestural capture by analysis of the sound a performer makes using a conventional instrument.
Abstract: Sculptor is a phase-vocoder-based package of programs that allows users to explore timbral manipulation of sound in real time. It is the product of a research program seeking ultimately to perform gestural capture by analysis of the sound a performer makes using a conventional instrument. Since the phase-vocoder output is of high dimensionality (typically more than 1,000 channels per analysis frame), mapping phase-vocoder output to appropriate input parameters for a synthesizer is only feasible in theory.
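The dimensionality problem is easy to make concrete: a single analysis frame already yields on the order of a thousand magnitude/phase channel pairs. A minimal sketch (not Sculptor's own code):

```python
import numpy as np

def pv_analyze_frame(frame):
    """One phase-vocoder analysis frame: a windowed FFT split into per-channel
    magnitude and phase."""
    spectrum = np.fft.rfft(frame * np.hanning(len(frame)))
    return np.abs(spectrum), np.angle(spectrum)

# a 2048-point frame yields 1025 channels, i.e., 2050 numbers per frame
mag, phase = pv_analyze_frame(np.zeros(2048))
```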

Journal ArticleDOI
TL;DR: Current synthesizers can be very expressive instruments, whether they are analog synthesizers with multiple oscillators and filters that can shape a sound, sample-based synthesizers that use multidimensional wavetables to provide timbral variation, or physical modeling synthesizer that simulate acoustic instruments and can be articulated in analogous ways.
Abstract: Current synthesizers can be very expressive instruments, whether they are analog synthesizers with multiple oscillators and filters that can shape a sound, sample-based synthesizers that use multidimensional wavetables to provide timbral variation, or physical modeling synthesizers that simulate acoustic instruments and can be articulated in analogous ways. Yet computer music is still often thought of as lacking in expression, because although current synthesizers allow almost unlimited expression, this potential is often difficult to realize with computers.






Journal ArticleDOI
TL;DR: PRIE has been developed to address issues of symbolic representation and levels of manipulation of musical materials, particularly for object-oriented conceptions and to address composers' reluctance to develop large quantities of code.
Abstract: Commenting on Anton Webern's music, Pierre Schaeffer addressed the concept of a musical expression that is highly organized but at the same time capable (by means of timbre and textural variations) of hiding or showing part of its internal structure, even exposing something other than its internal structure (Pierret 1969). Similarly, MIDI allows composers to organize musical ideas while hiding elements of structure. Because sounds reside in external devices, manipulation of MIDI data can be based on a set of abstract methods that frees composers to expose the derived structures according to their personal aesthetic. (Relationships with adopted synthesis techniques, although fundamental, are not of interest here.) A direct consequence is that MIDI can support note-oriented as well as object-oriented techniques, and all the nuances between these two poles. However, composers frequently dismiss MIDI because of its symbolic representation and levels of manipulation of musical materials, particularly for object-oriented conceptions. Working at the object level often requires the ability to create and edit events by moving from the general to the particular. This also often necessitates symbolizing the flow of materials and relationships with a form-driven logic and at different degrees of magnification. Furthermore, it is important to be able to operate in a fully graphical environment. Though this point may appear to be unimportant, composers are rarely willing to develop large quantities of code or to invest a significant part of their time in learning how to carry out the computer-based part of their work. PRIE has been developed to address these issues. (Related previous literature includes works by Greussay 1973; Ambrosini 1982; Koenig 1983; Provaglio et al. 1991; and Haus and Sametti 1991.) PRIE