
Showing papers in "Organised Sound in 2002"


Journal ArticleDOI
TL;DR: The issues involved in the design of electronic and computer interfaces, specifically mapping - the designed link between an instrument's playing interface and its sound source - are considered, with the aim of providing a framework for future discussions on what makes an effective mapping.
Abstract: This paper considers the issues involved in the design of electronic and computer interfaces, specifically mapping - the designed link between an instrument's playing interface and its sound source. It defines the problem area, reviews the literature, and gives examples of specific system mappings. A general model is presented, with the aim of providing a framework for future discussions on what makes an effective mapping. Several guidelines for mapping strategies are given, based on existing work.
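
As an illustration of the distinction the paper draws between mapping strategies, the sketch below contrasts a one-to-one mapping with a one-to-many mapping from a single control value to synthesis parameters. The parameter names and scalings are invented for this example and are not taken from the paper.

```python
# A sketch (not from the paper) of two mapping strategies from a single
# normalised control value to synthesis parameters; names are hypothetical.

def one_to_one(pressure):
    """Each control drives exactly one synthesis parameter."""
    return {"amplitude": pressure}

def one_to_many(pressure):
    """One control fans out to several coupled parameters, loosely imitating
    how breath pressure shapes an acoustic note."""
    return {
        "amplitude": pressure,
        "brightness": 0.3 + 0.7 * pressure,   # louder playing also sounds brighter
        "vibrato_depth": 0.1 * pressure,
    }

if __name__ == "__main__":
    for p in (0.2, 0.8):
        print(one_to_one(p), one_to_many(p))
```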

183 citations


Journal ArticleDOI
TL;DR: The term soundscape composition did not exist when the author started composing with environmental sounds in the mid-1970s; the emergence of this work, now often called soundscape composition, came as a surprise to her, as she had never thought of composing or broadcasting as a professional choice.
Abstract: 1. EXPLORING THE CONNECTIONThe term soundscape composition did not exist when I started composing with environmental sounds in the mid-1970s. Through a variety of fortunate circumstances and because of what the 1970s were in Vancouver and Canada - artistically inspiring and moneys were available for adventurous and culturally, socially, politically progressive projects - I had discovered that environmental sounds were the perfect compositional ‘language’ for me. I had learnt much while working with the World Soundscape Project at Simon Fraser University, about listening, about the properties of sound, about noise, the issues we face regarding the quality of the sound environment and much more. This in combination with learning to record and to work with analog technology in the sonic studio allowed me to speak with sound in a way I found irresistible. In addition, the start-up of Vancouver Co-operative Radio gave us the - at that time rare - opportunity to broadcast our work. It was a place where cultural exploration and political activism could meet. It was from within this exciting context of ecological concern for the soundscape and the availability of an alternate media outlet that my compositional work - now often called soundscape composition - emerged. And it came as a surprise to me, as I had never thought of composing nor of broadcasting as a professional choice in my life.

130 citations


Journal ArticleDOI
TL;DR: A two-axis transparency framework that can be used as a predictor of the expressivity of a musical device is defined, and this theory provides a framework for design and evaluation of new human–machine and human–human interactions, including musical instruments.
Abstract: We define a two-axis transparency framework that can be used as a predictor of the expressivity of a musical device. One axis is the player's transparency scale, while the other is the audience's transparency scale. Through consideration of both traditional instruments and new technology-driven interfaces, we explore the role that metaphor plays in developing expressive devices. Metaphor depends on a literature, which forms the basis for making transparent device mappings. We examine four examples of systems that use metaphor: Iamascope, Sound Sculpting, MetaMuse and Glove-TalkII; and discuss implications on transparency and expressivity. We believe this theory provides a framework for design and evaluation of new human–machine and human–human interactions, including musical instruments.

128 citations


Journal ArticleDOI
TL;DR: The soundscape composition, as pioneered at Simon Fraser University since the early 1970s, has evolved rapidly to explore a full range of approaches from the 'found sound' representation of acoustic environments through to the incorporation of highly abstracted sonic transformations.
Abstract: The soundscape composition, as pioneered at Simon Fraser University since the early 1970s, has evolved rapidly to explore a full range of approaches from the ‘found sound’ representation of acoustic environments through to the incorporation of highly abstracted sonic transformations. The structural approaches similarly range from being analogues of real-world experience, such as listening from a fixed spatial perspective or moving through a connected series of acoustic spaces, to those that mirror both nonlinear mental experiences of memory recall, dreams, and free association, as well as artificial sonic constructs made familiar and possible by modern ‘schizophonic’ audio techniques of sonic layering and embedding. The octophonic surround-sound playback format as used in contemporary soundscape presentations has achieved a remarkable sense of immersion in a recreated or imaginary sonic environment. Specific works realised at SFU are analysed that illustrate each of these approaches.

114 citations


Journal ArticleDOI
TL;DR: Several implications of the mapping strategies are discussed: the influence of chosen mapping limits on performers' virtuosity, and the incidence of mapping on the learning process with virtual instruments and on improvisation possibilities.
Abstract: This paper is about mapping strategies between gesture data and synthesis model parameters by means of perceptual spaces. We define three layers in the mapping chain: from gesture data to gesture perceptual space, from sound perceptual space to synthesis model parameters, and between the two perceptual spaces. This approach makes the implementation highly modular. Both perceptual spaces are developed and depicted with their features. To get a simple mapping between the gesture perceptual subspace and the sound perceptual subspace, we need to focus our attention on the two other mappings. We explain the mapping types: explicit/implicit, static/dynamic. We also present the technical and aesthetical limits introduced by mapping. Some practical examples are given of the use of perceptual spaces in experiments done at LMA in a musical context. Finally, we discuss several implications of the mapping strategies: the influence of chosen mapping limits on performers' virtuosity, and the incidence of mapping on the learning process with virtual instruments and on improvisation possibilities.
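
The three-layer chain described in the abstract can be pictured as three composed functions. The sketch below is a hypothetical reduction; all feature names, scalings and the choice of perceptual dimensions are invented and do not reproduce the LMA implementation.

```python
# Hypothetical sketch of a three-layer mapping chain: gesture data ->
# gesture perceptual space -> sound perceptual space -> synthesis parameters.
# All names and scalings are invented for illustration.

def gesture_to_percept(raw):
    # Layer 1: reduce raw sensor data to perceptual gesture features.
    return {"energy": raw["velocity"] * raw["pressure"], "openness": raw["spread"]}

def percept_to_percept(g):
    # Layer 2: the simple link between the two perceptual (sub)spaces.
    return {"loudness": g["energy"], "brightness": g["openness"]}

def percept_to_synthesis(s):
    # Layer 3: expand sound percepts into concrete synthesis parameters.
    return {"gain": s["loudness"], "filter_cutoff_hz": 200.0 + 8000.0 * s["brightness"]}

if __name__ == "__main__":
    raw = {"velocity": 0.9, "pressure": 0.6, "spread": 0.4}
    print(percept_to_synthesis(percept_to_percept(gesture_to_percept(raw))))
```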

96 citations


Journal ArticleDOI
TL;DR: The genesis of soundscape composition and its underlying principles and motivations are charted, and one perspective is presented: that of considering soundscape composition as ethnography, a discipline primarily engaged with the making of representation.
Abstract: Despite roots in acoustic ecology and soundscape studies, the practice and study of soundscape composition is often grouped with, or has grown out of the acousmatic music tradition. This can be observed in the positioning of soundscape compositions juxtaposed with acousmatic music compositions in concert programmes, CD compilations and university syllabuses. Not only does this positioning inform how soundscape composition is listened to, but also how it is produced, sonically and philosophically. If the making and presenting of representations of environmental sound is of fundamental concern to the soundscape artist, then it must be addressed. As this methodological issue is outside of previous musical concerns, to this degree, we must look to other disciplines that are primarily engaged with the making of representation, and that have thoroughly questioned what it is to make and present representations in the world today. One such discipline is ethnography. After briefly charting the genesis of soundscape composition and its underlying principles and motivations, the rest of the paper will present and develop one perspective, that of considering soundscape composition as ethnography.

91 citations


Journal ArticleDOI
TL;DR: A new way of thinking about musical tones is described, specifically in the context of how features of a sound might be controlled by computer musicians, and how those features might be most appropriately mapped onto musical controllers.
Abstract: In this paper we describe a new way of thinking about musical tones, specifically in the context of how features of a sound might be controlled by computer musicians, and how those features might be most appropriately mapped onto musical controllers. Our approach is the consequence of one bias that we should reveal at the outset: we believe that electronically controlled (and this includes computer-controlled) musical instruments need to be emancipated from the keyboard metaphor; although piano-like keyboards are convenient and familiar, they limit the musician's expressiveness (Mathews 1991, Vertegaal and Eaglestone 1996, Paradiso 1997, Levitin and Adams 1998). This is especially true in the domain of computer music, in which timbres can be created that go far beyond the physical constraints of traditional acoustic instruments.

73 citations


Journal ArticleDOI
TL;DR: A compound mapping cross-coupling several controls and synthesis parameters can surprisingly increase the performer's intuitive understanding of the instrument.
Abstract: Software-based musical instruments have controls for input, a sound synthesizer for output, and mappings connecting the two. An effective layout of controls considers how many degrees of freedom each has, as well as the overhead of selecting each one while performing. An isolated mapping from one control to one synthesis parameter needs an appropriate choice of proportional, integral or derivative control (the control's value, or that value's rate of change, drives the synthesis parameter's value, or that value's rate of change). Beyond this, a compound mapping cross-coupling several controls and synthesis parameters can surprisingly increase the performer's intuitive understanding of the instrument.
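
A minimal sketch of the proportional/integral/derivative choice for an isolated one-to-one mapping, assuming a normalised control stream sampled at a fixed rate; the class and parameter names are illustrative only.

```python
# Illustrative proportional / integral / derivative mappings from one control
# stream to one synthesis parameter (names and rates are assumptions).

class ControlMapping:
    def __init__(self, mode="proportional", dt=0.01):
        self.mode, self.dt = mode, dt
        self.accum, self.prev = 0.0, 0.0

    def __call__(self, control):
        if self.mode == "proportional":      # parameter follows the control value
            out = control
        elif self.mode == "integral":        # parameter follows the accumulated value
            self.accum += control * self.dt
            out = self.accum
        else:                                # "derivative": follows the rate of change
            out = (control - self.prev) / self.dt
        self.prev = control
        return out

if __name__ == "__main__":
    ramp = [i / 10.0 for i in range(11)]     # a slowly rising fader
    for mode in ("proportional", "integral", "derivative"):
        m = ControlMapping(mode)
        print(mode, [round(m(c), 2) for c in ramp][-3:])
```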

60 citations


Journal ArticleDOI
TL;DR: Designing interactive music systems and artworks whose interaction does not follow pre-defined pathways would be a precursor to a new approach to interactivity that responds more directly and uniquely to those who engage in the work, and in so doing rewards them more richly for their time, energy and enthusiasm.
Abstract: This article is intended to raise some points of interest and mark out some pointers for alternative approaches to the design and execution of interactive music systems and artworks which pursue interaction that: • does not include any pre-defined pathways, • takes dynamic morphology as its foundation, and • implements dynamic software infrastructures, built on the object-oriented model, providing dynamic instrument instantiation, orchestration and timbral control. It is intended that such design would be a precursor to a new approach to interactivity that responds more directly and uniquely to those who engage in the work, and in so doing rewards them more richly for their time, energy and enthusiasm.

55 citations


Journal ArticleDOI
TL;DR: This paper explains a number of important spatial composition strategies available to the acousmatic composer in light of current technology and sound reproduction situations, from an aesthetic rather than a technical standpoint.
Abstract: Spatial elements in acousmatic music are inherent to the art form, in composition and in the projection of the music to the listener. But is it possible for spatial elements to be as important carriers of musical structure as the other aspects of sound? For a parameter to serve the requirements of musical development, it is necessary for that parameter to cover a range of perceptually different states. For ‘space’ to be more than a setting within which the main active elements in the structure unfold, it needs to satisfy these requirements. This paper explains a number of important spatial composition strategies available to the acousmatic composer in light of current technology and sound reproduction situations. The analysis takes an aesthetical rather than a technical standpoint.

51 citations


Journal ArticleDOI
TL;DR: The vBow, a virtual violin bow musical controller, has been designed to provide the computer musician with most of the gestural freedom of a bow on a violin string.
Abstract: The vBow, a virtual violin bow musical controller, has been designed to provide the computer musician with most of the gestural freedom of a bow on a violin string. Four cable and servomotor systems allow for four degrees of freedom, including the lateral motion of a bow stroke across a string, the rotational motion of a bow crossing strings, the vertical motion of a bow approaching and pushing into a string, and the longitudinal motion of a bow travelling along the length of a string. Encoders, attached to the shaft of the servomotors, sense the gesture of the performer, through the rotation of the servomotor shafts, turned by the motion of the cables. The data from each encoder is mapped to a parameter in synthesis software of a bowed-string physical model. The software also sends control voltages to the servomotors, engaging them and the cables attached to them with a haptic feedback simulation of friction, vibration, detents and elasticity.
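
A hypothetical reduction of the mapping stage described above: four normalised encoder readings, one per degree of freedom, are scaled onto bowed-string model parameters. The names, ranges and scalings are invented and do not reflect the actual vBow software.

```python
# Hypothetical mapping of four normalised encoder readings (one per degree of
# freedom) onto bowed-string model parameters; not the actual vBow software.

def encoders_to_bowed_string(lateral, rotation, vertical, longitudinal):
    """Each argument is an encoder value normalised to the range [0, 1]."""
    return {
        "bow_velocity": 2.0 * lateral - 1.0,        # stroke speed and direction
        "string_index": int(rotation * 3.999),      # which of four strings is bowed
        "bow_force": 0.05 + 0.95 * vertical,        # pressure into the string
        "bow_position": 0.02 + 0.2 * longitudinal,  # distance from the bridge
    }

if __name__ == "__main__":
    print(encoders_to_bowed_string(0.7, 0.25, 0.5, 0.1))
```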

Journal ArticleDOI
TL;DR: This paper explores the artistic and technical development of the Metasaxophone, as well as new conceptions of musical mappings arising from the enhanced interface.
Abstract: The Metasaxophone is an acoustic tenor saxophone retrofitted with an onboard computer microprocessor and an array of sensors that convert performance data into MIDI control messages. The instrument has additionally been outfitted with a unique microphone system that allows for detailed control of the amplified sound. While maintaining full acoustic functionality it is also a versatile MIDI controller and an electric instrument. A primary motivation behind the Metasaxophone is to put signal processing under direct expressive control of the performer. Through the combination of gestural and audio performance control, employing both discrete and continuous multilayered mapping strategies, the Metasaxophone can be adapted for a wide range of musical purposes. This paper explores the artistic and technical development of the instrument, as well as new conceptions of musical mappings arising from the enhanced interface.

Journal ArticleDOI
TL;DR: This essay is a case study of a design process for interactive performance that presents a simple model of composition, choreography and collaboration in an interactive context and offers the possibility of a new kind of interactive theatre/costume design – an interactive sonic character.
Abstract: Pikapika is a collaborative solo performance by Bahn and Hahn that presents a simple model of composition, choreography and collaboration in an interactive context. The piece offers the possibility of a new kind of interactive theatre/costume design – an interactive sonic character. This essay is a case study of a design process for interactive performance. While we include some details of our specific interface, these are primarily employed as examples to suggest our principles for creating personal, idiosyncratic interactive systems. Our collaboration integrates elemental sound and movement relationships with an awareness of the embodied cultural knowledge of the performer and with a specific sensing scheme to capture her particular gestural vocabulary. The combination of individual ‘atoms’ of movement and sound leads to a complexity that must be practised until they can be performed with ease as an embodied interaction. We find the process of collaboration and its articulation as a dynamic interactive structure fascinating and enduring beyond the specific technologies employed. The terms meta-composition, composed instrument and composed character are used to describe the interactive structure of the piece.

Journal ArticleDOI
TL;DR: Semiotic concepts offer an interesting approach to sound perception, and the concept of soundscape can be compared with Ferdinand de Saussure's semiotic theory about the arbitrary meaning of signs.
Abstract: In discussing different sound environments - sound in the field of art as well as sound in the context of our daily sonic environment - this article makes reference to semiotic theories. Sound without source. Electroacoustic media shape our perceptive realities. There are multiple tools available to record and reproduce sound, but is it possible to handle the fleeting nature of sound, the escape of sound? Certainly there are tools to manipulate sound, to create new soundscapes in this way. We can generate virtual sound-projecting soundscapes via speakers, via headphones in a new context - but what are we listening to? Every sound evokes images. The concept of ‘musique acousmatique’, according to Francois Bayle, amplifies Pierre Schaeffer's notion of the ‘objet sonore’. ‘Musique acousmatique’ refers to sound projection, and thus to our imagination while concentrating on listening. In listening to acousmatic music, we can find three tonal levels, and this tripartite concept of listening refers to the tripartite semiotic concept introduced by Charles Sanders Peirce. Finally, sound affects us emotionally. In contradiction to the term ‘objet sonore’, the term ‘sound event’ coined by R. Murray Schafer stresses the necessity to analyse sound in its context. It is the sonic environment which determines the meaning of the ‘sound event’. Thus, from my point of view, the concept of soundscape can be compared with Ferdinand de Saussure's semiotic theory about the arbitrary meaning of signs. Signs are determined by their systems. Semiotic concepts offer an interesting approach to sound perception. Let's listen to soundscapes before sound escapes.

Journal ArticleDOI
TL;DR: Hildegard Westerkamp's Kits Beach Soundwalk effectively promotes the changing of listening habits; the distancing of individuals from oppressive sonic environments; and the regaining of an individual's inner voice.
Abstract: Hildegard Westerkamp's Kits Beach Soundwalk challenges us as listeners to re-evaluate our acoustic soundscape. Juxtaposing the sounds of barnacles with the noise of the city, Westerkamp reveals an unbalanced world in which individual voices are silenced. Kits Beach Soundwalk allows Westerkamp to help rectify that imbalance. It provides her with the opportunity to create a place in which a listener can take pleasure in simply being. She reveals the metaphors, the hidden entrances, within sounds that take us into other spaces. A listener travels with Westerkamp into worlds of tiny sounds and tiny voices, dreams, and places of fantasy and the imagination. She challenges us as listeners to re-establish our place within the world around us. By designing the piece to reach the audience on a number of levels - intellectual, physiological, metaphorical - Westerkamp effectively promotes the changing of listening habits; the distancing of individuals from oppressive sonic environments; and the regaining of an individual's inner voice.

Journal ArticleDOI
Paul Doornbusch
TL;DR: This article looks at mapping from the point of view of algorithmic composition, particularly where persistence is an issue, such that the gesture (conceptual domain) is embodied and perceptible in the musical result.
Abstract: Mapping concerns the connection between gestures, or structures and audible results in a musical performance or composition. While this is of intense interest to performers of new instruments and instrument designers, it has also been an area of interest for some composers. Algorithmic composition is sometimes the process of imagining a gesture or structure - perhaps physical or visual - and then applying a mapping process to turn that ‘gesture’ of the conceptual domain into sound which may display the original conception in some way. This article looks at mapping from the point of view of algorithmic composition, particularly where persistence is an issue, such that the gesture (conceptual domain) is embodied and perceptible in the musical result.
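
A toy example of the kind of mapping the article discusses: a conceptual 'gesture' (here simply a drawn contour) is mapped onto pitches so that its shape persists, and remains perceptible, in the musical result. The contour and pitch range are invented for illustration.

```python
# Toy example: a conceptual 'gesture' (a drawn contour of heights in [0, 1])
# mapped onto MIDI pitches so the original shape stays perceptible.

CONTOUR = [0.0, 0.2, 0.5, 0.9, 0.7, 0.4, 0.1]   # an imagined rising-falling arc

def contour_to_midi(contour, low=48, high=84):
    """Scale the contour onto a pitch range, preserving its shape."""
    return [round(low + c * (high - low)) for c in contour]

if __name__ == "__main__":
    print(contour_to_midi(CONTOUR))   # the arc survives as a melodic arch
```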

Journal ArticleDOI
TL;DR: It is suggested that the word mapping be updated with the more descriptive expression dynamic control processing, and a broadening of the function space offers new aesthetic possibilities for composing instruments.
Abstract: The expression gestural mapping is well imbedded in the language of instrument designers, describing the function from interface control parameters to synthesis control parameters. This function is in most cases implicitly assumed to be instantaneous, so that at any time its output depends only on its input at that time. Here more general functions are considered, in which the output depends on the history of input, especially functions that behave like physical dynamic systems, such as a damped resonator. Acoustic instruments are rich in dynamical behaviour. Introducing dynamics at the control stage of an electronic instrument can help compensate for lack of dynamics in later non-physical synthesis stages. A broadening of the function space offers new aesthetic possibilities for composing instruments. Examples are presented to illustrate the new design/composition mode as well as practical techniques. In this context, it is suggested that the word mapping be updated with the more descriptive expression dynamic control processing.
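
A minimal sketch of dynamic control processing as characterised above, assuming a control stream sampled at a fixed rate: instead of an instantaneous function, the control value drives a damped resonator, so the output overshoots and rings in response to a sudden movement. The coefficients and rates are illustrative assumptions, not taken from the paper.

```python
# Sketch of dynamic control processing: the control value drives a damped
# resonator, so the mapped output depends on the history of the input.

import math

class DampedResonator:
    def __init__(self, freq_hz=2.0, damping=0.3, rate_hz=100.0):
        w = 2.0 * math.pi * freq_hz / rate_hz
        r = math.exp(-damping * w)               # pole radius sets the decay
        self.a1, self.a2 = 2.0 * r * math.cos(w), -r * r
        self.y1 = self.y2 = 0.0

    def __call__(self, x):
        # Unity gain at DC; a step input overshoots, rings, then settles.
        y = (1.0 - self.a1 - self.a2) * x + self.a1 * self.y1 + self.a2 * self.y2
        self.y1, self.y2 = y, self.y1
        return y

if __name__ == "__main__":
    res = DampedResonator()
    step = [0.0] * 10 + [1.0] * 90               # a sudden fader movement
    out = [res(x) for x in step]
    print(round(max(out), 3), round(out[-1], 3)) # overshoot, then close to 1.0
```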

Journal ArticleDOI
TL;DR: The soundscape composition is the journey that circumscribes the relationship, the conversation between composer and sound sources, the confluences of memory, time and place that interest those who compose with soundscapes.
Abstract: The soundscape composition is the journey that circumscribes the relationship, the conversation between composer and sound sources. (Hildegard Westerkamp) Environmental sounds hold an unusual place in our imaginations. On the one hand, they make up the often unnoticed ambiences of our daily lives: they are so much with us and surrounding us that it takes a special effort to bring them into the foreground, and pay attention to them. On the other hand, environmental sounds form a powerful conduit to memory. Hearing a particular sound or ambience can launch a chain of related memories, whether experienced consciously or working subconsciously, that reconnects us with particular places and times in our lives. It is precisely these confluences of memory, time and place that interest those who compose with soundscapes.

Journal ArticleDOI
Kia Ng
TL;DR: Several implementations and multi-disciplinary collaborative projects using the proposed trans-domain mapping framework are reported, including a motion and colour-sensitive system, a sensor-based system for triggering musical events, and a distributed multimedia server for audio mapping of a real-time face tracker.
Abstract: This paper describes a trans-domain mapping (TDM) framework for translating meaningful activities from one creative domain onto another. The multi-disciplinary framework is designed to facilitate an intuitive and non-intrusive interactive multimedia performance interface that offers the users or performers real-time control of multimedia events using their physical movements. It is intended to be a highly dynamic real-time performance tool, sensing and tracking activities and changes, in order to provide interactive multimedia performances. From a straightforward definition of the TDM framework, this paper reports several implementations and multi-disciplinary collaborative projects using the proposed framework, including a motion and colour-sensitive system, a sensor-based system for triggering musical events, and a distributed multimedia server for audio mapping of a real-time face tracker, and discusses different aspects of mapping strategies in their context. Plausible future directions, developments and exploration with the proposed framework, including stage augmentation, virtual and augmented reality, which involve sensing and mapping of physical and non-physical changes onto multimedia control events, are discussed.
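
A schematic sketch of one such trans-domain mapping, not the TDM implementation itself: frame-to-frame motion energy from a camera image is thresholded to trigger a musical event, with the amount of motion mapped onto the event's loudness. The frame representation, threshold and event format are all assumptions made for this example.

```python
# Schematic sketch of one trans-domain mapping: motion energy between two
# grey-scale frames (flat lists of ints) triggers a note event whose loudness
# follows the amount of motion. All names and thresholds are assumptions.

def motion_energy(prev_frame, frame):
    """Mean absolute pixel difference between two frames."""
    return sum(abs(a - b) for a, b in zip(prev_frame, frame)) / len(frame)

def map_motion_to_event(energy, threshold=8.0):
    if energy < threshold:
        return None                              # too little movement: no event
    return {"type": "note_on", "pitch": 60, "velocity": min(127, int(energy))}

if __name__ == "__main__":
    still = [10] * 64
    moving = [10 + (i % 32) for i in range(64)]
    print(map_motion_to_event(motion_energy(still, still)))    # None
    print(map_motion_to_event(motion_energy(still, moving)))   # a triggered event
```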

Journal ArticleDOI
TL;DR: This article collates results from a number of applications of interactive evolution as a sound designer's tool for exploring the parameter spaces of synthesis algorithms, investigating new applications such as reverb design and an analytical stiff string model not previously encountered in the literature.
Abstract: This article collates results from a number of applications of interactive evolution as a sound designer's tool for exploring the parameter spaces of synthesis algorithms. Experiments consider reverberation algorithms, wavetable synthesis, synthesis of percussive sounds and an analytical solution of the stiff string. These projects share the property of being difficult to probe by trial and error sampling of the parameter space. Interactive evolution formed the guidance principle for what quickly proved a more effective search through the multitude of parameter settings. The research was supported by building an interactive genetic algorithm library in the audio programming language SuperCollider. This library provided reusable code for the user interfaces and the underlying genetic algorithm itself, whilst preserving enough generality to support the framework of each individual investigation. Whilst there is nothing new in the use of genetic algorithms in sound synthesis tasks, the experiments conducted here investigate new applications such as reverb design and an analytical stiff string model not previously encountered in the literature. Further, the focus of this work is now shifting more into algorithmic composition research, where the generative algorithms are less clear-cut than those of these experiments. Lessons learned from the deployment of interactive evolution in sound design problems are very useful as a reference for the extension of the problem set.
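
The core loop of interactive evolution can be summarised as below. This is a generic Python sketch, not the SuperCollider library described in the article; in a real interactive system the fitness scores would come from the sound designer auditioning each patch, which is stood in for here by random ratings, and the parameter names are invented.

```python
# Generic sketch of interactive evolution over synthesis parameters (not the
# SuperCollider library described above). Random scores stand in for the
# listener's ratings; parameter names and ranges are invented.

import random

def random_patch():
    return {"cutoff": random.uniform(100, 8000), "decay": random.uniform(0.05, 2.0)}

def mutate(patch, amount=0.15):
    return {k: v * (1.0 + random.uniform(-amount, amount)) for k, v in patch.items()}

def evolve(generations=5, pop_size=8, keep=3):
    population = [random_patch() for _ in range(pop_size)]
    for _ in range(generations):
        # A real interactive GA would audition each patch and ask the user
        # for a rating; here that judgement is faked with random.random().
        ratings = {id(p): random.random() for p in population}
        parents = sorted(population, key=lambda p: ratings[id(p)], reverse=True)[:keep]
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - keep)]
    return population

if __name__ == "__main__":
    for patch in evolve()[:3]:
        print({k: round(v, 2) for k, v in patch.items()})
```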

Journal ArticleDOI
TL;DR: In this paper, the authors describe an "interactivated space" to encompass both the intimate scale of a performer manipulating the materials through an on-body interface, and the larger in-space interface where the work is shared with the performers and audience.
Abstract: In contemporary music and arts practices the previously distinct roles of author, composer and performer have become increasingly conflated, catalysed by the use of computer technology. Newly combined roles of composer and performer that are assumed by one or more people or computer systems are identified and described, as well as actions including preparation, organisation and presentation. In this paper the interface is described as an ‘interactivated space’ to encompass both the intimate scale of a performer manipulating the materials through an on-body interface, and the larger in-space interface where the work is shared with the performers and audience. Two examples of projects the authors are involved in are described, which form the basis for further discussion. The two interfaces that manifest themselves in the processes - the instrument and the score - are discussed in more detail with a focus on their changed appearance and role.

Journal ArticleDOI
TL;DR: The Rhythmicon was one of the first electronic music instruments to use technology to extend performers' musical capacities, anticipating the interactive computer music movement by several decades.
Abstract: In the early 1930s, maverick composer Henry Cowell collaborated with inventor Leon Theremin to build an electronic instrument capable of producing intricate polyrhythms. This instrument, dubbed the Rhythmicon, can be considered a rudimentary example of an interactive music system. Cowell and Theremin created the machine to fulfil a compositional need, but it ultimately failed to become a successful musical instrument. The Rhythmicon was one of the first electronic music instruments to use technology to extend performers' musical capacities, anticipating the interactive computer music movement by several decades. Despite its shortcomings, the Rhythmicon should be remembered as an important step on the road to interactivity.

Journal ArticleDOI
TL;DR: In this article, listener responses to a contemporary soundscape composition based on the sound of a cricket are discussed, and compared with close listening to excerpts of the film soundtrack of Ridley Scott's Alien as well as a short excerpt from the soundtrack of the X Files, discussing how science fiction film and television soundtracks index sonic intimacy with different intent from that of Westerkamp.
Abstract: This paper discusses listener responses to a contemporary soundscape composition based on the sound of a cricket. Soundscape composers make works based on everyday sounds and sound environments, usually recorded by themselves (Truax 1984, 1996). While the composer of this piece aims to bring listeners closer to the sounds around them by creating audio pieces based on these sounds (Westerkamp 1988), some listeners feel fear and anxiety rather than the heightened closeness and understanding that she wishes listeners to experience. I compare the sound structure of Cricket Voice with close listening to excerpts of the film soundtrack of Ridley Scott's Alien as well as a short excerpt from the soundtrack of the X Files, discussing how science fiction film and television soundtracks index sonic intimacy with different intent from that of Westerkamp, and raising questions about how such approaches to intimacy might simultaneously reflect and intensify urban anxieties about the sounds of ‘alien’ species that are associated with wilderness environments.

Journal ArticleDOI
TL;DR: In this paper, a new rationale for compositional pedagogy for developing musicianship is presented, which is essential for preparing contemporary musicians and audiences to understand most properly what music "is" and "is good for".
Abstract: Composing has been slighted at all levels of education. Following an analysis of the history and failure of compositional pedagogy for developing musicianship, a new rationale for such pedagogy is presented. This pedagogy is argued to be essential for preparing contemporary musicians and audiences to understand most properly what music ‘is’ and ‘is good for’, and for promoting ever-new conceptions of ‘music’ and of its evolving values. In addition to advancing general musicianship in relation to the standard repertory, the special contribution of pedagogy rooted in composing organised sound pieces is outlined in relation to a new praxial philosophy of music that is challenging the limited and limiting theory of music and its value provided by traditional aesthetic theory. The latter is seen to be a major impediment to new compositional modes that expand musical frontiers, while the praxial theory supports, as well as gains support from, various new attempts to organise sound for expressive and other purposes.

Journal ArticleDOI
TL;DR: An analytical overview of the currently available software technologies designed to assist in creation, dissemination, and most importantly performance of interactive electroacoustic art is offered, grouping the software into two basic groups based on their interfaces.
Abstract: The following article offers an analytical overview of the currently available software technologies designed to assist in creation, dissemination, and most importantly performance of interactive electroacoustic art. By grouping the software into two basic groups based on their interfaces, my aim is to provide a comprehensive list of the two groups' strengths and shortcomings, thereby exposing common issues that arise whenever a composer utilises such software interfaces in performance settings. Finally, as a step towards solving a number of these problems, the author presents RTMix, his own software creation that has been designed primarily as a standardised interface for the purpose of easier production, performance and dissemination of the interactive electroacoustic artwork.

Journal ArticleDOI
TL;DR: This text intends to introduce a discussion about the many possibilities offered by the morphology of interaction between acoustic sources and electroacoustic resources and structures to identify intermediate types between the extremes of pure fusion and pure contrast.
Abstract: In mixed electroacoustic music it is common to find the erroneous conception according to which interaction should base itself exclusively on the fusion between instrumental writing and electronic devices, whereas the contrast between these sound spheres is as significant as the fusional states. Although fusion may be seen as the most important ingredient for an efficacious compositional strategy concerning interaction, it is actually through contrast that the identities of spectral transfers in mixed composition can be evaluated by the listener. This text intends to introduce a discussion about the many possibilities offered by the morphology of interaction between acoustic sources and electroacoustic resources and structures. In this sense it tries to identify intermediate types between the extremes of pure fusion and pure contrast, which can be established by the composer that sees in interactive music one of the most advantageous poetic realms of electroacoustic music.

Journal ArticleDOI
Tony Myatt
TL;DR: The interactive strategies adopted during the composition and realisation of construction 3 for soprano saxophone and multiple media, by Tony Myatt and Peter Fluck, are described.
Abstract: This paper describes the interactive strategies adopted during the composition and realisation of construction 3 for soprano saxophone and multiple media, by Tony Myatt and Peter Fluck. This work is an interactive multiple media composition for computer, computer-enhanced saxophone and computer graphics projected onto irregularly shaped screens. Derivation of the performer sensing system and mapping strategies to control signal processing, computer generated materials and communication gestures between the performer and computer are described. These processes include generative mapping techniques using neural networks for the recognition of gestural information and the development and application of wireless, wearable sensing technology. This work is described as a holistic approach to the derivation of performer sensing, data mapping and the application of reactive and interactive processes in the context of the creation of a new musical and multiple media composition.

Journal ArticleDOI
TL;DR: A part of Modalys-ER has now been ported to OpenMusic, providing a platform for developing more sophisticated automation and control systems that can be specified through OpenMusic's visual programming interface.
Abstract: Modalys-ER is a graphical environment for creating physical model instruments and generating musical sounds with them. While Modalys-ER provides users with a relatively simple-to-use interface, it has only limited methods for mapping control data onto model parameters for performance. While these are sufficient for many interesting applications, they do not bridge the gap from high-level specifications such as MIDI files or Standard Western Notation (SWN) down to low-level parameters within the physical model. With this issue in mind, a part of Modalys-ER has now been ported to OpenMusic, providing a platform for developing more sophisticated automation and control systems that can be specified through OpenMusic's visual programming interface. An overview of the MfOM library is presented and illustrated with several musical examples using some early mapping designs. Also, some of the issues relating to building and controlling virtual instruments are discussed and future directions for research in this area are suggested. The first release is now available via the IRCAM Software Forum.
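
As one small illustration of the high-level-to-low-level gap mentioned above, the sketch below turns a MIDI note into parameters of an idealised string via the standard relation f = sqrt(T/mu) / (2L). It is not the MfOM library; the tension and density values are arbitrary.

```python
# Illustration of the high-level-to-low-level gap: a MIDI note is turned into
# parameters of an idealised string using f = sqrt(T/mu) / (2L). This is not
# the MfOM library; tension and linear density values are arbitrary.

import math

def midi_to_freq(note):
    return 440.0 * 2.0 ** ((note - 69) / 12.0)

def string_params_for_note(note, tension=60.0, linear_density=0.0012):
    """Pick a string length (in metres) so the ideal string sounds the note."""
    f = midi_to_freq(note)
    length = math.sqrt(tension / linear_density) / (2.0 * f)
    return {"frequency_hz": round(f, 2), "length_m": round(length, 4),
            "tension_n": tension, "linear_density_kg_m": linear_density}

if __name__ == "__main__":
    print(string_params_for_note(69))   # A4 = 440 Hz
```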

Journal ArticleDOI
Nick Fells
TL;DR: The author's approach to investigating this issue through composition is discussed, using two pieces as examples: Words on the streets are these (2001), an interactive installation, and Still Life (2002), a concert piece for string quartet and live electroacoustics.
Abstract: This article discusses space and interactivity in relation to some of the author's compositions for performers and computer. It outlines approaches to musical space relevant to those works arising from the realms of acousmatic, instrumental/acoustic and soundscape composition, and explores the conflict arising when a variety of listening practices are employed by the spectator-listener. This conflict challenges the perceived boundaries that differentiate genres, and also challenges the significance of existing ‘instrumental’ definitions of interactive computer music to the author's work. The author's approach to investigating this issue through composition is discussed, using two pieces as examples: Words on the streets are these (2001), an interactive installation, and Still Life (2002), a concert piece for string quartet and live electroacoustics. Technical and aesthetic aspects are outlined, specifically in relation to the experience of the spectator-listener. The overall aim is to emphasise the importance of considering spatial issues in composing interactive music, and to examine how the interplay of spatial concepts might be explored in practice.

Journal ArticleDOI
TL;DR: The methods by which the sounds were collected in the field and compiled, how these sounds were edited in the studio, and how the entire piece was assembled are discussed.
Abstract: Zagreb Everywhere (2001), a video portrayal of the city of Zagreb, Croatia, is the result of an international collaboration between writer Gordana Crnkovic (Croatia), video artist Victor Ingrassia (US), and composer David Hahn (US), who collected sounds from Zagreb and together with some of his own music created the soundscape for the piece. Opposing stereotypes about the ‘barbaric Balkans’ often reinforced by Western media during the recent war in that region, Zagreb Everywhere provides a unique view of Zagreb and its inimitable features, while at the same time showing the experiences of that city and its people as having a broad human appeal and resonance. The piece exists in two formats: (i) a stand-alone video work, and (ii) a multimedia performance piece with projected visual images, sound, and live narration. After a brief recounting of the genesis of the idea for Zagreb Everywhere and the main aesthetic aspects of the project, this paper discusses the methods by which the sounds were collected in the field and compiled, how these sounds were edited in the studio, and how the entire piece was assembled. Zagreb Everywhere premiered in May 2001 at the University of Washington in Seattle.