Journal ISSN: 1355-7718

Organised Sound 

Cambridge University Press
About: Organised Sound is an academic journal published by Cambridge University Press. The journal publishes mainly in the areas of electroacoustic music and sound art. It has the ISSN identifier 1355-7718. Over its lifetime, 838 publications have appeared, receiving 10,565 citations.


Papers
Journal Article (DOI)
TL;DR: Electroacoustic music opens access to all sounds, a bewildering sonic array ranging from the real to the surreal and beyond, where the familiar articulations of instruments and vocal utterance are gone, along with the stability of note and interval.
Abstract: The art of music is no longer limited to the sounding models of instruments and voices. Electroacoustic music opens access to all sounds, a bewildering sonic array ranging from the real to the surreal and beyond. For listeners the traditional links with physical sound-making are frequently ruptured: electroacoustic sound-shapes and qualities frequently do not indicate known sources and causes. Gone are the familiar articulations of instruments and vocal utterance: gone is the stability of note and interval: gone too is the reference of beat and metre. Composers also have problems: how to cut an aesthetic path and discover a stability in a wide-open sound world, how to develop appropriate sound-making methods, how to select technologies and software.

524 citations

Journal Article (DOI)
TL;DR: This paper describes MARSYAS, a framework for experimenting, evaluating and integrating techniques for audio content analysis in restricted domains and a new method for temporal segmentation based on audio texture that is combined with audio analysis techniques and used for hierarchical browsing, classification and annotation of audio files.
Abstract: Existing audio tools handle the increasing amount of computer audio data inadequately. The typical tape-recorder paradigm for audio interfaces is inflexible and time consuming, especially for large data sets. On the other hand, completely automatic audio analysis and annotation is impossible using current techniques. Alternative solutions are semi-automatic user interfaces that let users interact with sound in flexible ways based on content. This approach offers significant advantages over manual browsing, annotation and retrieval. Furthermore, it can be implemented using existing techniques for audio content analysis in restricted domains. This paper describes MARSYAS, a framework for experimenting, evaluating and integrating such techniques. As a test for the architecture, some recently proposed techniques have been implemented and tested. In addition, a new method for temporal segmentation based on audio texture is described. This method is combined with audio analysis techniques and used for hierarchical browsing, classification and annotation of audio files.

444 citations

Journal Article (DOI)
TL;DR: This paper presents an introduction to the field of live coding, of real-time scripting during laptop music performance, and the improvisatory power and risks involved, and looks at two test cases, the command-line music of slub utilising Perl and REALbasic, and Julian Rohrhuber's Just In Time library for SuperCollider.
Abstract: Seeking new forms of expression in computer music, a small number of laptop composers are braving the challenges of coding music on the fly. Not content to submit meekly to the rigid interfaces of performance software like Ableton Live or Reason, they work with programming languages, building their own custom software, tweaking or writing the programs themselves as they perform. Often this activity takes place within some established language for computer music like SuperCollider, but there is no reason to stop errant minds pursuing their innovations in general scripting languages like Perl. This paper presents an introduction to the field of live coding, of real-time scripting during laptop music performance, and the improvisatory power and risks involved. We look at two test cases, the command-line music of slub utilising, amongst a grab-bag of technologies, Perl and REALbasic, and Julian Rohrhuber's Just In Time library for SuperCollider. We try to give a flavour of an exciting but hazardous world at the forefront of live laptop performance.

254 citations

Journal Article (DOI)
TL;DR: A personal experience of soundscape listening is the starting point; it uncovers basic ideas about the disposition and behaviour of sounding content and about listening strategy, leading to concepts central to the structuring of perspectival space in relation to the listener's vantage point.
Abstract: The analytical discussion of acousmatic music can benefit from being based on spatial concepts, and this article aims to provide a framework for investigation. A personal experience of soundscape listening is the starting point, and uncovers basic ideas relating to the disposition and behaviour of sounding content, and listening strategy. This enables the opening out of the discussion to include source-bonded sounds in general, giving particular consideration to how experience of sense modes other than the aural are implicated in our understanding of space, and in acousmatic listening. Attention then shifts to a source-bonded spatial model based on the production of space by the gestural activity of music performance, prior to focusing in more detail on acousmatic music, initially by delving into spectral space, where ideas about gravitation and diagonal forces are germane. This leads to concepts central to the structuring of perspectival space in relation to the vantage point of the listener. The final section considers a methodology for space-form investigation.

212 citations

Journal Article (DOI)
TL;DR: The Open Sound Control protocol is described, followed by some theoretical limits on communication latency and what they mean for music making, and a representative list of some of the projects that take advantage of the protocol is presented.
Abstract: Since telecommunication can never equal the richness of face-to-face interaction on its own terms, the most interesting examples of networked music go beyond the paradigm of musicians playing together in a virtual room. The Open Sound Control protocol has facilitated dozens of such innovative networked music projects. First the protocol itself is described, followed by some theoretical limits on communication latency and what they mean for music making. Then a representative list of some of the projects that take advantage of the protocol is presented, describing each project in terms of the paradigm of musical interaction that it provides.
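The abstract above centres on the Open Sound Control (OSC) protocol as the substrate for networked music. As a rough illustration of the wire format OSC 1.0 standardises, here is a minimal sketch of message encoding in Python: the address pattern and argument values are made up for illustration, and real projects would normally use an OSC library rather than hand-packing bytes.

```python
import struct

def osc_string(s: str) -> bytes:
    """Encode an OSC string: ASCII bytes, null-terminated,
    padded with nulls to a multiple of 4 bytes."""
    b = s.encode("ascii") + b"\x00"
    return b + b"\x00" * ((4 - len(b) % 4) % 4)

def osc_message(address: str, *args) -> bytes:
    """Build an OSC message: padded address pattern, then a
    type-tag string (starting with ','), then big-endian arguments."""
    tags = ","
    payload = b""
    for a in args:
        if isinstance(a, bool):
            raise TypeError("plain OSC 1.0 has no bool type tag in this sketch")
        if isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)   # 32-bit big-endian int
        elif isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)   # 32-bit big-endian float
        elif isinstance(a, str):
            tags += "s"
            payload += osc_string(a)
        else:
            raise TypeError(f"unsupported OSC argument type: {type(a)}")
    return osc_string(address) + osc_string(tags) + payload

# Hypothetical message setting a synth frequency to 440 Hz:
msg = osc_message("/synth/freq", 440.0)
```

Such a message would typically be sent as a single UDP datagram, e.g. `socket.sendto(msg, ("127.0.0.1", 57110))`; the fixed 4-byte alignment is what keeps parsing cheap enough for the low-latency use the paper discusses.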

205 citations

Performance
Metrics
No. of papers from the Journal in previous years
Year  Papers
2023  15
2022  117
2021  30
2020  35
2019  31
2018  18