
Showing papers presented at "International Conference on Auditory Display in 1998"


Proceedings ArticleDOI
01 Nov 1998
TL;DR: This paper describes an experiment to investigate the effectiveness of adding sound to progress bars, which showed a significant reduction in the time taken to perform the task in the audio condition.
Abstract: This paper describes an experiment to investigate the effectiveness of adding sound to progress bars. Progress bars have usability problems because they present temporal information graphically: if users want to keep abreast of this information, they must constantly scan the progress bar visually. The addition of sounds to a progress bar allows users to monitor the state of the progress bar without using their visual focus. Nonspeech sounds called earcons were used to indicate the current state of the task as well as the completion of the download. Results showed a significant reduction in the time taken to perform the task in the audio condition. The participants were aware of the state of the progress bar without having to remove the visual focus from their foreground task.
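The kind of state-to-sound parameter mapping this abstract describes can be sketched in a few lines. The function below is an illustrative sketch only, not the earcon design used in the paper: the function name, the base frequency, and the one-octave-rise mapping are all assumptions.

```python
def progress_pitch(fraction, base_hz=220.0, octaves=1.0):
    """Map task progress in [0, 1] to a tone frequency that rises
    smoothly by `octaves` octaves over the course of the task."""
    fraction = min(max(fraction, 0.0), 1.0)  # clamp out-of-range input
    return base_hz * 2.0 ** (octaves * fraction)
```

A listener tracking such a tone can judge how far a download has progressed, and recognise completion by the arrival at the top pitch, without looking at the bar.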

64 citations


Proceedings Article
01 Nov 1998
TL;DR: It was found that with interactive multiple-stream audio, the ten users could accurately complete the browsing tasks significantly faster than those who had single-stream audio support.
Abstract: The effectiveness of providing multiple-stream audio to support browsing on a computer was investigated through the iterative development and evaluation of a series of sonic browser prototypes. The data set used was a database containing music. Interactive sonification was provided in conjunction with simplified human-computer interaction sequences. It was investigated to what extent interactive sonification with multiple-stream audio could enhance browsing tasks, compared to interactive sonification with single-stream audio support. With ten users it was found that with interactive multiple-stream audio the users could accurately complete the browsing tasks significantly faster than those who had single-stream audio support.

48 citations


Proceedings ArticleDOI
01 Nov 1998
TL;DR: An experiment to investigate if the addition of non-speech sounds to the drag and drop operation would increase usability showed that subjective workload was significantly reduced, and overall preference significantly increased, without sonically-enhanced drag and drop being more annoying to use.
Abstract: This paper describes an experiment to investigate if the addition of non-speech sounds to the drag and drop operation would increase usability. There are several problems with drag and drop that can result in the user not dropping a source icon over the target correctly. These occur because the source can visually obscure the target making it hard to see if the target is highlighted. Structured non-speech sounds called earcons were added to indicate when the source was over the target, when it had been dropped on the target and when it had not. Results from the experiment showed that subjective workload was significantly reduced, and overall preference significantly increased, without sonically-enhanced drag and drop being more annoying to use. Results also showed that time taken to do drag and drop was significantly reduced. Therefore, sonic-enhancement can significantly improve the usability of drag and drop.

35 citations


Proceedings ArticleDOI
01 Nov 1998
TL;DR: Results show that the specific meaning of musical motives can be used to provide ways to navigate in a hierarchical structure such as telephone-based interface menus, with non-speech audio.
Abstract: This paper describes an experiment that investigates new principles for representing hierarchical menus such as telephone-based interface menus, with non-speech audio. A hierarchy of 25 nodes with a sound for each node was used. The sounds were designed to test the efficiency of using specific features of a musical language to provide navigation cues. Participants (half musicians and half non-musicians) were asked to identify the position of the sounds in the hierarchy. The overall recall rate of 86% suggests that syntactic features of a musical language of representation can be used as meaningful navigation cues. More generally, these results show that the specific meaning of musical motives can be used to provide ways to navigate in a hierarchical structure such as telephone-based interface menus.

30 citations


Proceedings Article
01 Nov 1998
TL;DR: The VBAP method is reviewed and a new automatic loudspeaker setup division routine is presented, used to create an auditory display for the Digital Interactive Virtual Acoustics (DIVA) virtual environment system.
Abstract: Auditory displays containing an arbitrary number of loudspeakers in arbitrary positions are created. The sound signals are positioned on the display using Vector Base Amplitude Panning (VBAP). The VBAP method is reviewed and a new automatic loudspeaker setup division routine is presented. VBAP is used to create an auditory display for the Digital Interactive Virtual Acoustics (DIVA) virtual environment system.
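The core of VBAP is expressing the source direction as a linear combination of loudspeaker direction vectors and using the combination weights as amplitude gains. A minimal 2-D pairwise sketch under stated assumptions (the function name and the constant-power normalization convention are our own; the paper's method also handles 3-D loudspeaker triplets and the automatic division of a setup into such triplets):

```python
import math

def vbap_2d(source_az, spk1_az, spk2_az):
    """Pairwise 2-D amplitude panning: express the source direction as a
    linear combination of the two loudspeaker direction vectors, then
    normalize the gains for constant power (g1^2 + g2^2 = 1).
    All azimuths are in degrees."""
    def unit(az_deg):
        r = math.radians(az_deg)
        return (math.cos(r), math.sin(r))
    l1, l2 = unit(spk1_az), unit(spk2_az)
    p = unit(source_az)
    # Solve p = g1*l1 + g2*l2 with Cramer's rule (2x2 system).
    det = l1[0] * l2[1] - l1[1] * l2[0]
    g1 = (p[0] * l2[1] - p[1] * l2[0]) / det
    g2 = (l1[0] * p[1] - l1[1] * p[0]) / det
    norm = math.hypot(g1, g2)
    return g1 / norm, g2 / norm
```

A source midway between two symmetric loudspeakers receives equal gains; a source at a loudspeaker's own azimuth receives all the signal, which is why the panned image stays stable as it crosses loudspeaker positions.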

30 citations


Proceedings ArticleDOI
01 Nov 1998
TL;DR: The goal is to develop an augmented Internet browser to facilitate blind users' access to the World Wide Web and to provide sight-handicapped people with alternative access modalities to pictorial documents.
Abstract: The Internet now permits easy access to textual and pictorial material from an exponentially growing number of sources. The widespread use of graphical user interfaces, however, increasingly bars visually handicapped people from using such material. In this context, our project aims at providing sight-handicapped people with alternative access modalities to pictorial documents. More precisely, our goal is to develop an augmented Internet browser to facilitate blind users' access to the World Wide Web. The main distinguishing characteristics of this browser are: (1) generation of a virtual sound space into which the screen information is mapped; (2) transcription into sounds not only of text, but also of images; (3) active user interaction, both for the macro-analysis and micro-analysis of screen objects of interest; (4) use of a touch-sensitive screen to facilitate user interaction. Several prototypes have been implemented and are being evaluated by blind users.

29 citations


Proceedings ArticleDOI
01 Nov 1998
TL;DR: A sound retrieval method described in this paper enables users to easily obtain their desired sound and adopts three keyword types, onomatopoeia, sound source, and adjective.
Abstract: A sound retrieval method described in this paper enables users to easily obtain their desired sound. A sound representation experiment was conducted to study how people represent the sounds. Almost all representations were made with verbal descriptions that could be classified into "description of sound itself", "description of sounding situation" and "description of sound impression". The retrieval method, which adopts three keyword types, onomatopoeia, sound source, and adjective, was proposed based on the experimental results. The sound retrieval system's efficiency was discussed based on the subjective evaluation. Users can select a convenient retrieval method and adapt it to their idea of retrieval.

27 citations


Proceedings ArticleDOI
01 Nov 1998
TL;DR: A case study derived from a one-year, activity-theory-oriented ethnographic study of information gathering work at a UK daily newspaper is presented, considering the soundscape aspects of mediating collaborative activity in the newsroom.
Abstract: This paper identifies a gap in the research agenda of the auditory display community - the study of work practice and the uses (current and potential) of the workplace 'soundscape'. The paper presents a case study derived from a one-year, activity-theory-oriented ethnographic study of information gathering work at a UK daily newspaper. We consider the soundscape aspects of mediating collaborative activity in the newsroom, and conclude with a discussion of the issues arising from this attempt to utilise ethnographic techniques within the auditory display design domain.

25 citations


Proceedings ArticleDOI
01 Nov 1998
TL;DR: LiteFoot is an interactive floor space that tracks dancers' steps and converts the steps into auditory and visual display, and can also record steps for later analysis, for use in dance research programmes, choreographic experimentation and training.
Abstract: This paper describes the development of LiteFoot, an interactive floor space that tracks dancers' steps and converts the steps into auditory and visual display. The system can also record steps for later analysis, for use in dance research programmes, choreographic experimentation and training.

25 citations


Proceedings ArticleDOI
01 Nov 1998
TL;DR: Two studies investigated the use of non-speech sounds in non-visual interfaces to MS-Windows for blind computer users and found that task completion time was significantly shorter with the inclusion of sounds, although interesting effects on user perceptions were found.
Abstract: Two studies investigated the use of non-speech sounds (auditory icons and earcons) in non-visual interfaces to MS-Windows for blind computer users. The first study presented sounds in isolation and blind and sighted participants rated them for their recognisability, and appropriateness of the mapping between the sound and the interface object/event. As a result, the sounds were revised and incorporated into the interfaces. The second study investigated the effects of the sounds on user performance and perceptions. Ten blind participants evaluated the interfaces, and task completion time was significantly shorter with the inclusion of sounds, although interesting effects on user perceptions were found.

22 citations


Proceedings Article
01 Nov 1998
TL;DR: An exploratory experiment investigating access to non-seen diagrams with a view to presenting such diagrams through an auditory interface showed that participants could understand and internalise the simpler diagrams, though not with complete success, but faltered on the more complex diagram.
Abstract: This paper describes an exploratory experiment investigating access to non-seen diagrams with a view to presenting such diagrams through an auditory interface. Sighted individuals asked questions of a human experimenter about diagrams they could not see, in order to learn about them. The dialogue was recorded and analysed. The analysis resulted in an insight into the strategies used by the participants and a handle on the information requirements of the participants. Results showed that participants could understand and internalise the simpler diagrams, though not with complete success, but faltered on the more complex diagram. Several strategies and points for further investigation emerged.

Proceedings Article
01 Nov 1998
TL;DR: In this article, the authors propose a methodology consisting of three interrelated specification levels: conceptual level, structural level, and physical dimensions of sound level, for the specification of a simple listbox widget.
Abstract: When the visual channel of communication is unavailable because the user is blind, non-visual user interfaces must be developed. The proposed methodology consists of three interrelated specification levels. Information and supported tasks are specified in abstract terms at the conceptual level, taking into account requirements imposed by manipulation of interaction devices and information provided by analysis of the visual representation. The perceptual structure of the auditory scene is specified next at the structural level and then the physical dimensions of sound are defined at the implementation level. The methodology is applied to the specification of a simple listbox widget.

Proceedings ArticleDOI
01 Nov 1998
TL;DR: From the lessons learnt from the work, a set of organising principles for the design and construction of musically-based program auralisations are proposed aimed towards providing accessible auralisation to the average programmer who has no formal musical training.
Abstract: Early studies have shown that musical program auralisations can convey structural and run-time information about Turbo Pascal programs to listeners [3, 4, 10]. Auralisations were effected by mapping program events and structures to musical signature tunes, known as motifs. The design of the motifs was based around the taxonomical nature of the Turbo Pascal language constructs [3]. However, it became clear that as the musical complexity and grammatical rigour of the motifs increased, their discernibility by the average user decreased. Therefore, from the lessons learnt from our work we propose a set of organising principles for the design and construction of musically-based program auralisations. These organising principles are aimed towards providing accessible auralisations to the average programmer who has no formal musical training.

Proceedings ArticleDOI
01 Nov 1998
TL;DR: Ad-hoc synthesis is introduced, an approach to designing auditory icons and synthesis algorithms that emphasizes the perception of the sounds by users instead of the analysis of actual sources and sounds, and it is shown how an auditory illusion, a sound that does not exist in the real world, can be used to convey the notion of speed in a natural and non-intrusive way.
Abstract: This article introduces ad-hoc synthesis, an approach to designing auditory icons and synthesis algorithms that emphasizes the perception of the sounds by users instead of the analysis of actual sources and sounds. We describe two subtractive synthesis algorithms for generating and controlling wind and wave sounds in real time by means of high-level parameters. Even though these sounds are not audio-realistic, they convey information in a non-intrusive way and are therefore suitable for monitoring background activities. These sounds capture the main invariants of the sounds they imitate, enabling users to recognize and understand them easily. We then push the approach further by showing how an auditory illusion, i.e. a sound that does not exist in the real world, can be used to convey the notion of speed in a natural and non-intrusive way.

Proceedings ArticleDOI
01 Nov 1998
TL;DR: The CyberStage is GMD's CAVE-like audio-visual projection system integrating a 4-side visual stereo display and an 8-channel spatial auditory display and software-based sound server for the generation of auditory cues for interactive virtual environments has been developed.
Abstract: The CyberStage is GMD's CAVE-like audio-visual projection system integrating a 4-side visual stereo display and an 8-channel spatial auditory display. A software-based sound server for the generation of auditory cues for interactive virtual environments has been developed for this display system in the context of a research project on integrated simulation of image and sound (ISIS). Hardware and software components of the auditory display and their integration in the CyberStage application development process are described. Four applications from different areas are discussed as examples.

Proceedings ArticleDOI
01 Nov 1998
TL;DR: This paper presents tactical audio as support for precise manual positioning of a surgical instrument, and introduces acoustic rendering as an additional information channel and/or warning signal in EEG analysis.
Abstract: Biomedical procedures of long duration cause mental fatigue and attention deficit. We investigated using sound as a means to support sustained attention during prolonged procedures and analysis. In this paper we present tactical audio as support for precise manual positioning of a surgical instrument, and introduce acoustic rendering as an additional information channel and/or warning signal in EEG analysis.

Proceedings ArticleDOI
01 Nov 1998
TL;DR: The transfer of Hypermedia features to audio in an audio-visual environment is discussed, introducing sonic hyperlinks, which are links annotated using sound within an audio stream that lead to arbitrary multimedia content.
Abstract: The transfer of Hypermedia features to audio in an audio-visual environment is discussed, introducing sonic hyperlinks. Sonic hyperlinks are links annotated using sound within an audio stream that lead to arbitrary multimedia content. As an example application, sonic hyperlinks have been integrated in interactive Web-TV which is broadcast via the Internet. A system architecture and implementation relying on commercial WWW technology like RealMedia is presented. The system includes an authoring tool, as well as the necessary presentation plugin for an Internet browser.

Proceedings ArticleDOI
01 Nov 1998
TL;DR: It is proposed that the principles of information design, sound design and music must be considered equally with those of acoustics and psychoacoustics when designing audio information and feedback systems.
Abstract: Beginning with a detailed presentation of the use of audible signals, in the New York City subway stations and trains, we present an analysis of the information that is communicated by the existing sound design. We show the results of a survey of subway riders regarding their awareness and comprehension of the audible signals in the system. An analysis of the system's sound and information environment is presented, followed by a proposed sonic re-design to better serve both communication and aesthetic needs. We conclude by proposing that the principles of information design, sound design and music must be considered equally with those of acoustics and psychoacoustics when designing audio information and feedback systems.

Proceedings ArticleDOI
01 Nov 1998
TL;DR: The modality appropriateness hypothesis that originated from experiments in perception is tested for human computer interaction situations and it is concluded that users do not always benefit from having information in more than one modality.
Abstract: In this study, the modality appropriateness hypothesis that originated from experiments in perception is tested for human computer interaction situations. In multimodal information processing users need to integrate the data coming from various sources into one message. In a visual and auditory categorisation task with accessory stimuli in the other modality, containing a mood, it was shown that in tasks where choices need to be made based on the meaning of the stimuli, the visual modality seems more appropriate. From the results it can be concluded that users do not always benefit from having information in more than one modality.

Proceedings ArticleDOI
01 Nov 1998
TL;DR: Two related pedagogies for design education are described in which the symbols and organizational principles of aural experience are transformed to the visual domain and vice versa to develop in the student the ability to visualize and auralize from direct observation as an alternative to copying and modifying existing designs.
Abstract: This paper describes two related pedagogies for design education - one for teaching visual design and the other for teaching sound composition - in which the symbols and organizational principles of aural experience are transformed to the visual domain and vice versa. The purpose of the pedagogies is to develop in the student the ability to visualize and auralize from direct observation as an alternative to copying and modifying existing designs. The paper begins with a discussion of visual and aural perception and their relationship to thinking and imagination. The author then describes the use of semiotic transformation in teaching courses in sound composition and visual design.

Proceedings ArticleDOI
01 Nov 1998
TL;DR: The relation of spherical display systems to conventional systems in terms of spatial audio and sound-field reconstruction is discussed, with the conclusion that most conventional techniques can be used for spherical display Systems as well.
Abstract: This paper describes multi-speaker display systems for immersive auditory environments, collaborative projects, realistic acoustic modeling, and live musical performance. Two projects are described: the sound sub-system of the Princeton Display Wall project, and the NBody musical instrument body radiation response project. The Display Wall is an 18' × 8' rear-projection screen, illuminated by 8 high-resolution video projectors. Each projector is driven by a 4-way symmetric-multi-processor PC. The audio sub-system of this project involves 26 loudspeakers and server PCs to drive the speakers in real time from soundfile playback, audio effects applied to incoming audio streams, and parametric sound synthesis. The NBody project involves collecting and using directional impulse responses from a variety of stringed musical instruments. Various signal processing techniques were used to investigate, factor, store, and implement the collected impulse responses. A software workbench was created which allows virtual microphones to be placed around a virtual instrument, and then allows signals to be processed through the resulting derived transfer functions. Multi-speaker display devices and software programs were constructed which allow real-time application of the filter functions to arbitrary sound sources. This paper also discusses the relation of spherical display systems to conventional systems in terms of spatial audio and sound-field reconstruction, with the conclusion that most conventional techniques can be used for spherical display systems as well.

Proceedings Article
01 Nov 1998
TL;DR: In this article, a general methodological framework for evaluating the perceptual properties of auditory stimuli is described, which can ensure the effective use of sound for a variety of applications including virtual reality and data sonification systems.
Abstract: This paper describes a general methodological framework for evaluating the perceptual properties of auditory stimuli. The framework provides analysis techniques that can ensure the effective use of sound for a variety of applications including virtual reality and data sonification systems. Specifically, we discuss data collection techniques for the perceptual qualities of single auditory stimuli including identification tasks, context-based ratings, and attribute ratings. In addition, we present methods for comparing auditory stimuli, such as discrimination tasks, similarity ratings, and sorting tasks. Finally, we discuss statistical techniques that focus on the perceptual relations among stimuli, such as Multidimensional Scaling (MDS) and Pathfinder Analysis. These methods are presented as a starting point for an organized and systematic approach for non-experts in perceptual experimental methods, rather than as a complete manual for performing the statistical techniques and data collection methods. It is our hope that this paper will help foster further interdisciplinary collaboration among perceptual researchers, designers, engineers, and others in the development of effective auditory displays.

Proceedings ArticleDOI
01 Nov 1998
TL;DR: This research considers performing arts theory with an emphasis on sound, validates some of this theory in the form of a series of interactive multimedia exercises, and describes commentary from performing arts professionals who discuss practical and theoretical issues in sound design from an experienced perspective.
Abstract: Sound is underutilized in software and on the web, in spite of its obvious value to other media, such as film. Many practitioners in computer-based design, particularly those with backgrounds in programming and print design, are simply unfamiliar with the medium of sound. The performing arts has a long history of creating sound which makes a powerful impression on human perception and emotion, and has accumulated a rich body of theories and practical insights for how this is done. These theories and insights should be explored for their usefulness in improving sound design in software. The purpose of the research discussed in this paper has been to learn from the principles and practices of sound design in the performing arts, and to discuss and demonstrate ways in which some of these ideas might be helpful to designers of computer-based media and software. This research considers performing arts theory with an emphasis on sound, validates some of this theory in the form of a series of interactive multimedia exercises, and describes commentary from performing arts professionals who discuss practical and theoretical issues in sound design from an experienced perspective. Games tend to make better use of sound than other computer-based products and, because of their narrative qualities, occupy a place in design somewhere between traditional performing arts and software. For these reasons, games have lessons to offer to other areas of computer-based design in terms of sound use, and some analysis of game design is included here, as well.

Proceedings ArticleDOI
01 Nov 1998
TL;DR: Findings indicate percent correct performance was about 40% lower with the traditional diotic presentation compared to a virtual presentation, and performance in the virtual reverberant environment was about 5% lower than in the virtual anechoic environment.
Abstract: The purpose of this investigation was to examine the effects of presentation mode on speech intelligibility in adverse listening conditions as signal-to-noise ratio was systematically varied in anechoic and reverberant environments. Speech intelligibility scores were obtained from 21 normally hearing listeners using a nonsense syllable test. The syllables were recorded in three environments (mono anechoic, spatial anechoic and spatial reverberant) at three SNRs (0, 5, and 9 dB) using two simultaneous interfering sound sources. The findings indicate (a) percent correct performance was about 40% lower with the traditional diotic presentation compared to a virtual presentation; (b) performance in the virtual reverberant environment was about 5% lower than in the virtual anechoic environment.

Proceedings ArticleDOI
Kristen Wegner1
01 Nov 1998
TL;DR: In this paper, an experimental audio feedback system and method for positional guidance in real-time surgical instrument placement tasks is discussed, which is intended for future usability testing in order to ascertain the efficacy of the use of the aural modality for assisting surgical placement tasks in the operating room.
Abstract: We discuss an experimental audio feedback system and method for positional guidance in real-time surgical instrument placement tasks. This system is intended for future usability testing in order to ascertain the efficacy of the use of the aural modality for assisting surgical placement tasks in the operating room. The method is based on translating spatial parameters of a surgical instrument or device, such as its position or velocity with respect to some coordinate system, into a set of audio feedback parameters along the coordinates of a generalised audio space. Error signals that correspond to deviations of the actual instrument trajectory from an optimal trajectory are transformed into a set of audio signals that indicate to the user whether correction is necessary. An experimental hardware platform was assembled using commercially available hardware. A system for 3-D modelling, surgical procedure planning, real-time instrument tracking and audio generation was developed. Prototype software algorithms for generating audio feedback as a function of instrument navigation were designed and implemented. The system is sufficient for future usability testing. This technology is still in an early stage of development, with formal usability and performance testing yet to be done. However, informal usability experiments in the course of the basic engineering process indicate the use of audio is a promising alternative to, or redundancy measure in support of, visual display technology for intra-operative navigation.
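The error-to-audio transformation the abstract describes can be sketched as a simple mapping from positional deviation to a feedback parameter. This is an illustrative sketch only, not the authors' design: the function name, the pitch mapping, and all constants are assumptions.

```python
import math

def guidance_tone(instrument_pos, target_pos,
                  base_hz=440.0, hz_per_mm=20.0, max_hz=1760.0):
    """Map 3-D positional error (in mm) to a feedback pitch:
    on target -> steady base tone; larger deviation -> higher pitch,
    clamped at max_hz so the signal stays in a comfortable range."""
    err_mm = math.dist(instrument_pos, target_pos)
    return min(base_hz + hz_per_mm * err_mm, max_hz), err_mm
```

In a real system the error would be measured against the planned trajectory rather than a single point, and further audio dimensions (timbre, spatial position, pulse rate) could encode the direction of the required correction.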

Proceedings ArticleDOI
01 Nov 1998
TL;DR: This paper presents a method for designing and evaluating auditory interfaces for use in high speed train driving cabs using a simulation approach benefiting from co-operation between ergonomists, engineers and volunteer train drivers.
Abstract: As operators are spending more and more of their time monitoring and using control systems, auditory displays are becoming increasingly useful along with other control devices and control panels. This paper presents a method, elaborated with the help of engineers, ergonomists and train drivers, for designing and evaluating auditory interfaces for use in high speed train driving cabs. During this design project, ergonomists studied the usefulness and the usability of auditory signals, in relation to visual displays and future driving cab control devices. To take into account every aspect of auditory signals, members of the project have chosen to use a simulation approach benefiting from co-operation between ergonomists, engineers and volunteer train drivers who took part in the entire project for the design of the future driving cab.

Proceedings ArticleDOI
01 Nov 1998
TL;DR: The use of Head-Related Transfer Functions in stationary sound spatialisation is sought to extend to encompass movement synthesis, and the use of time-frequency spectrograms are proposed and demonstrated as a mechanism for characterising source movement.
Abstract: Auditory display designers are making increasingly effective and creative use of our ability to localise sound; to perceive particular auditory events as occurring at particular locations. Many applications in which spatial audio has been applied could also benefit from exploiting another important ability of the auditory system; the detection and identification of sound source motion. The display of moving sources could improve usability, provide additional variables in sonification, make virtual environments more perceptually realistic and provide new creative possibilities for designers. Transaural cancellation allows the creation of spatial audio with just two loudspeakers. These techniques are now extended to create the illusion of a sound source moving along an arbitrary trajectory at an arbitrary rate. This paper discusses the application of synthesised sound source movement to a number of practical applications in auditory display. We seek to extend the use of Head-Related Transfer Functions (HRTFs) in stationary sound spatialisation to encompass movement synthesis. The detection of moving sources is not time-invariant so we propose and demonstrate the use of time-frequency spectrograms as a mechanism for characterising source movement. There are an infinite number of such trajectory-related spectrograms and we address the need for a continuous directional model to accommodate this.

Proceedings ArticleDOI
01 Nov 1998
TL;DR: A pilot experiment examined the recogniser listening and processing states and showed that auditory icons representing these caused fewer incorrect user responses than the control condition, suggesting that expert users may require a period of acclimatisation to the use of sounds as they tend to listen to them due to novelty.
Abstract: At the lexical level, a typical human-computer dialogue in an aural-only spoken language system consists of two stages, system output and user input. As with human-human conversation, a good proportion of turn taking clues are given by lapses in talk. Unfortunately, in telephone-based automated spoken dialogues, silences on the system's part may not be so easily resolved. A pilot experiment examined the recogniser listening and processing states and showed that auditory icons representing these caused fewer incorrect user responses than the control condition. However, where system prompts explicitly requested a response, icons were not necessary if talkover was provided. Also, the effectiveness of auditory representations had a strong interaction with the expertise of the caller suggesting that expert users may require a period of acclimatisation to the use of sounds as they tend to listen to them due to novelty. Conversely, novice users with no experience acted correctly.

Proceedings ArticleDOI
01 Nov 1998
TL;DR: The design rationale for a group drawing tool exploiting localised auditory cues to describe user activities, and the auditory authoring and browsing techniques involved in the group drawing environment together with their implications for the design of future collaborative environments involving auditory ecologies are presented.
Abstract: In this paper, we present the design rationale for a group drawing tool exploiting localised auditory cues to describe user activities. Our hypothesis is that these cues are important for two reasons. Firstly, they make participants aware of the details of execution of peer activities. This is especially significant when these activities are out of visual focus. Secondly, they convey intentionality information among participants. The latter has been found to significantly influence inter-participant conversations during real world collaborative drawing activities. Our approach for adding sounds to the group drawing environment involves associating localised auditory messages to the palette, tools, primitive drawing objects and cursors representing metaphoric hands or points of gaze. These mappings give rise to dynamic soundscapes describing operations being or intended to be performed. We discuss the auditory authoring and browsing techniques involved in our group drawing environment together with their implications for the design of future collaborative environments involving auditory ecologies.

Proceedings ArticleDOI
01 Nov 1998
TL;DR: Sound Traffic Control is a system for interactively controlled 3-D audio, displayed using a loudspeaker array, embodying ideas developed during over a decade of experimentation, and is evaluated based on the experiences of users and developers.
Abstract: Sound Traffic Control (STC) is a system for interactively controlled 3-D audio, displayed using a loudspeaker array. The intended application is live musical performance. Goals of the system include flexibility, ease of use, fault tolerance, audio quality, and synchronization with external media sources such as MIDI, audio feeds from musicians, and video. It uses a collection of both commercial and custom components. The development and design of the current system is described, embodying ideas developed during over a decade of experimentation, and is evaluated based on the experiences of users and developers.