
Showing papers presented at "International Conference on Auditory Display in 2003"


Proceedings Article
01 Jul 2003
TL;DR: This paper reports on a pilot project between the authors' research department and a local radio station, investigating the use of sonification to render and present auditory weather forecasts; it introduces the sonification concept and presents a design that aims to combine perceptual salience with emotional truthfulness.
Abstract: This paper reports on a pilot project between our research department and a local radio station, investigating the use of sonification to render and present auditory weather forecasts. The sonifications include auditory markers for relevant time points, expected weather events such as thunder, snow or fog, and several auditory streams that summarize the temporal weather changes during the day. To our knowledge, this is the first use of sonification in a regular radio program. We introduce the sonification concept and present our design, which aims to combine perceptual salience with emotional truthfulness. Sound examples are given for typical weather situations in Germany and for several prototypical weather conditions that tend to carry emotional value. At ICAD we will report first experiences with this pilot project and audience feedback gathered since broadcasting started in February 2003.

41 citations
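The general technique behind such a forecast display is parameter-mapping sonification. The sketch below is our own minimal illustration of that idea, not the broadcast design: the temperature-to-pitch mapping and sample values are assumptions.

```python
import numpy as np

SR = 8000  # sample rate in Hz; a low rate keeps the sketch light

def sonify_temperatures(temps_c, tone_dur=0.25):
    """Render a temperature series as a sequence of short sine tones.
    Toy mapping: 0 degC -> 220 Hz, each +1 degC raises pitch one semitone."""
    t = np.arange(int(SR * tone_dur)) / SR
    env = np.hanning(t.size)  # fade in/out so consecutive tones don't click
    tones = [env * np.sin(2 * np.pi * (220.0 * 2 ** (c / 12.0)) * t)
             for c in temps_c]
    return np.concatenate(tones)

# A hypothetical morning warming trend: five hourly readings
audio = sonify_temperatures([3, 5, 8, 12, 10])
print(audio.shape)  # five 0.25 s tones at 8 kHz -> (10000,)
```

Writing `audio` to a WAV file (for instance with the standard `wave` module) makes the rising-then-falling pitch contour audible.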


Proceedings Article
01 Jul 2003
TL;DR: The AVDisplay provides the central interface for monitoring and debugging this system, which currently involves about 20 computers hosting more than 30 complex processes; the display is designed to provide a summary of the system's activities by combining visualization and sonification techniques.
Abstract: This paper introduces AVDisplay, a versatile auditory and visual display for monitoring, querying and accessing information about modules or processes in complex systems. In the context of a collaborative research effort (SFB360, artificial communicators) at Bielefeld University, a cognitive robotics system for human-machine interaction is being developed. The AVDisplay provides the central interface for monitoring and debugging this system, which currently involves about 20 computers hosting more than 30 complex processes. The display is designed to provide a summary of the system's activities by combining visualization and sonification techniques. The dynamic visualization allows correlated process activity to be inferred. A habituation simulation process automatically sets a perceptual focus on interesting and relevant process activities. The sonification part is designed to integrate emotional aspects: if the system suffers from poor sensory quality, the sound conveys this by sounding uncomfortable.

14 citations
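The habituation idea mentioned in the abstract can be sketched as a simple salience model: repeated events from the same process fade into the background, while novel or rare activity stands out. This is our own toy formulation for illustration, not the AVDisplay algorithm; the class name, parameters, and process names are invented.

```python
class Habituator:
    """Toy habituation model -- an assumption for illustration, not the
    AVDisplay algorithm. Repeated activity from the same process loses
    salience; silence lets salience recover, so rare events stand out."""

    def __init__(self, decay=0.5, recovery=0.1):
        self.decay = decay        # salience multiplier on each repeat
        self.recovery = recovery  # per-event recovery for idle processes
        self.salience = {}        # process name -> current salience in [0, 1]

    def observe(self, process):
        # all other (idle) processes recover a little toward full salience
        for p in self.salience:
            if p != process:
                self.salience[p] = min(1.0, self.salience[p] + self.recovery)
        s = self.salience.get(process, 1.0)       # salience of this event
        self.salience[process] = s * self.decay   # then habituate the process
        return s

h = Habituator()
print(h.observe("parser"))  # first event from "parser": full salience 1.0
print(h.observe("parser"))  # immediate repeat: 0.5
print(h.observe("vision"))  # novel process: 1.0 again
```

A display could scale the loudness or visual prominence of each event by the returned salience, so that only unusual process activity claims the operator's attention.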


Proceedings Article
01 Jul 2003
TL;DR: In this article, a method for the design of new sounds, based on a perceptual study of the actual sounds of car horns, is proposed, which is able to synthesize sounds and predict their ability to be identified as car horns.
Abstract: Due to the technologies used, only a few different kinds of car horn sounds exist. We propose a method for the design of new sounds, based on a perceptual study of the actual sounds of car horns. Firstly, we deal with recordings of existing car horns and show that the different kinds of horn sounds can be divided into nine main families. Secondly, we demonstrate that within these families the perception of timbre results from the integration of three elementary sensations. Thirdly, another experiment reveals that some sounds are better identified as car horns than others. A relationship between the perceived timbre of the sounds and their ability to be identified as car horns is established. Finally, we generalize our results to synthesized sounds. The synthesis method was designed to explore and enlarge the perceptual space. Studying these sounds confirms and generalizes the previous results. A model is proposed that is able to synthesize sounds and predict their ability to be identified as car horns.

13 citations
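One common way to synthesize horn-like tones for such perceptual exploration is additive synthesis: a fundamental plus a few harmonics. The sketch below shows the bare technique; the partial amplitudes are guesses for illustration, not the spectra measured in the study.

```python
import numpy as np

SR = 8000  # sample rate in Hz

def horn_tone(f0=440.0, partials=(1.0, 0.6, 0.4, 0.25), dur=0.5):
    """Additive synthesis of a horn-like tone: a fundamental at f0 plus
    a few harmonics with decreasing amplitudes (illustrative values)."""
    t = np.arange(int(SR * dur)) / SR
    sig = sum(a * np.sin(2 * np.pi * f0 * (k + 1) * t)
              for k, a in enumerate(partials))
    return sig / np.abs(sig).max()  # normalise to [-1, 1]

tone = horn_tone()
print(tone.shape)  # (4000,)
```

Varying `f0` and the `partials` tuple generates candidate sounds across a timbre space, which is the kind of enlarged stimulus set the paper's synthesis stage provides for identification testing.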


Proceedings Article
01 Jul 2003
TL;DR: How multiple-hour training differentially affects the discrimination of sound frequency, intensity, location, and duration is reported, and how learning on a given discrimination condition generalizes, or fails to generalize, to stimuli not encountered during training is discussed.
Abstract: Human listeners can learn to discriminate between sounds that are initially indistinguishable. To better understand the nature of this learning, we have been using behavioral techniques to examine training-induced improvements on basic auditory discrimination tasks. Here we report how multiple-hour training differentially affects the discrimination of sound frequency, intensity, location, and duration, and how learning on a given discrimination condition generalizes, or fails to generalize, to stimuli not encountered during training. We discuss how these data contribute to our understanding of discrimination learning and of the mechanisms underlying performance on particular trained tasks, and explore the implications of this learning for the design and evaluation of auditory displays.

11 citations


Proceedings Article
01 Jan 2003
TL;DR: A series of experiments is reviewed to examine the impact of different audio-display design parameters on the overall intelligibility of simultaneous speech messages, and to show how using these features may improve intelligibility.
Abstract: Speech intelligibility can be severely compromised in environments where there are several competing speakers. It may be possible, however, to improve speech intelligibility in such environments by manipulating certain acoustic parameters known to facilitate the segregation of competing signals. The first step in designing such a feature-rich audio display is to understand the significant elements of human auditory perception that affect information transmission capacity. We review a series of experiments to examine the impact of different audio-display design parameters on overall intelligibility of simultaneous speech messages and show how using these features may improve intelligibility.

11 citations
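One of the segregation cues such a display can manipulate is spatial separation of the competing messages. The sketch below contrasts dichotic presentation (one message per ear) with a monaural mix; it is our own illustration of the design parameter, not the study's stimuli or apparatus.

```python
import numpy as np

def dichotic_mix(msg_a, msg_b):
    """Contrast two presentation modes for competing messages.
    Dichotic: one message per ear (a strong spatial segregation cue).
    Monaural: both messages summed into both ears (hard to segregate)."""
    n = max(msg_a.size, msg_b.size)
    a = np.pad(msg_a, (0, n - msg_a.size))  # zero-pad to a common length
    b = np.pad(msg_b, (0, n - msg_b.size))
    dichotic = np.stack([a, b])             # rows: left ear, right ear
    monaural = np.stack([a + b, a + b])     # identical mix in both ears
    return dichotic, monaural

msg_a = np.ones(5)        # stand-ins for two speech signals
msg_b = 0.5 * np.ones(3)
dichotic, monaural = dichotic_mix(msg_a, msg_b)
print(dichotic.shape, monaural.shape)  # (2, 5) (2, 5)
```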


Proceedings Article
01 Jan 2003
TL;DR: The present system uses monaural recordings of actual aircraft flyover noise and presents them binaurally using head-tracking information; the three-dimensional audio is rendered simultaneously with a visual presentation using a head-mounted display (HMD).
Abstract: This paper presents a system developed at NASA Langley Research Center to render aircraft flyovers in a virtual reality environment. The present system uses monaural recordings of actual aircraft flyover noise and presents these binaurally using head-tracking information. The three-dimensional audio is simultaneously rendered with a visual presentation using a head-mounted display (HMD). The final system will use flyover noise synthesized using data from various analytical and empirical modeling systems. This will permit presentation of flyover noise from candidate low-noise flight operations to subjects for psychoacoustical evaluation.

10 citations
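Turning a monaural recording into a head-tracked binaural signal relies on interaural cues. The sketch below applies only the two crudest ones, an interaural time difference (Woodworth's formula) and a fixed level difference; the NASA system renders with full HRTF filtering, which this approximation does not attempt.

```python
import numpy as np

SR = 44100
HEAD_RADIUS = 0.0875    # average head radius in metres
SPEED_OF_SOUND = 343.0  # m/s

def render_binaural(mono, azimuth_deg):
    """Crude binaural pan of a mono signal: delay and attenuate the
    far ear relative to the near ear. Illustration only, not HRTF."""
    az = np.radians(azimuth_deg)
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (az + np.sin(az))  # Woodworth ITD, s
    delay = int(round(abs(itd) * SR))                         # far-ear delay, samples
    far = mono * 10 ** (-6.0 * abs(np.sin(az)) / 20)          # up to ~6 dB quieter
    if delay:
        far = np.concatenate([np.zeros(delay), far])[: mono.size]
    near = mono.copy()
    # positive azimuth: source on the right, so the right ear is the near ear
    return (far, near) if azimuth_deg > 0 else (near, far)

sig = np.sin(2 * np.pi * 440 * np.arange(4410) / SR)  # 0.1 s test tone
left, right = render_binaural(sig, 45)  # source 45 degrees to the right
```

Re-rendering each audio block with the azimuth reported by the head tracker is what keeps the virtual source stable in space as the listener turns.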


Proceedings Article
01 Jul 2003
TL;DR: A virtual listening environment providing localization cues is proposed for the reproduction of acoustic depth by simulating the propagation of acoustic waves inside a tube; the tube is represented by a model that allows direct control of the depth parameter.
Abstract: A virtual listening environment providing localization cues is proposed for the reproduction of acoustic depth. By simulating the propagation of acoustic waves inside a tube, it allows the source and listening-point positions to be changed interactively, so that listeners experience various spatial configurations depending on the mutual source/listener position and, correspondingly, perceive different auditory cues of depth. The quantitative relationship between physical and auditory distance assessments suggests representing the tube with a model that allows direct control of the depth parameter. Simulations and experiments demonstrate the effectiveness of the model and its relative robustness in application contexts where state-of-the-art equipment and ideal listening conditions cannot be guaranteed.

10 citations
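The core intuition, that greater source distance means later arrival and lower level, can be sketched with a pure delay line. This is a minimal stand-in for the paper's tube model (whose exact formulation is not reproduced here); the absorption figure is an assumed constant.

```python
import numpy as np

SR = 8000
SPEED_OF_SOUND = 343.0  # m/s

def propagate(signal, distance_m, absorption_db_per_m=0.5):
    """One-way propagation down an ideal tube modelled as a pure delay
    plus frequency-independent attenuation: the two simplest depth cues."""
    delay = int(round(SR * distance_m / SPEED_OF_SOUND))  # travel time, samples
    gain = 10 ** (-absorption_db_per_m * distance_m / 20)
    return np.concatenate([np.zeros(delay), signal * gain])

click = np.zeros(100)
click[0] = 1.0
near_click = propagate(click, 2.0)   # source ~2 m away
far_click = propagate(click, 20.0)   # source ~20 m away: later and quieter
print(int(np.argmax(near_click)), int(np.argmax(far_click)))  # 47 466
```

Moving the virtual source along the tube then amounts to changing a single `distance_m` parameter, which is the kind of direct depth control the abstract describes.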


Proceedings Article
01 Jul 2003
TL;DR: An evaluation of musical earcons was carried out to see whether they are an effective and efficient method of delivering information about household appliances to elderly people, and showed the need for a redesign of the earcons.
Abstract: An evaluation of musical earcons was carried out to see whether they are an effective and efficient method of delivering information about household appliances to elderly people. A test was carried out to explore the ability of the elderly subjects to remember and learn the musical earcons. This test indicated a poor rate of recognition of the earcons. A second test, which presented information in three modes (audio, visual and multimodal), was performed to determine which modality this group preferred for certain types of information. We hypothesized that the multimodal interface would be the best in terms of speed and accuracy of response, and this was supported by the data. The results showed the need for a redesign of the earcons.

7 citations
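An earcon in this sense is a short, abstract musical motif with an assigned meaning. The sketch below renders one as a sequence of enveloped sine tones; the motif and its appliance meaning are invented for illustration, not the earcon designs evaluated in the paper.

```python
import numpy as np

SR = 8000
FREQ = {"C4": 261.63, "E4": 329.63, "G4": 392.00, "C5": 523.25}

def earcon(notes, note_dur=0.15):
    """Render a musical earcon as a short motif of enveloped sine tones."""
    t = np.arange(int(SR * note_dur)) / SR
    env = np.hanning(t.size)  # fade each note to avoid clicks at boundaries
    return np.concatenate([env * np.sin(2 * np.pi * FREQ[n] * t)
                           for n in notes])

# e.g. a rising major arpeggio to signal "appliance finished" (hypothetical)
done = earcon(["C4", "E4", "G4", "C5"])
print(done.shape)  # four 0.15 s notes at 8 kHz -> (4800,)
```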


Proceedings Article
01 Jul 2003
TL;DR: The aim of the present article is to compare the perceptual representation and the functional representation with the usual sound categories designed to support specific actions in user interfaces.
Abstract: In the field of audio signaletics, most sound designers have their own recipes for making samples that convey a certain meaning, which we could call an auditory function. The aim of the present article is to compare the perceptual representation and the functional representation with the usual sound categories designed to support specific actions in user interfaces. The article finally proposes recommendations for designers based on the perceptual results.

4 citations


Proceedings Article
01 Jul 2003
TL;DR: The project described in this paper investigates the ability to enhance cockpit transparency and reduce the tendency for ‘peripheralisation’ of the pilot through implementation of a background auditory environment.
Abstract: Pilot Situational Awareness (SA), namely, the pilot’s ability to accurately perceive, understand and predict events, is an essential requirement of effective decision-making [1]. Good ‘visibility’ of aircraft state and environment is instrumental in maintaining a high level of SA. ‘Cockpit transparency’ indicates the capability of the cockpit to provide such visibility. The project described in this paper investigates the ability to enhance cockpit transparency and reduce the tendency for ‘peripheralisation’ of the pilot through implementation of a background auditory environment. A computer-based tool using currently available software and off-the-shelf hardware is being developed to produce an initial demonstration of ideas.

4 citations



Proceedings Article
01 Jul 2003
TL;DR: This paper reports the strategies and results of all four project stages, with special emphasis on sound design and validation, of a new warning system for high tide in Venice.
Abstract: A new warning system for high tide in Venice has been designed to replace the existing network of electro-mechanical sirens. The project was divided into four sections: (i) optimal placement of loudspeakers via constraint logic programming, (ii) simulation and visualization of the acoustic field in the city, (iii) design of the warning sounds, (iv) validation of the warning sounds. This paper reports the strategies and results of all four project stages, with special emphasis on sound design and validation.
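Stage (i), placing loudspeakers so that the whole city is covered, is at heart a set-cover optimisation. The Venice project solved it with constraint logic programming; the greedy heuristic below, with invented site and district names, only illustrates the underlying problem of choosing sites to maximise coverage.

```python
def place_sirens(coverage, k):
    """Greedy set-cover heuristic for siren placement: repeatedly pick
    the site that covers the most still-uncovered zones, up to k sites."""
    covered, chosen = set(), []
    for _ in range(k):
        best = max(coverage, key=lambda s: len(coverage[s] - covered),
                   default=None)
        if best is None or not (coverage[best] - covered):
            break  # nothing left to gain
        chosen.append(best)
        covered |= coverage[best]
    return chosen, covered

sites = {  # hypothetical candidate sites -> districts they can reach
    "campanile": {"San Marco", "Castello"},
    "rialto": {"San Polo", "Cannaregio"},
    "arsenale": {"Castello"},
}
chosen, covered = place_sirens(sites, 2)
print(chosen)  # ['campanile', 'rialto'] covers all four districts
```

A constraint-programming formulation, as used in the project, can additionally enforce hard requirements (minimum sound level per zone, forbidden sites) that a greedy heuristic cannot guarantee.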