
Papers presented at the International Conference on Auditory Display (ICAD), 2010


Proceedings Article
01 Jun 2010
TL;DR: This approach combines Tangible Active Objects (TAOs) and Interactive Sonification into a non-visual multi-modal data exploration interface and thereby translates the visual experience of scatter plots into the audio-haptic domain.
Abstract: In this paper we present an approach that enables visually impaired people to explore multivariate data through scatter plots. Our approach combines Tangible Active Objects (TAOs) [1] and Interactive Sonification [2] into a non-visual, multi-modal data exploration interface, thereby translating the visual experience of scatter plots into the audio-haptic domain. This paper explains our system and the sonification techniques we developed, and presents a first user study.
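The abstract gives no implementation detail; as a rough, hypothetical illustration of the parameter-mapping idea behind such scatter-plot sonification (not the authors' TAO-based system), the following sketch maps each point's x value to stereo pan and playback order, and its y value to pitch:

```python
# Hypothetical parameter-mapping sonification of 2-D scatter data.
# NOT the paper's TAO-based system; a minimal sketch of the general idea:
# x (normalised) -> stereo pan and playback order, y (normalised) -> pitch.
import numpy as np
from scipy.io import wavfile

RATE = 44100

def sonify_scatter(points, dur=0.15, f_lo=220.0, f_hi=880.0):
    pts = np.asarray(points, dtype=float)
    pts = (pts - pts.min(axis=0)) / (np.ptp(pts, axis=0) + 1e-9)
    t = np.linspace(0.0, dur, int(RATE * dur), endpoint=False)
    env = np.hanning(t.size)                  # click-free fade in/out
    chunks = []
    for x, y in pts[np.argsort(pts[:, 0])]:   # play points left to right
        freq = f_lo * (f_hi / f_lo) ** y      # logarithmic pitch scale
        tone = env * np.sin(2 * np.pi * freq * t)
        chunks.append(np.stack([(1 - x) * tone, x * tone], axis=1))
    return np.concatenate(chunks)

sig = sonify_scatter(np.random.rand(20, 2))
wavfile.write("scatter.wav", RATE, (sig * 32767).astype(np.int16))
```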

14 citations


Proceedings Article
01 Jun 2010
TL;DR: A simulator experiment examined how urgent warnings affect the affective state of experienced truck drivers and their response performance in an unpredictable situation; as predicted, the more urgent warning was rated as more annoying and startling.
Abstract: A range of In-Vehicle Information Systems are currently being developed and implemented in trucks to warn drivers about road dangers and vehicle failures. These systems often make use of conventional repetitive auditory warnings to catch attention. In a critical driving situation it might be tempting to use signals that express very high levels of urgency. However, previous studies have shown that more urgent alerts can have a negative impact on the listeners’ affective state. A simulator experiment was conducted to examine how urgent warnings affect the affective state of experienced truck drivers and their response performance in an unpredictable situation. As predicted, the more urgent warning was rated as more annoying and startling. The drivers who received an urgent warning braked significantly harder in response to the unpredictable event (a bus pulling out in front of the truck). The drivers also tended to brake later after the urgent warning, but no significant effect on response time or time to collision was found. A concluding recommendation for future research is to investigate the distracting effects of urgent auditory warnings on less experienced drivers.

14 citations


Proceedings Article
01 Jun 2010
TL;DR: It is argued that electroacoustic composition techniques can provide a methodology for structuring and presenting multivariable data through sound, and an interface driven by embodied music cognition is applied to provide interactive exploration of the generated output.
Abstract: In this paper, a framework for interactive sonification is introduced. It is argued that electroacoustic composition techniques can provide a methodology for structuring and presenting multivariable data through sound. Furthermore, an interface driven by embodied music cognition is applied to provide interactive exploration of the generated output. The motivation and theoretical foundation for this work are presented, as well as the framework’s implementation and an exploratory use case.

9 citations


Proceedings Article
01 Jun 2010
TL;DR: The proposed AAR system outperforms conventional telecommunication systems in terms of the speaker segregation by supporting spatial separation of binaurally recorded speakers.
Abstract: Audio communication in its most natural form, the face-to-face conversation, is binaural. Current telecommunication systems often provide only monaural audio, stripping it of spatial cues and thus deteriorating listening comfort and speech intelligibility. In this work, the application of binaural audio in telecommunication through audio augmented reality (AAR) is presented. AAR aims at augmenting auditory perception by embedding spatialised virtual audio content. Used in a telecommunication system, AAR enhances intelligibility and the user’s sense of presence. As a sample use case of AAR, a teleconference scenario is devised. The conference is recorded through a headset with integrated microphones, worn by one of the conference participants. Algorithms are presented to compensate for head movements and restore the spatial cues that encode the perceived directions of the conferees. To analyse the performance of the AAR system, a user study was conducted. Processing the binaural recording with the proposed algorithms places the virtual speakers at fixed directions. This significantly improved the test subjects’ ability to segregate the speakers, compared to an unprocessed recording. The proposed AAR system outperforms conventional telecommunication systems in terms of speaker segregation by supporting spatial separation of binaurally recorded speakers.
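The abstract mentions algorithms that compensate for head movements so that conferees keep fixed perceived directions, but gives no detail. The sketch below is a hypothetical illustration of the core idea only: subtracting the tracked head rotation from a source's world-fixed azimuth, with crude constant-power stereo panning standing in for true binaural rendering.

```python
# Hypothetical head-yaw compensation for world-fixed virtual speakers.
# The paper's actual algorithms operate on binaural recordings; this
# sketch only shows the core idea: subtract tracked head rotation so a
# source keeps a fixed direction in the room while the head turns.
import math

def head_relative_azimuth(world_az_deg, head_yaw_deg):
    """Direction of the source relative to the listener's nose."""
    return (world_az_deg - head_yaw_deg + 180.0) % 360.0 - 180.0

def constant_power_pan(az_deg):
    """Crude stereo stand-in for binaural rendering: map azimuth in
    [-90, 90] degrees to constant-power (left, right) gains."""
    az = max(-90.0, min(90.0, az_deg))
    theta = (az + 90.0) / 180.0 * (math.pi / 2.0)
    return math.cos(theta), math.sin(theta)

# A conferee fixed 30 deg to the right; the listener turns 30 deg right:
rel = head_relative_azimuth(30.0, 30.0)   # -> 0.0, now dead ahead
print(rel, constant_power_pan(rel))
```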

5 citations


Proceedings Article
01 Jun 2010
TL;DR: SonicFunction as discussed by the authors is a prototype for the interactive sonification of mathematical functions, which includes information about the sign of the function value f(x) within the timbre of the sonification and leaves the auditory graph context free for an acoustic representation of the bounding box.
Abstract: In this paper we present SonicFunction, a prototype for the interactive sonification of mathematical functions. Since many approaches for representing mathematical functions as auditory graphs already exist, SonicFunction introduces three new aspects of sound design. Firstly, SonicFunction features a hybrid approach of discrete and continuous sonification of the function values f(x). Secondly, the sonification includes information about the derivative of the function. Thirdly, SonicFunction encodes the sign of the function value f(x) within the timbre of the sonification, leaving the auditory graph context free for an acoustic representation of the bounding box. We discuss SonicFunction within the context of existing function sonifications, and report the results of an evaluation of the program with 14 partially sighted and blind students.
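The abstract names the three mappings but not how they are realised. Under assumed mappings (|f(x)| to pitch, the derivative to vibrato depth, the sign to a harsher timbre), a minimal continuous-sweep sketch of such an auditory graph might look as follows; none of these parameter choices come from the paper.

```python
# Minimal sketch of a SonicFunction-style auditory graph (assumed
# mappings, not the authors' implementation): |f(x)| -> pitch,
# f'(x) -> vibrato depth, sign of f(x) -> timbre (pure vs. clipped).
import numpy as np

RATE = 44100

def sonify_function(f, x0, x1, sweep_s=3.0, f_lo=200.0, f_hi=800.0):
    n = int(RATE * sweep_s)
    x = np.linspace(x0, x1, n)
    y = f(x)
    dy = np.gradient(y, x)                    # numerical derivative
    y_norm = np.abs(y) / (np.abs(y).max() + 1e-9)
    freq = f_lo * (f_hi / f_lo) ** y_norm     # magnitude -> log pitch
    vib = 1.0 + 0.01 * np.tanh(dy) * np.sin(
        2 * np.pi * 6.0 * np.arange(n) / RATE)
    phase = 2 * np.pi * np.cumsum(freq * vib) / RATE
    tone = np.sin(phase)
    harsh = np.clip(3.0 * tone, -1.0, 1.0)    # harsher clipped waveform
    return np.where(y >= 0.0, tone, harsh)    # sign selects the timbre

sig = sonify_function(np.sin, -np.pi, np.pi)
```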

5 citations


Proceedings Article
01 Jun 2010
TL;DR: The results show that sounds can effectively illustrate some concepts, especially those related to concrete entities and actions, and thus can be utilized in assistive communication applications.
Abstract: Sonification, the use of nonspeech audio to represent data and information, has been applied to industrial systems and computer interfaces via mechanisms such as auditory icons and earcons. In this paper, we explore a different application of sonification: facilitating communication across language barriers by conveying commonly used concepts via environmental auditory representations. SoundNet, a linguistic database enhanced with natural nonspeech audio, is constructed for this purpose. The concept-sound associations that are the building blocks of SoundNet were validated through a sound labeling study conducted on Amazon Mechanical Turk. We examine which factors cause a sound to evoke a concept, which aspects of the proposed auditory representations are evocative, and what kinds of confusions may occur. Our results show that sounds can effectively illustrate some concepts, especially those related to concrete entities and actions, and thus can be utilized in assistive communication applications.
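SoundNet's actual schema is not given in the abstract. As a purely hypothetical miniature, a concept-to-sound mapping validated by labeling accuracy could be modelled as below; every path, concept, and score here is invented for illustration.

```python
# Hypothetical miniature of a SoundNet-style concept->sound mapping.
# Each concept stores candidate recordings plus a labeling-study score;
# lookup returns the best-validated sound, if any passes a threshold.
from dataclasses import dataclass

@dataclass
class SoundEntry:
    wav_path: str          # path to a natural, nonspeech recording
    label_accuracy: float  # fraction of raters who named the concept

SOUNDNET = {
    "dog":   [SoundEntry("sounds/dog_bark.wav", 0.92)],
    "rain":  [SoundEntry("sounds/rain_roof.wav", 0.81),
              SoundEntry("sounds/rain_window.wav", 0.64)],
    "knock": [SoundEntry("sounds/door_knock.wav", 0.88)],
}

def best_sound(concept, min_accuracy=0.5):
    """Return the most evocative validated sound for a concept."""
    entries = SOUNDNET.get(concept, [])
    valid = [e for e in entries if e.label_accuracy >= min_accuracy]
    return max(valid, key=lambda e: e.label_accuracy, default=None)

print(best_sound("rain"))   # -> the roof recording, score 0.81
```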

3 citations


Proceedings Article
01 Jun 2010
TL;DR: The Many Ears project seeks to find out what will happen when data sonification is made more widely available as a mass medium.
Abstract: In this paper we describe the Many Ears project, which will develop the first example of a social site for a community of practice in data sonification. This site will be modeled on the Many Eyes site for “shared visualization and discovery”, which combines the facilities of a social site with online tools for graphing data. Anyone can upload a dataset, describe it, and make it available for others to visualize or download. The ease of use of the tools and the social features of Many Eyes have attracted a broad general audience, who have produced unexpected political, recreational, cultural and spiritual applications that differ markedly from conventional data analysis. The Many Ears project seeks to find out what will happen when data sonification is made more available as a mass medium. What new audiences will listen to sonifications? Who will create sonifications, and for whom? What unexpected purposes will sonification be put to?

2 citations


Proceedings Article
01 Jun 2010
TL;DR: In this paper, a system for simulating the sounding dimension of physical environments is presented, which consists of a software application, a 5.1 surround sound system and a set of guidelines and methods for use.
Abstract: Sounds are (almost) always heard and perceived as parts of greater contexts. How we hear a sound depends on factors such as the other sounds present, the acoustic properties of the place where the sound is heard, and the distance and direction to the sound source. Moreover, whether a sound bears any meaning to us, and what that meaning is, depends largely on the listener’s interpretation of the sound, based on memories, previous experiences and so on. When designing sounds for all sorts of applications, it is crucial not only to evaluate the sound in isolation in the design environment, but also to test it in the larger contexts where it will be used and heard. One way to do this is to sonically simulate one or more environments and use these simulations as contexts against which to test designed sounds. In this paper we report on a project in which we have developed a system for simulating the sounding dimension of physical environments. The system consists of a software application, a 5.1 surround sound system, and a set of guidelines and methods for use. We also report on a first test of the system and its results.
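The paper's system targets 5.1 surround and its implementation is not described in the abstract. As a hypothetical stereo illustration of the underlying idea, placing a designed test sound into a simulated sonic context might reduce to distance attenuation plus panning into an ambience bed:

```python
# Hypothetical sketch of auditioning a designed sound in a simulated
# context: distance -> 1/d gain (free-field approximation),
# azimuth -> constant-power stereo pan, mixed into a background ambience.
# (The paper's system uses 5.1 surround; this is a stereo stand-in.)
import numpy as np

def place_in_context(sound, ambience, distance_m, az_deg):
    """Mix a mono test sound into a stereo (n, 2) ambience bed."""
    gain = 1.0 / max(distance_m, 1.0)
    theta = (np.clip(az_deg, -90.0, 90.0) + 90.0) / 180.0 * np.pi / 2.0
    left, right = np.cos(theta), np.sin(theta)
    n = min(len(sound), len(ambience))
    out = ambience[:n].astype(float).copy()
    out[:, 0] += gain * left * sound[:n]
    out[:, 1] += gain * right * sound[:n]
    return out / max(1.0, np.abs(out).max())   # guard against clipping

# e.g. a warning sound 4 m away, 45 deg to the right, in street ambience:
mix = place_in_context(np.random.randn(44100) * 0.1,
                       np.random.randn(88200, 2) * 0.05, 4.0, 45.0)
```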