
Showing papers presented at "International Conference on Auditory Display in 2007"


Proceedings Article
01 Jan 2007
TL;DR: Using clinical recordings with epileptic seizures, it is demonstrated how the spatio-temporal characteristics of EEG rhythms can be perceived in such sonifications.
Abstract: The electroencephalogram (EEG) provides a diagnostically important stream of multivariate data of the activity of the human brain. Various EEG sonification strategies have been proposed but auditory space has rarely been used to give cues about the location of specific events. Here we introduce a multivariate event-based sonification that, in addition to displaying salient rhythms, uses pitch and spatial location to provide such cues. Using clinical recordings with epileptic seizures we demonstrate how the spatio-temporal characteristics of EEG rhythms can be perceived in such sonifications.
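The pitch-and-pan mapping described in the abstract can be illustrated with a minimal sketch; the electrode coordinates, pitch range, and function names below are hypothetical illustrations, not taken from the paper:

```python
# Hypothetical sketch of an event-based EEG sonification mapping:
# each detected EEG event becomes a tone whose pitch encodes the
# electrode's back-front position and whose stereo pan encodes its
# left-right position. Electrode coordinates are illustrative only.
ELECTRODES = {
    "F3": (-0.4, 0.7),   # (x: left-right, y: back-front), normalized to [-1, 1]
    "F4": (0.4, 0.7),
    "C3": (-0.5, 0.0),
    "C4": (0.5, 0.0),
    "O1": (-0.3, -0.8),
    "O2": (0.3, -0.8),
}

def event_to_tone(channel, amplitude, base_midi=60.0, midi_range=24.0):
    """Map an EEG event to (midi_pitch, stereo_pan, gain)."""
    x, y = ELECTRODES[channel]
    pitch = base_midi + (y + 1.0) / 2.0 * midi_range  # back = low, front = high
    pan = (x + 1.0) / 2.0                             # 0 = hard left, 1 = hard right
    gain = min(1.0, abs(amplitude))                   # clip event amplitude to [0, 1]
    return pitch, pan, gain

# A left-frontal event should sound high-pitched and panned left.
print(event_to_tone("F3", 0.8))
```

Listening to such a stream, a seizure spreading from occipital to frontal electrodes would be heard as events rising in pitch, which is the kind of spatio-temporal cue the paper aims at.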

35 citations


Proceedings Article
01 Jun 2007
TL;DR: Although participants performed best with speech reminders, there are large inter-subject differences in performance, and over 50% prefer non-speech audio reminders.
Abstract: In this paper we report an experimental comparison between three different types of audio reminders in the home setting: speech, earcons, and a simple pager sound. We examine how quickly and accurately participants were able to interpret the reminders, and to what extent presentation of the reminders interfered with a digit span background task. In addition, a questionnaire was used to gather user preferences and attitudes towards the different types of reminders. Although participants performed best with speech reminders, there are large inter-subject differences in performance, and over 50% prefer non-speech audio reminders. The implications for the design and application of auditory interfaces for home-based reminder systems are discussed.

31 citations


Proceedings Article
01 Jun 2007
TL;DR: This paper reviews the sonification of spatial data, looking at strategies for presentation, exploration and what spatial interfaces and devices developers have used to interact with the sonifications, and discusses the duality between spatial data and various sonification methodologies.
Abstract: Sonification is the use of sound and speech to represent information. There are many sonification examples in the literature from simple realizations such as a Geiger counter to representations of complex geological features. The data that is being represented can be either spatial or non-spatial. Specifically, spatial data contains positional information; the position either refers to an exact location in the physical world or in an abstract virtual world. Likewise, sound itself is spatial: the source of the sound can always be located. There is obviously a synergy between spatial data and sonification. Hence, this paper reviews the sonification of spatial data and investigates this synergy. We look at strategies for presentation, exploration and what spatial interfaces and devices developers have used to interact with the sonifications. Furthermore we discuss the duality between spatial data and various sonification methodologies.

30 citations


Proceedings Article
01 Jun 2007
TL;DR: Interactive Optimization is a promising novel paradigm for solving the mapping problem and for user-centred design of auditory displays, allowing users to bring in their perceptual capabilities without being burdened with computational tasks.
Abstract: This paper presents a novel approach for the interactive optimization of sonification parameters. In a closed loop, the system automatically generates modified versions of an initial (or previously selected) sonification via gradient ascent or evolutionary algorithms. The human listener directs the optimization process by providing relevance feedback about the perceptual quality of these propositions. In summary, the scheme allows users to bring in their perceptual capabilities without burdening them with computational tasks. It also allows exploration goals to be updated continuously in the course of an exploration task. Finally, Interactive Optimization is a promising novel paradigm for solving the mapping problem and for user-centred design of auditory displays. The paper gives a full account of the technique and demonstrates the optimization on synthetic and real-world data sets.
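The closed loop described above can be sketched as a simple hill-climbing variant; the parameter names, step size, and the simulated listener below are illustrative assumptions, not the authors' implementation:

```python
import random

# Hypothetical sketch of the relevance-feedback loop: the system
# proposes perturbed versions of the current sonification parameters,
# a listener (simulated here by a scoring function) picks the one that
# sounds best, and the search hill-climbs from there.
def propose(params, step=0.1, n=4, rng=random):
    """Generate n candidate parameter sets near the current one."""
    return [
        {k: v + rng.uniform(-step, step) for k, v in params.items()}
        for _ in range(n)
    ]

def optimize(initial, listener_score, iterations=50, rng=random):
    """Keep whichever candidate the listener prefers; never regress."""
    current = dict(initial)
    for _ in range(iterations):
        candidates = propose(current, rng=rng)
        current = max(candidates + [current], key=listener_score)
    return current

# Simulated listener who prefers pitch_scale near 1.0 and tempo near 0.5.
def simulated_listener(p):
    return -((p["pitch_scale"] - 1.0) ** 2 + (p["tempo"] - 0.5) ** 2)

result = optimize({"pitch_scale": 0.2, "tempo": 0.9}, simulated_listener)
```

In the paper's setting the scoring function is replaced by an actual human judgment per round, which is why the number of proposals per iteration must stay small.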

14 citations


Proceedings Article
01 Jun 2007
TL;DR: Design requirements for sounds appropriate as auditory alerts, defined as Natural Warning Sounds are provided and the results show that auditory systems should have cancellation capabilities and avoid continuously repeated alerts.
Abstract: The goal of this research is increased safety and human performance in aviation. Human errors are often consequences of actions brought about by poor design. The pilot communicates with the aircraft system through an interface in the cockpit; in an alerting situation this interface includes an auditory alerting system. Pilots complain that they may be both disturbed and annoyed by alerts, which may affect performance, especially in non-normal situations when mental workload is high. This research is based on theories in human factors/ergonomics and cognitive engineering, with the assumption that improved human performance within a system increases safety. Cognitive engineering is a design philosophy for reducing the effort required by cognitive functions by changing the technical interface, which may lead to improved performance. Knowledge of human abilities and limitations, together with multidisciplinary theories relating humans, sounds and warnings, is integrated into this research. Several methods are included, such as literature studies, field studies, controlled experiments and simulations with pilots. This research provides design requirements for sounds appropriate as auditory alerts, defined as Natural Warning Sounds. These sounds either have a natural meaning within the user's context, or are compatible with the human's natural auditory information processing, or both; they are also pleasant to listen to (not annoying), easy to learn and clearly audible. In an experimental study, the associability of different sounds was compared. Associability is the effort required to associate a sound with its assigned alert function; the more associable a sound is, the less effort and the fewer cognitive resources it requires. The study shows that auditory icons and animal sounds were more associable than conventional alerts. In another listening study, the method of Soundimagery was used to develop soundimages. A soundimage is a sound which, by its acoustic characteristics, has a particular meaning to someone without prior training in a certain context. Soundimages were successfully developed; however, it may be difficult to come up with sound candidates for functions that lack sound or are not associated with a particular sound. In a simulation study, different presentation formats were compared. The results show that auditory systems should have cancellation capabilities and avoid continuously repeated alerts. This research brings related theories closer to practice and demonstrates methods that allow designers, together with the users of the system, to apply them in their own system design.

11 citations


Proceedings Article
01 Jun 2007
TL;DR: A project is described that explores some of the mechanisms that invoke clear inner, mental images in the player and points out some new potential directions for computer games and game play design.
Abstract: A computer game with most of the traditional graphics removed and replaced with a detailed and realistic soundscape can give immersive gaming experiences. By reducing the graphical, explicit output of information from the game, the player becomes free to concentrate on interpreting the implicit information in a rich soundscape. This process of interpretation seems to have the power to invoke clear inner, mental images in the player, which in turn gives strong and immersive experiences. This paper describes a project that explores some of these mechanisms and points out some new potential directions for computer games and game play design.

9 citations


Proceedings Article
01 Jun 2007
TL;DR: The experiment shows that the method providing the lowest degree of real-time control to the user is the least efficient and this method is also perceived to be the least pleasant, fast, clear and intuitive.
Abstract: The aim of the experiment described in this paper is to evaluate and compare three different methods for interacting with an algorithm for the sonification of data streams. The experiment was carried out using an existing Interactive Sonification Toolkit as a high-fidelity prototype. The experiment focused on measuring and comparing the efficiency and effectiveness of three interaction methods which differ in the degree of real-time control allowed to the user. Subjects were also asked to answer a questionnaire which gathered information about their perception of using the different interaction methods. The experiment shows that the method providing the lowest degree of real-time control to the user is the least efficient. This method is also perceived to be the least pleasant, fast, clear and intuitive. There are no significant differences in effectiveness and efficiency between the remaining two methods, both in terms of objective measures and user perception. Finally, the method allowing a medium degree of control to the user is judged to be significantly more pleasant than the others.

6 citations


Proceedings Article
01 Jun 2007
TL;DR: An experimental pilot study in which the possibilities of using short musical pieces as warning signals in a vehicle cab are explored; interestingly, the results indicate that drivers may be able to understand the intended meaning of musical warning signals.
Abstract: Warning signals are often very simple and monotone sounds. This paper focuses on taking a more musical approach to the design of warnings and alarms than has been the case in the past. We present an experimental pilot study in which we explore the possibilities of using short musical pieces as warning signals in a vehicle cab. In the study, 18 experienced drivers went through five different driving scenarios with different levels of urgency. Each scenario was presented together with an auditory icon, a traditional abstract warning sound, and a musical warning sound designed in collaboration with a composer. The test was carried out in an “audio-only” environment. Drivers were required to rate the perceived urgency, annoyance and appropriateness of every sound. They also had a chance to talk freely about the different warning signals. Interestingly, the results indicate that drivers may be able to understand the intended meaning of musical warning signals. The musical warning signals may prove useful primarily in situations of low and medium urgency.

5 citations


Proceedings Article
01 Jun 2007
TL;DR: There is evidence that kurtosis correlates with roughness or sharpness and that participants were able to distinguish signals with increasing differences in kurtosis; there is no similar evidence for skewness.
Abstract: This paper investigates the use of auditory perceptualisation for analysing the statistical properties of time series data. We introduce the problem domain and provide basic background on higher-order statistics such as skewness and kurtosis. The chosen approach was direct audification because of the inherent time line and the high number of data points usually available for time series. We present the tools we developed to investigate this problem domain and elaborate on a listening test we conducted to find perceptual dimensions that would correlate with the statistical properties. The results provide evidence that kurtosis correlates with roughness or sharpness and that participants were able to distinguish signals with increasing differences in kurtosis. For the setting in the experiment, the just noticeable difference was found to be 5. The collected data did not show any similar evidence for skewness, and it remains unclear whether this is perceivable in direct audification at all.
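The higher-order statistics the paper audifies can be computed directly. This is a minimal sketch using population moments and the excess-kurtosis convention (a Gaussian has excess kurtosis 0); the paper's own normalization convention is not stated here, and several are in use:

```python
import math

# Sample skewness and excess kurtosis via standardized central moments.
def moments(xs):
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    sd = math.sqrt(var)
    skew = sum(((x - mean) / sd) ** 3 for x in xs) / n
    kurt = sum(((x - mean) / sd) ** 4 for x in xs) / n - 3.0  # excess kurtosis
    return skew, kurt

# A symmetric two-point signal {-1, +1} has zero skew and excess kurtosis -2,
# the platykurtic extreme; heavy-tailed signals give large positive values.
print(moments([-1.0, 1.0, -1.0, 1.0]))
```

In direct audification the samples themselves become the waveform, so these two numbers summarize tail weight and asymmetry of the very signal the listener hears.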

5 citations


Proceedings Article
26 Jun 2007
TL;DR: This paper is based upon contemporary models of perception and presents proposals for additional spatial characteristics beyond classical concepts of three-dimensional positioning of virtual objects.
Abstract: Recent work in audio and visual perception suggests that, over and above sensory acuities, exploration of an environment is a most powerful perceptual strategy. For some uses, the plausibility of artificial sound environments might be dramatically improved if exploratory perception is accommodated. The composition and reproduction of spatially explorable sound fields involves a different set of problems from the conventional surround sound paradigm, developed to display music and sound effects to an essentially passive audience. This paper is based upon contemporary models of perception and presents proposals for additional spatial characteristics beyond classical concepts of three-dimensional positioning of virtual objects.

4 citations


Proceedings Article
01 Jan 2007
TL;DR: This paper outlines SoniPy, a broader and more robust model that can integrate the expertise and prior development of software components using a public-domain community-development approach.
Abstract: The need for better software tools was highlighted in the 1997 Sonification Report [1]. It included some general proposals for adapting sound synthesis software to the needs of sonification research. Now, a decade later, it is evident that the demands on software by sonification research are greater than those afforded by music composition and sound synthesis software. This paper compares some major contributions made towards achieving the Report’s proposals with current sonification demands and outlines SoniPy, a broader and more robust model which can integrate the expertise and prior development of software components using a public-domain community-development approach.

Proceedings Article
01 Jun 2007
TL;DR: The paper reports on ongoing research on bimodal (audio-haptic) rendering of virtual objects and discusses current research directions and open issues, including multimodal interfaces and virtual environments, automatic recognition and classification, and sound design.
Abstract: This review paper discusses the literature on perception and synthesis of environmental sounds. Relevant studies in ecological acoustics and multimodal perception are reviewed, and physically-based sound synthesis techniques for various families of environmental sounds are compared. Current research directions and open issues, including multimodal interfaces and virtual environments, automatic recognition and classification, and sound design, are discussed. The focus is especially on applications of physically-based techniques for synthesis of environmental sounds in interactive multimodal systems. The paper also reports on ongoing research on bimodal (audio-haptic) rendering of virtual objects.

Proceedings Article
01 Jun 2007
TL;DR: A multimodal architecture in which audio and haptic textures are simulated in real-time using physical models shows that auditory cues significantly influence the haptic experience of virtual textures.
Abstract: We propose a multimodal architecture in which audio and haptic textures are simulated in real-time using physical models. Experiments evaluating audio-haptic interaction in texture perception show that auditory cues significantly influence the haptic perception of virtual textures.

Proceedings Article
01 Jun 2007
TL;DR: No significant influence of auditory cues on perceived order in depth was found, except when the visual information was totally ambiguous: in this case, the perceived order showed limited dependence on the acoustic information.
Abstract: We present an experiment investigating the influence of auditory cues on visually perceived order in depth. Visual stimuli consisted of a layered 2D drawing of two squares, respectively blue and red, using semi-transparency. Auditory signals of the two words “red” and “blue” were presented simultaneously with the images. Subjects were required to determine which square appeared in front of the other in these cross-modal conditions. The coefficient of transparency as well as the audio level difference between the two speech signals “red” and “blue” were systematically varied. No significant influence of auditory cues on perceived order in depth was found, except when the visual information was totally ambiguous: in this case, the perceived order showed limited dependence on the acoustic information.