
Showing papers presented at the "International Conference on Auditory Display" in 2013


Proceedings Article
06 Jul 2013
TL;DR: An overview and critical review of the principal research to date in EEG data sonification is presented and future application domains which may benefit from new capabilities in the real-time sonification of EEG data are discussed.
Abstract: Over the last few decades there has been steady growth in research that addresses the real-time sonification of electroencephalographic (EEG) data. Diverse application areas include medical data screening, Brain-Computer Interfaces (BCI), neurofeedback, affective computing, and applications in the arts. The present paper presents an overview and critical review of the principal research to date in EEG data sonification. Firstly, we identify several sub-domains of real-time EEG sonification and discuss their diverse approaches and goals. Secondly, we describe our search and inclusion criteria, and then present a synoptic summary table spanning over fifty different research projects or published research findings. Thirdly, we analyze sonification approaches to the various EEG data dimensions, such as time-frequency filtering, signal level, and location, before going on to consider higher-order EEG features. Finally, we discuss future application domains which may benefit from new capabilities in the real-time sonification of EEG data. We believe that the present critical review may help to reduce research fragmentation and may aid future collaboration in this emerging multidisciplinary area.
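
As a concrete illustration of one such mapping (our sketch, not code from the paper), the following maps the alpha-band (8-12 Hz) power of a single EEG channel onto the pitch of short tones; the sample rate, the placeholder signal, and the pitch range are all assumptions:

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 256                          # EEG sample rate in Hz (assumed)
eeg = np.random.randn(fs * 10)    # placeholder for a real EEG channel

# Band-pass to the alpha band and estimate power in ten 1-second windows.
sos = butter(4, [8, 12], btype="bandpass", fs=fs, output="sos")
alpha = sosfilt(sos, eeg)
power = np.array([np.mean(w ** 2) for w in np.split(alpha, 10)])

# Map normalized band power onto an audible pitch range (200-800 Hz)
# and synthesize one short sine tone per analysis window.
norm = (power - power.min()) / (np.ptp(power) + 1e-12)
pitches = 200 + 600 * norm

audio_fs = 44100
tones = [np.sin(2 * np.pi * p * np.arange(audio_fs // 2) / audio_fs)
         for p in pitches]
audio = np.concatenate(tones)     # play or save with sounddevice/soundfile
```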

33 citations


Proceedings Article
01 Jan 2013
TL;DR: This paper introduces Blended Sonifications as sonifications that blend into the users’ environment without confronting users with any explicitly perceived technology, and presents interface examples, both for mediated communication and information display applications.
Abstract: In recent years, graphical user interfaces have become almost ubiquitous in the form of notebooks, smartphones, and tablets. These systems normally force the user to attend to an often very specific and narrow screen, squeezing the information through a chokepoint. This ties the user's attention to the device and interferes with other activities and social interaction. In this paper we introduce Blended Sonifications as sonifications that blend into the users' environment without confronting users with any explicitly perceived technology. Blended Sonification systems can be used either to display information or to provide ambient communication channels. We present a framework that guides developers towards the identification of suitable information sources and appropriate auditory interfaces, aiming to improve the design of interactions and experiences. Along with the introduction and definition of the framework, this paper presents interface examples, both for mediated communication and for information display applications.
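
As one hypothetical reading of the blended-sonification idea (not the authors' implementation), the sketch below subtly modulates an ambient rain texture with an everyday information source; the `unread_messages` helper and the parameter names are invented for illustration:

```python
def unread_messages() -> int:
    """Hypothetical data source; stands in for a real mail/IM query."""
    return 3

def blended_params(base_density: float = 0.8) -> dict:
    """Map the message count to small changes in an ambient rain texture.

    The modulation range is deliberately narrow so the display blends into
    the existing soundscape instead of demanding attention.
    """
    n = min(unread_messages(), 10)
    return {
        "drop_density": base_density * (1.0 + 0.05 * n),  # at most +50 %
        "brightness": 0.2 + 0.02 * n,                     # subtle timbre shift
    }

print(blended_params())   # feed these parameters to any ambient synth
```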

12 citations


Proceedings Article
01 Jul 2013
TL;DR: This paper outlines an argument for why sonifiers using parametric-mapping sonification should consider incorporating micro-gestural inflections if they are to mitigate The Mapping Problem and enhance the intelligibility of sonified data.
Abstract: Most of the software tools used for data sonification have been adopted or adapted from those designed to compose computer music, which, in turn, adopted them from abstractly notated scores. Such adoptions are not value-free; by using them, the cultural paradigms underlying the music for which the tools were made have influenced the conceptualization and, it is argued, the effectiveness of data sonifications. Recent research in cognition supports studies in empirical musicology suggesting that listening is not a passive ingestion of organised sounds but an embodied activity that invisibly enacts gestures of what is heard. This paper outlines an argument for why sonifiers using parametric-mapping sonification should consider incorporating micro-gestural inflections if they are to mitigate The Mapping Problem and enhance the intelligibility of sonified data.
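
To illustrate what such micro-gestural inflections might look like in a parametric mapping (the inflection shapes below are our own assumption, not the author's specification), this sketch adds a short onset glide and a gentle vibrato to each data-driven tone:

```python
import numpy as np

audio_fs = 44100

def tone(freq, dur=0.4, inflect=True):
    """One data-driven tone, optionally with micro-gestural inflections."""
    t = np.arange(int(audio_fs * dur)) / audio_fs
    f = np.full_like(t, float(freq))
    if inflect:
        f *= 1 + 0.03 * np.exp(-t / 0.05)            # short onset glide
        f *= 1 + 0.005 * np.sin(2 * np.pi * 5 * t)   # gentle 5 Hz vibrato
    phase = 2 * np.pi * np.cumsum(f) / audio_fs
    env = np.minimum(1, t / 0.01) * np.exp(-t / 0.3) # percussive envelope
    return env * np.sin(phase)

data = [0.1, 0.5, 0.9, 0.3]                          # normalized data values
audio = np.concatenate([tone(200 + 600 * d) for d in data])
```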

9 citations


Proceedings Article
06 Jul 2013
TL;DR: This paper presents the BlenderCAVE project, which extends the 3D content creation software Blender and its Game Engine to Virtual Reality (VR) applications and integrates a complete VR framework, compatible with the three main operating systems, for any given VR architecture configuration.
Abstract: This paper presents the BlenderCAVE project, which extends the 3D content creation software Blender and its Game Engine (BGE) to Virtual Reality (VR) applications. Based on a multi-screen non-stereoscopic adaptation of the BGE [Gascon et al., 2010], BlenderCAVE now integrates a complete VR framework, compatible with the three main operating systems, for any given VR architecture configuration. It has been developed by audio and VR researchers, with support from the Blender community, on LIMSI's state-of-the-art VR platforms. Acting as a scene graph, BlenderCAVE handles multi-screen/multi-user tracked stereoscopic rendering through an efficient low-level master/slave synchronization process while controlling spatial audio rendering (Ambisonics, multi-user binaural, WFS, etc.) and haptic events through the OSC and VRPN protocols. The scene creation process itself is reduced to simple Blender manipulations, including basic Python programming easily carried out on standard laptops. OSC client and spatial audio rendering methods have thus far been implemented in the Max/MSP audio programming environment.
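
As a rough sketch of the kind of OSC control described here (the authors implement the receiving end in Max/MSP; the address pattern and port below are assumptions, not BlenderCAVE's actual API), a scene graph could forward source positions to a spatial audio renderer like this, using the python-osc package:

```python
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)   # renderer host/port (assumed)

def update_source_position(source_id: int, x: float, y: float, z: float):
    """Forward one audio source's position to the spatial renderer."""
    client.send_message(f"/source/{source_id}/xyz", [x, y, z])

# e.g. called once per frame from the game engine's logic loop
update_source_position(1, 2.0, 0.5, 1.7)
```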

9 citations


Proceedings Article
01 Jul 2013

8 citations


Proceedings Article
01 Jul 2013
TL;DR: A new approach is presented for an unobtrusive and affective ambient auditory information display that helps users become and stay aware of water and energy consumption while taking a shower; the 4/5-factor approach adapts the auditory display's output so that it supports a slow but steady adjustment of the personal showering habit over time.
Abstract: Although most of us strive to develop sustainable and less resource-intensive behavior, this is unfortunately a difficult task, because we are often unaware of relevant information, or our focus of attention lies elsewhere. Based on this observation, we present a new approach for an unobtrusive and affective ambient auditory information display for becoming and staying aware of water and energy consumption while taking a shower. Using the interaction sound of water drops falling onto the bathtub as a carrier of information, our system helps users stay in touch with resource-related variables. We explore the use of an affective dimension as an additional layer of information and introduce our 4/5-factor approach to adapt the auditory display's output so that it supports a slow but steady adjustment of the personal showering habit over time. We present and discuss several alternative sound and interaction designs.
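
As a loose illustration of such a slowly adapting display (our reading only; the paper's 4/5-factor approach is not reproduced here, and the mappings and adaptation rate below are assumptions), consumption could be compared against a personal target that is gently lowered between showers:

```python
class ShowerDisplay:
    def __init__(self, reference_liters: float = 60.0, adapt: float = 0.98):
        self.reference = reference_liters   # personal per-shower target
        self.adapt = adapt                  # gentle tightening between showers

    def drop_sound_params(self, liters_so_far: float) -> dict:
        """Map consumption relative to the target to drop-sound qualities."""
        ratio = liters_so_far / self.reference
        return {
            "pitch_shift": 1.0 + 0.5 * max(0.0, ratio - 1.0),  # rises past target
            "valence": max(0.0, 1.0 - ratio),   # affective layer: relaxed -> tense
        }

    def end_of_shower(self) -> None:
        self.reference *= self.adapt        # nudge the target down over time
```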

5 citations


Proceedings Article
06 Jul 2013
TL;DR: The results show that frequencies above 2 kHz provide information for localization of the object, whereas the lower frequency range might be used for size determination, and it is shown that stationary sound signals in echolocation can provide relevant acoustic cues.
Abstract: Some visually impaired people are able to recognize their surroundings by emitting oral sounds and listening to the sound reflected by objects and walls. This is known as human echolocation. The present paper reports on the calculation of objective auditory cues present in human echolocation, by means of the boundary element method, using a spherical model of the human head in the presence of a reflecting disc at different positions. The studied frequency range is 100 Hz to 6.5 kHz. The results show that frequencies above 2 kHz provide information for localization of the object, whereas the lower frequency range might be used for size determination. It is also shown that stationary sound signals in echolocation can provide relevant acoustic cues, because displacements in the proximity of a reflecting object lead to frequency-dependent amplitude modulations.
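
A drastically simplified two-path model (not the paper's boundary-element computation) already shows why displacements near a reflector produce frequency-dependent amplitude modulations: the echo's extra delay creates a spectral ripple whose peaks and notches shift with distance. The reflection coefficient below is an assumed constant:

```python
import numpy as np

c = 343.0                          # speed of sound in m/s
f = np.linspace(100, 6500, 1000)   # frequency range studied in the paper (Hz)

def ear_spectrum(d: float, reflect: float = 0.3) -> np.ndarray:
    """Magnitude at the ear for an ideal reflector at distance d (meters)."""
    tau = 2 * d / c                # extra propagation delay of the echo
    return np.abs(1 + reflect * np.exp(-2j * np.pi * f * tau))

ripple_a = ear_spectrum(1.00)
ripple_b = ear_spectrum(1.05)      # a 5 cm displacement shifts the ripple
print(np.max(np.abs(ripple_a - ripple_b)))
```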

3 citations