
Showing papers presented at "International Conference on Auditory Display in 2011"


Proceedings Article
01 Jun 2011
TL;DR: A meta-study of previous sonification designs taking physical quantities as input data is introduced to build a solid foundation for future sonification work, so that auditory display researchers can benefit from former studies instead of starting from scratch when beginning new sonification projects.
Abstract: We introduce a meta-study of previous sonification designs taking physical quantities as input data. The aim is to build a solid foundation for future sonification work so that auditory display researchers can benefit from former studies instead of starting from scratch when beginning new sonification projects. This work is at an early stage, and the objective of this paper is to introduce the methodology rather than to come to definitive conclusions. After a historical introduction, we explain how to collect a large number of articles and extract useful information about mapping strategies. Then we present the physical quantities grouped according to conceptual dimensions, as well as the sound parameters used in sonification designs, and we summarize the current state of the study by listing the couplings extracted from the article database. A total of 54 articles have been examined for the present article. Finally, a preliminary analysis of the results is performed.
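The coupling database the abstract describes can be pictured as a small lookup structure relating physical quantities (grouped by conceptual dimension) to sound parameters. A minimal sketch with entirely made-up example entries, not data from the actual 54-article survey:

```python
# Hypothetical sketch of a sonification-coupling database.
# All entries are illustrative, not results from the paper's survey.
couplings = [
    {"quantity": "temperature", "dimension": "thermal", "sound_param": "pitch"},
    {"quantity": "velocity",    "dimension": "kinetic", "sound_param": "tempo"},
    {"quantity": "pressure",    "dimension": "mechanical", "sound_param": "loudness"},
    {"quantity": "distance",    "dimension": "spatial", "sound_param": "reverb level"},
]

def params_for_dimension(dim):
    """List the sound parameters coupled to quantities in one conceptual dimension."""
    return sorted({c["sound_param"] for c in couplings if c["dimension"] == dim})

print(params_for_dimension("thermal"))  # → ['pitch']
```

A designer starting a new project could query such a table for precedents before inventing a new mapping, which is the reuse the paper aims to enable.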

13 citations


Proceedings Article
01 Jan 2011
TL;DR: While ‘sonification’ of emotions has been applied in opera and film for some time, the present study allows a new way of conceptualizing the ability of sound to communicate affect through music.
Abstract: This paper discusses the uses of sound to provide information about emotion. The review of the literature suggests that music is able to communicate and express a wide variety of emotions. The novel aspect of the present study is a reconceptualisation of this literature by considering music as having the capacity to sonify emotions. A study was conducted in which excerpts of non-vocal film music were selected to sonify six putative emotions. Participants were then invited to identify which emotions each excerpt sonified. The results demonstrate a good specificity of emotion sonification, with errors attributable to selection of emotions close in meaning to the target (excited confused with happy, but not with sad, for example). While ‘sonification’ of emotions has been applied in opera and film for some time, the present study allows a new way of conceptualizing the ability of sound to communicate affect through music. Philosophical and psychological implications are considered.

11 citations


Proceedings Article
01 Jan 2011
TL;DR: A new method to quantify the coupling in hybrid design matrices for traffic intersections by taking into account the presence of coupling, the types of conflict that coupling may introduce, and the impact that the conflict may have on the intersection is proposed.
Abstract: This work proposes a new method to quantify the coupling in hybrid design matrices for traffic intersections by taking into account the presence of coupling, the types of conflict that coupling may introduce, and the impact that the conflict may have on the intersection. The result is a single numerical value, called the coupling impact index, which can be used to select the safest intersection design for a given situation. This technique is demonstrated with a case study which calculates the coupling impact index for three traffic intersections based on two sets of traffic conditions and suggests the best intersection for the anticipated traffic volumes provided.
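The abstract does not reproduce the authors' formula, but the idea of collapsing coupling presence, conflict type, and conflict impact into one number can be sketched as a toy weighted sum. The severity weights and data below are illustrative assumptions, not the paper's actual formulation:

```python
# Hypothetical severity weights per conflict type (illustrative only).
CONFLICT_SEVERITY = {"none": 0, "diverging": 1, "merging": 2, "crossing": 3}

def coupling_impact_index(design_matrix):
    """Toy index: sum severity-weighted impacts over all coupled cells."""
    return sum(cell["impact"] * CONFLICT_SEVERITY[cell["conflict"]]
               for cell in design_matrix if cell["coupled"])

# Two made-up intersection designs under one traffic condition.
intersections = {
    "roundabout": [{"coupled": True,  "conflict": "merging",  "impact": 0.4},
                   {"coupled": False, "conflict": "none",     "impact": 0.0}],
    "signalized": [{"coupled": True,  "conflict": "crossing", "impact": 0.7}],
}

# Lower index → safer design under this toy scoring.
safest = min(intersections, key=lambda k: coupling_impact_index(intersections[k]))
print(safest)  # → roundabout
```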

9 citations


Proceedings Article
01 Jun 2011
TL;DR: This work explores how sonification of human brain and body signals can enhance user experience in collaborative music composition and builds a novel multimodal interactive system that allows performers to generate and control sounds using their own or their fellow team member's physiology.
Abstract: Physiological Computing has been applied in different disciplines, and is becoming popular and widespread in Human-Computer Interaction, due to device miniaturization and improvements in real-time processing. However, most of the studies on physiology-based interfaces focus on single-user systems, while their use in Computer-Supported Collaborative Work (CSCW) is still emerging. The present work explores how sonification of human brain and body signals can enhance user experience in collaborative music composition. For this task, a novel multimodal interactive system is built using a musical tabletop interface (Reactable) and a hybrid Brain-Computer Interface (BCI). The described system allows performers to generate and control sounds using their own or their fellow team member's physiology. Recently, we assessed this physiology-based collaboration system in a pilot experiment. Discussion of the results and future work on new sonifications will be accompanied by a practical demonstration during the conference.
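As an illustration of the kind of physiology-to-sound mapping such a system might use (hypothetical; the abstract does not specify the actual mappings), relative EEG band power could drive a synthesis parameter such as a filter cutoff:

```python
import numpy as np

def band_power(eeg, sr, lo, hi):
    """Spectral power of one EEG frequency band (e.g. alpha, 8-12 Hz)."""
    spec = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(eeg.size, 1 / sr)
    return spec[(freqs >= lo) & (freqs < hi)].sum()

def physiology_to_cutoff(eeg, sr, f_min=200.0, f_max=4000.0):
    """Hypothetical mapping: relative alpha power → filter cutoff in Hz."""
    alpha = band_power(eeg, sr, 8, 12)
    total = band_power(eeg, sr, 1, 40) + 1e-12
    return f_min + (alpha / total) * (f_max - f_min)
```

A strong alpha component (often associated with relaxation) would push the cutoff toward `f_max`; the names and ranges here are assumptions for the sketch.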

7 citations


Proceedings Article
20 Jun 2011
TL;DR: Results show that listeners weigh temporal and spectral cues differently in the recognition of a specific floor; however, this tendency is not polarized enough to enable interaction designers to reduce the functionality of a walking sound synthesizer to simple operations in the temporal or spectral domain depending on the simulated material.
Abstract: In a multiple-choice auditory experimental task, listeners had to discriminate walks over floors made of concrete, wood, gravel, or dried twigs. Sound stimuli were obtained by mixing temporal and spectral signal components, resulting in hybrid formulations of such materials. In this way we analyzed the salience of the corresponding temporal and spectral cues in the recognition of a specific floor. Results show that listeners weigh such cues differently during recognition; however, this tendency is not polarized enough to enable interaction designers to reduce the functionality of a walking sound synthesizer to simple operations in the temporal or spectral domain depending on the simulated material.
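One way such hybrid stimuli can be constructed is to impose the amplitude envelope (temporal component) of one material's footstep on the long-term spectrum (spectral component) of another. This is a minimal sketch of the general technique, not the authors' actual stimulus-generation code:

```python
import numpy as np

def hybrid_footstep(temporal_src, spectral_src):
    """Combine the amplitude envelope of one recording with the long-term
    spectral colour of another (e.g. gravel timing + wood timbre)."""
    # Amplitude envelope of the temporal source: rectify, then smooth.
    kernel = np.ones(256) / 256
    env = np.convolve(np.abs(temporal_src), kernel, mode="same")
    # Spectral colour of the other source, applied to random-phase noise.
    mag = np.abs(np.fft.rfft(spectral_src))
    phase = np.exp(1j * 2 * np.pi * np.random.rand(mag.size))
    noise = np.fft.irfft(mag * phase, n=spectral_src.size)
    noise /= np.max(np.abs(noise)) + 1e-12
    n = min(env.size, noise.size)
    return env[:n] * noise[:n]
```

Varying how much of each domain comes from each material yields the intermediate, "hybrid" formulations the experiment needs.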

6 citations


Proceedings Article
01 Jun 2011
TL;DR: It is demonstrated that Higher Order Ambisonics is a viable and effective means for real-time, simultaneous spatialization in multiple locations, and that it enables a range of creative uses that explore the nature of space, distance and location in networked performance.
Abstract: In spite of recent widespread interest in network technologies for real-time musical collaboration between distant locations, there has been little focus on spatial audio in such applications. We discuss the potential for dynamic spatialization in the context of network music collaboration, in particular through the use of Higher Order Ambisonics. We describe a platform for real-time encoding, streaming and decoding of spatial audio using Ambisonics, and provide details of two case studies of creative applications built on top of this platform. We demonstrate that Higher Order Ambisonics is a viable and effective means for real-time, simultaneous spatialization in multiple locations, and that it enables a range of creative uses that explore the nature of space, distance and location in networked performance.
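For reference, Ambisonic encoding assigns a mono source to channels via fixed spherical-harmonic gains, which is what makes the format efficient to stream and decode independently at each site. The sketch below shows the standard first-order (FuMa B-format) case; the platform in the paper works at higher orders, which add further channels in the same fashion:

```python
import numpy as np

def encode_bformat(signal, azimuth, elevation):
    """First-order Ambisonic (FuMa B-format) encoding of a mono source.
    Angles in radians; returns channels stacked as (W, X, Y, Z)."""
    w = signal * (1 / np.sqrt(2))                      # omnidirectional
    x = signal * np.cos(azimuth) * np.cos(elevation)   # front-back
    y = signal * np.sin(azimuth) * np.cos(elevation)   # left-right
    z = signal * np.sin(elevation)                     # up-down
    return np.stack([w, x, y, z])
```

Because the encoded stream is speaker-independent, each networked location can decode the same four (or more, at higher orders) channels to its own local loudspeaker layout.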

3 citations


Proceedings Article
24 Jun 2011
TL;DR: This paper describes large-scale implementations of spatial audio systems which focus on the presentation of simplified spatial cues that appeal to auditory spatial perception, and proposes that a series of multiple, coordinated sound fields may provide better solutions for surround sound in large auditoria than traditional reproduction fields which surround the audience.
Abstract: This paper describes large-scale implementations of spatial audio systems which focus on the presentation of simplified spatial cues that appeal to auditory spatial perception. It reports a series of successful implementations of nested and multiple spatial audio fields that provide listeners with opportunities to explore complex sound fields and to receive cues pertaining to source behaviors within complex audio environments. These included systems designed as public sculptures capable of presenting engaging sound fields for ambulant listeners. The paper also considers questions of sound field perception and reception in relation to audio object scaling according to the dimensions of a sound reproduction system, and proposes that a series of multiple, coordinated sound fields may provide better solutions for surround sound in large auditoria than traditional reproduction fields which surround the audience. Particular attention is paid to experiences since 2008 with the multi-spatial sound system of The Morning Line, which has been exhibited as a public sculpture in a number of European cities.

3 citations