
Showing papers presented at the International Conference on Auditory Display in 2019


Proceedings ArticleDOI
01 Jun 2019
TL;DR: Future sonification design efforts that explicitly strive to meet either artistic or scientific goals may lead to greater clarity and success in the field and more widespread adoption of useful sonification techniques.
Abstract: Despite persistent research and design efforts over the last twenty years, widespread adoption of sonification to display complex data has largely failed to materialize, and many of the challenges to successful sonification identified in the past persist. Major impediments to the widespread adoption of sonification include fundamental perceptual differences between vision and audition, large individual differences in auditory perception, musical biases of sonification researchers, and the interdisciplinary nature of sonification research and design. The historical and often indiscriminate mingling of art and science in sonification design may be a root cause of some of these challenges. Future sonification design efforts that explicitly strive to meet either artistic or scientific goals may lead to greater clarity and success in the field and more widespread adoption of useful sonification techniques.

29 citations


Proceedings ArticleDOI
01 Jun 2019
TL;DR: The results indicate that persons who need to locate nearby objects in limited visibility conditions could benefit from the types of auditory displays considered here, and three auditory display concepts were developed and evaluated in the context of finding objects within a virtual room.
Abstract: In both extreme and everyday situations, humans need to find nearby objects that cannot be located visually. In such situations, auditory display technology could be used to display information supporting object targeting. Unfortunately, spatial audio inadequately conveys sound source elevation, which is crucial for locating objects in 3D space. To address this, three auditory display concepts were developed and evaluated in the context of finding objects within a virtual room, in either low or no visibility conditions: (1) a one-time height-denoting “area cue,” (2) ongoing “proximity feedback,” or (3) both. All three led to improvements in performance and subjective workload compared to no sound. Displays (2) and (3) led to the largest improvements. This pattern was smaller, but still present, when visibility was low, compared to no visibility. These results indicate that persons who need to locate nearby objects in limited visibility conditions could benefit from the types of auditory displays considered here.

16 citations
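The "proximity feedback" concept in the abstract above maps the distance to a target onto a continuous auditory parameter. The paper's exact mapping is not given in the abstract; the sketch below assumes a beep train whose rate and pitch rise as the target gets closer, purely as an illustration:

import numpy as np
from scipy.io import wavfile

SR = 44100

def proximity_feedback(distances_m, max_dist=3.0, dur_per_step=0.5):
    """Render a beep train whose rate and pitch rise as distance shrinks.
    `distances_m` is a sequence of target distances sampled over time."""
    out = []
    for d in distances_m:
        closeness = 1.0 - np.clip(d / max_dist, 0.0, 1.0)        # 0 = far, 1 = at target
        rate = 2.0 + 10.0 * closeness                            # beeps per second (assumed range)
        freq = 440.0 * 2 ** closeness                            # one-octave pitch sweep (assumed)
        t = np.arange(int(SR * dur_per_step)) / SR
        gate = (np.sin(2 * np.pi * rate * t) > 0).astype(float)  # square on/off gate
        out.append(0.3 * gate * np.sin(2 * np.pi * freq * t))
    return np.concatenate(out)

# Example: target approached from 3 m to 0.2 m
sig = proximity_feedback(np.linspace(3.0, 0.2, 12))
wavfile.write("proximity_feedback.wav", SR, (sig * 32767).astype(np.int16))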


Proceedings ArticleDOI
01 Jun 2019
TL;DR: This study demonstrates how a location in three-dimensional space can be sonified unambiguously by the implementation of perceptually orthogonal psychoacoustic attributes in monophonic playback.
Abstract: Physical attributes of sound interact perceptually, which makes it challenging to present a large amount of information simultaneously via sonification, without confusing the user. This paper presents the theory and implementation of a psychoacoustic signal processing approach for three-dimensional sonification. The direction and distance along the dimensions are presented via multiple perceptually orthogonal sound attributes in one auditory stream. Further auditory streams represent additional elements, like axes and ticks. This paper describes the mathematical and psychoacoustical foundations and discusses the three-dimensional sonification for a guidance task. Formulas, graphics and demo videos are provided. To facilitate use in virtually any setting, the approach is mono-compatible and even works on budget loudspeakers.

13 citations
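The abstract above describes mapping three spatial dimensions onto perceptually orthogonal attributes of a single, mono-compatible auditory stream. The paper's actual attribute choices are not listed in the abstract; the sketch below assumes pitch, roughness (amplitude modulation) and brightness as stand-ins to show the general idea:

import numpy as np
from scipy.io import wavfile

SR = 44100

def sonify_xyz(x, y, z, dur=1.0):
    """Mono sonification of a point (x, y, z), each coordinate in [0, 1].
    x -> pitch, y -> roughness (AM rate and depth), z -> brightness (harmonic weighting).
    These mappings are illustrative assumptions, not the paper's design."""
    t = np.arange(int(SR * dur)) / SR
    f0 = 220.0 * 2 ** (x * 2)                                   # two-octave pitch range
    am = 1.0 - 0.5 * y * (1 + np.sin(2 * np.pi * (4 + 28 * y) * t)) / 2
    sig = np.zeros_like(t)
    for k in range(1, 9):                                       # 8 harmonics
        sig += (1.0 / k ** (2.0 - 1.5 * z)) * np.sin(2 * np.pi * k * f0 * t)
    sig = am * sig
    return 0.2 * sig / np.max(np.abs(sig))

wavfile.write("xyz_point.wav", SR, (sonify_xyz(0.2, 0.7, 0.9) * 32767).astype(np.int16))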


Proceedings ArticleDOI
01 Jun 2019
TL;DR: The interface was extremely easy to learn and navigate, participants all had unique navigational styles and preferred using their own screen reader, and participants needed user interface features that made it easier to understand and answer questions about spatial properties and relationships.
Abstract: This study evaluated a web-based auditory map prototype built utilizing conventions found in audio games and presents findings from a set of tasks participants performed with the prototype. The prototype allowed participants to use their own computer and screen reader, contrary to most studies, which restrict use to a single platform and a self-voicing feature (providing a voice that talks by default). There were three major findings from the tasks: the interface was extremely easy to learn and navigate, participants all had unique navigational styles and preferred using their own screen reader, and participants needed user interface features that made it easier to understand and answer questions about spatial properties and relationships. Participants reported an average task load score of 39 on the NASA Task Load Index and a confidence level of 46/100 for actually using the prototype to physically navigate.

13 citations


Proceedings ArticleDOI
01 Jun 2019
TL;DR: A laboratory study tested whether participants could differentiate between seven ranges of oxygen saturation using the proposed psychoacoustic sonification; on average, participants identified the correct SpO2 range in 84% of cases.
Abstract: Oxygen saturation monitoring of neonates is a demanding task, as oxygen saturation (SpO2) has to be maintained in a particular range. However, auditory displays of conventional pulse oximeters are not suitable for informing a clinician about deviations from a target range. A psychoacoustic sonification for neonatal oxygen saturation monitoring is presented. At its core, it consists of a continuous Shepard tone. A laboratory study tested whether participants (N = 6) could differentiate between seven ranges of oxygen saturation using the proposed sonification. On average, participants identified the correct SpO2 range in 84% of cases. Moreover, detection rates differed significantly between the seven ranges and as a function of the magnitude of SpO2 change between two consecutive values. Possible explanations for these findings are discussed and implications for further improvements of the presented sonification are proposed.

11 citations
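The core of the proposed display is a continuous Shepard tone, i.e. octave-spaced partials under a fixed spectral envelope so that pitch chroma is heard without a clear register. A minimal Shepard-tone sketch follows; how the tone's parameters actually encode the seven SpO2 ranges is not specified in the abstract, so the deviation-to-chroma mapping below is only an assumption:

import numpy as np
from scipy.io import wavfile

SR = 44100

def shepard_tone(pitch_class, dur=1.0, f_min=30.0, n_octaves=9, center_oct=4.5, width=2.0):
    """One Shepard tone: octave-spaced sinusoids weighted by a Gaussian
    envelope over log2 frequency."""
    t = np.arange(int(SR * dur)) / SR
    sig = np.zeros_like(t)
    for octv in range(n_octaves):
        f = f_min * 2 ** (octv + pitch_class)                   # pitch_class in [0, 1)
        w = np.exp(-((octv + pitch_class - center_oct) ** 2) / (2 * width ** 2))
        sig += w * np.sin(2 * np.pi * f * t)
    return 0.2 * sig / np.max(np.abs(sig))

def spo2_to_chroma(spo2, target=(90, 95)):
    """Assumed illustration: chroma drifts with the deviation from a target SpO2 range."""
    dev = 0.0 if target[0] <= spo2 <= target[1] else (spo2 - np.clip(spo2, *target))
    return (0.5 + 0.05 * dev) % 1.0

sig = np.concatenate([shepard_tone(spo2_to_chroma(s), 0.5) for s in (97, 93, 85)])
wavfile.write("spo2_shepard.wav", SR, (sig * 32767).astype(np.int16))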


Proceedings ArticleDOI
01 Jun 2019
TL;DR: This project aims to design a sonification system allowing human operators to make better decisions about anomalous behavior while occupied in other (mainly visual) tasks, using a state-of-the-art detection algorithm and data sets from the Battle of the Attack Detection Algorithms.
Abstract: Water distribution systems are undergoing a process of intensive digitalization, adopting networked devices for monitoring and control. While this transition improves efficiency and reliability, these infrastructures are increasingly exposed to cyber-attacks. Cyber-attacks engender anomalous system behaviors which can be detected by data-driven algorithms that monitor sensor readings to disclose the presence of potential threats. At the same time, the use of sonification in real-time process monitoring has grown in importance as a valid alternative that avoids information overload and allows peripheral monitoring. Our project aims to design a sonification system allowing human operators to make better decisions about anomalous behavior while occupied in other (mainly visual) tasks. Using a state-of-the-art detection algorithm and data sets from the Battle of the Attack Detection Algorithms, a series of sonification prototypes were designed and tested in the real world. This paper illustrates the design process and the experimental data collected, as well as results and plans for future steps.

10 citations


Proceedings ArticleDOI
01 Jun 2019
TL;DR: The current status of theory in sonification is assessed as it relates to each component, and, where possible, recommendations are offered for practices that can advance theory and theoretically-motivated research and practice in the field of sonification.
Abstract: Despite over 25 years of intensive work in the field, sonification research and practice continue to be hindered by a lack of theory. In part, sonification theory has languished, because the requirements of a theory of sonification have not been clearly articulated. As a design science, sonification deals with artifacts—artificially created sounds and the tools for creating the sounds. Design fields require theoretical approaches that are different from theory-building in natural sciences. Gregor and Jones [1] described eight general components of design theories: (1) purposes and scope; (2) constructs; (3) principles of form and function; (4) artifact mutability; (5) testable propositions; (6) justificatory knowledge; (7) principles of implementation; and (8) expository instantiations. In this position paper, I examine these components as they relate to the field of sonification and use these components to clarify requirements for a theory of sonification. The current status of theory in sonification is assessed as it relates to each component, and, where possible, recommendations are offered for practices that can advance theory and theoretically-motivated research and practice in the field of sonification.

7 citations


Proceedings ArticleDOI
01 Jun 2019
TL;DR: The authors developed their own soccer data set through computer vision analysis of footage from a tactical overhead camera; evaluation revealed an overall preference for the Pitch Variation and Musical Moments methods and a robust trade-off between usability and enjoyability.
Abstract: We present multiple approaches to soccer sonification, focusing on enhancing the experience for a general audience. For this work, we developed our own soccer data set through computer vision analysis of footage from a tactical overhead camera. This data set included X, Y coordinates for the ball and players throughout, as well as passes, steals and goals. After a divergent creation process, we developed four main methods of sports sonification for entertainment. For the Tempo Variation and Pitch Variation methods, tempo or pitch is operationalized to demonstrate ball and player movement data. The Key Moments method features only pass, steal and goal data, while the Musical Moments method takes existing music and attempts to align the track with important data points. Evaluation was done using a combination of qualitative focus groups and quantitative surveys, with 36 participants completing hour-long sessions. Results indicated an overall preference for the Pitch Variation and Musical Moments methods, and revealed a robust trade-off between usability and enjoyability.

6 citations
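The Pitch Variation method in the entry above operationalizes pitch to convey ball and player movement. A rough sketch of that idea under assumed ranges (the paper's actual scaling, frame rate and event sounds are not described in the abstract):

import numpy as np
from scipy.io import wavfile

SR = 44100

def pitch_variation(ball_xy, fps=5, f_lo=220.0, f_hi=880.0):
    """Continuously map the ball's x position (0..1 across the field) to pitch.
    `ball_xy` is an array of (x, y) coordinates sampled `fps` times per second."""
    step = int(SR / fps)
    out = np.zeros(len(ball_xy) * step)
    phase = 0.0
    for i, (x, _) in enumerate(ball_xy):
        f = f_lo * (f_hi / f_lo) ** x                   # exponential (musically even) pitch scale
        t = np.arange(step) / SR
        out[i * step:(i + 1) * step] = 0.3 * np.sin(phase + 2 * np.pi * f * t)
        phase += 2 * np.pi * f * step / SR              # keep phase continuous between frames
    return out

# Toy trajectory: the ball drifts across the field
ball = np.column_stack([np.linspace(0.1, 0.9, 50), np.full(50, 0.5)])
wavfile.write("pitch_variation.wav", SR, (pitch_variation(ball) * 32767).astype(np.int16))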


Proceedings ArticleDOI
01 Jun 2019
TL;DR: The method of real-time Auditory Contrast Enhancement (ACE) is introduced, derived from algorithms for speech enhancement as well as from the remarkable sound processing mechanisms of the human ear, and is able to significantly enhance spectral and temporal contrast.
Abstract: Every day, we rely on the information that is encoded in the auditory feedback of our physical interactions. With the goal to perceptually enhance those sound characteristics that are relevant to us — especially within professional practices such as percussion and auscultation — we introduce the method of real-time Auditory Contrast Enhancement (ACE). It is derived from algorithms for speech enhancement as well as from the remarkable sound processing mechanisms of our ears. ACE is achieved by individual sharpening of spectral and temporal structures contained in a sound while maintaining its natural gestalt. With regard to the targeted real-time applications, the proposed method is designed for low latency. As the discussed examples illustrate, it is able to significantly enhance spectral and temporal contrast.

6 citations


Proceedings ArticleDOI
01 Jun 2019
TL;DR: A listening experiment on Portuguese consumption habits over the course of ten days, gathered from a Portuguese retail company, is presented, focusing on how to represent this time-series data as a musical piece that engages the listener's attention and promotes an active listening attitude, exploring the influence of aesthetics in the perception of auditory displays.
Abstract: The stimulus for consumption is present in everyday life, where major retail companies play a role in providing a large range of products every single day. Using sonification techniques, we present a listening experiment on Portuguese consumption habits over the course of ten days, gathered from a Portuguese retail company. We focused on how to represent this time-series data as a musical piece that would engage the listener's attention and promote an active listening attitude, exploring the influence of aesthetics in the perception of auditory displays. Through a phenomenological approach, ten participants were interviewed to gather perceptions evoked by the piece and how the consumption variations were understood. The tested composition revealed relevant associations about the data, with the consumption context indirectly present throughout the emerging themes: from the idea of everyday life, routine and consumption peaks to aesthetic aspects such as the passage of time, frenzy and consumerism. Documentary, movie imagery and soundtrack were also perceived. Several musical aspects were also mentioned, such as the constant, steady rhythm and the repetitive nature of the composition, and sensations such as pleasantness, satisfaction, annoyance, boredom and anxiety. These collected topics convey the incessant feeling and consumption needs which portray our present society, offering new paths for comprehending musical sound perception and its further exploration.

6 citations


Proceedings ArticleDOI
01 Jun 2019
TL;DR: A sonification-based approach to raise user awareness by conveying information on web tracking through sound while the user is browsing the web, which adds the capability to monitor any network connection, including all browsers, applications and devices.
Abstract: Web tracking is found on 90% of common websites. It allows online behavioral analysis which can reveal insights to sensitive personal data of an individual. Most users are not aware of the amount of web tracking happening in the background. This paper contributes a sonification-based approach to raise user awareness by conveying information on web tracking through sound while the user is browsing the web. We present a framework for live web tracking analysis, conversion to Open Sound Control events and sonification. The amount of web tracking is disclosed by sound each time data is exchanged with a web tracking host. When a connection to one of the most prevalent tracking companies is established, this is additionally indicated by a voice whispering the company name. Compared to existing approaches on web tracking sonification, we add the capability to monitor any network connection, including all browsers, applications and devices. An initial user study with 12 participants showed empirical support for our main hypothesis: exposure to our sonification significantly raises web tracking awareness.
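The framework described above converts live web-tracking events into Open Sound Control messages that drive the sonification. A minimal sketch of that conversion step using the python-osc package; the OSC addresses, port, tracker list and message layout are assumptions for illustration, not the authors' schema:

from pythonosc.udp_client import SimpleUDPClient

# Hypothetical sonification engine listening for OSC (e.g. SuperCollider on port 57120).
client = SimpleUDPClient("127.0.0.1", 57120)

PREVALENT_TRACKERS = {"doubleclick.net", "google-analytics.com", "facebook.com"}

def on_tracking_event(host: str, bytes_exchanged: int) -> None:
    """Called whenever the network monitor sees data exchanged with a tracking host."""
    client.send_message("/tracking/event", [host, bytes_exchanged])
    if any(host.endswith(t) for t in PREVALENT_TRACKERS):
        # Triggers the whispered company name in the sonification layer.
        client.send_message("/tracking/prevalent", host)

on_tracking_event("stats.g.doubleclick.net", 2048)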

Proceedings ArticleDOI
01 Jun 2019
TL;DR: The data show that an immersive exocentric environment enhances spatial perception, and that the physical implementation using high density loudspeaker arrays enables significantly improved spatial perception accuracy relative to the egocentric and virtual binaural approaches.
Abstract: The Spatial Audio Data Immersive Experience (SADIE) project aims to identify new foundational relationships pertaining to human spatial aural perception, and to validate existing relationships. Our infrastructure consists of an intuitive interaction interface, an immersive exocentric sonification environment, and a layer-based amplitude-panning algorithm. Here we highlight the system’s unique capabilities and provide findings from an initial externally funded study that focuses on the assessment of human aural spatial perception capacity. When compared to the existing body of literature focusing on egocentric spatial perception, our data show that an immersive exocentric environment enhances spatial perception, and that the physical implementation using high density loudspeaker arrays enables significantly improved spatial perception accuracy relative to the egocentric and virtual binaural approaches. The preliminary observations suggest that human spatial aural perception capacity in real-world-like immersive exocentric environments that allow for head and body movement is significantly greater than in egocentric scenarios where head and body movement is restricted. Therefore, in the design of immersive auditory displays, the use of immersive exocentric environments is advised. Further, our data identify a significant gap between physical and virtual human spatial aural perception accuracy, which suggests that further development of virtual aural immersion may be necessary before such an approach may be seen as a viable alternative.

Proceedings ArticleDOI
01 Jun 2019
TL;DR: It is argued that the use of less intrusive continuous sonic interaction could be more beneficial for the user experience and that auditory displays have the potential to help solve these issues.
Abstract: The current position paper discusses vital challenges related to the user experience design in unsupervised, highly automated cars. These challenges are: (1) how to avoid motion sickness, (2) how to ensure users’ trust in the automation, (3) how to ensure usability and support the formation of accurate mental models of the automation system, and (4) how to provide a pleasant and enjoyable experience. We argue that auditory displays have the potential to help solve these issues. While auditory displays in modern vehicles typically make use of discrete and salient cues, we argue that the use of less intrusive continuous sonic interaction could be more beneficial for the user experience.

Proceedings ArticleDOI
01 Jun 2019
TL;DR: The goal is to sonify paintings reflecting their art style and genre to improve the experience of both sighted and visually impaired individuals and to develop a multidimensional sonification algorithm that can better transcribe visual art into appropriate music.
Abstract: Sonification and data processing algorithms have advanced over the years to reach practical applications in our everyday life. Similarly, image processing techniques have improved over time. While a number of image sonification methods have already been developed, few have delved into potential synergies through the combined use of multiple data and image processing techniques. Additionally, little has been done on the use of image sonification for artworks, as most research has been focused on the transcription of visual data for people with visual impairments. Our goal is to sonify paintings reflecting their art style and genre to improve the experience of both sighted and visually impaired individuals. To this end, we have designed initial sonifications for paintings of abstractionism and realism, and conducted interviews with visual and auditory experts to improve our mappings. We believe the recommendations and design directions we have received will help develop a multidimensional sonification algorithm that can better transcribe visual art into appropriate music.

Proceedings ArticleDOI
01 Jun 2019
TL;DR: An evaluation of the effects of the feedback on speakers is conducted and initial findings assessing how different musical modulations might potentially affect the emotions and mental state of the speaker as well as semantic content of speech, and musical vocal parameters are presented.
Abstract: Changing the way one hears one’s own voice, for instance by adding delay or shifting the pitch in real-time, can alter vocal qualities such as speed, pitch contour, or articulation. We created new types of auditory feedback called Speech Companions that generate live musical accompaniment to the spoken voice. Our system generates harmonized chorus effects layered on top of the speaker’s voice that change chord at each pseudo-beat detected in the spoken voice. The harmonization variations follow predetermined chord progressions. For the purpose of this study we generated two versions: one following a major chord progression and the other one following a minor chord progression. We conducted an evaluation of the effects of the feedback on speakers and we present initial findings assessing how different musical modulations might potentially affect the emotions and mental state of the speaker as well as semantic content of speech, and musical vocal parameters.

Proceedings ArticleDOI
01 Jun 2019
TL;DR: This work explores effective methods of generating route overviews using audio, which can create a cognitive model of routes similar to that formed from visual maps and can help users plan their journey according to their preferences and prepare for it in advance.
Abstract: While travelling to new places, maps are often used to determine the specifics of the route to follow. This helps prepare for the journey by forming a cognitive model of the route in our minds. However, the process is predominantly visual and thus inaccessible to people who are either blind or visually impaired (BVI) or doing an activity where their eyes are otherwise engaged. This work explores effective methods of generating route overviews using audio, which can create a cognitive model similar to that formed from visual routes. The overviews thus generated can help users plan their journey according to their preferences and prepare for it in advance. This paper explores the usefulness and usability of auditory route overviews for the BVI and draws design implications for such a system following a 2-stage study with audio and sound designers and users. The findings underline that auditory route overviews are an important tool that can assist BVI users in making more informed travel choices. A properly designed auditory display might contain an integration of different sonification methods and interaction and customisation capabilities. Findings also show that such a system would benefit from the application of a participatory design approach.

Proceedings ArticleDOI
01 Jun 2019
TL;DR: Photone is an interactive installation combining color images with musical sonification, where the musical expression is generated based on the syntactic features of an image as well as its semantic features.
Abstract: Photone is an interactive installation combining color images with musical sonification. The musical expression is generated based on the syntactic (as opposed to semantic) features of an image as ...

Proceedings ArticleDOI
27 Jun 2019
TL;DR: It is suggested that soundscapes have great potential for creating a calm technology for maintaining awareness of Twitter data, and thatsoundscapes can be useful in helping people without prior experience in sound design think about sound in sophisticated ways and engage meaningfully in sonification design.
Abstract: In this paper, we explore the potential for everyday Twitter users to design and use soundscape sonifications as an alternative, “calm” modality for staying informed of Twitter activity. We first present the results of a survey assessing how 100 Twitter users currently use and change audio notifications. We then present a study in which 9 frequent Twitter users employed two user interfaces—with varying degrees of automation—to design, customize, and use soundscape sonifications of Twitter data. This work suggests that soundscapes have great potential for creating a calm technology for maintaining awareness of Twitter data, and that soundscapes can be useful in helping people without prior experience in sound design think about sound in sophisticated ways and engage meaningfully in sonification design.

Proceedings ArticleDOI
01 Jun 2019
TL;DR: The musical cues relate to notable network events in such a way as to minimize the amount of training time a human listener would need in order to make sense of the cues.
Abstract: Cyber defenders work in stressful, information-rich, and high-stakes environments. While other researchers have considered sonification for security operations centers (SOCs), the mappings of network events to sound parameters have produced aesthetically unpleasing results. This paper proposes a novel sonification process for transforming data about computer network traffic into music. The musical cues relate to notable network events in such a way as to minimize the amount of training time a human listener would need in order to make sense of the cues. We demonstrate our technique on a dataset of 708 million authentication events over nine continuous months from an enterprise network. We illustrate a volume-centric approach in relation to the amplitude of the input data, and also a volumetric approach mapping the input data signal into the number of notes played. The resulting music prioritizes aesthetics over bandwidth to balance performance with adoption.
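The abstract contrasts a volume-centric mapping (the event count drives how loud a note is) with a volumetric mapping (the event count drives how many notes are played). A small sketch of the two mappings over per-window authentication counts; the note pool, velocity ranges and window handling are assumptions, not the paper's design:

# Two ways to musically encode authentication-event counts per time window.
C_MINOR_POOL = [60, 63, 67, 70, 72]   # MIDI note numbers (assumed note pool)

def volume_centric(counts, max_count):
    """One note per window; the count controls its velocity (loudness)."""
    return [[(C_MINOR_POOL[0], int(20 + 100 * min(c, max_count) / max_count))]
            for c in counts]

def volumetric(counts, max_count, max_notes=5):
    """Fixed velocity; the count controls how many pool notes sound in the window."""
    events = []
    for c in counts:
        n = max(1, round(max_notes * min(c, max_count) / max_count))
        events.append([(note, 80) for note in C_MINOR_POOL[:n]])
    return events

auth_counts = [120, 950, 430, 15000, 300]   # events per window (toy data)
print(volume_centric(auth_counts, max_count=20000))
print(volumetric(auth_counts, max_count=20000))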

Proceedings ArticleDOI
01 Jun 2019
TL;DR: A case study in the application of data-driven non-speech audio for melanoma diagnosis: a physician photographs a suspicious skin lesion, triggering a sonification of the system’s penultimate classification layer; training the AI on sonifications from this model improved accuracy further.
Abstract: The applications of artificial intelligence are becoming more and more prevalent in everyday life. Although many AI systems can operate autonomously, their goal is often assisting humans. Knowledge from the AI system must somehow be perceptualized. Towards this goal, we present a case-study in the application of data-driven non-speech audio for melanoma diagnosis. A physician photographs a suspicious skin lesion, triggering a sonification of the system’s penultimate classification layer. We iterated on sonification strategies and coalesced around designs representing three general approaches. We tested each in a group of novice listeners (n=7) for mean sensitivity, specificity, and learning effects. The mean accuracy was greatest for a simple model, but a trained dermatologist preferred a perceptually compressed model of the full classification layer. We discovered that training the AI on sonifications from this model improved accuracy further. We argue for perceptual compression as a general technique and for a comprehensible number of simultaneous streams.
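The system above sonifies the classifier's penultimate layer each time a lesion photo is taken. The abstract does not specify the mapping or the "perceptually compressed" variant, so the sketch below only illustrates one plausible rendering of an activation vector as a sum of fixed partials:

import numpy as np
from scipy.io import wavfile

SR = 44100

def sonify_activations(activations, dur=1.5, f_lo=200.0, f_hi=4000.0):
    """Each unit of the penultimate layer drives the amplitude of one partial,
    with partial frequencies spaced logarithmically between f_lo and f_hi."""
    a = np.asarray(activations, dtype=float)
    a = np.maximum(a, 0.0)
    a = a / (a.max() + 1e-9)
    freqs = f_lo * (f_hi / f_lo) ** (np.arange(len(a)) / max(len(a) - 1, 1))
    t = np.arange(int(SR * dur)) / SR
    sig = sum(amp * np.sin(2 * np.pi * f * t) for amp, f in zip(a, freqs))
    return 0.2 * sig / (np.max(np.abs(sig)) + 1e-9)

toy_layer = np.random.rand(16)                       # stand-in for real network activations
wavfile.write("lesion_sonification.wav", SR, (sonify_activations(toy_layer) * 32767).astype(np.int16))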

Proceedings ArticleDOI
01 Jun 2019
TL;DR: An assisted-living device that deliberately stimulates the sense of hearing in order to assist vision-impaired people in navigation and orientation tasks and resembles a bionic adaptation of the echolocation system of bats, which can provide successful navigation entirely in the dark.
Abstract: Sound is extremely important to our daily navigation, while sometimes slightly underestimated relative to the simultaneous presence of the visual sense. Indeed, the spatial sense of sound can immediately identify the direction of danger far beyond the restricted sense of vision. The sound is then rapidly and unconsciously interpreted by assigning a meaning to it. In this paper, we therefore propose an assisted-living device that deliberately stimulates the sense of hearing in order to assist vision-impaired people in navigation and orientation tasks. The sense of vision in this framework is replaced with a sensing capability based on radar, and a comprehensive radar profile of the environment is translated into a dedicated sound representation, for instance, to indicate the distances and directions of obstacles. The concept thus resembles a bionic adaptation of the echolocation system of bats, which can provide successful navigation entirely in the dark. The process of translating radar data into sound in this context is termed “sonification”. An advantage of radar sensing over optical cameras is the independence from environmental lighting conditions. Thus, the envisioned system can operate as a range extender of the conventional white cane. The paper gives a technical account of the radar and binaural sound engine of our system and, specifically, describes the link between otherwise asynchronous radar circuitry and the binaural audio output to headphones.
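The system translates radar-derived obstacle distance and direction into a spatial sound image. The paper uses a proper binaural engine; the simplified stereo sketch below only stands in for that idea, using interaural level and time differences and a distance-to-click-rate mapping that are assumptions:

import numpy as np
from scipy.io import wavfile

SR = 44100

def obstacle_to_stereo(distance_m, azimuth_deg, dur=1.0):
    """Closer obstacles click faster; azimuth is conveyed by level and time differences."""
    t = np.arange(int(SR * dur)) / SR
    rate = np.clip(8.0 / max(distance_m, 0.25), 1.0, 20.0)      # clicks per second (assumed)
    click = ((t * rate) % 1.0 < 0.05) * np.sin(2 * np.pi * 1500 * t)
    pan = np.sin(np.radians(azimuth_deg))                        # -1 (left) .. +1 (right)
    itd = int(abs(pan) * 0.0007 * SR)                            # up to ~0.7 ms interaural delay
    left = np.roll(click, itd if pan > 0 else 0) * (1 - 0.5 * max(pan, 0))
    right = np.roll(click, itd if pan < 0 else 0) * (1 - 0.5 * max(-pan, 0))
    return np.column_stack([left, right])

sig = obstacle_to_stereo(distance_m=1.2, azimuth_deg=40)
wavfile.write("radar_obstacle.wav", SR, (0.5 * sig * 32767).astype(np.int16))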

Proceedings ArticleDOI
01 Jun 2019
TL;DR: The initial findings on the ability of users to understand, decipher, and recreate sound representations to support primary network tasks, such as counting the number of elements in a network, identifying connections between nodes, determining the relative weight of connections between nodes, and recognizing which category an element belongs to, are presented.
Abstract: In this paper, we explore how sonic features can be used to represent network data structures that define relationships between elements. Representations of networks are pervasive in contemporary life (social networks, route planning, etc), and network analysis is an increasingly important aspect of data science (data mining, biological modeling, deep learning, etc). We present our initial findings on the ability of users to understand, decipher, and recreate sound representations to support primary network tasks, such as counting the number of elements in a network, identifying connections between nodes, determining the relative weight of connections between nodes, and recognizing which category an element belongs to. The results of an initial exploratory study (n=6) indicate that users are able to conceptualize mappings between sounds and visual network features, but that when asked to produce a visual representation of sounds users tend to generate outputs that closely resemble familiar musical notation. A more in-depth pilot study (n=26) more specifically examined which sonic parameters (melody, harmony, timbre, rhythm, dynamics) map most effectively to network features (node count, node classification, connectivity, edge weight). Our results indicate that users can conceptualize relationships between sound features and network features, and can create or use mappings between the aural and visual domains.
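The study above probes mappings between network features (node count, node class, connectivity, edge weight) and sonic parameters (melody, harmony, timbre, rhythm, dynamics). A toy sketch of one such mapping as a note-event list; the specific assignments are illustrative assumptions, not the mappings tested in the paper:

# Toy graph: nodes with a class label, weighted undirected edges.
nodes = {"A": "server", "B": "client", "C": "client", "D": "server"}
edges = [("A", "B", 0.9), ("B", "C", 0.3), ("C", "D", 0.6)]

CLASS_TO_PITCH = {"server": 48, "client": 60}        # node class -> register (MIDI note)

def graph_to_events(nodes, edges, beat=0.5):
    """One note per node (class sets register), one note per edge
    (weight sets loudness); total event count conveys graph size."""
    events, t = [], 0.0
    for name, cls in sorted(nodes.items()):
        events.append({"t": t, "midi": CLASS_TO_PITCH[cls], "vel": 80})
        t += beat
    for a, b, w in edges:
        pitch = (CLASS_TO_PITCH[nodes[a]] + CLASS_TO_PITCH[nodes[b]]) // 2
        events.append({"t": t, "midi": pitch, "vel": int(30 + 90 * w)})
        t += beat
    return events

for ev in graph_to_events(nodes, edges):
    print(ev)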

Proceedings ArticleDOI
01 Jun 2019
TL;DR: This paper presents an experiment that compares data-to-sound mappings in which the mapping’s polarity is based on results from a previous magnitude estimation experiment against mappings whose polarities are inverted, suggesting that for a simple auditory display like the one used here, whether or not the polarities of the data-to-sound mappings are based on magnitude estimation does not have a substantial effect on any objective performance measures gathered during the experiment.
Abstract: A challenge in sonification design is mapping data parameters onto acoustic parameters in a way that aligns with a listener’s mental model of how a given data parameter should sound. Studies have used the psychophysical scaling method of magnitude estimation to systematically evaluate how participants perceive mappings between data and sound parameters - giving data on perceived polarity and scale of the relationship between the data and sound parameters. As of yet, there has been little research investigating whether data-to-sound mappings that are designed based on results from these magnitude estimation experiments have any effect on users’ performance in an applied auditory display task. This paper presents an experiment that compares data-to-sound mappings in which the mapping’s polarity is based on results from a previous magnitude estimation experiment against mappings whose polarities are inverted. The experiment is based around a simple task in which participants need to rank WiFi networks based on how secure they are, where security is represented using an auditory display. Results suggest that for a simple auditory display like the one used here, whether or not the polarities of the data-to-sound mappings are based on magnitude estimation does not have a substantial effect on any objective performance measures gathered during the experiment. Finally, potential areas for future work are discussed that may continue to investigate the problems addressed by this paper.
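The experiment contrasts a magnitude-estimation-derived polarity with its inversion for the same data-to-sound mapping. A minimal sketch of the two polarities for the WiFi-security task; the frequency range and the direction labeled as the non-inverted polarity are assumptions for illustration:

import numpy as np

F_LO, F_HI = 200.0, 2000.0    # assumed frequency range of the display

def security_to_freq(security, inverted=False):
    """Map a normalized security score (0 = open, 1 = most secure) to frequency.
    In the assumed baseline polarity, more secure -> higher pitch; the inverted
    condition flips the direction."""
    s = np.clip(security, 0.0, 1.0)
    if inverted:
        s = 1.0 - s
    return F_LO * (F_HI / F_LO) ** s      # exponential (musically even) scaling

for net, sec in [("CoffeeShop-Free", 0.1), ("Office-WPA2", 0.8)]:
    print(net, round(security_to_freq(sec)), "Hz /", round(security_to_freq(sec, inverted=True)), "Hz")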

Proceedings ArticleDOI
01 Jun 2019
TL;DR: Results showed that practice with feedback significantly improves performance regardless of the display design and that individual differences such as active engagement in music and motivation can predict how well a listener is able to learn to use these displays.
Abstract: Information presented in auditory displays is often spread across multiple streams to make it easier for listeners to distinguish between different sounds and changes in multiple cues. Due to the limited resources of the auditory sense, and the fact that it is often untrained compared to the visual sense, studies have tried to determine the limit to which listeners are able to monitor different auditory streams while not compromising performance in using the displays. This study investigates the difference between non-speech auditory displays, speech auditory displays, and mixed displays, and the effects of the different display designs and individual differences on performance and learnability. Results showed that practice with feedback significantly improves performance regardless of the display design and that individual differences such as active engagement in music and motivation can predict how well a listener is able to learn to use these displays. Findings of this study contribute to understanding how musical experience can be linked to usability of auditory displays, as well as the capability of humans to learn to use their auditory senses to overcome visual workload and receive important information.

Proceedings ArticleDOI
01 Jun 2019
TL;DR: Ideas developed in electroacoustic composition, such as acousmatic storytelling and sound-based narration, are presented and investigated for their use in sonification-based creative works.
Abstract: Nuclear magnetic resonance (NMR) spectroscopy is an analytical tool to determine the structure of chemical compounds. Unlike other spectroscopic methods, signals recorded using NMR spectrometers frequently lie in a range of zero to 20000 Hz, making direct playback possible. As each type of molecule has, based on its structural features, distinct and predictable features in its NMR spectra, NMR data sonification can be used to create auditory ‘fingerprints’ of molecules. This paper describes the methodology of NMR data sonification of the nuclei nitrogen, phosphorus, and oxygen and analyses the sonification products of DNA and protein NMR data. The paper introduces On the Extinction of a Species, an acousmatic music composition combining NMR data sonification and voice narration. Ideas developed in electroacoustic composition, such as acousmatic storytelling and sound-based narration, are presented and investigated for their use in sonification-based creative works.
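Because NMR signals already lie in the audio band, the raw time-domain data (the free induction decay) can be written out as sound with little more than normalization, which is the "direct playback" the abstract mentions. A minimal audification sketch; the synthetic two-resonance FID below stands in for real spectrometer data:

import numpy as np
from scipy.io import wavfile

SR = 44100

# Stand-in FID: two decaying resonances at 440 Hz and 1320 Hz (real data would be loaded here).
t = np.arange(int(SR * 3.0)) / SR
fid = np.exp(-t / 1.0) * np.sin(2 * np.pi * 440 * t) + \
      0.5 * np.exp(-t / 0.4) * np.sin(2 * np.pi * 1320 * t)

# Audification: normalize and play back the signal directly, no frequency mapping needed.
audio = 0.8 * fid / np.max(np.abs(fid))
wavfile.write("nmr_audification.wav", SR, (audio * 32767).astype(np.int16))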

Proceedings ArticleDOI
01 Jun 2019
TL;DR: Auditory Contrast Enhancement (ACE) is introduced as a technique to enhance sounds based on a given collection of sound or sonification examples that belong to different classes, such as sounds of machines with and without a certain malfunction, or medical data sonifications for different pathologies/conditions.
Abstract: We introduce Auditory Contrast Enhancement (ACE) as a technique to enhance sounds based on a given collection of sound or sonification examples that belong to different classes, such as sounds of machines with and without a certain malfunction, or medical data sonifications for different pathologies/conditions. A frequent use case in inductive data mining is the discovery of patterns in which such groups can be discerned, to guide subsequent paths for modelling and feature extraction. ACE provides researchers with a set of methods to render focussed auditory perspectives that accentuate inter-group differences and in turn also enhance the intra-group similarity, i.e. it warps sounds so that our human built-in metrics for assessing differences between sounds are better aligned to systematic differences between sounds belonging to different classes. We unfold and detail the concept along three different lines: temporal, spectral and spectrotemporal auditory contrast enhancement, and we demonstrate their performance on given sound and sonification collections.
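One generic way to realize spectral contrast enhancement is to sharpen each short-time magnitude spectrum while keeping the phase, so prominent partials are emphasized relative to their neighbours. The sketch below is only an illustrative stand-in under that assumption, not the authors' algorithm, which also covers temporal and spectrotemporal contrast:

import numpy as np
from scipy.signal import stft, istft

def spectral_contrast_enhance(x, fs, gamma=1.5, nperseg=1024):
    """Raise each frame's normalized magnitude spectrum to gamma > 1 (sharpening)
    and resynthesize with the original phase."""
    f, t, Z = stft(x, fs=fs, nperseg=nperseg)
    mag, phase = np.abs(Z), np.angle(Z)
    peak = mag.max(axis=0, keepdims=True) + 1e-12     # per-frame reference level
    mag_sharp = peak * (mag / peak) ** gamma          # attenuate weak bins, keep peaks
    _, y = istft(mag_sharp * np.exp(1j * phase), fs=fs, nperseg=nperseg)
    return y

if __name__ == "__main__":
    fs = 44100
    t = np.arange(fs) / fs
    x = np.sin(2 * np.pi * 440 * t) + 0.2 * np.random.randn(fs)   # tone buried in noise
    y = spectral_contrast_enhance(x, fs)
    print("in/out length:", len(x), len(y))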

Proceedings ArticleDOI
23 Jun 2019
TL;DR: This paper motivates the use of congruent cross-modal animations to design alarms and describes audio-visual mappings based on this paradigm and found that specific polarities between visual and audio parameters were preferred.
Abstract: Operators in surveillance activities face cognitive overload due to the fragmentation of information on several screens, the dynamic nature of the task and the multiple visual or audible alarms. This paper presents our ongoing efforts to design efficient audiovisual alarms for surveillance activities such as traffic management or air traffic control. We motivate the use of congruent cross-modal animations to design alarms and describe audiovisual mappings based on this paradigm. We ran a preference experiment with 24 participants to assess our designs and found that specific polarities between visual and audio parameters were preferred. We conclude with future research directions to validate the efficiency of our alarms with different cognitive load levels.

Proceedings ArticleDOI
01 Jun 2019
TL;DR: This study shows that such visual feedback significantly improves users' comfort for 78% of the candidates while slightly improving their time perception, and could open the door to integrated affective Human Computer Interfaces (HCI).
Abstract: Personal assistants are becoming more pervasive in our environments but still do not provide natural interactions. Their lack of realism in terms of expressiveness and their lack of visual feedback can create frustrating experiences and make users lose patience. In this sense, we propose an end-to-end trainable neural architecture for text-driven 3D mouth animations. Previous works showed such architectures provide better realism and could open the door to integrated affective Human Computer Interfaces (HCI). Our study shows that such visual feedback significantly improves users’ comfort for 78% of the candidates while slightly improving their time perception.

Proceedings ArticleDOI
01 Jun 2019
TL;DR: In an attempt to contribute to the constant feedback existing between science and music, this work describes the design strategies used in the development of the virtual synthesizer prototype called Sonifigrapher, designed for the sonification of the light curves from NASA's publicly available exoplanet archive.
Abstract: In an attempt to contribute to the constant feedback existing between science and music, this work describes the design strategies used in the development of the virtual synthesizer prototype called Sonifigrapher. Trying to achieve new ways of creating experimental music through the exploration of exoplanet data sonifications, this software provides an easy-to-use graph-to-sound quadraphonic converter, designed for the sonification of the light curves from NASA’s publicly-available exoplanet archive. Based on some features of the first analog tape recorder samplers, the prototype allows end-users to load a light curve from the archive and create controlled audio spectra making use of additive synthesis sonification. It is expected to be useful in creative, educational and informational contexts as part of an experimental and interdisciplinary development project for sonification tools, oriented to both non-specialized and specialized audiences.
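Sonifigrapher converts an exoplanet light curve into an audio spectrum via additive synthesis. A rough mono sketch of that graph-to-sound idea (the prototype itself is a quadraphonic, sampler-inspired tool; the partial count, frequency range and synthetic transit-like curve below are assumptions standing in for archive data):

import numpy as np
from scipy.io import wavfile

SR = 44100

def light_curve_to_audio(flux, dur=4.0, f_lo=100.0, f_hi=2000.0, n_partials=40):
    """Additive synthesis: resample the light curve to n_partials points and use
    each as the amplitude of one partial on a log-spaced frequency grid."""
    flux = np.asarray(flux, dtype=float)
    flux = (flux - flux.min()) / (np.ptp(flux) + 1e-12)
    amps = np.interp(np.linspace(0, len(flux) - 1, n_partials), np.arange(len(flux)), flux)
    freqs = f_lo * (f_hi / f_lo) ** (np.arange(n_partials) / (n_partials - 1))
    t = np.arange(int(SR * dur)) / SR
    sig = sum(a * np.sin(2 * np.pi * f * t) for a, f in zip(amps, freqs))
    return 0.2 * sig / (np.max(np.abs(sig)) + 1e-9)

# Synthetic transit: flat flux with a periodic dip (placeholder for a real light curve).
phase = np.linspace(0, 6 * np.pi, 500)
flux = 1.0 - 0.02 * (np.sin(phase) > 0.95)
wavfile.write("light_curve.wav", SR, (light_curve_to_audio(flux) * 32767).astype(np.int16))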

Proceedings ArticleDOI
01 Jun 2019
TL;DR: The Sonification of Solar Harmonics (SoSH) Project seeks to sonify data related to the field of helioseismology and distribute tools for others to do the same.
Abstract: The Sun is a resonant cavity for very low frequency acoustic waves, and just like a musical instrument, it supports a number of oscillation modes, also commonly known as harmonics. We are able to observe these harmonics by looking at how the Sun’s surface oscillates in response to them. Although this data has been studied scientifically for decades, it has only rarely been sonified. The Sonification of Solar Harmonics (SoSH) Project seeks to sonify data related to the field of helioseismology and distribute tools for others to do the same. Creative applications of this research by the authors include musical compositions, installation artwork, a short documentary, and a full-dome planetarium experience.
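Solar p-mode oscillations sit around a few millihertz, so sonifying them is largely a matter of speeding the observations up into the audible range. A minimal sketch of that frequency scaling; the cadence, speed-up factor and the synthetic 3 mHz oscillation below are assumptions standing in for real helioseismic data and for the SoSH tools themselves:

import numpy as np
from scipy.io import wavfile

SR = 44100
SPEEDUP = 100_000          # playback speed-up factor: 3 mHz -> 300 Hz (assumed)

# Stand-in Doppler time series: a 3 mHz oscillation sampled once per minute for 30 days.
cadence_s = 60.0
t = np.arange(0, 30 * 24 * 3600, cadence_s)
series = np.sin(2 * np.pi * 3e-3 * t) + 0.3 * np.random.randn(len(t))

# Speeding up by SPEEDUP means each original sample lasts cadence_s / SPEEDUP seconds of audio.
samples_per_point = cadence_s / SPEEDUP * SR
audio = np.interp(np.arange(int(len(series) * samples_per_point)) / samples_per_point,
                  np.arange(len(series)), series)
audio = 0.5 * audio / np.max(np.abs(audio))
wavfile.write("solar_harmonics.wav", SR, (audio * 32767).astype(np.int16))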