Proceedings ArticleDOI

A sonification system for process monitoring as secondary task

TL;DR: In this paper, a real-time sonification system for process monitoring is presented, which generates a prototypical process in real time and renders it through two novel sonification approaches; the authors argue that a subsymbolic, implicit and rich auditory display connects better to the human capacity to establish auditory categories and to develop sensitivity to changes within and between them.
Abstract: This paper presents a novel process monitoring system to explore and evaluate the potential of real-time sonifications (i.e. non-verbal auditory representations) for supporting awareness of process states and for detecting and resolving critical process situations. Different from established auditory alarms and warnings, our sonifications convey analogue, i.e. continuous, information in the form of a process-data-driven soundscape that can easily be tuned out in favor of a primary task, yet which is designed to attract the user's attention even before things become critical. We argue that a subsymbolic, implicit and rich display connects better to the human capacity to establish auditory categories and to develop sensitivity to changes within and between them. In consequence, users may profit from prerational, automatic information-processing mechanisms, so that cognitive resources remain free for another 'primary task'. Our system can generate a prototypical process in real time, and we present it through two novel sonification approaches. It encompasses software components to log user performance on the monitoring task and to engage the user in a primary task. We discuss our sonification designs and first experiences with test users, and give an outlook on studies that are planned to be conducted using our system.
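Process-data-driven soundscapes of this kind are typically built with parameter-mapping sonification, where a process variable continuously controls an acoustic parameter such as pitch. The following is a minimal sketch of that idea, not the authors' actual system; the function names, value ranges, and the pitch-linear mapping are illustrative assumptions:

```python
import math

def map_value_to_pitch(value, v_min=0.0, v_max=1.0,
                       f_min=220.0, f_max=880.0):
    """Map a normalized process value to a frequency in Hz.

    Hypothetical helper: interpolates exponentially between f_min and
    f_max, which is pitch-linear and tends to sound more natural than
    a linear frequency ramp.
    """
    t = (value - v_min) / (v_max - v_min)
    t = min(max(t, 0.0), 1.0)  # clamp out-of-range process readings
    return f_min * (f_max / f_min) ** t

def synthesize(values, sample_rate=8000, dur=0.05):
    """Render a short sine segment per process sample, phase-continuous."""
    samples = []
    phase = 0.0
    for v in values:
        f = map_value_to_pitch(v)
        for _ in range(int(sample_rate * dur)):
            phase += 2 * math.pi * f / sample_rate
            samples.append(0.5 * math.sin(phase))
    return samples

# A rising process variable yields a rising pitch contour:
audio = synthesize([0.1, 0.5, 0.9])
```

In a real display the sample stream would be fed to an audio device; the sketch only shows how process data can drive a continuous, non-symbolic sound parameter.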
Citations
Journal ArticleDOI
TL;DR: A system for reproducible research in sonification for process monitoring is presented, built to answer questions such as whether a continuous soundscape can guide the user's attention better than one based on auditory cues.
Abstract: As many users who are charged with process monitoring need to focus mainly on other work while performing monitoring as a secondary task, monitoring systems that rely purely on visual means are often not well suited for this purpose. Sonification, the presentation of data as (non-speech) sound, has proven in several studies that it can help guide the user's attention, especially in scenarios where process monitoring is performed in parallel with a different, main task. However, several aspects have not yet been investigated in this area, for example whether a continuous soundscape can guide the user's attention better than one based on auditory cues. We have developed a system that allows reproducible research to answer such questions. In this system, the participants' performance both for the main task (simulated by simple arithmetic problems) and for the secondary task (a simulation of a production process) can be measured in a more fine-grained manner than has been the case for existing research in this field. In a within-subject study (n=18), we compared three monitoring conditions: visual only, visual + auditory alerts, and a condition combining the visual mode with continuous sonification of process events based on a forest soundscape. Participants showed significantly higher process monitoring performance in the continuous sonification condition compared to the other two modes. At the same time, performance in the main task was not significantly affected by the continuous sonification.
Highlights:
We present a system for reproducible research in sonification for process monitoring.
We developed an experiment design to analyze effectiveness in monitoring as a secondary task.
We compared continuous sonification to visual-only and auditory alerts.
Continuous sonification significantly enhances the adequacy of interactions.
Participants find continuous sonification significantly more helpful and reassuring.

22 citations

Journal ArticleDOI
TL;DR: A central approach is to pursue new, interactive knowledge transfer through a multi-sensory approach combined with innovative feedback processes, enabling a learning process that engages all human senses.

19 citations

Proceedings ArticleDOI
01 Oct 2015
TL;DR: An overview of the current state of research regarding auditory-based and multimodal tools in computer security, including several sonification-based tools in a mature state, is provided.
Abstract: For server and network administrators, it is a challenge to keep an overview of their systems to detect potential intrusions and security risks in real-time as well as in retrospect. Most security tools leverage our inherent ability for pattern detection by visualizing different types of security data. Several studies suggest that complementing visualization with sonification (the presentation of data using sound) can alleviate some of the challenges of visual monitoring (such as the need for constant visual focus). This paper therefore provides an overview of the current state of research regarding auditory-based and multimodal tools in computer security. Most existing research in this area is geared towards supporting users in real-time network and server monitoring, while there are only few approaches that are designed for retrospective data analysis. There exist several sonification-based tools in a mature state, but their effectiveness has hardly been tested in formal user and usability studies. Such studies are however needed to provide a solid basis for deciding which type of sonification is most suitable for which kind of scenarios and how to best combine the two modalities, visualization and sonification, to support users in their daily routines.

12 citations

Journal ArticleDOI
TL;DR: The results of these studies form the basis for the development of an “intelligent” noise protection headphone as part of Cyber Physical Production Systems which provides auditorily augmented information to machine operators and enables radio communication between them.
Abstract: We describe two proof-of-concept approaches on the sonification of estimated operation states and conditions focusing on two scenarios: a laboratory setup of a manipulated 3D printer and an industrial setup focusing on the operations of a punching machine. The results of these studies form the basis for the development of an “intelligent” noise protection headphone as part of Cyber Physical Production Systems which provides auditorily augmented information to machine operators and enables radio communication between them. Further application areas are implementations in control rooms (equipped with multi-channel loudspeaker systems) and utilization for training purposes. As a first proof-of-concept, the data stream of error probability estimations regarding partly manipulated 3D printing processes were mapped to three sonification models, providing evidence about momentary operation states. The neural network applied indicates a high accuracy (> 93%) of the error estimation distinguishing between normal and manipulated operation states. None of the manipulated states could be identified by listening. An auditory augmentation, or sonification of these error estimations, provides a considerable benefit to process monitoring. For a second proof-of-concept, setup operations of a punching machine were recorded. Since all operations were apparently flawlessly executed, and there were no errors to be reported, we focused on the identification of operation phases. Each phase of a punching process could be algorithmically distinguished at an estimated probability rate of > 94%. In the auditory display, these phases were represented by different instrumentations of a musical piece in order to allow users to differentiate between operations auditorily.
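The first proof-of-concept feeds a stream of error-probability estimates into the auditory display. One simple way to realize such a mapping is a threshold scheme that selects a display tier per estimate. This is a hypothetical illustration; the paper maps estimates to three sonification models but the cutoffs and tier names below are invented:

```python
def select_display_tier(p_error):
    """Map an estimated error probability to a sonification tier.

    Illustrative thresholds only (assumptions, not from the paper):
    low estimates stay ambient, mid-range estimates are highlighted,
    high estimates trigger an alarm-like rendering.
    """
    if not 0.0 <= p_error <= 1.0:
        raise ValueError("p_error must be a probability in [0, 1]")
    if p_error < 0.10:
        return "ambient"      # normal operation: stay in the background
    if p_error < 0.50:
        return "highlight"    # draw attention without alarming
    return "alarm"            # likely manipulated or faulty state

# Tier per estimate for a short stream of classifier outputs:
tiers = [select_display_tier(p) for p in (0.02, 0.30, 0.95)]
```

A continuous design would instead interpolate an acoustic parameter over p_error; the discrete tiers here merely show how an estimator's output can drive the display.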

10 citations

Journal ArticleDOI
TL;DR: The relations uncovered are proposed as the underpinnings for a computational model of foreground selection, and also as design guidelines for stream-based sonification applications.
Abstract: Salience shapes the involuntary perception of a sound scene into foreground and background. Auditory interfaces, such as those used in continuous process monitoring, rely on the prominence of those sounds that are perceived as foreground. We propose to distinguish between the salience of sound events and that of streams, and introduce a paradigm to study the latter using repetitive patterns of natural chirps. Since streams are the sound objects populating the auditory scene, we suggest the use of global descriptors of perceptual dimensions to predict their salience, and hence, the organization of the objects into foreground and background. However, there are many possible independent features that can be used to describe sounds. Based on the results of two experiments, we suggest a parsimonious interpretation of the rules guiding foreground formation: after loudness, tempo and brightness are the dimensions that have higher priority. Our data show that, under equal-loudness conditions, patterns with fast tempo and lower brightness tend to emerge and that the interaction between tempo and brightness in foreground selection seems to increase with task difficulty. We propose to use the relations we uncovered as the underpinnings for a computational model of foreground selection, and also, as design guidelines for stream-based sonification applications.
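The reported priority ordering (loudness first, then tempo, then brightness, with faster tempo and lower brightness emerging at equal loudness) can be caricatured as a weighted score. This is a toy sketch, not the authors' model; the linear form and the weights are assumptions made for illustration:

```python
def foreground_score(loudness, tempo, brightness,
                     w_loud=3.0, w_tempo=2.0, w_bright=1.0):
    """Toy salience score for a repetitive sound stream.

    Assumptions: inputs normalized to [0, 1]; weights reflect the
    reported priority ordering loudness > tempo > brightness; lower
    brightness contributes positively, matching the finding that
    darker patterns tend to reach the foreground at equal loudness.
    """
    return (w_loud * loudness
            + w_tempo * tempo
            + w_bright * (1.0 - brightness))

# At equal loudness, a fast, dark pattern outranks a slow, bright one:
fast_dark = foreground_score(loudness=0.5, tempo=0.9, brightness=0.2)
slow_bright = foreground_score(loudness=0.5, tempo=0.1, brightness=0.8)
```

A real model would also have to capture the tempo-brightness interaction the study observed under higher task difficulty, which a purely additive score cannot express.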

8 citations

References
Proceedings ArticleDOI
01 Mar 1991
TL;DR: The authors designed an ecology of auditory icons that worked together to convey information about a complex, demanding simulation task and observed users collaborating on it with and without sound; the observations suggest that audio cues can provide useful information about processes and problems, and can support the perceptual integration of a number of separate processes into one complex whole.
Abstract: We designed an ecology of auditory icons which worked together to convey information about a complex, demanding simulation task, and observed users collaborating on it with and without sound. Our observations suggest that audio cues can provide useful information about processes and problems, and support the perceptual integration of a number of separate processes into one complex one. In addition, they can smooth the transition between division of labour and collaboration by providing a new dimension of reference. These results suggest that auditory icons can play a significant role in future multiprocessing and collaborative systems.

381 citations

Proceedings Article
01 Jun 2008
TL;DR: A new definition for sonification and auditory display is introduced that emphasizes the necessary and sufficient conditions for organized sound to be called sonification, and suggests a taxonomy, and discusses the relation between visualization and sonification.
Abstract: Sonification is still a relatively young research field and many terms such as sonification, auditory display, auralization, audification have been used without a precise definition. Recent developments such as the introduction of Model-Based Sonification, the establishment of interactive sonification and the increased interest in sonification from arts have raised the need to revisit the definitions in order to move towards a clearer terminology. This paper introduces a new definition for sonification and auditory display that emphasizes the necessary and sufficient conditions for organized sound to be called sonification. It furthermore suggests a taxonomy, and discusses the relation between visualization and sonification. A hierarchy of closed-loop interactions is furthermore introduced. This paper aims to initiate vivid discussion towards the establishment of a deeper theory of sonification and auditory display.

257 citations

Journal ArticleDOI
TL;DR: In this article, predictive maintenance using vibration analysis is shown to achieve meaningful results in diagnosing machinery problems; its benefits include reduced machinery downtime and production losses, as well as the more subtle long-term cost savings that accurate maintenance scheduling can bring.
Abstract: Predictive maintenance remains a cost-effective means for a maintenance department to resolve plant machinery problems and implement a repair schedule. Quality information is the key factor in designing a successful predictive maintenance program. A strong basis for quality information and an effective methodology for using this information base are two essential program ingredients. When applied with these ingredients in place, predictive maintenance using vibration analysis has achieved meaningful results in successfully diagnosing machinery problems. The benefits of such programs include not only evident cost benefits, such as reducing machinery downtime and production losses, but also the more subtle long-term cost benefits which can result from accurate maintenance scheduling.

85 citations

Journal Article
TL;DR: A set of concepts is proposed to help unify the CogInfoCom-related aspects of these fields, from the unique viewpoint necessary for the human-oriented analysis and synthesis of CogInfoCom channels.
Abstract: Due to its multidisciplinary origins, the elementary concepts and terminology of Cognitive Infocommunications (CogInfoCom) would ideally be derived – among others – from the fields of informatics, infocommunications and cognitive sciences. The terminology used by these fields in relation to CogInfoCom are disparate not only because the same concepts are often referred to using different terms, but also because in many cases the same terms can refer to different concepts. For this reason, we propose a set of concepts to help unify the CogInfoCom-related aspects in these fields, from the unique viewpoint necessary for the human-oriented analysis and synthesis of CogInfoCom channels. Examples are given to illustrate how the terminology presents itself in engineering applications.

77 citations

Journal ArticleDOI
01 Oct 2001
TL;DR: Results indicate that detection of deviations did not differ significantly between the visual and redundant conditions, but both were faster than with the auditory display; however, performance in the tracking task was degraded least in the auditory condition, and the redundant display resulted in the poorest performance, an example of a negative redundancy-gain.
Abstract: Auditory signals can take the form of “auditory displays” that communicate information redundant to visual displays. These redundant displays may allow offloading some visual workload to the auditory channel. The current study examines the effect of visual, auditory and redundant displays on the performance of a dual-task simulation of patient monitoring. Subjects performed a manual compensatory tracking task while monitoring six vital signs of a simulated patient, detecting deviations from normal levels. Monitoring was presented in three display conditions: auditory only, visual only, and redundant. Results indicate that detection of deviations did not differ significantly between the visual and redundant conditions, but both were faster than with the auditory display. However, performance in the tracking task was degraded least in the auditory condition, and the redundant display resulted in the poorest performance, an example of a negative redundancy-gain. Reasons for this finding are examined through data from eye-movement ...

77 citations