Author

Bruce N. Walker

Other affiliations: Rice University
Bio: Bruce N. Walker is an academic researcher from Georgia Institute of Technology. The author has contributed to research in the topics of auditory display and sonification. The author has an h-index of 35 and has co-authored 197 publications receiving 4,164 citations. Previous affiliations of Bruce N. Walker include Rice University.


Papers
Proceedings ArticleDOI
11 Oct 2007
TL;DR: A system for wearable audio navigation (SWAN) is being developed to serve as a navigation and orientation aid for persons temporarily or permanently visually impaired.
Abstract: Wearable computers can certainly support audio-only presentation of information; a visual interface need not be present for effective user interaction. A system for wearable audio navigation (SWAN) is being developed to serve as a navigation and orientation aid for persons temporarily or permanently visually impaired. SWAN is a wearable computer consisting of audio-only output and tactile input via a handheld interface. SWAN aids a user in safe pedestrian navigation and includes the ability for the user to author new GIS data relevant to their needs of wayfinding, obstacle avoidance, and situational awareness support. Emphasis is placed on representing pertinent data with non-speech sounds through a process of sonification. SWAN relies on a geographic information system (GIS) infrastructure for supporting geocoding and spatialization of data. Furthermore, SWAN utilizes novel tracking technology.

233 citations

Journal Article
TL;DR: In this study, a small-scale aerial drone was used as a tool for exploring potential benefits to safety managers within the construction jobsite, leading to recommendations for the required features of an ideal safety inspection drone.
Abstract: SUMMARY: The construction industry lags behind many others in the rate of adoption of cutting-edge technologies. In the area of safety management this is even more so. Many advances in information technology could provide great benefits to this important aspect of construction operations. Innovative use of these tools could result in safer jobsites. This paper discusses an initial application of drone technology in the construction industry. In this study, a small-scale aerial drone was used as a tool for exploring potential benefits to safety managers within the construction jobsite. This drone is an aerial quadricopter that can be piloted remotely using a smart phone, tablet device, or a computer. Since the drone is equipped with video cameras, it can provide safety managers with fast access to images as well as real-time video from a range of locations around the jobsite. An expert analysis (heuristic evaluation) as well as a user participation analysis were performed on the quadricopter to determine the features of an ideal safety inspection drone. The heuristic evaluation uncovered some of the user interface problems of the drone interface in the context of safety inspection. The user participation evaluation was performed following a simulated task of counting the number of hardhats viewed through the display of a mobile device in the controlled environment of the lab. Considering the task and the controlled variables, this experimental approach revealed that using the drone together with a large-size interface (e.g., an iPad) would be as accurate as having the safety manager in plain view of the jobsite. The results of these two evaluations, together with the authors' previous experience in the areas of safety inspection and drone technology, led to recommendations for the required features of an ideal safety inspection drone. Autonomous navigation, vocal interaction, high-resolution cameras, and a collaborative user-interface environment are some examples of those features. This innovative application of the aerial drone has the potential to improve construction practices and, in this case, facilitate jobsite safety inspections.

205 citations

01 Jan 2010
TL;DR: This paper provides an overview of sonification research, including the current status of the field and a proposed research agenda; it was prepared by an interdisciplinary group of researchers gathered at the request of the National Science Foundation in the fall of 1997 in association with the International Conference on Auditory Display.
Abstract: The purpose of this paper is to provide an overview of sonification research, including the current status of the field and a proposed research agenda. This paper was prepared by an interdisciplinary group of researchers gathered at the request of the National Science Foundation in the fall of 1997 in association with the International Conference on Auditory Display (ICAD).

187 citations

01 Jun 2006
TL;DR: It is suggested that spearcons are more effective than previous auditory cues in menu-based interfaces, and may lead to better performance and accuracy, as well as more flexible menu structures.
Abstract: With shrinking displays and increasing technology use by visually impaired users, it is important to improve usability with non-GUI interfaces such as menus. Using non-speech sounds called earcons or auditory icons has been proposed to enhance menu navigation. We compared search time and accuracy of menu navigation using four types of auditory representations: speech only; hierarchical earcons; auditory icons; and a new type called spearcons. Spearcons are created by speeding up a spoken phrase until it is not recognized as speech. Using a within-subjects design, participants searched a 5 x 5 menu for target items using each type of audio cue. Spearcons and speech-only both led to faster and more accurate menu navigation than auditory icons and hierarchical earcons. There was a significant practice effect for search time, within each type of auditory cue. These results suggest that spearcons are more effective than previous auditory cues in menu-based interfaces, and may lead to better performance and accuracy, as well as more flexible menu structures.
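The spearcon technique described above, creating a cue by speeding up a spoken phrase until it is no longer recognized as speech, can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function and parameter names are hypothetical, and plain linear resampling (used here for simplicity) also raises pitch, whereas pitch-preserving time-scale modification would be a more faithful choice.

```python
import numpy as np

def make_spearcon(speech: np.ndarray, compression: float = 0.4) -> np.ndarray:
    """Time-compress a mono speech signal by linear resampling.

    `compression` is the output/input duration ratio (hypothetical
    parameter); 0.4 keeps 40% of the original duration.
    """
    n_out = max(1, int(len(speech) * compression))
    # Positions in the input signal to sample for each output sample.
    src = np.linspace(0, len(speech) - 1, n_out)
    return np.interp(src, np.arange(len(speech)), speech)

# Example: a 1-second tone standing in for a recorded menu-item phrase.
sr = 16_000
t = np.arange(sr) / sr
signal = np.sin(2 * np.pi * 440 * t)
spearcon = make_spearcon(signal, compression=0.4)  # 0.4 s of audio
```

In practice the compression ratio would be tuned per phrase until listeners no longer perceive the result as speech, while the cue remains acoustically unique to its menu item.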

176 citations

Journal ArticleDOI
TL;DR: The selection of beacon sound and capture radius depends on the specific application, including whether speed of travel or adherence to path is of primary concern; sound timbre, waypoint capture radius, and practice all affect performance.
Abstract: OBJECTIVE: We examined whether spatialized nonspeech beacons could guide navigation and how sound timbre, waypoint capture radius, and practice affect performance. BACKGROUND: Auditory displays may assist mobility and wayfinding for those with temporary or permanent visual impairment, but they remain understudied. Previous systems have used speech-based interfaces. METHOD: Participants (108 undergraduates) navigated three maps, guided by one of three beacons (pink noise, sonar ping, or 1000-Hz pure tone) spatialized by a virtual reality engine. Dependent measures were efficiency of time and path length. RESULTS: Overall navigation was very successful, with significant effects of practice and capture radius, and interactions with beacon sound. Overshooting and subsequent hunting for waypoints was exacerbated in small-radius conditions. A human-scale capture radius (1.5 m) and sonar-like beacon yielded the optimal combination for safety and efficiency. CONCLUSION: The selection of beacon sound and capture radius depends on the specific application, including whether speed of travel or adherence to path is of primary concern. Extended use affects sound preferences and quickly leads to improvements in both speed and accuracy. APPLICATION: These findings should lead to improved wayfinding systems for the visually impaired as well as for first responders (e.g., firefighters) and soldiers.
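The capture-radius mechanism studied above can be sketched as follows: the audio beacon stays anchored to the current waypoint until the listener comes within the capture radius, at which point the beacon advances to the next waypoint. This is an illustrative sketch under simple 2-D assumptions; the function and parameter names are hypothetical, not taken from the study's software.

```python
import math

def update_waypoint(position, waypoints, index, capture_radius=1.5):
    """Return the index of the waypoint the beacon should render.

    `capture_radius` defaults to the human-scale 1.5 m radius the study
    found effective. Coordinates are (x, y) in meters.
    """
    if index >= len(waypoints):
        return index  # route complete; nothing left to capture
    wx, wy = waypoints[index]
    px, py = position
    if math.hypot(wx - px, wy - py) <= capture_radius:
        index += 1  # waypoint captured; beacon moves to the next one
    return index

# Example: a listener 1.0 m from the first waypoint captures it,
# so the beacon sound would now be spatialized at the second waypoint.
route = [(0.0, 0.0), (10.0, 0.0)]
i = update_waypoint((0.6, 0.8), route, 0)  # distance = 1.0 m <= 1.5 m
```

A radius that is too small produces the overshoot-and-hunt behavior the study reports, since the listener must pass very close to the waypoint before the beacon advances.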

171 citations


Cited by
Journal Article
TL;DR: In this article, the authors argue that the common assumption that the brain produces a detailed internal representation of the world, whose activation gives rise to the experience of seeing, leaves visual consciousness unexplained, and they propose instead that seeing is a way of acting.
Abstract: Many current neurophysiological, psychophysical, and psychological approaches to vision rest on the idea that when we see, the brain produces an internal representation of the world. The activation of this internal representation is assumed to give rise to the experience of seeing. The problem with this kind of approach is that it leaves unexplained how the existence of such a detailed internal representation might produce visual consciousness. An alternative proposal is made here. We propose that seeing is a way of acting. It is a particular way of exploring the environment. Activity in internal representations does not generate the experience of seeing. The outside world serves as its own, external, representation. The experience of seeing occurs when the organism masters what we call the governing laws of sensorimotor contingency. The advantage of this approach is that it provides a natural and principled way of accounting for visual consciousness, and for the differences in the perceived quality of sensory experience in the different sensory modalities. Several lines of empirical evidence are brought forward in support of the theory, in particular: evidence from experiments in sensorimotor adaptation, visual "filling in," visual stability despite eye movements, change blindness, sensory substitution, and color perception.

2,271 citations

Journal ArticleDOI
TL;DR: This chapter reviews the training research literature reported over the past decade and suggests that advancements have been made that help us better understand the design and delivery of training in organizations, with respect to theory development as well as the quality and quantity of empirical research.
Abstract: ▪ Abstract This chapter reviews the training research literature reported over the past decade. We describe the progress in five areas of research including training theory, training needs analysis, antecedent training conditions, training methods and strategies, and posttraining conditions. Our review suggests that advancements have been made that help us understand better the design and delivery of training in organizations, with respect to theory development as well as the quality and quantity of empirical research. We have new tools for analyzing requisite knowledge and skills, and for evaluating training. We know more about factors that influence training effectiveness and transfer of training. Finally, we challenge researchers to find better ways to translate the results of training research into practice.

1,644 citations

01 Jan 2016
The Cambridge Handbook of the Learning Sciences

1,059 citations

Journal ArticleDOI
TL;DR: The aim of this review is to address the potential of augmented unimodal and multimodal feedback in the framework of motor learning theories and the reasons for the different impacts of feedback strategies within or between the visual, auditory, and haptic modalities.
Abstract: It is generally accepted that augmented feedback, provided by a human expert or a technical display, effectively enhances motor learning. However, discussion of the way to most effectively provide augmented feedback has been controversial. Related studies have focused primarily on simple or artificial tasks enhanced by visual feedback. Recently, technical advances have made it possible also to investigate more complex, realistic motor tasks and to implement not only visual, but also auditory, haptic, or multimodal augmented feedback. The aim of this review is to address the potential of augmented unimodal and multimodal feedback in the framework of motor learning theories. The review addresses the reasons for the different impacts of feedback strategies within or between the visual, auditory, and haptic modalities and the challenges that need to be overcome to provide appropriate feedback in these modalities, either in isolation or in combination. Accordingly, the design criteria for successful visual, auditory, haptic, and multimodal feedback are elaborated.

966 citations