Journal ArticleDOI

Augmented visual, auditory, haptic, and multimodal feedback in motor learning: A review

01 Feb 2013-Psychonomic Bulletin & Review (Springer-Verlag)-Vol. 20, Iss: 1, pp 21-53
TL;DR: The aim of this review is to address the potential of augmented unimodal and multimodal feedback in the framework of motor learning theories and the reasons for the different impacts of feedback strategies within or between the visual, auditory, and haptic modalities.
Abstract: It is generally accepted that augmented feedback, provided by a human expert or a technical display, effectively enhances motor learning. However, discussion of the way to most effectively provide augmented feedback has been controversial. Related studies have focused primarily on simple or artificial tasks enhanced by visual feedback. Recently, technical advances have made it possible also to investigate more complex, realistic motor tasks and to implement not only visual, but also auditory, haptic, or multimodal augmented feedback. The aim of this review is to address the potential of augmented unimodal and multimodal feedback in the framework of motor learning theories. The review addresses the reasons for the different impacts of feedback strategies within or between the visual, auditory, and haptic modalities and the challenges that need to be overcome to provide appropriate feedback in these modalities, either in isolation or in combination. Accordingly, the design criteria for successful visual, auditory, haptic, and multimodal feedback are elaborated.


Citations
Journal ArticleDOI
TL;DR: This work reviews the state-of-the-art techniques for controlling portable active lower limb prosthetic and orthotic P/O devices in the context of locomotive activities of daily living (ADL), and considers how these can be interfaced with the user’s sensory-motor control system.
Abstract: Technological advancements have led to the development of numerous wearable robotic devices for the physical assistance and restoration of human locomotion. While many challenges remain with respect to the mechanical design of such devices, it is at least equally challenging and important to develop strategies to control them in concert with the intentions of the user. This work reviews the state-of-the-art techniques for controlling portable active lower limb prosthetic and orthotic (P/O) devices in the context of locomotive activities of daily living (ADL), and considers how these can be interfaced with the user’s sensory-motor control system. This review underscores the practical challenges and opportunities associated with P/O control, which can be used to accelerate future developments in this field. Furthermore, this work provides a classification scheme for the comparison of the various control strategies. As a novel contribution, a general framework for the control of portable gait-assistance devices is proposed. This framework accounts for the physical and informatic interactions between the controller, the user, the environment, and the mechanical device itself. Such a treatment of P/Os – not as independent devices, but as actors within an ecosystem – is suggested to be necessary to structure the next generation of intelligent and multifunctional controllers. Each element of the proposed framework is discussed with respect to the role that it plays in the assistance of locomotion, along with how its states can be sensed as inputs to the controller. The reviewed controllers are shown to fit within different levels of a hierarchical scheme, which loosely resembles the structure and functionality of the nominal human central nervous system (CNS). Active and passive safety mechanisms are considered to be central aspects underlying all of P/O design and control, and are shown to be critical for regulatory approval of such devices for real-world use. 
The works discussed herein provide evidence that, while we are getting ever closer, significant challenges still exist for the development of controllers for portable powered P/O devices that can seamlessly integrate with the user’s neuromusculoskeletal system and are practical for use in locomotive ADL.

853 citations


Cites background from "Augmented visual, auditory, haptic,..."

  • ...Auditory cues can vary in stereo balance, pitch, timbre and volume [89], and therefore may transmit rich information via speakers or headphones....


Proceedings ArticleDOI
08 Oct 2013
TL;DR: The design and implementation of YouMove and its interactive mirror are discussed and a user study is presented in which YouMove was shown to improve learning and short-term retention by a factor of 2 compared to a traditional video demonstration.
Abstract: YouMove is a novel system that allows users to record and learn physical movement sequences. The recording system is designed to be simple, allowing anyone to create and share training content. The training system uses recorded data to train the user using a large-scale augmented reality mirror. The system trains the user through a series of stages that gradually reduce the user's reliance on guidance and feedback. This paper discusses the design and implementation of YouMove and its interactive mirror. We also present a user study in which YouMove was shown to improve learning and short-term retention by a factor of 2 compared to a traditional video demonstration.

263 citations


Cites background from "Augmented visual, auditory, haptic,..."

  • ...In particular, the availability and modality of feedback can greatly impact skill acquisition [30]....


Journal ArticleDOI
TL;DR: This review has shown that wearable systems are used mostly for the monitoring and provision of feedback on posture and upper extremity movements in stroke rehabilitation, and accelerometers and IMUs are the most frequently used sensors.
Abstract: The development of interactive rehabilitation technologies which rely on wearable sensing for upper body rehabilitation is attracting increasing research interest. This paper reviews related research with the aim: 1) To inventory and classify interactive wearable systems for movement and posture monitoring during upper body rehabilitation, regarding the sensing technology, system measurements and feedback conditions; 2) To gauge the wearability of the wearable systems; 3) To inventory the availability of clinical evidence supporting the effectiveness of related technologies. A systematic literature search was conducted in the following search engines: PubMed, ACM, Scopus and IEEE (January 2010–April 2016). Forty-five papers were included and discussed in a new cuboid taxonomy which consists of 3 dimensions: sensing technology, feedback modalities and system measurements. Wearable sensor systems were developed for persons in: 1) Neuro-rehabilitation: stroke (n = 21), spinal cord injury (n = 1), cerebral palsy (n = 2), Alzheimer's (n = 1); 2) Musculoskeletal impairment: ligament rehabilitation (n = 1), arthritis (n = 1), frozen shoulder (n = 1), bone trauma (n = 1); 3) Others: chronic obstructive pulmonary disease (n = 1), chronic pain rehabilitation (n = 1) and other general rehabilitation (n = 14). Accelerometers and inertial measurement units (IMU) are the most frequently used technologies (84% of the papers). They are mostly used in multiple sensor configurations to measure upper limb kinematics and/or trunk posture. Sensors are placed mostly on the trunk, upper arm, forearm, wrist, and finger. Typically, sensors are attachable rather than embedded in wearable devices and garments, although studies that embed and integrate sensors have been increasing in the last 4 years. 16 studies applied knowledge of results (KR) feedback, 14 studies applied knowledge of performance (KP) feedback and 15 studies applied both in various modalities. 
16 studies conducted their evaluation with patients and reported usability tests, while only three of them conducted clinical trials, including one randomized clinical trial. This review has shown that wearable systems are used mostly for the monitoring and provision of feedback on posture and upper extremity movements in stroke rehabilitation. The results indicated that accelerometers and IMUs are the most frequently used sensors, in most cases attached to the body through ad hoc contraptions for the purpose of improving range of motion and movement performance during upper body rehabilitation. Systems featuring sensors embedded in wearable appliances or garments are only beginning to emerge. Similarly, clinical evaluations are scarce and are further needed to provide evidence on effectiveness and pave the path towards implementation in clinical settings.

237 citations

Journal ArticleDOI
17 Dec 2013-PLOS ONE
TL;DR: Results confirm two hypotheses formulated in a preliminary study: pitch is by far the most used auditory dimension in sonification applications, and spatial auditory dimensions are almost exclusively used to sonify kinematic quantities.
Abstract: The field of sonification has progressed greatly over the past twenty years and currently constitutes an established area of research. This article aims at exploiting and organizing the knowledge accumulated in previous experimental studies to build a foundation for future sonification works. A systematic review of these studies may reveal trends in sonification design, and therefore support the development of design guidelines. To this end, we have reviewed and analyzed 179 scientific publications related to sonification of physical quantities. Using a bottom-up approach, we set up a list of conceptual dimensions belonging to both physical and auditory domains. Mappings used in the reviewed works were identified, forming a database of 495 entries. Frequency of use was analyzed among these conceptual dimensions as well as higher-level categories. Results confirm two hypotheses formulated in a preliminary study: pitch is by far the most used auditory dimension in sonification applications, and spatial auditory dimensions are almost exclusively used to sonify kinematic quantities. To detect successful as well as unsuccessful sonification strategies, assessment of mapping efficiency conducted in the reviewed works was considered. Results show that a proper evaluation of sonification mappings is performed only in a marginal proportion of publications. Additional aspects of the publication database were investigated: historical distribution of sonification works is presented, projects are classified according to their primary function, and the sonic material used in the auditory display is discussed. Finally, a mapping-based approach for characterizing sonification is proposed.
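Since pitch is by far the most used auditory dimension among the reviewed mappings, a minimal sketch of a pitch-based sonification mapping may be useful for illustration. This is not taken from any reviewed system; the function name, frequency range, and geometric (equal-interval) interpolation are all illustrative assumptions:

```python
def pitch_map(value, v_min, v_max, f_min=220.0, f_max=880.0):
    """Map a physical quantity onto pitch, the most common sonification
    mapping found in the review. Geometric interpolation makes equal value
    steps correspond to equal musical intervals."""
    t = (value - v_min) / (v_max - v_min)  # normalize to [0, 1]
    return f_min * (f_max / f_min) ** t    # frequency in Hz

print(pitch_map(0.5, 0.0, 1.0))  # midpoint maps to 440 Hz, one octave above f_min
```

A linear mapping to frequency is also possible, but perceived pitch is roughly logarithmic in frequency, which is why the geometric version is shown here.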

213 citations


Cites background from "Augmented visual, auditory, haptic,..."

  • ...Recent review studies [42,47] pointed out that evaluation of sonification systems is not systematic yet, although crucial from a design perspective....


Journal ArticleDOI
TL;DR: This review provides a unifying view of human and robot sharing task execution in scenarios where collaboration and cooperation between the two entities are necessary, and where the physical coupling ofhuman and robot is a vital aspect.
Abstract: As robotic devices are applied to problems beyond traditional manufacturing and industrial settings, we find that interaction between robots and humans, especially physical interaction, has become a fast developing field. Consider the application of robotics in healthcare, where we find telerobotic devices in the operating room facilitating dexterous surgical procedures, exoskeletons in the rehabilitation domain as walking aids and upper-limb movement assist devices, and even robotic limbs that are physically integrated with amputees who seek to restore their independence and mobility. In each of these scenarios, the physical coupling between human and robot, often termed physical human robot interaction (pHRI), facilitates new human performance capabilities and creates an opportunity to explore the sharing of task execution and control between humans and robots. In this review, we provide a unifying view of human and robot sharing task execution in scenarios where collaboration and cooperation between the two entities are necessary, and where the physical coupling of human and robot is a vital aspect. We define three key themes that emerge in these shared control scenarios, namely, intent detection, arbitration, and feedback. First, we explore methods for how the coupled pHRI system can detect what the human is trying to do, and how the physical coupling itself can be leveraged to detect intent. Second, once the human intent is known, we explore techniques for sharing and modulating control of the coupled system between robot and human operator. Finally, we survey methods for informing the human operator of the state of the coupled system, or the characteristics of the environment with which the pHRI system is interacting. 
At the conclusion of the survey, we present two case studies that exemplify shared control in pHRI systems, and specifically highlight the approaches used for the three key themes of intent detection, arbitration, and feedback for applications of upper limb robotic rehabilitation and haptic feedback from a robotic prosthesis for the upper limb. [DOI: 10.1115/1.4039145]

202 citations


Cites background from "Augmented visual, auditory, haptic,..."

  • ...Other groups provide additional strong evidence to support the addition of haptic cues to assist with task completion for performance enhancement [98,137,138]....


  • ...Leveraging a multimodal feedback strategy for complex tasks is also preferred by users over unimodal interactions [97] and well suited for complex tasks with high workload in one modality, so as to prevent cognitive overload and in turn enhance motor learning [98]....


References
Journal ArticleDOI
TL;DR: This paper reformulated the manipulator control problem as direct control of manipulator motion in operational space—the space in which the task is originally described—rather than as control of the task's corresponding joint space motion obtained only after geometric and kinematic transformation.
Abstract: This paper presents a unique real-time obstacle avoidance approach for manipulators and mobile robots based on the artificial potential field concept. Collision avoidance, traditionally considered a high-level planning problem, can be effectively distributed between different levels of control, allowing real-time robot operations in a complex environment. This method has been extended to moving obstacles by using a time-varying artificial potential field. We have applied this obstacle avoidance scheme to robot arm mechanisms and have used a new approach to the general problem of real-time manipulator control. We reformulated the manipulator control problem as direct control of manipulator motion in operational space—the space in which the task is originally described—rather than as control of the task's corresponding joint space motion obtained only after geometric and kinematic transformation. Outside the obstacles' regions of influence, we caused the end effector to move in a straight line with an...
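As an illustration of the artificial-potential-field idea, here is a minimal 2-D sketch: an attractive potential pulls the point toward the goal, and a repulsive term acts only inside the obstacle's region of influence. This is not Khatib's actual formulation for manipulators; the gains, step size, and influence radius are illustrative assumptions:

```python
import math

def potential_gradient_step(pos, goal, obstacle, step=0.05,
                            k_att=1.0, k_rep=0.5, rho0=1.0):
    """One gradient-descent step on an artificial potential field:
    an attractive term pulls toward the goal, and a repulsive term acts
    only inside the obstacle's region of influence (distance < rho0)."""
    fx = -k_att * (pos[0] - goal[0])           # attractive force
    fy = -k_att * (pos[1] - goal[1])
    dx, dy = pos[0] - obstacle[0], pos[1] - obstacle[1]
    rho = math.hypot(dx, dy)                   # distance to the obstacle
    if 0 < rho < rho0:                         # repulsion inside the influence region
        mag = k_rep * (1 / rho - 1 / rho0) / rho**2
        fx += mag * dx / rho
        fy += mag * dy / rho
    return (pos[0] + step * fx, pos[1] + step * fy)

# Descend the field from a start pose toward a goal with an obstacle in between;
# the repulsive term deflects the path around the obstacle.
pos = (0.0, 0.2)
for _ in range(400):
    pos = potential_gradient_step(pos, goal=(2.0, 0.0), obstacle=(1.0, 0.3))
```

As the abstract notes, gradient descent on such a field is a local method: it can stall in local minima, which is one reason the paper treats it as a low-level control layer rather than a complete planner.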

6,515 citations


"Augmented visual, auditory, haptic,..." refers background in this paper

  • ...Intermediate steps from zero-impedance control to position control range from constraints in position (e.g., provided by path control; Khatib, 1986; Vallery, Guidali, Duschau-Wicke, & Riener, 2009), and further over predefined position and soft time constraints (e....


Journal ArticleDOI
TL;DR: A mathematical model is formulated which is shown to predict both the qualitative features and the quantitative details observed experimentally in planar, multijoint arm movements, and is successful only when formulated in terms of the motion of the hand in extracorporal space.
Abstract: This paper presents studies of the coordination of voluntary human arm movements. A mathematical model is formulated which is shown to predict both the qualitative features and the quantitative details observed experimentally in planar, multijoint arm movements. Coordination is modeled mathematically by defining an objective function, a measure of performance for any possible movement. The unique trajectory which yields the best performance is determined using dynamic optimization theory. In the work presented here, the objective function is the square of the magnitude of jerk (rate of change of acceleration) of the hand integrated over the entire movement. This is equivalent to assuming that a major goal of motor coordination is the production of the smoothest possible movement of the hand. Experimental observations of human subjects performing voluntary unconstrained movements in a horizontal plane are presented. They confirm the following predictions of the mathematical model: unconstrained point-to-point motions are approximately straight with bell-shaped tangential velocity profiles; curved motions (through an intermediate point or around an obstacle) have portions of low curvature joined by portions of high curvature; at points of high curvature, the tangential velocity is reduced; the durations of the low-curvature portions are approximately equal. The theoretical analysis is based solely on the kinematics of movement independent of the dynamics of the musculoskeletal system and is successful only when formulated in terms of the motion of the hand in extracorporal space. The implications with respect to movement organization are discussed.
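The minimum-jerk objective described above admits a well-known closed-form solution for point-to-point movements that start and end at rest. A short sketch (movement amplitude and duration are made-up example values) illustrating the predicted straight path and bell-shaped velocity profile:

```python
def min_jerk(x0, xf, T, t):
    """Closed-form minimum-jerk position for a point-to-point movement
    that starts and ends at rest: a quintic in normalized time."""
    tau = t / T                                   # normalized time in [0, 1]
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5    # unit position profile
    return x0 + (xf - x0) * s

def min_jerk_velocity(x0, xf, T, t):
    """Analytic time derivative of the minimum-jerk position profile."""
    tau = t / T
    ds = (30 * tau**2 - 60 * tau**3 + 30 * tau**4) / T
    return (xf - x0) * ds

# Hypothetical 0.3 m reach in 1 s, sampled at 101 points.
xs = [min_jerk(0.0, 0.3, 1.0, i / 100) for i in range(101)]
vs = [min_jerk_velocity(0.0, 0.3, 1.0, i / 100) for i in range(101)]
# Velocity is bell-shaped: zero at both endpoints, single peak at midtime.
```

The peak speed of this profile is 1.875·(xf − x0)/T at the movement's temporal midpoint, matching the symmetric, bell-shaped tangential velocity profiles reported in the abstract.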

4,226 citations

Journal ArticleDOI
24 Jan 2002-Nature
TL;DR: The nervous system seems to combine visual and haptic information in a fashion that is similar to a maximum-likelihood integrator, and this model behaved very similarly to humans in a visual–haptic task.
Abstract: When a person looks at an object while exploring it with their hand, vision and touch both provide information for estimating the properties of the object. Vision frequently dominates the integrated visual-haptic percept, for example when judging size, shape or position, but in some circumstances the percept is clearly affected by haptics. Here we propose that a general principle, which minimizes variance in the final estimate, determines the degree to which vision or haptics dominates. This principle is realized by using maximum-likelihood estimation to combine the inputs. To investigate cue combination quantitatively, we first measured the variances associated with visual and haptic estimation of height. We then used these measurements to construct a maximum-likelihood integrator. This model behaved very similarly to humans in a visual-haptic task. Thus, the nervous system seems to combine visual and haptic information in a fashion that is similar to a maximum-likelihood integrator. Visual dominance occurs when the variance associated with visual estimation is lower than that associated with haptic estimation.
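The maximum-likelihood combination rule described here reduces to a reliability-weighted average, with each cue weighted by its inverse variance. A minimal numerical sketch (the height values and variances below are made up for illustration, not data from the study):

```python
def mle_combine(s_vis, var_vis, s_hap, var_hap):
    """Reliability-weighted (maximum-likelihood) combination of two cues:
    each cue is weighted by its inverse variance, and the combined variance
    is never larger than that of either cue alone."""
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_hap)
    estimate = w_vis * s_vis + (1 - w_vis) * s_hap
    var_comb = 1 / (1 / var_vis + 1 / var_hap)
    return estimate, var_comb

# Hypothetical height judgments (cm); vision is the more reliable cue here,
# so the combined estimate is pulled toward the visual value.
est, var_comb = mle_combine(s_vis=5.0, var_vis=0.1, s_hap=5.6, var_hap=0.4)
```

This makes the paper's point about dominance concrete: vision "wins" exactly when its variance is lower, and haptics takes over when visual noise is increased.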

4,142 citations

Proceedings ArticleDOI
06 Jun 1984
TL;DR: In this paper, techniques for controlling manipulator behaviour are presented which result in a unified approach to kinematically constrained motion, dynamic interaction, target acquisition and obstacle avoidance.
Abstract: Manipulation fundamentally requires a manipulator to be mechanically coupled to the object being manipulated. A consideration of the physical constraints imposed by dynamic interaction shows that control of a vector quantity such as position or force is inadequate and that control of the manipulator impedance is also necessary. Techniques for control of manipulator behaviour are presented which result in a unified approach to kinematically constrained motion, dynamic interaction, target acquisition and obstacle avoidance.
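A minimal 1-D sketch of the impedance-control idea: the controller regulates the dynamic relation between motion and force by rendering a virtual spring-damper, rather than commanding position or force alone. The gains, mass, and time step below are illustrative assumptions, not values from the paper:

```python
def simulate_impedance(x0, v0, x_target, K=50.0, B=14.0, m=1.0,
                       dt=0.001, steps=5000):
    """1-D point mass under the impedance law F = K*(x_target - x) - B*v:
    the commanded force emulates a spring-damper attached to the target,
    so interaction forces stay bounded when the mass is perturbed."""
    x, v = x0, v0
    for _ in range(steps):
        f = K * (x_target - x) - B * v   # commanded actuator force
        v += (f / m) * dt                # semi-implicit Euler integration
        x += v * dt
    return x, v

# The mass settles at the virtual equilibrium with near-zero velocity.
x_final, v_final = simulate_impedance(x0=0.0, v0=0.0, x_target=0.1)
```

With B chosen near critical damping (B ≈ 2√(K·m)), the response converges without overshoot; during contact, the same law limits force to K times the penetration, which is the property pure position control lacks.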

3,292 citations

Journal ArticleDOI
TL;DR: This work shows that the optimal strategy in the face of uncertainty is to allow variability in redundant (task-irrelevant) dimensions, and proposes an alternative theory based on stochastic optimal feedback control, which emerges naturally from this framework.
Abstract: A central problem in motor control is understanding how the many biomechanical degrees of freedom are coordinated to achieve a common goal. An especially puzzling aspect of coordination is that behavioral goals are achieved reliably and repeatedly with movements rarely reproducible in their detail. Existing theoretical frameworks emphasize either goal achievement or the richness of motor variability, but fail to reconcile the two. Here we propose an alternative theory based on stochastic optimal feedback control. We show that the optimal strategy in the face of uncertainty is to allow variability in redundant (task-irrelevant) dimensions. This strategy does not enforce a desired trajectory, but uses feedback more intelligently, correcting only those deviations that interfere with task goals. From this framework, task-constrained variability, goal-directed corrections, motor synergies, controlled parameters, simplifying rules and discrete coordination modes emerge naturally. We present experimental results from a range of motor tasks to support this theory.
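The "correct only task-goal-relevant deviations" strategy can be illustrated with a toy redundant task: two effectors whose summed output must match a target, as in bimanual force production. The task setup, noise level, and gain are illustrative assumptions, not the paper's model:

```python
import random

def minimum_intervention_step(f1, f2, target, gain=0.5):
    """Correct only the task-relevant dimension (the sum f1 + f2); the
    redundant dimension (the difference f1 - f2) is left uncorrected,
    as the minimal-intervention principle prescribes."""
    error = target - (f1 + f2)
    return f1 + gain * error / 2, f2 + gain * error / 2

random.seed(0)
f1, f2, target = 1.0, 1.0, 2.0
for _ in range(1000):
    f1 += random.gauss(0, 0.05)   # motor noise perturbs both effectors
    f2 += random.gauss(0, 0.05)
    f1, f2 = minimum_intervention_step(f1, f2, target)
# The sum is held near the target while the difference drifts as a random
# walk, reproducing the structured ("task-constrained") variability the
# theory predicts: reliable goals with non-reproducible movement details.
```

Because no control effort is spent on the redundant dimension, variability accumulates there, which is exactly the pattern of task-irrelevant variance the abstract describes.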

2,776 citations