
Showing papers by "Robert V. Kenyon published in 2009"


Journal ArticleDOI
TL;DR: Significant increases in the RMS values of the lower-limb joint angles suggest that as visually-induced postural instability increased, the body was primarily controlled as a multi-segmental structure instead of a single-link inverted pendulum, with the knee playing a key role in postural stabilization.

43 citations


Proceedings ArticleDOI
13 Nov 2009
TL;DR: A servo-controlled glove to assist extension of individual digits to promote practice of grasp-and-release movements with the hand, the PneuGlove, shows promise for rehabilitative training of hand movements after stroke.
Abstract: Hand impairment is common following stroke and is often resistant to traditional therapy methods. Successful interventions have stressed the importance of repeated practice to facilitate rehabilitation. Thus, we have developed a servo-controlled glove to assist extension of individual digits and promote practice of grasp-and-release movements with the hand. This glove, the PneuGlove, permits free movement of the arm throughout its workspace. A novel immersive virtual reality environment was created for training movement in conjunction with the device. Seven stroke survivors with chronic hand impairment participated in 18 training sessions with the PneuGlove over 6 weeks. Overall, subjects displayed a significant 6-point improvement in the upper extremity score on the Fugl-Meyer assessment, and this increase was maintained at the evaluation held one month after conclusion of all training (p < 0.01). The majority of this gain came from an increase in the hand/wrist score (3.8-point increase, p < 0.01). Thus, the system shows promise for rehabilitative training of hand movements after stroke.

40 citations


Journal ArticleDOI
TL;DR: It is found that even when healthy young adults stand and walk on a stable support surface, they are unable to ignore wide field of view visual motion and they adapt their postural orientation to the parameters of the visual motion.
Abstract: Orientation in space is a perceptual variable intimately related to postural orientation that relies on visual and vestibular signals to correctly identify our position relative to vertical. We have combined a virtual environment with motion of a posture platform to produce visual-vestibular conditions that allow us to explore how motion of the visual environment may affect perception of vertical and, consequently, affect postural stabilizing responses. In order to involve a higher level perceptual process, we needed to create a visual environment that was immersive. We did this by developing visual scenes that possess contextual information using color, texture, and 3-dimensional structures. Update latency of the visual scene was close to physiological latencies of the vestibulo-ocular reflex. Using this system we found that even when healthy young adults stand and walk on a stable support surface, they are unable to ignore wide field of view visual motion and they adapt their postural orientation to the parameters of the visual motion. Balance training within our environment elicited measurable rehabilitation outcomes. Thus we believe that virtual environments can serve as a clinical tool for evaluation and training of movement in situations that closely reflect conditions found in the physical world.

36 citations


Proceedings ArticleDOI
13 Nov 2009
TL;DR: An evaluation of early results of this novel post-stroke robotic-aided therapy trial that incorporates these ideas in a large VR system and simultaneously employs the patient, the therapist, and the technology to accomplish effective therapy is presented.
Abstract: Recent research has suggested that enhanced retraining for stroke patients, using haptics (robotic forces) and graphics (visual display) to generate a practice environment that artificially enhances error rather than reducing it, can stimulate new learning and foster accelerated recovery. We present an evaluation of early results from this novel post-stroke robotic-aided therapy trial, which incorporates these ideas in a large VR system and simultaneously engages the patient, the therapist, and the technology to accomplish effective therapy.

28 citations


Journal ArticleDOI
TL;DR: The ability of subjects to appreciate size-constancy in an immersive virtual environment was studied while scene complexity, stereovision, and motion parallax were manipulated, resulting in twelve different viewing conditions.
Abstract: An important aspect of a subject's perception of virtual objects in a virtual environment is whether the size of the object is perceived as it would be in the physical world, a property known as size-constancy. The ability of subjects to appreciate size-constancy in an immersive virtual environment was studied while scene complexity, stereovision, and motion parallax were manipulated, resulting in twelve different viewing conditions. Under each visual condition, 18 subjects made size judgments of a virtual object displayed at five different distances from them. Responses from the majority of our population demonstrated that scene complexity and stereovision have a significant impact on subjects' ability to appreciate size-constancy. In contrast, motion parallax, whether produced by moving the virtual environment or by the movements of the observer alone, proved not to be a significant factor in determining size-constancy performance. Consequently, size-constancy is best obtained when scene complexity and stereovision are components of the viewing conditions.

16 citations


Journal ArticleDOI
TL;DR: Both the spatial and temporal kinematics of the reach movement were affected by the motion of the visual field, suggesting interference with the ability to simultaneously process two consecutive stimuli.
Abstract: Reaching toward a visual target involves the transformation of visual information into appropriate motor commands. Complex movements often occur either while we are moving or when objects in the world move around us, changing the spatial relationship between our hand and the space in which we plan to reach. This study investigated whether rotation of a wide field-of-view immersive scene produced by a virtual environment affected online visuomotor control during a double-step reaching task. A total of 20 seated healthy subjects reached for a visual target that either remained stationary in space or unpredictably shifted to a second position (to the right or left of its initial position) at different inter-stimulus intervals. Eleven subjects completed two experiments that were identical except for the duration of the target's appearance: the final target was visible either throughout the entire trial or for only 200 ms. Movements were performed under two visual field conditions: the virtual scene was either matched to the subject's head motion or rolled about the line of sight counterclockwise at 130°/s. Nine additional subjects completed a third experiment in which the direction of the rolling scene was manipulated (clockwise and counterclockwise). Our results showed that while all subjects were able to modify their hand trajectory in response to the target shift under both visual scenes, some of the double-step movements contained a pause before the trajectory changed direction. Furthermore, roll motion of the scene affected both the timing and the kinematics of the reach, influencing both its planning and its execution. Changes in the proportion of trajectory types and the significantly longer pauses that occurred during the reach in the presence of roll motion suggest that background roll motion mainly interfered with the ability to update the visuomotor response to the target displacement. The reaching movement was also affected differentially by the direction of roll motion: visual motion had a stronger effect on movements made in the direction of visual roll (e.g., leftward movements during counterclockwise roll). Further analysis of the hand path revealed significant changes during roll motion in both the area and the shape of the 95% tolerance ellipses constructed from the hand position following termination of the main movement. These changes corresponded with a hand drift suggesting that subjects relied more on proprioceptive information to estimate arm position in space during roll motion of the visual field. We conclude that both the spatial and temporal kinematics of the reach were affected by motion of the visual field, suggesting interference with the ability to simultaneously process two consecutive stimuli.
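The abstract does not give the exact ellipse construction, so as a hedged illustration: the area and axis ratio of a 95% ellipse fit to 2D endpoint data are commonly obtained from the eigenvalues of the sample covariance, scaled by the chi-square quantile for two degrees of freedom. The function name and the chi-square approximation below are assumptions, not the paper's method.

```python
import numpy as np

def tolerance_ellipse(points, chi2_95=5.991):
    """Area and axis ratio of a 95% ellipse fit to 2D points (N x 2).

    chi2_95 is the chi-square quantile for 2 dof at p = 0.95; scaling a
    covariance ellipse by this value is a common approximation, and the
    paper's exact construction may differ.
    """
    cov = np.cov(points, rowvar=False)        # 2x2 covariance of x, y
    eigvals = np.linalg.eigvalsh(cov)         # eigenvalues in ascending order
    area = np.pi * chi2_95 * np.sqrt(eigvals[0] * eigvals[1])
    shape = np.sqrt(eigvals[1] / eigvals[0])  # major/minor axis length ratio
    return area, shape
```

A larger area would indicate greater endpoint scatter, and a shape ratio far from 1 would indicate scatter elongated along one direction.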

10 citations


Proceedings ArticleDOI
14 Mar 2009
TL;DR: A hybrid strategy that partitions the scanning task of a frame image by both region and scale is proposed and a novel data structure called a scanning tree is designed to organize the computing nodes.
Abstract: We present a coordinated ensemble of scalable computing techniques to accelerate a number of key tasks needed for vision-based gesture interaction, using the cluster that drives a large display system. We propose a hybrid strategy that partitions the scanning task of a frame image by both region and scale. Based on this hybrid strategy, a novel data structure called a scanning tree is designed to organize the computing nodes. The effectiveness of the proposed solution was tested by incorporating it into a gesture interface controlling an ultra-high-resolution tiled display wall.
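The abstract does not detail the scanning tree itself, but the hybrid region-and-scale partition it is built on can be sketched. The function below is a hypothetical illustration (the names, grid layout, and round-robin assignment are assumptions): it splits a sliding-window scan of one frame into (region, scale) tasks and distributes them over the cluster's compute nodes.

```python
from itertools import product

def partition_scan(width, height, scales, n_nodes, region_grid=(2, 2)):
    """Split a frame scan into (region, scale) tasks and assign them
    round-robin to n_nodes compute nodes.

    Each region is (x, y, w, h); each task pairs one region with one
    detection scale, so work is partitioned by both region and scale.
    """
    cols, rows = region_grid
    rw, rh = width // cols, height // rows
    regions = [(c * rw, r * rh, rw, rh)
               for r in range(rows) for c in range(cols)]
    tasks = list(product(regions, scales))     # hybrid: region x scale
    assignment = {i: [] for i in range(n_nodes)}
    for i, task in enumerate(tasks):
        assignment[i % n_nodes].append(task)
    return assignment
```

In an actual system the assignment would presumably be organized hierarchically (the paper's scanning tree) rather than round-robin, so that partial detection results can be merged up the tree.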

5 citations


Journal ArticleDOI
TL;DR: In this article, to examine the evolution of inter-segmental coordination over time, the authors extended a previously developed multivariate model of postural coordination during quiet stance (Kuo et al. 1998).

2 citations


Proceedings ArticleDOI
09 Mar 2009
TL;DR: HAMERA requires only the hardware of a mainstream mobile device augmented with a single accelerometer, and provides high-quality hand image collection, on-device profile construction, and easy usability.
Abstract: This paper presents the design and implementation of HAMERA (Hand cAMERA), a novel device for hand profile construction in pervasive environments. With the help of software, HAMERA requires only the hardware of a mainstream mobile device augmented with a single accelerometer, and provides advantageous features including high-quality hand image collection, on-device profile construction, and easy usability. Evaluation results show the efficiency of HAMERA in serving these goals.

1 citation