Proceedings ArticleDOI

Interactive gesture interface for intelligent wheelchairs

30 Jul 2000 - Vol. 2, pp 789-792
TL;DR: An intelligent wheelchair whose motion can be controlled by the user's face direction is presented, together with a gesture interface, available while the user is not riding, that lets the user move the wheelchair by gestures.
Abstract: With the increase in the number of senior citizens, there is a growing demand for human-friendly wheelchairs as mobility aids. To meet this need, we presented an intelligent wheelchair whose motion can be controlled by the user's face direction: the user can drive it simply by looking in the direction he/she wants to go. In addition to this human interface, the paper proposes a gesture interface that is available while the user is not riding; he/she can move the wheelchair by gestures. Gesture is a good means of giving commands because it works in noisy conditions and in public places where one may not want to speak loudly. However, the environments in which wheelchairs are used cannot be controlled, which makes gesture recognition difficult. We propose an interactive way to solve this problem: when the wheelchair is not certain about the meaning of a user's gesture, it guesses the meaning and moves a little accordingly to show its guess to the user. It then observes the user's response and judges whether its guess is correct. This cycle is repeated until the wheelchair understands the user's intention.
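The guess-move-observe cycle in the abstract is essentially a confirmation loop. Below is a minimal sketch of that cycle; the recognizer, motion, and response-reading functions are hypothetical stand-ins for the paper's vision components, not the authors' actual code, and the confidence threshold is an assumed tunable parameter.

```python
# Minimal sketch of the guess-move-observe cycle described in the abstract.
# recognize_gesture, move_slightly, and user_confirms are invented stubs,
# not the authors' API.
import random

def recognize_gesture(frame):
    """Stub recognizer: returns (command, confidence) for one camera frame."""
    return random.choice(["come", "stop", "turn_left"]), random.random()

def move_slightly(command):
    print(f"wheelchair makes a small '{command}' motion to show its guess")

def user_confirms(frame):
    """Stub: observe the user's reaction and decide if the guess was right."""
    return random.random() > 0.5

def interpret_gesture(capture, threshold=0.8):
    """Repeat guess -> small move -> observe until the intention is clear."""
    while True:
        command, confidence = recognize_gesture(capture())
        if confidence >= threshold:
            return command               # recognizer is already certain
        move_slightly(command)           # show the current guess to the user
        if user_confirms(capture()):
            return command               # user's response validates the guess
        # otherwise the response rejects the guess; loop and try again

print(interpret_gesture(lambda: None))
```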
Citations
Journal ArticleDOI
TL;DR: A system is developed that enables wheelchair users to interact with items placed beyond their arm's length with the help of Augmented Reality (AR) and Radio Frequency Identification (RFID) technologies, providing an opportunity to improve equality of access.

89 citations


Cites background from "Interactive gesture interface for i..."

  • ...[6] propose an intelligent wheelchair that can be controlled by gestures....

Journal ArticleDOI
TL;DR: A novel approach for programming robots interactively through a multimodal interface, whose key characteristic is that the user can provide feedback interactively at any time, during both the programming and the execution phase.
Abstract: As robots enter the human environment and come into contact with inexperienced users, they need to be able to interact with users in a multimodal fashion—keyboard and mouse are no longer acceptable as the only input modalities. In this paper we introduce a novel approach for programming robots interactively through a multimodal interface. The key characteristic of this approach is that the user can provide feedback interactively at any time—during both the programming and the execution phase. The framework takes a three-step approach to the problem: multimodal recognition, intention interpretation, and prioritized task execution. The multimodal recognition module translates hand gestures and spontaneous speech into a structured symbolic data stream without abstracting away the user’s intent. The intention interpretation module selects the appropriate primitives to generate a task based on the user’s input, the system’s current state, and robot sensor data. Finally, the prioritized task execution module se...
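The three-step decomposition named in the abstract (multimodal recognition, intention interpretation, prioritized task execution) can be pictured as a short pipeline. The sketch below is an assumed illustration of that flow; the event structure, the toy interpretation rule, and the interrupt queue are invented for this example and are not the paper's implementation.

```python
# Illustrative sketch of a three-step multimodal pipeline. All names and
# data structures here are assumptions, not the authors' implementation.
from dataclasses import dataclass

@dataclass
class SymbolicEvent:
    modality: str      # "gesture" or "speech"
    symbol: str        # e.g. "point_left", "go there"
    timestamp: float

def multimodal_recognition(raw_inputs):
    """Step 1: translate gestures/speech into a symbolic event stream."""
    return [SymbolicEvent(m, s, t) for (m, s, t) in raw_inputs]

def intention_interpretation(events, robot_state):
    """Step 2: pick task primitives from user input, state, sensor data."""
    # toy rule: a pointing gesture plus a spoken deictic selects a move task
    symbols = {e.symbol for e in events}
    if "point_left" in symbols and "go there" in symbols:
        return [("move", "left")]
    return []

def prioritized_execution(tasks, interrupt_queue):
    """Step 3: run tasks, letting user feedback preempt them at any time."""
    for task in tasks:
        if interrupt_queue:                 # feedback arrived mid-execution
            return interrupt_queue.pop(0)   # higher-priority correction wins
        print("executing", task)
    return None

events = multimodal_recognition([("gesture", "point_left", 0.1),
                                 ("speech", "go there", 0.3)])
prioritized_execution(intention_interpretation(events, {}), [])
```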

60 citations


Cites background or methods from "Interactive gesture interface for i..."

  • ...Kuno et al. (2000) have developed a wheelchair robot controlled by detecting hand gestures with a camera....

  • ...Examples of such robots include pet robots (Fujita and Kitano 1998), tour-guiding robots (Thrun et al. 1999), entertainment robots (Ishida et al. 2001), intelligent wheelchairs (Kuno et al. 2000; Matsumoto, Ino, and Ogasawara 2001), and mobile vacuuming robots (Musser 2003)....

  • ...The mobile robot interaction system by Kuno et al. (2000) also used a gesture-spotting strategy based on DTW. Starner and Pentland (1995) applied hidden Markov models (HMMs; often used to model doubly stochastic processes) to visual hand recognition of dynamic American Sign Language (ASL)....

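The excerpt above mentions gesture spotting based on dynamic time warping (DTW). For readers unfamiliar with it, the following is a textbook DTW distance, not code from either cited system; it shows why a stored gesture template can match a slower or faster performance of the same motion.

```python
# Textbook dynamic time warping (DTW) distance between two 1-D feature
# sequences; an illustration only, not code from the cited systems.

def dtw_distance(a, b):
    """DTW distance between sequences a and b of possibly different length."""
    inf = float("inf")
    n, m = len(a), len(b)
    # cost[i][j] = best alignment cost of a[:i] against b[:j]
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# A stored template matches a slower version of the same motion closely:
template = [0.0, 0.5, 1.0, 0.5, 0.0]
observed = [0.0, 0.2, 0.5, 0.9, 1.0, 0.6, 0.1]
print(dtw_distance(template, observed))  # small despite different lengths
```

Because the alignment path can stretch or compress either sequence, the distance stays small even though the observed gesture takes more frames than the template.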
Proceedings ArticleDOI
14 Jul 2009
TL;DR: Based on the biopotential signals, the interface recognizes the operator's gestures, such as closing the jaw, wrinkling the forehead, and looking towards left and right; by combining these gestures, the operator controls the linear and turning motions, velocity, and steering angle of the wheelchair, as discussed by the authors.
Abstract: In our previous study, we presented a nonverbal interface that used biopotential signals, such as electrooculographic (EOG) and electromyographic (EMG) signals, captured by a simple brain-computer interface. In this paper, we apply the nonverbal interface to hands-free control of an electric wheelchair. Based on the biopotential signals, the interface recognizes the operator's gestures, such as closing the jaw, wrinkling the forehead, and looking towards left and right. By combining these gestures, the operator controls linear and turning motions, velocity, and the steering angle of the wheelchair. Experimental results for navigating the wheelchair in a hallway environment confirmed the feasibility of the proposed method.
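The abstract lists the recognized gestures but not the exact mapping to chair motions. A minimal sketch of one plausible mapping is given below; the gesture labels come from the abstract, while the specific command assignments and numeric values are assumptions for illustration.

```python
# Sketch of how recognized biopotential gestures could be combined into
# wheelchair commands. The mapping and numbers are invented, not the
# authors' actual control scheme.

def command_from_gestures(jaw_closed, forehead_wrinkled, gaze):
    """Map one set of recognized gestures to a wheelchair control action."""
    if jaw_closed and forehead_wrinkled:
        return ("stop", 0.0)
    if jaw_closed:
        return ("forward", 0.5)            # linear motion at half speed
    if gaze == "left":
        return ("turn", -30.0)             # steering angle in degrees
    if gaze == "right":
        return ("turn", +30.0)
    return ("hold", 0.0)                   # no recognized gesture

print(command_from_gestures(False, False, "left"))  # ('turn', -30.0)
```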

52 citations


Cites methods from "Interactive gesture interface for i..."

  • ...In this paper, we apply the nonverbal interface to hands-free control of an electric wheelchair....

Journal ArticleDOI
TL;DR: Methods by which an interactive reinforcement learning agent can learn from human social feedback, and ways of delivering that feedback, are reviewed.
Abstract: A reinforcement learning agent learns how to perform a task by interacting with the environment. The use of reinforcement learning in real-life applications has been limited because of the sample-efficiency problem. Interactive reinforcement learning has been developed to speed up the agent's learning and to facilitate learning from ordinary people by allowing them to provide social feedback, e.g., evaluative feedback, advice, or instruction. Inspired by real-life biological learning scenarios, there could be many ways to provide feedback for agent learning, such as via hardware-delivered signals or natural interaction like facial expressions, speech, or gestures. The agent can even learn from feedback via unimodal or multimodal sensory input. This paper reviews methods for an interactive reinforcement learning agent to learn from human social feedback and the ways of delivering feedback. Finally, we discuss some open problems and possible future research directions.
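A minimal, self-contained illustration of the surveyed idea, evaluative human feedback added to the environment reward to speed up learning, is sketched below. The tiny corridor task, the Q-learning parameters, and the feedback function are all invented for this example.

```python
# Sketch of interactive reinforcement learning with evaluative human
# feedback. Task, parameters, and the feedback stub are illustrative.
import random

ACTIONS = ["left", "right"]
q = {(s, a): 0.0 for s in range(5) for a in ACTIONS}

def human_feedback(state, action):
    """Stand-in for a person's good/bad signal (e.g. gesture or speech)."""
    return 1.0 if action == "right" else -1.0   # trainer prefers moving right

alpha, gamma, epsilon = 0.5, 0.9, 0.2
state = 0
for step in range(200):
    action = (random.choice(ACTIONS) if random.random() < epsilon
              else max(ACTIONS, key=lambda a: q[(state, a)]))
    next_state = min(state + 1, 4) if action == "right" else max(state - 1, 0)
    env_reward = 1.0 if next_state == 4 else 0.0
    # evaluative human feedback is simply added to the environment reward
    r = env_reward + human_feedback(state, action)
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    q[(state, action)] += alpha * (r + gamma * best_next - q[(state, action)])
    state = 0 if next_state == 4 else next_state

print({a: round(q[(0, a)], 2) for a in ACTIONS})  # 'right' should dominate
```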

50 citations


Cites methods from "Interactive gesture interface for i..."

  • ...used gestures to control the direction of an intelligent wheelchair and proposed to recognize unknown gestures by interaction with the human user [53]....

Journal ArticleDOI
TL;DR: Results are presented that show that the topology-preserving quality of GNG allows generalization between gestured commands and that learning progresses toward emulation of an associative memory that maps input gesture to desired action.
Abstract: Recognition of human gestures is an active area of research integral for the development of intuitive human-machine interfaces for ubiquitous computing and assistive robotics. In particular, such systems are key to effective environmental designs that facilitate aging in place. Typically, gesture recognition takes the form of template matching in which the human participant is expected to emulate a choreographed motion as prescribed by the researchers. A corresponding robotic action is then a one-to-one mapping of the template classification to a library of distinct responses. In this paper, we explore a recognition scheme based on the growing neural gas (GNG) algorithm that places no initial constraints on the user to perform gestures in a specific way. Motion descriptors extracted from sequential skeletal depth data are clustered by GNG and mapped directly to a robotic response that is refined through reinforcement learning. A simple good/bad reward signal is provided by the user. This paper presents results that show that the topology-preserving quality of GNG allows generalization between gestured commands. Experimental results using an automated reward are presented that compare learning results involving single nodes versus results involving the influence of node neighborhoods. Although separability of input data influences the speed of learning convergence for a given neighborhood radius, it is shown that learning progresses toward emulation of an associative memory that maps input gesture to desired action.
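A heavily simplified sketch of the mapping the abstract describes is shown below: a motion descriptor is matched to its nearest node, the node's preferred action is executed, and the user's good/bad reward updates the node and its topological neighbors. The GNG graph is assumed to be already trained; the growing and edge-aging steps of the real algorithm are omitted, and all names and numbers are illustrative.

```python
# Simplified nearest-node action selection with reward refinement over a
# (pre-trained) GNG-style graph. Growing/pruning steps are omitted.
import random

ACTIONS = ["wave_back", "approach", "stop"]

class Node:
    def __init__(self, w):
        self.w = w                                  # position in feature space
        self.value = {a: 0.0 for a in ACTIONS}      # learned action values
        self.neighbors = []                         # topological links

def respond(nodes, descriptor, get_reward, lr=0.3, neighbor_lr=0.1):
    winner = min(nodes, key=lambda n: abs(n.w - descriptor))
    action = max(ACTIONS, key=lambda a: winner.value[a])
    reward = get_reward(action)                     # user's good/bad signal
    winner.value[action] += lr * (reward - winner.value[action])
    for nb in winner.neighbors:                     # neighborhood influence
        nb.value[action] += neighbor_lr * (reward - nb.value[action])
    return action

a, b = Node(0.0), Node(1.0)
a.neighbors, b.neighbors = [b], [a]
for _ in range(50):
    respond([a, b], random.random(),
            lambda act: 1.0 if act == "approach" else -1.0)
print(max(ACTIONS, key=lambda x: a.value[x]))       # converges to 'approach'
```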

41 citations


Cites methods from "Interactive gesture interface for i..."

  • ...For these reasons, GNG is the clustering method explored in this paper....

References
Journal ArticleDOI
TL;DR: An unsupervised technique for visual learning is presented, which is based on density estimation in high-dimensional spaces using an eigenspace decomposition and is applied to the probabilistic visual modeling, detection, recognition, and coding of human faces and nonrigid objects.
Abstract: We present an unsupervised technique for visual learning, which is based on density estimation in high-dimensional spaces using an eigenspace decomposition. Two types of density estimates are derived for modeling the training data: a multivariate Gaussian (for unimodal distributions) and a mixture-of-Gaussians model (for multimodal distributions). Those probability densities are then used to formulate a maximum-likelihood estimation framework for visual search and target detection for automatic object recognition and coding. Our learning technique is applied to the probabilistic visual modeling, detection, recognition, and coding of human faces and nonrigid objects, such as hands.
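The core construction, density estimation in an eigenspace, can be sketched in a few lines: project data onto the top principal components and score new samples with a Gaussian in that subspace. The sketch below is the standard PCA-plus-Gaussian idea only; the paper's full estimator also models the residual (out-of-subspace) component and a mixture-of-Gaussians variant.

```python
# PCA + Gaussian density in the principal subspace, as an illustration of
# eigenspace density estimation. Dimensions and data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10)) @ rng.normal(size=(10, 10))  # training data
mean = X.mean(axis=0)
# eigenspace decomposition of the covariance
eigvals, eigvecs = np.linalg.eigh(np.cov((X - mean).T))
top = np.argsort(eigvals)[::-1][:3]              # keep 3 principal directions
U, lam = eigvecs[:, top], eigvals[top]

def log_likelihood(x):
    """Gaussian log-density of x within the principal subspace."""
    y = U.T @ (x - mean)                          # project into eigenspace
    return -0.5 * (np.sum(y**2 / lam)
                   + np.sum(np.log(2 * np.pi * lam)))

# a training sample scores far higher than an implausible outlier
print(log_likelihood(X[0]), log_likelihood(rng.normal(size=10) * 50))
```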

1,624 citations

Book ChapterDOI
TL;DR: A behavior-based approach was used to establish sufficient on-board autonomy at minimal cost and material usage, while achieving high efficiency, sufficient safety, transparency in appearance, and extendability, and initial results are highly encouraging.
Abstract: A brief survey of research in the development of autonomy in wheelchairs is presented and AAI's R&D to build a series of intelligent autonomous wheelchairs is discussed. A standardized autonomy management system that can be installed on readily available power chairs which have been well-engineered over the years has been developed and tested. A behavior-based approach was used to establish sufficient on-board autonomy at minimal cost and material usage, while achieving high efficiency, sufficient safety, transparency in appearance, and extendability. So far, the add-on system has been installed and tried on two common power wheelchair models. Initial results are highly encouraging.

174 citations

Proceedings ArticleDOI
07 Sep 1997
TL;DR: The NavChair's method for automatically allocating control between the wheelchair and its operator is described, and results evaluating the performance of the NavChair's automatic adaptation mechanism are presented from an experiment in which able-bodied subjects used voice control to steer the NavChair through a navigation task requiring several transitions between operating modes.
Abstract: The NavChair Assistive Wheelchair Navigation System is being developed to reduce the cognitive and physical requirements of operating a power wheelchair. The NavChair is an adaptive shared control system, shared in that control is divided between the wheelchair and the wheelchair operator and adaptive in that how control is divided between the wheelchair and the wheelchair operator varies based on current task requirements. This paper describes the NavChair's method for automatically allocating control between the wheelchair and its operator and presents results evaluating the performance of the NavChair's automatic adaptation mechanism from an experiment in which able-bodied subjects used voice control to steer the NavChair through a navigation task requiring several transitions between operating modes.
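Shared control of this kind is often realized as a weighted blend of the operator's command and the machine's correction, with the weight depending on the active operating mode. The sketch below is an assumed illustration of that idea; the mode names echo NavChair-style behaviors, but the weights and the blending rule are not the system's actual parameters.

```python
# Illustrative adaptive shared control: blend user and machine commands
# with a mode-dependent weight. All values are assumptions.

MODE_AUTONOMY = {            # how much control the chair keeps per mode
    "general_obstacle_avoidance": 0.3,
    "door_passage": 0.7,     # tight maneuvers need more machine control
    "automatic_wall_following": 0.9,
}

def blend(mode, user_turn_rate, safe_turn_rate):
    """Weighted sharing of the turn command between user and machine."""
    k = MODE_AUTONOMY[mode]
    return (1 - k) * user_turn_rate + k * safe_turn_rate

print(blend("door_passage", user_turn_rate=0.4, safe_turn_rate=-0.2))
```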

93 citations

Proceedings ArticleDOI
13 Oct 1998
TL;DR: A concept of an intelligent wheelchair is proposed to meet the need for human-friendly wheelchairs; it can understand human intentions by observing the user's nonverbal behaviors and move in accordance with the user's wishes with minimal human operation.
Abstract: With the increase in the number of senior citizens, there is a growing demand for human-friendly wheelchairs as mobility aids. The paper proposes a concept of an intelligent wheelchair to meet this need. It can understand human intentions by observing the user's nonverbal behaviors and can move in accordance with the user's wishes with minimal human operation. The paper also describes our experimental robotic wheelchair system. Human intentions appear most clearly on the face; thus, the experimental system observes the human face and computes its direction. As a first step toward the intelligent wheelchair, we have carried out experiments on controlling the system's motion by face direction. Experimental results show our approach to be promising.
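Driving by face direction reduces, at its simplest, to mapping an estimated face angle to a speed and turn-rate pair. The sketch below is an assumed illustration of such a mapping; all thresholds and the look-down-to-stop convention are invented, not taken from the paper.

```python
# Illustrative mapping from estimated face direction to motion commands.
# Thresholds and conventions are assumptions for this sketch.

def motion_command(face_yaw_deg, face_pitch_deg, dead_zone=10.0):
    """Map an estimated face direction to (linear speed, turn rate)."""
    if face_pitch_deg < -20.0:          # user looks down: treat as stop
        return (0.0, 0.0)
    if abs(face_yaw_deg) < dead_zone:   # looking ahead: go straight
        return (0.5, 0.0)
    turn = max(-1.0, min(1.0, face_yaw_deg / 45.0))
    return (0.3, turn)                  # slow down while turning

print(motion_command(25.0, 0.0))        # gentle right turn
```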

44 citations

Journal ArticleDOI
TL;DR: This paper describes the real-time vision system (RVS-2), which achieves very high performance for low-level image processing while being implemented as a compact one-board system with low power consumption.
Abstract: This paper describes the real-time vision system (RVS-2), which achieves very high performance for low-level image processing while being implemented in a compact one-board format with low power consumption. The RVS-2 consists of an IMAP board, a video board, and a host workstation. The IMAP board consists of eight highly integrated IMAP LSIs and a dedicated control LSI (RVSC). The IMAP chip integrates 2 Mb of image memory and 64 processing elements that operate in SIMD mode. The RVSC chip performs global data operations efficiently without interaction with the host workstation, as well as providing an instruction stream to the IMAP chips. The peak performance of the RVS-2 is 30 GIPS, and most basic image processing tasks are carried out within about 0.1-0.7 ms, which is about 50-300 times faster than the video frame rate.
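As a quick sanity check of the quoted figures, assuming a ~30 frames/s video rate (about 33.3 ms per frame), task times of 0.1-0.7 ms correspond to roughly 48-333 times the frame rate, consistent with the stated 50-300x:

```python
frame_ms = 1000.0 / 30.0          # one video frame at ~30 fps
for task_ms in (0.7, 0.1):        # slowest and fastest quoted task times
    print(f"{task_ms} ms -> {frame_ms / task_ms:.0f}x frame rate")
```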

28 citations


"Interactive gesture interface for i..." refers methods in this paper

  • ...As a computing resource, it has a PC (Pentium II 266 MHz) with a real-time image processing board consisting of 256 processors developed by NEC [4]....
