Journal ArticleDOI

System for assisted mobility using eye movements based on electrooculography

01 Dec 2002 - Vol. 10, Iss. 4, pp. 209-218
TL;DR: An eye-control method based on electrooculography (EOG) is used to develop a system for assisted mobility in which control is effected by eye movements within the socket; the technique could be useful in multiple applications, such as mobility and communication aids for handicapped persons.
Abstract: Describes an eye-control method based on electrooculography (EOG) to develop a system for assisted mobility. One of its most important features is its modularity, making it adaptable to the particular needs of each user according to the type and degree of handicap involved. An eye model based on the electrooculographic signal is proposed and its validity is studied. Several human-machine interfaces (HMI) based on EOG are discussed, with the study focusing on guiding and controlling a wheelchair for disabled people, where control is actually effected by eye movements within the socket. Different techniques and guidance strategies are then shown, with comments on the advantages and disadvantages of each. The system consists of a standard electric wheelchair with an on-board computer, sensors and a graphic user interface run by the computer. The same eye-control method can also be applied to handle graphical interfaces, where the eye is used as a computer mouse. Results obtained show that this control technique could be useful in multiple applications, such as mobility and communication aids for handicapped persons.
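The abstract's core idea, deriving a guidance command from the EOG signal, can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes baseline-corrected horizontal and vertical EOG amplitudes that are roughly proportional to gaze angle over a limited range, and the gain, dead zone and direction mapping below are illustrative placeholders.

```python
# Minimal sketch (not the paper's implementation): mapping a two-channel EOG
# reading to a discrete wheelchair command. Assumes the horizontal and vertical
# EOG amplitudes are roughly proportional to gaze angle over a limited range;
# the gain and dead-zone values are illustrative placeholders.

GAIN_UV_PER_DEG = 16.0   # assumed sensitivity (microvolts per degree of gaze)
DEAD_ZONE_DEG = 10.0     # ignore small eye movements around the centre


def eog_to_angles(h_uv: float, v_uv: float) -> tuple[float, float]:
    """Convert baseline-corrected EOG amplitudes (uV) to gaze angles (deg)."""
    return h_uv / GAIN_UV_PER_DEG, v_uv / GAIN_UV_PER_DEG


def angles_to_command(h_deg: float, v_deg: float) -> str:
    """Map gaze angles to one of four guidance commands, or 'stop' when the
    gaze stays near the centre of the socket."""
    if abs(h_deg) < DEAD_ZONE_DEG and abs(v_deg) < DEAD_ZONE_DEG:
        return "stop"
    if abs(h_deg) >= abs(v_deg):
        return "right" if h_deg > 0 else "left"
    return "forward" if v_deg > 0 else "backward"


if __name__ == "__main__":
    # Example: a rightward glance of roughly 20 degrees
    h, v = eog_to_angles(h_uv=320.0, v_uv=40.0)
    print(angles_to_command(h, v))  # -> "right"
```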


Citations
Journal ArticleDOI
TL;DR: Deep learning has advanced rapidly since the early 2000s and now demonstrates state-of-the-art performance in various fields, including bioinformatics, where its application to gain insight from data has been emphasized in both academia and industry.
Abstract: In the era of big data, transformation of biomedical big data into valuable knowledge has been one of the most important challenges in bioinformatics. Deep learning has advanced rapidly since the early 2000s and now demonstrates state-of-the-art performance in various fields. Accordingly, application of deep learning in bioinformatics to gain insight from data has been emphasized in both academia and industry. Here, we review deep learning in bioinformatics, presenting examples of current research. To provide a useful and comprehensive perspective, we categorize research both by the bioinformatics domain (i.e. omics, biomedical imaging, biomedical signal processing) and deep learning architecture (i.e. deep neural networks, convolutional neural networks, recurrent neural networks, emergent architectures) and present brief descriptions of each study. Additionally, we discuss theoretical and practical issues of deep learning in bioinformatics and suggest future research directions. We believe that this review will provide valuable insights and serve as a starting point for researchers to apply deep learning approaches in their bioinformatics studies.

1,010 citations

Posted Content
TL;DR: This review provides valuable insights, serves as a starting point for researchers to apply deep learning approaches in their bioinformatics studies, and suggests future research directions.
Abstract: In the era of big data, transformation of biomedical big data into valuable knowledge has been one of the most important challenges in bioinformatics. Deep learning has advanced rapidly since the early 2000s and now demonstrates state-of-the-art performance in various fields. Accordingly, application of deep learning in bioinformatics to gain insight from data has been emphasized in both academia and industry. Here, we review deep learning in bioinformatics, presenting examples of current research. To provide a useful and comprehensive perspective, we categorize research both by the bioinformatics domain (i.e., omics, biomedical imaging, biomedical signal processing) and deep learning architecture (i.e., deep neural networks, convolutional neural networks, recurrent neural networks, emergent architectures) and present brief descriptions of each study. Additionally, we discuss theoretical and practical issues of deep learning in bioinformatics and suggest future research directions. We believe that this review will provide valuable insights and serve as a starting point for researchers to apply deep learning approaches in their bioinformatics studies.

704 citations


Cites methods from "System for assisted mobility using ..."

  • ...Various data from EEG [152], electrocorticography (ECoG) [153], electrocardiography (ECG) [154], electromyography (EMG) [155], and electrooculography (EOG) [156, 157] have been used, with most studies focusing on EEG activity so far....

    [...]


Journal ArticleDOI
TL;DR: This review paper surveys the deep learning algorithms applied in healthcare to date and anticipates their application to further healthcare areas to improve the quality of diagnosis.

703 citations


Additional excerpts

  • ...It is useful for ophthalmological diagnosis [90]....

    [...]

Journal ArticleDOI
TL;DR: Bilateral flickering stimuli were presented simultaneously and continuously over entire trial blocks, so that externally evoked alpha desynchronization was equated in the precue baseline and postcue intervals; the results suggest that alpha synchronization reflects an active attentional suppression mechanism rather than a passive one reflecting "idling" circuits.
Abstract: Human electrophysiological (EEG) studies have demonstrated the involvement of alpha band (8- to 14-Hz) oscillations in the anticipatory biasing of attention. In the context of visual spatial attent...

667 citations

Journal ArticleDOI
TL;DR: The work demonstrates the promise of eye-based activity recognition (EAR) and opens up discussion on the wider applicability of EAR to other activities that are difficult, or even impossible, to detect using common sensing modalities.
Abstract: In this work, we investigate eye movement analysis as a new sensing modality for activity recognition. Eye movement data were recorded using an electrooculography (EOG) system. We first describe and evaluate algorithms for detecting three eye movement characteristics from EOG signals-saccades, fixations, and blinks-and propose a method for assessing repetitive patterns of eye movements. We then devise 90 different features based on these characteristics and select a subset of them using minimum redundancy maximum relevance (mRMR) feature selection. We validate the method using an eight participant study in an office environment using an example set of five activity classes: copying a text, reading a printed paper, taking handwritten notes, watching a video, and browsing the Web. We also include periods with no specific activity (the NULL class). Using a support vector machine (SVM) classifier and person-independent (leave-one-person-out) training, we obtain an average precision of 76.1 percent and recall of 70.5 percent over all classes and participants. The work demonstrates the promise of eye-based activity recognition (EAR) and opens up discussion on the wider applicability of EAR to other activities that are difficult, or even impossible, to detect using common sensing modalities.
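A rough sketch of the evaluation pipeline this abstract describes (90 precomputed eye-movement features, feature selection, an SVM classifier, and leave-one-person-out validation) could look as follows. The feature extraction itself is omitted, and scikit-learn's mutual-information ranking is used here as a stand-in for the mRMR selection used in the paper; `features`, `labels` and `persons` are assumed NumPy arrays.

```python
# Sketch of a person-independent evaluation, under assumptions: `features` is
# an (n_windows, 90) array of precomputed eye-movement features, `labels` the
# activity class per window, `persons` the participant id per window.
# Mutual-information feature ranking stands in for the paper's mRMR method.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


def person_independent_eval(features, labels, persons, k_features=30):
    """Leave-one-person-out evaluation of an SVM activity classifier."""
    precisions, recalls = [], []
    for train_idx, test_idx in LeaveOneGroupOut().split(features, labels, persons):
        clf = make_pipeline(
            StandardScaler(),
            SelectKBest(mutual_info_classif, k=k_features),
            SVC(kernel="rbf"),
        )
        clf.fit(features[train_idx], labels[train_idx])
        pred = clf.predict(features[test_idx])
        precisions.append(precision_score(labels[test_idx], pred,
                                          average="macro", zero_division=0))
        recalls.append(recall_score(labels[test_idx], pred,
                                    average="macro", zero_division=0))
    # Average precision and recall over held-out participants
    return np.mean(precisions), np.mean(recalls)
```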

619 citations

References
01 Jan 2003
TL;DR: This chapter describes research at NRL on developing interaction techniques that incorporate eye movements into the user-computer dialogue in a convenient and natural way, and considers eye movement-based interaction as an exemplar of a new, more general class of non-command-based user- computer interaction.
Abstract: User-computer dialogues are typically one-sided, with the bandwidth from computer to user far greater than that from user to computer. The movement of a user’s eyes can provide a convenient, natural, and high-bandwidth source of additional user input, to help redress this imbalance. We therefore investigate the introduction of eye movements as a computer input medium. Our emphasis is on the study of interaction techniques that incorporate eye movements into the user-computer dialogue in a convenient and natural way. This chapter describes research at NRL on developing such interaction techniques and the broader issues raised by non-command-based interaction styles. It discusses some of the human factors and technical considerations that arise in trying to use eye movements as an input medium, describes our approach and the first eye movement-based interaction techniques that we have devised and implemented in our laboratory, reports our experiences and observations on them, and considers eye movement-based interaction as an exemplar of a new, more general class of non-command-based user-computer interaction.

272 citations


"System for assisted mobility using ..." refers methods in this paper

  • ...Among these new methods it is necessary to mention voice recognition [4] or visual information [5]....

    [...]

Journal ArticleDOI
TL;DR: The stability of gaze in three dimensions (horizontal, vertical and torsional) was measured with a new type of scleral search coil in eight emmetropic observers and found to agree well with earlier results for the horizontal plane.

261 citations


"System for assisted mobility using ..." refers background in this paper

  • ...Nevertheless, this type of guidance requires great precision from the user, and we must keep in mind that small shifts between the interface and head position, or head movements, cause a displacement of the cursor on the screen that is very difficult to avoid and correct....

    [...]

Journal ArticleDOI
01 Mar 2000
TL;DR: Describes the motivation for and the design considerations of a low-cost head-operated joystick that uses infrared light-emitting diodes and photodetectors to determine head position, which is subsequently converted into signals that emulate a Microsoft mouse.
Abstract: Describes the motivation for and the design considerations of a low-cost head-operated joystick. The paper briefly summarizes the requirements of head-operated mouse pointer control for people with disabilities before discussing a set of technological approaches that can be used to satisfy these requirements. The paper focuses on the design of a head-operated joystick that uses infrared light-emitting diodes (LEDs) and photodetectors to determine head position, which is subsequently converted into signals that emulate a Microsoft mouse. There are two significant findings. The first is that, while nonideal device characteristics might appear to make the joystick difficult to use, users naturally compensate for nonlinearities in a transparent manner because of visual feedback of the mouse pointer position. The second finding, from relatively informal, independent trials, indicates that disabled users prefer a head-operated device that has the characteristics of a joystick (a relative pointing device) to those of a mouse (an absolute pointing device).
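The joystick-style (relative) pointing behaviour that the abstract contrasts with a mouse (absolute pointing) can be sketched as below. The normalisation, gain and dead zone are assumptions rather than the paper's values: holding the head off-centre keeps the pointer moving, and returning to centre stops it, with the user's visual feedback compensating for any nonlinearity, as the paper reports.

```python
# Sketch of joystick-style (relative) pointer control. The head deflection is
# assumed to be already normalised to -1..1 per axis from the sensor readout;
# the gain and dead-zone values are illustrative placeholders.

DEAD_ZONE = 0.05   # normalised deflection treated as "centred"
GAIN = 40.0        # pixels of cursor motion per update at full deflection


def head_to_cursor_delta(x: float, y: float) -> tuple[int, int]:
    """Convert a normalised head deflection into a relative cursor movement:
    an off-centre head keeps the pointer moving, centring it stops motion."""
    dx = 0 if abs(x) < DEAD_ZONE else int(GAIN * x)
    dy = 0 if abs(y) < DEAD_ZONE else int(GAIN * y)
    return dx, dy


if __name__ == "__main__":
    # Example: head tilted halfway to the right and slightly down
    print(head_to_cursor_delta(0.5, 0.1))  # -> (20, 4)
```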

190 citations


"System for assisted mobility using ..." refers background in this paper

  • ...In former works [10], we have studied these strategies; nevertheless, the main features of each of them are commented on here....

    [...]

01 Jan 2003
TL;DR: The robotic wheelchair system described in this paper, Wheelesley, automates many common navigational tasks for the user and consists of a standard electric wheelchair with an on-board computer, sensors and a graphical user interface running on a mounted notebook computer.
Abstract: An assistive robotic wheelchair system should allow its user to travel more efficiently and with greater ease. While standard electric wheelchairs can be used by people with limited upper body mobility, some tasks such as door opening and moving around corners require fine joystick control that these users often find difficult. This paper reports initial work in the development of a semiautonomous robotic wheelchair system. The robotic wheelchair system described in this paper, Wheelesley, automates many common navigational tasks for the user. The system consists of a standard electric wheelchair with an on-board computer, sensors and a graphical user interface running on a mounted notebook computer.

42 citations


"System for assisted mobility using ..." refers background in this paper

  • ...For example, if the user loses control and the system becomes unstable, the wheelchair should stop and take control of the system....

    [...]

01 Jan 2003
TL;DR: This paper studies single switch scanning as an access method for the robotic wheelchair system, Wheelesley, and compares these results to traditional control of a powered wheelchair with single switchscanning devices.
Abstract: Single switch scanning is the access method of last resort for powered wheelchairs, primarily because drift is a significant problem. To correct a drift to the left or the right, the user must stop going forward, wait for the scanning device to get to the arrow for the direction of choice, click to turn the chair, stop turning, wait to scan to forward and then click to move forward again. Robotic assisted control can improve the ease and speed of driving using single switch scanning. Under robotic control, sensors are used to correct the drift problem and to avoid obstacles. The user is only required to give commands to change direction, for example "left" at an intersection.

BACKGROUND: Powered wheelchairs can be driven with a variety of access methods. The method of first choice is a joystick. If a person is unable to drive with a joystick, a multiple switch array such as a sip-and-puff system or a head switch array can be used. If a person cannot use a multiple switch array, a single switch scanning device is used. Single switch scanning is the access method of last resort. With traditional powered wheelchairs, the need for frequent corrections to counteract drift and to move around obstacles makes driving difficult for single switch scanning users. Work on robotic wheelchairs has resulted in systems that can navigate indoor environments by taking commands from the user and carrying out the commands safely using sensors on the robot (for example, [Levine et al., 1990] and [Miller, in press]). Most of the work on robotic wheelchairs does not address the issues of access methods; the primary focus is on the navigation system. While it is important to have a safe navigation system, it is also important to consider how a person will be able to use the system. Simpson and Levine [1997] studied voice control as an access method for the NavChair system. Yanco and Gips [1997] investigated eye control as an access method. In this paper, we study single switch scanning as an access method for our robotic wheelchair system, Wheelesley, and compare these results to traditional control of a powered wheelchair with single switch scanning devices. The wheelchair system [Yanco, in press] consists of a robotic wheelchair and a user interface. To provide robotic assistance, the wheelchair uses infrared, sonar and bump sensors and an on-board processor to avoid obstacles and to keep the wheelchair centered in a hallway. The robotic wheelchair makes the necessary corrections to the current heading whenever one or more sensors indicate that an obstacle or wall is getting too close to the wheelchair. The user gives commands through the user interface, which runs on a Macintosh PowerBook. The switch is a Prentke Romich rocking lever switch, which is connected to the PowerBook using a Don Johnston Macintosh switch interface. For these experiments, the user interface consists of four large arrows and a stop button. The user interface was designed to look and function like a standard single switch scanning device. The interface scans to the forward arrow, the right arrow, the left arrow and the back arrow until the user selects a command by hitting a switch. The interface pauses at each possible selection for two seconds. Since all test subjects are able-bodied, the commands are latched. To stop driving or turning, the user hits the switch again. After the stop command is given, scanning starts again on the forward arrow.

RESEARCH QUESTION: Does robotic assistance improve driving performance compared to traditional manual control for a person using single switch scanning as an access method for a powered wheelchair?

SINGLE SWITCH ROBOTIC WHEELCHAIR CONTROL METHODS: To determine the answer, we designed an experiment to test the performance of subjects under robotic assisted control and under traditional manual control. Fourteen able-bodied subjects (7 men and 7 women), ranging in age from 18 to 43, were tested. At the beginning of a session, the subject was shown the wheelchair. Sensors that are used in robotic assisted control were pointed out and explained briefly. Safety measures, such as the power button, were discussed. Then the two driving methods were explained to the subject. After this introduction, the subject was seated in the wheelchair and the user interface was connected to the wheelchair. The single switch scanning interface was explained to the subject, who practiced using the interface first with the motors turned off. Once the subject was comfortable with the interface, the session entered a practice phase in which the subject first tried robotic assisted control and then traditional manual control. The subject practiced both methods until he expressed an understanding of each control method; subjects usually spent about two minutes trying each method. All practice was done off the test course, so that the subject was not able to learn anything that would assist him during the test phase. The course was designed to include obstacles (several couches and chairs, a fire extinguisher mounted to the wall 30 cm above the ground, a trash can, and a table) and turns to the left and to the right. A diagram of the course is given in Figure 1. The test phase consisted of four up-and-back traversals of the test course, alternating between the two control methods. Half of the subjects started with robotic assisted control and the other half started with traditional manual control. Each up-and-back traversal consists of two parts: running the course from the couch area to the hallway and then the return trip. The turn in the middle of the course is not counted as part of the run, as turning completely around in the middle of the hallway is not a normal driving occurrence. The total session time for each subject was approximately 45 minutes. Most data collection was done by the computer which was running the user interface. The researcher only recorded the number of scrapes made by the chair. At the completion of the test, the user was asked to rank traditional manual control and robotic assisted control on a scale from 1 (worst) to 10 (best).

RESULTS: There were four experimental performance measures collected by the computer: (1) the number of clicks required to navigate the course, (2) the amount of time spent scanning to get to the necessary commands, (3) the amount of time spent moving or executing the given commands, and (4) the total amount of time spent on the course (scanning time plus moving time). Results are summarized in Table 1. Data for each experimental measure was analyzed using an ANOVA test. The differences between robotic control and manual control were highly significant, with p<.0001 for all measures. On average, robotic control saved 60 clicks over manual control, which is a 71% improvement. Total time for robotic assisted control was 101 seconds shorter than manual control on average, which is a 25% improvement. The differences between the two trials were significant for clicks (p=.003) and for time spent scanning (p=.015). There was not a significant difference between trials for moving time or total time.
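The scanning behaviour described above (cycle through forward, right, left and back with a two-second dwell, latch a command on a switch press, stop on a second press, then resume scanning at the forward arrow) can be sketched as a simple loop. `switch_pressed` and `send_command` are hypothetical placeholders for the switch interface and the wheelchair command channel; the timing and debouncing are deliberately simplified.

```python
# Minimal sketch of the single-switch scanning loop described above.
# `switch_pressed()` and `send_command()` are hypothetical placeholders for
# the switch hardware and the wheelchair/robot command channel.
import itertools
import time

COMMANDS = ["forward", "right", "left", "back"]  # scan order from the paper
DWELL_S = 2.0                                    # pause at each option


def scanning_loop(switch_pressed, send_command):
    while True:
        # Scan through the options until the user presses the switch.
        for cmd in itertools.cycle(COMMANDS):
            print(f"highlight: {cmd}")
            deadline = time.time() + DWELL_S
            selected = False
            while time.time() < deadline:
                if switch_pressed():
                    selected = True
                    break
                time.sleep(0.02)
            if selected:
                break
        send_command(cmd)            # latched: the chair keeps driving/turning
        time.sleep(0.3)              # crude debounce of the selecting press
        while not switch_pressed():  # a second press stops the command
            time.sleep(0.02)
        send_command("stop")         # scanning then restarts at "forward"
```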

22 citations