
Showing papers on "Eye tracking" published in 1998


Journal ArticleDOI
TL;DR: The basic theme of the review is that eye movement data reflect moment-to-moment cognitive processes in the various tasks examined.
Abstract: Recent studies of eye movements in reading and other information processing tasks, such as music reading, typing, visual search, and scene perception, are reviewed. The major emphasis of the review is on reading as a specific example of cognitive processing. Basic topics discussed with respect to reading are (a) the characteristics of eye movements, (b) the perceptual span, (c) integration of information across saccades, (d) eye movement control, and (e) individual differences (including dyslexia). Similar topics are discussed with respect to the other tasks examined. The basic theme of the review is that eye movement data reflect moment-to-moment cognitive processes in the various tasks examined. Theoretical and practical considerations concerning the use of eye movement data are also discussed.

6,656 citations


Journal ArticleDOI
TL;DR: This paper showed that a goal-directed eye movement towards an object is disrupted by the appearance of a new irrelevant object which is known to capture attention automatically, and the eye often landed for a very short period of time near the new object.
Abstract: We make rapid eye movements to examine the world around us. Before an eye movement is made, attention is covertly shifted to the location of the object of interest. The eye typically will land at the position at which attention is directed. Here we report that a goal-directed eye movement towards an object is disrupted by the appearance of a new irrelevant object which is known to capture attention automatically. In many instances, before the eye reached the singleton target, it started moving in the direction of the new object. The eye often landed for a very short period of time (25-150 ms) near the new object. The results suggest parallel programming of two saccades: one voluntary goal-directed eye movement toward the colour singleton target and one stimulus-driven eye movement reflexively elicited by the appearance of the new object. Neuroanatomic structures responsible for parallel programming of saccades are discussed.

655 citations


Proceedings ArticleDOI
17 Jul 1998
TL;DR: This work has developed a foveated multiresolution pyramid video coder/decoder which runs in real-time on a general purpose computer and includes zero-tree coding.
Abstract: Foveated imaging exploits the fact that the spatial resolution of the human visual system decreases dramatically away from the point of gaze. Because of this fact, large bandwidth savings are obtained by matching the resolution of the transmitted image to the fall-off in resolution of the human visual system. We have developed a foveated multiresolution pyramid (FMP) video coder/decoder which runs in real-time on a general purpose computer (i.e., a Pentium with the Windows 95/NT OS). The current system uses a foveated multiresolution pyramid to code each image into 5 or 6 regions of varying resolution. The user-controlled foveation point is obtained from a pointing device (e.g., a mouse or an eye tracker). Spatial edge artifacts between the regions created by the foveation are eliminated by raised-cosine blending across levels of the pyramid, and by "foveation point interpolation" within levels of the pyramid. Each level of the pyramid is then motion compensated, multiresolution pyramid coded, and thresholded/quantized based upon human contrast sensitivity as a function of spatial frequency and retinal eccentricity. The final lossless coding includes zero-tree coding. Optimal use of foveated imaging requires eye tracking; however, there are many useful applications which do not require eye tracking. Key words: foveation, foveated imaging, multiresolution pyramid, video, motion compensation, zero-tree coding, human vision, eye tracking, video compression
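
The core mechanism here, coding each image into regions of varying resolution around the foveation point, can be illustrated with a short sketch. The snippet below maps each pixel to a pyramid level based on its eccentricity from the gaze point; the acuity falloff model, the e2 constant, and the pixels-per-degree figure are illustrative assumptions, not the parameters of the authors' coder.

```python
# A minimal sketch of eccentricity-based resolution assignment for
# foveated coding. Constants are illustrative, not the paper's values.
import numpy as np

def foveation_levels(height, width, gaze_xy, n_levels=6, ppd=30.0):
    """Map each pixel to a pyramid level: 0 = full resolution at the fovea."""
    ys, xs = np.mgrid[0:height, 0:width]
    # Eccentricity in degrees, assuming `ppd` pixels per degree of visual angle.
    ecc_deg = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1]) / ppd
    # Relative acuity falls roughly as e2 / (e2 + eccentricity); e2 is the
    # eccentricity at which resolution halves (illustrative value).
    e2 = 2.3
    rel_acuity = e2 / (e2 + ecc_deg)
    # Each pyramid level halves resolution, so the needed level is log2(1/acuity).
    levels = np.log2(1.0 / rel_acuity)
    return np.clip(np.round(levels), 0, n_levels - 1).astype(int)

level_map = foveation_levels(480, 640, gaze_xy=(320, 240))
print(np.bincount(level_map.ravel()))  # pixels per resolution region
```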

400 citations


Book ChapterDOI
01 Jan 1998
TL;DR: In this article, eye movement behavior during scene viewing is divided into two relatively discrete temporal phases: fixations, or periods of time when the point of regard is relatively still, and saccades, or periods of time when the eyes are rotating at a relatively rapid rate to reorient the point of regard from one spatial position to another.
Abstract: Publisher Summary Vision is a dynamic process in which representations are built up over time from multiple eye fixations. The study of eye movement patterns during scene viewing contributes to an understanding of how information in the visual environment is dynamically acquired and represented. The interaction among vision, cognition, and eye movement control can be seen as a scientifically tractable testing ground for theories of the interaction between input, central, and output systems. The chapter focuses on static scenes. Eye movement behavior during scene viewing is divided into two relatively discrete temporal phases: fixations, or periods of time when the point of regard is relatively still, and saccades, or periods of time when the eyes are rotating at a relatively rapid rate to reorient the point of regard from one spatial position to another. Useful pattern information is acquired during the fixations, with less useful pattern information derived during the saccades due to a combination of visual masking and central suppression. Of central interest is exactly where the fixation position tends to be centered and how long it tends to remain centered at a particular location in a scene.
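
The fixation/saccade division described above is commonly operationalized with a velocity threshold. The sketch below labels gaze samples by instantaneous speed (an I-VT-style segmentation); the sampling rate, threshold, and pixels-per-degree conversion are assumed values, not taken from the chapter.

```python
# A minimal sketch of velocity-threshold (I-VT) segmentation of gaze
# samples into fixations and saccades. Parameters are assumptions.
import numpy as np

def segment_ivt(x, y, hz=250.0, sacc_thresh_deg_s=30.0, px_per_deg=35.0):
    """Label each gaze sample 'fix' or 'sacc' by instantaneous velocity."""
    vx = np.gradient(x) * hz / px_per_deg   # deg/s
    vy = np.gradient(y) * hz / px_per_deg
    speed = np.hypot(vx, vy)
    return np.where(speed > sacc_thresh_deg_s, "sacc", "fix")

t = np.linspace(0, 1, 250)
x = np.where(t < 0.5, 100.0, 400.0) + np.random.randn(250)  # one step = one saccade
y = np.full(250, 200.0) + np.random.randn(250)
print(segment_ivt(x, y)[120:130])  # 'sacc' labels appear around the step
```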

385 citations


Journal ArticleDOI
TL;DR: X Vision, as discussed by the authors, is a programming environment for real-time vision which provides high performance on standard workstations outfitted with a simple digitizer. It consists of a small set of image-level tracking primitives and a framework for combining them to form complex tracking systems.

328 citations


Patent
20 Feb 1998
TL;DR: In this article, a system for eye-gaze direction detection is disclosed that uses an infrared light emitting diode mounted coaxially with the optical axis and in front of the imaging lens of an infrared sensitive video camera for remotely recording images of the eye of the computer operator.
Abstract: A system for eye-gaze direction detection is disclosed that uses an infrared light emitting diode mounted coaxially with the optical axis and in front of the imaging lens of an infrared sensitive video camera for remotely recording images of the eye of the computer operator. The infrared light enters the eye and is absorbed and then re-emitted by the retina, thereby causing a "bright eye effect" that makes the pupil brighter than the rest of the eye. It also gives rise to an even brighter small glint that is formed on the surface of the cornea. The computer includes software and hardware that acquires a video image, digitizes it into a matrix of pixels, and then analyzes the matrix to identify the location of the pupil's center relative to the glint's center. Using this information, the software calibrates the system to provide a high degree of accuracy in determining the user's point of regard. When coupled with a computer screen and a graphical user interface, the system may place the cursor at the user's point of regard and then perform the various mouse clicking actions at the location on the screen where the user fixates. This gives the user complete control of the computer using only the eyes. The technology is not limited to determining gaze position on a computer display; the system may determine point of regard on any surface, such as the light box radiologists use to study x-rays.
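
The pupil-center-minus-glint-center measurement and the calibration step the patent describes can be sketched as a regression from difference vectors to screen coordinates. The second-order polynomial map below is a common choice in this family of trackers and is an assumption here, not the patent's specified method.

```python
# A minimal sketch of pupil-glint gaze calibration: fit a polynomial map
# from pupil-minus-glint vectors to screen positions. The polynomial form
# is a common convention, not necessarily the patent's exact mapping.
import numpy as np

def fit_calibration(pg_vectors, screen_points):
    """Least-squares fit from pupil-glint vectors to screen positions."""
    dx, dy = pg_vectors[:, 0], pg_vectors[:, 1]
    A = np.column_stack([np.ones_like(dx), dx, dy, dx * dy, dx**2, dy**2])
    coeffs, *_ = np.linalg.lstsq(A, screen_points, rcond=None)
    return coeffs

def gaze_point(coeffs, pg_vector):
    dx, dy = pg_vector
    a = np.array([1.0, dx, dy, dx * dy, dx**2, dy**2])
    return a @ coeffs

# Synthetic nine-point calibration data (stand-ins for measured vectors
# observed while the user fixates known on-screen targets).
pg = np.random.randn(9, 2)
targets = np.random.rand(9, 2) * [1024, 768]
c = fit_calibration(pg, targets)
print(gaze_point(c, pg[0]), targets[0])
```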

191 citations


Journal ArticleDOI
TL;DR: Despite their rapid, reflex nature, all three mechanisms rely on cortical processing and evidence from monkeys supports the hypothesis that all are mediated by the medial superior temporal (MST) area of cortex.
Abstract: Primates have several reflexes that generate eye movements to compensate for bodily movements that would otherwise disturb their gaze and undermine their ability to process visual information. Two vestibulo-ocular reflexes compensate selectively for rotational and translational disturbances of the head, and each has visual backups that operate as negative feedback tracking mechanisms to deal with any residual disturbances of gaze. Of particular interest here are three recently discovered visual tracking mechanisms that specifically address translational disturbances and operate in machine-like fashion with ultra-short latencies (< 60 ms in monkeys, < 85 ms in humans). These visual reflexes deal with motions in all three dimensions and operate as automatic servos, using preattentive parallel processing to provide signals that initiate eye movements before the observer is even aware that there has been a disturbance. This processing is accomplished by visual filters each tuned to a different feature of the binocular images located in the immediate vicinity of the plane of fixation. Two of the reflexes use binocular stereo cues and the third is tuned to particular patterns of optic flow associated with the observer's forward motion. Some stereoanomalous subjects show tracking deficits that can be attributed to a lack of just one subtype of cortical cell encoding motion in one particular direction in a narrow depth plane centred on fixation. Despite their rapid, reflex nature, all three mechanisms rely on cortical processing and evidence from monkeys supports the hypothesis that all are mediated by the medial superior temporal (MST) area of cortex. Remarkably, MST seems to represent the first stage in cortical motion processing at which the visual error signals driving each of the three reflexes are fully elaborated at the level of individual cells.

180 citations


Journal ArticleDOI
TL;DR: A pupil detection technique using two light sources and the image difference method is proposed and a method for eliminating the images of the light sources reflected in the glass lens is proposed for users wearing eye glasses.
Abstract: Recently, some video-based eye-gaze detection methods used in eye-slaved support systems for the severely disabled have been studied. In these methods, infrared light is directed at the eye, two feature areas (the corneal reflection and the pupil) are detected in the image obtained from a video camera, and the eye-gaze direction is then determined from the relative positions of the two. However, there were problems concerning stable pupil detection under various room light conditions. In this paper, methods for precisely detecting the two feature areas are described. First, a pupil detection technique using two light sources and the image difference method is proposed. Second, for users wearing eye glasses, a method for eliminating the images of the light sources reflected in the glass lens is proposed. The effectiveness of these proposed methods is demonstrated by using an imaging board. Finally, the feasibility of implementing hardware for the proposed methods in real time is discussed.
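
The proposed image-difference method lends itself to a compact sketch: subtract the frame lit by the off-axis source (dark pupil) from the frame lit by the near-axis source (bright pupil), and the pupil remains as the dominant bright blob. The threshold and centroid step below are illustrative simplifications.

```python
# A minimal sketch of two-source image-difference pupil detection.
# Threshold and centroid extraction are illustrative simplifications.
import numpy as np

def detect_pupil(bright_frame, dark_frame, thresh=60):
    """Return the pupil centroid from the difference of the two frames."""
    diff = bright_frame.astype(int) - dark_frame.astype(int)
    mask = diff > thresh                     # pupil survives the subtraction
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean()

bright = np.zeros((120, 160), np.uint8)
dark = np.zeros((120, 160), np.uint8)
bright[50:60, 70:80] = 200                   # synthetic bright-pupil region
print(detect_pupil(bright, dark))            # ~ (74.5, 54.5)
```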

172 citations


Proceedings ArticleDOI
14 Apr 1998
TL;DR: This paper proposes a system capable of tracking a face and estimating the 3-D pose and the gaze point all in a real-time video stream of the head by using a 3D model together with multiple triplet triangulation of feature positions assuming an affine projection.
Abstract: Facial pose and gaze point are fundamental to any visually directed human-machine interface. In this paper we propose a system capable of tracking a face and estimating the 3-D pose and the gaze point all in a real-time video stream of the head. This is done by using a 3-D model together with multiple triplet triangulation of feature positions assuming an affine projection. Using feature-based tracking the calculation of a 3-D eye gaze direction vector is possible even with head rotation and using a monocular camera. The system is also able to automatically initialise the feature tracking and to recover from total tracking failures which can occur when a person becomes occluded or temporarily leaves the image.

Journal ArticleDOI
TL;DR: This finding suggests that before 18 months of age, infants do not recognize the significance of eye direction for joint visual attention.
Abstract: This experiment investigates the role of eye direction in infant joint visual attention. Sixty-three infants aged from 8 to 19 months participated in a training study in which they were shown adult eye turns in association with the appearance of an interesting sight to one or the other side. It was only at 18–19 months of age that infants showed a reliable ability to use the adult eye turns to predict the side of appearance of the target. This finding suggests that before 18 months of age, infants do not recognize the significance of eye direction for joint visual attention.

Patent
31 Mar 1998
TL;DR: In this article, a computer-driven system aids operator positioning of a cursor by integrating eye gaze and manual operator input, thus reducing pointing time and operator fatigue, and a gaze tracking apparatus monitors operator eye orientation while the operator views a video screen.
Abstract: A computer-driven system aids operator positioning of a cursor by integrating eye gaze and manual operator input, thus reducing pointing time and operator fatigue. A gaze tracking apparatus monitors operator eye orientation while the operator views a video screen. Concurrently, the computer monitors an input device, such as a mouse, for mechanical activation by the operator. According to the operator's eye orientation, the computer calculates the operator's gaze position. Also computed is a gaze area, comprising a sub-region of the video screen that includes the gaze position. This region, for example, may be a circle of sufficient radius to include the point of actual gaze with a certain likelihood. When the computer detects mechanical activation of the operator input device, it determines an initial cursor display position within the current gaze area. This position may be a predetermined location with respect to the gaze area, such as a point on the bottom of the gaze area periphery. A different approach uses the initial mechanical activation of the input device to determine the direction of motion, and sets the initial display position on the opposite side of the gaze area from this motion so that continued movement of the input device brings the cursor to the gaze position in a seamless transition between gaze and manual input. After displaying the cursor on the video screen at the initial display position, the cursor is thereafter positioned manually according to the operator's use of the input device, without regard to gaze.
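
The second placement strategy described above, setting the initial cursor position on the side of the gaze area opposite the initial motion, reduces to a small geometric computation. The sketch below is a minimal illustration; the function and variable names are hypothetical, not from the patent.

```python
# A minimal sketch of gaze-assisted cursor placement: put the cursor on
# the gaze-area rim opposite the input device's initial motion, so that
# continued motion carries it across the gaze position.
import numpy as np

def initial_cursor_pos(gaze_xy, gaze_radius, mouse_motion_xy):
    """Place the cursor on the gaze-area rim, opposite the motion direction."""
    motion = np.asarray(mouse_motion_xy, float)
    norm = np.linalg.norm(motion)
    if norm == 0:                      # no motion yet: default to bottom rim
        offset = np.array([0.0, gaze_radius])
    else:
        offset = -motion / norm * gaze_radius
    return np.asarray(gaze_xy, float) + offset

# Gaze near (500, 400), user starts moving the mouse up and to the right:
print(initial_cursor_pos((500, 400), 60, (4, -3)))  # rim point down-left of gaze
```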

Patent
05 Aug 1998
TL;DR: In this paper, a display apparatus includes an image source, an eye position detector, and a combiner that are aligned to a user's eye to identify the pupil position, and if the display becomes misaligned with respect to the pupil, a physical positioning mechanism adjusts the relative positions of the image source and the beam combiner.
Abstract: A display apparatus includes an image source, an eye position detector, and a combiner, that are aligned to a user's eye. The eye position detector monitors light reflected from the user's eye to identify the pupil position. If light from the image source becomes misaligned with respect to the pupil, a physical positioning mechanism adjusts the relative positions of the image source and the beam combiner so that light from the image source is translated relative to the pupil, thereby realigning the display to the pupil. In one embodiment, the positioner is a piezoelectric positioner and in other embodiments, the positioner is a servomechanism or a shape memory alloy.

Proceedings ArticleDOI
01 Jan 1998
TL;DR: By analyzing the scan paths of the eye, it is found that menus are read in sequential sweeps, which may explain why the best models produced by previous research are hybrid models that combine systematic reading behavior with random reading behavior.
Abstract: In modern graphical user interfaces pull-down menus are one of the most frequently used components. Yet even after years of research there is no clear evidence of how users carry out the visual search process in pull-down menus. Several models have been proposed for predicting selection times. However, most observations are based only on execution times and therefore cannot explain where the time is spent. The few models that are based on eye movement research are conflicting. In this study we present an experiment in which eye movement data were gathered during a menu usage task. By analyzing the scan paths of the eye, we found that menus are read in sequential sweeps. This may explain why the best models produced by previous research are hybrid models that combine systematic reading behavior with random reading behavior.

Journal ArticleDOI
TL;DR: A striking temporal coupling was found between completion of the primary eye saccade and time to peak acceleration for the limb and those findings support a 2-component model of limb control.
Abstract: Temporal and spatial coordination of both point of gaze (PG) and hand kinematics in a speeded aiming task toward an eccentrically positioned visual target were examined with the Optotrak 3D movement analysis system in tandem with the ASL head-mounted eye tracker. Subjects (N = 10) moved eyes, head, hand, and trunk freely. On the majority of trials, the PG pattern was a large initial saccade that undershot the target slightly, then 1 or more smaller corrective saccades to reach the target. The hand exhibited a similar pattern of first undershooting the target and then making small corrective movements. Previously (W. F. Helsen, J. L. Starkes, & M. J. Buekers, 1997), the ratio of PG and total hand response time (50%) was found to be an invariant feature of the movement. In line with those results, a striking temporal coupling was found between completion of the primary eye saccade and time to peak acceleration for the limb. Spatially, peak hand velocity coincided with completion of 50% of total movement distance. Those findings support a 2-component model of limb control.
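
The two landmarks whose coupling is reported, completion of the primary saccade and the hand's time to peak acceleration, can be extracted from sampled trajectories by numerical differentiation. The sketch below does this on synthetic profiles; the sampling rate, thresholds, and the minimum-jerk-style reach are assumptions for illustration, not the study's recordings.

```python
# A minimal sketch of extracting the two kinematic landmarks on synthetic
# data: primary-saccade end and time of peak hand acceleration.
import numpy as np

hz = 200.0
t = np.arange(0, 0.6, 1 / hz)

# Synthetic hand position: smooth (minimum-jerk-like) reach of 30 cm.
s = np.clip(t / 0.5, 0, 1)
hand = 30.0 * (10 * s**3 - 15 * s**4 + 6 * s**5)

vel = np.gradient(hand, 1 / hz)
acc = np.gradient(vel, 1 / hz)
t_peak_acc = t[np.argmax(acc)]

# Synthetic gaze: the primary saccade ends when gaze speed falls below threshold.
gaze = np.where(t < 0.12, 500.0 * t, 60.0)
gaze_speed = np.abs(np.gradient(gaze, 1 / hz))
sacc_end = t[np.where(gaze_speed > 30.0)[0][-1]]

print(f"saccade end {sacc_end:.3f}s vs peak hand acceleration {t_peak_acc:.3f}s")
```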

Journal ArticleDOI
TL;DR: 2- and 3-year-olds are capable of using eye gaze alone to infer another's desire, suggesting that the acquisition of the ability to use attentional cues to infer another's mental state may involve both an association process and a differentiation process.
Abstract: Five experiments examined children's use of eye gaze information for "mind-reading" purposes, specifically, for inferring another person's desire. When presented with static displays in the first 3 experiments, only by 4 years of age did children use another person's eye direction to infer desires, although younger children could identify the person's focus of attention. Further, 3-year-olds were capable of inferring desire from other nonverbal cues, such as pointing (Experiment 3). When eye gaze was presented dynamically with several other scaffolding cues (Experiment 4), 2- and 3-year-olds successfully used eye gaze for desire inference. Scaffolding cues were removed in Experiment 5, and 2- and 3-year-olds still performed above chance in using eye gaze. Results suggest that 2-year-olds are capable of using eye gaze alone to infer another's desire. The authors propose that the acquisition of the ability to use attentional cues to infer another's mental state may involve both an association process and a differentiation process.

Journal ArticleDOI
TL;DR: The pattern of low-gain pursuit, impaired pursuit initiation, and intact processing of motion information for catch-up saccades but not pursuit eye movements was consistent in schizophrenic patients tested at five time points over a 2-year follow-up period. This pattern implicates the frontal eye fields or their efferent or afferent pathways in the pathophysiology of eye tracking abnormalities in schizophrenia.

ReportDOI
01 Jul 1998
TL;DR: This paper presents a real-time implementation of an eye finding algorithm for a foveated active vision system; the system finds eyes in 94% of a set of behavioral trials, suggesting that alternate means of evaluating behavioral systems are necessary.
Abstract: Eye finding is the first step toward building a machine that can recognize social cues, like eye contact and gaze direction, in a natural context. In this paper, we present a real-time implementation of an eye finding algorithm for a foveated active vision system. The system uses a motion-based prefilter to identify potential face locations. These locations are analyzed for faces with a template-based algorithm developed by Sinha (1996). Detected faces are tracked in real time, and the active vision system saccades to the face using a learned sensorimotor mapping. Once gaze has been centered on the face, a high-resolution image of the eye can be captured from the foveal camera using a self-calibrated peripheral-to-foveal mapping. We also present a performance analysis of Sinha's ratio template algorithm on a standard set of static face images. Although this algorithm performs relatively poorly on static images, this result is a poor indicator of real-time performance of the behaving system. We find that our system finds eyes in 94% of a set of behavioral trials. We suggest that alternate means of evaluating behavioral systems are necessary.
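
Sinha's ratio template tests whether average luminances of coarse facial regions satisfy a set of ordinal relations. The sketch below checks a few such relations (eyes darker than forehead and cheeks) on a candidate patch; the region boxes and the particular relations are simplified illustrations, not the exact 1996 template.

```python
# A minimal sketch of a ratio-template face test: compare mean luminances
# of coarse regions against ordinal relations. Boxes and relations are
# simplified illustrations of the idea, not Sinha's full template.
import numpy as np

def region_mean(img, box):
    y0, y1, x0, x1 = box
    return img[y0:y1, x0:x1].mean()

def looks_like_face(window):
    """window: greyscale candidate patch, assumed roughly 36x36."""
    forehead = region_mean(window, (2, 10, 6, 30))
    left_eye = region_mean(window, (12, 18, 6, 14))
    right_eye = region_mean(window, (12, 18, 22, 30))
    cheeks = region_mean(window, (20, 28, 6, 30))
    relations = [
        left_eye < forehead, right_eye < forehead,   # eyes darker than brow
        left_eye < cheeks, right_eye < cheeks,       # eyes darker than cheeks
    ]
    return all(relations)

patch = np.full((36, 36), 180.0)
patch[12:18, 6:14] = 60; patch[12:18, 22:30] = 60    # dark eye regions
print(looks_like_face(patch))                        # True
```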

Proceedings ArticleDOI
01 Jan 1998
TL;DR: A holistic approach to real-time gaze tracking is investigated by means of a well-defined neural network modelling strategy combined with robust image processing algorithms; the system effectively learns the gaze direction of a human user by implicitly modelling the corresponding eye appearance.
Abstract: We investigate a holistic approach to real-time gaze tracking by means of a well-defined neural network modelling strategy combined with robust image processing algorithms. Based on captured greyscale eye images, the system effectively learns the gaze direction of a human user by implicitly modelling the corresponding eye appearance: the relative positions of the pupil, cornea, and light reflection inside the eye socket. In operation, the gaze tracker provides a fast, cheap, and flexible means of finding the focus of a user's attention on any of the objects displayed on a computer screen. It works in an open-plan office environment under normal illumination without using any specialised hardware. It can be easily customised to a new user and integrated into an application system that demands an intelligent non-command interface.
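
The holistic mapping from eye appearance to gaze can be sketched as a small neural network regressing from a flattened greyscale eye crop to screen coordinates. The architecture, sizes, and training loop below are assumptions for illustration, not the network described in the paper.

```python
# A minimal sketch of appearance-based gaze regression: a one-hidden-layer
# network maps a flattened eye image to (x, y) gaze on a unit screen.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 15x40 eye crops flattened to 600 inputs, targets in [0,1]^2.
X = rng.random((200, 600))
Y = rng.random((200, 2))

W1 = rng.normal(0, 0.05, (600, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.05, (16, 2));   b2 = np.zeros(2)

lr = 0.1
for _ in range(500):                       # plain batch gradient descent
    H = np.tanh(X @ W1 + b1)               # hidden layer
    P = H @ W2 + b2                        # predicted gaze (x, y)
    G = 2 * (P - Y) / len(X)               # dMSE/dP
    W2 -= lr * H.T @ G; b2 -= lr * G.sum(0)
    GH = (G @ W2.T) * (1 - H**2)           # backprop through tanh
    W1 -= lr * X.T @ GH; b1 -= lr * GH.sum(0)

print(float(((P - Y) ** 2).mean()))        # training MSE after fitting
```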

Patent
08 Oct 1998
TL;DR: In this paper, a technique is described for tracking the position of the eye at the boundary between the sclera and the iris: light reflected from that region is received, and its intensity is measured to determine the relative position of the eye.
Abstract: Systems, methods, and apparatus are provided for deriving the relative position of an eye (2) by tracking a boundary of the eye such as the limbus (10) (i.e., the interface between the white sclera (8) and the colored iris (6)). Light is directed toward the region of the eye (2) between the sclera (8) and the iris (6), and reflected light is received from that region. The intensity of the reflected light is then measured to determine the relative position of the eye (2). In some embodiments, the measured region is scanned around the boundary. In other embodiments, a light spot is scanned around a substantially annular trajectory (200) radially outward from the pupil (4). The signals corresponding to the intensity of the reflected light are then processed and measured to determine the eye's position. A flap of tissue (210) covering the boundary may be automatically detected so as to selectively measure the boundary away from the flap (210). The invention also provides for integrating the eye tracker (20) into a laser eye surgery system (16).
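
The annular-trajectory embodiment suggests a simple way to recover displacement: sample reflected intensity around a ring straddling the limbus and take the first Fourier harmonic of the profile, whose phase points toward the brighter scleral side. This framing is an illustrative simplification of the patent's processing, not its specified algorithm.

```python
# A minimal sketch of an annular limbus scan: the first harmonic of the
# ring's intensity profile indicates the eye's displacement direction.
import numpy as np

def eye_offset_from_ring(image, center, radius, n=180):
    """Estimate eye displacement direction from an annular intensity scan."""
    th = np.linspace(0, 2 * np.pi, n, endpoint=False)
    xs = (center[0] + radius * np.cos(th)).astype(int)
    ys = (center[1] + radius * np.sin(th)).astype(int)
    profile = image[ys, xs].astype(float)
    # First Fourier harmonic: phase points toward the brighter (scleral) side.
    c = (profile * np.exp(-1j * th)).sum() / n
    return np.angle(c), 2 * np.abs(c)      # direction (rad) and strength

img = np.full((200, 200), 40.0)            # dark iris everywhere
img[:, 120:] = 200.0                       # sclera showing on the right
print(eye_offset_from_ring(img, (100, 100), 30))  # angle ~0: offset to the right
```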

Patent
20 Nov 1998
TL;DR: In this article, a tracking system for locating the eyes of a viewer including an illumination means (61), a plurality of cameras (60), and a processing means (55) was presented.
Abstract: The present invention relates to a tracking system for locating the eyes of a viewer including: an illumination means (61); a plurality of cameras (60); and a processing means (55); wherein at least the viewer's eyes are illuminated by the illumination means (61) to enable capture by each camera (60), and wherein the processing means (55) is adapted to process images from each camera (60) so as to detect the position of the viewer's eyes.

Proceedings ArticleDOI
01 Jan 1998
TL;DR: An adaptive stochastic model has been developed to characterize the skin-color distributions, and these real-time visual tracking techniques have been successfully applied to many applications such as gaze tracking and lipreading.
Abstract: In this paper, we present visual tracking techniques for multimodal human computer interaction. First, we discuss techniques for tracking human faces in which human skin-color is used as a major feature. An adaptive stochastic model has been developed to characterize the skin-color distributions. Based on the maximum likelihood method, the model parameters can be adapted for different people and different lighting conditions. The feasibility of the model has been demonstrated by the development of a real-time face tracker. The system has achieved a rate of 30+ frames/second using a low-end workstation with a framegrabber and a camera. We also present a top-down approach for tracking facial features such as eyes, nostrils, and lip corners. These real-time visual tracking techniques have been successfully applied to many applications such as gaze tracking and lipreading. The face tracker has been combined with a microphone array for extracting speech signal from a specific person. The gaze tracker has been combined with a speech recognizer in a multimodal interface for controlling a panoramic image viewer.
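
The adaptive skin-color model can be sketched as a Gaussian over normalized chromaticity whose maximum-likelihood parameters are blended with estimates from pixels classified as skin in the current frame. The single-Gaussian form, the Mahalanobis threshold, and the adaptation rate below are assumptions, not the paper's exact formulation.

```python
# A minimal sketch of an adaptive Gaussian skin-color model in normalized
# (r, g) chromaticity space, with ML parameter updates from detected skin.
import numpy as np

def chromaticity(rgb):                     # drop intensity, keep color
    s = rgb.sum(axis=-1, keepdims=True).clip(min=1)
    return (rgb / s)[..., :2]              # (r, g)

def skin_mask(rg, mean, cov, thresh=4.0):  # Mahalanobis distance test
    d = rg - mean
    m2 = np.einsum("...i,ij,...j->...", d, np.linalg.inv(cov), d)
    return m2 < thresh

def adapt(mean, cov, rg_skin, rate=0.2):   # blend in new ML estimates
    new_mean = rg_skin.reshape(-1, 2).mean(0)
    new_cov = np.cov(rg_skin.reshape(-1, 2).T)
    return ((1 - rate) * mean + rate * new_mean,
            (1 - rate) * cov + rate * new_cov)

frame = np.random.randint(0, 255, (120, 160, 3)).astype(float)
mean = np.array([0.45, 0.30]); cov = np.eye(2) * 0.01
mask = skin_mask(chromaticity(frame), mean, cov)
if mask.sum() > 50:                        # only adapt with enough evidence
    mean, cov = adapt(mean, cov, chromaticity(frame)[mask])
print(mask.sum(), mean)
```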

Proceedings ArticleDOI
15 Jul 1998
TL;DR: The results show that: 1) the eye gaze interface is faster to operate than the mouse; 2) making selection by means of an eye mark takes longer than just reading menu commands; and 3) most errors are induced by lack of adequate visual feedback from the screen.
Abstract: Eye gaze interface has potential as a new human computer interaction method, evident in the numerous kinds developed so far. However, in order to make sure that such an interface is both useful and convenient in daily life for a wide range of users, we need to undertake more in-depth studies of human behavior in order to adapt the design to human factors. A prototype system for a menu based interface is described and an experiment designed to analyze users' performance is reported. The results show that: 1) the eye gaze interface is faster to operate than the mouse; 2) making a selection by means of an eye mark takes longer than just reading menu commands; and 3) most errors are induced by lack of adequate visual feedback from the screen. Based on the results, two methods to reduce selection error and to improve performance are discussed, and three factors that improve the usability of the eye gaze interface are presented.

Patent
14 Sep 1998
TL;DR: In this paper, an eye controllable screen pointer system is provided that combines eye gaze tracking and screen tracking from the point of view of the user, aided either by light emitted from a screen beacon located near the screen, or by a light pattern emitted from the screen itself as a screen beacon signal.
Abstract: An eye controllable screen pointer system is provided that combines eye gaze tracking and screen tracking from the point of view of the user. Screen tracking is performed by a screen tracking camera attached to a helmet the user is wearing. The screen tracking camera is aided either by light emitted from a screen beacon located near the screen, or by a light pattern emitted from the screen itself as a screen beacon signal. The screen tracking means provides a screen tracking signal. Eye gaze tracking is performed as is known in the art, or according to a novel way described herein. An eye gaze tracking means is attached to the helmet and provides an eye tracking signal. The information carried in the screen tracking signal and in the eye tracking signal is combined in a calculation by a processing means residing in a processor to produce a point of computed gaze on the screen. In an alternate embodiment of the invention, the input devices of both the eye gaze tracking means and the screen tracking camera are combined into a single video camera, thus resulting in a simplified apparatus. Optionally, the system further projects a mark at the point of computed gaze.
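
Combining the two signals reduces to intersecting the eye tracker's gaze ray with the screen plane recovered by the screen tracker, both expressed in helmet coordinates. The sketch below shows that ray-plane intersection; the coordinate conventions and numbers are illustrative assumptions, not the patent's calculation.

```python
# A minimal sketch of computing the point of gaze on a tracked screen:
# intersect the gaze ray with the screen plane, in shared coordinates.
import numpy as np

def gaze_on_screen(eye_origin, gaze_dir, screen_origin, screen_x, screen_y):
    """Intersect the gaze ray with the screen plane; return (u, v) on screen."""
    normal = np.cross(screen_x, screen_y)
    denom = gaze_dir @ normal
    if abs(denom) < 1e-9:
        return None                         # gaze parallel to the screen
    t = ((screen_origin - eye_origin) @ normal) / denom
    hit = eye_origin + t * gaze_dir
    rel = hit - screen_origin
    # Express the hit point in screen coordinates (fractions of each edge).
    u = rel @ screen_x / (screen_x @ screen_x)
    v = rel @ screen_y / (screen_y @ screen_y)
    return u, v

# Screen 0.4 m wide, 0.3 m tall, 0.6 m in front of the eye, facing the user:
print(gaze_on_screen(np.zeros(3), np.array([0.1, 0.05, 1.0]),
                     np.array([-0.2, -0.15, 0.6]),
                     np.array([0.4, 0, 0]), np.array([0, 0.3, 0])))
```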

Proceedings ArticleDOI
23 Jun 1998
TL;DR: A novel method for 3D head tracking in the presence of large head rotations and facial expression changes is described, formulated in terms of color image registration in the texture map of a 3D surface model.
Abstract: A novel method for 3D head tracking in the presence of large head rotations and facial expression changes is described. Tracking is formulated in terms of color image registration in the texture map of a 3D surface model. Model appearance is recursively updated via image mosaicking in the texture map as the head orientation varies. The resulting dynamic texture map provides a stabilized view of the face that can be used as input to many existing 2D techniques for face recognition, facial expressions analysis, lip reading, and eye tracking. Parameters are estimated via a robust minimization procedure; this provides robustness to occlusions, wrinkles, shadows and specular highlights. The system was tested on a variety of sequences taken with low quality, uncalibrated video cameras. Experimental results are reported.

BookDOI
01 Sep 1998
TL;DR: This volume collects work on research issues in vision and control, ranging from image-plane control of robots and extending visual servoing techniques to nonholonomic mobile robots, to real-time pose estimation and control for convoying applications.
Abstract: Contents:
Research issues in vision and control
Visual homing: Surfing on the epipoles
Role of active vision in optimizing visual feedback for robot control
An alternative approach for image-plane control of robots
Potential problems of stability and convergence in image-based and position-based visual servoing
What can be done with an uncalibrated stereo system?
Visual tracking of points as estimation on the unit sphere
Extending visual servoing techniques to nonholonomic mobile robots
A Lagrangian formulation of nonholonomic path following
Vision guided navigation for a nonholonomic mobile robot
Design, delay and performance in gaze control: Engineering and biological approaches
The separation of photometry and geometry via active vision
Vision-based system identification and state estimation
Visual tracking, active vision, and gradient flows
Visual control of grasping
Dynamic vision merging control engineering and AI methods
Real-time pose estimation and control for convoying applications
Visual routines for vehicle control
Microassembly of micro-electro-mechanical systems (MEMS) using visual servoing
The Block Island workshop: Summary report

Patent
29 Oct 1998
TL;DR: In this article, an apparatus for ocular testing is provided with means (2, 3) for displaying targets (T1, T2), means (4) for tracking eye movement, and means (5) for controlling the display of the targets on a screen.
Abstract: An apparatus for ocular testing is provided with means (2, 3) for displaying targets (T1, T2), means (4) for tracking eye movement, and means (5) for controlling the display of the targets (T1, T2) on a screen. A method comprises arranging the control means (5) to choreograph the display of the targets (T1, T2...) at different positions on the screen (2) depending on whether the eye tracking means (4) detects that an observer is looking directly at the target.

Journal ArticleDOI
01 Jul 1998
TL;DR: A hybrid method employing the two tracking schemes is developed: first, a measurement model for the compensation of head movement is formulated, and then the overall tracking scheme is implemented by cascading two Kalman filters.
Abstract: The authors previously (1995) proposed an efficient method for tracking eye movements. The proposed algorithm did not address the issue of compensating for head movements. Head movements are normally much slower than eye movements and can be compensated for using another tracking scheme for head position. In this paper, a hybrid method employing the two tracking schemes is developed. To this end, first a measurement model for the compensation of head movement is formulated, and then the overall tracking scheme is implemented by cascading two Kalman filters. The tracking of the iris movement is followed by the compensation of the head movement for each image frame. Experimental results are presented to demonstrate the accuracy and real-time applicability of the proposed approach.
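
The cascade can be sketched with two constant-velocity Kalman filters: the first smooths the head measurement, and the second tracks the iris position after the head estimate is subtracted. The state models and noise levels below are illustrative assumptions, not the paper's measurement model.

```python
# A minimal sketch of cascaded Kalman filters for head-compensated eye
# tracking, in one coordinate. Matrices and noise levels are assumptions.
import numpy as np

class KF1D:
    """Constant-velocity Kalman filter for one coordinate."""
    def __init__(self, q, r):
        self.x = np.zeros(2)                       # [position, velocity]
        self.P = np.eye(2)
        self.F = np.array([[1.0, 1.0], [0.0, 1.0]])
        self.H = np.array([[1.0, 0.0]])
        self.Q = q * np.eye(2)
        self.R = np.array([[r]])

    def step(self, z):
        self.x = self.F @ self.x                   # predict
        self.P = self.F @ self.P @ self.F.T + self.Q
        S = self.H @ self.P @ self.H.T + self.R    # update
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x[0]

head_kf = KF1D(q=1e-4, r=1.0)   # head: slow, heavily smoothed
eye_kf = KF1D(q=1e-1, r=1.0)    # eye: fast, more responsive

for frame in range(100):
    head_meas = 0.05 * frame + np.random.randn() * 0.5      # slow drift
    iris_meas = head_meas + 10 * np.sin(frame / 5) + np.random.randn() * 0.5
    head_est = head_kf.step(np.array([head_meas]))
    eye_est = eye_kf.step(np.array([iris_meas - head_est]))  # compensated
print(round(head_est, 2), round(eye_est, 2))
```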

Patent
04 Nov 1998
TL;DR: In this paper, a system for secure data entry, including a virtual keypad having a plurality of keys, a mechanism for determining to which virtual key of the keypad a user is looking, and an actuator, operable by the user, for confirming key selection.
Abstract: A system for secure data entry, includes a virtual keypad having a plurality of keys, a mechanism for determining to which virtual key of the keypad a user is looking, and an actuator, operable by the user, for confirming key selection. Another system for secure data entry, includes a virtual keypad having a plurality of keys, an eye tracker for tracking eye movement of a user, and for receiving a coded input from an eye movement of the user gazing upon at least a selected one of the keys of the virtual keypad, and an actuator for being selectively actuated by the user upon confirmation of the coded input by the user.