
Showing papers on "Eye tracking" published in 2000


Proceedings ArticleDOI
08 Nov 2000
TL;DR: A taxonomy of fixation identification algorithms is proposed that classifies algorithms in terms of how they utilize spatial and temporal information in eye-tracking protocols; five representative algorithms are then evaluated and compared with respect to a number of qualitative characteristics.
Abstract: The process of fixation identification—separating and labeling fixations and saccades in eye-tracking protocols—is an essential part of eye-movement data analysis and can have a dramatic impact on higher-level analyses. However, algorithms for performing fixation identification are often described informally and rarely compared in a meaningful way. In this paper we propose a taxonomy of fixation identification algorithms that classifies algorithms in terms of how they utilize spatial and temporal information in eye-tracking protocols. Using this taxonomy, we describe five algorithms that are representative of different classes in the taxonomy and are based on commonly employed techniques. We then evaluate and compare these algorithms with respect to a number of qualitative characteristics. The results of these comparisons offer interesting implications for the use of the various algorithms in future work.
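
One representative class in this taxonomy is dispersion-threshold identification (I-DT), which labels a run of samples as a fixation when the points stay within a small spatial window for a minimum duration. A minimal sketch of that idea, assuming gaze samples arrive as (x, y) tuples at a fixed sampling rate; the threshold values are illustrative, not prescribed by the paper:

```python
# Minimal dispersion-threshold (I-DT) fixation identification sketch.
# I-DT is one of the representative algorithm classes in the paper's
# taxonomy, but the threshold values here are illustrative assumptions.

def dispersion(points):
    """Spread of a window of (x, y) gaze points: (max - min) in x plus y."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def idt_fixations(samples, max_dispersion=1.0, min_samples=5):
    """samples: list of (x, y) gaze points at a fixed sampling rate.
    Returns (start_index, end_index) spans labeled as fixations."""
    fixations, i = [], 0
    while i + min_samples <= len(samples):
        j = i + min_samples
        if dispersion(samples[i:j]) <= max_dispersion:
            # Grow the window while its dispersion stays under threshold.
            while j < len(samples) and dispersion(samples[i:j + 1]) <= max_dispersion:
                j += 1
            fixations.append((i, j - 1))
            i = j
        else:
            i += 1  # leading point belongs to a saccade; slide the window
    return fixations
```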

1,809 citations


Journal ArticleDOI
TL;DR: The hypothesis that gaze following is "hard-wired" in the brain, and may be localized within a circuit linking the superior temporal sulcus, amygdala and orbitofrontal cortex is discussed.

1,714 citations


Journal ArticleDOI
TL;DR: Perception of face identity was mediated more by regions in the inferior occipital and fusiform gyri, perception of eye gaze more by regions in the superior temporal sulci, and eye-gaze perception also seemed to recruit the spatial cognition system in the intraparietal sulcus to encode the direction of another's gaze and to focus attention in that direction.
Abstract: Face perception requires representation of invariant aspects that underlie identity recognition as well as representation of changeable aspects, such as eye gaze and expression, that facilitate social communication. Using functional magnetic resonance imaging (fMRI), we investigated the perception of face identity and eye gaze in the human brain. Perception of face identity was mediated more by regions in the inferior occipital and fusiform gyri, and perception of eye gaze was mediated more by regions in the superior temporal sulci. Eye-gaze perception also seemed to recruit the spatial cognition system in the intraparietal sulcus to encode the direction of another's gaze and to focus attention in that direction.

1,214 citations


Journal ArticleDOI
TL;DR: Evidence from recent neurophysiological studies is reviewed suggesting that the eyes constitute a special stimulus in at least two senses; in particular, the structure of the eyes provides us with a particularly powerful signal to the direction of another person's gaze.

838 citations


Journal ArticleDOI
TL;DR: It is argued that eye-movement data provide an excellent on-line indication of the cognitive processes underlying visual search and reading and the relationship between attention and eye movements is discussed.

757 citations


Proceedings ArticleDOI
01 Apr 2000
TL;DR: Two experiments are presented that compare an interaction technique developed for object selection, based on where a person is looking, with the most commonly used selection method, the mouse, and find that the eye gaze interaction technique is faster than selection with a mouse.
Abstract: Eye gaze interaction can provide a convenient and natural addition to user-computer dialogues. We have previously reported on our interaction techniques using eye gaze [10]. While our techniques seemed useful in demonstration, we now investigate their strengths and weaknesses in a controlled setting. In this paper, we present two experiments that compare an interaction technique we developed for object selection, based on where a person is looking, with the most commonly used selection method using a mouse. We find that our eye gaze interaction technique is faster than selection with a mouse. The results show that our algorithm, which makes use of knowledge about how the eyes behave, preserves the natural quickness of the eye. Eye gaze interaction is a reasonable addition to computer interaction and is convenient in situations where it is important to use the hands for other tasks. It is particularly beneficial for the larger screen workspaces and virtual environments of the future, and it will become increasingly practical as eye tracker technology matures.
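
The paper's technique recognizes fixations using knowledge of how the eyes behave rather than accumulating raw dwell time; as a simplified stand-in for gaze-based object selection, a dwell-time loop can illustrate the general idea (the names and threshold below are assumptions):

```python
# Simplified dwell-based gaze selection loop. The authors' technique
# recognizes fixations from knowledge of eye behavior rather than raw
# dwell time; this stand-in shows only the general selection idea, and
# the threshold value is an assumption.

DWELL_THRESHOLD_S = 0.15

def gaze_select(gaze_stream, hit_test):
    """gaze_stream yields (timestamp_s, x, y); hit_test maps (x, y) to
    an object id or None. Yields the id of each selected object."""
    current, dwell_start = None, None
    for t, x, y in gaze_stream:
        target = hit_test(x, y)
        if target != current:
            current, dwell_start = target, t      # gaze moved to a new object
        elif target is not None and t - dwell_start >= DWELL_THRESHOLD_S:
            yield target                          # dwelled long enough: select
            current, dwell_start = None, None     # re-arm for the next selection
```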

547 citations


Proceedings ArticleDOI
01 Apr 2000
TL;DR: It was found that the eye movement-based interaction was faster than pointing, especially for distant objects, however, subjects' ability to recall spatial information was weaker in the eye condition than the pointing one.
Abstract: Eye movement-based interaction offers the potential of easy, natural, and fast ways of interacting in virtual environments. However, there is little empirical evidence about the advantages or disadvantages of this approach. We developed a new interaction technique for eye movement interaction in a virtual environment and compared it to more conventional 3-D pointing. We conducted an experiment to compare performance of the two interaction types and to assess their impacts on spatial memory of subjects and to explore subjects' satisfaction with the two types of interactions. We found that the eye movement-based interaction was faster than pointing, especially for distant objects. However, subjects' ability to recall spatial information was weaker in the eye condition than the pointing one. Subjects reported equal satisfaction with both types of interactions, despite the technology limitations of current eye tracking equipment.

290 citations


Patent
31 Mar 2000
TL;DR: In this article, a low-cost non-imaging eye tracking system is optimized toward applications requiring computer cursor control by localizing the gaze direction as an operator looks through a fixed frame to provide pointing information to a computer.
Abstract: A device for measuring eye movements is suitable for stand-alone, portable use and for integration into a head/helmet mounted display. The low-cost non-imaging eye tracking system is optimized toward applications requiring computer cursor control by localizing the gaze direction as an operator looks through a fixed frame to provide pointing information to a computer.

276 citations


Journal ArticleDOI
TL;DR: Infants were faster to make saccades to peripheral targets cued by the direction of eye gaze of a central face, but control experiments in which the pupils of the stimulus face stayed still while the face itself was displaced suggest that directed motion, rather than eye gaze per se, drives the cueing effect.
Abstract: Three experiments were carried out with 4- to 5-month-old infants using the eye gaze cueing paradigm of Hood, Willen, and Driver (1998). Experiment 1 replicated the previous finding that infants are faster to make saccades to peripheral targets cued by the direction of eye gaze of a central face. However, the results of Experiment 2, in which the pupils of the stimulus face stayed still while the face was displaced to the same extent as the pupils in Experiment 1, revealed that under these conditions infants were cued by direction of motion rather than by eye gaze. This conclusion was confirmed by the results of Experiment 3, in which the cueing effect was not obtained under conditions similar to those in Experiment 1, except that there was no apparent movement of the pupils. Taken together, the last two experiments suggest that directed motion may be an important contributor to the cueing effects observed following shifts of eye gaze.

254 citations


Proceedings ArticleDOI
26 Mar 2000
TL;DR: This work develops a dual-state model-based system for tracking eye features that uses convergent tracking techniques and shows how it can be used to detect whether the eyes are open or closed, and to recover the parameters of the eye model.
Abstract: Most eye trackers work well for open eyes. However, blinking is a physiological necessity for humans. Moreover, for applications such as facial expression analysis and driver awareness systems, we need to do more than track the locations of the person's eyes; we must obtain their detailed description. We need to recover the state of the eyes (i.e., whether they are open or closed), and the parameters of an eye model (e.g., the location and radius of the iris, and the corners and height of the eye opening). We develop a dual-state model-based system for tracking eye features that uses convergent tracking techniques and show how it can be used to detect whether the eyes are open or closed, and to recover the parameters of the eye model. Processing speed on a Pentium II 400 MHz PC is approximately 3 frames/second. In experimental tests on 500 image sequences from child and adult subjects with varying colors of skin and eye, accurate tracking results are obtained in 98% of image sequences.
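
The dual-state model itself is not fully specified in the abstract; as an illustration of just the open/closed decision, a crude intensity-based check over a cropped eye region might look like the following (both thresholds are assumptions, and this is not the authors' model):

```python
import numpy as np

# Crude open/closed decision for a cropped eye region: an open eye
# contains a dark iris/pupil cluster, a closed eye mostly skin. This is
# an illustrative stand-in, not the paper's dual-state model, and both
# thresholds are assumptions.

def eye_is_open(eye_region, dark_thresh=60, min_dark_fraction=0.05):
    """eye_region: 2-D numpy array of grayscale values in 0..255."""
    dark_fraction = float(np.mean(eye_region < dark_thresh))
    return dark_fraction >= min_dark_fraction
```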

181 citations


01 Jul 2000
TL;DR: This paper presents behavior models of eye gaze patterns in the context of real-time verbal communication and applies these eye gaze models to simulate eye movements in a computer-generated avatar in a number of task settings.
Abstract: As we begin to create synthetic characters (avatars) for computer users, it is important to pay attention to both the look and the behavior of the avatar's eyes. In this paper we present behavior models of eye gaze patterns in the context of real-time verbal communication. We apply these eye gaze models to simulate eye movements in a computer-generated avatar in a number of task settings. We also report the results of an experiment that we conducted to assess whether our eye gaze model induces changes in the eye gaze behavior of an individual who is conversing with an avatar.
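
The paper's gaze models are grounded in observed conversational statistics; a toy two-state version (looking at the partner vs. averting gaze, with assumed mean durations that differ between speaking and listening) conveys the general structure of such a behavior model:

```python
import random

# Toy avatar gaze behavior model: alternate between looking at the
# conversational partner and averting gaze, with exponentially
# distributed segment durations. The mean durations are illustrative
# assumptions, not the statistics reported in the paper.

MEAN_DURATION_S = {
    ("speaking", "at_partner"): 1.8, ("speaking", "averted"): 2.5,
    ("listening", "at_partner"): 3.0, ("listening", "averted"): 1.2,
}

def gaze_schedule(role, total_s):
    """Yield (state, duration_s) segments covering total_s seconds."""
    state, elapsed = "at_partner", 0.0
    while elapsed < total_s:
        d = random.expovariate(1.0 / MEAN_DURATION_S[(role, state)])
        yield state, min(d, total_s - elapsed)
        elapsed += d
        state = "averted" if state == "at_partner" else "at_partner"

for state, dur in gaze_schedule("speaking", 10.0):
    print(f"{state:>10s}  {dur:5.2f} s")
```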

Proceedings ArticleDOI
08 Nov 2000
TL;DR: The Eye-gaze Response Interface Computer Aid (ERICA) is a computer system developed at the University of Virginia that tracks eye movement; with the methods described in this paper, individuals can reliably perform all actions of the mouse and the keyboard with their eye.
Abstract: The Eye-gaze Response Interface Computer Aid (ERICA) is a computer system developed at the University of Virginia that tracks eye movement. To allow true integration into the Windows environment, an effective methodology for performing the full range of mouse actions and for typing with the eye needed to be constructed. With the methods described in this paper, individuals can reliably perform all actions of the mouse and the keyboard with their eye.

Journal ArticleDOI
TL;DR: Assessment of the capacity of juvenile and adult pig-tailed macaques to use eye cues alone to follow the gaze of an experimenter found that such abilities dramatically improve with age, suggesting that the transition to adulthood is a crucial period in the development of gaze-following behavior.
Abstract: The ability of monkeys to follow the gaze of other individuals is a matter of debate in many behavioral studies. Physiological studies have shown that in monkeys, as in humans, there are neural correlates of eye direction detection. There is little evidence at the behavioral level, however, of the presence and development of such abilities in monkeys. The aim of the present study was to assess in juvenile and adult pig-tailed macaques (Macaca nemestrina) the capacity to use eye cues only to follow the gaze of an experimenter. Biological stimuli (head, eye, and trunk movements) were presented by an experimenter to 2 adult monkeys with their heads restrained (Experiment 1) and to 11 monkeys of different ages, free to move in their home cages (Experiment 2). A nonbiological stimulus served as a control. Results showed that macaques can follow the gaze of the experimenter by using head/eye and eye cues alone. Trunk movements and nonbiological stimuli did not significantly elicit similar reactions. Juvenile monkeys were not able to orient their attention on the basis of eye cues alone. In general, gaze following was more frequent in adults than in juveniles. As in humans, however, such abilities in macaques dramatically improve with age, suggesting that the transition to adulthood is a crucial period in the development of gaze-following behavior.

Proceedings ArticleDOI
08 Nov 2000
TL;DR: The use of off-screen targets and various schemes for decoding target hit sequences into text are proposed, including Morse code, the Minimal Device Independent Text Input Method (MDITIM), QuikWriting, and Cirrin-like target arrangements.
Abstract: Text input with eye trackers can be implemented in many ways such as on-screen keyboards or context sensitive menu-selection techniques. We propose the use of off-screen targets and various schemes for decoding target hit sequences into text. Off-screen targets help to avoid the Midas' touch problem and conserve display area. However, the number and location of the off-screen targets is a major usability issue. We discuss the use of Morse code, our Minimal Device Independent Text Input Method (MDITIM), QuikWriting, and Cirrin-like target arrangements. Furthermore, we describe our experience with an experimental system that implements eye tracker controlled MDITIM for the Windows environment.
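
As one concrete decoding scheme of the kind discussed, two off-screen targets can encode the Morse dot and dash, with a third committing each letter; a sketch with an assumed three-target layout (the target names are hypothetical):

```python
# Decoding off-screen target hits as Morse code. The three targets
# ("DOT", "DASH", and "END", which commits a letter) are an assumed
# layout, one of several arrangements the paper discusses.

MORSE = {
    ".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E",
    "..-.": "F", "--.": "G", "....": "H", "..": "I", ".---": "J",
    "-.-": "K", ".-..": "L", "--": "M", "-.": "N", "---": "O",
    ".--.": "P", "--.-": "Q", ".-.": "R", "...": "S", "-": "T",
    "..-": "U", "...-": "V", ".--": "W", "-..-": "X", "-.--": "Y",
    "--..": "Z",
}

def decode_hits(hits):
    """hits: iterable of target names, e.g. ["DOT", "DASH", "END"]."""
    text, symbol = [], ""
    for hit in hits:
        if hit == "DOT":
            symbol += "."
        elif hit == "DASH":
            symbol += "-"
        elif hit == "END":
            text.append(MORSE.get(symbol, "?"))  # "?" for unknown codes
            symbol = ""
    return "".join(text)

print(decode_hits(["DOT", "DASH", "END", "DOT", "END"]))  # -> "AE"
```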

Proceedings ArticleDOI
01 Nov 2000
TL;DR: A system for remedial reading instruction is presented that uses visually controlled auditory prompting to help the user with recognition and pronunciation of words, and a controlled study that was undertaken to evaluate the usability of the Reading Assistant is discussed.
Abstract: We have developed a system for remedial reading instruction that uses visually controlled auditory prompting to help the user with recognition and pronunciation of words. Our underlying hypothesis is that the relatively unobtrusive assistance rendered by such a system will be more effective than previous computer aided approaches. We present a description of the design and implementation of our system and discuss a controlled study that we undertook to evaluate the usability of the Reading Assistant.

Patent
Cynthia S. Bell1
09 May 2000
TL;DR: In this article, an eye gaze detector is used to determine what the user is looking at at any given instant; once an image element has been gazed upon for sufficient time, the screen display may be altered to change the appearance of the selected image element and the unselected image elements.
Abstract: An electronic device may include a microdisplay in which a displayed image element may be selected by gazing upon it. An eye gaze detector may determine what the user is looking at at any given instant and whether the user has looked at it for sufficient time. Once an image element is identified as being selected by being gazed upon, the screen display may be altered to change the appearance of the selected image element and the unselected image elements. For example, the selected image element may be brought into focus and other image elements may be blurred.

Patent
Fernando C. M. Martins1
19 Sep 2000
TL;DR: In this article, the authors track a user's eye gaze while the user is browsing a web page and modify the presentation of the web page to the user based on the tracked gaze.
Abstract: Passively tracking a user's eye gaze while the user is browsing a web page and modifying the presentation of the web page to the user based on the tracked gaze. By combining historical information about a user's direction of gaze on individual cached web pages, a browser may be enabled to represent regions of a web page that have been previously glanced at by the user in a modified manner. For example, sections of a web page that a user has previously read or viewed may be represented in a changed form, such as in a different color, brightness, or contrast, for example. In one embodiment, the portions of the web page previously viewed by the user may be represented as “grayed out” so as to be unobtrusive.
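
A minimal sketch of the accumulation step described here: gaze time is pooled per page region, and regions past a threshold are flagged for modified (e.g., grayed-out) rendering. The rectangular regions and the one-second threshold are assumptions, not values from the patent:

```python
from collections import defaultdict

# Pool gaze time per page region and flag regions the user has already
# viewed, so the browser can render them "grayed out". The rectangular
# regions and one-second threshold are illustrative assumptions.

VIEWED_THRESHOLD_S = 1.0

def update_viewed(gaze_samples, regions, viewed_time):
    """gaze_samples: (dt_s, x, y) tuples; regions: {name: (x0, y0, x1, y1)};
    viewed_time: defaultdict(float) persisted across calls (and, per the
    patent, across visits to cached pages). Returns names to gray out."""
    for dt, x, y in gaze_samples:
        for name, (x0, y0, x1, y1) in regions.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                viewed_time[name] += dt
    return {name for name, t in viewed_time.items() if t >= VIEWED_THRESHOLD_S}

history = defaultdict(float)
regions = {"headline": (0, 0, 800, 100), "body": (0, 100, 800, 900)}
print(update_viewed([(0.5, 400, 50), (0.7, 400, 60)], regions, history))
```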

Proceedings ArticleDOI
08 Nov 2000
TL;DR: The development of a binocular eye tracking Virtual Reality system for aircraft inspection training is described; recorded scanpaths provide a means of comparing the performance of experts to novices, thereby gauging the effects of training.
Abstract: This paper describes the development of a binocular eye tracking Virtual Reality system for aircraft inspection training. The aesthetic appearance of the environment is driven by standard graphical techniques augmented by realistic texture maps of the physical environment. A “virtual flashlight” is provided to simulate a tool used by inspectors. The user's gaze direction, as well as head position and orientation, are tracked to allow recording of the user's gaze locations within the environment. These gaze locations, or scanpaths, are calculated as gaze/polygon intersections, enabling comparison of fixated points with stored locations of artificially generated defects located in the environment interior. Recorded scanpaths provide a means of comparison of the performance of experts to novices, thereby gauging the effects of training.
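
Computing gaze/polygon intersections amounts to casting the gaze ray against the environment's triangles. The paper does not specify its implementation; the standard Möller-Trumbore ray/triangle test is one common choice:

```python
# Standard Möller-Trumbore ray/triangle intersection, the usual way to
# compute gaze/polygon intersections against environment geometry. The
# paper does not specify its implementation, so this is a generic sketch.

EPS = 1e-9

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def gaze_hit(origin, direction, tri):
    """Return the point where the gaze ray hits triangle tri, else None."""
    v0, v1, v2 = tri
    e1, e2 = sub(v1, v0), sub(v2, v0)
    h = cross(direction, e2)
    a = dot(e1, h)
    if abs(a) < EPS:                 # ray parallel to the triangle plane
        return None
    f = 1.0 / a
    s = sub(origin, v0)
    u = f * dot(s, h)
    if not 0.0 <= u <= 1.0:
        return None
    q = cross(s, e1)
    v = f * dot(direction, q)
    if v < 0.0 or u + v > 1.0:       # outside the triangle
        return None
    t = f * dot(e2, q)
    return None if t <= EPS else tuple(origin[k] + t * direction[k] for k in range(3))
```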

Proceedings ArticleDOI
08 Nov 2000
TL;DR: This paper summarizes results from a series of 6 studies investigating spatial, resolutional, and temporal parameters affecting perception and performance in such eye-contingent multi-resolutional displays.
Abstract: One way to economize on bandwidth in single-user head-mounted displays is to put high-resolution information only where the user is currently looking. This paper summarizes results from a series of 6 studies investigating spatial, resolutional, and temporal parameters affecting perception and performance in such eye-contingent multi-resolutional displays. Based on the results of these studies, suggestions are made for the design of eye-contingent multi-resolutional displays.

Proceedings ArticleDOI
08 Nov 2000
TL;DR: An application called Gaze Tracker™ is described that facilitates the analysis of a test subject's eye movements and pupil response to visual stimuli, such as still images or dynamic software applications that the test subject interacts with (for example, Internet Explorer).
Abstract: The Eye-gaze Response Interface Computer Aid (ERICA) is a computer system developed at the University of Virginia that tracks eye movement. Originally developed as a means to allow individuals with disabilities to communicate, ERICA was then expanded to provide methods for experimenters to analyze eye movements. This paper describes an application called Gaze Tracker™ that facilitates the analysis of a test subject's eye movements and pupil response to visual stimuli, such as still images or dynamic software applications that the test subject interacts with (for example, Internet Explorer).

Proceedings ArticleDOI
08 Nov 2000
TL;DR: The use of eye gaze tracking and trajectory analysis in the testing of the performance of input devices for cursor control in Graphical User Interfaces (GUIs) is described.
Abstract: In this paper, we describe the use of eye gaze tracking and trajectory analysis in the testing of the performance of input devices for cursor control in Graphical User Interfaces (GUIs). By closely studying the behavior of test subjects performing pointing tasks, we can gain a more detailed understanding of the device design factors that may influence the overall performance with these devices. Our results show there are many patterns of hand-eye coordination at the computer interface which differ from patterns found in direct hand pointing at physical targets (Byrne, Anderson, Douglass, & Matessa, 1999).

Journal ArticleDOI
TL;DR: An attention-driven approach to feature detection inspired by the human saccadic system is suggested, and a dramatic speedup is achieved by computing the Gabor decomposition only on the points of a sparse retinotopic grid.

Patent
01 Aug 2000
TL;DR: In this article, a pair of linear photodetectors (42a and 42b) is used to sense and measure one-dimensional positioning error and provide feedback to a one-dimensional positioning apparatus, resulting in a single, highly linear system capable of accurate position tracking.
Abstract: Improved devices, systems, and methods for sensing and tracking the position of an eye make use of the contrast between the sclera and iris to derive eye position. In many embodiments, linear photodetectors extend across the pupil, optionally also extending across the iris to the sclera. A pair of such linear photodetectors (42a and 42b) can accurately sense and measure one-dimensional positioning error and provide feedback to a one-dimensional positioning apparatus, resulting in a single, highly linear system capable of accurate position tracking.
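
The position signal in such a design is essentially differential: the dark iris lowers the summed output of whichever detector it covers more. A schematic sketch, with sign convention and normalization assumed for illustration:

```python
# Schematic differential position signal from a pair of linear
# photodetectors straddling the iris/sclera boundary: the darker iris
# lowers the summed output of whichever detector it covers more. Sign
# convention and normalization are assumptions for illustration.

def eye_position_error(detector_a, detector_b):
    """detector_a, detector_b: per-element readings of the two arrays.
    Returns a signed error in [-1, 1]; 0 means the eye is centered."""
    a, b = sum(detector_a), sum(detector_b)
    total = a + b
    return 0.0 if total == 0 else (a - b) / total

print(eye_position_error([0.8, 0.7, 0.2], [0.8, 0.8, 0.8]))  # iris covering array A
```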

Patent
Eric Horvitz1, Kentaro Toyama1
01 Jun 2000
TL;DR: A system and method for efficiently and effectively performing automated vision tracking, such as tracking human head movement and facial movement, is presented; it integrates reports from several distinct vision processing procedures in a probabilistically coherent manner by performing inferences about the location and motion of objects.
Abstract: The present invention is embodied in a system and method for efficiently and effectively performing automated vision tracking, such as tracking human head movement and facial movement. The system and method fuse the results of multiple sensing modalities to achieve robust digital vision tracking. The approach integrates reports from several distinct vision processing procedures in a probabilistically coherent manner by performing inferences about the location and/or motion of objects that consider both the individual reports about targets provided by the visual processing modalities and inferences about the context-sensitive accuracies of those reports. The context-sensitive accuracies are inferred by observing evidence with relevance to the reliabilities of the different methods.
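
The patent's fusion is a Bayesian inference over reports and their context-sensitive reliabilities; as a greatly simplified stand-in, per-modality reports can be combined by inverse-variance weighting, with each variance inflated by an inferred unreliability:

```python
# Greatly simplified stand-in for the patent's probabilistic fusion:
# inverse-variance weighting of per-modality position reports, with each
# modality's variance inflated by a context-sensitive unreliability.
# The actual method performs richer Bayesian inference over the reports.

def fuse_reports(reports):
    """reports: list of (position, variance, reliability in (0, 1])."""
    weights = [rel / var for (_, var, rel) in reports]
    total = sum(weights)
    fused = sum(w * pos for w, (pos, _, _) in zip(weights, reports)) / total
    return fused, 1.0 / total   # fused estimate and an approximate variance

# Hypothetical head positions: color tracking trusted less in dim light.
print(fuse_reports([(10.0, 1.0, 0.9), (14.0, 2.0, 0.3)]))
```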

01 Jan 2000
TL;DR: It is important to be clear about the domain and scope of the Visual Memory Theory, which is meant to capture the nature of the scene representations that are generated, retained, and available for comparison and integration across extended viewing time and multiple eye fixations.
Abstract: visual representations of local objects and scene elements in long-term memory (what we have called long-term object files; Hollingworth & Henderson, 2000b), tied together with a spatial representation of object locations. Cognitive representations including object and scene identity and meaning are also generated during fixations, and these can also be stored both in short-term and long-term memory along with more specific visual representations coded by the long-term object files. It is important that we be clear about the domain and scope of the Visual Memory Theory. The theory is meant to capture the nature of the scene representations that are generated, retained, and available for comparison and integration across extended viewing time and multiple eye fixations. Thus, the nature of the representations that support conscious experience of change, or indeed the conscious experience of anything, is beyond the scope of the theory. The theory is agnostic about

Proceedings ArticleDOI
01 Apr 2000
TL;DR: A Fitts' law model was shown to predict movement times for both interaction techniques equally well and is seen as a potential contributor to the design of modern multimodal human-computer interfaces.
Abstract: An experiment is described comparing the performance of an eye tracker and a mouse in a simple pointing task. Subjects had to make rapid and accurate horizontal movements to targets that were vertical ribbons located at various distances from the cursor's starting position. The dwell-time protocol was used for the eye tracker to make selections. Movement times were shorter for the mouse than for the eye tracker. A Fitts' law model was shown to predict movement times for both interaction techniques equally well. The model is thus seen as a potential contributor to the design of modern multimodal human-computer interfaces.
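
The model referred to is Fitts' law, MT = a + b log2(D/W + 1), with intercept a and slope b fitted separately per device. A sketch of prediction under that model; the coefficient values below are placeholders, not the paper's fitted parameters:

```python
import math

# Fitts' law: MT = a + b * log2(D / W + 1), with intercept a and slope b
# fitted separately for the mouse and the eye tracker. The coefficient
# values below are placeholders, not the paper's fitted parameters.

def index_of_difficulty(distance, width):
    return math.log2(distance / width + 1)

def predict_movement_time(a, b, distance, width):
    return a + b * index_of_difficulty(distance, width)

# Hypothetical device: a = 0.2 s intercept, b = 0.15 s/bit slope.
print(predict_movement_time(a=0.2, b=0.15, distance=512, width=32))
```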


Journal ArticleDOI
TL;DR: The results show that capuchin monkeys can learn to use eye gaze as a discriminative cue, but there was no evidence for any underlying awareness of eye gaze as a cue to direction of attention.
Abstract: The ability of 3 capuchin monkeys (Cebus apella) to use experimenter-given cues to solve an object-choice task was assessed. The monkeys learned to use explicit gestural and postural cues and then progressed to using eye-gaze-only cues to solve the task, that is, to choose the baited 1 of 2 objects and thus obtain a food reward. Increasing cue-stimulus distance and introducing movement of the eyes impeded the establishment of effective eye-gaze reading. One monkey showed positive but imperfect transfer of use of eye gaze when a novel experimenter presented the cue. When head and eye orientation cues were presented simultaneously and in conflict, the monkeys showed greater responsiveness to head orientation cues. The results show that capuchin monkeys can learn to use eye gaze as a discriminative cue, but there was no evidence for any underlying awareness of eye gaze as a cue to direction of attention.

Patent
Christopher S. Campbell1
29 Aug 2000
TL;DR: In this paper, three distinct mechanisms are used: (1) coarse or quantized representation of eye movements, (2) detection based on accumulation of pooled numerical evidence, and (3) mode switching.
Abstract: Accurately recognizing from eye-gaze patterns when a user is reading, skimming, or scanning on a display filled with heterogeneous content, and then supplying information tailored to meet individual needs. Heterogeneous content includes objects normally encountered on computer monitors, such as text, images, hyperlinks, windows, icons, and menus. Three distinct mechanisms are used: (1) coarse or quantized representation of eye movements, (2) detection based on accumulation of pooled numerical evidence, and (3) mode switching. Analysis of the text the user is reading or skimming may be used to infer user interest and adapt to the user's needs.
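
The three mechanisms can be sketched together: each saccade is quantized into a coarse class, scored for consistency with reading, and pooled into a running evidence total that drives mode switching with hysteresis. All classes, scores, and thresholds below are illustrative assumptions:

```python
# Sketch of the patent's pooled-evidence idea: quantize saccades into
# coarse classes, accumulate a reading score, and switch modes on a
# threshold. All classes, scores, and thresholds are assumptions.

READ_ON, READ_OFF = 6.0, 0.0   # hysteresis thresholds for mode switching

def classify(dx, dy):
    """Coarse, quantized saccade classes (pixel deltas assumed)."""
    if abs(dy) > 20:
        return "vertical"
    if 0 < dx <= 120:
        return "short_right"      # typical forward reading saccade
    if dx < -200:
        return "return_sweep"     # sweep back to the next line's start
    return "other"

EVIDENCE = {"short_right": 1.0, "return_sweep": 0.5,
            "vertical": -1.0, "other": -0.5}

def reading_modes(saccades):
    """saccades: iterable of (dx, dy). Yields the current mode per saccade."""
    score, mode = 0.0, "not_reading"
    for dx, dy in saccades:
        score = max(READ_OFF, score + EVIDENCE[classify(dx, dy)])
        if mode == "not_reading" and score >= READ_ON:
            mode = "reading"
        elif mode == "reading" and score <= READ_OFF:
            mode = "not_reading"
        yield mode
```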

Proceedings ArticleDOI
08 Nov 2000
TL;DR: The behavior of human observers performing visual search of natural scenes using gaze-contingent variable resolution displays is examined and measures of reaction time, accuracy, and fixation duration suggest that task performance is comparable to that seen for uniform resolution displays when the central region size is approximately 5 degrees.
Abstract: Gaze-contingent variable resolution display techniques allocate computational resources for image generation preferentially to the area around the center of gaze where visual sensitivity to detail is the greatest. Although these techniques are computationally efficient, their behavioral consequences with realistic tasks and materials are not well understood. The behavior of human observers performing visual search of natural scenes using gaze-contingent variable resolution displays is examined. A two-region display was used where a high-resolution region was centered on the instantaneous center of gaze, and the surrounding region was presented in a lower resolution. The radius of the central high-resolution region was varied from 1 to 15 degrees while the total amount of computational resources required to generate the visual display was kept constant. Measures of reaction time, accuracy, and fixation duration suggest that task performance is comparable to that seen for uniform resolution displays when the central region size is approximately 5 degrees.
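
Whether a pixel falls in the high-resolution region depends on its eccentricity from the instantaneous center of gaze, so the roughly 5-degree radius must be converted to pixels using the viewing geometry. A sketch with assumed viewing distance and pixel pitch (not the study's apparatus values):

```python
import math

# Convert a visual-angle radius (e.g., the ~5-degree high-resolution
# window) to pixels, then test pixels against it. Viewing distance and
# pixel density are assumed values, not the study's apparatus specs.

def degrees_to_pixels(deg, viewing_distance_cm=60.0, pixels_per_cm=38.0):
    return 2 * viewing_distance_cm * math.tan(math.radians(deg) / 2) * pixels_per_cm

def in_high_res_region(px, py, gaze_x, gaze_y, radius_deg=5.0):
    radius_px = degrees_to_pixels(radius_deg)
    return math.hypot(px - gaze_x, py - gaze_y) <= radius_px

print(round(degrees_to_pixels(5.0)))  # window radius in pixels
```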