
Showing papers on "Eye tracking" published in 1992


Journal ArticleDOI
TL;DR: Monkeys were trained to perform a variety of horizontal eye tracking tasks designed to reveal possible eye movement and vestibular sensitivities of neurons in the medulla; three different types of neurons recorded in these regions are reported here.
Abstract: 1. Monkeys were trained to perform a variety of horizontal eye tracking tasks designed to reveal possible eye movement and vestibular sensitivities of neurons in the medulla. To test eye movement s...

329 citations


Journal ArticleDOI
10 Dec 1992-Nature
TL;DR: It is found that people require extra-retinal information about eye position to perceive heading accurately under many viewing conditions; this account predicts poorer performance in the simulated condition because the eyes do not move.
Abstract: Warren and Hannon (1988, 1990), while studying the perception of heading during eye movements, concluded that people do not require extraretinal information to judge heading with eye/head movements present. Here, heading judgments are examined at higher, more typical eye movement velocities than the extremely slow tracking eye movements used by Warren and Hannon. It is found that people require extraretinal information about eye position to perceive heading accurately under many viewing conditions.

328 citations


Book ChapterDOI
01 Jan 1992
TL;DR: In this paper, it was shown that during the course of a complex visual task such as reading or picture viewing, our eyes move from one location to another at an average rate of 3 to 5 times per second.
Abstract: Experimental psychologists have known for some time that it is possible to allocate visual-spatial attention to one region of the visual field even as we maintain eye fixation on another region. As William James stated it, “…we may attend to an object on the periphery of the visual field and yet not accommodate the eye for it” (James, 1890/1950, p. 437). At the same time, experimental psychologists have also known that during the course of a complex visual task such as reading or picture viewing, our eyes move from one location to another at an average rate of 3 to 5 times per second (e.g., Rayner, 1978; Tinker, 1939; Yarbus, 1967). The question therefore arises how these covert and overt changes in processing focus are related. This is the question addressed in the present chapter.

227 citations


BookDOI
01 Jan 1992

192 citations


Journal ArticleDOI
TL;DR: The gaze control system can be modeled using a feedback system in which an internally created, instantaneous, gaze motor error signal--equivalent to the distance between the target and the gaze position at that time--is used to drive both eye and head motor circuits.
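
A minimal sketch of this kind of feedback law, assuming first-order eye and head plants with made-up gains and limits (not the authors' model):

```python
# Sketch of the feedback idea above, not the paper's model: an instantaneous
# gaze motor error (target - gaze) drives both an eye plant and a slower head
# plant. Gains, limits, and dynamics are assumed placeholder values.
dt = 0.001                         # 1 ms time step
target = 40.0                      # target eccentricity (deg)
eye, head = 0.0, 0.0
k_eye, k_head = 60.0, 8.0          # hypothetical loop gains (eye fast, head slow)
eye_limit = 35.0                   # hypothetical oculomotor range (deg)

for _ in range(int(0.8 / dt)):     # simulate 800 ms
    error = target - (eye + head)  # gaze motor error
    eye += k_eye * error * dt      # fast eye movement toward the target
    eye = max(-eye_limit, min(eye, eye_limit))
    head += k_head * error * dt    # slower head contribution
print(f"gaze = {eye + head:.1f} deg (eye {eye:.1f}, head {head:.1f})")
```

With these placeholder values the eye saturates at its mechanical limit and the head completes the gaze shift, which is the qualitative behavior the shared-error scheme is meant to capture.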

189 citations


Journal ArticleDOI
24 Sep 1992-Nature
TL;DR: Using a new portable and inexpensive method for recording head and eye movements, the behaviour of car drivers is examined, particularly during the large gaze changes made at road junctions, to show that the pattern of eye and head movements is highly predictable, given only the sequence of gaze targets.
Abstract: Large changes in the direction of gaze are made with a combination of fast saccadic eye movements and rather slower head movements. Since the first study on freely moving subjects, most authors have agreed that the head movement component of gaze is very variable, with a high 'volitional' component. But in some circumstances head and eye movements can be quite predictable, for example when a subject is asked to shift gaze as quickly as possible. Under these conditions, laboratory studies have shown that the eye and head motor-systems both receive gaze-change commands, although they execute them in rather different ways. Here I reconsider the way gaze direction is changed during free movement, but in the performance of a task where the subject is too busy to exert conscious control over head or eye movements. Using a new portable and inexpensive method for recording head and eye movements, I examine the oculomotor behaviour of car drivers, particularly during the large gaze changes made at road junctions. The results show that the pattern of eye and head movements is highly predictable, given only the sequence of gaze targets.

170 citations


Journal ArticleDOI
TL;DR: The interpretation is that MST-l VT neurons are best described as encoding the direction of target motion in space-centred coordinates by integrating inputs reflecting retinal image motion plus eye and head movement.

Abstract: Thirty-one neurons which exhibited ocular pursuit-related activity [visual-tracking (VT) neurons] were found clustered within area MST-l (the lateral part of area MST) of two rhesus monkeys. Their responses were studied to determine whether this activity was correlated only with pursuit eye movement or with head movement as well. The latter hypothesis appeared to be preferable since visual, eye movement and head movement inputs were found to be mapped in register onto most of these cells. First, in each cell tested (n=19) the pursuit response persisted even in the absence of retinal image motion, offering clear evidence for non-visual input. Second, 22 of the 31 cells were directionally responsive to moving visual stimuli and in 20 of these the preferred directions for the visual motion and pursuit responses agreed closely. Responses were also obtained from many of the same cells during suppression of both the horizontal and the vertical vestibulo-ocular reflex (VOR). In each case, where directional visual, pursuit and VOR suppression responses were each obtained, vector addition of responses during suppression of the horizontal and vertical VOR resulted in an estimated preferred direction for head rotation which was closely aligned with the preferred direction previously obtained for eye motion or visual motion. In addition, the preferred direction of head movement was conserved even when the VOR was elicited by passive head rotation in complete darkness, although the responses in this instance were, on average, only 62% of those obtained during VOR suppression. Our interpretation is that, at present, MST-l VT neurons are best described as encoding the direction of target motion in space-centred coordinates by integrating inputs reflecting retinal image motion plus eye and head movement.

152 citations


Journal ArticleDOI
TL;DR: Predictive features of eye-hand coordination control were studied by introducing a delay between the Subject's (S's) hand motion and the motion of the hand-driven target on the screen; the eyes were always in phase with the visual target.
Abstract: The aim of this study was to examine coordination control in eye and hand tracking of visual targets. We studied eye tracking of a self-moved target, and simultaneous eye and hand tracking of an external visual target moving horizontally on a screen. Predictive features of eye-hand coordination control were studied by introducing a delay (0 to 450 ms) between the Subject's (S's) hand motion and the motion of the hand-driven target on the screen. In self-moved target tracking with artificial delay, the eyes started to move in response to arm movement while the visual target was still motionless, that is, before any retinal slip had been produced. The signal likely to trigger smooth pursuit in that condition must be derived from non-visual information; candidates are efference copy and afferent signals from arm motion. When tracking an external target with the eyes and the hand, in a condition where a delay was introduced in the visual feedback loop of the hand, the Ss anticipated the movement of the target with the arm in order to compensate for the delay. After a short tracking period, Ss were able to track with a low lag, or even to create a lead between the hand and the target. This was observed if the delay was less than 250-300 ms. For larger delays, the hand lagged the target by 250-300 ms; Ss did not completely compensate for the delay and did not, on the average, correct for sudden changes in movement of the target (at the direction reversal of the trajectory). Conversely, over the whole range of studied delays (0-450 ms), the eyes were always in phase with the visual target (except during the first part of the first cycle of the movement, as seen previously). These findings are discussed in relation to a scheme that includes both predictive control (reflecting the dynamic nature of the motion) and coordination control (interactive signals between the eye and hand movement systems).
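
One common way to quantify hand-target lags of this kind is cross-correlation of the two traces. A sketch on simulated data, not the authors' analysis; the sampling rate, target frequency, noise level, and the 280 ms simulated lag are all assumed:

```python
import numpy as np

# Generic lag estimate by cross-correlation; all parameters are assumed.
fs = 100.0                                      # sampling rate (Hz)
t = np.arange(0, 20, 1 / fs)                    # 20 s of tracking
target = np.sin(2 * np.pi * 0.25 * t)           # slow sinusoidal target
hand = np.sin(2 * np.pi * 0.25 * (t - 0.28))    # hand lags target by 280 ms
hand = hand + np.random.normal(0, 0.05, t.size) # measurement noise

xc = np.correlate(hand - hand.mean(), target - target.mean(), mode="full")
lags = np.arange(-t.size + 1, t.size) / fs      # lag axis in seconds
print(f"estimated hand lag: {1000 * lags[np.argmax(xc)]:.0f} ms")
```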

122 citations


Patent
04 Dec 1992
TL;DR: In this article, an eye tracking system consisting of a display and a photodetector array is described, where each pixel in the display is aligned with a corresponding photodetector.
Abstract: An eye tracking system is disclosed which is comprised of an eye tracking module formed of a display joined to a photodetector array. Each pixel in the display is aligned with a corresponding photodetector. An image generated by the display is projected onto a viewing screen or toward a viewer. Axial light rays from the display pixels are reflected by the eye and detected by a respective photodetector which generates an electrical signal indicative of eye position.

97 citations


Journal ArticleDOI
TL;DR: No significant change was quantitatively found in saccadic eye movements during or after five hours of rapid eye tracking tasks, and the maximum velocity obtained in the experiment was used to produce a scale for various kinds of visual work as an ergonomic index.

Abstract: A measurement system for quantitative analysis of eye movements and of the distribution of eye fixation points was developed in this study, and experiments on physiological fatigue characteristics of eye movements were conducted using the system. The subjects were six young males. No significant change was quantitatively found in saccadic eye movements during or after five hours of rapid eye tracking tasks, although the binocular saccadic velocity of two subjects decreased temporarily. The maximum velocity of eye movements obtained in the present experiment was used to produce a scale for various kinds of visual work as an ergonomic index.

92 citations


Journal ArticleDOI
TL;DR: Results support postulation of a single gene for ocular motor dysfunction, which may be a risk factor for schizophrenia, and eye tracking may be useful as a gene carrier test in genetic studies of schizophrenia.
Abstract: Objective: Evidence suggests that poor eye tracking relates to genetically transmitted vulnerability for schizophrenia. The authors tested competing models for the genetic transmission of poor eye tracking in a search for major gene effects. Method: Samples from three studies (conducted in Minneapolis, New York, and Vancouver, B.C.) were pooled. Probands (N=92) were diagnosed as schizophrenic by DSM-III criteria. Of the comparison subjects (N=171), Vancouver patients were an epidemiologic first-episode group; at other sites unselected admitted patients were studied. First-degree relatives (N=146) of 65 probands were also studied. Eye tracking was measured while subjects followed a horizontally moving, sinusoidally driven (0.4 Hz) spot of light on a screen. Performance was quantified by root mean square error. Data analysis was by complex segregation analysis (Bonney's class D regressive models). Results: A single major gene is needed to account for poor eye tracking in schizophrenic patients and their relatives. This gene alone can explain about two-thirds of the variance in eye tracking performance. A single gene alone (regardless of dominance) will, however, not account for the data; polygenic factors are also required. Conclusions: Results support postulation of a single gene for ocular motor dysfunction, which may be a risk factor for schizophrenia. Eye tracking may be useful as a gene carrier test in genetic studies of schizophrenia. (Am J Psychiatry 1992; 149:1362-1368)
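
The root mean square error measure used here is straightforward to compute. A minimal sketch on simulated data; the sampling rate, pursuit gain, phase lag, and noise level are assumed values chosen purely for illustration:

```python
import numpy as np

# RMS tracking error against a sinusoidally driven target; simulated trace.
fs, f, amp = 200.0, 0.4, 10.0                  # sample rate (Hz), target freq (Hz), amplitude (deg)
t = np.arange(0, 10, 1 / fs)                   # one 10 s trial
target = amp * np.sin(2 * np.pi * f * t)       # sinusoidally driven spot
eye = 0.9 * amp * np.sin(2 * np.pi * f * t - 0.1)  # reduced gain, slight lag
eye = eye + np.random.normal(0, 0.5, t.size)   # measurement noise

rms_error = np.sqrt(np.mean((eye - target) ** 2))
print(f"RMS tracking error: {rms_error:.2f} deg")
```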

Journal ArticleDOI
TL;DR: It is suggested that on occasions when the global optic flow cannot be resolved into a single vector useful to the oculomotor system, a third independent tracking mechanism, the smooth pursuit system, is deployed to stabilize gaze on the local feature of interest.
Abstract: In monkeys, there are several reflexes that generate eye movements to compensate for the observer's own movements. Two vestibuloocular reflexes compensate selectively for rotational (RVOR) and translational (TVOR) disturbances of the head, receiving their inputs from the semicircular canals and otolith organs, respectively. Two independent visual tracking systems that deal with residual disturbances of gaze are manifest in the two components of the optokinetic response: the indirect or delayed component (OKNd) and the direct or early component (OKNe). We hypothesize that OKNd--like the RVOR--is phylogenetically old, being found in all animals with mobile eyes, and that it evolved as a backup to the RVOR to compensate for rotational disturbances of gaze. Indeed, optically induced changes in the gain of the RVOR result in parallel changes in the gain of OKNd, consistent with the idea of shared pathways as well as shared functions. In contrast, OKNe--like the TVOR--seems to have evolved much more recently in frontal-eyed animals and, we suggest, acts as a backup to the TVOR to deal primarily with translational disturbances of gaze. Frontal-eyed animals with good binocular vision must be able to keep both eyes directed at the object of regard irrespective of proximity, and in order to achieve this during translational disturbances, the output of the TVOR is modulated inversely with the viewing distance. OKNe shares this sensitivity to absolute depth, consistent with the idea that it is synergistic with the TVOR and shares some of its central pathways. There is evidence that OKNe is also sensitive to relative depth cues such as motion parallax, which we suggest helps the system to segregate the object of regard from other elements in the scene. However, there are occasions when the global optic flow cannot be resolved into a single vector useful to the oculomotor system (e.g., when the moving observer looks towards the direction of heading). We suggest that on such occasions a third independent tracking mechanism, the smooth pursuit system, is deployed to stabilize gaze on the local feature of interest. In this scheme, the pursuit system has an attentional focusing mechanism that spatially filters the visual motion inputs driving the oculomotor system. The major distinguishing features of the 3 visual tracking mechanisms are summarized in Table 1.

Book ChapterDOI
01 Jan 1992
TL;DR: The coastline is the target-elicited saccadic eye movement, where a subject orients to a well-defined target that appears suddenly in the visual periphery; this behavior can provide a good point of departure for examining cognitive influences.
Abstract: Eye movements and visual cognition have in the past seemed like two separate continents separated by an enormous and ill-charted ocean. As the chapters in this volume show, many navigators are now successfully venturing on this ocean. However, the strategy of the work described here is somewhat more modest and may be likened to that of the early explorers who kept well within sight of the coastline. The coastline is the target-elicited saccadic eye movement, where a subject orients to a well-defined target that appears suddenly in the visual periphery. This is a piece of behavior that has been extensively studied and is reasonably well understood. It is argued that it can provide a good point of departure for examining cognitive influences.

Patent
20 Nov 1992
TL;DR: In this paper, a method for capturing and presenting a pattern of visual information includes tracking the movement of an observer's eye while capturing images in response to the observer's eye movement, transmitting the images to an image display, tracking the movement of a perceiver's eye, and displaying the images in response to the perceiver's eye movement while fixing the image on the retina of the perceiver's eye.
Abstract: A method for capturing and presenting a pattern of visual information includes tracking the movement of an observer's eye while imaging images in response to the observer's eye movement, transmitting the images to an image display, tracking the movement of a perceiver's eye and displaying the images in response to the perceiver's eye movement while fixing the image on the retina of the perceiver. A system for capturing a pattern of visual information includes an observer's eye movement tracker and an imager responsive to the observer's eye movement tracker. Images captured by the imager can be transmitted to a display or recorded for later display. A system for presenting a pattern of visual information includes a display, a perceiver's eye movement tracker and an image fixer. Images from the imager or recorded images captured by the imager are displayed in response to the perceiver's eye movement tracker and fixed on the retina of the perceiver by the image fixer.

Journal ArticleDOI
TL;DR: An algorithm is described to discriminate automatically between saccades and slow eye movements; it is demonstrated on search-coil data from squirrel monkeys.
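
A generic way to implement such a discrimination is a velocity-threshold rule. The sketch below is not the paper's algorithm; the 30 deg/s threshold and 10 ms minimum duration are common defaults rather than the authors' values:

```python
import numpy as np

def detect_saccades(position, fs, vel_threshold=30.0, min_dur=0.01):
    """Return (onset, offset) sample indices of candidate saccades.

    Generic velocity-threshold scheme, not the paper's algorithm; the
    30 deg/s threshold and 10 ms minimum duration are common defaults.
    """
    velocity = np.gradient(position) * fs        # deg/s from a position trace
    fast = np.abs(velocity) > vel_threshold      # supra-threshold samples
    min_samples = int(min_dur * fs)
    saccades, start = [], None
    for i, flag in enumerate(np.append(fast, False)):
        if flag and start is None:
            start = i                            # a run of fast samples begins
        elif not flag and start is not None:
            if i - start >= min_samples:
                saccades.append((start, i))      # long enough: keep it
            start = None
    return saccades
```

Everything not labeled saccadic is then treated as slow eye movement (fixation, pursuit, or drift), which is the essence of any threshold-based discriminator.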

Journal ArticleDOI
TL;DR: The hypothesis that specific quantitative features of eye tracking would be correlated with the amplitude of a component of the auditory evoked potential, the N100, which is known to be enhanced by arousal and selective attention is examined.
Abstract: Attentional factors are thought to affect eye-tracking patterns. The present study examined the hypothesis that specific quantitative features of eye tracking would be correlated with the amplitude of a component of the auditory evoked potential, the N100, which is known to be enhanced by arousal and selective attention. We studied 12 clinically stable schizophrenic patients by means of DC-electro-oculography. The frequency and amplitude of different types of saccades (catchup, backup, anticipatory saccades, and square wave jerks) were assessed. The results suggest that small and large saccades, as classified by a simple amplitude criterion (4°), have differential meanings and indicate that enhanced amplitudes of small saccades are an effect of arousal.

Book ChapterDOI
01 Jan 1992
TL;DR: This article showed that most of the time our attention and our eyes are directed at the same part of the stimulus field during reading, looking at a scene, or searching for an object in our environment.
Abstract: When we read or look at a scene or search for an object in our environment, we do so by moving our eyes over the visual field. Limitations of visual acuity necessitate these eye movements; we move our eyes so that the fovea (or center of vision) is directed toward what we wish to process in detail. Our eyes do not move smoothly over the stimulus; rather, we make a series of fixations and saccades. Fixations, the period of time when our eyes are relatively still, typically last for approximately 150 to 350 ms. Following a fixation, our eyes make a saccade (or jump) to a new location. Although it has been known for some time that we can dissociate our attention from the point of fixation, most of the time our attention and our eyes are directed at the same part of the stimulus field.

Journal ArticleDOI
TL;DR: The results suggest that finger movements become faster and use of vision is reduced when both visual and tactile information are given.
Abstract: Adaptation experiments in shape tracing were conducted to investigate finger and eye movements under various conditions of visual and tactile information. Maximum velocity, mean velocity, maximum acceleration, and reacceleration point were calculated from finger movements; the number of eye fixations and the lead time of eye fixation relative to finger position were calculated from eye movements. The results showed that, for finger movement, the values of the indices studied were higher in the combined visual and tactile condition than in the visual-only condition. The number of eye fixations decreased when subjects repeated the tracing, and this decrease was more marked in the combined visual and tactile condition than in the visual-only condition. The results suggest that finger movements become faster and the use of vision is reduced when both visual and tactile information are given.

Proceedings ArticleDOI
07 Jul 1992
TL;DR: The experiments performed in a real-time environment show the effectiveness and robustness of the proposed method for servoing tasks based on visual feedback control.

Abstract: This paper presents a new approach for visual tracking and servoing in robotics. We introduce deformable active models as a powerful means for tracking a rigid object in movement within the manipulator's workspace. Deformable models imitate, in real-time, the dynamic behavior of elastic structures. These computer-generated models are designed to capture the silhouette of rigid objects with well-defined boundaries, in terms of image gradient. By means of an eye-in-hand robot arm configuration, the desired motion of the end-effector is computed with the objective of keeping the target's position and shape invariant with respect to the camera frame. Optimal estimation and control techniques (LQG regulator) have been successfully implemented in order to deal with noisy measurements provided by our vision sensor. Experimental results are presented for the tracking of a rigid object moving in a plane parallel to the image plane (three degree-of-freedom visual servoing). The experiments performed in a real-time environment show the effectiveness and robustness of the proposed method for servoing tasks based on visual feedback control.

Proceedings ArticleDOI
01 Jan 1992
TL;DR: The framework of controlled active vision is applied to the problem of monocular full 3-D robotic visual tracking, and the results from its application to the TROIKABOT system (a set of three PUMA 560's manipulators) are presented.
Abstract: The framework of controlled active vision is applied to the problem of monocular full 3-D robotic visual tracking (three translations and three rotations). Full 3-D tracking of a moving target by a monocular hand-eye system is demonstrated. A single camera is used. A simple adaptive scheme is proposed, and the relative distance of the target from the camera is assumed to be partially unknown. The number of parameters that must be estimated online is minimal, resulting in a feasible real-time implementation of the scheme. The strong coupling of the rotational and translational degrees of freedom is treated in a way that guarantees robust tracking of the object. The limitations of the approach are discussed, and the results from its application to the TROIKABOT system (a set of three PUMA 560 manipulators) are presented.

Journal ArticleDOI
TL;DR: A combination of an electro-oculogram and video is used to record eye movements and this permits the qualitative clinical appearance of the case to be illustrated simultaneously with the quantitative eye movement trace.
Abstract: Eye movement studies can be useful in neuro-ophthalmological investigations of infants and young children. In our laboratory we use a combination of an electro-oculogram and video to record eye movements. A composite video image is created consisting of an image of the electro-oculographic eye movement trace superimposed on an image of the patient's eyes and face. This permits the qualitative clinical appearance of the case to be illustrated simultaneously with the quantitative eye movement trace.

Book ChapterDOI
01 Jan 1992
TL;DR: This chapter is concerned with understanding the mechanisms underlying the control of temporal and spatial properties of the eyes’ scanning pattern in visual monitoring tasks.
Abstract: This chapter is concerned with understanding the mechanisms underlying the control of temporal and spatial properties of the eyes’ scanning pattern in visual monitoring tasks. Basically, three types of models have been distinguished that differ with respect to the assumed relationship between the ongoing processing and the temporal and spatial decisions of the control system (cf. McConkie, 1983; Rayner, 1984).

Proceedings ArticleDOI
TL;DR: The purpose of this work has been to find solutions for reliable real-time monocular visual tracking by employing a bank of extended Kalman filtering based trackers each of which calculates estimates for location and motion using measurements of a few feature points at a time.
Abstract: The purpose of this work has been to find solutions for reliable real-time monocular visual tracking. The goal is to estimate the relative motion of a camera with respect to a rigid 3-D scene by tracking features. In the beginning, the 3-D locations of the features are not known accurately, but during the tracking process these uncertainties are reduced through the integration of new observations. Most attention has been given to modeling measurement uncertainties and selecting the features to be extracted from image frames. The experimental system under implementation employs a bank of extended Kalman filtering based trackers each of which calculates estimates for location and motion using measurements of a few feature points at a time. The small number of points makes the trackers sensitive to various measurement errors, simplifying the detection of tracking failures, thereby giving potential for improving reliability. The preliminary experiments have been performed with satisfactory results for sequences of images at the rates of 22 to 35 frames per second.
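
As a rough illustration of one tracker in such a bank, the sketch below runs a linear constant-velocity Kalman filter on a single 2-D image feature. The paper's trackers are extended Kalman filters over 3-D structure, so this only shows the predict/update cycle; the frame rate and noise covariances are assumed values:

```python
import numpy as np

dt = 1 / 30.0                                   # assumed frame interval (s)
F = np.eye(4); F[0, 2] = F[1, 3] = dt           # state [x, y, vx, vy]
H = np.eye(2, 4)                                # we measure position only
Q = np.eye(4) * 1e-3                            # assumed process noise
R = np.eye(2) * 0.5                             # assumed measurement noise (px^2)

def kf_step(x, P, z):
    """One predict/update cycle for a measured feature position z."""
    x, P = F @ x, F @ P @ F.T + Q               # predict
    S = H @ P @ H.T + R                         # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
    x = x + K @ (z - H @ x)                     # correct with the innovation
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4) * 10.0            # initial state and covariance
for z in ([10.0, 5.0], [10.6, 5.3], [11.2, 5.6]):   # measured positions (px)
    x, P = kf_step(x, P, np.asarray(z))
print(f"position: {x[:2]}, velocity: {x[2:]} (px, px/s)")
```

Running several such small filters in parallel, each on a few features, is what makes individual tracking failures easy to detect, as the abstract notes.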

Book ChapterDOI
01 Jan 1992
TL;DR: For instance, Schmauder et al. as mentioned in this paper used several different measures, such as first fixation duration, gaze duration, probability of fixating a target word, number of fixations on a target words, length of saccades off the target word and spillover effects, to draw conclusions about processing at these locations.
Abstract: Researchers investigating eye movements during reading often follow a strategy of letting theoretical considerations tell us what locations in text to focus on in analyzing the eye movement record. We attempt to draw conclusions about processing at these locations by considering results obtained with several different measures. Some of the measures we use are first fixation duration, gaze duration, probability of fixating a target word, number of fixations on a target word, length of saccades off a target word, and spillover effects (effects on words following a target word). We have argued in several places (Rayner & Pollatsek, 1987, 1989; Rayner, Sereno, Morris, Schmauder, & Clifton, 1989; Schmauder, 1991) that this strategy, as well as use of multiple paradigms to test a constant stimulus set, provides researchers with converging evidence and yields a more complete picture of cognitive processes operative during reading than that obtainable using either a single measure or a single paradigm.

Journal ArticleDOI
Muneo Iida1, Akira Tomono
TL;DR: Experimental comparison between the proposed method (with a mouse) and a conventional operation (a mouse alone) shows that the former is faster when a target is relatively separated from the center, i.e., the method is effective.
Abstract: Since human eye movement easily follows one's intention, it is possible to use this to instruct a computer through a display screen, e.g., selection of a menu and control of a cursor. This paper proposes a method to detect a gaze point on a display screen independently of the movement of an operator's head. The method uses an eye tracker based on the limbus boundary method and a three-dimensional (3-D) magnetic sensor which determines the position of an operator's head in a fixed coordinate. Experiments show that position-detection errors are about 1°, and this is enough for this application. To overcome an effect caused by a small involuntary movement of eyes around a target, a mouse is used at the same time. Experimental comparison between the proposed method (with a mouse) and a conventional operation (a mouse alone) shows that the former is faster when a target is relatively separated from the center, i.e., the method is effective.
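
The geometric core of head-independent gaze-point detection can be sketched as intersecting a gaze ray, built from the sensed head position and eye direction, with the display plane. This is not the authors' algorithm, and all coordinates below are made up:

```python
import numpy as np

# Screen lies in the plane z = 0; the head sits in front of it (meters).
head_pos = np.array([0.05, 0.02, 0.60])     # eye position from the 3-D sensor
gaze_dir = np.array([-0.08, -0.03, -1.0])   # gaze direction from the eye tracker
gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)

t = -head_pos[2] / gaze_dir[2]              # ray parameter reaching z = 0
gaze_point = head_pos + t * gaze_dir        # intersection with the screen
print(f"gaze point on screen: x={gaze_point[0]:.3f} m, y={gaze_point[1]:.3f} m")
```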


Journal Article
TL;DR: Cortical areas were explored with regard to saccade control: the lateral intraparietal area is involved in the spatial aspects of sensorimotor processing; the supplementary motor area in goal-directed gaze control; and, from lesion studies, the posterior parietal cortex in triggering visually guided saccades.

Abstract: Cortical areas were explored with regard to saccade control: the lateral intraparietal (LIP) area is involved in the spatial aspects of sensorimotor processing; the supplementary motor area in goal-directed gaze control; and, from lesion studies, the posterior parietal cortex in triggering visually guided saccades. Different studies have suggested that the spatial-to-temporal transformation takes place in the superior colliculus (SC) and the cerebellum. When the vestibulo-ocular reflex (VOR) produces inadequate eye movements, other supplementary mechanisms (e.g. non-visual, saccade) may play a role in correcting gaze. A classification of central vestibular disorders of the brainstem and VOR has been proposed, as manifested in any one of the three major planes of action (yaw, pitch and roll).

Journal Article
TL;DR: The normal variability values obtained can be usefully employed in neurophysiological longitudinal studies not only in normal subjects but also in pathological conditions, provided that the more reliable parameters and the more adequate strategies to compute normal variability values are chosen.

Abstract: We adopted the estimate of the intraclass coefficient of reliability, R, to evaluate the reliability of quantitative analysis of saccadic eye movements. At a one-week interval we recorded refixation saccadic eye movements twice from fifteen healthy subjects by means of the binocular electrooculographic technique. R was computed for the constants and the slopes of the amplitude/duration and the amplitude/peak velocity relationships, for the mean precision values, and for the mean latency values adjusted for subject's age. Our data demonstrated that the reliability of saccade parameters is fairly good for the amplitude/peak velocity relationship, good for the precision, and very good for the amplitude/duration relationship. Finally, we believe that the normal variability values we obtained can be usefully employed in neurophysiological longitudinal studies not only in normal subjects but also in pathological conditions, provided that the more reliable parameters and the more adequate strategies to compute normal variability values are chosen.
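
For reference, a one-way random-effects intraclass correlation can be computed as below. This generic ICC(1,1) estimator may differ from the exact variant the authors used, and the test-retest data are simulated:

```python
import numpy as np

def icc_oneway(scores):
    """One-way random-effects intraclass correlation, ICC(1,1).

    scores is an (n_subjects, k_sessions) array; a generic test-retest
    estimator, not necessarily the exact variant used in the paper.
    """
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    subj_means = scores.mean(axis=1)
    bms = k * np.sum((subj_means - grand) ** 2) / (n - 1)              # between-subjects MS
    wms = np.sum((scores - subj_means[:, None]) ** 2) / (n * (k - 1))  # within-subject MS
    return (bms - wms) / (bms + (k - 1) * wms)

# e.g., 15 subjects recorded twice at a one-week interval; the peak-velocity
# values are simulated, purely for illustration.
rng = np.random.default_rng(0)
true_vals = rng.normal(500, 50, 15)             # hypothetical peak velocities (deg/s)
sessions = np.column_stack([true_vals + rng.normal(0, 20, 15) for _ in range(2)])
print(f"R = {icc_oneway(sessions):.2f}")
```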

Proceedings ArticleDOI
01 Oct 1992
TL;DR: A new eye-gaze detection method overcoming the weaknesses of previous methods is proposed, utilizing a video-based technique, that does not need any sensors attached to the user's face, and allows the users' free head movement.
Abstract: Recently, devices controlled simply by looking at menus presented on a CRT display have been studied for the disabled. However, such devices have problems with their point-of-gaze detection methods, such as the burden they place on the user. In this paper, a new eye-gaze detection method overcoming the weaknesses of previous methods is proposed. The system, utilizing a video-based technique, does not need any sensors attached to the user's face and allows the user free head movement.

Proceedings ArticleDOI
01 Nov 1992
TL;DR: A real-time animation technique to generate blinking and gaze shift, while still considering convergence, using a Graphic Workstation is described, and the feeling of eye contact is evaluated using this eye animation.
Abstract: We describe a real-time animation technique to generate blinking and gaze shift, while still considering convergence, using a graphics workstation. Moreover, we evaluate the feeling of eye contact using this eye animation. In our experiment, we subjectively evaluate gaze generated by CG using a 2-D or 3-D display and compare it with the gaze of an actual person. We also perform an experiment on the time required for perception of eye contact when the eyes of the CG image are moving.