
Showing papers on "Eye tracking" published in 2003


Book ChapterDOI
01 Jan 2003
TL;DR: This chapter discusses the application of eye movements to user interfaces, both for analyzing interfaces (measuring usability) and as an actual control medium within a human–computer dialogue.
Abstract: This chapter discusses the application of eye movements to user interfaces, both for analyzing interfaces (measuring usability) and as an actual control medium within a human–computer dialogue. For usability analysis, the user's eye movements are recorded during system use and later analyzed retrospectively; however, the eye movements do not affect the interface in real time. As a direct control medium, the eye movements are obtained and used in real time as an input to the user–computer dialogue. The eye movements might be the sole input, typically for disabled users or hands-busy applications, or might be used as one of several inputs, combining with mouse, keyboard, sensors, or other devices. From the perspective of mainstream eye-movement research, human–computer interaction, together with related work in the broader field of communications and media research, appears as a new and very promising area of applied work. Both basic and applied work can profit from integration within a unified field of eye-movement research. Application of eye tracking in human–computer interaction remains a very promising approach; its technological and market barriers are finally being reduced.

1,421 citations


Book
09 Oct 2003
TL;DR: A book-length treatment of active vision, closing with space constancy and trans-saccadic integration: the traditional 'compensatory taking into account' approach, trans-saccadic integration, the Active Vision Cycle, and future directions.
Abstract: Table of contents:
PASSIVE VISION AND ACTIVE VISION: 1.1 Introduction 1.2 Passive vision 1.3 Visual attention 1.4 Active vision 1.5 Active vision and vision for action 1.6 Outline of the book
BACKGROUND TO ACTIVE VISION: 2.1 Introduction 2.2 The inhomogeneity of the visual projections 2.3 Parallel visual pathways 2.4 The oculomotor system 2.5 Saccadic eye movements 2.6 Summary
VISUAL SELECTION, COVERT ATTENTION AND EYE MOVEMENTS: 3.1 Covert and overt attention 3.2 Covert spatial attention 3.3 The relationship between covert and overt attention 3.4 Speed of attention 3.5 Neurophysiology of attention 3.6 Non-spatial attention 3.7 Active vision and attention 3.8 Summary
VISUAL ORIENTING: 4.1 Introduction 4.2 What determines the latency of orienting saccades? 4.3 Physiology of saccade initiation 4.4 What determines the landing position of orienting saccades? 4.5 Physiology of the WHERE system 4.6 The Findlay and Walker model 4.7 Development and plasticity
VISUAL SAMPLING DURING TEXT READING: 5.1 Introduction 5.2 Basic patterns of visual sampling during reading 5.3 Perception during fixations in reading 5.4 Language processing 5.5 Control of fixation duration 5.6 Control of landing position 5.7 Theories of eye control during reading 5.8 Practical aspects of eye control in reading 5.9 Overview
VISUAL SEARCH: 6.1 Visual search tasks 6.2 Theories of visual search 6.3 The need for eye movements in visual search 6.4 Eye movements in visual search 6.5 Ocular capture in visual search 6.6 Saccades in visual search: scanpaths 6.7 Physiology of visual search 6.8 Summary
NATURAL SCENES AND ACTIVITIES: 7.1 Introduction 7.2 Analytic studies of scene and object perception 7.3 Dynamic scenes and situations 7.4 Summary
HUMAN NEUROPSYCHOLOGY: 8.1 Blindsight 8.2 Neglect 8.3 Balint's syndrome and dorsal simultanagnosia 8.4 Frontal lobe damage 8.5 Orienting without eye movements 8.6 Summary
SPACE CONSTANCY AND TRANS-SACCADIC INTEGRATION: 9.1 The traditional approach: 'compensatory taking into account' 9.2 Trans-saccadic integration 9.3 Resolution of the conflicting results 9.4 Conclusion: The Active Vision Cycle 9.5 Future directions

690 citations


Book
01 Jan 2003
TL;DR: Covers visual information processing and saccadic eye movements; eye movements in reading and language processing; and computational models of eye movement control in reading.
Abstract: Visual Information Processing and Saccadic Eye Movements. Eye Movements in Reading and Language Processing. Computational Models of Eye Movement Control in Reading. Eye Movements in Human-Computer Interaction. Eye Movements in Media Applications and Communication.

618 citations


Book
01 Jan 2003

432 citations


Proceedings ArticleDOI
05 Apr 2003
TL;DR: A comparative analysis of 48 post-experiment questionnaires confirms earlier findings from non-immersive studies using semi-photorealistic avatars; however, responses to the lower-realism avatar are adversely affected by inferred gaze, revealing a significant interaction effect between appearance and behavior.
Abstract: This paper presents an experiment designed to investigate the impact of avatar realism on the perceived quality of communication in an immersive virtual environment. Participants were paired by gender and were randomly assigned to a CAVE-like system or a head-mounted display. Both were represented by a humanoid avatar in the shared 3D environment. The visual appearance of the avatars was either basic and genderless (like a "match-stick" figure) or more photorealistic and gender-specific. Similarly, eye gaze behavior was either random or inferred from voice, to reflect different levels of behavioral realism. Our comparative analysis of 48 post-experiment questionnaires confirms earlier findings from non-immersive studies using semi-photorealistic avatars, where inferred gaze significantly outperformed random gaze. However, responses to the lower-realism avatar are adversely affected by inferred gaze, revealing a significant interaction effect between appearance and behavior. We discuss the importance of aligning visual and behavioral realism for increased avatar effectiveness.

389 citations


Journal ArticleDOI
TL;DR: This study provides a clear temporal link between gaze and stepping patterns and adds to the understanding of how vision is used to regulate locomotion.
Abstract: Spatial-temporal gaze behaviour patterns were analysed as normal participants wearing a mobile eye tracker were required to step on 17 footprints, regularly or irregularly spaced over a 10-m distance, placed in their travel path. We examined the characteristics of two types of gaze fixation with respect to the participants' stepping patterns: footprint fixation, and travel fixation, in which the gaze is stable and travelling at the speed of the whole body. The results showed that travel gaze fixation is a dominant gaze behaviour, occupying over 50% of the travel time. It is hypothesised that this gaze behaviour would facilitate acquisition of environmental and self-motion information from the optic flow that is generated during locomotion; this in turn would guide movements of the lower limbs to the appropriate landing targets. When participants did fixate on the landing target, they did so on average two steps ahead, about 800–1,000 ms before the limb is placed on the target area. This would allow them sufficient time to successfully modify their gait patterns. None of the gaze behaviours was influenced by the placement (regularly versus irregularly spaced) of the footprints or by repeated exposures to the travel path. Rather, visual information acquired during each trial was used "de novo" to modulate gait patterns. This study provides a clear temporal link between gaze and stepping patterns and adds to our understanding of how vision is used to regulate locomotion.

356 citations


Journal ArticleDOI
TL;DR: This paper presents a system for analyzing human driver visual attention that relies on estimation of global motion and color statistics to robustly track a person's head and facial features and is able to track through yawning, which is a large local mouth motion.
Abstract: This paper presents a system for analyzing human driver visual attention. The system relies on estimation of global motion and color statistics to robustly track a person's head and facial features. The system is fully automatic: it can initialize automatically and reinitialize when necessary. It classifies rotation in all viewing directions, detects eye/mouth occlusion, detects eye blinking and eye closure, and recovers the three-dimensional gaze of the eyes. In addition, the system is able to track through occlusion due to eye blinking, eye closure, large mouth movement, and rotation; even when the face is fully occluded due to rotation, the system does not break down. It is also able to track through yawning, which is a large local mouth motion. Finally, results are presented, and future work on how this system can be used for more advanced driver visual attention monitoring is discussed.

334 citations
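The abstract above names capabilities rather than an algorithm, but color-statistics tracking of the kind it describes can be sketched with OpenCV's histogram back-projection and CamShift. This is a generic stand-in, not the authors' system; the video file name and initial head box are invented for illustration.

```python
import cv2
import numpy as np

# Sketch of color-statistics head tracking in the spirit of the paper.
cap = cv2.VideoCapture("driver.avi")   # hypothetical input video
ok, frame = cap.read()
x, y, w, h = 300, 100, 120, 160        # assumed initial head bounding box
roi = frame[y:y + h, x:x + w]

# Build a hue histogram of the head region (the "color statistics").
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv_roi, (0, 60, 32), (180, 255, 255))
hist = cv2.calcHist([hsv_roi], [0], mask, [180], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

track_window = (x, y, w, h)
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Back-project the stored color statistics onto the new frame and let
    # CamShift re-locate the head, tolerating partial occlusion.
    back_proj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    rot_box, track_window = cv2.CamShift(back_proj, track_window, term_crit)
```

A real system would combine this with the global-motion estimate the paper mentions and with feature detectors for the eyes and mouth; the sketch shows only the color-statistics component.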


Proceedings ArticleDOI
David Beymer, Myron D. Flickner
18 Jun 2003
TL;DR: This work introduces a 3D eye tracking system in which head motion is allowed without the need for markers or worn devices, using a pair of stereo systems: a wide-angle stereo system detects the face and steers an active narrow-FOV stereo system to track the eye at high resolution.
Abstract: In the eye gaze tracking problem, the goal is to determine where on a monitor screen a computer user is looking, i.e., the gaze point. Existing systems generally have one of two limitations: either the head must remain fixed in front of a stationary camera, or, to allow for head motion, the user must wear an obstructive device. We introduce a 3D eye tracking system where head motion is allowed without the need for markers or worn devices. We use a pair of stereo systems: a wide-angle stereo system detects the face and steers an active narrow-FOV stereo system to track the eye at high resolution. For high-resolution tracking, the eye is modeled in 3D, including the corneal ball, pupil and fovea. We discuss the calibration of the stereo systems, the eye model, eye detection and tracking, and we close with an evaluation of the accuracy of the estimated gaze point on the monitor.

291 citations
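The last step described above, estimating the gaze point on the monitor, amounts to intersecting the reconstructed 3D gaze ray with the calibrated screen plane. A minimal sketch, with invented values standing in for the stereo reconstruction and monitor calibration:

```python
import numpy as np

def gaze_point_on_screen(eye_center, gaze_dir, screen_origin, screen_normal):
    """Intersect the 3D gaze ray with the monitor plane.

    eye_center: 3D center of the corneal ball (from stereo reconstruction)
    gaze_dir:   unit vector along the visual axis, pointing out of the eye
    screen_origin, screen_normal: the calibrated monitor plane
    """
    denom = np.dot(gaze_dir, screen_normal)
    if abs(denom) < 1e-9:
        return None  # gaze ray parallel to the screen plane
    t = np.dot(screen_origin - eye_center, screen_normal) / denom
    return eye_center + t * gaze_dir

# Illustrative values only (meters, camera coordinates); a real system
# obtains these from the calibrated stereo pair and the 3D eye model.
d = np.array([0.1, -0.05, -1.0])
p = gaze_point_on_screen(np.array([0.0, 0.0, 0.6]),
                         d / np.linalg.norm(d),
                         np.array([0.0, 0.0, 0.0]),
                         np.array([0.0, 0.0, 1.0]))
```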


Journal ArticleDOI
TL;DR: In nonanxious volunteers the effects of fearful gaze did not differ from neutral gaze, but fearful expression had a more powerful influence in a selected high anxious group.
Abstract: We investigated whether a fearful expression enhances the effect of another's gaze in directing the attention of an observer. Participants viewed photographs of faces whose gaze was directed ahead, to the left or to the right. Target letters then appeared unpredictably to the left or right. As expected, targets in the location indicated by gaze were detected more rapidly. In nonanxious volunteers the effects of fearful gaze did not differ from neutral gaze, but fearful expression had a more powerful influence in a selected high anxious group. Attention is thus more likely to be guided by the direction of fearful than neutral gaze, but only in anxiety-prone individuals.

231 citations


Proceedings ArticleDOI
07 Jul 2003
TL;DR: A design for embodied conversational agents that relies on both verbal and nonverbal signals to establish common ground in human-computer interaction is proposed, and an ECA that uses verbal and nonverbal grounding acts to update dialogue state is presented.
Abstract: We investigate the verbal and nonverbal means for grounding, and propose a design for embodied conversational agents that relies on both kinds of signals to establish common ground in human-computer interaction. We analyzed eye gaze, head nods and attentional focus in the context of a direction-giving task. The distribution of nonverbal behaviors differed depending on the type of dialogue move being grounded, and the overall pattern reflected a monitoring of lack of negative feedback. Based on these results, we present an ECA that uses verbal and nonverbal grounding acts to update dialogue state.

230 citations


Journal ArticleDOI
TL;DR: It is suggested that increases in neural processing in the amygdala facilitate the analysis of gaze cues when a person is actively monitoring for emotional gaze events, whereas increases in brain activations in the superior temporal sulcus support the analysis of gaze cues that provide socially meaningful spatial information.

Proceedings ArticleDOI
05 Apr 2003
TL;DR: Empirical evaluation of whether eye contact perception is affected by automated camera direction, which causes angular shifts in the transmitted images, suggests camera shifts do not affect eye contact perception and are not considered highly distractive.
Abstract: GAZE-2 is a novel group video conferencing system that uses eye-controlled camera direction to ensure parallax-free transmission of eye contact. To convey eye contact, GAZE-2 employs a video tunnel that allows placement of cameras behind participant images on the screen. To avoid parallax, GAZE-2 automatically directs the cameras in this video tunnel using an eye tracker, selecting a single camera closest to where the user is looking for broadcast. Images of users are displayed in a virtual meeting room, and rotated towards the participant each user looks at. This way, eye contact can be conveyed to any number of users with only a single video stream per user. We empirically evaluated whether eye contact perception is affected by automated camera direction, which causes angular shifts in the transmitted images. Findings suggest camera shifts do not affect eye contact perception, and are not considered highly distractive.
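GAZE-2's camera-selection rule, broadcast from the single video-tunnel camera closest to where the user is looking, is simple to sketch. The camera layout and gaze-point interface below are assumptions for illustration, not the system's actual code:

```python
import numpy as np

# Hypothetical on-screen positions (pixels) of the cameras hidden in the
# video tunnel, one behind each participant image.
camera_positions = {"cam_a": (320, 240), "cam_b": (960, 240), "cam_c": (640, 600)}

def select_broadcast_camera(gaze_xy):
    """Return the camera nearest the current gaze point, so the single
    broadcast stream is approximately parallax-free, as in GAZE-2."""
    return min(camera_positions,
               key=lambda c: np.hypot(camera_positions[c][0] - gaze_xy[0],
                                      camera_positions[c][1] - gaze_xy[1]))

# e.g. while the user looks near participant A's image:
active = select_broadcast_camera((350, 260))  # -> "cam_a"
```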

Journal ArticleDOI
TL;DR: Both word frequency and familiarity showed an early but lasting influence on eye fixation durations, and concreteness and AoA effects on eye fixations were demonstrated for the first time.
Abstract: The present experiment investigated the influence of 5 intercorrelated variables on word recognition using a multiple regression analysis. The 5 variables were word frequency, subjective familiarity, word length, concreteness, and age of acquisition (AoA). Target words were embedded in sentences and eye tracking methodology was used to investigate the predictive power of these variables. All 5 variables were found to influence reading time. However, the time course of these variables differed. Both word frequency and familiarity showed an early but lasting influence on eye fixation durations. Word length only significantly predicted fixation durations after refixations on the target words were taken into account. This is the 1st experiment to demonstrate concreteness and AoA effects on eye fixations.
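The analysis is a standard multiple regression of fixation measures on five intercorrelated predictors. A sketch of that style of analysis with statsmodels, on synthetic stand-in data (all variable names and numbers are invented, not the study's materials):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data; the real study regressed eye-fixation durations
# on five intercorrelated lexical variables.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "frequency":    rng.normal(size=n),
    "familiarity":  rng.normal(size=n),
    "length":       rng.normal(size=n),
    "concreteness": rng.normal(size=n),
    "aoa":          rng.normal(size=n),
})
df["gaze_duration"] = (250 - 15 * df.frequency - 10 * df.familiarity
                       + 5 * df.length + rng.normal(scale=20, size=n))

model = smf.ols("gaze_duration ~ frequency + familiarity + length"
                " + concreteness + aoa", data=df).fit()
print(model.summary())  # unique contribution of each correlated predictor
```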

Patent
29 Jul 2003
TL;DR: In this article, a combination of a high-speed eye tracking device, measuring fast translation or saccadic motion of the eye, and an eye position measurement device, determining multiple dimensions of eye position or other components of the eye relative to an ophthalmic diagnostic or treatment instrument, is presented.
Abstract: The present invention relates to improved ophthalmic diagnostic measurement or treatment methods or devices that make use of a combination of a high-speed eye tracking device, measuring fast translation or saccadic motion of the eye, and an eye position measurement device, determining multiple dimensions of eye position or other components of the eye, relative to an ophthalmic diagnostic or treatment instrument.

Journal ArticleDOI
TL;DR: The results establish that human FEFs are critical to visual selection, regardless of the need to generate a saccade command.
Abstract: Recent physiological recording studies in monkeys have suggested that the frontal eye fields (FEFs) are involved in visual scene analysis even when eye movement commands are not required. We examined this proposed function of the human frontal eye fields during performance of visual search tasks in which difficulty was matched and eye movements were not required. Magnetic stimulation over FEF modulated performance on a conjunction search task and on a simple feature search task in which the target was unpredictable from trial to trial, primarily by increasing false alarm responses. Simple feature search with a predictable target was not affected. The results establish that human FEFs are critical to visual selection, regardless of the need to generate a saccade command.

Patent
23 Jul 2003
TL;DR: In this paper, a system for tracking the gaze of an operator includes a head-mounted eye tracking assembly, a head-mounted head tracking assembly, and a processing element; the eye tracking assembly comprises a visor that is capable of being disposed such that at least a portion of the visor is located outside the field of view of the operator.
Abstract: A system for tracking a gaze of an operator includes a head-mounted eye tracking assembly, a head-mounted head tracking assembly and a processing element. The head-mounted eye tracking assembly comprises a visor having an arcuate shape including a concave surface and an opposed convex surface. The visor is capable of being disposed such that at least a portion of the visor is located outside a field of view of the operator. The head-mounted head tracking sensor is capable of repeatedly determining a position of the head to thereby track movement of the head. In this regard, each position of the head is associated with a position of the at least one eye. Thus, the processing element can repeatedly determine the gaze of the operator, based upon each position of the head and the associated position of the eyes, thereby tracking the gaze of the operator.
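The processing element's repeated computation, combining each head position with the associated eye orientation to get the operator's gaze, is a composition of transforms. A minimal sketch under assumed conventions (rotation matrix for head pose, unit gaze vector in the head frame), not the patent's implementation:

```python
import numpy as np

def world_gaze(head_rotation, head_position, eye_dir_in_head):
    """Combine head pose with eye-in-head direction, a sketch of what the
    patent's processing element must do on each update.

    head_rotation:   3x3 rotation matrix, head frame -> world frame
    head_position:   3-vector, gaze-ray origin in world coordinates
    eye_dir_in_head: unit gaze direction from the eye tracking assembly,
                     expressed in the head frame
    Returns (origin, direction) of the gaze ray in world coordinates.
    """
    direction = head_rotation @ eye_dir_in_head
    return head_position, direction / np.linalg.norm(direction)
```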

Proceedings ArticleDOI
13 Oct 2003
TL;DR: The two key contributions are showing that the unique eye gaze direction can be found from a single image of one eye, and that better accuracy can be obtained as a consequence.
Abstract: We present a novel approach, called the "one-circle" algorithm, for measuring the eye gaze using a monocular image that zooms in on only one eye of a person. Observing that the iris contour is a circle, we estimate the normal direction of this iris circle, considered as the eye gaze, from its elliptical image. From basic projective geometry, an ellipse can be back-projected into space onto two circles of different orientations. However, by using an anthropometric property of the eyeball, the correct solution can be disambiguated. This allows us to obtain a higher-resolution image of the iris with a zoom-in camera and thereby achieve higher accuracy in the estimation. The robustness of our gaze determination approach was verified statistically by extensive experiments on synthetic and real image data. The two key contributions are that we show the possibility of finding the unique eye gaze direction from a single image of one eye and that one can obtain better accuracy as a consequence of this.
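The paper solves the full projective back-projection of the iris ellipse; under a weak-perspective simplification the same idea reduces to a few lines, with the two-circle ambiguity the abstract mentions appearing as a sign choice. A sketch of that simplification (the anthropometric disambiguation step is omitted):

```python
import numpy as np

def iris_normal_candidates(major, minor, phi):
    """Weak-perspective sketch of the 'one-circle' idea: a circle viewed at
    slant theta images as an ellipse with axis ratio minor/major = cos(theta).

    major, minor: fitted ellipse semi-axes (pixels)
    phi:          in-image orientation of the major axis (radians)
    Returns the two candidate 3D normals; the paper disambiguates between
    them using an anthropometric property of the eyeball.
    """
    theta = np.arccos(np.clip(minor / major, 0.0, 1.0))  # slant of iris plane
    # The circle is tilted about an axis along the ellipse's major axis;
    # the normal tips to either side of the optical axis by theta.
    axis = np.array([np.cos(phi), np.sin(phi), 0.0])
    z = np.array([0.0, 0.0, 1.0])

    def rotate(k, ang, v):
        # Rodrigues rotation of v about unit axis k by ang
        return (v * np.cos(ang) + np.cross(k, v) * np.sin(ang)
                + k * np.dot(k, v) * (1 - np.cos(ang)))

    return rotate(axis, theta, z), rotate(axis, -theta, z)
```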

Book ChapterDOI
01 Jan 2003
TL;DR: This chapter provides a practical guide for either the software usability engineer who considers the benefits of eye tracking or the eye tracking specialist who considers software usability evaluation as an application.
Abstract: This chapter provides a practical guide for either the software usability engineer who considers the benefits of eye tracking or the eye tracking specialist who considers software usability evaluation as an application. Usability evaluation is defined rather loosely by industry as any of several applied techniques where users interact with a product, system, or service and some behavioral data are collected. Usability goals are often stipulated as criteria, and an attempt is made to use test participants similar to the target-market users. The chapter discusses methodological issues first in usability evaluation and then in the eye-tracking realm. An integrated knowledge of both of these areas is beneficial for the experimenter who conducts eye tracking as part of a usability evaluation. Within each of these areas, major issues are presented in a rhetorical questioning style. By presenting usability evaluation first, the practical use of an eye-tracking methodology is placed into a proper and realistic perspective.

Journal ArticleDOI
TL;DR: It is suggested that evolution results in information-processing biases that shape and constrain the outcome of individual development to eventually result in adult adaptive specializations.

Patent
Stephen Farrell, Shumin Zhai
25 Aug 2003
TL;DR: In this paper, a computer-driven system amplifies a target region based on integrating eye gaze and manual operator input, thus reducing pointing time and operator fatigue, and a gaze tracking apparatus monitors operator eye orientation while the operator views a video screen.
Abstract: A computer-driven system amplifies a target region based on integrating eye gaze and manual operator input, thus reducing pointing time and operator fatigue. A gaze tracking apparatus monitors operator eye orientation while the operator views a video screen. Concurrently, the computer monitors an input indicator for mechanical activation or activity by the operator. According to the operator's eye orientation, the computer calculates the operator's gaze position. Also computed is a gaze area, comprising a sub-region of the video screen that includes the gaze position. The system determines a region of the screen to expand within the current gaze area when mechanical activation of the operator input device is detected. The graphical components contained in this region are expanded, while components immediately outside of it may be contracted and/or translated, in order to preserve visibility of all the graphical components at all times.
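The patent's central behavior, expand components inside the gaze area on manual activation and contract/translate the rest to keep everything visible, can be sketched abstractly. The widget model, radius, and gain below are invented for illustration, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class Widget:
    x: float
    y: float
    scale: float = 1.0

GAZE_RADIUS = 120.0  # assumed gaze-area radius in pixels

def on_manual_activation(widgets, gaze_x, gaze_y, gain=1.6):
    """Sketch of the patent's idea: when the input device is activated,
    expand components within the current gaze area and contract/translate
    the rest so all components stay visible."""
    for w in widgets:
        d = ((w.x - gaze_x) ** 2 + (w.y - gaze_y) ** 2) ** 0.5
        if d <= GAZE_RADIUS:
            w.scale = gain                      # amplify likely targets
        else:
            w.scale = 1.0 / gain                # contract ...
            w.x += (w.x - gaze_x) * (gain - 1)  # ... and push outward to
            w.y += (w.y - gaze_y) * (gain - 1)  # preserve visibility
```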

Book ChapterDOI
01 Jan 2003
TL;DR: This chapter describes the applicability of the eye-tracking method in studying global text processing, where the associated mental processing is more complex and varied than that of lexical processing.
Abstract: This chapter describes the applicability of the eye-tracking method in studying global text processing. Eye tracking is used to study basic reading processes and syntactic parsing, but there are few studies where eye tracking is employed to examine global text processing. As one moves from the study of lexical processing to syntactic processing, the potential units of analysis increase in both number and size. There are four relevant units of analysis in the study of syntactic processing: (1) the word at which a parsing choice is expected to be made or a syntactic ambiguity to reveal itself, (2) the phrase, (3) the clause, and (4) the whole sentence. Related to the increase in the number and size of potentially interesting units of analysis, the mental processing associated with syntactic processes is more complex and varied than the mental processing associated with lexical processing. Thus, syntactic effects on eye movements are correspondingly more complex than lexical effects on eye movements.

Journal ArticleDOI
TL;DR: In this article, a forced-choice face recognition task was conducted in which the direction of eye gaze was manipulated over the course of the initial presentation and subsequent test phase of the experiment, and the results revealed that the encoding advantage enjoyed by faces with direct gaze was present for both children and adults.
Abstract: Children and adults were tested on a forced-choice face recognition task in which the direction of eye gaze was manipulated over the course of the initial presentation and subsequent test phase of the experiment. To establish the effects of gaze direction on the encoding process, participants were presented with to-be-studied faces displaying either direct or deviated gaze (i.e. encoding manipulation). At test, all the faces depicted persons with their eyes closed. To investigate the effects of gaze direction on the efficiency of the retrieval process, a second condition (i.e. retrieval manipulation) was run in which target faces were presented initially with eyes closed and tested with either direct or deviated gaze. The results revealed that the encoding advantage enjoyed by faces with direct gaze was present for both children and adults. Faces with direct gaze were also recognized better than faces with deviated gaze at retrieval, although this effect was most pronounced for adults. Finally, the advantage for direct gaze over deviated gaze at encoding was greater than the advantage for direct gaze over deviated gaze at retrieval. We consider the theoretical implications of these findings.

Patent
23 Jan 2003
TL;DR: In this paper, an eye-tracking system for displaying a video screen pointer at the point of regard of a user's gaze is presented, consisting of a camera focused on the user's eye, a support connected to the camera for fixing the relative position of the camera to the pupil, and computer instructions for segmenting the digital pixel data of the image of the eye into black and white sections based upon user-selectable RGB threshold settings.
Abstract: An eye-tracking system for displaying a video screen pointer at the point of regard of a user's gaze. The system comprises a camera focused on the user's eye; a support connected to the camera for fixing the relative position of the camera to the user's pupil; and a computer having a CPU, memory, video display screen, an eye-tracking interface, and computer instructions for: segmenting the digital pixel data of the image of the eye into black and white sections based upon user-selectable RGB threshold settings; determining the center of the eye based upon the segmented digital data; mapping the determined center of the eye to a pair of coordinates on the video screen; and displaying a pointer on the video display screen at the point of regard. The processing performed by the computer includes a fine-tuning capability for positioning the cursor at a point on the video screen substantially overlapping the point of regard, and a gaze-activated method for selecting computer actions. The system includes additional user-mounted sensors for determining the axial position of the camera, thereby compensating for inadvertent eye movement when the point of regard has not changed.
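The claimed pipeline, threshold the eye image into black and white sections, take the center of the dark (pupil) region, and map it to screen coordinates, is straightforward to sketch. The affine screen mapping below is an assumption; the patent does not specify the mapping's form:

```python
import numpy as np

def pupil_center(rgb_image, threshold=(60, 60, 60)):
    """Segment pixels darker than a user-selectable RGB threshold (the
    pupil) and return the centroid of that region, as in the claim."""
    dark = np.all(rgb_image < np.array(threshold), axis=-1)
    ys, xs = np.nonzero(dark)
    if xs.size == 0:
        return None  # no pupil-dark pixels found at this threshold
    return np.array([xs.mean(), ys.mean()])

def to_screen(center, a, b):
    """Map eye-image coordinates to screen coordinates. A 2D affine map
    (a @ center + b), fit during calibration, is assumed here."""
    return a @ np.asarray(center) + b

# Illustrative calibration: gain matrix and offset fit from known targets.
# screen_xy = to_screen(pupil_center(frame),
#                       np.array([[8.0, 0.0], [0.0, 8.0]]),
#                       np.array([640.0, 360.0]))
```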

Patent
08 Jan 2003
TL;DR: In this article, an apparatus for eye tracking is presented, including an illuminator; reflection apparatus to reflect illumination from the illuminator onto a surface of a vehicle's windshield such that it is reflected onto at least an eye of a person in the vehicle, and to reflect an image of the at least an eye; and a sensor to receive the reflected image and to produce an output signal representative thereof.
Abstract: In a preferred embodiment, an apparatus for eye tracking, including: an illuminator; reflection apparatus to reflect illumination from the illuminator onto a surface of a windshield of a vehicle in which the windshield is installed, such that the illumination is reflected onto at least an eye of a person in the vehicle, and to reflect an image of the at least an eye, and a sensor to receive a reflection of the image of the at least an eye reflected by the reflection apparatus and to produce an output signal representative thereof.

Patent
02 Jun 2003
Abstract: An object awareness determination system and method of determining awareness of a driver of a vehicle to an object is provided. The system includes an object monitor including an object detection sensor for sensing an object in a field of view and determining a position of the object. The system also includes an eye gaze monitor including an imaging camera oriented to capture images of the vehicle driver including an eye of the driver. The gaze monitor determines an eye gaze vector. The system further has a controller for determining driver awareness of the object based on the detected object position and the eye gaze vector.
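The controller's awareness test, comparing the detected object position against the driver's eye gaze vector, can be sketched as an angular-tolerance check. The cone width is an invented parameter, not from the patent:

```python
import numpy as np

def driver_aware_of(obj_pos, eye_pos, gaze_vec, cone_deg=10.0):
    """Sketch of the controller's decision: credit the driver with
    awareness of a detected object if the gaze vector points within an
    assumed angular tolerance of the object's position."""
    to_obj = obj_pos - eye_pos
    cos_angle = np.dot(to_obj, gaze_vec) / (
        np.linalg.norm(to_obj) * np.linalg.norm(gaze_vec))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))) <= cone_deg
```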

Journal ArticleDOI
TL;DR: In this paper, the authors investigate the processes involved in a common graph-reading task using two types of Cartesian graph and show that the optimal scan paths assumed in the task analysis approximate the detailed sequences of saccades made by individuals.
Abstract: We report an investigation into the processes involved in a common graph-reading task using two types of Cartesian graph. We describe an experiment and eye movement study, the results of which show that optimal scan paths assumed in the task analysis approximate the detailed sequences of saccades made by individuals. The research demonstrates the computational inequivalence of two sets of informationally equivalent graphs and illustrates how the computational advantages of a representation outweigh factors such as user unfamiliarity. We describe two models, using the ACT rational perceptual motor (ACT-R/PM) cognitive architecture, that replicate the pattern of observed response latencies and the complex scan paths revealed by the eye movement study. Finally, we outline three guidelines for designers of visual displays: Designers should (a) consider how different quantities are encoded within any chosen representational format, (b) consider the full range of alternative varieties of a given task, and (c) balance the cost of familiarization with the computational advantages of less familiar representations. Actual or potential applications of this research include informing the design and selection of appropriate visual displays and illustrating the practice and utility of task analysis, eye tracking, and cognitive modeling for understanding interactive tasks with external representations.

Journal ArticleDOI
TL;DR: Investigating whether children with high-functioning autism have difficulty in detecting mutual gaze under experimental conditions revealed that children with autism were no better at detecting direct gaze than at detecting averted gaze, which suggests that whereas typically developing children have the ability to detect direct gaze, children with autism do not.

01 Mar 2003
TL;DR: This paper discusses the limits of traditional usability testing, shows how tracking the eye gaze can fill this gap, and gives a short introduction to the eye-tracking method in applied marketing research and its benefits.
Abstract: Over the last ten years the Internet has become an incredibly important medium in the everyday life of ordinary people. By now the World Wide Web is not a foreign concept anymore; millions of people make use of the Internet for e-mail, online banking, online shopping, etc. Companies have realised the necessity of user-friendly interfaces and software which even an inexperienced user is able to handle. Since interaction with the Internet is becoming ubiquitous, assessing the usability of interfaces is a fundamental and necessary part of HCI development. Observing the overt behaviour of users (which button a user clicks on, how the mouse is used), retrospective self-report, questionnaires, and thinking-aloud methods are just examples of traditional and quite successful strategies for investigating usability problems. So why should there be a need for a new method when conventional methods seem sufficient for optimising usability, and what is its contribution to evaluating how usable a particular design is? We give a short introduction to the eye-tracking method in applied marketing research and its benefits. Reporting two of our studies, we discuss the limits of traditional usability testing and show how tracking the eye gaze can fill this gap.

Journal ArticleDOI
TL;DR: Although it is acknowledged that E-Z Reader is incomplete, it is maintained that it provides a good framework for systematically trying to understand how the cognitive, perceptual, and motor systems influence the eyes during reading.
Abstract: The issues the commentators have raised, and which we address, include: the debate over how attention is allocated during reading; our distinction between early and late stages of lexical processing; our assumptions about saccadic programming; the determinants of skipping and refixations; and the role that higher-level linguistic processing may play in influencing eye movements during reading. In addition, we provide a discussion of model development and principles for evaluating and comparing models. Although we acknowledge that E-Z Reader is incomplete, we maintain that it provides a good framework for systematically trying to understand how the cognitive, perceptual, and motor systems influence the eyes during reading.

Proceedings Article
09 Dec 2003
TL;DR: A new model of human eye movements that directly ties eye movements to the ongoing demands of behavior is introduced, and simulations show the protocol is superior to a simple round-robin scheduling mechanism.
Abstract: Recent eye tracking studies in natural tasks suggest that there is a tight link between eye movements and goal directed motor actions. However, most existing models of human eye movements provide a bottom up account that relates visual attention to attributes of the visual scene. The purpose of this paper is to introduce a new model of human eye movements that directly ties eye movements to the ongoing demands of behavior. The basic idea is that eye movements serve to reduce uncertainty about environmental variables that are task relevant. A value is assigned to an eye movement by estimating the expected cost of the uncertainty that will result if the movement is not made. If there are several candidate eye movements, the one with the highest expected value is chosen. The model is illustrated using a humanoid graphic figure that navigates on a sidewalk in a virtual urban environment. Simulations show our protocol is superior to a simple round robin scheduling mechanism.
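The scheduling idea, fixate the task whose unresolved uncertainty carries the greatest expected cost, can be sketched against the round-robin baseline the abstract mentions. Task names, growth rates, and costs below are invented for illustration, not the paper's simulation:

```python
# Sketch of the paper's scheduling idea: each behavior's state uncertainty
# grows while it is not fixated; an eye movement is "worth" the expected
# cost it removes, and the highest-value movement wins.
tasks = {"avoid_obstacle": {"uncertainty": 0.0, "growth": 0.30, "cost": 5.0},
         "stay_on_path":   {"uncertainty": 0.0, "growth": 0.20, "cost": 2.0},
         "collect_litter": {"uncertainty": 0.0, "growth": 0.10, "cost": 1.0}}

def step_uncertainty_driven():
    """Choose the fixation with the highest expected cost of NOT looking."""
    target = max(tasks, key=lambda t: tasks[t]["uncertainty"] * tasks[t]["cost"])
    for name, t in tasks.items():
        if name == target:
            t["uncertainty"] = 0.0           # fixation resolves this variable
        else:
            t["uncertainty"] += t["growth"]  # unattended variables drift
    return target

order = [step_uncertainty_driven() for _ in range(8)]
# Unlike round-robin, costly tasks ("avoid_obstacle") are fixated more often.
```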