Showing papers on "Eye tracking" published in 2004


Journal ArticleDOI
TL;DR: Support Vector Tracking integrates the Support Vector Machine (SVM) classifier into an optic-flow-based tracker, maximizing the SVM classification score rather than minimizing an intensity difference; pyramids built from the support vectors handle large motions between successive frames.
Abstract: Support Vector Tracking (SVT) integrates the Support Vector Machine (SVM) classifier into an optic-flow-based tracker. Instead of minimizing an intensity difference function between successive frames, SVT maximizes the SVM classification score. To account for large motions between successive frames, we build pyramids from the support vectors and use a coarse-to-fine approach in the classification stage. We show results of using SVT for vehicle tracking in image sequences.

1,131 citations
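To make the coarse-to-fine search concrete, here is a minimal Python sketch. It is not the authors' code: the exhaustive local search, the fixed patch size, and all helper names are assumptions (the paper instead ascends the score with a gradient-style update on pyramids built from the support vectors).

```python
import numpy as np

def svm_score(patch, w, b):
    """Linear SVM decision value for a flattened image patch."""
    return float(w @ patch.ravel() + b)

def downsample(img):
    """Halve resolution by 2x2 averaging (one pyramid level)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def svt_track(frame, prev_pos, w, b, patch=16, levels=3, radius=2):
    """Coarse-to-fine search for the shift that maximizes the SVM score."""
    pyramid = [frame.astype(float)]
    for _ in range(levels - 1):
        pyramid.append(downsample(pyramid[-1]))

    pos = np.array(prev_pos, dtype=float) / 2 ** (levels - 1)
    for level in reversed(range(levels)):
        img = pyramid[level]
        best_score, best_shift = -np.inf, (0, 0)
        for dr in range(-radius, radius + 1):      # tiny exhaustive search;
            for dc in range(-radius, radius + 1):  # stands in for gradient ascent
                r, c = int(pos[0]) + dr, int(pos[1]) + dc
                if 0 <= r <= img.shape[0] - patch and 0 <= c <= img.shape[1] - patch:
                    s = svm_score(img[r:r + patch, c:c + patch], w, b)
                    if s > best_score:
                        best_score, best_shift = s, (dr, dc)
        pos += best_shift
        if level > 0:
            pos *= 2.0  # propagate the coarse estimate to the finer level
    return tuple(int(v) for v in pos)
```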


Journal ArticleDOI
TL;DR: This work proposes an approach that incorporates appearance-adaptive models in a particle filter to realize robust visual tracking and recognition algorithms and demonstrates the effectiveness and robustness of the tracking algorithm.
Abstract: We present an approach that incorporates appearance-adaptive models in a particle filter to realize robust visual tracking and recognition algorithms. Tracking needs modeling interframe motion and appearance changes, whereas recognition needs modeling appearance changes between frames and gallery images. In conventional tracking algorithms, the appearance model is either fixed or rapidly changing, and the motion model is simply a random walk with fixed noise variance. Also, the number of particles is typically fixed. All these factors make the visual tracker unstable. To stabilize the tracker, we propose the following modifications: an observation model arising from an adaptive appearance model, an adaptive velocity motion model with adaptive noise variance, and an adaptive number of particles. The adaptive-velocity model is derived using a first-order linear predictor based on the appearance difference between the incoming observation and the previous particle configuration. Occlusion analysis is implemented using robust statistics. Experimental results on tracking visual objects in long outdoor and indoor video sequences demonstrate the effectiveness and robustness of our tracking algorithm. We then perform simultaneous tracking and recognition by embedding them in a particle filter. For recognition purposes, we model the appearance changes between frames and gallery images by constructing the intra- and extrapersonal spaces. Accurate recognition is achieved when confronted by pose and view variations.

742 citations
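The adaptive machinery is easy to see in a toy sketch. The following is my own simplification, not the authors' algorithm: in particular, the velocity here is the difference of successive state estimates rather than their appearance-driven first-order linear predictor, and the noise and particle-count schedules are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def adaptive_pf_step(particles, weights, prev_estimate, likelihood):
    """One particle-filter step with adaptive velocity, adaptive noise
    variance, and an adaptive number of particles.

    particles  : (n, d) state hypotheses from the previous frame
    weights    : (n,) normalized importance weights
    likelihood : function mapping a state vector to an observation likelihood
    """
    estimate = np.average(particles, axis=0, weights=weights)
    velocity = estimate - prev_estimate       # crude first-order prediction

    fit = float(weights.max())                # quality of the best hypothesis
    noise_std = 1.0 + 4.0 * (1.0 - fit)       # diffuse more when the fit is poor
    n_new = int(100 + 400 * (1.0 - fit))      # spend more particles when unsure

    idx = rng.choice(len(particles), size=n_new, p=weights)
    particles = (particles[idx] + velocity
                 + rng.normal(0.0, noise_std, (n_new, particles.shape[1])))
    weights = np.array([likelihood(p) for p in particles])
    return particles, weights / weights.sum(), estimate
```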


Proceedings ArticleDOI
25 Jul 2004
TL;DR: This work investigates how users interact with the results page of a WWW search engine using eye-tracking to gain insight into how users browse the presented abstracts and how they select links for further exploration.
Abstract: We investigate how users interact with the results page of a WWW search engine using eye-tracking. The goal is to gain insight into how users browse the presented abstracts and how they select links for further exploration. Such understanding is valuable for improved interface design, as well as for more accurate interpretations of implicit feedback (e.g. clickthrough) for machine learning. The following presents initial results, focusing on the amount of time spent viewing the presented abstracts, the total number of abstracts viewed, and measures of how thoroughly searchers evaluate their result set.

738 citations


Proceedings ArticleDOI
27 Jun 2004
TL;DR: A method is presented for robust tracking in highly cluttered environments that makes effective use of 3D depth sensing technology, resulting in illumination-invariant tracking.
Abstract: A method is presented for robust tracking in highly cluttered environments. The method makes effective use of 3D depth sensing technology, resulting in illumination-invariant tracking. A few applications of the tracking are presented, including face tracking and hand tracking.

507 citations


Book
18 Jun 2004
TL;DR: This edited volume surveys the use of eye movements to study the integration of language, vision, and action, with chapters spanning reading, visual search, sentence production, and spoken language comprehension.
Abstract: F. Ferreira, J.M. Henderson, Introduction to the Integration of Language, Vision and Action. J.M. Henderson, F. Ferreira, Scene Perception for Psycholinguists. K. Rayner, S.P. Liversedge, Visual and Linguistic Processing During Eye Fixations in Reading. D.E. Irwin, Fixation Location and Fixation Duration as Indices of Cognitive Processing. J.M. Findlay, Eye Scanning and Visual Search. M.J. Spivey, D.C. Richardson, S.A. Fitneva, Thinking Outside the Brain: Spatial Indices to Visual and Linguistic Information. A.S. Meyer, F. Lethaus, The Use of Eye Tracking in Studies of Sentence Generation. Z.M. Griffin, Why Look? Reasons for Eye Movements Related to Language Production. K. Bock, D.E. Irwin, D.J. Davidson, Putting First Things First. M.K. Tanenhaus, C.G. Chambers, J.E. Hanna, Referential Domains in Spoken Language Comprehension: Using Eye Movements to Bridge the Product and Action Traditions. J. Trueswell, L. Gleitman, Children's Eye Movements During Listening: Developmental Evidence for a Constraint-Based Theory of Sentence Processing. G.T.M. Altmann, Y. Kamide, Now You See It, Now You Don't: Mediating the Mapping between Language and the Visual World.

506 citations


Journal ArticleDOI
01 Feb 2004
TL;DR: A novel approach to three-dimensional (3-D) gaze tracking using 3-D computer vision techniques is proposed that eliminates the inconvenient, error-prone system calibration process.
Abstract: A novel approach to three-dimensional (3-D) gaze tracking using 3-D computer vision techniques is proposed in this paper. This method employs multiple cameras and multiple point light sources to estimate the optical axis of the user's eye without using any user-dependent parameters. Thus, it renders the inconvenient system calibration process, which may introduce calibration errors, unnecessary. A real-time 3-D gaze tracking system has been developed which can provide 30 gaze measurements per second. Moreover, a simple and accurate calibration method is proposed to calibrate the gaze tracking system. Before using the system, each user only has to stare at a target point for a few (2-3) seconds so that the constant angle between the 3-D line of sight and the optical axis can be estimated. The test results of six subjects showed that the gaze tracking system is very promising, achieving an average estimation error of under 1°.

299 citations
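The personal calibration step reduces to estimating one constant offset angle between the optical and visual axes. A minimal sketch with hypothetical geometry (the real system averages this estimate over the 2-3 s fixation):

```python
import numpy as np

def angle_between(u, v):
    """Angle in degrees between two 3-D direction vectors."""
    u, v = u / np.linalg.norm(u), v / np.linalg.norm(v)
    return np.degrees(np.arccos(np.clip(u @ v, -1.0, 1.0)))

# Hypothetical values: the optical axis estimated by the multi-camera
# system while the user fixates a known calibration target.
eye_center   = np.array([0.0, 0.0, 0.0])     # eyeball center (cm)
optical_axis = np.array([0.05, -0.02, 1.0])  # estimated per frame
target       = np.array([2.0, 1.0, 60.0])    # calibration point

# The visual axis (line of sight) points from the eye to the fixated
# target; its fixed offset from the optical axis is stored and reused.
visual_axis = target - eye_center
kappa = angle_between(optical_axis, visual_axis)
print(f"kappa = {kappa:.2f} deg")
```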


Journal ArticleDOI
01 Jan 2004-Infancy
TL;DR: This article found that newborns were faster to make saccades to peripheral targets cued by the direction of eye movement of a central schematic face, but only when the motion of the pupils was visible.
Abstract: Eye gaze has been shown to be an effective cue for directing attention in adults. Whether this ability operates from birth is unknown. Three experiments were carried out with 2- to 5-day-old newborns. The first experiment replicated the previous finding that newborns are able to discriminate between direct and averted gaze, and extended this finding from real to schematic faces. In Experiments 2 and 3 newborns were faster to make saccades to peripheral targets cued by the direction of eye movement of a central schematic face, but only when the motion of the pupils was visible. These results suggest that newborns may show a rudimentary form of gaze following.

297 citations


Journal ArticleDOI
01 Jul 2004
TL;DR: This paper describes a computer vision system based on active IR illumination for real-time gaze tracking in interactive graphic displays; it performs robust and accurate gaze estimation without per-user calibration and under fairly significant head movement.
Abstract: This paper describes a computer vision system based on active IR illumination for real-time gaze tracking for interactive graphic display. Unlike most of the existing gaze tracking techniques, which often require assuming a static head to work well and require a cumbersome calibration process for each person, our gaze tracker can perform robust and accurate gaze estimation without calibration and under rather significant head movement. This is made possible by a new gaze calibration procedure that identifies the mapping from pupil parameters to screen coordinates using generalized regression neural networks (GRNNs). With GRNNs, the mapping does not have to be an analytical function and head movement is explicitly accounted for by the gaze mapping function. Furthermore, the mapping function can generalize to other individuals not used in the training. To further improve the gaze estimation accuracy, we employ a hierarchical classification scheme that deals with the classes that tend to be misclassified. This leads to a 10% improvement in classification error. The angular gaze accuracy is about 5° horizontally and 8° vertically. The effectiveness of our gaze tracker is demonstrated by experiments that involve gaze-contingent interactive graphic display.

273 citations
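A GRNN is, at its core, Gaussian-kernel regression over stored calibration samples, which is why the pupil-to-screen mapping need not be an analytical function. A minimal sketch; the three-element feature vector and the calibration data are hypothetical stand-ins for the richer pupil-parameter vector the paper uses.

```python
import numpy as np

def grnn_predict(x, X_train, Y_train, sigma=0.5):
    """Generalized regression neural network (Specht): a Gaussian-kernel
    weighted average of the stored training targets."""
    d2 = np.sum((X_train - x) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ Y_train) / w.sum()

# Hypothetical calibration set: pupil-glint vector plus an ellipse-shape
# cue standing in for head pose, mapped to screen pixels.
X = np.array([[0.1, 0.2, 0.9], [0.4, 0.1, 0.8],
              [0.2, 0.5, 1.0], [0.5, 0.4, 0.7]])
Y = np.array([[100, 200], [400, 180], [150, 500], [430, 460]])

print(grnn_predict(np.array([0.3, 0.3, 0.85]), X, Y))
```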


Journal ArticleDOI
TL;DR: It is indicated that eye gaze attracted attention more effectively than the arrow in typically developed children, while children with autism shifted their attention equally in response to eye gaze and arrow direction, failing to show preferential sensitivity to the social cue.
Abstract: Background: This study investigated whether another person's social attention, specifically the direction of their eye gaze, and a non-social directional cue, an arrow, triggered reflexive orienting in children with and without autism in an experimental situation. Methods: Children with autism and typically developed children participated in one of two experiments. Both experiments involved the localization of a target that appeared to the left or right of the fixation point. Before the target appeared, the participant's attention was cued to the left or right by either an arrow or the direction of eye gaze on a computerized face. Results: Children with autism were slower to respond, which suggests a slight difference in the general cognitive ability of the groups. In Experiment 1, although the participants were instructed to disregard the cue and the target was correctly cued in only 50% of the trials, both groups of children responded significantly faster to cued targets than to uncued targets, regardless of the cue. In Experiment 2, children were instructed to attend to the direction opposite that of the cues and the target was correctly cued in only 20% of the trials. Typically developed children located targets cued by eye gaze more quickly, while the arrow cue did not trigger such reflexive orienting in these children. However, both social and non-social cues shifted attention to the cued location in children with autism. Conclusion: These results indicate that eye gaze attracted attention more effectively than the arrow in typically developed children, while children with autism shifted their attention equally in response to eye gaze and arrow direction, failing to show preferential sensitivity to the social cue. Difficulty in shifting controlled attention to the instructed side was also found in children with autism. Keywords: Autism, eye gaze, joint attention, reflexive orienting, arrow.

246 citations


Proceedings ArticleDOI
22 Mar 2004
TL;DR: The results from eye gaze analysis showed that novices needed more visual feedback of the tool position to complete the task than did experts; the experts tended to maintain eye gaze on the target while manipulating the tool, whereas novices were more varied in their behaviours.
Abstract: Visual information is important in surgeons' manipulative performance especially in laparoscopic surgery where tactual feedback is less than in open surgery. The study of surgeons' eye movements is an innovative way of assessing skill, in that a comparison of the eye movement strategies between expert surgeons and novices may show important differences that could be used in training. We conducted a preliminary study comparing the eye movements of 5 experts and 5 novices performing a one-handed aiming task on a computer-based laparoscopic surgery simulator. The performance results showed that experts were quicker and generally committed fewer errors than novices. We investigated eye movements as a possible factor for experts performing better than novices. The results from eye gaze analysis showed that novices needed more visual feedback of the tool position to complete the task than did experts. In addition, the experts tended to maintain eye gaze on the target while manipulating the tool, whereas novices were more varied in their behaviours. For example, we found that on some trials, novices tracked the movement of the tool until it reached the target.

240 citations


Patent
19 May 2004
TL;DR: A system for eye tracking is presented that determines the line of sight of a user from the relative position between the center of the pupil and a reference point.
Abstract: A system for eye tracking that determines the line of sight of a user according to the relative position between the center of the pupil and a reference point, the system including an image detector that captures an image of the eye, a pupil-illuminating light source that illuminates the pupil of the user, a reference light source that illuminates a different portion of the face of the user as a reference point and an imaging processor that analyzes the captured eye image to determine the line of sight.
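A minimal reading of the claim in code. Every number below, and the linear gain itself, is hypothetical; the point is only to show how a line of sight can be derived from the pupil-to-reference vector.

```python
import numpy as np

pupil_center = np.array([312.0, 240.0])  # px, from the image detector
reference    = np.array([300.0, 250.0])  # px, illuminated facial landmark
gain         = np.array([0.9, 0.9])      # deg/px, assumed from calibration

# Line of sight varies with the relative position of pupil and reference.
line_of_sight_deg = gain * (pupil_center - reference)
print(line_of_sight_deg)                 # -> [10.8, -9.0] degrees
```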


Patent
15 Nov 2004
TL;DR: A system and method are presented for eye gaze tracking in human or animal subjects without calibration of cameras, specific measurements of eye geometries, or the tracking of a cursor image on a screen by the subject through a known trajectory.

Abstract: A system and method for eye gaze tracking in human or animal subjects without calibration of cameras, specific measurements of eye geometries or the tracking of a cursor image on a screen by the subject through a known trajectory. The preferred embodiment includes one uncalibrated camera for acquiring video images of the subject's eye(s) and optionally having an on-axis illuminator, and a surface, object, or visual scene with embedded off-axis illuminator markers. The off-axis markers are reflected on the corneal surface of the subject's eyes as glints. The glints indicate the distance between the point of gaze in the surface, object, or visual scene and the corresponding marker on the surface, object, or visual scene. The marker that causes a glint to appear in the center of the subject's pupil is determined to be located on the line of regard of the subject's eye, and to intersect with the point of gaze. Point of gaze on the surface, object, or visual scene is calculated as follows. First, by determining which marker glints, as provided by the corneal reflections of the markers, are closest to the center of the pupil in either or both of the subject's eyes. This subset of glints forms a region of interest (ROI). Second, by determining the gaze vector (relative angular or Cartesian distance to the pupil center) for each of the glints in the ROI. Third, by relating each glint in the ROI to the location or identification (ID) of a corresponding marker on the surface, object, or visual scene observed by the eyes. Fourth, by interpolating the known locations of each of these markers on the surface, object, or visual scene, according to the relative angular distance of their corresponding glints to the pupil center.
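The four-step calculation sketches compactly. The inverse-distance weighting and the helper names below are my assumptions; the patent specifies only that marker locations are interpolated according to the relative distances of their glints to the pupil center.

```python
import numpy as np

def point_of_gaze(pupil_center, glints, marker_xy, k=3):
    """Interpolate the point of gaze from corneal glints.

    pupil_center : (2,) image coordinates of the pupil center
    glints       : (n, 2) image coordinates of the marker glints
    marker_xy    : (n, 2) known scene locations of the matching markers
    """
    # Step 1: the ROI is the k glints nearest the pupil center.
    d = np.linalg.norm(glints - pupil_center, axis=1)
    roi = np.argsort(d)[:k]

    # Steps 2-3: gaze vectors double as weights; a glint centered in the
    # pupil means its marker lies on the line of regard.
    w = 1.0 / (d[roi] + 1e-9)

    # Step 4: interpolate the known marker locations by those weights.
    return (w @ marker_xy[roi]) / w.sum()
```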

Proceedings ArticleDOI
21 Mar 2004
TL;DR: A vision-based real-time driver fatigue detection system is proposed for driving safety: skin color is used to locate the driver's face in color images captured in the car, and edge detection then locates the eye regions.
Abstract: A vision-based real-time driver fatigue detection system is proposed for driving safety. The driver's face is located, from color images captured in a car, by using the characteristics of skin colors. Then, edge detection is used to locate the regions of the eyes. In addition to being used as the dynamic templates for eye tracking in the next frame, the obtained eye images are also used for fatigue detection in order to generate warning alarms for driving safety. The system was tested on a Pentium III 550 CPU with 128 MB RAM. The experimental results seem quite encouraging and promising. The system can reach 20 frames per second for eye tracking, and the average correct rate for eye location and tracking reaches 99.1% on four test videos. The correct rate for fatigue detection is 100%, but the average precision rate is 88.9% on the test videos.
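A rough OpenCV sketch of the detect-then-track loop described above. The YCrCb skin bounds are commonly used values rather than the paper's, and the whole function is illustrative, not the authors' implementation.

```python
import cv2

def track_eye(frame_bgr, eye_template):
    """One step: gate the frame by a skin-color mask, then match the
    previous frame's eye patch as a dynamic template."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.bitwise_and(gray, gray, mask=skin)

    # Best match of last frame's eye patch becomes this frame's location.
    res = cv2.matchTemplate(gray, eye_template, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(res)
    h, w = eye_template.shape
    new_template = gray[top_left[1]:top_left[1] + h,
                        top_left[0]:top_left[0] + w]
    return top_left, score, new_template  # feed new_template to the next frame
```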

Proceedings ArticleDOI
22 Mar 2004
TL;DR: An automatic data-driven method is presented that clusters visual point-of-regard (POR) measurements into gazes and regions-of-interest using the mean shift procedure, forming a structured representation of viewer interest.
Abstract: Characterizing the location and extent of a viewer's interest, in terms of eye movement recordings, informs a range of investigations in image and scene viewing. We present an automatic data-driven method for accomplishing this, which clusters visual point-of-regard (POR) measurements into gazes and regions-of-interest using the mean shift procedure. Clusters produced using this method form a structured representation of viewer interest, and at the same time are replicable and not heavily influenced by noise or outliers. Thus, they are useful in answering fine-grained questions about where and how a viewer examined an image.
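For reference, the mean shift procedure on raw 2-D POR samples looks like this; the Gaussian kernel, the bandwidth, and the mode-merging rule are conventional choices, not necessarily the paper's.

```python
import numpy as np

def mean_shift_por(points, bandwidth=30.0, iters=50, tol=1e-3):
    """Cluster 2-D point-of-regard samples: every sample climbs the kernel
    density estimate to a mode; samples sharing a mode form one ROI."""
    modes = points.astype(float).copy()
    for _ in range(iters):
        shifted = np.empty_like(modes)
        for i, m in enumerate(modes):
            w = np.exp(-np.sum((points - m) ** 2, axis=1)
                       / (2.0 * bandwidth ** 2))
            shifted[i] = (w @ points) / w.sum()
        done = np.abs(shifted - modes).max() < tol
        modes = shifted
        if done:
            break
    # Merge modes closer than the bandwidth into a single cluster label.
    labels, centers = np.full(len(points), -1), []
    for i, m in enumerate(modes):
        for j, c in enumerate(centers):
            if np.linalg.norm(m - c) < bandwidth:
                labels[i] = j
                break
        else:
            centers.append(m)
            labels[i] = len(centers) - 1
    return labels, np.array(centers)
```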

Proceedings ArticleDOI
22 Mar 2004
TL;DR: A gaze tracking system is described that offers free head movement and simple personal calibration; the user need not wear anything on her head and can move it freely.
Abstract: Human eye gaze is a strong candidate for creating new forms of human-computer interaction. To implement a really practical gaze-based interaction system, gaze detection must be realized without placing any restriction on the user's behavior or comfort. This paper describes a gaze tracking system that offers free head movement and simple personal calibration. It does not require the user to wear anything on her head, and she can move her head freely. Personal calibration takes only a very short time; the user is asked to look at two markers on the screen. An experiment shows that the accuracy of the implemented system is about 1.0 degree (view angle).
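With only two markers, the simplest model one can fit is an independent gain and offset per axis; the sketch below uses that deliberate simplification (the actual system calibrates person-specific eyeball parameters rather than a 2-D linear map).

```python
import numpy as np

def two_point_calibration(raw, screen):
    """Solve gain/offset per axis from two fixation markers.

    raw    : (2, 2) raw gaze estimates while fixating the two markers
    screen : (2, 2) the markers' true screen coordinates
    """
    gain = (screen[1] - screen[0]) / (raw[1] - raw[0])
    offset = screen[0] - gain * raw[0]
    return lambda g: gain * g + offset

cal = two_point_calibration(np.array([[0.2, 0.3], [0.7, 0.8]]),
                            np.array([[100, 100], [900, 700]]))
print(cal(np.array([0.45, 0.55])))  # -> [500. 400.], mid-screen
```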

Journal ArticleDOI
TL;DR: It is concluded that an image-based mechanism is responsible for the influence of head profile on gaze perception, whereas the analysis of nose angle involves the configural processing of face features.
Abstract: We report seven experiments that investigate the influence that head orientation exerts on the perception of eye-gaze direction. In each of these experiments, participants were asked to decide whether the eyes in a brief and masked presentation were looking directly at them or were averted. In each case, the eyes could be presented alone, or in the context of congruent or incongruent stimuli. In Experiment 1A, the congruent and incongruent stimuli were provided by the orientation of face features and head outline. Discrimination of gaze direction was found to be better when face and gaze were congruent than in both of the other conditions, an effect that was not eliminated by inversion of the stimuli (Experiment 1B). In Experiment 2A, the internal face features were removed, but the outline of the head profile was found to produce an identical pattern of effects on gaze discrimination, effects that were again insensitive to inversion (Experiment 2B) and which persisted when lateral displacement of the eyes was controlled (Experiment 2C). Finally, in Experiment 3A, nose angle was also found to influence participants' ability to discriminate direct gaze from averted gaze, but here the effect was eliminated by inversion of the stimuli (Experiment 3B). We concluded that an image-based mechanism is responsible for the influence of head profile on gaze perception, whereas the analysis of nose angle involves the configural processing of face features.

Journal ArticleDOI
TL;DR: The results support the previous finding that cortical processing of faces in infants is enhanced when accompanied by direct gaze; however, this effect is only found when the eyes are presented within the context of an upright face.
Abstract: Previous work has shown that infants are sensitive to the direction of gaze of another's face, and that gaze direction can cue attention. The present study replicates and extends results on the ERP correlates of gaze processing in 4-month-olds. In two experiments, we recorded ERPs while 4-month-olds viewed direct and averted gaze within the context of averted and inverted heads. Our results support the previous finding that cortical processing of faces in infants is enhanced when accompanied by direct gaze. However, this effect is only found when eyes are presented within the context of an upright face.

Journal ArticleDOI
TL;DR: This result shows that the direction of eye gaze of another person can not only bias infant attention but also lead to enhanced information processing of the objects concerned.
Abstract: A major issue in developmental science is how infants use the direction of others' eye gaze to facilitate the processing of information. Four-month-old infants passively viewed images of an adult face gazing toward or away from objects. When presented with the objects a second time, infants showed differences in a slow wave event-related potential, indicating that uncued objects were perceived as less familiar than objects previously cued by the direction of gaze of another person. This result shows that the direction of eye gaze of another person can not only bias infant attention but also lead to enhanced information processing of the objects concerned.

Book ChapterDOI
TL;DR: A new technique for human identification based on eye movement characteristics is presented; it combines behavioral and physiological aspects, making it difficult to counterfeit yet easy to perform.
Abstract: The paper presents a new technique for human identification based on eye movement characteristics. Using this method, the system measures the reaction of the eyes to visual stimulation. The eyes of the person being identified follow a point on the computer screen, and an eye movement tracking system is used to collect information about eye movements during the experiment. The first experiments showed that it was possible to identify people by means of this method. The method scrutinized here has several significant advantages. It combines behavioral and physiological aspects and is therefore difficult to counterfeit, while at the same time being easy to perform. Moreover, it is possible to combine it with other camera-based techniques such as iris or face recognition.

Journal ArticleDOI
TL;DR: The results seem to suggest that, in children with autism, the visual system processes information about another person's gaze direction and sends this information to those areas that subserve reflexive attention orienting.
Abstract: Background: The aim of this study was to investigate attention orienting triggered by another's gaze direction in autism. Method: Twelve high-functioning children with autism and gender- and age-matched normal control children were studied using two tasks. In the first task, children were asked to detect laterally presented target stimuli preceded by centrally presented facial cue stimuli in which gaze was either straight ahead or averted. The direction of the cue was either congruent, neutral, or incongruent with respect to the laterality of the target stimulus. In the second task, children were asked to discriminate the direction of eye gaze. Results: The results showed that another person's static gaze direction triggered an automatic shift of visual attention, both in children with autism and in normally developing children. The children in both groups were also able to overtly discriminate the direction of the gaze. Conclusion: These results seem to suggest that, in children with autism, the visual system processes information about another person's gaze direction and sends this information to those areas that subserve reflexive attention orienting. However, future studies are needed to investigate whether the processing of eyes and gaze direction relies on similar neural mechanisms in children with autism and in normally developing children.

Patent
26 Feb 2004
TL;DR: In this article, the eye gaze data pertaining to a glint and pupil image of the eye in an image plane of the camera is sampled, and the determined eye gaze parameters include orthogonal projections of a pupil-glint displacement vector, a ratio of a major semi-axis dimension to a minor semiaxis dimension of an ellipse that is fitted to the pupil image in the image plane.
Abstract: A method and computer system for tracking eye gaze. A camera is focused on an eye of subject viewing a gaze point on a screen while directing light toward the eye. Eye gaze data pertaining to a glint and pupil image of the eye in an image plane of the camera is sampled. Eye gaze parameters are determined from the eye gaze data. The determined eye gaze parameters include: orthogonal projections of a pupil-glint displacement vector, a ratio of a major semi-axis dimension to a minor semi-axis dimension of an ellipse that is fitted to the pupil image in the image plane, an angular orientation of the major semi-axis dimension in the image plane, and mutually orthogonal coordinates of the center of the glint in the image plane. The gaze point is estimated from the eye gaze parameters.
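The listed parameters can be recovered from a segmented pupil blob with a moment-based ellipse fit. The sketch below is illustrative rather than the patent's procedure; it takes the pupil pixels and the glint center as given.

```python
import numpy as np

def gaze_parameters(pupil_xy, glint_xy):
    """Compute the patent's parameter set from pupil pixels and a glint.

    pupil_xy : (n, 2) pixel coordinates inside the pupil blob
    glint_xy : (2,) image coordinates of the glint center
    """
    center = pupil_xy.mean(axis=0)
    dx, dy = center - glint_xy            # pupil-glint displacement projections

    # Second central moments: eigenvalues are proportional to the squared
    # semi-axes of the fitted ellipse, eigenvectors give its orientation.
    evals, evecs = np.linalg.eigh(np.cov(pupil_xy.T))
    ratio = np.sqrt(evals[1] / evals[0])  # major / minor semi-axis ratio
    major = evecs[:, 1]
    angle = np.degrees(np.arctan2(major[1], major[0]))

    return dx, dy, ratio, angle, tuple(glint_xy)
```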

Journal ArticleDOI
TL;DR: The existence and flexibility of spatial indexing in adults and 6-month-old infants were investigated by adapting an eye-tracking paradigm; the results showed that infants are capable of both binding multimodal events to locations and tracking those locations when they move.
Abstract: The ability to keep track of locations in a dynamic, multimodal environment is crucial for successful interactions with other people and objects. The authors investigated the existence and flexibility of spatial indexing in adults and 6-month-old infants by adapting an eye-tracking paradigm from D. C. Richardson and M. J. Spivey (2000). Multimodal events were presented in specific locations, and eye movements were measured when the auditory portion of the stimulus was presented without its visual counterpart. Experiment 1 showed that adults spatially index auditory information even when the original associated locations move. Experiments 2 and 3 showed that infants are capable of both binding multimodal events to locations and tracking those locations when they move.

Proceedings ArticleDOI
22 Mar 2004
TL;DR: This paper investigates the usability of gaze-typing systems for disabled people in a broad perspective that takes into account the usage scenarios and the particular users that these systems benefit.
Abstract: This paper investigates the usability of gaze-typing systems for disabled people in a broad perspective that takes into account the usage scenarios and the particular users that these systems benefit. Design goals for a gaze-typing system are identified: productivity above 25 words per minute, robust tracking, high availability, and support of multimodal input. A detailed investigation of the efficiency of and user satisfaction with a Danish and a Japanese gaze-typing system compares them to head-typing and mouse (hand) typing. We found gaze typing to be more erroneous than the other two modalities. Gaze typing was just as fast as head typing, and both were slower than mouse (hand) typing. Possibilities for design improvements are discussed.

Journal Article
TL;DR: The results indicated that humans are highly sensitive to gaze direction and that information from both eyes is used to determine direction of regard.
Abstract: The authors measured observers' ability to determine direction of gaze toward an object in space. In Experiment 1, they determined the difference threshold for determining whether a live "looker" was looking to the left or right of a target point. Acuity for eye direction was quite high (approximately 30 s of arc). Viewing the movement of the looker's eyes did not improve acuity. When one of the looker's eyes was occluded, the observers' acuity was disrupted and their point of subjective equality was shifted away from the exposed eye. Experiment 2 was a replication of Experiment 1, but digitized gaze displays were used. The results of Experiment 3 showed that the acuity for direction of gaze depended on the position of the looker's target. Overall, the results indicated that humans are highly sensitive to gaze direction and that information from both eyes is used to determine direction of regard.

Journal ArticleDOI
TL;DR: This paper briefly reviews past and present display techniques as well as emerging graphics and eye tracking technology for GCD development.
Abstract: Gaze-contingent displays (GCDs) attempt to balance the amount of information displayed against the visual information processing capacity of the observer through real-time eye movement sensing. Based on the assumed knowledge of the instantaneous location of the observer's focus of attention, GCD content can be "tuned" through several display processing means. Screen-based displays alter pixel-level information, generally matching the resolvability of the human retina in an effort to maximize bandwidth. Model-based displays alter geometric-level primitives along similar goals. Attentive user interfaces (AUIs) manage object-level entities (e.g., windows, applications) depending on the assumed attentive state of the observer. Such real-time display manipulation is generally achieved through non-contact, unobtrusive tracking of the observer's eye movements. This paper briefly reviews past and present display techniques as well as emerging graphics and eye tracking technology for GCD development.
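In its crudest screen-based form, a GCD reduces to blending a sharp and a degraded copy of the frame around the tracked gaze point. A sketch under that assumption; the Gaussian falloff is a stand-in for a proper retinal resolvability model.

```python
import numpy as np

def foveate(sharp, blurred, gaze_xy, fovea_px=80.0):
    """Gaze-contingent blend of a sharp and a pre-blurred (H, W, 3) frame:
    full detail at the gaze point, degraded detail in the periphery."""
    h, w = sharp.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    d2 = (xx - gaze_xy[0]) ** 2 + (yy - gaze_xy[1]) ** 2
    alpha = np.exp(-d2 / (2.0 * fovea_px ** 2))[..., None]
    return alpha * sharp + (1.0 - alpha) * blurred
```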


Journal ArticleDOI
TL;DR: This work proposes new algorithms to extract and track the positions of eyes in a real-time video stream using a template of ‘Between-the-Eyes,’ which is updated frame-by-frame, instead of the eyes themselves.

Proceedings ArticleDOI
07 Jun 2004
TL;DR: Using eye tracking, the way viewers look at photos and image-based NPR illustrations is studied; the results validate the method of meaningful abstraction used in DeCarlo and Santella [2002] and suggest that eye tracking can be a useful tool for evaluating NPR systems.
Abstract: Using eye tracking, we study the way viewers look at photos and image-based NPR illustrations. Viewers examine the same number of locations in photos and in NPR images with uniformly high or low detail. In contrast, viewers are attracted to areas where detail is locally preserved in meaningfully abstracted images. This accords with the idea that artists carefully manipulate detail to control interest and understanding. It also validates the method of meaningful abstraction used in DeCarlo and Santella [2002]. The results also suggest eye tracking can be a useful tool for the evaluation of NPR systems.