
Showing papers on "Eye tracking published in 2002"


Journal ArticleDOI
TL;DR: Of regions in the extended system for face perception, the amygdala plays a central role in processing the social relevance of information gleaned from faces, particularly when that information may signal a potential threat.

1,224 citations


Journal ArticleDOI
TL;DR: The results show that, from birth, human infants prefer to look at faces that engage them in mutual gaze and that, at an early age, healthy babies show enhanced neural processing of direct gaze.
Abstract: Making eye contact is the most powerful mode of establishing a communicative link between humans. During their first year of life, infants learn rapidly that the looking behaviors of others convey significant information. Two experiments were carried out to demonstrate special sensitivity to direct eye contact from birth. The first experiment tested the ability of 2- to 5-day-old newborns to discriminate between direct and averted gaze. In the second experiment, we measured 4-month-old infants' brain electric activity to assess neural processing of faces when accompanied by direct (as opposed to averted) eye gaze. The results show that, from birth, human infants prefer to look at faces that engage them in mutual gaze and that, from an early age, healthy babies show enhanced neural processing of direct gaze. The exceptionally early sensitivity to mutual gaze demonstrated in these studies is arguably the major foundation for the later development of social skills.

1,199 citations


Journal ArticleDOI
TL;DR: Eye-tracking applications are surveyed in a breadth-first manner, reporting on work from the following domains: neuroscience, psychology, industrial engineering and human factors, marketing/advertising, and computer science.
Abstract: Eye-tracking applications are surveyed in a breadth-first manner, reporting on work from the following domains: neuroscience, psychology, industrial engineering and human factors, marketing/advertising, and computer science. Following a review of traditionally diagnostic uses, emphasis is placed on interactive applications, differentiating between selective and gaze-contingent approaches.

1,017 citations


Journal ArticleDOI
TL;DR: The paper describes how this tracking system has been extended to provide a general framework for tracking in complex configurations, and a visual servoing system constructed using this framework is presented, together with results showing the accuracy of the tracker.
Abstract: Presents a framework for three-dimensional model-based tracking. Graphical rendering technology is combined with constrained active contour tracking to create a robust wire-frame tracking system. It operates in real time at video frame rate (25 Hz) on standard hardware. It is based on an internal CAD model of the object to be tracked which is rendered using a binary space partition tree to perform hidden line removal. A Lie group formalism is used to cast the motion computation problem into simple geometric terms so that tracking becomes a simple optimization problem solved by means of iterative reweighted least squares. A visual servoing system constructed using this framework is presented together with results showing the accuracy of the tracker. The paper then describes how this tracking system has been extended to provide a general framework for tracking in complex configurations. The adjoint representation of the group is used to transform measurements into common coordinate frames. The constraints are then imposed by means of Lagrange multipliers. Results from a number of experiments performed using this framework are presented and discussed.
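The robust estimation step at the core of this tracker lends itself to a compact sketch. Below is a minimal iteratively reweighted least squares loop of the kind the abstract describes, assuming a Jacobian J of edge-distance measurements with respect to the motion parameters and a vector d of measured normal distances (both hypothetical inputs; the paper's Lie-group parameterization and rendering pipeline are not reproduced here).

```python
import numpy as np

def irls_motion_update(J, d, iters=5, eps=1e-6):
    """One robust motion update via iteratively reweighted least squares.
    J: (m, n) Jacobian of edge distances w.r.t. motion parameters.
    d: (m,) measured normal distances to the rendered model edges."""
    x = np.zeros(J.shape[1])
    w = np.ones(len(d))
    for _ in range(iters):
        JW = J.T * w                          # apply weights row-wise
        x = np.linalg.solve(JW @ J, JW @ d)   # weighted normal equations
        r = d - J @ x                         # residuals
        w = 1.0 / np.maximum(np.abs(r), eps)  # L1-style robust reweighting
    return x
```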

729 citations


Journal ArticleDOI
TL;DR: The system was tested in a simulated environment with subjects of different ethnic backgrounds, genders, and ages, with and without glasses, and under different illumination conditions, and it was found to be very robust, reliable, and accurate.
Abstract: This paper describes a real-time prototype computer vision system for monitoring driver vigilance. The main components of the system are a remotely located CCD video camera, a specially designed hardware system for real-time image acquisition and for controlling the illuminator and the alarm system, and various computer vision algorithms for simultaneous, real-time, non-intrusive monitoring of the visual bio-behaviors that typically characterize a driver's level of vigilance. These behaviors include eyelid movement, face orientation, and gaze (pupil) movement. The system was tested in a simulated environment with subjects of different ethnic backgrounds, genders, and ages, with and without glasses, and under different illumination conditions, and it was found to be very robust, reliable, and accurate.
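The abstract does not give the fusion rule that maps these bio-behaviors to an alarm. A sliding-window eyelid-closure measure of the PERCLOS kind is a typical core of such monitors; the sketch below is an illustrative stand-in, with all thresholds invented.

```python
def perclos(closed_flags):
    """Fraction of recent frames with the eyelids closed (PERCLOS-style
    drowsiness measure over a sliding window of per-frame flags)."""
    return sum(closed_flags) / max(len(closed_flags), 1)

def vigilance_alarm(closed_flags, frames_gaze_off_road,
                    perclos_thresh=0.15, off_road_thresh=45):
    # Alarm on slow eyelid closure OR prolonged off-road gaze/face pose.
    # Both thresholds are invented for illustration, not from the paper.
    return (perclos(closed_flags) > perclos_thresh
            or frames_gaze_off_road > off_road_thresh)
```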

601 citations


Proceedings ArticleDOI
01 Jul 2002
TL;DR: This work describes a computational approach to stylizing and abstracting photographs that explicitly responds to the goals of good information design, and that represents a new alternative for non-photorealistic rendering in its visual style, its approach to visual form, and its techniques for interaction.
Abstract: Good information design depends on clarifying the meaningful structure in an image. We describe a computational approach to stylizing and abstracting photographs that explicitly responds to this design goal. Our system transforms images into a line-drawing style using bold edges and large regions of constant color. To do this, it represents images as a hierarchical structure of parts and boundaries computed using state-of-the-art computer vision. Our system identifies the meaningful elements of this structure using a model of human perception and a record of a user's eye movements in looking at the photo; the system renders a new image using transformations that preserve and highlight these visual elements. Our method thus represents a new alternative for non-photorealistic rendering in its visual style, in its approach to visual form, and in its techniques for interaction.
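One way to picture the role of the eye-movement record: fixations, weighted by duration, induce an importance map over the image, and abstraction is applied more aggressively where importance is low. The sketch below builds such a map over raw pixels; the actual system assigns importance to nodes of a part/boundary hierarchy, so this pixel-level interface is an assumption.

```python
import numpy as np

def importance_map(shape, fixations, sigma=40.0):
    """Per-pixel importance from recorded fixations.
    fixations: list of (x, y, duration) tuples (hypothetical format)."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    imp = np.zeros(shape)
    for (fx, fy, dur) in fixations:
        imp += dur * np.exp(-((xs - fx)**2 + (ys - fy)**2) / (2 * sigma**2))
    return imp / max(imp.max(), 1e-12)

# Low-importance regions would then be rendered with fewer, larger
# constant-color regions; high-importance regions keep bold edges.
```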

552 citations


Journal ArticleDOI
07 Nov 2002
TL;DR: The "Camera Mouse" system tracks the computer user's movements with a video camera and translates them into the movements of the mouse pointer on the screen, and body features such as the tip of the user's nose or finger can be tracked.
Abstract: The "Camera Mouse" system has been developed to provide computer access for people with severe disabilities. The system tracks the computer user's movements with a video camera and translates them into the movements of the mouse pointer on the screen. Body features such as the tip of the user's nose or finger can be tracked. The visual tracking algorithm is based on cropping an online template of the tracked feature from the current image frame and testing where this template correlates in the subsequent frame. The location of the highest correlation is interpreted as the new location of the feature in the subsequent frame. Various body features are examined for tracking robustness and user convenience. A group of 20 people without disabilities tested the Camera Mouse and quickly learned how to use it to spell out messages or play games. Twelve people with severe cerebral palsy or traumatic brain injury have tried the system, nine of whom have shown success. They interacted with their environment by spelling out messages and exploring the Internet.

533 citations


Proceedings ArticleDOI
25 Mar 2002
TL;DR: The features, functionality, and methods used in the eye typing systems developed over the last twenty years are considered, and other communication-related issues, among them customization and voice output, are addressed.
Abstract: Eye typing provides a means of communication for severely handicapped people, even those who are only capable of moving their eyes. This paper considers the features, functionality, and methods used in the eye typing systems developed over the last twenty years. Primarily concerned with text production, the paper also addresses other communication-related issues, among them customization and voice output.
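Most of the surveyed systems select keys by dwell time: the gaze must rest on a key for a set duration before it is typed. A minimal sketch of that mechanism (the 0.8 s dwell is an arbitrary example; real systems vary and often make it user-adjustable):

```python
import time

class DwellSelector:
    """Dwell-time key selection: a key is 'pressed' once the gaze has
    rested on it continuously for the dwell duration."""
    def __init__(self, dwell=0.8):
        self.dwell, self.key, self.since = dwell, None, None

    def update(self, key_under_gaze):
        now = time.monotonic()
        if key_under_gaze != self.key:       # gaze moved to another key
            self.key, self.since = key_under_gaze, now
            return None
        if self.key is not None and now - self.since >= self.dwell:
            self.since = now                 # re-arm so the key can repeat
            return self.key                  # selection event
        return None
```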

490 citations


Journal ArticleDOI
Jason Tipples1
TL;DR: The findings suggest that eye gaze is not unique in automatically triggering attentional orienting: arrow cues can also trigger automatic orienting.
Abstract: Recent studies (Driver et al., 1999; Friesen & Kingstone, 1998; Langton & Bruce, 1999) have argued that the perception of eye gaze may be unique, as compared with other symbolic cues (e.g., arrows), in being able to automatically trigger attentional orienting. In Experiment 1, 17 participants took part in a visuospatial orienting task to investigate whether arrow cues might also trigger automatic orienting. Two arrow cues were presented for 75 msec to the left and right of a fixation asterisk. After an interval of either 25 or 225 msec, the letter O or X appeared. After both time intervals, mean response times were reliably faster when the arrows pointed toward, rather than away from, the location of the target letter. This occurred despite the fact that the participants were informed that the arrows did not predict where the target would appear. In Experiment 2, the same pattern of data was recorded when several adjustments had been made in an attempt to rule out alternative explanations for the cuing effects. Overall, the findings suggest that eye gaze is not unique in automatically triggering orienting.

440 citations


Journal ArticleDOI
TL;DR: A considerable degree of overlap is demonstrated between the medial frontal areas involved in eye gaze processing and those involved in theory-of-mind tasks, and a PET study that controls for these factors is presented.

333 citations


Proceedings ArticleDOI
03 Dec 2002
TL;DR: This work presents a method for estimating eye gaze direction that departs from conventional approaches, the majority of which track specific optical phenomena such as corneal reflection and the Purkinje images; instead, it employs an appearance manifold model.
Abstract: We present a method for estimating eye gaze direction, which represents a departure from conventional eye gaze estimation methods, the majority of which are based on tracking specific optical phenomena like corneal reflection and the Purkinje images. We employ an appearance manifold model, but instead of using a densely sampled spline to perform the nearest manifold point query, we retain the original set of sparse appearance samples and use linear interpolation among a small subset of samples to approximate the nearest manifold point. The advantage of this approach is that since we are only storing a sparse set of samples, each sample can be a high dimensional vector that retains more representational accuracy than short vectors produced with dimensionality reduction methods. The algorithm was tested with a set of eye images labelled with ground truth point-of-regard coordinates. We have found that the algorithm is capable of estimating eye gaze with a mean angular error of 0.38 degrees, which is comparable to that obtained by commercially available eye trackers.
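The core of the method, approximating the nearest manifold point by interpolating among a few stored appearance samples, can be sketched in a handful of lines. The inverse-distance weighting below is an assumption; the paper interpolates linearly among a small subset of nearby samples.

```python
import numpy as np

def estimate_gaze(eye_vec, samples, gazes, k=3):
    """Approximate the nearest manifold point by interpolating among the
    k nearest stored samples.
    eye_vec: flattened eye-image vector for the current frame.
    samples: (N, D) stored appearance samples; gazes: (N, 2) their
    ground-truth point-of-regard labels."""
    d = np.linalg.norm(samples - eye_vec, axis=1)
    idx = np.argsort(d)[:k]                  # k nearest samples
    w = 1.0 / np.maximum(d[idx], 1e-9)       # inverse-distance weights
    w /= w.sum()
    return w @ gazes[idx]                    # interpolated point of regard
```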

Journal ArticleDOI
TL;DR: The effects of eye gaze on basic aspects of the person-perception process, namely person construal and the extraction of category-related knowledge from semantic memory, were investigated; the results of two experiments supported the prediction that direct gaze facilitates these operations.
Abstract: Previous research has highlighted the pivotal role played by gaze detection and interpretation in the development of social cognition. Extending work of this kind, the present research investigated the effects of eye gaze on basic aspects of the person-perception process, namely, person construal and the extraction of category-related knowledge from semantic memory. It was anticipated that gaze direction would moderate the efficiency of the mental operations through which these social-cognitive products are generated. Specifically, eye gaze was expected to influence both the speed with which targets could be categorized as men and women and the rate at which associated stereotypic material could be accessed from semantic memory. The results of two experiments supported these predictions: Targets with nondeviated (i.e., direct) eye gaze elicited facilitated categorical responses. The implications of these findings for recent treatments of person perception are considered.

Journal ArticleDOI
TL;DR: The model provides a detailed explanation for center-of-gravity saccades that have been observed in many previous experiments and has been compared with the eye movements of human subjects under identical conditions in natural visual search tasks.

Journal ArticleDOI
TL;DR: The existence of distinct neural processes for visual selection and saccade production is necessary to explain the flexibility of visually guided behaviour.
Abstract: Recent research has provided new insights into the neural processes that select the target for and control the production of a shift of gaze. Being a key node in the network that subserves visual processing and saccade production, the frontal eye field (FEF) has been an effective area in which to monitor these processes. Certain neurons in the FEF signal the location of conspicuous or meaningful stimuli that may be the targets for saccades. Other neurons control whether and when the gaze shifts. The existence of distinct neural processes for visual selection and saccade production is necessary to explain the flexibility of visually guided behaviour.

Proceedings ArticleDOI
25 Mar 2002
TL;DR: This paper presents a new real-time eye detection and tracking methodology that works under variable and realistic lighting conditions and can robustly track eyes when the pupils are not very bright due to significant external illumination interferences.
Abstract: Non-intrusive methods based on active remote IR illumination for eye tracking are important for many applications of vision-based man-machine interaction. One problem that has plagued these methods is their sensitivity to changes in lighting conditions, which tends to significantly limit their scope of application. In this paper, we present a new real-time eye detection and tracking methodology that works under variable and realistic lighting conditions. By combining the bright-pupil effect resulting from IR light with a conventional appearance-based object recognition technique, our method can robustly track eyes even when the pupils are not very bright due to significant external illumination interference. The appearance model is incorporated in both eye detection and tracking via the use of a support vector machine and mean shift tracking. Additional improvement is achieved by modifying the image acquisition apparatus, including the illuminator and the camera.
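The appearance model enters as a verification step: bright-pupil candidates are accepted only if an SVM trained on eye versus non-eye patches classifies them as eyes. A minimal sketch with scikit-learn, assuming raw normalized pixels as features (the paper's feature representation may differ):

```python
import numpy as np
from sklearn import svm

def train_eye_verifier(eye_patches, non_eye_patches):
    """Train an SVM that separates eye patches from non-eye patches.
    Patches are equal-sized grayscale arrays; raw pixels as features
    are an assumption made for this sketch."""
    X = np.array([p.flatten() / 255.0
                  for p in list(eye_patches) + list(non_eye_patches)])
    y = np.array([1] * len(eye_patches) + [0] * len(non_eye_patches))
    return svm.SVC(kernel='rbf').fit(X, y)

def verify(clf, candidate_patch):
    # Accept a bright-pupil candidate only if the SVM calls it an eye.
    return clf.predict([candidate_patch.flatten() / 255.0])[0] == 1
```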

Proceedings ArticleDOI
25 Mar 2002
TL;DR: A novel gaze tracking system called FreeGaze, designed for everyday gaze interaction, combines a geometric eyeball model with sophisticated image processing and needs only two points for each individual calibration.
Abstract: In this paper we introduce a novel gaze tracking system called FreeGaze, which is designed for everyday gaze interaction. Among the various possible applications of gaze tracking, Human-Computer Interaction (HCI) is one of the most promising fields. However, existing systems require complicated and burdensome calibration and are not robust to measurement variations. To solve these problems, we introduce a geometric eyeball model and sophisticated image processing. Unlike existing systems, ours needs only two points for each individual calibration. Once this personalization finishes, the system needs no further calibration before each measurement session. Evaluation tests show that the system is accurate and applicable to everyday use.
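For intuition about what a two-point personalization can buy, the sketch below fits a per-axis gain and offset from exactly two calibration points. This screen-space correction is a simplified stand-in; FreeGaze actually personalizes parameters of its geometric eyeball model.

```python
def two_point_calibration(measured, truth):
    """Per-axis linear correction from exactly two calibration points.
    measured, truth: [(x, y), (x, y)] estimated vs. true screen points;
    the points are assumed to differ in both coordinates."""
    (mx0, my0), (mx1, my1) = measured
    (tx0, ty0), (tx1, ty1) = truth
    gx = (tx1 - tx0) / (mx1 - mx0); ox = tx0 - gx * mx0
    gy = (ty1 - ty0) / (my1 - my0); oy = ty0 - gy * my0
    # Returned function corrects subsequent raw gaze estimates.
    return lambda x, y: (gx * x + ox, gy * y + oy)
```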

Proceedings ArticleDOI
20 May 2002
TL;DR: New algorithms for detecting the inner eye corner and the center of an iris at subpixel accuracy are presented, and these new methods are applied in developing a real-time gaze tracking system.
Abstract: This paper addresses the accuracy problem of an eye gaze tracking system. We first analyze the technical barrier for a gaze tracking system to achieve a desired accuracy, and then propose a subpixel tracking method to break this barrier. We present new algorithms for detecting the inner eye corner and the center of an iris at subpixel accuracy, and we apply these new methods in developing a real-time gaze tracking system. Experimental results indicate that the new methods achieve an average accuracy within 1.4° using normal eye image resolutions.
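A common route to subpixel accuracy, and a plausible reading of the abstract's claim, is parabolic interpolation of a feature detector's response around its integer-pixel maximum. The paper's own corner and iris operators are more elaborate; this is only the generic refinement step.

```python
def subpixel_peak(r_left, r_peak, r_right):
    """Refine an integer-pixel response maximum to subpixel precision by
    fitting a parabola through the peak and its two neighbours.
    Returns the offset from the integer peak, in (-0.5, 0.5) pixels."""
    denom = r_left - 2.0 * r_peak + r_right
    if denom == 0:
        return 0.0
    return 0.5 * (r_left - r_right) / denom
```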

Proceedings ArticleDOI
11 Aug 2002
TL;DR: A method is described for quickly and robustly localizing the iris and pupil boundaries of a human eye in close-up images; such an algorithm can be critical for iris identification, or for applications that must determine the subject's gaze direction, e.g., human-computer interaction or driver attentiveness determination.
Abstract: This paper describes a method for quickly and robustly localizing the iris and pupil boundaries of a human eye in close-up images. Such an algorithm can be critical for iris identification, or for applications that must determine the subject's gaze direction, e.g., human-computer interaction or driver attentiveness determination. A multi-resolution coarse-to-fine search approach is used, seeking to maximize gradient strengths and uniformities measured across rays radiating from a candidate iris or pupil's central point. An empirical evaluation of 670 eye images, both with and without glasses, resulted in a 98% localization accuracy. The algorithm has also shown robustness to weak illumination and most specular reflections (e.g., at eyewear and cornea), simplifying system component requirements. Rapid execution is achieved on a 750 MHz desktop processor.
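The search criterion can be sketched directly: score a candidate circle by the radial gradient strength sampled along rays from its center, then maximize that score coarse-to-fine over center and radius. The uniformity term and image bounds checks are omitted here for brevity.

```python
import numpy as np

def boundary_score(gray, cx, cy, r, n_rays=32):
    """Score a candidate circle (cx, cy, r) by the mean radial gradient
    across its boundary: intensity just outside minus just inside,
    sampled along n_rays rays from the center."""
    angles = np.linspace(0, 2 * np.pi, n_rays, endpoint=False)
    xs_out = (cx + r * np.cos(angles)).astype(int)
    ys_out = (cy + r * np.sin(angles)).astype(int)
    xs_in = (cx + (r - 2) * np.cos(angles)).astype(int)
    ys_in = (cy + (r - 2) * np.sin(angles)).astype(int)
    return float(np.mean(gray[ys_out, xs_out].astype(float)
                         - gray[ys_in, xs_in]))

# A coarse-to-fine search maximizes this score over (cx, cy, r) on a
# subsampled image first, then refines around the best candidate.
```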

Journal ArticleDOI
01 Jun 2002
TL;DR: A novel approach called the "one-circle" algorithm measures the eye gaze from a monocular image that zooms in on only one eye, showing that a unique eye gaze direction can be found from a single image of one eye.
Abstract: There are two components to the human visual line of sight: the pose of the head and the orientation of the eyes within their sockets. We have investigated both aspects but concentrate here on eye gaze estimation. We present a novel approach called the "one-circle" algorithm for measuring the eye gaze from a monocular image that zooms in on only one eye of a person. Observing that the iris contour is a circle, we estimate the normal direction of this iris circle, taken as the eye gaze, from its elliptical image. From basic projective geometry, an ellipse can be back-projected into space onto two circles of different orientations. However, by using a geometric constraint, namely that the distances between the eyeball's center and the two eye corners should be equal, the correct solution can be disambiguated. This allows us to obtain a higher-resolution image of the iris with a zoom-in camera, thereby achieving higher accuracy in the estimation. A general approach that combines head pose determination with eye gaze estimation is also proposed, in which the search for the eye gaze is guided by the head pose information. The robustness of our gaze determination approach was verified statistically by extensive experiments on synthetic and real image data. The two key contributions are showing that a unique eye gaze direction can be found from a single image of one eye, and that better accuracy is obtained as a consequence.
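The geometric heart of the "one-circle" idea can be illustrated under a weak-perspective approximation: an ellipse with axis ratio minor/major is the image of a circle slanted by arccos(minor/major) about the ellipse's major axis, which yields exactly two candidate normals. The paper works with full projective geometry and disambiguates using the eye-corner distance constraint; the sketch below only reproduces the two-fold ambiguity.

```python
import numpy as np

def circle_normals_from_ellipse(major, minor, axis_angle):
    """Two candidate 3D normals of a circle seen as an ellipse, under
    weak perspective. axis_angle: orientation of the ellipse's major
    axis in the image plane, radians."""
    slant = np.arccos(np.clip(minor / major, 0.0, 1.0))
    u = np.array([np.cos(axis_angle), np.sin(axis_angle), 0.0])  # major axis
    n = np.array([0.0, 0.0, 1.0])                                # frontal normal

    def rotate(v, axis, t):
        # Rodrigues rotation of v about a unit axis by angle t.
        return (v * np.cos(t) + np.cross(axis, v) * np.sin(t)
                + axis * np.dot(axis, v) * (1 - np.cos(t)))

    return rotate(n, u, slant), rotate(n, u, -slant)
```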

Journal ArticleDOI
TL;DR: The disparity can be used to monitor deterioration in the accuracy of the eye tracker calibration, to automatically invoke a re-calibration procedure when necessary, and to reduce the systematic error in the eye movement data collected for each participant.
Abstract: In the course of running an eye-tracking experiment, one computer system or subsystem typically presents the stimuli to the participant and records manual responses, and another collects the eye movement data, with little interaction between the two during the course of the experiment. This article demonstrates how the two systems can interact with each other to facilitate a richer set of experimental designs and applications and to produce more accurate eye tracking data. In an eye-tracking study, a participant is periodically instructed to look at specific screen locations, or explicit required fixation locations (RFLs), in order to calibrate the eye tracker to the participant. The design of an experimental procedure will also often produce a number of implicit RFLs: screen locations that the participant must look at within a certain window of time or at a certain moment in order to successfully and correctly accomplish a task, but without explicit instructions to fixate those locations. In these windows of time or at these moments, the disparity between the fixations recorded by the eye tracker and the screen locations corresponding to implicit RFLs can be examined, and the results of the comparison can be used for a variety of purposes. This article shows how the disparity can be used to monitor the deterioration in the accuracy of the eye tracker calibration and to automatically invoke a re-calibration procedure when necessary. This article also demonstrates how the disparity will vary across screen regions and participants and how each participant's unique error signature can be used to reduce the systematic error in the eye movement data collected for that participant.
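A minimal sketch of the correction idea: at moments when the participant must be fixating an implicit RFL, accumulate the disparity between the recorded fixation and the RFL, and subtract the resulting systematic offset from subsequent data. The article computes the signature per screen region; the single global offset below is a simplification.

```python
import numpy as np

def error_signature(rfl_points, fixations):
    """Mean disparity between implicit required fixation locations and
    the fixations recorded at those moments: the participant's
    systematic error signature (global here, per-region in the article).
    rfl_points, fixations: lists of (x, y) pairs of equal length."""
    diffs = np.asarray(fixations, float) - np.asarray(rfl_points, float)
    return diffs.mean(axis=0)                # (dx, dy) systematic offset

def correct(fixation, signature):
    """Subtract the systematic offset from a recorded fixation."""
    return np.asarray(fixation, float) - signature

# If the disparity at implicit RFLs drifts beyond a threshold during a
# session, re-calibration of the tracker would be invoked.
```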

Journal ArticleDOI
TL;DR: The oculomotor system uses a prediction of the time at which the eye trajectory will cross the target, defined as the "eye crossing time" (T_XE) and depending on both position error and velocity error, as the criterion for switching between smooth and saccadic pursuit.
Abstract: When tracking moving visual stimuli, primates orient their visual axis by combining two kinds of eye movements, smooth pursuit and saccades, that have very different dynamics. Yet the mechanisms that govern the decision to switch from one type of eye movement to the other are still poorly understood, even though understanding them would contribute significantly to knowing how the CNS combines different kinds of control strategies to achieve a common motor and sensory goal. In this study, we investigated the oculomotor responses to a large range of different combinations of position error and velocity error during visual tracking of moving stimuli in humans. We found that the oculomotor system uses a prediction of the time at which the eye trajectory will cross the target, defined as the "eye crossing time" (T_XE). The eye crossing time, which depends on both position error and velocity error, is the criterion used to switch between smooth and saccadic pursuit, i.e., to trigger catch-up saccades. On average, for T_XE between 40 and 180 ms, no saccade is triggered and target tracking remains purely smooth. Conversely, when T_XE becomes smaller than 40 ms or larger than 180 ms, a saccade is triggered after a short latency (around 125 ms).
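The switching rule reduces to a small decision function. In the sketch below, T_XE is the predicted time for the current position error to be closed at the current retinal slip; the sign conventions are an assumption, while the 40/180 ms band comes from the abstract.

```python
def eye_crossing_time(position_error, retinal_slip):
    """Predicted time (ms) at which the eye trajectory crosses the
    target: position error divided by the rate at which it is being
    closed. position_error in deg, retinal_slip in deg/ms; signs chosen
    so that a positive closing rate means the eye is catching up."""
    closing_rate = -retinal_slip
    if closing_rate == 0:
        return float('inf')
    return position_error / closing_rate

def saccade_needed(t_xe, low=40.0, high=180.0):
    # Pursuit stays smooth while 40 ms <= T_XE <= 180 ms; outside that
    # band a catch-up saccade is triggered (after roughly 125 ms).
    return not (low <= t_xe <= high)
```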

Journal ArticleDOI
TL;DR: It is found that both position error and retinal slip are taken into account in catch-up saccade programming to predict the future trajectory of the moving target.
Abstract: During visual tracking of a moving stimulus, primates orient their visual axis by combining two very different types of eye movements, smooth pursuit and saccades. The purpose of this paper was to investigate quantitatively the catch-up saccades occurring during sustained pursuit. We used a ramp-step-ramp paradigm to evoke catch-up saccades during sustained pursuit. In general, catch-up saccades followed the unexpected steps in position and velocity of the target. We observed catch-up saccades in the same direction as the smooth eye movement (forward saccades) as well as in the opposite direction (reverse saccades). We made a comparison of the main sequences of forward saccades, reverse saccades, and control saccades made to stationary targets. They were all three significantly different from each other and were fully compatible with the hypothesis that the smooth pursuit component is added to the saccadic component during catch-up saccades. A multiple linear regression analysis was performed on the saccadic component to find the parameters determining the amplitude of catch-up saccades. We found that both position error and retinal slip are taken into account in catch-up saccade programming to predict the future trajectory of the moving target. We also demonstrated that the saccadic system needs a minimum period of approximately 90 ms for taking into account changes in target trajectory. Finally, we reported a saturation (above 15 degrees/s) in the contribution of retinal slip to the amplitude of catch-up saccades.
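The multiple linear regression in the analysis has the form amplitude ≈ a·PE + b·RS + c, with PE the position error and RS the retinal slip. A sketch of fitting it by ordinary least squares (the coefficients come from one's own data; the paper's fitted values are not reproduced here):

```python
import numpy as np

def fit_saccade_model(position_errors, retinal_slips, amplitudes):
    """Regress catch-up saccade amplitude on position error and retinal
    slip: amplitude ~ a*PE + b*RS + c. Inputs are equal-length arrays."""
    X = np.column_stack([position_errors, retinal_slips,
                         np.ones(len(amplitudes))])
    coef, *_ = np.linalg.lstsq(X, np.asarray(amplitudes, float), rcond=None)
    return coef   # (a, b, c)
```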

Proceedings ArticleDOI
11 Aug 2002
TL;DR: A new real-time eye tracking methodology that works under variable and realistic lighting conditions and various face orientations is presented; it combines a conventional appearance-based object recognition method (SVM) and an object tracking method (mean shift) with Kalman filtering, all based on active IR illumination.
Abstract: Most eye trackers based on active IR illumination require a distinctive bright pupil effect to work well. In this paper, we present a new real-time eye tracking methodology that works under variable and realistic lighting conditions and various face orientations. By combining a conventional appearance-based object recognition method (SVM) and an object tracking method (mean shift) with Kalman filtering based on active IR illumination, our technique benefits from the strengths of the different techniques and overcomes their respective limitations. Experiments show significant improvement of our technique over existing techniques.
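A minimal sketch of the combination described: a constant-velocity Kalman filter predicts the pupil location between frames, and when bright-pupil detection fails under strong external illumination, mean shift tracking takes over around the prediction. The noise covariances and the histogram back-projection input are placeholders, not values from the paper.

```python
import cv2
import numpy as np

# Constant-velocity Kalman filter over pupil state (x, y, vx, vy).
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

def track_step(back_projection, detection, window):
    """One step: trust the bright-pupil detection when available,
    otherwise run mean shift seeded at the Kalman prediction.
    back_projection: histogram back-projection image for mean shift.
    window: (x, y, w, h) tracking window from the previous frame."""
    pred = kf.predict()
    if detection is not None:
        kf.correct(np.array([[detection[0]], [detection[1]]], np.float32))
        return detection
    x, y = int(pred[0, 0]), int(pred[1, 0])
    w, h = window[2], window[3]
    crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
    _, win = cv2.meanShift(back_projection,
                           (x - w // 2, y - h // 2, w, h), crit)
    return (win[0] + w // 2, win[1] + h // 2)
```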

Proceedings ArticleDOI
25 Mar 2002
TL;DR: Findings show that on the World Wide Web, with somewhat complex visual digital images, some viewers' eye movements may follow a habitually preferred path -- a scanpath -- across the visual display.
Abstract: The somewhat controversial and often-discussed theory of visual perception, that of scanpaths, was tested using Web pages as visual stimuli. In 1971, Noton and Stark defined "scanpaths" as repetitive sequences of fixations and saccades that occur upon re-exposure to a visual stimulus, facilitating recognition of that stimulus. Since Internet users are repeatedly exposed to certain visual displays of information, the Web is an ideal stimulus to test this theory. Eye-movement measures were recorded while subjects repeatedly viewed three different kinds of Internet pages -- a portal page, an advertising page and a news story page -- over the course of a week. Scanpaths were compared by using the string-edit methodology that measures resemblance between sequences. Findings show that on the World Wide Web, with somewhat complex visual digital images, some viewers' eye movements may follow a habitually preferred path -- a scanpath -- across the visual display. In addition, strong similarity among eye-path sequences of different viewers may indicate that other forces such as features of the Web site or memory are important.
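The string-edit methodology encodes each fixation by the region of interest it lands in and compares the resulting strings with an edit (Levenshtein) distance:

```python
def string_edit_distance(a, b):
    """Levenshtein distance between two scanpath strings, where each
    character names the region of interest fixated."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

# e.g. string_edit_distance("ABBC", "ABC") == 1; similarity is often
# reported as 1 - distance / max(len(a), len(b)).
```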

Proceedings ArticleDOI
Milton Chen1
20 Apr 2002
TL;DR: Experimental results suggest parameters for the design of videoconferencing systems and support a theory that people are prone to perceive eye contact; that is, viewers will think that someone is making eye contact with them unless they are certain that the person is not looking into their eyes.
Abstract: Eye contact is a natural and often essential element in the language of visual communication. Unfortunately, perceiving eye contact is difficult in most videoconferencing systems, which limits their effectiveness. We conducted experiments to determine how accurately people perceive eye contact. We discovered that the sensitivity to eye contact is asymmetric, in that we are an order of magnitude less sensitive to eye contact when people look below our eyes than when they look to the left, right, or above our eyes. Additional experiments support a theory that people are prone to perceive eye contact; that is, we will think that someone is making eye contact with us unless we are certain that the person is not looking into our eyes. These experimental results suggest parameters for the design of videoconferencing systems. As a demonstration, we were able to construct from commodity components a simple dyadic videoconferencing prototype that supports eye contact.

Proceedings ArticleDOI
01 Aug 2002
TL;DR: A new method computes the 3D position of an eye and its gaze direction from a single camera and at least two near-infrared light sources; the system does not need to be calibrated with the user before each session, and it allows free head motion.
Abstract: We introduce a new method for computing the 3D position of an eye and its gaze direction from a single camera and at least two near-infrared light sources. The method is based on the theory of spherical optical surfaces and uses the Gullstrand model of the eye to estimate the positions of the center of the cornea and the center of the pupil in 3D. The direction of gaze can then be computed from the vector connecting these two points, and the point of regard from the intersection of the direction of gaze with an object in the scene. We have simulated this model using ray-traced images of the eye and obtained very promising results. The major contribution of this new technique over current eye tracking technology is that the system does not need to be calibrated with the user before each session, and it allows free head motion.
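Once the two 3D centers are in hand, the remaining computation is elementary and matches the abstract: the gaze direction is the line through the cornea and pupil centers, and the point of regard is its intersection with a scene surface, here taken to be a plane such as a monitor.

```python
import numpy as np

def gaze_and_point_of_regard(cornea_center, pupil_center,
                             plane_point, plane_normal):
    """Gaze direction and point of regard from the 3D centers of the
    cornea and pupil (which the paper estimates from the glints of two
    IR lights and the Gullstrand eye model)."""
    c = np.asarray(cornea_center, float)
    p = np.asarray(pupil_center, float)
    g = (p - c) / np.linalg.norm(p - c)          # gaze direction
    n = np.asarray(plane_normal, float)
    # Ray-plane intersection: c + t*g lies on the plane.
    t = np.dot(np.asarray(plane_point, float) - c, n) / np.dot(g, n)
    return g, c + t * g
```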

Patent
Laurence Durnell1
08 Aug 2002
TL;DR: In this article, an eye tracking system for monitoring the movement of a user's eye comprises an eye camera (2) and a scene camera (4) for supplying to interlace electronics (6) video data indicative of an image of the user eye and an image observed by the user, and a spot location module (12) includes adaptive threshold sub-module for providing an indication of parts of the image produced by the eye camera.
Abstract: An eye tracking system for monitoring the movement of a user's eye comprises an eye camera (2) and a scene camera (4) for supplying to interlace electronics (6) video data indicative of an image of the user's eye and an image of the scene observed by the user. In addition the system incorporates a frame grabber (8) for digitising the video data and for separating the eye and scene data into two processing channels, and a spot location module (12) for determining from the video data the location of a reference spot formed on the user's eye by illumination of the user's eye by a point source of light. The spot location module (12) includes adaptive threshold sub-module for providing an indication of parts of the image produced by the eye camera (2) which have a brightness greater than a threshold value, and a spot identification sub-module for selecting a valid reference spot by comparing those parts of the image with predetermined validity criteria. The system further incorporates a pupil location module (14) for determining the location of the pupil of the user's eye relative to the reference spot in order to determine the user's line of gaze, and a display for indicating the user's point of regard from the user's line of gaze determined by the pupil and spot location modules. Such an arrangement provides a fast and versatile eye tracking system.

Proceedings ArticleDOI
20 May 2002
TL;DR: A new system for estimating the direction of a user's eye gaze, consisting of five IR LEDs and a CCD camera, is suggested; it is comparatively simple and fast.
Abstract: In this paper, we suggest a new system to estimate the direction of a user's eye gaze. Our system consists of five IR LEDs and a CCD camera. The IR LEDs, which are attached to the corners of a computer monitor, make glints on the surface of the cornea. If the user is looking at the monitor, the center of the pupil always lies within the polygon formed by the glints. Consequently, the direction of the user's eye gaze can be computed without computing the geometric relation between the eye, the camera, and the monitor in 3D space. Our method is comparatively simple and fast. We introduce the method and show some experimental results.
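A simplified way to realize "pupil center inside the glint polygon, no 3D geometry": treat the four corner glints as the image of the monitor rectangle and map the pupil center through the induced homography. This stand-in ignores the fifth LED and the corneal curvature, so it only illustrates the flavor of the method.

```python
import cv2
import numpy as np

def gaze_on_screen(corner_glints, pupil_center, screen_w, screen_h):
    """Map the pupil center from the quadrilateral of the four corner
    glints to monitor coordinates via a homography.
    corner_glints: four (x, y) glint positions, ordered TL, TR, BR, BL."""
    src = np.array(corner_glints, np.float32)
    dst = np.array([[0, 0], [screen_w, 0],
                    [screen_w, screen_h], [0, screen_h]], np.float32)
    H = cv2.getPerspectiveTransform(src, dst)
    pt = np.array([[pupil_center]], np.float32)      # shape (1, 1, 2)
    return cv2.perspectiveTransform(pt, H)[0, 0]     # (x, y) on screen
```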

Patent
11 Dec 2002
TL;DR: Individual eye tracking data (100) can be used to determine whether an individual has actually looked at a particular region of a visual field; aggregation of data corresponding to multiple individuals, indicating both regions viewed and regions not viewed, can provide trends and other data useful to designers of graphical representations (e.g., Web pages, advertisements).
Abstract: Individual eye tracking data (100) can be used to determine whether an individual has actually looked at a particular region of a visual field. Aggregation of data corresponding to multiple individuals can provide trends and other data useful to designers of graphical representations (e.g., Web pages, advertisements). Display of the aggregated data, indicating both regions viewed and regions not viewed, can be accomplished using several different techniques. For example, the percentage of viewers that viewed a particular region can be represented as a particular color, or the underlying image can be blurred based on an acuity gradient and the number of individuals viewing various regions. The regions represented as viewed can be selected based on the type of viewing activity (e.g., reading, gazing) associated with a particular region.
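The aggregate the patent visualizes, the share of viewers who looked at each region, is straightforward to compute; coloring or acuity-gradient blurring would then be driven by these percentages. A sketch over a hypothetical grid of regions:

```python
import numpy as np

def percent_viewed(grid_shape, viewers_regions, n_viewers):
    """Percentage of viewers who looked at each region of a visual
    field. viewers_regions: one set of (row, col) region indices per
    viewer, listing the regions that viewer fixated."""
    counts = np.zeros(grid_shape)
    for regions in viewers_regions:
        for (r, c) in regions:
            counts[r, c] += 1
    return 100.0 * counts / n_viewers
```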

Proceedings ArticleDOI
16 Nov 2002
TL;DR: It is commendable to use synchronized gaze models when designing CVEs, but depending on the task situation, random models generating sufficient amounts of gaze may suffice.
Abstract: We present an experiment examining the effects of gaze on speech during three-person conversations. Understanding such effects is crucial for the design of teleconferencing systems and Collaborative Virtual Environments (CVEs). Previous findings suggest subjects take more turns when they experience more gaze. We evaluated whether this is because more gaze allowed them to better observe whether they were being addressed. We compared speaking behavior between two conditions: (1) subjects experienced gaze synchronized with conversational attention, and (2) subjects experienced random gaze. The amount of gaze experienced by subjects was a covariate. Results show subjects were 22% more likely to speak when gaze behavior was synchronized with conversational attention. However, covariance analysis showed these results were due to differences in the amount of gaze rather than synchronization of gaze, with correlations of .62 between amount of gaze and amount of subject speech. Task performance was 46% higher when gaze was synchronized. We conclude that it is commendable to use synchronized gaze models when designing CVEs, but depending on the task situation, random models generating sufficient amounts of gaze may suffice.