Manual And Gaze Input Cascaded (MAGIC) Pointing
Shumin Zhai, Carlos Morimoto, Steven Ihde
IBM Almaden Research Center
650 Harry Road
San Jose, CA 95120, USA
+1 408 927 1112
{zhai, morimoto, ihde}@almaden.ibm.com
ABSTRACT
This work explores a new direction in utilizing eye gaze for computer input. Gaze tracking has long been considered as an alternative or potentially superior pointing method for computer input. We believe that many fundamental limitations exist with traditional gaze pointing. In particular, it is unnatural to overload a perceptual channel such as vision with a motor control task. We therefore propose an alternative approach, dubbed MAGIC (Manual And Gaze Input Cascaded) pointing. With such an approach, pointing appears to the user to be a manual task, used for fine manipulation and selection. However, a large portion of the cursor movement is eliminated by warping the cursor to the eye gaze area, which encompasses the target. Two specific MAGIC pointing techniques, one conservative and one liberal, were designed, analyzed, and implemented with an eye tracker we developed. They were then tested in a pilot study. This early-stage exploration showed that the MAGIC pointing techniques might offer many advantages, including reduced physical effort and fatigue as compared to traditional manual pointing, greater accuracy and naturalness than traditional gaze pointing, and possibly faster speed than manual pointing. The pros and cons of the two techniques are discussed in light of both performance data and subjective reports.
Keywords
Gaze, eye, computer input, eye tracking, gaze tracking,
pointing, multi-modal interface, Fitts’ law, computer vision.
INTRODUCTION
Using the eyes as a source of input in “advanced user
interfaces” has long been a topic of interest to the HCI field
[1] [2] [3] [4]. Reports on eye tracking frequently appear not
only in the research literature, but also in the popular press,
such as the July 1996 issue of Byte magazine [5]. One of the
basic goals that numerous researchers have attempted to
achieve is to operate the user interface through eye gaze, with
pointing (target acquisition) as the core element. There are
many compelling reasons to motivate such a goal, including
the following:
1. There are situations that prohibit the use of the hands, such as when the user’s hands are disabled or continuously occupied with other tasks.
2. The eye can move very quickly in comparison to other parts of the body. Furthermore, as many researchers have long argued [3] [6], target acquisition usually requires the user to look at the target first, before actuating cursor control. Theoretically this means that if the eye gaze can be tracked and effectively used, no other input method can act as quickly. Increasing the speed of user input to the computer has long been an interest of HCI research.
3. Reducing fatigue and potential injury caused by operating keyboards and pointing devices is also an important concern in the user interface field. Repetitive stress injury (RSI) affects an increasing number of computer users. Most users are not concerned with RSI until serious problems occur. Utilizing eye gaze movement to replace or reduce the amount of stress to the hand can be beneficial.
Clearly, to replace “what you see (and click on) is what you
get” with “what you look at is what you get” [4] [6] has
captivating appeal. However, the design and implementation
of eye gaze-based computer input has been faced with two
types of challenges. One is eye tracking technology itself,
which will be briefly discussed in the Implementation section
of the paper. The other challenge is the human factor issues
involved in utilizing eye movement for computer input. Jacob
[7] eloquently discussed many of these issues with insightful
observations.
In our view, there are two fundamental shortcomings to the
existing gaze pointing techniques, regardless of the maturity
of eye tracking technology. First, given the one-degree size
of the fovea and the subconscious jittery motions that the eyes
constantly produce, eye gaze is not precise enough to operate
UI widgets such as scrollbars, hyperlinks, and slider handles
on today’s GUI interfaces. At a 25-inch viewing distance to
the screen, one degree of arc corresponds to 0.44 in, which is
twice the size of a typical scroll bar and much greater than the
size of a typical character.
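
For reference, the 0.44 in figure follows from simple trigonometry, as the quick check below shows (an added illustration, not a computation from the original text):

    import math
    # Linear extent subtended by a 1-degree visual angle at a 25-inch viewing
    # distance: s = 2 * d * tan(theta / 2).
    print(2 * 25 * math.tan(math.radians(0.5)))   # ~0.436 in, i.e. about 0.44 in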
Second, and perhaps more importantly, the eye, as one of our
primary perceptual devices, has not evolved to be a control
organ. Sometimes its movements are voluntarily controlled
while at other times it is driven by external events. With the
target selection by dwell time method, considered more
natural than selection by blinking [7], one has to be conscious
of where one looks and how long one looks at an object. If
one does not look at a target continuously for a set threshold
(e.g., 200 ms), the target will not be successfully selected. On
the other hand, if one stares at an object for more than the set
threshold, the object will be selected, regardless of the user’s
intention. In some cases a false target selection has no adverse effect; at other times it can be annoying and counter-productive (such as an unintended jump to a web page). Furthermore, dwell time can only substitute for one mouse click. There are often two steps to target activation: a single click selects the target (e.g., an application icon) and a double click (or a different physical button click) opens the icon (e.g., launches an application). To perform both steps with dwell time is even more difficult.
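
To make the dwell-time mechanism concrete, the following minimal sketch illustrates the kind of logic a dwell-time selector applies to a stream of gaze samples. It is an illustration only, not the authors’ implementation: the 200 ms threshold matches the example above, while the sample format and the hit_test helper are assumed.

    # Minimal sketch of dwell-time target selection (illustrative only).
    # Assumes gaze samples arrive as (timestamp_ms, x, y) tuples and a
    # hypothetical hit_test(x, y) that returns the widget under the gaze, or None.

    DWELL_THRESHOLD_MS = 200  # the set threshold, e.g. 200 ms as in the text

    def dwell_select(gaze_samples, hit_test):
        """Yield a widget each time the gaze rests on it for the dwell threshold."""
        current, dwell_start = None, None
        for t_ms, x, y in gaze_samples:
            target = hit_test(x, y)
            if target is not None and target == current:
                if t_ms - dwell_start >= DWELL_THRESHOLD_MS:
                    yield target                      # selected, intended or not
                    current, dwell_start = None, None
            else:
                current, dwell_start = target, t_ms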
In short, to load the visual perception channel with a motor
control task seems fundamentally at odds with users’ natural
mental model in which the eye searches for and takes in
information and the hand produces output that manipulates
external objects. Other than for disabled users, who have no
alternative, using eye gaze for practical pointing does not
appear to be very promising.
MAGIC POINTING
Are there interaction techniques that utilize eye movement to
assist the control task but do not force the user to be overly
conscious of his eye movement? We wanted to design a
technique in which pointing and selection remained primarily
a manual control task but were also aided by gaze tracking.
Our key idea is to use gaze to dynamically redefine (warp) the “home” position of the pointing cursor to be in the vicinity of the target, which is presumably what the user is looking at, thereby effectively reducing the cursor movement amplitude needed for target selection. Once the cursor position has been redefined, the user would only need to make a small movement to, and click on, the target with a regular manual input device. In other words, we wanted to achieve Manual And Gaze Input Cascaded (MAGIC) pointing, or Manual Acquisition with Gaze Initiated Cursor. There are many different ways of designing a MAGIC pointing technique. Critical to its effectiveness is the identification of the target the user intends to acquire. We have designed two MAGIC pointing techniques, one liberal and the other conservative in terms of target identification and cursor placement.
The liberal approach is to warp the cursor to every new object
the user looks at (See Figure 1). The user can then take
control of the cursor by hand near (or on) the target, or ignore
it and search for the next target. Operationally, a new object
Figure 1. The liberal MAGIC pointing technique: the cursor is placed in the vicinity of a target that the user fixates on. The true target lies within the 95%-confidence eye-tracking boundary circle around the reported gaze position; the cursor is warped to the eye tracking position, on or near the true target, from its previous position, which may be far from the target (e.g., 200 pixels).
is defined by sufficient distance (e.g., 120 pixels) from the
current cursor position, unless the cursor is in a controlled
motion by hand. Since there is a 120-pixel threshold, the
cursor will not be warped when the user does continuous
manipulation such as drawing. Note that this
MAGIC pointing
technique is different from traditional eye gaze control, where
the user uses his eye to point at targets either without a cursor
[7] or with a cursor [3] that constantly follows the jittery eye
gaze motion.
The liberal approach may appear “pro-active,” since the
cursor waits readily in the vicinity of or on every potential
target. The user may move the cursor once he decides to
acquire the target he is looking at. On the other hand, the user
may also feel that the cursor is over-active when he is merely
looking at a target, although he may gradually adapt to ignore
this behavior.
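
The liberal warping rule can be sketched roughly as follows, assuming the 120-pixel threshold mentioned above; the function and callback names are hypothetical and not taken from the actual system:

    import math

    WARP_THRESHOLD_PX = 120   # "sufficient distance" from the current cursor position

    def maybe_warp_liberal(gaze_xy, cursor_xy, hand_is_moving_cursor, set_cursor):
        """Warp the cursor to the gaze area when the user looks at a new object.

        Illustrative sketch: gaze_xy is the filtered gaze position, cursor_xy the
        current cursor position, and set_cursor a hypothetical callback that
        repositions the system cursor. Returns the (possibly new) cursor position.
        """
        if hand_is_moving_cursor:
            return cursor_xy                  # never warp during manual control
        dist = math.hypot(gaze_xy[0] - cursor_xy[0], gaze_xy[1] - cursor_xy[1])
        if dist > WARP_THRESHOLD_PX:          # a "new object" is being looked at
            set_cursor(*gaze_xy)              # cursor now waits near the target
            return gaze_xy
        return cursor_xy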
The more conservative MAGIC pointing technique we have explored does not warp a cursor to a target until the manual input device has been actuated. Once the manual input device has been actuated, the cursor is warped to the gaze area reported by the eye tracker. This area should be on or in the vicinity of the target. The user would then steer the cursor manually towards the target to complete the target acquisition.
As illustrated in Figure 2, to
minimize directional uncertainty
after the cursor appears in the conservative technique, we
introduced an “intelligent” bias. Instead of being placed at the
center of the gaze area, the cursor position is offset to the
intersection of the manual actuation vector and the boundary
of the gaze area. This means that once warped, the cursor is
likely to appear in motion towards the target, regardless of
how the user actually actuated the manual input device. We
hoped that with the intelligent bias the user would not have to
actuate the input device, observe the cursor position, and decide
in which direction to steer the cursor. The cost to this method
is the increased manual movement amplitude.
Figure 2. The conservative MAGIC pointing technique with “intelligent offset”: once the manual input device is actuated, the cursor is warped to the boundary of the gaze area (the 95%-confidence eye-tracking circle around the reported gaze position, which contains the true target) along the initial manual actuation vector, away from the previous cursor position, which may be far from the target.
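
Geometrically, the intelligent offset can be pictured as placing the cursor on the boundary of the gaze-area circle, on the side opposite to the initial actuation vector, so that continuing the same hand motion carries the cursor across the gaze area towards the target. The sketch below is one reading of that description, not the authors’ implementation:

    import math

    def conservative_warp_point(gaze_xy, gaze_radius_px, actuation_vec):
        """Return the warped cursor position for the conservative technique.

        gaze_xy        -- reported gaze position (centre of the gaze area)
        gaze_radius_px -- radius of the ~95% confidence gaze area, in pixels
        actuation_vec  -- (dx, dy) of the initial manual movement after actuation
        The cursor lands on the gaze-area boundary, opposite the actuation vector.
        """
        dx, dy = actuation_vec
        norm = math.hypot(dx, dy)
        if norm == 0:                 # no direction yet: fall back to the centre
            return gaze_xy
        ux, uy = dx / norm, dy / norm
        return (gaze_xy[0] - gaze_radius_px * ux,
                gaze_xy[1] - gaze_radius_px * uy)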
To initiate a pointing trial, there are two strategies available
to the user. One is to follow “virtual inertia:” move from the
cursor’s current position towards the new target the user is
looking at. This is likely the strategy the user will employ,
due to the way the user interacts with today’s interface. The
alternative strategy, which may be more advantageous but
takes time to learn, is to ignore the previous cursor position
and make a motion which is most convenient and least
effortful to the user for a given input device. For example, on
a small touchpad, the user may find it convenient to make an
upward stroke with the index finger, causing the cursor to
appear below the target.
The goal of the conservative MAGIC pointing method is the
following. Once the user looks at a target and moves the input
device, the cursor will appear “out of the blue” in motion
towards the target, on the side of the target opposite to the
initial actuation vector. In comparison to the liberal approach,
this conservative approach has both pros and cons. While
with this technique the cursor would never be over-active and
jump to a place the user does not intend to acquire, it may
require more hand-eye coordination effort.
Both the liberal and the conservative MAGIC pointing techniques offer the following potential advantages:
1. Reduction of manual stress and fatigue, since the cross-
screen long-distance cursor movement is eliminated from
manual control.
2. Practical accuracy level. In comparison to traditional pure gaze pointing, whose accuracy is fundamentally limited by the nature of eye movement, the MAGIC pointing techniques let the hand complete the pointing task, so they can be as accurate as any other manual input technique.
3. A more natural mental model for the user. The user does not have to be aware of the role of the eye gaze. To the user, pointing continues to be a manual task, with a cursor conveniently appearing where it needs to be.
4. Speed. Since the need for large magnitude pointing operations is less than with pure manual cursor control, it is possible that MAGIC pointing will be faster than pure manual pointing.
5. Improved subjective speed and ease-of-use. Since the manual pointing amplitude is smaller, the user may perceive the MAGIC pointing system to operate faster and more pleasantly than pure manual control, even if it operates at the same speed or more slowly.
The fourth point warrants further discussion. According to the well accepted Fitts’ Law [8], manual pointing time is logarithmically proportional to the A/W ratio, where A is the movement distance and W is the target size. In other words, targets which are smaller or farther away take longer to acquire. For MAGIC pointing, since the target size remains the same but the cursor movement distance is shortened, the pointing time can hence be reduced.
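
To see why shortening A while keeping W fixed should help, consider the widely used Shannon formulation of Fitts’ Law, MT = a + b log2(A/W + 1). The sketch below is illustrative only: the paper does not commit to a particular formulation, and the constants and distances used are assumed rather than measured.

    import math

    def fitts_mt(amplitude_px, width_px, a=0.1, b=0.15):
        """Predicted movement time (s): MT = a + b * log2(A/W + 1).

        The intercept a and slope b are device dependent; the values here are
        purely illustrative, not measured in the study.
        """
        return a + b * math.log2(amplitude_px / width_px + 1)

    # Example: a 20-pixel target, 800 pixels away without warping versus roughly
    # 120 pixels away after a warp to the gaze area (both distances illustrative).
    print(fitts_mt(800, 20))   # ~0.90 s
    print(fitts_mt(120, 20))   # ~0.52 s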
It is less clear if eye gaze control follows Fitts’ Law. In Ware
and Mikaelian’s study [3], selection time was shown to be
logarithmically proportional to target distance, thereby
conforming to Fitts’ Law. In contrast, Sibert and Jacob [9] found that trial completion time with eye tracking input increases little with distance, therefore defying Fitts’ Law.
In addition to problems with today’s eye tracking systems, such as delay, error, and inconvenience, there may also be many potential human factor disadvantages to the MAGIC pointing techniques we have proposed, including the following:
1. With the more liberal MAGIC pointing technique, the cursor warping can be overactive at times, since the cursor moves to the new gaze location whenever the eye gaze moves more than a set distance (e.g., 120 pixels) away from the cursor. This could be particularly distracting when the user is trying to read. It is possible to introduce an additional constraint according to the context. For example, when the user’s eye appears to follow a text reading pattern, MAGIC pointing can be automatically suppressed.
2. With the more conservative MAGIC pointing technique, the uncertainty of the exact location at which the cursor might appear may force the user, especially a novice, to adopt a cumbersome strategy: take a touch (use the manual input device to activate the cursor), wait (for the cursor to appear), and move (the cursor to the target manually). Such a strategy may prolong the target acquisition time. The user may have to learn a novel hand-eye coordination pattern to be efficient with this technique.
Clearly, experimental (implementation and empirical) work is
needed to validate, refine, or invent alternative MAGIC
pointing techniques.
IMPLEMENTATION
We undertook two engineering efforts to implement the MAGIC pointing techniques. One was to design and implement an eye tracking system, and the other was to implement the MAGIC pointing techniques at the operating system level, so that the techniques can work with all software applications beyond “demonstration” software.
The IBM Almaden Eye Tracker
Since the goal of this work is to explore MAGIC pointing as a user interface technique, we started out by purchasing a commercial eye tracker (ASL Model 5000) after a market survey. In comparison to the systems reported in early studies (e.g., [7]), this system is much more compact and reliable. However, we felt that it was still not robust enough for a variety of people with different eye characteristics, such as pupil brightness and correction glasses. We hence chose to develop and use our own eye tracking system [10]. Available commercial systems, such as those made by ISCAN Incorporated, LC Technologies, and Applied Science Laboratories (ASL), rely on a single light source that is positioned either off the camera axis, in the case of the ISCAN ETL-400 systems, or on-axis, in the case of the LCT and the ASL E504 systems. Illumination from an off-axis source (or ambient illumination) generates a dark pupil image. When the light source is placed on-axis with the camera optical axis, the camera is able to detect the light reflected from the interior of the eye, and the image of the pupil appears bright [11] [12] (see Figure 3). This effect is often seen as the red-eye in flash photographs when the flash is close to the camera lens.
The Almaden system uses two near-infrared (IR) time-multiplexed light sources, composed of two sets of IR LEDs, which were synchronized with the camera frame rate. One light source is placed very close to the camera’s optical axis and is synchronized with the even frames. Odd frames are synchronized with the second light source, positioned off-axis. The two light sources are calibrated to provide approximately equivalent whole-scene illumination. Pupil detection is realized by means of subtracting the dark pupil image from the bright pupil image. After thresholding the difference, the largest connected component is identified as the pupil. This technique significantly increases the robustness and reliability of the eye tracking system. After implementing our system with satisfactory results, we discovered that similar pupil detection schemes had been independently developed by Tomono et al. [13] and Ebisawa and Satoh [14]. It is unfortunate that such a method has not been used in the commercial systems. We recommend that future eye tracking product designers consider such an approach.
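
The subtraction-and-thresholding step can be sketched as follows. This is a modern illustration written with OpenCV and NumPy, which the 1999 Almaden system did not use; the threshold value and frame handling are assumptions.

    import cv2
    import numpy as np

    def detect_pupil(bright_frame, dark_frame, diff_threshold=40):
        """Locate the pupil by subtracting the dark-pupil image from the bright one.

        bright_frame: grayscale frame lit by the on-axis IR source (bright pupil).
        dark_frame:   grayscale frame lit by the off-axis IR source (dark pupil).
        Returns the (x, y) centroid of the largest connected component of the
        thresholded difference, or None if nothing is found.
        """
        diff = cv2.subtract(bright_frame, dark_frame)         # pupil stands out
        _, mask = cv2.threshold(diff, diff_threshold, 255, cv2.THRESH_BINARY)
        n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
        if n <= 1:                                            # background only
            return None
        largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])  # skip label 0
        return tuple(centroids[largest])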
Once the pupil has been detected, the corneal reflection (the glint reflected from the surface of the cornea due to one of the light sources) is determined from the dark pupil image. The reflection is then used to estimate the user’s point of gaze in terms of the screen coordinates where the user is looking.
The estimation of the user’s gaze requires an initial
calibration procedure, similar to that required by commercial
eye trackers.
Our system operates at 30 frames per second on a Pentium II 333 MHz machine running Windows NT. It can work with any PCI frame grabber compatible with Video for Windows.

We programmed the two MAGIC pointing techniques on a Windows NT system. The techniques work independently from the applications. The MAGIC pointing program takes data from both the manual input device (of any type, such as a mouse) and the eye tracking system, running either on the same machine or on another machine connected via serial port.
Raw data from an eye tracker cannot be directly used for gaze-based interaction, due to noise from image processing, eye movement jitter, and samples taken during saccade (ballistic eye movement) periods. We experimented with various filtering techniques and found that the most effective filter in our case is similar to that described in [7]. The goal of filter design in general is to make the best compromise between preserving signal bandwidth and eliminating unwanted noise. In the case of eye tracking, as Jacob argued, eye information relevant to interaction lies in the fixations. The key is to select fixation points with minimal delay. Samples collected during a saccade are unwanted and should be avoided. In designing our algorithm for picking points of fixation, we considered our tracking system speed (30 Hz), and that the MAGIC pointing techniques utilize gaze information only once for each new target, probably
immediately after a saccade. Our filtering algorithm was
designed to pick a fixation with minimum delay by means of
selecting two adjacent points over two samples.
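
One reading of the “two adjacent points over two samples” rule is sketched below; it is an interpretation rather than the exact algorithm. A fixation is accepted as soon as two consecutive gaze samples agree to within a small spatial tolerance, which at 30 Hz adds only about one frame (roughly 33 ms) of delay; the tolerance value is assumed.

    import math

    SPATIAL_TOLERANCE_PX = 20   # assumed jitter tolerance, not from the paper

    def pick_fixations(gaze_samples, tol=SPATIAL_TOLERANCE_PX):
        """Yield fixation points with minimal delay from a stream of (x, y) samples.

        Saccade samples rarely agree with their neighbours, so requiring two
        adjacent samples to lie close together rejects them while accepting a
        fixation after roughly one frame. The reported fixation is the average
        of the two agreeing samples.
        """
        prev = None
        for x, y in gaze_samples:
            if prev is not None and math.hypot(x - prev[0], y - prev[1]) <= tol:
                yield ((x + prev[0]) / 2.0, (y + prev[1]) / 2.0)
            prev = (x, y)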
EXPERIMENT
Empirical studies, such as [3], are relatively rare in eye tracking-based interaction research, although they are particularly needed in this field. Human behavior and processes at the perceptual motor level often do not conform to conscious-level reasoning. One usually cannot correctly describe how to make a turn on a bicycle. Hypotheses on novel interaction techniques can only be validated by empirical data. However, it is also particularly difficult to conduct empirical research on gaze-based interaction techniques, due to the complexity of eye movement and the lack of reliability in eye tracking equipment. Satisfactory results only come when “everything is going right.” When results are not as expected, it is difficult to find the true reason among many possible reasons: Is it because a subject’s particular eye property fooled the eye tracker? Was there a calibration error? Or random noise in the imaging system? Or is the hypothesis in fact invalid?
We are still at a very early stage of exploring the MAGIC pointing techniques. More refined or even very different techniques may be designed in the future. We are by no means ready to conduct the definitive empirical studies on MAGIC pointing. However, we also wanted to subject our work to empirical evaluations early so that quantitative observations can be made and fed back to the iterative design-evaluation-design cycle. We therefore decided to conduct a small-scale pilot study to take an initial peek at the use of MAGIC pointing, however unrefined.
Experimental Design
The two MAGIC pointing techniques described earlier were put to the test using a set of parameters, such as the filter’s temporal and spatial thresholds, the minimum cursor warping distance, and the amount of “intelligent bias” (subjectively selected by the authors without extensive user testing). Ultimately the MAGIC pointing techniques should be evaluated with an array of manual input devices, against both pure manual and pure gaze-operated pointing methods (in the case of large targets suitable for gaze pointing). Since this is an early pilot study, we decided to limit ourselves to one manual input device. A standard mouse was first considered as the manual input device in the experiment. However, we soon realized that it was not the most suitable device for MAGIC pointing, especially when a user decides to use the push-upwards strategy with the intelligent offset: because in such a case the user always moves in one direction, the mouse tends to be moved off the pad, forcing the user to adjust the mouse position, often during a pointing trial. We hence decided to use a miniature isometric pointing stick (IBM TrackPoint IV, commercially used in the IBM ThinkPad 600 and 770 series notebook computers). Another device suitable for MAGIC pointing is a touchpad: the user can choose one convenient gesture to take advantage of the intelligent offset.

The experimental task was essentially a Fitts’ pointing task. Subjects were asked to point and click at targets appearing in random order. If the subject clicked off-target, a miss was logged but the trial continued until a target was clicked. An extra trial was added to make up for the missed trial. Only trials with no misses were collected for time performance analyses. Subjects were asked to complete the task as quickly as possible and as accurately as possible. To serve as a motivator, a $20 cash prize was set for the subject with the shortest mean session completion time with any technique.
The task was presented on a 20-inch CRT color monitor, with a 15 by 11 inch viewable area set at a resolution of 1280 by 1024 pixels. Subjects sat at a distance of 25 inches from the screen.
The following factors were manipulated in the experiments:
• three pointing directions: horizontal, vertical and diagonal
A within-subject design was used. Each subject performed the task with all three techniques: (1) standard, pure manual pointing with no gaze tracking (No-Gaze); (2) the conservative MAGIC pointing method with intelligent offset (Gaze1); (3) the liberal MAGIC pointing method (Gaze2). Nine subjects, seven male and two female, completed the experiment. The order of techniques was balanced by a Latin square pattern. Seven subjects were experienced TrackPoint users, while two had little or no experience.

With each technique, a 36-trial practice session was first given, during which subjects were encouraged to explore and to find the most suitable strategies (aggressive, gentle, etc.). The practice session was followed by two data collection sessions.

Citations
Book

Eye Tracking Methodology: Theory and Practice

TL;DR: Covers the human visual system (HVS), visual attention, the neurological substrate of the HVS, neuroscience and psychology, and industrial engineering and human factors.
Book ChapterDOI

Eye Tracking in Human-Computer Interaction and Usability Research: Ready to Deliver the Promises

TL;DR: This chapter discusses the application of eye movements to user interfaces, both for analyzing interfaces (measuring usability) and as an actual control medium within a human–computer dialogue.
Journal ArticleDOI

A breadth-first survey of eye-tracking applications

TL;DR: Eye-tracking applications are surveyed in a breadth-first manner, reporting on work from the following domains: neuroscience, psychology, industrial engineering and human factors, marketing/advertising, and computer science.
Proceedings ArticleDOI

Sensing techniques for mobile interaction

TL;DR: This work introduces and integrates a set of sensors into a handheld device, and demonstrates several new functionalities engendered by the sensors, such as recording memos when the device is held like a cell phone, switching between portrait and landscape display modes by holding the device in the desired orientation.
Proceedings ArticleDOI

Evaluation of eye gaze interaction

TL;DR: Two experiments are presented that compare an interaction technique developed for object selection based on where a person is looking with the most commonly used selection method using a mouse, and find that the eye gaze interaction technique is faster than selection with a mouse.
References
Journal ArticleDOI

The information capacity of the human motor system in controlling the amplitude of movement.

TL;DR: The motor system in the present case is defined as including the visual and proprioceptive feedback loops that permit S to monitor his own activity, and the information capacity of the motor system is specified by its ability to produce consistently one class of movement from among several alternative movement classes.
Journal ArticleDOI

Fitts' law as a research and design tool in human-computer interaction

TL;DR: The present study provides a historical and theoretical context for the Fitts' law model, including an analysis of problems that have emerged through the systematic deviation of observations from predictions.
Journal ArticleDOI

Survey of eye movement recording methods

TL;DR: Most of the known techniques for measuring eye movements are reviewed, explaining their principle of operation and their primary advantages and disadvantages.
Journal ArticleDOI

The use of eye movements in human-computer interaction techniques: what you look at is what you get

TL;DR: In this paper, the usefulness of eye movements as a fast and convenient auxiliary user-to-computer communication mode was investigated, and the first eye movement-based interaction techniques were devised and implemented in a laboratory.
Proceedings ArticleDOI

What you look at is what you get: eye movement-based interaction techniques

TL;DR: Some of the human factors and technical considerations that arise in trying to use eye movements as an input medium are discussed and the first eye movement-based interaction techniques that are devised and implemented in the laboratory are described.
Frequently Asked Questions (11)
Q1. What have the authors contributed in “Manual and gaze input cascaded (MAGIC) pointing”?

This work explores a new direction in utilizing eye gaze for computer input. Gaze tracking has long been considered as an alternative or potentially superior pointing method for computer input. The authors believe that many fundamental limitations exist with traditional gaze pointing, and they therefore propose an alternative approach, dubbed MAGIC (Manual And Gaze Input Cascaded) pointing. Two MAGIC pointing techniques were designed and tested in a pilot study, and the pros and cons of the two techniques are discussed in light of both performance data and subjective reports.

The task deviated from the classic Fitts’ Law experiment: to simulate more realistic tasks the authors used circular targets distributed in varied directions in a randomly shuffled order, instead of two vertical bars displaced only in the horizontal dimension.

By the end of the experiment, subjects had less than 10 minutes of exposure to each technique, but were able to perform at a speed similar to their manual control skills. 

In the second session of the experiment, on average, subjects using the liberal MAGIC pointing technique performed slightly faster (6.8%) and those using the conservative technique slightly slower (4.3%) than those using pure manual pointing (1.41 seconds).

As computer power and the price of cameras and video processing hardware continue to exponentially improve, it is conceivable that in the future mainstream computers will all be equipped with technology similar to that which the authors used in this experiment. 

Some also pointed out that it took them several trials to get used to the conservative technique, specifically the uncertainty of not knowing exactly where the cursor would appear. 

ACKNOWLEDGMENTS: This study was conducted as part of the IBM Blue Eyes project, led by Myron Flickner, who provided us great support.

On a -5 (most unfavorable) to +5 (most favorable) scale, subjects gave an average rating of 1.5 (spread from -1 to +3) to the Gaze1 technique and 3.5 (from 2 to 4.5) to the Gaze2 technique. 

The targets used in the experiment varied from small (0.53 degree) to large (1.6 degree), resembling realistic targets in practice.

The intelligent offset, designed to reduce the directional uncertainty, did not go unnoticed by some users, who pointed out that the conservative technique had greater “tracking error”: the cursor was farther from the target.

The price (and size) of commercial eye tracking equipment has dropped significantly in the last decade, from over US$100k to around US$20k.