
Showing papers on "Eye tracking" published in 2009


Proceedings ArticleDOI
01 Sep 2009
TL;DR: This paper collects eye tracking data of 15 viewers on 1003 images and uses this database as training and testing examples to learn a model of saliency based on low, middle and high-level image features.
Abstract: For many applications in graphics, design, and human computer interaction, it is essential to understand where humans look in a scene. Where eye tracking devices are not a viable option, models of saliency can be used to predict fixation locations. Most saliency approaches are based on bottom-up computation that does not consider top-down image semantics and often does not match actual eye movements. To address this problem, we collected eye tracking data of 15 viewers on 1003 images and use this database as training and testing examples to learn a model of saliency based on low, middle and high-level image features. This large database of eye tracking data is publicly available with this paper.
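
A minimal sketch of the kind of pipeline such a dataset enables (not the authors' released code): per-pixel feature maps are sampled at fixated and non-fixated locations and a simple linear classifier is trained to output a fixation probability. The feature layout, sampling scheme, and classifier choice are illustrative assumptions.

```python
# Hypothetical sketch: learn a saliency model from eye tracking data.
# Assumes precomputed per-pixel feature maps (H x W x D) and a binary
# fixation map per image; all choices here are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

def sample_pixels(feature_maps, fixation_map, n_neg=500, seed=0):
    """Positives at fixated pixels, negatives at random non-fixated pixels."""
    rng = np.random.default_rng(seed)
    pos = np.argwhere(fixation_map > 0)
    neg = np.argwhere(fixation_map == 0)
    neg = neg[rng.choice(len(neg), size=min(n_neg, len(neg)), replace=False)]
    idx = np.vstack([pos, neg])
    X = feature_maps[idx[:, 0], idx[:, 1]]
    y = np.concatenate([np.ones(len(pos)), np.zeros(len(neg))])
    return X, y

def train_saliency_model(per_image_features, per_image_fixations):
    """Pool samples over all images and fit a simple linear fixation predictor."""
    samples = [sample_pixels(f, m) for f, m in zip(per_image_features, per_image_fixations)]
    X = np.vstack([s[0] for s in samples])
    y = np.concatenate([s[1] for s in samples])
    return LogisticRegression(max_iter=1000).fit(X, y)

def predict_saliency(model, feature_maps):
    """Per-pixel probability of being fixated, reshaped into a saliency map."""
    h, w, d = feature_maps.shape
    return model.predict_proba(feature_maps.reshape(-1, d))[:, 1].reshape(h, w)
```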

2,093 citations


Journal ArticleDOI
TL;DR: Research on the following topics is reviewed with respect to reading: (a) the perceptual span (or span of effective vision), (b) preview benefit, (c) eye movement control, and (d) models of eye movements.
Abstract: Eye movements are now widely used to investigate cognitive processes during reading, scene perception, and visual search. In this article, research on the following topics is reviewed with respect to reading: (a) the perceptual span (or span of effective vision), (b) preview benefit, (c) eye movement control, and (d) models of eye movements. Related issues with respect to eye movements during scene perception and visual search are also reviewed. It is argued that research on eye movements during reading has been somewhat advanced over research on eye movements in scene perception and visual search and that some of the paradigms developed to study reading should be more widely adopted in the study of scene perception and visual search. Research dealing with "real-world" tasks and research utilizing the visual-world paradigm are also briefly discussed.

2,033 citations


Proceedings ArticleDOI
20 Jun 2009
TL;DR: It is shown that using Multiple Instance Learning (MIL) instead of traditional supervised learning avoids these problems, and can therefore lead to a more robust tracker with fewer parameter tweaks.
Abstract: In this paper, we address the problem of learning an adaptive appearance model for object tracking. In particular, a class of tracking techniques called “tracking by detection” have been shown to give promising results at real-time speeds. These methods train a discriminative classifier in an online manner to separate the object from the background. This classifier bootstraps itself by using the current tracker state to extract positive and negative examples from the current frame. Slight inaccuracies in the tracker can therefore lead to incorrectly labeled training examples, which degrades the classifier and can cause further drift. In this paper we show that using Multiple Instance Learning (MIL) instead of traditional supervised learning avoids these problems, and can therefore lead to a more robust tracker with fewer parameter tweaks. We present a novel online MIL algorithm for object tracking that achieves superior results with real-time performance.
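
As a rough illustration of the tracking-by-detection loop described above (a simplification, not the paper's online MILBoost algorithm), the sketch below updates an online classifier from a bag of patches around the current track, keeping only the most confident bag instance so that slightly misaligned patches cannot poison the model. The feature function and classifier are placeholders.

```python
# Simplified MIL-flavoured tracking-by-detection; interfaces are assumptions.
import numpy as np
from sklearn.linear_model import SGDClassifier

class MILLikeTracker:
    """Tracking-by-detection with a bag of positive patches per update (simplified)."""

    def __init__(self, feat_fn):
        self.feat_fn = feat_fn            # maps an image patch to a feature vector
        self.clf = SGDClassifier()        # online linear classifier, updated each frame
        self.ready = False

    def update(self, positive_bag, negative_patches):
        """positive_bag: patches around the tracked box; negative_patches: far from it."""
        X_pos = np.array([self.feat_fn(p) for p in positive_bag])
        X_neg = np.array([self.feat_fn(p) for p in negative_patches])
        if self.ready:
            # Keep only the bag instance the current model scores highest,
            # so a slightly misaligned box does not become a hard positive label.
            best = int(np.argmax(self.clf.decision_function(X_pos)))
            X_pos = X_pos[[best]]
        X = np.vstack([X_pos, X_neg])
        y = np.concatenate([np.ones(len(X_pos)), np.zeros(len(X_neg))])
        self.clf.partial_fit(X, y, classes=[0, 1])
        self.ready = True

    def locate(self, candidate_patches, candidate_boxes):
        """Score candidate windows in the new frame and return the best-scoring box."""
        X = np.array([self.feat_fn(p) for p in candidate_patches])
        return candidate_boxes[int(np.argmax(self.clf.decision_function(X)))]
```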

1,752 citations



Book ChapterDOI
20 Aug 2009
TL;DR: The objective of the tutorial is to give an overview on how eye tracking is currently used and how it can be used as a method in human computer interaction research and especially in usability research.
Abstract: The objective of the tutorial is to give an overview on how eye tracking is currently used and how it can be used as a method in human computer interaction research, and especially in usability research. An eye tracking system records how the eyes move while a subject is completing a task, for example on a web site. By analyzing these eye movements, we are able to gain an objective insight into the behavior of that person.

681 citations


Journal ArticleDOI
TL;DR: This paper discusses the importance of the eye region and the impact of gaze on the most significant aspects of face processing, considers the evidence for a neuronal eye detector mechanism, and examines the links between eye gaze and social cognition impairments in autism.

530 citations


Journal ArticleDOI
TL;DR: A well-known bottom-up computer model of saliency-driven attention that includes conspicuity maps for color, orientation, and intensity is refined by adding high-level semantic information and it is demonstrated that this significantly improves the ability to predict eye fixations in natural images.
Abstract: Previous studies of eye gaze have shown that when looking at images containing human faces, observers tend to rapidly focus on the facial regions. But is this true of other high-level image features as well? We here investigate the extent to which natural scenes containing faces, text elements, and cell phones - as a suitable control - attract attention by tracking the eye movements of subjects in two types of tasks - free viewing and search. We observed that subjects in free-viewing conditions look at faces and text 16.6 and 11.1 times more than similar regions normalized for size and position of the face and text. In terms of attracting gaze, text is almost as effective as faces. Furthermore, it is difficult to avoid looking at faces and text even when doing so imposes a cost. We also found that subjects took longer in making their initial saccade when they were told to avoid faces/text and their saccades landed on a non-face/non-text object. We refine a well-known bottom–up computer model of saliency-driven attention that includes conspicuity maps for color, orientation, and intensity by adding high-level semantic information (i.e., the location of faces or text) and demonstrate that this significantly improves the ability to predict eye fixations in natural images. Our enhanced model’s predictions yield an area under the ROC curve over 84% for images that contain faces or text when compared against the actual fixation pattern of subjects. This suggests that the primate visual system allocates attention using such an enhanced saliency map.
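
A hedged sketch of the kind of combination the study describes: a bottom-up saliency map is blended with a binary face/text location map and the result is scored against recorded fixations with ROC AUC. The linear weighting is an illustrative assumption, not the authors' exact formulation.

```python
# Blend bottom-up saliency with a semantic (face/text) channel and evaluate with AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

def enhanced_saliency(bottom_up, semantic_map, w_semantic=0.5):
    """Blend a normalized bottom-up saliency map with a face/text location map."""
    bu = (bottom_up - bottom_up.min()) / (np.ptp(bottom_up) + 1e-9)
    return (1 - w_semantic) * bu + w_semantic * semantic_map

def fixation_auc(saliency, fixation_map):
    """Area under the ROC curve for predicting fixated vs. non-fixated pixels."""
    return roc_auc_score(fixation_map.ravel() > 0, saliency.ravel())
```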

362 citations


Journal ArticleDOI
TL;DR: It is demonstrated that viewing task biases the selection of scene regions and aggregate measures of fixation time on those regions but does not influence other measures, such as the duration of individual fixations.
Abstract: Expanding on the seminal work of G. Buswell (1935) and I. A. Yarbus (1967), we investigated how task instruction influences specific parameters of eye movement control. In the present study, 20 participants viewed color photographs of natural scenes under two instruction sets: visual search and memorization. Results showed that task influenced a number of eye movement measures including the number of fixations and gaze duration on specific objects. Additional analyses revealed that the areas fixated were qualitatively different between the two tasks. However, other measures such as average saccade amplitude and individual fixation durations remained constant across the viewing of the scene and across tasks. The present study demonstrates that viewing task biases the selection of scene regions and aggregate measures of fixation time on those regions but does not influence other measures, such as the duration of individual fixations.

356 citations


Book
04 Oct 2009
TL;DR: This book surveys the human eye movement repertoire and how our eyes question the world, presents observations of gaze in everyday activities such as domestic tasks, locomotion, driving, and ball games, and closes with commentaries on representations of the visual world, the neuroscience of gaze and action, and attention, memory and learning.
Abstract: PRELIMINARIES: 1. Introduction; 2. The human eye movement repertoire; 3. How our eyes question the world. OBSERVATIONS: 4. Sedentary tasks; 5. Domestic tasks; 6. Locomotion on foot; 7. Driving; 8. Ball games: when to look where?; 9. Social roles of eye movements. COMMENTARIES: 10. Representations of the visual world; 11. Neuroscience of gaze and action; 12. Attention, memory and learning.

305 citations


Journal ArticleDOI
TL;DR: The manipulation of anxiety resulted in significant reductions in the duration of the quiet eye period and free throw success rate, thus supporting the predictions of attentional control theory.
Abstract: The aim of this study was to test the predictions of attentional control theory using the quiet eye period as an objective measure of attentional control. Ten basketball players took free throws in two counterbalanced experimental conditions designed to manipulate the anxiety they experienced. Point of gaze was measured using an ASL Mobile Eye tracker and fixations including the quiet eye were determined using frame-by-frame analysis. The manipulation of anxiety resulted in significant reductions in the duration of the quiet eye period and free throw success rate, thus supporting the predictions of attentional control theory. Anxiety impaired goal-directed attentional control (quiet eye period) at the expense of stimulus-driven control (more fixations of shorter duration to various targets). The findings suggest that attentional control theory may be a useful theoretical framework for examining the relationship between anxiety and performance in visuomotor sport skills.
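
For readers unfamiliar with the measure, the snippet below shows one simplified way to compute a quiet eye duration from fixation data (roughly, the final fixation on the target that begins before movement initiation). Field names, units, and the region-of-interest test are assumptions, not the study's frame-by-frame protocol.

```python
# Illustrative quiet-eye computation from a list of fixations; all fields assumed.
def quiet_eye_duration(fixations, movement_onset, target_roi):
    """fixations: list of dicts with 'start', 'end' (ms) and 'x', 'y' (pixels).
    target_roi: (x0, y0, x1, y1) bounding box of the target (e.g. the hoop)."""
    x0, y0, x1, y1 = target_roi
    on_target = [f for f in fixations
                 if x0 <= f["x"] <= x1 and y0 <= f["y"] <= y1
                 and f["start"] < movement_onset]
    if not on_target:
        return 0.0
    last = max(on_target, key=lambda f: f["start"])   # final pre-movement fixation on target
    return last["end"] - last["start"]
```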

273 citations


Journal ArticleDOI
TL;DR: A detailed description of how to calibrate, collect, and analyze infants' gaze in a series of experimental paradigms is provided, focusing specifically on the analysis of visual tracking, point of gaze, and the latency of gaze shifts (prediction and reactive gaze shifts).
Abstract: The current review offers a unique introduction to the use of corneal reflection eye tracking in infancy research. We provide a detailed description of how to calibrate, collect, and analyze infants' gaze in a series of experimental paradigms, focusing specifically on the analysis of visual tracking, point of gaze, and the latency of gaze shifts (prediction and reactive gaze shifts). The article ends with a critical discussion about the pros and cons of corneal reflection eye tracking.

Journal ArticleDOI
TL;DR: Quantitative results from a naturalistic driving study show that head motion, when combined with lane position and vehicle dynamics, is a reliable cue for lane-change intent prediction, and there may be a biological basis for head motion to begin earlier than eye motion during "lane-change"-related gaze shifts.
Abstract: Driver behavioral cues may present a rich source of information and feedback for future intelligent advanced driver-assistance systems (ADASs). With the design of a simple and robust ADAS in mind, we are interested in determining the most important driver cues for distinguishing driver intent. Eye gaze may provide a more accurate proxy than head movement for determining driver attention, whereas the measurement of head motion is less cumbersome and more reliable in harsh driving conditions. We use a lane-change intent-prediction system (McCall et al., 2007) to determine the relative usefulness of each cue for determining intent. Various combinations of input data are presented to a discriminative classifier, which is trained to output a prediction of probable lane-change maneuver at a particular point in the future. Quantitative results from a naturalistic driving study are presented and show that head motion, when combined with lane position and vehicle dynamics, is a reliable cue for lane-change intent prediction. The addition of eye gaze does not improve performance as much as simpler head dynamics cues. The advantage of head data over eye data is shown to be statistically significant (p < 0.01) 3 s ahead of lane-change situations, indicating that there may be a biological basis for head motion to begin earlier than eye motion during "lane-change"-related gaze shifts.
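
A minimal sketch of the kind of intent classifier described here: fixed-length windows of driver and vehicle signals are flattened into feature vectors and a discriminative classifier outputs the probability of an upcoming lane change. The signal names, window length, and SVM choice are illustrative assumptions.

```python
# Hypothetical lane-change intent classifier over windows of driver/vehicle signals.
import numpy as np
from sklearn.svm import SVC

def make_features(head_yaw, lane_offset, speed, steering, window=30):
    """Stack the most recent `window` samples of each signal into one feature vector."""
    return np.concatenate([np.asarray(s)[-window:] for s in
                           (head_yaw, lane_offset, speed, steering)])

def train_intent_classifier(windows, labels):
    """windows: list of feature vectors; labels: 1 if a lane change followed shortly after."""
    clf = SVC(kernel="rbf", probability=True)
    clf.fit(np.vstack(windows), labels)
    return clf

def lane_change_probability(clf, window):
    """Probability that a lane change will occur in the prediction horizon."""
    return clf.predict_proba(window.reshape(1, -1))[0, 1]
```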

Journal ArticleDOI
TL;DR: This paper presents a novel solution to the efficiency-versus-verification dilemma in tracking by considering the context of the tracking scene: a set of auxiliary objects, automatically discovered in the video on the fly by data mining, is integrated into the tracking process.
Abstract: Enormous uncertainties in unconstrained environments lead to a fundamental dilemma that many tracking algorithms have to face in practice: Tracking has to be computationally efficient, but verifying whether or not the tracker is following the true target tends to be demanding, especially when the background is cluttered and/or when occlusion occurs. Due to the lack of a good solution to this problem, many existing methods tend to be either effective but computationally intensive by using sophisticated image observation models or efficient but vulnerable to false alarms. This greatly challenges long-duration robust tracking. This paper presents a novel solution to this dilemma by considering the context of the tracking scene. Specifically, we integrate into the tracking process a set of auxiliary objects that are automatically discovered in the video on the fly by data mining. Auxiliary objects have three properties, at least in a short time interval: 1) persistent co-occurrence with the target, 2) consistent motion correlation to the target, and 3) easy to track. Regarding these auxiliary objects as the context of the target, the collaborative tracking of these auxiliary objects leads to efficient computation as well as strong verification. Our extensive experiments have exhibited exciting performance in very challenging real-world testing cases.
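
The sketch below illustrates, under assumed data structures, how candidate regions might be filtered by the stated properties: frequent co-occurrence with the target and consistent motion correlation (ease of tracking is left to the candidate generator). Thresholds are arbitrary placeholders.

```python
# Hypothetical auxiliary-object selection from mined candidate regions.
import numpy as np

def select_auxiliary_objects(candidates, target_track, min_cooccurrence=0.8, min_corr=0.7):
    """candidates: dict name -> {'present': bool array, 'dx': array, 'dy': array},
    one entry per recent frame; target_track: {'dx': array, 'dy': array} of the
    target's per-frame displacements over the same frames."""
    auxiliaries = []
    for name, c in candidates.items():
        cooccurrence = np.mean(c["present"])          # fraction of frames seen with target
        if cooccurrence < min_cooccurrence:
            continue
        # Correlate the candidate's frame-to-frame motion with the target's motion.
        corr_x = np.corrcoef(c["dx"], target_track["dx"])[0, 1]
        corr_y = np.corrcoef(c["dy"], target_track["dy"])[0, 1]
        if min(corr_x, corr_y) >= min_corr:
            auxiliaries.append(name)
    return auxiliaries
```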

Proceedings ArticleDOI
01 Sep 2009
TL;DR: This work proposes a new approach, called Tracking-Modeling-Detection (TMD), that closely integrates adaptive tracking with online learning of an object-specific detector, and shows that real-time learning and classification are achievable with random forests.
Abstract: This work investigates the problem of robust, long-term visual tracking of unknown objects in unconstrained environments. It therefore must cope with frame cuts, fast camera movements and partial or total object occlusions and disappearances. We propose a new approach, called Tracking-Modeling-Detection (TMD), that closely integrates adaptive tracking with online learning of the object-specific detector. Starting from a single click in the first frame, TMD tracks the selected object with an adaptive tracker. The trajectory is observed by two processes (growing and pruning events) that robustly model the appearance and build an object detector on the fly. Both events make errors; the stability of the system is achieved by their cancellation. The learnt detector enables re-initialization of the tracker whenever a previously observed appearance reoccurs. We show that real-time learning and classification are achievable with random forests. The performance and long-term stability of TMD are demonstrated and evaluated on a set of challenging video sequences with various objects such as cars, people and animals.
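
The loop below is a condensed, hypothetical reading of such a tracking-modeling-detection cycle, not the authors' implementation: a short-term tracker proposes boxes, confident frames grow an online training set, a random forest detector is refit periodically, and the detector re-initializes the tracker after failures. Every interface (tracker.step, describe, propose_windows, random_far_box) is an assumption.

```python
# Schematic TMD-style loop with assumed component interfaces.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def tmd_loop(frames, tracker, describe, propose_windows, refit_every=10):
    """tracker.step(frame) -> (box, confident); describe(frame, box) -> feature vector;
    propose_windows(frame) -> list of candidate boxes."""
    X, y = [], []                                   # online appearance model (growing event)
    detector = None
    for t, frame in enumerate(frames):
        box, confident = tracker.step(frame)        # adaptive short-term tracking
        if confident:
            X.append(describe(frame, box)); y.append(1)
            X.append(describe(frame, tracker.random_far_box(frame))); y.append(0)
        if len(X) >= 20 and t % refit_every == 0:   # periodically refit the detector
            detector = RandomForestClassifier(n_estimators=50).fit(np.array(X), np.array(y))
        if not confident and detector is not None:  # re-detect and re-initialize the tracker
            windows = propose_windows(frame)
            scores = detector.predict_proba(
                np.array([describe(frame, w) for w in windows]))[:, 1]
            box = windows[int(np.argmax(scores))]
            tracker.reset(box)
        yield box
```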

Proceedings ArticleDOI
20 Jun 2009
TL;DR: It is shown that a simple extension of the framework to include motion features in a bottom-up saliency mode can robustly identify salient moving objects and automatically initialize the tracker.
Abstract: We propose a biologically inspired framework for visual tracking based on discriminant center surround saliency. At each frame, discrimination of the target from the background is posed as a binary classification problem. From a pool of feature descriptors for the target and background, a subset that is most informative for classification between the two is selected using the principle of maximum marginal diversity. Using these features, the location of the target in the next frame is identified using top-down saliency, completing one iteration of the tracking algorithm. We also show that a simple extension of the framework to include motion features in a bottom-up saliency mode can robustly identify salient moving objects and automatically initialize the tracker. The connections of the proposed method to existing works on discriminant tracking are discussed. Experimental results comparing the proposed method to the state of the art in tracking are presented, showing improved performance.
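
As a rough sketch of the discriminant center-surround idea: features are ranked by how well they separate samples from the target window (center) against samples from the surrounding background, and the most informative subset scores candidate locations in the next frame. Using mutual information here is a simplification of the maximum-marginal-diversity criterion named in the paper.

```python
# Simplified discriminant feature selection for center-surround tracking.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def select_discriminant_features(center_samples, surround_samples, k=10):
    """Each sample is a feature vector; return indices of the k most discriminant features."""
    X = np.vstack([center_samples, surround_samples])
    y = np.concatenate([np.ones(len(center_samples)), np.zeros(len(surround_samples))])
    scores = mutual_info_classif(X, y)
    return np.argsort(scores)[::-1][:k]

def top_down_saliency(candidate_features, selected, center_mean):
    """Score candidates by similarity to the target's mean on the selected features.
    center_mean: mean feature vector of the center (target) samples."""
    diff = candidate_features[:, selected] - center_mean[selected]
    return -np.linalg.norm(diff, axis=1)          # higher = more target-like
```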

Journal ArticleDOI
TL;DR: In a naturalistic newspaper reading study, two pairs of information graphics were designed to study the effects of (a) the spatial contiguity principle and (b) the dual scripting principle by means of eye tracking measurements.
Abstract: In a naturalistic newspaper reading study, two pairs of information graphics have been designed to study the effects of a) the spatial contiguity principle and b) the dual scripting principle by means of eye tracking measurements. Our data clearly show that different spatial layouts have a significant effect on readers’ eye movement behaviour. An integrated format with spatial contiguity between text and illustrations facilitates integration. Reading of information graphics is moreover significantly enhanced by a serial format, resulting from dual attentional guidance. The dual scripting principle is associated with a bottom-up guidance through the spatial layout of the presentation, suggesting a specific reading path, and with a top-down guidance through the conceptual pre-processing of the contents, facilitating information processing and semantic integration of the material. The integrated and serial formats not only attract readers’ initial attention, but also sustain the readers’ interest, thereby promoting a longer and deeper processing of the complex material. The results are an important contribution to the study of the cognitive processes involved in text-picture integration and offer relevant insights about attentional guidance in printed media, computer-based instructional materials, and textbook design.

Journal ArticleDOI
TL;DR: Highly socially anxious (HSA) women tended to fixate the eye region of the presented face longer than medium (MSA) and low (LSA) socially anxious women, and responded to direct gaze with more pronounced cardiac acceleration, indicating that direct gaze may be a fear-relevant feature for socially anxious individuals in social interaction.

Journal ArticleDOI
TL;DR: It is hoped that the presented methodology and case study will help cartographers and map interface designers to better identify design issues in their products, and that these insights will eventually lead to more effective and efficient online map interfaces.
Abstract: This paper proposes combining traditional usability methods with the analysis of eye movement recordings to evaluate interactive map interfaces, and presents a case study in support of this approach. The case study evaluates two informationally equivalent, but differently designed online interactive map interfaces presented to novice users. In a mixed factorial experiment, thirty participants were asked to solve three typical map-use tasks using one of the two interfaces; we then measured user satisfaction, efficiency (completion time) and effectiveness (accuracy) with standard SEE usability metrics. While traditional (bottom line) usability metrics can reveal a range of usability problems, they may be enhanced by additional procedural measures such as eye movement recordings. Eye movements have been shown to help reveal the amount of cognitive processing a display requires and where these cognitive resources are required. Therefore, we can establish how a display may or may not facilitate task completion by analyzing eye movement recordings. User satisfaction information related to tested stimuli (i.e., collected through standardized questionnaires) can also be linked to eye tracking data for further analysis. We hope that the presented methodology and case study will help cartographers and map interface designers to better identify design issues in their products, and that these insights will eventually lead to more effective and efficient online map interfaces.

Journal ArticleDOI
TL;DR: Results indicate that color coding increased retention and transfer performance, and enhancement of learning by color coding was due to efficiency of locating corresponding information between illustration and text.
Abstract: Color coding has been proposed to promote more effective learning. However, insufficient evidence currently exists to show how color coding leads to better learning. The goal of this study was to investigate the underlying cause of the color coding effect by utilizing eye movement data. Fifty-two participants studied either a color-coded or conventional format of multimedia instruction. Eye movement data were collected during the study. The results indicate that color coding increased retention and transfer performance. Enhancement of learning by color coding was due to efficiency of locating corresponding information between illustration and text. Color coding also attracted attention of learners to perceptually salient information.

Journal ArticleDOI
TL;DR: It is demonstrated that a perceived eye gaze results in an automatic saccade following the gaze and that the gaze cue cannot be ignored, even when attending to it is detrimental to the task.
Abstract: The present study investigates how people's voluntary saccades are influenced by where another person is looking, even when this is counterpredictive of the intended saccade direction. The color of a fixation point instructed participants to make saccades either to the left or right. These saccade directions were either congruent or incongruent with the eye gaze of a centrally presented schematic face. Participants were asked to ignore the eyes, which were congruent only 20% of the time. At short gaze-fixation-cue stimulus onset asynchronies (SOAs; 0 and 100 msec), participants made more directional errors on incongruent than on congruent trials. At a longer SOA (900 msec), the pattern tended to reverse. We demonstrate that a perceived eye gaze results in an automatic saccade following the gaze and that the gaze cue cannot be ignored, even when attending to it is detrimental to the task. Similar results were found for centrally presented arrow cues, suggesting that this interference is not unique to gazes.

Journal ArticleDOI
TL;DR: This study investigates whether attention guidance can further enhance the effectiveness of examples in which a solution procedure is demonstrated to students by an (expert) model; results show that, combined with a verbal description of the thought process, this form of attention guidance had detrimental effects on learning.

Journal ArticleDOI
TL;DR: The prototype of a gaze‐controlled, head‐mounted camera (EyeSeeCam) was developed that provides the functionality for fundamental studies on human gaze behavior even under dynamic conditions like locomotion.
Abstract: The prototype of a gaze-controlled, head-mounted camera (EyeSeeCam) was developed that provides the functionality for fundamental studies on human gaze behavior even under dynamic conditions like locomotion. EyeSeeCam incorporates active visual exploration by saccades with image stabilization during head, object, and surround motion just as occurs in human ocular motor control. This prototype is a first attempt to combine free user mobility with image stabilization and unrestricted exploration of the visual surround in a man-made technical vision system. The gaze-driven camera is supplemented by an additional wide-angle, head-fixed scene camera. In this scene view, the focused gaze view is embedded with picture-in-picture functionality, which provides an approximation of the foveated retinal content. Such a combined video clip can be viewed more comfortably than the saccade-pervaded image of the gaze camera alone. EyeSeeCam consists of a video-oculography (VOG) device and a camera motion device. The benchmark for the evaluation of such a device is the vestibulo-ocular reflex (VOR), which requires a latency on the order of 10 msec between head and eye (camera) movements for proper image stabilization. A new lightweight VOG was developed that is able to synchronously measure binocular eye positions at up to 600 Hz. The camera motion device consists of a parallel kinematics setup with a backlash-free gimbal joint that is driven by piezo actuators with no reduction gears. As a result, the latency between the rotations of an artificial eye and the camera was 10 msec, which is VOR-like.
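
One illustrative way to estimate the eye-to-camera latency reported here (on the order of 10 msec) is to cross-correlate the recorded eye-position trace with the camera-position trace and take the lag at the correlation peak; the sampling rate and signal names below are assumptions.

```python
# Estimate eye-to-camera latency by cross-correlation of position traces.
import numpy as np

def estimate_latency_ms(eye_pos, cam_pos, fs_hz=600):
    """Both inputs are 1-D position traces sampled synchronously at fs_hz."""
    eye = eye_pos - np.mean(eye_pos)
    cam = cam_pos - np.mean(cam_pos)
    xcorr = np.correlate(cam, eye, mode="full")
    lag_samples = np.argmax(xcorr) - (len(eye) - 1)   # positive = camera lags the eye
    return 1000.0 * lag_samples / fs_hz
```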

Journal ArticleDOI
TL;DR: These results contrast with earlier reports regarding the lack of interest in the eye region in patients with autism, and demonstrate for the first time that perception of the face is dependent on eye dominance.

Journal ArticleDOI
TL;DR: In order to normally process a scene, viewers needed to see the scene for at least 150 ms during each eye fixation, which is surprising because viewers can extract the gist of a scene from a brief 40- to 100-ms exposure.
Abstract: The amount of time viewers could process a scene during eye fixations was varied by a mask that appeared at a certain point in each eye fixation. The scene did not reappear until the viewer made an eye movement. The main finding in the studies was that in order to normally process a scene, viewers needed to see the scene for at least 150 ms during each eye fixation. This result is surprising because viewers can extract the gist of a scene from a brief 40- to 100-ms exposure. It also stands in marked contrast to reading, as readers need only to view the words in the text for 50 to 60 ms to read normally. Thus, although the same neural mechanisms control eye movements in scene perception and reading, the cognitive processes associated with each task drive processing in different ways.
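
The gaze-contingent logic implied by this paradigm can be summarized schematically: within each fixation the scene is visible for a fixed exposure (here 150 ms) and is then masked until the next eye movement. The event representation and display commands below are illustrative, not the study's experimental software.

```python
# Schematic gaze-contingent masking: scene visible for a fixed exposure per fixation.
def gaze_contingent_display(events, exposure_ms=150):
    """events: time-ordered ('fixation_start' | 'saccade_start', timestamp_ms) tuples.
    Yields (timestamp_ms, 'scene' | 'mask') display commands."""
    for kind, t in events:
        if kind == "fixation_start":
            yield (t, "scene")                   # show the scene at fixation onset
            yield (t + exposure_ms, "mask")      # mask after the allowed exposure
        elif kind == "saccade_start":
            yield (t, "mask")                    # keep masked through the saccade
```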

Journal ArticleDOI
TL;DR: A mechanism which compiles feedback related to the behavioral state of the user in the context of reading an electronic document is presented, achieved using a non-intrusive scheme, which uses a simple web camera to detect and track the head, eye and hand movements.
Abstract: Most e-learning environments which utilize user feedback or profiles, collect such information based on questionnaires, resulting very often in incomplete answers, and sometimes deliberate misleading input. In this work, we present a mechanism which compiles feedback related to the behavioral state of the user (e.g. level of interest) in the context of reading an electronic document; this is achieved using a non-intrusive scheme, which uses a simple web camera to detect and track the head, eye and hand movements and provides an estimation of the level of interest and engagement with the use of a neuro-fuzzy network initialized from evidence from the idea of Theory of Mind and trained from expert-annotated data. The user does not need to interact with the proposed system, and can act as if she was not monitored at all. The proposed scheme is tested in an e-learning environment, in order to adapt the presentation of the content to the user profile and current behavioral state. Experiments show that the proposed system detects reading- and attention-related user states very effectively, in a testbed where children's reading performance is tracked.

Journal ArticleDOI
TL;DR: To evaluate the potential impact of secondary cognitive tasks on the allocation of drivers' visual attention and on vehicle control, drivers were presented with increasingly complex forms of an auditory cognitive task while driving an instrumented vehicle; gaze distributions narrowed as the secondary task became more difficult.
Abstract: Cognitive distractions have been shown to affect drivers adversely and are a leading cause of accidents. Research indicates that drivers alter how they allocate their visual attention while engaging in secondary cognitive tasks. To evaluate the potential impact of secondary cognitive tasks on the allocation of drivers' visual attention and on vehicle control, drivers were presented with increasingly complex forms of an auditory cognitive task while driving an instrumented vehicle. Measures of vehicle performance and eye gaze were assessed. Consistent with theories of visual tunneling, gaze distributions were significantly smaller while drivers performed certain levels of the secondary task; peripheral vision was thereby reduced. During the most difficult level of the secondary task, gaze dispersion was smaller than during any other level of the task. Changes in visual attention may provide earlier indications of cognitive distraction than changes in vehicle control, the latter of which were observed only ...

01 Jan 2009
TL;DR: This paper provides an overview of the current state of the art regarding the recognition of hand gestures as these are observed and recorded by typical video cameras and discusses proposed approaches for vision-driven interactive user interfaces.
Abstract: In recent years, research efforts seeking to provide more natural, human-centered means of interacting with computers have gained growing interest. A particularly important direction is that of perceptive user interfaces, where the computer is endowed with perceptive capabilities that allow it to acquire both implicit and explicit information about the user and the environment. Vision has the potential of carrying a wealth of information in a non-intrusive manner and at a low cost, therefore it constitutes a very attractive sensing modality for developing perceptive user interfaces. Proposed approaches for vision-driven interactive user interfaces resort to technologies such as head tracking, face and facial expression recognition, eye tracking and gesture recognition. In this paper, we focus our attention to vision-based recognition of hand gestures. The first part of the paper provides an overview of the current state of the art regarding the recognition of hand gestures as these are observed and recorded by typical video cameras. In order to make the review of the related literature tractable, this paper does not discuss:

Journal ArticleDOI
01 Apr 2009
TL;DR: The analysis of eye motion as a new input modality for activity recognition, context-awareness and mobile HCI applications is introduced and it is demonstrated that EOG is a suitable measurement technique for the recognition of reading activity and eye-based human-computer interaction.
Abstract: In this article we introduce the analysis of eye motion as a new input modality for activity recognition, context-awareness and mobile HCI applications. We describe a novel embedded eye tracker that, in contrast to common systems using video cameras, relies on Electrooculography (EOG). This self-contained wearable device consists of goggles with dry electrodes integrated into the frame and a small pocket-worn component with a DSP for real-time EOG signal processing. It can store data locally for long-term recordings or stream processed EOG signals to a remote device over Bluetooth. We show how challenges associated with wearability, eye motion analysis and signal artefacts caused by physical activity can be addressed with a combination of a special mechanical design, optimised algorithms for eye movement detection and adaptive signal processing. In two case studies, we demonstrate that EOG is a suitable measurement technique for the recognition of reading activity and eye-based human-computer interaction. Eventually, wearable EOG goggles may pave the way for seamless eye movement analysis in everyday environments and new forms of context-awareness not possible today.
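
A hedged sketch of one common way to detect saccades in an EOG trace (not the device's actual signal-processing firmware): median-filter the signal to suppress artefacts, then flag samples where the eye velocity, approximated by the first difference, crosses a threshold. Sampling rate and threshold are assumptions.

```python
# Simple velocity-threshold saccade detection on an EOG channel.
import numpy as np
from scipy.signal import medfilt

def detect_saccades(eog, fs_hz=128, vel_threshold=50.0, filt_len=5):
    """eog: 1-D horizontal EOG signal (arbitrary units). Returns onset sample indices."""
    smoothed = medfilt(eog, kernel_size=filt_len)     # suppress blink/motion spikes
    velocity = np.diff(smoothed) * fs_hz              # approximate units per second
    above = np.abs(velocity) > vel_threshold
    # Onsets are samples where the threshold is first crossed.
    onsets = np.flatnonzero(above & ~np.r_[False, above[:-1]])
    return onsets
```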

Journal ArticleDOI
TL;DR: A novel technique that combines eye-tracking with subtle image-space modulation to direct a viewer's gaze about a digital image has potential application in many areas including large scale display systems, perceptually adaptive rendering, and complex visual search tasks.
Abstract: This article presents a novel technique that combines eye-tracking with subtle image-space modulation to direct a viewer's gaze about a digital image. We call this paradigm subtle gaze direction. Subtle gaze direction exploits the fact that our peripheral vision has very poor acuity compared to our foveal vision. By presenting brief, subtle modulations to the peripheral regions of the field of view, the technique presented here draws the viewer's foveal vision to the modulated region. Additionally, by monitoring saccadic velocity and exploiting the visual phenomenon of saccadic masking, modulation is automatically terminated before the viewer's foveal vision enters the modulated region. Hence, the viewer is never actually allowed to scrutinize the stimuli that attracted her gaze. This new subtle gaze directing technique has potential application in many areas including large scale display systems, perceptually adaptive rendering, and complex visual search tasks.
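
Schematically, the control loop described here can be read as: modulate the target region while gaze is elsewhere, and terminate the modulation as soon as a saccade (high gaze velocity) is detected, so the modulated pixels are never scrutinized foveally. The sample format, velocity threshold, and foveal radius below are illustrative assumptions.

```python
# Schematic subtle-gaze-direction loop driven by streaming gaze samples.
import math

def subtle_gaze_direction(gaze_samples, target, saccade_vel_threshold=130.0):
    """gaze_samples: iterable of (t_sec, x_deg, y_deg). Yields (t_sec, modulate: bool)."""
    prev = None
    modulating = True
    for t, x, y in gaze_samples:
        if prev is not None:
            dt = max(t - prev[0], 1e-6)
            velocity = math.hypot(x - prev[1], y - prev[2]) / dt   # deg / s
            if modulating and velocity > saccade_vel_threshold:
                modulating = False          # saccadic masking: stop before gaze arrives
        if math.hypot(x - target[0], y - target[1]) < 2.0:
            modulating = False              # never modulate under foveal vision
        yield (t, modulating)
        prev = (t, x, y)
```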

Proceedings ArticleDOI
15 Jul 2009
TL;DR: Different evaluations on technical aspects, usability, security and memorability show that EyePassShapes can significantly increase security while being easy to use and fast at the same time.
Abstract: Authentication systems for public terminals and thus public spaces have to be fast, easy and secure. Security is of utmost importance since the public setting allows manifold attacks from simple shoulder surfing to advanced manipulations of the terminals. In this work, we present EyePassShapes, an eye tracking authentication method that has been designed to meet these requirements. Instead of using standard eye tracking input methods that require precise and expensive eye trackers, EyePassShapes uses eye gestures. This input method works well with data about the relative eye movement, which is much easier to detect than the precise position of the user's gaze and works with cheaper hardware. Different evaluations on technical aspects, usability, security and memorability show that EyePassShapes can significantly increase security while being easy to use and fast at the same time.
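
As an illustration of the kind of representation an eye-gesture scheme can rely on, the sketch below encodes relative eye movements into a string of stroke directions and compares it to a stored pass-shape. The direction alphabet, stroke-length threshold, and exact comparison are assumptions, not the EyePassShapes implementation.

```python
# Encode relative gaze movement as a sequence of stroke directions (U/D/L/R).
import math

def strokes_from_gaze(points, min_len=80.0):
    """points: ordered (x, y) gaze samples in pixels. Returns e.g. ['R', 'D', 'L']."""
    directions = []
    start = points[0]
    for p in points[1:]:
        dx, dy = p[0] - start[0], p[1] - start[1]
        if math.hypot(dx, dy) < min_len:
            continue                               # ignore small drifts and jitter
        angle = math.degrees(math.atan2(-dy, dx)) % 360    # screen y grows downward
        code = ["R", "U", "L", "D"][int(((angle + 45) % 360) // 90)]
        if not directions or directions[-1] != code:
            directions.append(code)
        start = p                                  # begin a new stroke segment
    return directions

def matches_pass_shape(points, stored_shape):
    """True if the observed gesture string equals the enrolled pass-shape."""
    return strokes_from_gaze(points) == list(stored_shape)
```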