
Showing papers on "Eye tracking" published in 2015


Book
12 May 2015
TL;DR: Eye tracking: a comprehensive guide to methods and measures, Oxford, UK: Oxford University Press.
Abstract: Holmqvist, K., Nyström, M., Andersson, R., Dewhurst, R., Jarodzka, H., & Van de Weijer, J. (Eds.) (2011). Eye tracking: a comprehensive guide to methods and measures, Oxford, UK: Oxford University Press.

1,904 citations


Journal ArticleDOI
TL;DR: This paper comprehensively encodes 10 chromatic models into 16 carefully selected state-of-the-art visual trackers and performs detailed analysis on several issues, including the behavior of various combinations of color model and visual tracker, the degree of difficulty of each sequence for tracking, and how different challenge factors affect tracking performance.
Abstract: While color information is known to provide rich discriminative clues for visual inference, most modern visual trackers limit themselves to the grayscale realm. Despite recent efforts to integrate color in tracking, there is a lack of comprehensive understanding of the role color information can play. In this paper, we attack this problem by conducting a systematic study from both the algorithm and benchmark perspectives. On the algorithm side, we comprehensively encode 10 chromatic models into 16 carefully selected state-of-the-art visual trackers. On the benchmark side, we compile a large set of 128 color sequences with ground truth and challenge factor annotations (e.g., occlusion). A thorough evaluation is conducted by running all the color-encoded trackers, together with two recently proposed color trackers. A further validation is conducted on an RGBD tracking benchmark. The results clearly show the benefit of encoding color information for tracking. We also perform detailed analysis on several issues, including the behavior of various combinations between color model and visual tracker, the degree of difficulty of each sequence for tracking, and how different challenge factors affect the tracking performance. We expect the study to provide the guidance, motivation, and benchmark for future work on encoding color in visual tracking.

684 citations


Proceedings ArticleDOI
07 Jun 2015
TL;DR: A mouse-contingent, multi-resolution paradigm based on neurophysiological and psychophysical studies of peripheral vision is designed to simulate the natural viewing behavior of humans, thus enabling large-scale data collection.
Abstract: Saliency in Context (SALICON) is an ongoing effort that aims at understanding and predicting visual attention. This paper presents a new method to collect large-scale human data during natural explorations on images. While current datasets present a rich set of images and task-specific annotations such as category labels and object segments, this work focuses on recording and logging how humans shift their attention during visual exploration. The goal is to offer new possibilities to (1) complement task-specific annotations to advance the ultimate goal in visual understanding, and (2) understand visual attention and learn saliency models, all with human attentional data at a much larger scale. We designed a mouse-contingent multi-resolution paradigm based on neurophysiological and psychophysical studies of peripheral vision, to simulate the natural viewing behavior of humans. The new paradigm allowed using a general-purpose mouse instead of an eye tracker to record viewing behaviors, thus enabling large-scale data collection. The paradigm was validated with controlled laboratory as well as large-scale online data. We report in this paper a proof-of-concept SALICON dataset of human “free-viewing” data on 10,000 images from the Microsoft COCO (MS COCO) dataset with rich contextual information. We evaluated the use of the collected data in the context of saliency prediction, and demonstrated that they are a good source of ground truth for the evaluation of saliency algorithms.
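
As an illustration of the mouse-contingent, multi-resolution idea, the sketch below (Python with OpenCV) renders an image sharply around the cursor and blurred in the periphery; the window size, blur strength and fovea radius are illustrative assumptions, not the authors' calibrated parameters.

```python
# A minimal sketch (not the authors' implementation) of a mouse-contingent,
# multi-resolution display: the image is sharp around the cursor and blurred
# in the periphery, roughly mimicking foveal vs. peripheral acuity.
import numpy as np
import cv2

def foveated_view(image, cursor_xy, fovea_radius=80, blur_sigma=12):
    """Blend a sharp and a blurred copy of `image` around `cursor_xy`."""
    h, w = image.shape[:2]
    blurred = cv2.GaussianBlur(image, (0, 0), blur_sigma)
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.sqrt((xs - cursor_xy[0]) ** 2 + (ys - cursor_xy[1]) ** 2)
    # Smooth weight: 1 inside the simulated fovea, falling to 0 in the periphery.
    weight = np.clip(1.0 - (dist - fovea_radius) / fovea_radius, 0.0, 1.0)[..., None]
    return (weight * image + (1.0 - weight) * blurred).astype(image.dtype)

def on_mouse(event, x, y, flags, param):
    if event == cv2.EVENT_MOUSEMOVE:
        param["pos"] = (x, y)

if __name__ == "__main__":
    img = cv2.imread("example.jpg")           # placeholder path; any image works
    state = {"pos": (img.shape[1] // 2, img.shape[0] // 2)}
    cv2.namedWindow("mouse-contingent viewer")
    cv2.setMouseCallback("mouse-contingent viewer", on_mouse, state)
    while cv2.waitKey(15) != 27:              # Esc quits
        cv2.imshow("mouse-contingent viewer", foveated_view(img, state["pos"]))
```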

637 citations


Proceedings ArticleDOI
07 Dec 2015
TL;DR: This paper presents a focused study to narrow the semantic gap with an architecture based on a Deep Neural Network (DNN), which leverages the representational power of high-level semantics encoded in DNNs pretrained for object recognition.
Abstract: Saliency in Context (SALICON) is an ongoing effort that aims at understanding and predicting visual attention. Conventional saliency models typically rely on low-level image statistics to predict human fixations. While these models perform significantly better than chance, there is still a large gap between model prediction and human behavior. This gap is largely due to the limited capability of models in predicting eye fixations with strong semantic content, the so-called semantic gap. This paper presents a focused study to narrow the semantic gap with an architecture based on a Deep Neural Network (DNN). It leverages the representational power of high-level semantics encoded in DNNs pretrained for object recognition. Two key components are fine-tuning the DNNs fully convolutionally with an objective function based on the saliency evaluation metrics, and integrating information at different image scales. We compare our method with 14 saliency models on 6 public eye tracking benchmark datasets. Results demonstrate that our DNNs can automatically learn features particularly for saliency prediction that surpass the state of the art by a large margin. In addition, our model ranks top to date under all seven metrics on the MIT300 challenge set.
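
To make the "objective function based on the saliency evaluation metrics" idea concrete, here is a hedged PyTorch sketch of a loss derived from one common metric (the linear correlation coefficient, CC) and of a tiny fully convolutional head that fuses two image scales; the backbone, head and training details are assumptions, not the paper's architecture.

```python
# A hedged sketch, not the paper's architecture: a loss derived from a
# saliency evaluation metric (CC) plus a small fully convolutional head
# that fuses features computed at two image scales.
import torch
import torch.nn as nn
import torch.nn.functional as F

def cc_loss(pred, target, eps=1e-8):
    """1 - Pearson correlation between predicted and ground-truth saliency maps."""
    pred = pred.flatten(1) - pred.flatten(1).mean(dim=1, keepdim=True)
    target = target.flatten(1) - target.flatten(1).mean(dim=1, keepdim=True)
    cc = (pred * target).sum(1) / (pred.norm(dim=1) * target.norm(dim=1) + eps)
    return (1.0 - cc).mean()

class SaliencyHead(nn.Module):
    """1x1 convolution over pretrained features, fusing a fine and a coarse scale."""
    def __init__(self, in_channels=512):
        super().__init__()
        self.head = nn.Conv2d(in_channels, 1, kernel_size=1)

    def forward(self, feats_fine, feats_coarse):
        coarse = F.interpolate(feats_coarse, size=feats_fine.shape[-2:],
                               mode="bilinear", align_corners=False)
        return self.head(feats_fine + coarse)   # logits; squash before evaluation
```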

577 citations


Proceedings ArticleDOI
07 Jun 2015
TL;DR: This paper proposes a novel tracking method that tracks objects based on parts with multiple correlation filters and can run in real time; a Bayesian inference framework and a structural constraint mask are adopted to make the tracker robust to various appearance changes.
Abstract: Robust object tracking is a challenging task in computer vision. To better solve the partial occlusion issue, part-based methods are widely used in visual object trackers. However, due to the complicated online training and updating process, most of these part-based trackers cannot run in real-time. Correlation filters have been used in tracking tasks recently because of their high efficiency. However, conventional correlation filter based trackers cannot deal with occlusion. Furthermore, most correlation filter based trackers fix the scale and rotation of the target, which makes the trackers unreliable in long-term tracking tasks. In this paper, we propose a novel tracking method which tracks objects based on parts with multiple correlation filters. Our method can run in real-time. Additionally, the Bayesian inference framework and a structural constraint mask are adopted to enable our tracker to be robust to various appearance changes. Extensive experiments have been done to prove the effectiveness of our method.
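
The building block such part-based correlation-filter trackers share can be sketched as a MOSSE-style filter learned and applied in the Fourier domain, as below; the regularization constant and desired-response shape are illustrative assumptions, and the paper's part layout, Bayesian inference and structural constraint mask are not reproduced here.

```python
# A minimal sketch of the correlation-filter building block: a filter is
# learned per part so that correlating it with an image patch produces a
# sharp peak at the part's location in the next frame.
import numpy as np

def gaussian_response(shape, sigma=2.0):
    """Desired output: a Gaussian peaked at the patch center."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-(((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma ** 2)))

def train_filter(patch, desired_response, lam=1e-2):
    """MOSSE-style filter (conjugate) learned in the Fourier domain."""
    F = np.fft.fft2(patch)
    G = np.fft.fft2(desired_response)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def detect(patch, filter_conj):
    """Correlation response; its peak gives the part's displacement."""
    response = np.real(np.fft.ifft2(np.fft.fft2(patch) * filter_conj))
    dy, dx = np.unravel_index(np.argmax(response), response.shape)
    return response, (dy, dx)
```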

420 citations


Proceedings ArticleDOI
07 Dec 2015
TL;DR: A framework is proposed that breaks a tracker down into five constituent parts, namely, motion model, feature extractor, observation model, model updater, and ensemble post-processor, and ablative experiments are conducted on each component to study how it affects the overall result.
Abstract: Several benchmark datasets for visual tracking research have been created in recent years. Despite their usefulness, whether they are sufficient for understanding and diagnosing the strengths and weaknesses of different trackers remains questionable. To address this issue, we propose a framework by breaking a tracker down into five constituent parts, namely, motion model, feature extractor, observation model, model updater, and ensemble post-processor. We then conduct ablative experiments on each component to study how it affects the overall result. Surprisingly, our findings are discrepant with some common beliefs in the visual tracking research community. We find that the feature extractor plays the most important role in a tracker. On the other hand, although the observation model is the focus of many studies, we find that it often brings no significant improvement. Moreover, the motion model and model updater contain many details that could affect the result. Also, the ensemble post-processor can improve the result substantially when the constituent trackers have high diversity. Based on our findings, we put together some very elementary building blocks to give a basic tracker which is competitive in performance to the state-of-the-art trackers. We believe our framework can provide a solid baseline when conducting controlled experiments for visual tracking research.
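
The five-part decomposition can be expressed as a plain interface in which each component is swappable for ablation, as in the following sketch; all class and method names are illustrative assumptions rather than the authors' code, and an ensemble post-processor would simply combine the outputs of several such trackers.

```python
# A sketch of the five-component decomposition as swappable interfaces,
# useful for controlled ablation experiments. Names are illustrative.
from dataclasses import dataclass
from typing import Any, List, Protocol

class MotionModel(Protocol):
    def propose(self, prev_box: Any) -> List[Any]: ...      # candidate boxes

class FeatureExtractor(Protocol):
    def extract(self, frame: Any, box: Any) -> Any: ...     # e.g. raw pixels, HOG

class ObservationModel(Protocol):
    def score(self, features: Any) -> float: ...            # target likelihood

class ModelUpdater(Protocol):
    def update(self, model: ObservationModel, frame: Any, box: Any) -> None: ...

@dataclass
class Tracker:
    motion: MotionModel
    features: FeatureExtractor
    observation: ObservationModel
    updater: ModelUpdater

    def track(self, frame, prev_box):
        candidates = self.motion.propose(prev_box)
        best = max(candidates, key=lambda b: self.observation.score(
            self.features.extract(frame, b)))
        self.updater.update(self.observation, frame, best)
        return best
```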

341 citations


Posted Content
TL;DR: This work pre-trains a CNN offline and then transfers the rich feature hierarchies learned to online tracking, and proposes to generate a probability map instead of a simple class label to fit the characteristics of object tracking.
Abstract: Convolutional neural network (CNN) models have demonstrated great success in various computer vision tasks including image classification and object detection. However, some equally important tasks such as visual tracking remain relatively unexplored. We believe that a major hurdle that hinders the application of CNN to visual tracking is the lack of properly labeled training data. While existing applications that liberate the power of CNN often need an enormous amount of training data in the order of millions, visual tracking applications typically have only one labeled example in the first frame of each video. We address this research issue here by pre-training a CNN offline and then transferring the rich feature hierarchies learned to online tracking. The CNN is also fine-tuned during online tracking to adapt to the appearance of the tracked target specified in the first video frame. To fit the characteristics of object tracking, we first pre-train the CNN to recognize what is an object, and then propose to generate a probability map instead of producing a simple class label. Using two challenging open benchmarks for performance evaluation, our proposed tracker has demonstrated substantial improvement over other state-of-the-art trackers.
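
A hedged sketch of the transfer-then-fine-tune idea follows: a backbone pretrained offline is frozen, a small head that outputs a per-location target probability map is attached, and only the head is fine-tuned on the single labeled first frame. The VGG-16 backbone, head shape and hyper-parameters are assumptions for illustration, not the paper's model.

```python
# Illustrative sketch: reuse offline-pretrained features and fine-tune only a
# small probability-map head on the first, single labeled frame.
import torch
import torch.nn as nn
import torchvision

backbone = torchvision.models.vgg16(weights="IMAGENET1K_V1").features
for p in backbone.parameters():
    p.requires_grad = False                      # keep the offline-learned features

head = nn.Sequential(nn.Conv2d(512, 64, 3, padding=1), nn.ReLU(),
                     nn.Conv2d(64, 1, 1))        # per-location target logits

def fine_tune_on_first_frame(frame, target_map, steps=50, lr=1e-3):
    """frame: 1x3xHxW tensor; target_map: 1x1 map matching the feature grid."""
    opt = torch.optim.Adam(head.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    feats = backbone(frame)
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(head(feats), target_map)
        loss.backward()
        opt.step()

def probability_map(frame):
    with torch.no_grad():
        return torch.sigmoid(head(backbone(frame)))
```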

334 citations


Patent
09 May 2015
TL;DR: In this patent, the intent of a device wearer is discerned based on movements of the eye that are under voluntary control; the system performs eye tracking with unobtrusive headwear and can also utilize remote eye tracking camera(s), remote displays and other ancillary inputs.
Abstract: Systems and methods are provided for discerning the intent of a device wearer primarily based on movements of the eyes. The system can be included within unobtrusive headwear that performs eye tracking and controls screen display. The system can also utilize remote eye tracking camera(s), remote displays and/or other ancillary inputs. Screen layout is optimized to facilitate the formation and reliable detection of rapid eye signals. The detection of eye signals is based on tracking physiological movements of the eye that are under voluntary control by the device wearer. The detection of eye signals results in actions that are compatible with wearable computing and a wide range of display devices.

296 citations


Journal ArticleDOI
04 Nov 2015-Neuron
TL;DR: Compared with matched controls, people with ASD had a stronger image center bias regardless of object distribution, reduced saliency for faces and for locations indicated by social gaze, and yet a general increase in pixel-level saliency at the expense of semantic-level saliency.

255 citations


Posted Content
TL;DR: This paper introduces a webcam-based gaze tracking system that supports large-scale, crowdsourced eye tracking deployed on Amazon Mechanical Turk (AMTurk), and builds a saliency dataset for a large number of natural images.
Abstract: Traditional eye tracking requires specialized hardware, which means collecting gaze data from many observers is expensive, tedious and slow. Therefore, existing saliency prediction datasets are orders of magnitude smaller than typical datasets for other vision recognition tasks. The small size of these datasets limits the potential for training data intensive algorithms, and causes overfitting in benchmark evaluation. To address this deficiency, this paper introduces a webcam-based gaze tracking system that supports large-scale, crowdsourced eye tracking deployed on Amazon Mechanical Turk (AMTurk). By a combination of careful algorithm and gaming protocol design, our system obtains eye tracking data for saliency prediction comparable to data gathered in a traditional lab setting, with relatively lower cost and less effort on the part of the researchers. Using this tool, we build a saliency dataset for a large number of natural images. We will open-source our tool and provide a web server where researchers can upload their images to get eye tracking results from AMTurk.

247 citations


Journal ArticleDOI
TL;DR: This paper reports the first evidence that intranasal oxytocin administration improves a core problem that individuals with autism have in using eye contact appropriately in real-world social settings, providing evidence of a therapeutic effect in a key aspect of social communication.
Abstract: Autism spectrum conditions (autism) affect ~1% of the population and are characterized by deficits in social communication. Oxytocin has been widely reported to affect social-communicative function and its neural underpinnings. Here we report the first evidence that intranasal oxytocin administration improves a core problem that individuals with autism have in using eye contact appropriately in real-world social settings. A randomized double-blind, placebo-controlled, within-subjects design is used to examine how intranasal administration of 24 IU of oxytocin affects gaze behavior for 32 adult males with autism and 34 controls in a real-time interaction with a researcher. This interactive paradigm bypasses many of the limitations encountered with conventional static or computer-based stimuli. Eye movements are recorded using eye tracking, providing an objective measurement of looking patterns. The measure is shown to be sensitive to the reduced eye contact commonly reported in autism, with the autism group spending less time looking to the eye region of the face than controls. Oxytocin administration selectively enhanced gaze to the eyes in both the autism and control groups (transformed mean eye-fixation difference per second=0.082; 95% CI:0.025–0.14, P=0.006). Within the autism group, oxytocin has the most effect on fixation duration in individuals with impaired levels of eye contact at baseline (Cohen’s d=0.86). These findings demonstrate that the potential benefits of oxytocin in autism extend to a real-time interaction, providing evidence of a therapeutic effect in a key aspect of social communication.

Proceedings ArticleDOI
05 Nov 2015
TL;DR: Orbits, a novel gaze interaction technique that enables hands-free input on smart watches using off-the-shelf devices, relies on moving controls to leverage the smooth pursuit movements of the eyes and detect whether, and at which control, the user is looking.
Abstract: We introduce Orbits, a novel gaze interaction technique that enables hands-free input on smart watches. The technique relies on moving controls to leverage the smooth pursuit movements of the eyes and detect whether, and at which control, the user is looking. In Orbits, controls include targets that move in a circular trajectory on the face of the watch, and can be selected by following the desired one for a small amount of time. We conducted two user studies to assess the technique's recognition and robustness, which demonstrated how Orbits is robust against false positives triggered by natural eye movements and how it presents a hands-free, high accuracy way of interacting with smart watches using off-the-shelf devices. Finally, we developed three example interfaces built with Orbits: a music player, a notifications face plate and a missed call menu. Despite relying on moving controls -- very unusual in current HCI interfaces -- these were generally well received by participants in a third and final study.
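
The core of pursuit-based selection of this kind can be sketched as correlating the recent gaze trajectory with each orbiting control's trajectory and selecting the control whose correlation exceeds a threshold; the window length and threshold below are illustrative assumptions, not Orbits' published parameters.

```python
# A hedged sketch of pursuit-based selection: the recent gaze trajectory is
# correlated with each moving control's trajectory; a control is "selected"
# when its correlation is highest and above a threshold.
import numpy as np

def trajectory_correlation(gaze_xy, control_xy):
    """Mean of the Pearson correlations of the x and y coordinate traces."""
    rx = np.corrcoef(gaze_xy[:, 0], control_xy[:, 0])[0, 1]
    ry = np.corrcoef(gaze_xy[:, 1], control_xy[:, 1])[0, 1]
    return (rx + ry) / 2.0

def select_control(gaze_window, control_windows, threshold=0.8):
    """Return the index of the followed control, or None if none is followed."""
    scores = [trajectory_correlation(gaze_window, c) for c in control_windows]
    best = int(np.argmax(scores))
    return best if scores[best] > threshold else None
```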

Journal ArticleDOI
TL;DR: Previous claims about eye movements and face perception that are based on a single social context can only be generalized with caution, and a complete understanding of face perception needs to address both functions of social gaze.

Journal ArticleDOI
TL;DR: In this article, a review summarizes literature on the relation between eye measurement parameters and drivers' mental workload, and recommends using multiple assessment methods to increase validity and robustness in driver assessment.

Journal ArticleDOI
TL;DR: A new framework is proposed to predict the visual scanpaths of observers while they freely watch a visual scene, inferred from bottom-up saliency and several oculomotor biases; computing saliency maps from the simulated visual scanpaths outperforms existing saliency models.

Journal ArticleDOI
TL;DR: This study hypothesized that the ecological nature of the social stimuli would affect participants' social attention, with gaze behavior during more naturalistic scenes being most predictive of ASD vs typical development.
Abstract: Autism Spectrum Disorder (ASD) is characterized by social impairments that have been related to deficits in social attention, including diminished gaze to faces. Eye-tracking studies are commonly used to examine social attention and social motivation in ASD, but they vary in sensitivity. In this study, we hypothesized that the ecological nature of the social stimuli would affect participants' social attention, with gaze behavior during more naturalistic scenes being most predictive of ASD vs. typical development. Eighty-one children with and without ASD participated in three eye-tracking tasks that differed in the ecological relevance of the social stimuli. In the "Static Visual Exploration" task, static images of objects and people were presented; in the "Dynamic Visual Exploration" task, video clips of individual faces and objects were presented side-by-side; in the "Interactive Visual Exploration" task, video clips of children playing with objects in a naturalistic context were presented. Our analyses uncovered a three-way interaction between Task, Social vs. Object Stimuli, and Diagnosis. This interaction was driven by group differences on one task only-the Interactive task. Bayesian analyses confirmed that the other two tasks were insensitive to group membership. In addition, receiver operating characteristic analyses demonstrated that, unlike the other two tasks, the Interactive task had significant classification power. The ecological relevance of social stimuli is an important factor to consider for eye-tracking studies aiming to measure social attention and motivation in ASD.

Proceedings Article
07 Dec 2015
TL;DR: A deep neural network-based approach for gaze-following and a new benchmark dataset, GazeFollow, for thorough evaluation are proposed and it is shown that this approach produces reliable results, even when viewing only the back of the head.
Abstract: Humans have the remarkable ability to follow the gaze of other people to identify what they are looking at. Following eye gaze, or gaze-following, is an important ability that allows us to understand what other people are thinking, the actions they are performing, and even predict what they might do next. Despite the importance of this topic, this problem has only been studied in limited scenarios within the computer vision community. In this paper, we propose a deep neural network-based approach for gaze-following and a new benchmark dataset, GazeFollow, for thorough evaluation. Given an image and the location of a head, our approach follows the gaze of the person and identifies the object being looked at. Our deep network is able to discover how to extract head pose and gaze orientation, and to select objects in the scene that are in the predicted line of sight and likely to be looked at (such as televisions, balls and food). The quantitative evaluation shows that our approach produces reliable results, even when viewing only the back of the head. While our method outperforms several baseline approaches, we are still far from reaching human performance on this task. Overall, we believe that gaze-following is a challenging and important problem that deserves more attention from the community.

Proceedings ArticleDOI
16 May 2015
TL;DR: Inspired by the linearity that people exhibit when reading natural language text, local and global gaze-based measures were designed to characterize linearity in reading source code; the results indicate specific differences between reading natural language and source code, and suggest that non-linear reading skills increase with expertise.
Abstract: Code reading is an important skill in programming. Inspired by the linearity that people exhibit when reading natural language text, we designed local and global gaze-based measures to characterize linearity (left-to-right and top-to-bottom) in reading source code. Unlike natural language text, source code is executable and requires a specific reading approach. To validate these measures, we compared the eye movements of novice and expert programmers who were asked to read and comprehend short snippets of natural language text and Java programs. Our results show that novices read source code less linearly than natural language text. Moreover, experts read code less linearly than novices. These findings indicate that there are specific differences between reading natural language and source code, and suggest that non-linear reading skills increase with expertise. We discuss the implications for practitioners and educators.
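
One plausible local linearity measure, sketched below, is the fraction of saccades that move forward in reading order (rightward on the same line or down to a later line); this illustrates the idea only and is not the paper's exact set of local and global measures.

```python
# A sketch of a simple local linearity measure for reading: the fraction of
# fixation-to-fixation transitions that move forward in reading order.
def forward_saccade_fraction(fixations):
    """fixations: list of (line_number, column) in temporal order."""
    if len(fixations) < 2:
        return 1.0
    forward = 0
    for (l0, c0), (l1, c1) in zip(fixations, fixations[1:]):
        if l1 > l0 or (l1 == l0 and c1 > c0):
            forward += 1
    return forward / (len(fixations) - 1)

# Example: a perfectly linear left-to-right, top-to-bottom reading gives 1.0.
print(forward_saccade_fraction([(1, 0), (1, 5), (1, 9), (2, 0), (2, 4)]))
```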

Journal ArticleDOI
26 Aug 2015-PLOS ONE
TL;DR: The study offers insight into the temporal dynamics of live dyadic interactions, provides a new method of analysis for eye gaze data when temporal relationships are of interest, and, convergent with theoretical models of social interaction, suggests that eye gaze can signal both the end and the beginning of a speaking turn.
Abstract: Cognitive scientists have long been interested in the role that eye gaze plays in social interactions. Previous research suggests that gaze acts as a signaling mechanism and can be used to control turn-taking behaviour. However, early research on this topic employed methods of analysis that aggregated gaze information across an entire trial (or trials), which masks any temporal dynamics that may exist in social interactions. More recently, attempts have been made to understand the temporal characteristics of social gaze but little research has been conducted in a natural setting with two interacting participants. The present study combines a temporally sensitive analysis technique with modern eye tracking technology to 1) validate the overall results from earlier aggregated analyses and 2) provide insight into the specific moment-to-moment temporal characteristics of turn-taking behaviour in a natural setting. Dyads played two social guessing games (20 Questions and Heads Up) while their eyes were tracked. Our general results are in line with past aggregated data, and using cross-correlational analysis on the specific gaze and speech signals of both participants we found that 1) speakers end their turn with direct gaze at the listener and 2) the listener in turn begins to speak with averted gaze. Convergent with theoretical models of social interaction, our data suggest that eye gaze can be used to signal both the end and the beginning of a speaking turn during a social interaction. The present study offers insight into the temporal dynamics of live dyadic interactions and also provides a new method of analysis for eye gaze data when temporal relationships are of interest.
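
The cross-correlational analysis can be illustrated with a short sketch that correlates two time series (e.g., "A is speaking" and "B gazes at A") across a range of lags and reports the lag of maximum correlation; the signals below are synthetic placeholders, not the study's data.

```python
# A sketch of cross-correlating two behavioral time series to find the lag at
# which they are most related. The signals here are synthetic placeholders.
import numpy as np

def cross_correlation(a, b, max_lag):
    """Normalized cross-correlation of a and b for lags in [-max_lag, max_lag]."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    lags = np.arange(-max_lag, max_lag + 1)
    corr = []
    for lag in lags:
        if lag < 0:
            corr.append(np.mean(a[-lag:] * b[:lag]))
        elif lag > 0:
            corr.append(np.mean(a[:-lag] * b[lag:]))
        else:
            corr.append(np.mean(a * b))
    return lags, np.array(corr)

speaking = np.random.randint(0, 2, 600).astype(float)   # placeholder speech signal
gazing = np.roll(speaking, 30)                           # gaze shifted by ~30 samples
lags, corr = cross_correlation(speaking, gazing, 60)
print("peak lag:", lags[int(np.argmax(corr))])
```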

Journal ArticleDOI
TL;DR: This review article provides an overview of the efforts made to tackle this demanding task and discusses how these findings can be synthesized in computer graphics and utilized in the domains of Human-Robot Interaction and Human-Computer Interaction to allow humans to interact with virtual agents and other artificial entities.
Abstract: A person's emotions and state of mind are apparent in their face and eyes. As a Latin proverb states: 'The face is the portrait of the mind; the eyes, its informers'. This presents a significant challenge for Computer Graphics researchers who generate artificial entities that aim to replicate the movement and appearance of the human eye, which is so important in human-human interactions. This review article provides an overview of the efforts made on tackling this demanding task. As with many topics in computer graphics, a cross-disciplinary approach is required to fully understand the workings of the eye in the transmission of information to the user. We begin with a discussion of the movement of the eyeballs, eyelids and the head from a physiological perspective and how these movements can be modelled, rendered and animated in computer graphics applications. Furthermore, we present recent research from psychology and sociology that seeks to understand higher level behaviours, such as attention and eye gaze, during the expression of emotion or during conversation. We discuss how these findings are synthesized in computer graphics and can be utilized in the domains of Human-Robot Interaction and Human-Computer Interaction for allowing humans to interact with virtual agents and other artificial entities. We conclude with a summary of guidelines for animating the eye and head from the perspective of a character animator.

Journal ArticleDOI
TL;DR: In this paper, the influence of different ways of disclosing brand placement on viewers' visual attention, the use of persuasion knowledge, and brand responses was investigated, and the results showed that a combination of text and a product placement logo was most effective in enhancing the recognition of advertising and that a logo alone was least effective.
Abstract: This eye tracking experiment (N = 149) investigates the influence of different ways of disclosing brand placement on viewers’ visual attention, the use of persuasion knowledge, and brand responses. The results showed that (1) a combination of text (“This program contains product placement”) and a product placement (PP) logo was most effective in enhancing the recognition of advertising and that a logo alone was least effective; (2) this effect was mediated by viewers’ visual attention to the disclosure and brand placement; and (3) the recognition of advertising consequently increased brand memory and led to more negative brand attitudes.

Journal ArticleDOI
TL;DR: It is shown that if an experimenter passively knows when individuals move their eyes, the experimenter can change complex moral decisions simply by adjusting the timing of the decisions.
Abstract: Eye gaze is a window onto cognitive processing in tasks such as spatial memory, linguistic processing, and decision making. We present evidence that information derived from eye gaze can be used to change the course of individuals’ decisions, even when they are reasoning about high-level, moral issues. Previous studies have shown that when an experimenter actively controls what an individual sees the experimenter can affect simple decisions with alternatives of almost equal valence. Here we show that if an experimenter passively knows when individuals move their eyes the experimenter can change complex moral decisions. This causal effect is achieved by simply adjusting the timing of the decisions. We monitored participants’ eye movements during a two-alternative forced-choice task with moral questions. One option was randomly predetermined as a target. At the moment participants had fixated the target option for a set amount of time we terminated their deliberation and prompted them to choose between the two alternatives. Although participants were unaware of this gaze-contingent manipulation, their choices were systematically biased toward the target option. We conclude that even abstract moral cognition is partly constituted by interactions with the immediate environment and is likely supported by gaze-dependent decision processes. By tracking the interplay between individuals, their sensorimotor systems, and the environment, we can influence the outcome of a decision without directly manipulating the content of the information available to them.
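
The gaze-contingent trigger described, accumulating dwell time on the predetermined target option and interrupting deliberation once a threshold is reached, can be sketched as follows; the dwell threshold, sample interval and rectangular area-of-interest handling are illustrative assumptions, not the study's exact parameters.

```python
# A sketch of a gaze-contingent trigger: deliberation is interrupted once the
# participant has dwelled on the predetermined target option long enough.
def should_prompt(gaze_samples, target_aoi, dwell_ms=750, sample_ms=16.7):
    """gaze_samples: iterable of (x, y); target_aoi: (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = target_aoi
    dwell = 0.0
    for x, y in gaze_samples:
        if x0 <= x <= x1 and y0 <= y <= y1:
            dwell += sample_ms
            if dwell >= dwell_ms:
                return True        # terminate deliberation, prompt the choice
        else:
            dwell = 0.0            # reset on leaving the target (one possible policy)
    return False
```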

Journal ArticleDOI
TL;DR: An algorithm for eye tracking, in which the Cartesian coordinates of the right and left pupils are tracked over 200 sec and compared to each other as a subject watches a short film clip moving inside an aperture on a computer screen, helps quantify the severity of ocular motility disruption associated with concussion and structural brain injury.
Abstract: Disconjugate eye movements have been associated with traumatic brain injury since ancient times. Ocular motility dysfunction may be present in up to 90% of patients with concussion or blast injury. We developed an algorithm for eye tracking in which the Cartesian coordinates of the right and left pupils are tracked over 200 sec and compared to each other as a subject watches a short film clip moving inside an aperture on a computer screen. We prospectively eye tracked 64 normal healthy noninjured control subjects and compared findings to 75 trauma subjects with either a positive head computed tomography (CT) scan (n=13), negative head CT (n=39), or nonhead injury (n=23) to determine whether eye tracking would reveal the disconjugate gaze associated with both structural brain injury and concussion. Tracking metrics were then correlated to the clinical concussion measure Sport Concussion Assessment Tool 3 (SCAT3) in trauma patients. Five out of five measures of horizontal disconjugacy were increased...
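
A strongly simplified disconjugacy measure in the spirit of this approach is the variability of the difference between the left- and right-eye horizontal pupil positions over the recording, as sketched below; the published algorithm uses several richer metrics, so this is an illustrative reduction only.

```python
# An illustrative reduction, not the authors' algorithm: horizontal
# disconjugacy as the variability of the left-right pupil position difference.
import numpy as np

def horizontal_disconjugacy(left_x, right_x):
    """Standard deviation of the left-right horizontal position difference."""
    left_x = np.asarray(left_x, dtype=float)
    right_x = np.asarray(right_x, dtype=float)
    return float(np.std(left_x - right_x))
```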

Journal ArticleDOI
TL;DR: Age-related eye-movement changes as measured in the laboratory only partly resemble those in the real world, and the importance of validity for natural situations when studying the impact of aging on real-life performance is highlighted.
Abstract: The effects of aging on eye movements are well studied in the laboratory. Increased saccade latencies or decreased smooth-pursuit gain are well established findings. The question remains whether these findings are influenced by the rather untypical environment of a laboratory; that is, whether or not they transfer to the real world. We measured 34 healthy participants between the age of 25 and 85 during two everyday tasks in the real world: (I) walking down a hallway with free gaze, (II) visual tracking of an earth-fixed object while walking straight-ahead. Eye movements were recorded with a mobile light-weight eye tracker, the EyeSeeCam (ESC). We find that age significantly influences saccade parameters. With increasing age, saccade frequency, amplitude, peak velocity, and mean velocity are reduced and the velocity/amplitude distribution as well as the velocity profile become less skewed. In contrast to laboratory results on smooth pursuit, we did not find a significant effect of age on tracking eye-movements in the real world. Taken together, age-related eye-movement changes as measured in the laboratory only partly resemble those in the real world. It is well-conceivable that in the real world additional sensory cues, such as head-movement or vestibular signals, may partially compensate for age-related effects, which, according to this view, would be specific to early motion processing. In any case, our results highlight the importance of validity for natural situations when studying the impact of aging on real-life performance.

Proceedings ArticleDOI
18 May 2015
TL;DR: This paper presents the first statistical model to predict readers with and without dyslexia using eye tracking measures, based on a Support Vector Machine binary classifier, with 80.18% accuracy.
Abstract: Worldwide, around 10% of the population has dyslexia, a specific learning disorder. Most previous eye tracking experiments with people with and without dyslexia have found differences between populations, suggesting that eye movements reflect the difficulties of individuals with dyslexia. In this paper, we present the first statistical model to predict readers with and without dyslexia using eye tracking measures. The model is trained and evaluated in a 10-fold cross-validation experiment with a dataset composed of 1,135 readings of people with and without dyslexia that were recorded with an eye tracker. Our model, based on a Support Vector Machine binary classifier, reaches 80.18% accuracy using the most informative features. To the best of our knowledge, this is the first time that eye tracking measures have been used to automatically predict readers with dyslexia using machine learning.
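
The evaluation setup described, an SVM binary classifier scored with 10-fold cross-validation over per-reading eye tracking features, might look like the scikit-learn sketch below; the feature matrix and labels are random placeholders and the kernel and scaling choices are assumptions.

```python
# A hedged sketch of the evaluation setup: SVM + 10-fold cross-validation.
# Data below are random placeholders, not the study's dataset.
import numpy as np
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(1135, 12))      # e.g. fixation durations, saccade lengths, ...
y = rng.integers(0, 2, size=1135)    # 1 = reader with dyslexia (placeholder labels)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"mean 10-fold accuracy: {scores.mean():.3f}")
```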

Proceedings ArticleDOI
09 Nov 2015
TL;DR: Results show that gaze+gesture can outperform systems using gaze or gesture alone, and in general, approach the performance of "gold standard" input systems, such as the mouse and trackpad.
Abstract: Humans rely on eye gaze and hand manipulations extensively in their everyday activities. Most often, users gaze at an object to perceive it and then use their hands to manipulate it. We propose applying a multimodal, gaze plus free-space gesture approach to enable rapid, precise and expressive touch-free interactions. We show the input methods are highly complementary, mitigating issues of imprecision and limited expressivity in gaze-alone systems, and issues of targeting speed in gesture-alone systems. We extend an existing interaction taxonomy that naturally divides the gaze+gesture interaction space, which we then populate with a series of example interaction techniques to illustrate the character and utility of each method. We contextualize these interaction techniques in three example scenarios. In our user study, we pit our approach against five contemporary approaches; results show that gaze+gesture can outperform systems using gaze or gesture alone, and in general, approach the performance of "gold standard" input systems, such as the mouse and trackpad.

Journal ArticleDOI
TL;DR: A reliable, standardized protocol that appears to differentiate mTBI from normals was developed for use in future research, and represents a step toward objective identification of those with PCS.
Abstract: OBJECTIVES: Objective measures to diagnose and to monitor improvement of symptoms following mild traumatic brain injury (mTBI) are lacking. Computerized eye tracking has been advocated as a rapid, user friendly, and field-ready technique to meet this need. DESIGN: Eye-tracking data collected via a head-mounted, video-based binocular eye tracker was used to examine saccades, fixations, and smooth pursuit movement in military Service Members with postconcussive syndrome (PCS) and asymptomatic control subjects in an effort to determine if eye movement differences could be found and quantified. PARTICIPANTS: Sixty Military Service Members with PCS and 26 asymptomatic controls. OUTCOME MEASURES: The diagnosis of mTBI was confirmed by the study physiatrist's history, physical examination, and a review of any medical records. Various features of saccades, fixation and smooth pursuit eye movements were analyzed. RESULTS: Subjects with symptomatic mTBI had statistically larger position errors, smaller saccadic amplitudes, smaller predicted peak velocities, smaller peak accelerations, and longer durations. Subjects with symptomatic mTBI were also less likely to follow a target movement (less primary saccades). In general, symptomatic mTBI tracked the stepwise moving targets less accurately, revealing possible brain dysfunction. CONCLUSIONS: A reliable, standardized protocol that appears to differentiate mTBI from normals was developed for use in future research. This investigation represents a step toward objective identification of those with PCS. Future studies focused on increasing the specificity of eye movement differences in those with PCS are needed.
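
Basic saccade features of the kind analyzed here (amplitude, duration, peak velocity) can be extracted from a gaze trace with a simple velocity-threshold detector, as sketched below; the sampling rate and velocity threshold are illustrative assumptions, not the study's acquisition parameters.

```python
# A sketch of extracting per-saccade features from a gaze trace with a simple
# velocity-threshold (I-VT style) detector. Parameters are illustrative.
import numpy as np

def saccade_features(x, y, fs=250.0, vel_threshold=30.0):
    """x, y in degrees; fs in Hz; threshold in deg/s. Returns one dict per saccade."""
    vx = np.gradient(np.asarray(x, float)) * fs
    vy = np.gradient(np.asarray(y, float)) * fs
    speed = np.hypot(vx, vy)
    in_saccade = speed > vel_threshold
    features, start = [], None
    for i, flag in enumerate(np.append(in_saccade, False)):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            amp = np.hypot(x[i - 1] - x[start], y[i - 1] - y[start])
            features.append({
                "duration_ms": (i - start) / fs * 1000.0,
                "amplitude_deg": float(amp),
                "peak_velocity_deg_s": float(speed[start:i].max()),
            })
            start = None
    return features
```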

Journal ArticleDOI
TL;DR: Converging evidence reveals dissociations between the contents of perceptual awareness and different types of eye movement, including situations in which eye movements are sensitive to particular visual features that fail to modulate perceptual reports.

Patent
02 Mar 2015
TL;DR: In this patent, an image or data capture device associated with a display device may capture an image of a space associated with the user or capture data related to other objects in the space.
Abstract: Methods and systems are described for determining an image resource allocation for displaying content within a display area. An image or data capture device associated with a display device may capture an image of a space associated with the user or capture data related to other objects in the space. The viewing distance between the user and the display area (e.g., the display device) may be monitored and processed to determine and/or adjust the image resource allocation for content displayed within the display area. User movement, including eye movement, may also be monitored and processed to determine and/or adjust the image resource allocation for content displayed within the display area.

Journal ArticleDOI
TL;DR: An automated system is presented to measure optomotor and optokinetic responses under identical stimulation conditions, enabling a direct comparison of the two reflexes and offering a significant improvement over existing systems that rely on subjective human observation.