
Showing papers by Hans Gellersen published in 2019


Proceedings ArticleDOI
17 Oct 2019
TL;DR: This work proposes to leverage the synergetic movement of eye and head, and identifies design principles for Eye&Head gaze interaction, to enable dynamic coupling of gaze and pointer, hover interaction, visual exploration around pre-selections, and iterative and fast confirmation of targets.
Abstract: Eye gaze involves the coordination of eye and head movement to acquire gaze targets, but existing approaches to gaze pointing are based on eye-tracking in abstraction from head motion. We propose to leverage the synergetic movement of eye and head, and identify design principles for Eye&Head gaze interaction. We introduce three novel techniques that build on the distinction of head-supported versus eyes-only gaze, to enable dynamic coupling of gaze and pointer, hover interaction, visual exploration around pre-selections, and iterative and fast confirmation of targets. We demonstrate Eye&Head interaction on applications in virtual reality, and evaluate our techniques against baselines in pointing and confirmation studies. Our results show that Eye&Head techniques enable novel gaze behaviours that provide users with more control and flexibility in fast gaze pointing and selection.
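
The techniques rest on distinguishing head-supported gaze shifts from eyes-only ones. The abstract does not give an algorithm for this, so the following is only a minimal sketch of how such a distinction could be made from head angular velocity and eye-in-head eccentricity; the threshold values and data structure are illustrative assumptions, not values from the paper.

```python
from dataclasses import dataclass

# Illustrative thresholds, not values from the paper.
HEAD_VELOCITY_THRESHOLD = 5.0   # deg/s of head rotation during the gaze shift
ECCENTRICITY_THRESHOLD = 15.0   # deg of gaze direction relative to head forward

@dataclass
class GazeSample:
    head_velocity: float      # angular speed of the head (deg/s)
    eye_in_head_angle: float  # gaze eccentricity relative to head orientation (deg)

def classify_gaze_shift(samples: list[GazeSample]) -> str:
    """Label a gaze shift as 'head-supported' or 'eyes-only'.

    A shift counts as head-supported if the head moves appreciably or the
    eyes end up far off the head's forward direction; otherwise it is
    treated as an eyes-only shift.
    """
    peak_head_velocity = max(s.head_velocity for s in samples)
    final_eccentricity = samples[-1].eye_in_head_angle
    if peak_head_velocity > HEAD_VELOCITY_THRESHOLD or final_eccentricity > ECCENTRICITY_THRESHOLD:
        return "head-supported"
    return "eyes-only"

# Example: a small shift made with the eyes alone.
shift = [GazeSample(head_velocity=1.2, eye_in_head_angle=8.0),
         GazeSample(head_velocity=0.9, eye_in_head_angle=9.5)]
print(classify_gaze_shift(shift))  # eyes-only
```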

75 citations


Journal ArticleDOI
TL;DR: A study of gaze shifts in virtual reality is reported, aimed at addressing this gap and informing design, arguing to treat gaze as multimodal input, and eye, head and body movement as synergetic in interaction design.
Abstract: Humans perform gaze shifts naturally through a combination of eye, head and body movements. Although gaze has been long studied as input modality for interaction, this has previously ignored the coordination of the eyes, head and body. This article reports a study of gaze shifts in virtual reality aimed to address the gap and inform design. We identify general eye, head and torso coordination patterns and provide an analysis of the relative movements’ contribution and temporal alignment. We quantify effects of target distance, direction and user posture, describe preferred eye-in-head motion ranges and identify a high variability in head movement tendency. Study insights lead us to propose gaze zones that reflect different levels of contribution from eye, head and body. We discuss design implications for HCI and VR, and in conclusion argue to treat gaze as multimodal input, and eye, head and body movement as synergetic in interaction design.
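
The proposed gaze zones are not specified numerically in the abstract. The sketch below only illustrates the idea of mapping a target's eccentricity to zones with different levels of eye, head and body contribution; the zone names and boundary values are assumptions for illustration, not the article's findings.

```python
def gaze_zone(target_eccentricity_deg: float) -> str:
    """Map a target's horizontal eccentricity to an assumed gaze zone.

    Zone boundaries are illustrative placeholders, not values reported
    in the article.
    """
    if target_eccentricity_deg <= 20:
        return "eyes-only"            # comfortably reached without head movement
    if target_eccentricity_deg <= 60:
        return "eye-and-head"         # the head contributes to the gaze shift
    return "eye-head-and-torso"       # body rotation also contributes

for angle in (10, 45, 90):
    print(angle, "->", gaze_zone(angle))
```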

69 citations


Journal ArticleDOI
02 Aug 2019
TL;DR: Eye-tracking can distinguish between patients with the amnesic and the non-amnesic variants of MCI, providing further support for eye-tracking as a useful diagnostic biomarker in the assessment of dementia.
Abstract: Background: There is increasing evidence that people in the early stages of Alzheimer’s disease (AD) have subtle impairments in cognitive inhibition that can be detected using relatively simple eye-tracking paradigms, but these subtle impairments are often missed by traditional cognitive assessments. People with mild cognitive impairment (MCI) are at increased likelihood of dementia due to AD. No study has yet investigated and contrasted the MCI subtypes in relation to eye movement performance. Methods: In this work we explore whether eye-tracking impairments can distinguish between patients with the amnesic and the non-amnesic variants of MCI. Participants were 68 people with dementia due to AD, 42 with a diagnosis of aMCI, 47 with a diagnosis of naMCI, and 92 age-matched cognitively healthy controls. Results: The findings revealed that eye-tracking can distinguish between the two forms of MCI. Conclusions: The work provides further support for eye-tracking as a useful diagnostic biomarker in the assessment of dementia.

52 citations


Proceedings ArticleDOI
23 Mar 2019
TL;DR: EyeSeeThrough is introduced as a novel interaction technique that utilizes eye-tracking in VR to merge the two-step process of selecting a mode in a menu and applying it to a target into one unified interaction.
Abstract: In 2D interfaces, actions are often represented by fixed tools arranged in menus, palettes, or dedicated parts of a screen, whereas 3D interfaces afford their arrangement at different depths relative to the user, who can also move them relative to each other. In this paper, we introduce EyeSeeThrough as a novel interaction technique that utilizes eye-tracking in VR. The user can apply an action to an intended object by visually aligning the object with the tool along the line of sight, and then issuing a confirmation command. The underlying idea is to merge the two-step process of 1) selecting a mode in a menu and 2) applying it to a target into one unified interaction. We present a user study comparing the method to the baseline two-step selection; the results show that our technique outperforms the two-step selection in terms of speed and comfort. We further developed a prototype of a virtual living room to demonstrate the practicality of the proposed technique.
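
The core interaction is checking whether a tool and a target line up along the user's line of sight. A minimal sketch of such an alignment test with a gaze ray and spherical proxies follows; the geometry helpers, radii and object names are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def ray_hits_sphere(origin, direction, centre, radius) -> bool:
    """Return True if a ray (origin + t*direction, t >= 0) passes through a sphere."""
    direction = direction / np.linalg.norm(direction)
    to_centre = centre - origin
    t = float(np.dot(to_centre, direction))          # closest approach along the ray
    if t < 0:
        return False
    closest = origin + t * direction
    return float(np.linalg.norm(centre - closest)) <= radius

def eye_see_through_aligned(eye_pos, gaze_dir, tool_pos, target_pos,
                            tool_radius=0.05, target_radius=0.2) -> bool:
    """Tool and target are 'aligned' when one gaze ray intersects both,
    with the tool closer to the eye than the target."""
    hits_tool = ray_hits_sphere(eye_pos, gaze_dir, tool_pos, tool_radius)
    hits_target = ray_hits_sphere(eye_pos, gaze_dir, target_pos, target_radius)
    tool_closer = np.linalg.norm(tool_pos - eye_pos) < np.linalg.norm(target_pos - eye_pos)
    return hits_tool and hits_target and tool_closer

eye = np.array([0.0, 0.0, 0.0])
gaze = np.array([0.0, 0.0, 1.0])
tool = np.array([0.0, 0.0, 0.5])   # e.g. a colour-change tool held near the user
lamp = np.array([0.0, 0.0, 3.0])   # target object further away
print(eye_see_through_aligned(eye, gaze, tool, lamp))  # True: apply the tool on confirmation
```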

30 citations


Proceedings ArticleDOI
02 May 2019
TL;DR: It is shown that VOR eye movement presents an alternative to vergence for gaze depth estimation, which is also feasible with monocular tracking, and in an evaluation of its use for target disambiguation, the method outperforms vergence for targets presented at greater depth.
Abstract: Target disambiguation is a common problem in gaze interfaces, as eye tracking has accuracy and precision limitations. In 3D environments this is compounded by objects overlapping in the field of view as a result of their positioning at different depths, with partial occlusion. We introduce VOR depth estimation, a method based on the vestibulo-ocular reflex of the eyes in compensation for head movement, and explore its application to resolve target ambiguity. The method estimates gaze depth by comparing the rotations of the eye and the head when the user looks at a target and deliberately rotates their head. We show that VOR eye movement presents an alternative to vergence for gaze depth estimation, which is also feasible with monocular tracking. In an evaluation of its use for target disambiguation, our method outperforms vergence for targets presented at greater depth.
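
A simplified geometric reading of the idea: if the eye sits a distance r in front of the head's rotation axis, keeping gaze stable on a target at distance d during a head turn requires an eye rotation whose ratio to the head rotation (the VOR gain) grows as the target gets closer, roughly gain ≈ 1 + r/d. Under that assumption, overlapping candidate targets can be ranked by how well their depth predicts the observed gain. The sketch below illustrates this reading only; the constants and the simple gain model are assumptions, not the paper's method.

```python
EYE_TO_HEAD_AXIS = 0.1  # metres; assumed offset of the eye from the head's rotation axis

def expected_vor_gain(target_depth_m: float) -> float:
    """Approximate VOR gain for a stabilised target at the given depth
    (simplified model: gain ~ 1 + r/d)."""
    return 1.0 + EYE_TO_HEAD_AXIS / target_depth_m

def disambiguate(observed_eye_rotation_deg: float,
                 observed_head_rotation_deg: float,
                 candidate_depths_m: dict[str, float]) -> str:
    """Pick the candidate whose depth best explains the measured eye/head ratio."""
    observed_gain = abs(observed_eye_rotation_deg / observed_head_rotation_deg)
    return min(candidate_depths_m,
               key=lambda name: abs(expected_vor_gain(candidate_depths_m[name]) - observed_gain))

# Two overlapping targets at different depths; a deliberate head turn of 10 deg
# is compensated by about 12 deg of counter-rotating eye movement.
candidates = {"near button": 0.5, "far poster": 4.0}
print(disambiguate(12.0, 10.0, candidates))  # near button
```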

24 citations


Proceedings ArticleDOI
02 May 2019
TL;DR: Evidence is found that playing the game can affect players' visual skills, with greater peripheral awareness, and perceptual and interaction challenges are proposed that require players not to look and to rely on their periphery.
Abstract: In this work, we challenge the gaze interaction paradigm "What you see is what you get" to introduce "playing with peripheral vision". We developed a conceptual framework to introduce this novel gaze-aware game dynamic. We illustrated the concept with SuperVision, a collection of three games that play with peripheral vision. We propose perceptual and interaction challenges that require players not to look and to rely on their periphery. To validate the game dynamic and experience, we conducted a user study with twenty-four participants. Results show how the game concept created an engaging and playful experience of playing with peripheral vision. Participants showed proficiency in overcoming the game challenges, developing clear strategies to succeed. Moreover, we found evidence that playing the game can affect our visual skills, with greater peripheral awareness.
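
A central mechanic here is verifying that the player is not fixating a game object, i.e. that it stays in peripheral vision. A minimal sketch of such a check follows; the angular threshold is an assumption for illustration, not a value from the paper.

```python
import math

PERIPHERY_THRESHOLD_DEG = 10.0  # assumed: closer than this counts as "looking at" the object

def angle_between_deg(gaze_dir, object_dir) -> float:
    """Angle between the gaze direction and the direction to an object (3D vectors)."""
    dot = sum(g * o for g, o in zip(gaze_dir, object_dir))
    norm = math.sqrt(sum(g * g for g in gaze_dir)) * math.sqrt(sum(o * o for o in object_dir))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def in_periphery(gaze_dir, object_dir) -> bool:
    """True if the object is seen only peripherally, i.e. the player is not looking at it."""
    return angle_between_deg(gaze_dir, object_dir) > PERIPHERY_THRESHOLD_DEG

# Player looks straight ahead; an enemy approaches 30 degrees to the right.
gaze = (0.0, 0.0, 1.0)
enemy = (math.sin(math.radians(30)), 0.0, math.cos(math.radians(30)))
print(in_periphery(gaze, enemy))  # True: the player may react to it without looking
```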

17 citations


Proceedings ArticleDOI
17 Oct 2019
TL;DR: Twileyed is developed: a collection of three games that challenge the "common" use of gaze as a pointer to navigate, select, and aim, posing a challenging new way to play with the eyes.
Abstract: Gaze interaction in games has moved from being a tool for accessibility to being at the core of mass-market game franchises, offering enhanced controller performance and greater immersion. We propose to explore three different popular gaze-based interaction mechanics to create novel opportunities in the game design space. We developed Twileyed, a collection of three games that challenge the "common" use of gaze as a pointer to navigate, select, and aim, posing a challenging new way to play with the eyes. We used the games as data to reflect on the gaze design space. We asked users to play the games to validate them, and we observed their experience and strategies. Based on the observations, we discuss, through five themes, the dimensions of gaze interaction and the potential outcomes for creating engaging and playful gaze-enabled games. We contribute a position in gaze gameplay design, but also a conversation starter to engage the EyePlay research community.

10 citations


Journal ArticleDOI
TL;DR: It is revealed that the inhibitory errors of the past have a negative effect on the future performance of healthy adults as well as people with a neurodegenerative cognitive impairment.
Abstract: This work investigated, in Alzheimer's disease dementia (AD), whether the probability of making an error on a task (or a correct response) was influenced by the outcome of the previous trials. We used the antisaccade task (AST) as a model task, given the emerging consensus that it provides a promising, sensitive and early biological test of cognitive impairment in AD. It can be employed equally well in healthy young and old adults, and in clinical populations. This study examined eye movements in a sample of 202 participants (42 with dementia due to AD; 65 with mild cognitive impairment (MCI); 95 control participants). The findings revealed an overall increase in the frequency of AST errors in AD and MCI compared to the control group, as predicted. Errors on the current trial increased in proportion to the number of consecutive errors on the previous trials. Interestingly, the probability of errors was reduced on trials that followed a previously corrected error, compared to trials where the error remained uncorrected, revealing a level of adaptive control in participants with MCI or AD dementia. There was an earlier peak in the AST distribution of saccadic reaction times for the inhibitory errors in comparison to the correct saccades. These findings revealed that the inhibitory errors of the past have a negative effect on the future performance of healthy adults as well as people with a neurodegenerative cognitive impairment.
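
The sequential analysis described here can be illustrated by conditioning the error rate on the outcome of the preceding trial. The sketch below uses hypothetical trial labels and toy data chosen to mirror the reported pattern (more errors after uncorrected errors, fewer after corrected ones); it is not the study's analysis code.

```python
from collections import Counter

# Hypothetical per-trial outcomes: 'correct', 'corrected_error', 'uncorrected_error'.
trials = ["correct", "correct", "uncorrected_error", "uncorrected_error",
          "uncorrected_error", "correct", "corrected_error", "correct",
          "correct", "correct"]

def error_rate_by_previous_outcome(outcomes: list[str]) -> dict[str, float]:
    """P(error on trial t | outcome of trial t-1), pooling both error types as 'error'."""
    counts = Counter()
    errors = Counter()
    for prev, cur in zip(outcomes, outcomes[1:]):
        counts[prev] += 1
        if cur != "correct":
            errors[prev] += 1
    return {prev: errors[prev] / counts[prev] for prev in counts}

print(error_rate_by_previous_outcome(trials))
# e.g. higher error rate after an uncorrected error than after a corrected one
```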

9 citations


Proceedings ArticleDOI
25 Jun 2019
TL;DR: The results show that VOR gain is comparable with vergence in capturing depth while only requiring one eye, and provide insight into open challenges in harnessing VOR gain as a robust measure.
Abstract: Gaze depth estimation presents a challenge for eye tracking in 3D. This work investigates a novel approach to the problem based on eye movement mediated by the vestibulo-ocular reflex (VOR). VOR stabilises gaze on a target during head movement, with eye movement in the opposite direction, and the VOR gain increases the closer the fixated target is to the viewer. We present a theoretical analysis of the relationship between VOR gain and depth which we investigate with empirical data collected in a user study (N=10). We show that VOR gain can be captured using pupil centres, and propose and evaluate a practical method for gaze depth estimation based on a generic function of VOR gain and two-point depth calibration. The results show that VOR gain is comparable with vergence in capturing depth while only requiring one eye, and provide insight into open challenges in harnessing VOR gain as a robust measure.
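
A worked sketch of what a two-point depth calibration over a generic gain-depth function might look like. The abstract does not give the function's form; a simple offset + k/depth model is assumed here purely for illustration, with its two parameters fitted from two calibration fixations.

```python
def fit_gain_model(depth1_m, gain1, depth2_m, gain2):
    """Fit gain(d) = offset + k/d to two calibration measurements (assumed model form)."""
    # Two equations, two unknowns:
    #   gain1 = offset + k/depth1,  gain2 = offset + k/depth2
    k = (gain1 - gain2) / (1.0 / depth1_m - 1.0 / depth2_m)
    offset = gain1 - k / depth1_m
    return offset, k

def estimate_depth(gain, offset, k):
    """Invert the fitted model: d = k / (gain - offset)."""
    return k / (gain - offset)

# Hypothetical calibration: the user fixates targets at 0.5 m and 2.0 m while turning the head.
offset, k = fit_gain_model(0.5, 1.20, 2.0, 1.05)
print(round(estimate_depth(1.10, offset, k), 2))  # 1.0 m for an observed gain of 1.10
```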

8 citations


Proceedings ArticleDOI
19 Nov 2019
TL;DR: A framework containing a spectrum of five discrete categories for this unexpected use of gaze sensing is created and described based on whether specific game events mean the player might not, cannot, should not, must not, or does not look.
Abstract: Gaze interaction paradigms rely on the user needing to look at objects in the interface to select them or trigger actions. "Not looking" is an atypical and unexpected interaction to perform, but the eye-tracker can sense it. We illustrate the use of "not looking" as an interaction dynamic with examples of gaze-enabled games. We created a framework containing a spectrum of five discrete categories for this unexpected use of gaze sensing. For each category, we analyse games that use gaze interaction and make the user look away from the game action, up to the extent that they close their eyes. The framework is described based on whether specific game events mean the player might not, cannot, should not, must not, or does not look. Finally, we discuss the outcomes of using unexpected gaze interactions and the potential of the proposed framework as a new approach to guide the design of sensing-based interfaces.

6 citations


Proceedings ArticleDOI
25 Jun 2019
TL;DR: An open-source software for extracting and analyzing the eye movement data of different types of saccade tests that can be used to extract and compare participants' performance and various task-related measures across participants is described.
Abstract: Various types of saccadic paradigms, in particular prosaccade and antisaccade tests, are widely used in pathophysiology and psychology. Despite being widely used, there has not been a standard tool for processing and analyzing the eye tracking data obtained from saccade tests. We describe open-source software for extracting and analyzing the eye movement data of different types of saccade tests, which can be used to extract and compare participants' performance and various task-related measures across participants. We further demonstrate the utility of the software by using it to analyze data from an antisaccade and a recent-distractor experiment.
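
As an illustration of the kind of measure such a tool extracts, the sketch below detects saccade onset latency and directional (antisaccade) errors from a one-dimensional gaze trace using a simple velocity threshold. It is a generic sketch under assumed sampling rate and threshold values, not the software described in the paper.

```python
SAMPLE_RATE_HZ = 500
VELOCITY_THRESHOLD = 30.0  # deg/s; assumed saccade-onset criterion for illustration

def analyse_antisaccade_trial(gaze_x_deg: list[float], stimulus_side: int):
    """Return (latency_ms, is_error) for one antisaccade trial.

    gaze_x_deg: horizontal gaze positions sampled from stimulus onset.
    stimulus_side: +1 if the stimulus appeared to the right, -1 if to the left.
    An error is a first saccade made towards the stimulus instead of away from it.
    """
    dt = 1.0 / SAMPLE_RATE_HZ
    for i in range(1, len(gaze_x_deg)):
        velocity = (gaze_x_deg[i] - gaze_x_deg[i - 1]) / dt
        if abs(velocity) > VELOCITY_THRESHOLD:
            latency_ms = i * dt * 1000.0
            is_error = (velocity > 0) == (stimulus_side > 0)  # moved towards the stimulus
            return latency_ms, is_error
    return None, None  # no saccade detected within the trial

# Toy trial: stimulus on the right, gaze holds then jumps right (an inhibitory error).
trace = [0.0] * 90 + [0.5, 1.5, 3.0, 5.0, 7.0]
print(analyse_antisaccade_trial(trace, stimulus_side=+1))  # (180.0, True)
```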

24 Mar 2019
TL;DR: An analysis of contemporary interactive VR applications that support subtitles is presented to provide insight into current practices and design issues; three subtitle techniques that leverage eye tracking are proposed to address depth-related issues, and the design space of supporting subtitles with gaze is reflected on.
Abstract: Subtitles are a research area of growing importance in interactive virtual reality (VR), for localisation and as a hearing aid, as the medium becomes more accessible to the public. Currently, there are no established guidelines for implementing subtitles in VR. Depth is a particular issue for subtitles in VR, as it can result in occlusion or strain on users due to conflicting depth cues between the environment and subtitles. We present an analysis of contemporary interactive VR applications that support subtitles, to provide insight into current practices and design issues. Based on this, we propose three subtitle techniques that leverage eye tracking to address depth-related issues, and reflect on the design space of supporting subtitles with gaze.
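
One of the depth issues described, conflicting depth cues between the subtitle plane and the fixated content, suggests placing subtitles at (or just in front of) the depth the user is currently looking at. The sketch below illustrates one way gaze could drive subtitle depth; it is not necessarily any of the paper's three techniques, and the margin and comfort limit are assumptions.

```python
from typing import Optional

MIN_DEPTH_M = 0.5   # assumed comfort limit: never render subtitles closer than this
MARGIN_M = 0.05     # keep the subtitle plane slightly in front of the fixated surface

def subtitle_depth(fixated_depth_m: Optional[float], fallback_depth_m: float = 2.0) -> float:
    """Choose a rendering depth for the subtitle plane based on where the user looks.

    If gaze depth is unavailable (e.g. tracking loss), fall back to a fixed depth.
    """
    if fixated_depth_m is None:
        return fallback_depth_m
    return max(MIN_DEPTH_M, fixated_depth_m - MARGIN_M)

# User fixates a character standing 3 m away: render subtitles just in front of them.
print(round(subtitle_depth(3.0), 2))  # 2.95
print(subtitle_depth(None))           # 2.0 (fallback)
```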