Author

Yuki Shiga

Bio: Yuki Shiga is an academic researcher from Osaka Prefecture University. The author has contributed to research in topics: Eye movement & Reading (process). The author has an h-index of 3 and has co-authored 4 publications receiving 102 citations.

Papers
Proceedings ArticleDOI
08 Sep 2013
TL;DR: This work investigates whether different document types can be automatically detected from visual behaviour recorded using a mobile eye tracker, and presents an initial recognition approach that uses special-purpose eye movement features as well as machine learning for document type detection.
Abstract: Reading is a ubiquitous activity that many people even perform in transit, such as while on the bus or while walking. Tracking reading enables us to gain more insights into the expertise level and potential knowledge of users -- towards reading log tracking and improved knowledge acquisition. As a first step towards this vision, in this work we investigate whether different document types can be automatically detected from visual behaviour recorded using a mobile eye tracker. We present an initial recognition approach that combines special-purpose eye movement features with machine learning for document type detection. We evaluate our approach in a user study with eight participants and five Japanese document types and achieve a recognition performance of 74% using user-independent training.
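
The abstract outlines the approach only at a high level: eye movement features plus a machine learning classifier, evaluated user-independently. The sketch below is a minimal illustration under those assumptions; the feature set, the SVM classifier, and the leave-one-participant-out evaluation are placeholders, not the authors' actual pipeline.

```python
# Hypothetical sketch: document type detection from eye movement features.
# Assumes fixation durations and signed saccade displacements per recording are
# already extracted; features, classifier, and evaluation only mirror the
# abstract's outline and are not the authors' implementation.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

def eye_movement_features(fix_durations, sacc_dx, sacc_dy):
    """Summary statistics over one recording; sacc_dx/dy are signed saccade displacements."""
    amplitudes = np.hypot(sacc_dx, sacc_dy)
    return np.array([
        np.mean(fix_durations), np.std(fix_durations),  # fixation timing
        np.mean(amplitudes), np.std(amplitudes),        # saccade length
        np.mean(sacc_dx < 0),                           # fraction of backward (regressive) saccades
        np.mean(np.abs(sacc_dy) > np.abs(sacc_dx)),     # fraction of mostly-vertical saccades (line changes)
    ])

def evaluate(X, y, groups):
    """X: one feature vector per recording, y: document type, groups: participant id.
    Leave-one-participant-out corresponds to user-independent training."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    return cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut()).mean()
```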

77 citations

Proceedings ArticleDOI
25 Aug 2013
TL;DR: A new paradigm focusing on recognizing the activities and habits of users while they are reading is introduced, and evidence is presented that reading and non-reading related activities can be separated over 3 users using 6 classes, perfectly separating reading from non-reading.
Abstract: The document analysis community spends substantial resources towards computer recognition of any type of text (e.g. characters, handwriting, document structure etc.). In this paper, we introduce a new paradigm focusing on recognizing the activities and habits of users while they are reading. We describe the differences from the traditional approaches of document analysis. We present initial work towards recognizing reading activities. We report our initial findings using a commercial, dry-electrode Electroencephalography (EEG) system. We show the feasibility of distinguishing reading tasks for 3 different document genres with one user at near-perfect accuracy. Distinguishing reading tasks for 3 different document types, we achieve 97% accuracy with user-specific training. We present evidence that reading and non-reading related activities can be separated over 3 users using 6 classes, perfectly separating reading from non-reading. A simple EEG system seems promising for distinguishing the reading of different document genres.
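
The abstract reports results from a commercial dry-electrode EEG headset without detailing the features or classifier. As a hedged illustration, a common baseline for this kind of reading-task classification is per-channel band power fed to a standard classifier; the bands, window handling, and classifier below are assumptions, not the paper's method.

```python
# Illustrative baseline only: per-channel EEG band-power features + a standard
# classifier. Band definitions, window handling, and the classifier are assumptions.
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestClassifier

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # Hz, assumed

def band_power_features(window, fs):
    """window: (n_channels, n_samples) EEG segment; returns log band power per channel."""
    freqs, psd = welch(window, fs=fs, nperseg=min(256, window.shape[1]), axis=1)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(np.log(psd[:, mask].mean(axis=1) + 1e-12))
    return np.concatenate(feats)

# Windows are labelled with the document genre (or reading vs. non-reading);
# training on one user's data corresponds to the user-specific setting above.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
```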

20 citations

Proceedings ArticleDOI
13 Sep 2014
TL;DR: A method for recognition of user daily activities using gaze motion features and image-based visual features is presented; the fusion of these different types of features improves the performance of user daily activity recognition.
Abstract: Recognition of user activities is a key issue for context-aware computing. We present a method for recognition of user daily activities using gaze motion features and image-based visual features. Gaze motion features dominate for inferring the user's egocentric context, whereas image-based visual features dominate for recognition of the environments and the target objects. The experimental results show that the fusion of these different types of features improves the performance of user daily activity recognition.
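
The abstract describes feature-level fusion of gaze motion features and image-based visual features without specifying the descriptors. A minimal sketch of such a fusion, assuming plain concatenation before a single classifier, could look like this; both feature extractors and the classifier are illustrative placeholders.

```python
# Sketch of feature-level fusion: gaze motion features concatenated with
# image-based visual features before a single classifier. Both extractors
# and the classifier are placeholders, not the paper's descriptors.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def gaze_motion_features(gaze_xy, dt):
    """gaze_xy: (n, 2) gaze positions over one segment; crude motion statistics."""
    v = np.diff(gaze_xy, axis=0) / dt
    speed = np.linalg.norm(v, axis=1)
    return np.array([speed.mean(), speed.std(), speed.max(),
                     (speed < 30.0).mean()])  # rough fixation ratio; threshold assumed

def visual_features(frame_descriptor):
    """frame_descriptor: precomputed image descriptor, e.g. a colour or bag-of-words histogram."""
    return frame_descriptor / (frame_descriptor.sum() + 1e-9)

def fused_vector(gaze_xy, dt, frame_descriptor):
    return np.concatenate([gaze_motion_features(gaze_xy, dt),
                           visual_features(frame_descriptor)])

clf = make_pipeline(StandardScaler(), LinearSVC())  # trained on fused vectors per segment
```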

19 citations


Cited by
Proceedings ArticleDOI
14 Jun 2018
TL;DR: This work presents a novel learning-based method for eye region landmark localization that enables conventional methods to be competitive with the latest appearance-based methods and exceeds the state of the art for iris localization and eye shape registration on real-world imagery.
Abstract: Conventional feature-based and model-based gaze estimation methods have proven to perform well in settings with controlled illumination and specialized cameras. In unconstrained real-world settings, however, such methods are surpassed by recent appearance-based methods due to difficulties in modeling factors such as illumination changes and other visual artifacts. We present a novel learning-based method for eye region landmark localization that enables conventional methods to be competitive with the latest appearance-based methods. Despite having been trained exclusively on synthetic data, our method exceeds the state of the art for iris localization and eye shape registration on real-world imagery. We then use the detected landmarks as input to iterative model-fitting and lightweight learning-based gaze estimation methods. Our approach outperforms existing model-fitting and appearance-based methods in the context of person-independent and personalized gaze estimation.
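
The method couples learned eye-region landmarks with iterative model fitting for gaze estimation. The snippet below is only a toy geometric reduction of that idea, assuming the eyeball centre, the iris-centre landmark, and a fixed eyeball radius are known in image coordinates; it is not the paper's fitting procedure.

```python
# Toy geometric reduction: given an estimated eyeball centre, an iris-centre
# landmark, and an assumed eyeball radius (all in image pixels), recover a unit
# gaze direction. This is a simplification for illustration only.
import numpy as np

def gaze_from_landmarks(eyeball_center_px, iris_center_px, eyeball_radius_px):
    """Returns a unit gaze vector; a negative z component means the eye points
    back toward the camera (sign convention assumed)."""
    dx, dy = iris_center_px - eyeball_center_px
    r = float(eyeball_radius_px)
    planar = min(np.hypot(dx, dy), r)   # clamp so the square root stays real for noisy landmarks
    dz = -np.sqrt(r**2 - planar**2)
    g = np.array([dx, dy, dz], dtype=float)
    return g / np.linalg.norm(g)

# Example: iris displaced 8 px right and 3 px down from the eyeball centre, radius 35 px.
print(gaze_from_landmarks(np.array([120.0, 80.0]), np.array([128.0, 83.0]), 35.0))
```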

139 citations

Proceedings ArticleDOI
15 Sep 2015
TL;DR: An architecture based on head- and eye-tracking data is introduced in this study, and several features are analyzed, showing promising results towards in-vehicle driver-activity recognition.
Abstract: This paper presents a novel approach to automated recognition of the driver's activity, which is a crucial factor for determining take-over readiness in conditionally autonomous driving scenarios. To this end, an architecture based on head- and eye-tracking data is introduced in this study, and several features are analyzed. The proposed approach is evaluated on data recorded during a driving simulator study with 73 subjects performing different secondary tasks while driving in an autonomous setting. The proposed architecture shows promising results towards in-vehicle driver-activity recognition. Furthermore, a significant improvement in classification performance is demonstrated due to the consideration of novel features derived especially for the autonomous driving context.
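
The abstract does not enumerate the head- and eye-tracking features. The following sliding-window sketch is an assumption about what such a pipeline might look like; the statistics, the gaze threshold, and the classifier are illustrative only.

```python
# Assumed sliding-window pipeline for driver-activity recognition from
# head- and eye-tracking signals; statistics, threshold, and classifier are illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def window_features(head_yaw, head_pitch, gaze_x, gaze_y):
    """One fixed-length window of synchronized head pose and normalized gaze samples."""
    sig = np.vstack([head_yaw, head_pitch, gaze_x, gaze_y])
    return np.concatenate([
        sig.mean(axis=1), sig.std(axis=1),          # level and variability per channel
        np.abs(np.diff(sig, axis=1)).mean(axis=1),  # average movement speed per channel
        [np.mean(np.abs(gaze_x) < 0.1)],            # fraction of gaze samples near road centre (threshold assumed)
    ])

# Each window is labelled with the secondary task performed while driving;
# a gradient-boosted classifier is one plausible choice, not the study's.
clf = GradientBoostingClassifier()
```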

103 citations

Journal ArticleDOI
07 Jan 2016-Sensors
TL;DR: A review of the state of the art of egocentric vision systems for the recognition of ADLs following a hierarchical structure: motion, action and activity levels, where each level provides higher semantic information and involves a longer time frame.
Abstract: Video-based recognition of activities of daily living (ADLs) is being used in ambient assisted living systems in order to support the independent living of older people. However, current systems based on cameras located in the environment present a number of problems, such as occlusions and a limited field of view. Recently, wearable cameras have begun to be exploited. This paper presents a review of the state of the art of egocentric vision systems for the recognition of ADLs following a hierarchical structure: motion, action and activity levels, where each level provides higher semantic information and involves a longer time frame. The current egocentric vision literature suggests that ADLs recognition is mainly driven by the objects present in the scene, especially those associated with specific tasks. However, although object-based approaches have proven popular, object recognition remains a challenge due to the intra-class variations found in unconstrained scenarios. As a consequence, the performance of current systems is far from satisfactory.
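
The review organizes egocentric ADL recognition into motion, action, and activity levels of increasing semantics and time span. A small data-structure sketch of that hierarchy, with invented example labels, might be:

```python
# Illustrative encoding of the review's motion -> action -> activity hierarchy.
# Class names and example labels are invented, not taken from the paper.
from dataclasses import dataclass
from typing import List

@dataclass
class MotionSegment:      # lowest level: short-term ego-motion or hand motion
    label: str            # e.g. "reach", "walk", "head-turn"
    start_s: float
    end_s: float

@dataclass
class Action:             # mid level: motion segments plus the manipulated object
    label: str            # e.g. "pour water", "open fridge"
    object_in_scene: str
    motions: List[MotionSegment]

@dataclass
class Activity:           # highest level: a long ADL composed of actions
    label: str            # e.g. "prepare breakfast"
    actions: List[Action]
```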

99 citations

Proceedings ArticleDOI
08 Sep 2013
TL;DR: This work investigates whether different document types can be automatically detected from visual behaviour recorded using a mobile eye tracker, and presents an initial recognition approach that uses special-purpose eye movement features as well as machine learning for document type detection.
Abstract: Reading is a ubiquitous activity that many people even perform in transit, such as while on the bus or while walking. Tracking reading enables us to gain more insights into the expertise level and potential knowledge of users -- towards reading log tracking and improved knowledge acquisition. As a first step towards this vision, in this work we investigate whether different document types can be automatically detected from visual behaviour recorded using a mobile eye tracker. We present an initial recognition approach that combines special-purpose eye movement features with machine learning for document type detection. We evaluate our approach in a user study with eight participants and five Japanese document types and achieve a recognition performance of 74% using user-independent training.

77 citations

Journal ArticleDOI
TL;DR: The authors describe a blueprint of the embedded architecture of smart eyeglasses, identify various software app clusters that provide frequently needed sensing and interaction, and develop universal assistance systems that remain unobtrusive and can thus support wearers throughout their daily life.
Abstract: The authors discuss the vast application potential of multipurpose smart eyeglasses that integrate into the form factor of traditional spectacles and provide frequently needed sensing and interaction. In combination with software apps running on smart eyeglasses, the authors develop universal assistance systems that remain unobtrusive and thus can support wearers throughout their daily life. They describe a blueprint of the embedded architecture of smart eyeglasses and identify various software app clusters. They discuss findings from using smart eyeglasses prototypes in three case studies: to recognize cognitive workload, quantify reading habits, and monitor light exposure to estimate the circadian phase. This article is part of a special issue on digitally enhanced reality.

77 citations