Proceedings ArticleDOI

Unraveling Human Perception of Facial Aging Using Eye Gaze

TL;DR: Eye gaze is used as a medium to unravel the cues humans rely on when perceiving facial aging; the tasks of face age estimation and age-separated face verification are explored, and participants' eye gaze patterns are analyzed to understand the strategies human participants follow.
Abstract: Continuous efforts are being made to understand the human perception network with the purpose of developing enhanced computational models for vision-based tasks. In this paper, we utilize eye gaze as a medium to unravel the cues utilized by humans for the perception of facial aging. Specifically, we explore the tasks of face age estimation and age-separated face verification and analyze the eye gaze patterns of participants to understand the strategy followed by human participants. To facilitate this, eye gaze data from 50 participants is acquired using two different eye gaze trackers: Eye Tribe and GazePoint GP3. Comprehensive analysis of various eye movement metrics is performed with respect to different face parts to illustrate their relevance for age estimation and age-separated face verification tasks.
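
As an illustration of what such an analysis involves, below is a minimal sketch (not the authors' code) of computing two common eye movement metrics, fixation count and dwell time, per face part; the rectangular areas of interest, coordinate system, and example fixations are assumptions made for demonstration only.

```python
# Illustrative sketch (not the authors' code): aggregating basic eye movement
# metrics per face region from raw fixation data. AOI boxes, field names, and
# the example fixations are assumptions for demonstration only.
from dataclasses import dataclass

@dataclass
class Fixation:
    x: float         # horizontal gaze position in stimulus pixels
    y: float         # vertical gaze position in stimulus pixels
    duration: float  # fixation duration in milliseconds

# Hypothetical rectangular areas of interest: (x_min, y_min, x_max, y_max)
AOIS = {
    "eyes":     (100, 120, 300, 180),
    "nose":     (160, 180, 240, 260),
    "mouth":    (150, 260, 250, 320),
    "forehead": (100,  40, 300, 120),
}

def metrics_per_aoi(fixations):
    """Return fixation count and total dwell time (ms) for each face region."""
    stats = {name: {"fixation_count": 0, "dwell_time_ms": 0.0} for name in AOIS}
    for f in fixations:
        for name, (x0, y0, x1, y1) in AOIS.items():
            if x0 <= f.x <= x1 and y0 <= f.y <= y1:
                stats[name]["fixation_count"] += 1
                stats[name]["dwell_time_ms"] += f.duration
                break  # assign each fixation to at most one region
    return stats

if __name__ == "__main__":
    sample = [Fixation(210, 150, 240), Fixation(200, 210, 180), Fixation(190, 150, 300)]
    print(metrics_per_aoi(sample))
```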


Citations
Book ChapterDOI
01 Jan 2023
TL;DR: This chapter surveys the devices and libraries most used in gaze tracking solutions over the last 5 years, presents the most important characteristics of each, and offers selection criteria based on features such as price, technical performance, and compatibility.
Abstract: Gaze tracking is a technological discipline that offers an alternative form of interaction between humans and computers. A common gaze tracking solution has two elements: a hardware device to obtain data about the user's gaze and a library to process the gathered data. This work makes a formal analysis of previous works in gaze tracking. It presents the devices and libraries most used in gaze tracking solutions in the last 5 years, as well as the most important characteristics of each of them. To fulfill this purpose, a search for articles related to eye tracking solutions was carried out in different scientific databases. After that, the results were classified and reviewed. In addition, the article identifies the areas of IoT in which gaze tracking has had the most influence. Finally, various selection criteria for devices and libraries are offered. These criteria are based on features such as price, technical performance, and compatibility.

1 citation

References
Proceedings ArticleDOI
21 Jun 1994
TL;DR: This is the first reported work to classify age from facial images and to successfully extract and use natural wrinkles, based on cranio-facial changes in feature-position ratios and on skin wrinkle analysis.
Abstract: The ability to classify age from a facial image has not been pursued in computer vision. This research addresses the limited task of age classification of a facial image into one of three groups: baby, young adult, and senior adult. This is the first reported work to classify age, and to successfully extract and use natural wrinkles. We present a theory and practical computations for visual age classification from facial images, based on cranio-facial changes in feature-position ratios and on skin wrinkle analysis. Three age groups are classified.
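
For illustration only, a minimal sketch of the two-stage idea the abstract describes (a feature-position ratio first, wrinkle analysis second); the ratio definition, thresholds, and wrinkle measure are placeholder assumptions, not the authors' computations.

```python
# Illustrative sketch of a two-stage age-group classifier (not the authors'
# implementation): a cranio-facial ratio separates babies from adults, and a
# wrinkle score separates young adults from seniors. All thresholds, the ratio
# definition, and the wrinkle measure are placeholder assumptions.
def facial_ratio(eye_y, nose_y, chin_y):
    """Ratio of eye-to-nose distance over eye-to-chin distance (image coordinates)."""
    return (nose_y - eye_y) / (chin_y - eye_y)

def classify_age_group(eye_y, nose_y, chin_y, wrinkle_density,
                       ratio_threshold=0.28, wrinkle_threshold=0.15):
    # Infants have proportionally different feature positions than adults.
    if facial_ratio(eye_y, nose_y, chin_y) < ratio_threshold:
        return "baby"
    # Among adults, pronounced wrinkles indicate the senior group.
    return "senior adult" if wrinkle_density > wrinkle_threshold else "young adult"

print(classify_age_group(eye_y=140, nose_y=185, chin_y=290, wrinkle_density=0.22))
```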

402 citations


"Unraveling Human Perception of Faci..." refers background in this paper

  • ...As illustrated in Figure 1, researchers [2, 3] have observed that during the formative years of an individual, transformations in the facial shape are prominent while in later years, textural variations such as wrinkles and pigmentation are more noticeable....


Journal ArticleDOI
TL;DR: It is found that optimal recognition performance is achieved with two fixations; performance does not improve with additional fixations. The fixations made during face learning differ in location from those made during face recognition and are also more variable in duration; this suggests that different strategies are used for face learning and face recognition.
Abstract: It is well known that there exist preferred landing positions for eye fixations in visual word recognition. However, the existence of preferred landing positions in face recognition is less well established. It is also unknown how many fixations are required to recognize a face. To investigate these questions, we recorded eye movements during face recognition. During an otherwise standard face-recognition task, subjects were allowed a variable number of fixations before the stimulus was masked. We found that optimal recognition performance is achieved with two fixations; performance does not improve with additional fixations. The distribution of the first fixation is just to the left of the center of the nose, and that of the second fixation is around the center of the nose. Thus, these appear to be the preferred landing positions for face recognition. Furthermore, the fixations made during face learning differ in location from those made during face recognition and are also more variable in duration; this suggests that different strategies are used for face learning and face recognition.
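
As a rough sketch of how such preferred landing positions can be summarized from recorded eye movements (an assumed data layout, not the study's), one can average where the first and second fixations land relative to the face center:

```python
# Illustrative sketch (assumed data layout, not from the study): averaging where
# the first and second fixations land relative to the centre of the face stimulus.
import statistics

# Each trial: ordered list of (x, y) fixation positions; face centre assumed at (0, 0),
# with negative x to the viewer's left.
trials = [
    [(-12, 5), (-3, -2), (8, 10)],
    [(-8, 2), (1, -4)],
    [(-15, 7), (-2, 0), (4, 3)],
]

def mean_landing_position(trials, index):
    """Mean (x, y) of the fixation at a given ordinal position across trials."""
    points = [t[index] for t in trials if len(t) > index]
    return (statistics.mean(p[0] for p in points),
            statistics.mean(p[1] for p in points))

print("first fixation :", mean_landing_position(trials, 0))   # slightly left of centre
print("second fixation:", mean_landing_position(trials, 1))   # close to centre
```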

355 citations


"Unraveling Human Perception of Faci..." refers background in this paper

  • ...Hsiao and Cottrell [7] showcased that while processing stimuli for face recognition, the initial fixations are more directed towards the center of the face stimulus, i.e. the nose area....


  • ...Several psychological and perception based studies have analyzed the performance of human participants for the task of age estimation [4, 5] and face verification [6, 7]....


Proceedings ArticleDOI
04 Jun 2013
TL;DR: This work proposes a hierarchical approach for automatic age estimation and provides an analysis of how aging influences individual facial components; experimental results show that the eyes and nose are more informative than the other facial components in automatic age estimation.
Abstract: There has been a growing interest in automatic age estimation from facial images due to a variety of potential applications in law enforcement, security control, and human-computer interaction. However, despite advances in automatic age estimation, it remains a challenging problem. This is because the face aging process is determined not only by intrinsic factors, e.g. genetic factors, but also by extrinsic factors, e.g. lifestyle, expression, and environment. As a result, different people with the same age can have quite different appearances due to different rates of facial aging. We propose a hierarchical approach for automatic age estimation, and provide an analysis of how aging influences individual facial components. Experimental results on the FG-NET, MORPH Album2, and PCSO databases show that eyes and nose are more informative than the other facial components in automatic age estimation. We also study the ability of humans to estimate age using data collected via crowdsourcing, and show that the cumulative score (CS) within 5-year mean absolute error (MAE) of our method is better than the age estimates provided by humans.
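
The two evaluation measures named here, mean absolute error and the cumulative score within a 5-year error, have standard definitions; below is a small sketch with made-up ages, not the paper's data.

```python
# Illustrative sketch of the evaluation metrics named above (MAE and the
# cumulative score within a 5-year error), written from their standard
# definitions; the example ages are made up.
def mae(true_ages, predicted_ages):
    """Mean absolute error in years."""
    return sum(abs(t - p) for t, p in zip(true_ages, predicted_ages)) / len(true_ages)

def cumulative_score(true_ages, predicted_ages, threshold=5):
    """Fraction of predictions whose absolute error does not exceed `threshold` years."""
    hits = sum(abs(t - p) <= threshold for t, p in zip(true_ages, predicted_ages))
    return hits / len(true_ages)

true_ages      = [23, 41, 35, 60, 18]
predicted_ages = [25, 47, 33, 52, 19]
print(f"MAE = {mae(true_ages, predicted_ages):.2f} years")    # 3.80 years
print(f"CS(5) = {cumulative_score(true_ages, predicted_ages):.0%}")  # 60%
```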

246 citations


"Unraveling Human Perception of Faci..." refers background or result in this paper

  • ...Several psychological and perception based studies have analyzed the performance of human participants for the task of age estimation [4, 5] and face verification [6, 7]....


  • ...This result is consistent with existing studies that highlight the significance of the eyes in predicting the facial age [5, 4]....


Journal ArticleDOI
TL;DR: It is proposed that gaze behavior while determining a person’s identity, emotional state, or gender can be explained as an adaptive brain strategy to learn eye movements that optimize performance in these evolutionarily important perceptual tasks.
Abstract: When viewing a human face, people often look toward the eyes. Maintaining good eye contact carries significant social value and allows for the extraction of information about gaze direction. When identifying faces, humans also look toward the eyes, but it is unclear whether this behavior is solely a byproduct of the socially important eye movement behavior or whether it has functional importance in basic perceptual tasks. Here, we propose that gaze behavior while determining a person’s identity, emotional state, or gender can be explained as an adaptive brain strategy to learn eye movement plans that optimize performance in these evolutionarily important perceptual tasks. We show that humans move their eyes to locations that maximize perceptual performance determining the identity, gender, and emotional state of a face. These optimal fixation points, which differ moderately across tasks, are predicted correctly by a Bayesian ideal observer that integrates information optimally across the face but is constrained by the decrease in resolution and sensitivity from the fovea toward the visual periphery (foveated ideal observer). Neither a model that disregards the foveated nature of the visual system and makes fixations on the local region with maximal information, nor a model that makes center-of-gravity fixations correctly predict human eye movements. Extension of the foveated ideal observer framework to a large database of real-world faces shows that the optimality of these strategies generalizes across the population. These results suggest that the human visual system optimizes face recognition performance through guidance of eye movements not only toward but, more precisely, just below the eyes.
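
A highly simplified sketch of the underlying idea follows, not the published foveated ideal observer: pick the fixation that maximizes face information weighted by a sensitivity that decays with eccentricity. The landmark positions, information weights, and fall-off constant are illustrative assumptions.

```python
# Highly simplified sketch (not the published model): choose the fixation point
# that maximizes face information weighted by a sensitivity that falls off with
# distance from the fovea. The information map and fall-off constant are assumptions.
import math

# Hypothetical informativeness of face landmarks: (x, y, information weight).
# The eyes sit on the line y = 1.0; the face midline is x = 0.
info_map = [
    (-0.5,  1.0, 0.35),  # left eye
    ( 0.5,  1.0, 0.35),  # right eye
    ( 0.0,  0.3, 0.15),  # nose
    ( 0.0, -0.5, 0.10),  # mouth
    ( 0.0,  1.8, 0.05),  # forehead
]

def foveated_sensitivity(eccentricity, falloff=1.5):
    """Sensitivity decays with distance from the current fixation (the fovea)."""
    return math.exp(-falloff * eccentricity)

def expected_information(fixation):
    fx, fy = fixation
    return sum(w * foveated_sensitivity(math.hypot(x - fx, y - fy))
               for x, y, w in info_map)

# Candidate fixations along the vertical midline of the face.
candidates = [(0.0, y / 10) for y in range(-10, 21)]
best = max(candidates, key=expected_information)
print("best fixation:", best)  # (0.0, 0.9): just below the eye line at y = 1.0
```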

234 citations


"Unraveling Human Perception of Faci..." refers background in this paper

  • ...Similar results have been shown for traditional face verification task, where the binocular region is the most frequently visited region [6, 23, 24]....


  • ...Several psychological and perception based studies have analyzed the performance of human participants for the task of age estimation [4, 5] and face verification [6, 7]....


Journal ArticleDOI
TL;DR: Eye movements appear to follow both stimulus-driven and goal-driven perceptual strategies when decoding emotional information from a face.
Abstract: When distinguishing whether a face displays a certain emotion, some regions of the face may contain more useful information than others. Here we ask whether people differentially attend to distinct regions of a face when judging different emotions. Experiment 1 measured eye movements while participants discriminated between emotional (joy, anger, fear, sadness, shame, and disgust) and neutral facial expressions. Participant eye movements primarily fell in five distinct regions (eyes, upper nose, lower nose, upper lip, nasion). Distinct fixation patterns emerged for each emotion, such as a focus on the lips for joyful faces and a focus on the eyes for sad faces. These patterns were strongest for emotional faces but were still present when viewers sought evidence of emotion within neutral faces, indicating a goal-driven influence on eye-gaze patterns. Experiment 2 verified that these fixation patterns tended to reflect attention to the most diagnostic regions of the face for each emotion. Eye movements appear to follow both stimulus-driven and goal-driven perceptual strategies when decoding emotional information from a face.
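
As an illustration of the kind of summary behind such findings, here is a small sketch (with made-up fixation counts, not the study's data) computing the share of fixations each face region receives per emotion condition:

```python
# Illustrative sketch (made-up counts, not the study's data): computing the
# share of fixations each face region receives per emotion condition, the kind
# of summary that reveals emotion-specific fixation patterns.
from collections import Counter

REGIONS = ["eyes", "upper nose", "lower nose", "upper lip", "nasion"]

# Hypothetical region labels of fixations, grouped by emotion condition.
fixations_by_emotion = {
    "joy":     ["upper lip", "upper lip", "lower nose", "eyes"],
    "sadness": ["eyes", "eyes", "upper nose", "nasion", "eyes"],
}

def region_proportions(labels):
    counts = Counter(labels)
    total = sum(counts.values())
    return {region: counts.get(region, 0) / total for region in REGIONS}

for emotion, labels in fixations_by_emotion.items():
    props = region_proportions(labels)
    top = max(props, key=props.get)
    print(f"{emotion}: most attended region = {top} ({props[top]:.0%})")
```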

211 citations


"Unraveling Human Perception of Faci..." refers background in this paper

  • ...Similar results have been shown for traditional face verification task, where the binocular region is the most frequently visited region [6, 23, 24]....
