Open Access Proceedings Article (DOI)

What Makes a Gesture a Gesture? Neural Signatures Involved in Gesture Recognition

TL;DR
The results suggest that coordinated activity in visual and motor cortices is sensitive to motion trajectories during gesture observation, consistent with the proposal that inflection points operate as placeholders in gesture recognition.
Abstract
Previous work in the area of gesture production has assumed that machines can replicate humanlike gestures by connecting a bounded set of salient points in the motion trajectory. Those inflection points were hypothesized to also be cognitively salient. The purpose of this paper is to validate that claim using electroencephalography (EEG). That is, this paper attempts to find neural signatures of gestures (also referred to as placeholders) in human cognition, which facilitate the understanding, learning, and repetition of gestures. Further, it discusses whether there is a direct mapping between the placeholders and kinematically salient points in the gesture trajectories. These are expressed as relationships between inflection points in the gesture trajectories and oscillatory mu rhythms (8-12 Hz) in the EEG, obtained by correlating fluctuations in mu power during gesture observation with the salient motion points found for each gesture. Peaks in the EEG signal at central electrodes (motor cortex; C3/Cz/C4) and occipital electrodes (visual cortex; O1/Oz/O2) were used to isolate the salient events within each gesture. We found that a linear model predicting mu peaks from motion inflections fits the data well: increases in EEG power were detected 380 ms and 500 ms after inflection points at occipital and central electrodes, respectively. These results suggest that coordinated activity in visual and motor cortices is sensitive to motion trajectories during gesture observation, consistent with the proposal that inflection points operate as placeholders in gesture recognition.
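To make the described pipeline concrete, below is a minimal sketch in Python (assuming NumPy/SciPy) of the three steps the abstract outlines: detecting inflection points as acceleration sign changes in a motion trajectory, extracting mu-band (8-12 Hz) power peaks from an EEG channel, and fitting a linear model that predicts mu-peak times from inflection times. All function names, sampling rates, and thresholds are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the analysis described in the abstract; parameter
# values (sampling rates, filter order, peak spacing) are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, find_peaks

FS_EEG = 256      # EEG sampling rate in Hz (assumed)
FS_MOTION = 120   # motion-capture sampling rate in Hz (assumed)

def mu_band_power(eeg, fs=FS_EEG, band=(8.0, 12.0)):
    """Band-pass one EEG channel to the mu band and return its power envelope."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    mu = filtfilt(b, a, eeg)
    return np.abs(hilbert(mu)) ** 2   # instantaneous power via analytic signal

def inflection_times(trajectory, fs=FS_MOTION):
    """Approximate inflection points as sign changes of the acceleration
    along any spatial axis of an (n_samples, 3) trajectory."""
    accel = np.gradient(np.gradient(trajectory, axis=0), axis=0)
    signs = np.signbit(accel)
    flips = np.any(signs[1:] != signs[:-1], axis=1)
    return np.flatnonzero(flips) / fs                 # times in seconds

def mu_peak_times(eeg, fs=FS_EEG):
    """Times of local maxima in mu power, at least 100 ms apart (assumed)."""
    power = mu_band_power(eeg, fs)
    peaks, _ = find_peaks(power, distance=int(0.1 * fs))
    return peaks / fs

def fit_lag_model(inflections, mu_peaks):
    """Least-squares fit mu_peak = slope * inflection + lag, pairing each
    inflection with the nearest subsequent mu peak (simplifying assumption)."""
    pairs = [(t, mu_peaks[mu_peaks >= t][0])
             for t in inflections if np.any(mu_peaks >= t)]
    x, y = map(np.array, zip(*pairs))
    slope, lag = np.polyfit(x, y, 1)
    return slope, lag
```

Under these assumptions, a slope near 1 with an intercept in the 0.38-0.50 s range would correspond to the occipital and central lags reported above; cross-correlating the two event trains would be a natural robustness check on the pairing heuristic.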


Citations
Journal Article (DOI)

A Human-Centered Approach to One-Shot Gesture Learning

TL;DR: The focus of this work is on the process that leads to the realization of a gesture, rather than on the gesture itself, and the strategy is to generate a data set of realistic samples based on features extracted from a single gesture sample.
Proceedings Article (DOI)

One-Shot Gesture Recognition: One Step Towards Adaptive Learning

TL;DR: The framework presented in this work focuses on learning the process that leads to gesture generation, rather than treating the gestures as the outcomes of a stochastic process only, and achieves this by leveraging kinematic and cognitive aspects of human interaction.
Dissertation (DOI)

Assessing Collaborative Physical Tasks via Gestural Analysis using the "MAGIC" Architecture

TL;DR: The results indicate that the proposed framework to represent, compare, and assess gestures’ morphology, semantics, and pragmatics acts as a good estimator of task understanding, and that it provides insights in scenarios where other proxies show inconsistencies.
Proceedings Article (DOI)

Towards academic affect modeling through experimental hybrid gesture recognition algorithm

TL;DR: The experimental results show that head poses, when properly modeled, can be used to define affect as applied to examination behavior; the divide-and-conquer algorithm for object detection, using Haar cascade feature extraction and HMM classification, achieved an accuracy of 78.77%.
Journal Article (DOI)

Introducing the NEMO-Lowlands iconic gesture dataset, collected through a gameful human–robot interaction

TL;DR: A novel dataset of iconic gestures is introduced, together with a publicly available robot-based elicitation method that consists of playing a game of charades with a humanoid robot. The dataset can be used for research into human gesturing behavior and for developing the gesture recognition and production capabilities of robots and virtual agents.
References
Journal Article

On the complexity of best-arm identification in multi-armed bandit models

TL;DR: This work introduces generic notions of complexity for the two dominant frameworks considered in the literature, the fixed-budget and fixed-confidence settings, and provides the first known distribution-dependent lower bound on the complexity that involves information-theoretic quantities and holds when m ≥ 1 under general assumptions.
Journal Article (DOI)

The anthropomorphic brain: the mirror neuron system responds to human and robotic actions.

TL;DR: The findings suggest that the mirror neuron system could contribute to the understanding of a wider range of actions than previously assumed, and that the goal of an action might be more important for mirror activations than the way in which the action is performed.
Journal Article (DOI)

Mu rhythm modulation during observation of an object-directed grasp

TL;DR: The human electroencephalographic mu rhythm is suppressed during the observation of actions performed by other persons, an effect that may be functionally related to the behaviour of so-called mirror neurons observed in area F5 of nonhuman primates.
Journal Article (DOI)

Gesturing makes learning last

TL;DR: Manipulating children's gesture during instruction in a new mathematical concept showed that requiring children to gesture while learning the new concept helped them retain the knowledge they had gained during instruction, whereas requiring them only to speak had no such effect on solidifying learning.
Journal Article (DOI)

When Language Meets Action: The Neural Integration of Gesture and Speech

TL;DR: Findings provide direct evidence that action and language processing share a high-level neural integration system.