
Sander Koelstra

Researcher at Queen Mary University of London

Publications -  8
Citations -  3844

Sander Koelstra is an academic researcher from Queen Mary University of London. The author has contributed to research topics including feature extraction and image registration. The author has an h-index of 8 and has co-authored 8 publications receiving 2822 citations.

Papers
Journal ArticleDOI

DEAP: A Database for Emotion Analysis ;Using Physiological Signals

TL;DR: A multimodal dataset for the analysis of human affective states is presented, along with a novel method for stimulus selection using retrieval by affective tags from the last.fm website, video highlight detection, and an online assessment tool.
Journal ArticleDOI

A Dynamic Texture-Based Approach to Recognition of Facial Actions and Their Temporal Models

TL;DR: A dynamic texture-based approach to the recognition of facial Action Units (AUs, atomic facial gestures) and their temporal models (i.e., sequences of temporal segments: neutral, onset, apex, and offset) in near-frontal-view face videos is proposed.
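The temporal model in this summary segments an Action Unit activation into neutral, onset, apex, and offset phases. As a rough illustration only (a toy rule-based sketch, not the paper's learned dynamic-texture models), frames can be labeled from a hypothetical AU-intensity curve by thresholding its value and slope:

```python
import numpy as np

def temporal_segments(intensity, rise=0.02, active=0.2):
    """Label each frame of an AU-intensity curve as neutral/onset/apex/offset.
    Simplified heuristic: low intensity -> neutral, rising -> onset,
    falling -> offset, high and stable -> apex."""
    diff = np.gradient(intensity)  # per-frame rate of change
    labels = []
    for v, d in zip(intensity, diff):
        if v < active:
            labels.append("neutral")
        elif d > rise:
            labels.append("onset")
        elif d < -rise:
            labels.append("offset")
        else:
            labels.append("apex")
    return labels

# Hypothetical intensity curve: flat, rise, plateau, fall
curve = np.concatenate([np.zeros(5), np.linspace(0, 1, 5),
                        np.ones(5), np.linspace(1, 0, 5)])
labels = temporal_segments(curve)
```

The thresholds `rise` and `active` are illustrative parameters, not values from the paper.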
Journal ArticleDOI

Fusion of facial expressions and EEG for implicit affective tagging

TL;DR: This work proposes to involve the user and investigate methods for implicit tagging, wherein users' responses to the interaction with the multimedia content are analyzed in order to generate descriptive tags.
Book ChapterDOI

Single trial classification of EEG and peripheral physiological signals for recognition of emotions induced by music videos

TL;DR: This work presents promising results on the classification of emotions induced by watching music videos and shows robust correlations between users' self-assessments of arousal and valence and the frequency powers of their EEG activity.
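The frequency powers mentioned here are typically band powers (e.g. theta, alpha, beta) of the EEG spectrum. A minimal sketch of extracting such a feature with Welch's method, on a synthetic signal rather than real EEG (the 128 Hz rate and the band edges are common conventions, not details taken from this summary):

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, band):
    """Mean power spectral density of `signal` within a frequency band (Hz),
    estimated with Welch's method."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].mean()

fs = 128  # Hz; an assumed sampling rate for this sketch
rng = np.random.default_rng(0)
t = np.arange(0, 60, 1 / fs)
# Synthetic "EEG": a strong 10 Hz (alpha-band) oscillation plus noise
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

alpha = band_power(eeg, fs, (8, 13))   # contains the injected 10 Hz component
beta = band_power(eeg, fs, (13, 30))
```

Per-trial features like these can then be correlated with self-assessed arousal and valence ratings, or fed to a classifier.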
Proceedings ArticleDOI

Non-rigid registration using free-form deformations for recognition of facial actions and their temporal dynamics

TL;DR: An appearance-based approach to recognition of facial action units (AUs) and their temporal segments in frontal-view face videos that uses non-rigid registration with free-form deformations to determine motion in the face region of an input video.
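In a free-form deformation, a sparse grid of control points is interpolated into a dense displacement field over the image. A simplified sketch of that core idea, using bilinear interpolation as a stand-in for the cubic B-spline basis usually used in FFD registration (all shapes and values here are hypothetical):

```python
import numpy as np

def ffd_displacement(control_disp, image_shape):
    """Dense (H, W, 2) displacement field from a (gh, gw, 2) control-point
    grid, via bilinear interpolation. A simplified stand-in for the cubic
    B-spline basis of true free-form deformations."""
    gh, gw = control_disp.shape[:2]
    H, W = image_shape
    ys = np.linspace(0, gh - 1, H)          # pixel rows in grid coordinates
    xs = np.linspace(0, gw - 1, W)          # pixel cols in grid coordinates
    y0 = np.clip(np.floor(ys).astype(int), 0, gh - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, gw - 2)
    wy = (ys - y0)[:, None, None]           # interpolation weights
    wx = (xs - x0)[None, :, None]
    c00 = control_disp[y0][:, x0]           # four surrounding control points
    c01 = control_disp[y0][:, x0 + 1]
    c10 = control_disp[y0 + 1][:, x0]
    c11 = control_disp[y0 + 1][:, x0 + 1]
    top = c00 * (1 - wx) + c01 * wx
    bot = c10 * (1 - wx) + c11 * wx
    return top * (1 - wy) + bot * wy

# Hypothetical 3x3 control grid shifting the whole image 2 px in x
grid = np.zeros((3, 3, 2))
grid[..., 1] = 2.0
field = ffd_displacement(grid, (64, 64))
```

In actual registration, the control-point displacements are optimized so that the warped source frame matches the target frame; here the grid is fixed purely for illustration.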