Stefanos Zafeiriou

Researcher at Imperial College London

Publications - 406
Citations - 26443

Stefanos Zafeiriou is an academic researcher at Imperial College London. His research spans topics including facial recognition systems and computer science. He has an h-index of 60 and has co-authored 375 publications receiving 17993 citations. His previous affiliations include Huawei and the Aristotle University of Thessaloniki.

Papers
Journal Article

300 Faces In-The-Wild Challenge

TL;DR: This paper proposes a semi-automatic annotation technique that was employed to re-annotate most existing facial databases under a unified protocol, and presents the 300 Faces In-The-Wild Challenge (300-W), the first facial landmark localization challenge, which was organized twice, in 2013 and 2015.
Proceedings Article

Robust Discriminative Response Map Fitting with Constrained Local Models

TL;DR: A novel discriminative regression-based approach for the Constrained Local Models (CLMs) framework, referred to as the Discriminative Response Map Fitting (DRMF) method, is proposed and shows impressive performance in the generic face fitting scenario.
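
As a rough, non-authoritative sketch of the general idea behind regressing shape updates from CLM response maps: the patch experts, linear shape model, and regressor below are random, hypothetical stand-ins (response_maps, shape_model, and regressor are invented names), not the trained DRMF components; the loop only illustrates the alternation between evaluating response maps and regressing a shape-parameter update.

```python
# Minimal sketch of response-map-driven fitting for a Constrained Local Model.
# Everything here is a toy stand-in, not the DRMF implementation.
import numpy as np

N_LANDMARKS = 68       # assumed landmark count
PATCH = 11             # assumed response-map size (PATCH x PATCH per landmark)
N_PARAMS = 20          # assumed dimensionality of the shape parameter vector

def response_maps(image, shape):
    """Stand-in patch experts: one response map per landmark.
    A real CLM would evaluate learned local detectors around each point."""
    return np.random.rand(N_LANDMARKS, PATCH, PATCH)

def regress_update(maps, regressor):
    """Discriminative step: map concatenated response maps to a
    shape-parameter update, here with a plain linear regressor."""
    features = maps.reshape(-1)                  # flatten all response maps
    return regressor @ features                  # predicted parameter update

def fit(image, init_params, shape_model, regressor, n_iters=5):
    """Iteratively refine shape parameters from response-map evidence."""
    params = init_params.copy()
    for _ in range(n_iters):
        shape = shape_model(params)              # current landmark positions
        maps = response_maps(image, shape)
        params += regress_update(maps, regressor)
    return shape_model(params)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy linear shape model: mean shape plus a linear basis.
    mean_shape = rng.standard_normal(2 * N_LANDMARKS)
    basis = rng.standard_normal((2 * N_LANDMARKS, N_PARAMS))
    shape_model = lambda p: (mean_shape + basis @ p).reshape(N_LANDMARKS, 2)
    # Random regressor purely to keep the sketch self-contained and runnable.
    regressor = rng.standard_normal((N_PARAMS, N_LANDMARKS * PATCH * PATCH)) * 1e-3
    landmarks = fit(image=None, init_params=np.zeros(N_PARAMS),
                    shape_model=shape_model, regressor=regressor)
    print(landmarks.shape)   # (68, 2)
```

In the actual method the regressor is trained discriminatively; here it is random only so the sketch runs without any data.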
Proceedings Article

AgeDB: The First Manually Collected, In-the-Wild Age Database

TL;DR: This paper presents the first, to the best of the authors' knowledge, manually collected "in-the-wild" age database, dubbed AgeDB, containing images annotated with noise-free labels accurate to the year, which renders AgeDB suitable for experiments on age-invariant face verification, age estimation and face age progression "in the wild".
Journal Article

End-to-End Multimodal Emotion Recognition Using Deep Neural Networks

TL;DR: This work proposes an emotion recognition system that combines the auditory and visual modalities: a convolutional neural network extracts features from the speech signal, while a 50-layer deep residual network is used for the visual modality.
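
A minimal sketch of a two-branch audio-visual network in the spirit of this summary, assuming PyTorch and torchvision are available; the layer sizes, the raw-waveform 1-D CNN, and the fusion head (AudioVisualEmotionNet and its parameters are hypothetical) are illustrative choices, not the paper's exact architecture.

```python
# Two-branch sketch: 1-D CNN over speech, ResNet-50 over face frames,
# concatenated features feeding a small emotion-prediction head.
import torch
import torch.nn as nn
from torchvision.models import resnet50

class AudioVisualEmotionNet(nn.Module):
    def __init__(self, n_outputs=2):             # e.g. arousal/valence
        super().__init__()
        # Audio branch: small 1-D CNN over the raw waveform (illustrative).
        self.audio = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=80, stride=4), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        # Visual branch: 50-layer residual network as frame feature extractor.
        backbone = resnet50(weights=None)
        backbone.fc = nn.Identity()               # keep the 2048-d pooled features
        self.visual = backbone
        # Fusion head over the concatenated audio + visual features.
        self.head = nn.Sequential(nn.Linear(64 + 2048, 128), nn.ReLU(),
                                  nn.Linear(128, n_outputs))

    def forward(self, waveform, frame):
        a = self.audio(waveform)                  # (B, 64)
        v = self.visual(frame)                    # (B, 2048)
        return self.head(torch.cat([a, v], dim=1))

if __name__ == "__main__":
    net = AudioVisualEmotionNet()
    out = net(torch.randn(2, 1, 16000), torch.randn(2, 3, 224, 224))
    print(out.shape)                              # torch.Size([2, 2])
```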
Proceedings Article

Incremental Face Alignment in the Wild

TL;DR: It is shown that it is possible to automatically construct robust, discriminative, person- and imaging-condition-specific models "in-the-wild" that outperform state-of-the-art generic face alignment strategies.
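
As a generic, hedged illustration of updating a model incrementally as new person-specific samples arrive (not the paper's cascaded-regression formulation), the sketch below accumulates the sufficient statistics of a ridge-regularised linear regressor one sample at a time; IncrementalLinearRegressor and the toy features/targets are hypothetical.

```python
# Incremental (online) update of a ridge-regularised linear regressor via
# accumulated sufficient statistics; a generic stand-in for online refinement.
import numpy as np

class IncrementalLinearRegressor:
    """Keeps X^T X and X^T y so the solution can be refreshed after
    every new (feature, target) pair without retraining from scratch."""
    def __init__(self, n_features, n_targets, reg=1e-3):
        self.xtx = reg * np.eye(n_features)
        self.xty = np.zeros((n_features, n_targets))

    def update(self, x, y):
        x = x.reshape(-1, 1)
        self.xtx += x @ x.T
        self.xty += x @ y.reshape(1, -1)

    def predict(self, x):
        w = np.linalg.solve(self.xtx, self.xty)
        return x @ w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy problem: features stand in for local appearance descriptors,
    # targets stand in for landmark-displacement updates.
    true_w = rng.standard_normal((16, 4))
    model = IncrementalLinearRegressor(n_features=16, n_targets=4)
    for _ in range(200):                          # stream of new "frames"
        x = rng.standard_normal(16)
        y = x @ true_w + 0.01 * rng.standard_normal(4)
        model.update(x, y)                        # person-specific refinement
    x_test = rng.standard_normal(16)
    print(np.allclose(model.predict(x_test), x_test @ true_w, atol=0.1))  # True
```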