
Irene Kotsia

Researcher at Middlesex University

Publications -  61
Citations -  5327

Irene Kotsia is an academic researcher at Middlesex University. She has contributed to research on facial expression and facial recognition systems, has an h-index of 24, and has co-authored 61 publications receiving 3405 citations. Her previous affiliations include the Aristotle University of Thessaloniki and the University of London.

Papers
Proceedings ArticleDOI

GANFIT: Generative Adversarial Network Fitting for High Fidelity 3D Face Reconstruction

TL;DR: In this paper, the power of Generative Adversarial Networks (GANs) and Deep Convolutional Neural Networks (DCNNs) is harnessed to reconstruct the facial texture and shape from single images.
Posted Content

Generating faces for affect analysis.

TL;DR: This paper presents a novel approach for synthesizing facial affect in terms of the six basic expressions: a set of 3D facial meshes produced from the 4DFAB database is used to build a blendshape model that generates the new facial affect.
Proceedings ArticleDOI

Facial Expression Recognition in Videos using a Novel Multi-Class Support Vector Machines Variant

TL;DR: A novel class of support vector machine (SVM) is introduced for facial expression recognition; the proposed classifier incorporates statistical information about the classes under examination into the classical SVM formulation.
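The abstract above describes a multi-class SVM applied to expression classes. As a minimal sketch of the standard multi-class SVM setup it builds on (the paper's variant additionally folds per-class statistics into the formulation, which off-the-shelf libraries do not implement), one might write the following; the feature vectors and labels here are synthetic stand-ins:

```python
# Hedged sketch: multi-class SVM classification over six expression classes.
# Features and labels are synthetic; real systems would extract facial
# features (e.g. landmark displacements) from video frames.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_classes = 6                      # the six basic expressions
samples_per_class = 20
feature_dim = 20

# Synthetic, well-separated feature clusters, one per expression class.
y = np.repeat(np.arange(n_classes), samples_per_class)
X = rng.normal(size=(n_classes * samples_per_class, feature_dim)) + y[:, None]

# Classical multi-class SVM (one-vs-rest decision function, RBF kernel).
clf = SVC(kernel="rbf", decision_function_shape="ovr")
clf.fit(X, y)
acc = clf.score(X, y)
```

The paper's contribution is precisely what this sketch omits: modifying the SVM optimization itself to account for class statistics rather than treating all classes identically.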
Proceedings ArticleDOI

Real time facial expression recognition from image sequences using support vector machines

TL;DR: In this paper, a real-time method is proposed for facial expression classification in video sequences; the user manually places some of the Candide grid nodes on the face depicted in the first frame.
Proceedings ArticleDOI

Texture and Shape Information Fusion for Facial Action Unit Recognition

TL;DR: A novel method that fuses texture and shape information to achieve Facial Action Unit (FAU) recognition from video sequences is proposed; it achieves 92.1% accuracy when recognizing the 17 FAUs that are responsible for facial expression development.
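A common way to realize texture-and-shape fusion of the kind the abstract describes is feature-level fusion: concatenating the two descriptor vectors before classification. The sketch below illustrates that idea only; the descriptors, labels, and classifier are hypothetical stand-ins, not the paper's actual pipeline:

```python
# Hedged sketch: feature-level fusion of texture and shape descriptors
# for a binary FAU present/absent decision. All data is synthetic.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n = 100
texture = rng.normal(size=(n, 32))   # stand-in for texture features (e.g. filter responses)
shape = rng.normal(size=(n, 16))     # stand-in for shape features (e.g. grid-node displacements)
y = rng.integers(0, 2, size=n)       # FAU present (1) / absent (0)
texture[y == 1] += 1.0               # make the toy data separable

fused = np.hstack([texture, shape])  # feature-level fusion by concatenation

clf = SVC().fit(fused, y)
acc = clf.score(fused, y)
```

Concatenation is the simplest fusion strategy; decision-level fusion (training separate classifiers per modality and combining their outputs) is the usual alternative.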