Patrick Lucey
Researcher at Disney Research
Publications - 126
Citations - 7846
Patrick Lucey is an academic researcher at Disney Research whose work spans facial recognition systems and audio-visual speech recognition. He has an h-index of 31 and has co-authored 126 publications receiving 6527 citations. His previous affiliations include the University of Pittsburgh and Queensland University of Technology.
Papers
Proceedings ArticleDOI
The Extended Cohn-Kanade Dataset (CK+): A complete dataset for action unit and emotion-specified expression
TL;DR: The Cohn-Kanade (CK+) database is presented, with baseline results using Active Appearance Models (AAMs) and a linear support vector machine (SVM) classifier using a leave-one-out subject cross-validation for both AU and emotion detection for the posed data.
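The CK+ baseline pairs AAM-derived features with a linear SVM evaluated by leave-one-subject-out cross-validation, so no subject appears in both training and test folds. A minimal sketch of that evaluation protocol, using scikit-learn with synthetic stand-in features and labels (all data and dimensions here are invented, not from the paper):

```python
# Illustrative sketch (not the authors' code): leave-one-subject-out
# cross-validation with a linear SVM. Features, labels, and subject IDs
# are random placeholders for AAM features and emotion labels.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_samples, n_features, n_subjects = 120, 20, 10
X = rng.normal(size=(n_samples, n_features))    # stand-in for AAM features
y = rng.integers(0, 2, size=n_samples)          # stand-in binary labels
subjects = rng.integers(0, n_subjects, size=n_samples)

logo = LeaveOneGroupOut()                       # one fold per subject
accuracies = []
for train_idx, test_idx in logo.split(X, y, groups=subjects):
    clf = LinearSVC().fit(X[train_idx], y[train_idx])
    accuracies.append(clf.score(X[test_idx], y[test_idx]))

print(f"mean accuracy over {len(accuracies)} held-out subjects: "
      f"{np.mean(accuracies):.2f}")
```

Grouping the folds by subject rather than by sample is what makes the protocol "subject-independent": the classifier is always scored on faces it has never seen during training.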
Proceedings ArticleDOI
Painful data: The UNBC-McMaster shoulder pain expression archive database
TL;DR: A major factor hindering the deployment of fully functional automatic facial expression detection systems is the lack of representative data; this archive provides enough data to build robust models and achieve high performance.
Journal ArticleDOI
Automatically Detecting Pain in Video Through Facial Action Units
Patrick Lucey,Jeffrey F. Cohn,Iain Matthews,Simon Lucey,Sridha Sridharan,Jessica M. Howlett,Kenneth M. Prkachin +6 more
TL;DR: An active appearance model (AAM)-based system is proposed that automatically detects the frames of a video in which a patient is in pain. The system can handle patients' head movements and achieves significant improvements in both AU and pain detection performance over current state-of-the-art approaches, which use similarity-normalized appearance features only.
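The contrast drawn above is between using similarity-normalized appearance features alone and using the richer set of features an AAM yields. A hypothetical sketch of the underlying idea, training a per-frame classifier on concatenated shape and appearance features (all dimensions and data below are invented for illustration; the real features come from an AAM fit to each video frame):

```python
# Hypothetical sketch: per-frame pain scoring from AAM-derived features.
# Landmark and appearance dimensions are invented placeholders.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n_frames = 200
shape_feats = rng.normal(size=(n_frames, 132))  # e.g. 66 landmarks x (x, y)
app_feats = rng.normal(size=(n_frames, 50))     # appearance parameters
X = np.hstack([shape_feats, app_feats])         # combine both cues per frame
y = rng.integers(0, 2, size=n_frames)           # 1 = pain frame (invented)

clf = LinearSVC().fit(X, y)
frame_scores = clf.decision_function(X)         # per-frame pain score
pain_frames = np.flatnonzero(frame_scores > 0)  # frames flagged as pain
print(pain_frames[:5])
```

Scoring every frame independently is what lets such a system localize pain within a video rather than labeling the sequence as a whole.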
Journal ArticleDOI
Painful monitoring: Automatic pain monitoring using the UNBC-McMaster shoulder pain expression archive database
TL;DR: To promote and facilitate research into pain and augment current datasets, a portion of this database is made publicly available: 200 sequences across 25 subjects, containing more than 48,000 coded frames of spontaneous facial expressions with 66-point AAM-tracked facial feature landmarks.