scispace - formally typeset

Jason Saragih

Researcher at Facebook

Publications - 84
Citations - 8254

Jason Saragih is an academic researcher from Facebook. The author has contributed to research on topics including Rendering (computer graphics) and Computer science. The author has an h-index of 23, co-authored 73 publications receiving 5916 citations. Previous affiliations of Jason Saragih include the Australian National University and the Commonwealth Scientific and Industrial Research Organisation.

Papers
Proceedings ArticleDOI

The Extended Cohn-Kanade Dataset (CK+): A complete dataset for action unit and emotion-specified expression

TL;DR: Presents the Extended Cohn-Kanade (CK+) database, with baseline results from Active Appearance Models (AAMs) and a linear support vector machine (SVM) classifier, evaluated using leave-one-subject-out cross-validation for both action unit (AU) and emotion detection on the posed data.
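The leave-one-subject-out protocol described above can be sketched in a few lines. This toy example is not from the paper: the data is hypothetical and, for brevity, the linear SVM is swapped for a nearest-centroid classifier; the hold-one-out evaluation loop is the point.

```python
import numpy as np

# Toy data standing in for per-subject AAM features (hypothetical values).
X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 1.0]])
y = np.array([0, 0, 1, 1])  # two expression classes

correct = 0
for i in range(len(X)):
    mask = np.arange(len(X)) != i          # hold out one subject
    X_tr, y_tr = X[mask], y[mask]
    # Nearest-centroid classifier (a stand-in for the paper's linear SVM).
    centroids = [X_tr[y_tr == c].mean(axis=0) for c in (0, 1)]
    pred = int(np.argmin([np.linalg.norm(X[i] - c) for c in centroids]))
    correct += int(pred == y[i])

accuracy = correct / len(X)
```

Because every subject is held out exactly once, the reported accuracy reflects generalization to unseen subjects rather than memorization.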
Journal ArticleDOI

Deformable Model Fitting by Regularized Landmark Mean-Shift

TL;DR: This work proposes a principled optimization strategy in which nonparametric representations of the landmark likelihoods are maximized within a hierarchy of smoothed estimates, and is shown to outperform several common existing methods on the task of generic face fitting.
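At the core of this line of work is the mean-shift update, which iteratively moves an estimate toward a mode of a kernel density. A minimal 1-D sketch with a Gaussian kernel and hypothetical data follows; the paper's regularized, subspace-constrained formulation over all landmarks jointly is considerably more involved.

```python
import numpy as np

def mean_shift_step(samples, x, bandwidth):
    """One Gaussian-kernel mean-shift step: kernel-weighted mean of the samples."""
    w = np.exp(-0.5 * ((samples - x) / bandwidth) ** 2)
    return float(np.sum(w * samples) / np.sum(w))

# Hypothetical candidate positions for a single landmark, clustered near 5.0.
samples = np.array([4.8, 4.9, 5.0, 5.1, 5.2])
x = 3.0                                      # poor initial estimate
for _ in range(50):
    x = mean_shift_step(samples, x, bandwidth=0.5)
# x converges toward the density mode near 5.0
```

Each step replaces the estimate with a kernel-weighted mean, so the iteration climbs the smoothed density toward its nearest mode.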
Proceedings ArticleDOI

PIFuHD: Multi-Level Pixel-Aligned Implicit Function for High-Resolution 3D Human Digitization

TL;DR: In this paper, a multi-level architecture is proposed to estimate high-resolution 3D human shape, where a coarse level observes the whole image at lower resolution and focuses on holistic reasoning, and a fine level estimates highly detailed geometry by observing higher-resolution crops.
Proceedings ArticleDOI

Face alignment through subspace constrained mean-shifts

TL;DR: A principled optimization strategy is proposed where a nonparametric representation of the landmark distributions is maximized within a hierarchy of smoothed estimates, and is shown to outperform other existing methods on the task of generic face fitting.
Journal ArticleDOI

Neural volumes: learning dynamic renderable volumes from images

TL;DR: This work presents a learning-based approach to representing dynamic objects, inspired by the integral projection model used in tomographic imaging, and learns a latent representation of a dynamic scene that enables the synthesis of novel content sequences not seen during training.
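The integral projection underlying such volumetric rendering amounts to emission-absorption raymarching: accumulating color along a ray while attenuating by the opacity seen so far. A generic sketch of that compositing rule is below; it is not the paper's actual decoder or warp field, just the standard projection model it builds on.

```python
import math

def render_ray(densities, colors, dt):
    """Emission-absorption compositing along one ray.

    densities, colors: per-sample values at equally spaced points; dt: step size.
    """
    transmittance = 1.0
    radiance = 0.0
    for sigma, c in zip(densities, colors):
        alpha = 1.0 - math.exp(-sigma * dt)    # opacity of this segment
        radiance += transmittance * alpha * c  # emitted light that survives
        transmittance *= 1.0 - alpha           # remaining unabsorbed fraction
    return radiance

# A dense white medium: the accumulated radiance saturates toward color 1.0.
value = render_ray(densities=[10.0] * 10, colors=[1.0] * 10, dt=0.1)
```

Because every step is a differentiable product-and-sum, gradients flow from rendered pixels back to the volume, which is what makes this projection model suitable for learning from images.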