Chris Landreth

Researcher at University of Toronto

Publications -  7
Citations -  361

Chris Landreth is an academic researcher from the University of Toronto. The author has contributed to research in the topics of Viseme and Animation. The author has an h-index of 5 and has co-authored 7 publications receiving 187 citations.

Papers
Journal ArticleDOI

JALI: an animator-centric viseme model for expressive lip synchronization

TL;DR: Presents a system that, given an input audio soundtrack and speech transcript, automatically generates expressive lip-synchronized facial animation that is amenable to further artistic refinement and is comparable with both performance capture and professional animator output.
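
As a rough illustration of the animator-facing output such a system produces, the Python sketch below converts an already forced-aligned phoneme list into editable per-channel viseme keyframes. The phoneme-to-viseme table, data structures, and function names are assumptions for illustration only, not the authors' code, and the upstream audio/transcript alignment is assumed to exist.

from dataclasses import dataclass
from typing import List, Tuple

# Tiny illustrative phoneme -> viseme grouping (not the paper's full mapping).
PHONEME_TO_VISEME = {
    "AA": "Ah", "AE": "Ah", "IY": "Ee",
    "M": "MBP", "B": "MBP", "P": "MBP",
    "F": "FV", "V": "FV",
}

@dataclass
class VisemeKey:
    time: float    # seconds into the audio track
    viseme: str    # viseme channel name on the face rig
    weight: float  # activation in [0, 1], editable by the animator

def visemes_from_alignment(phones: List[Tuple[str, float, float]]) -> List[VisemeKey]:
    """Turn a forced-aligned phoneme list [(phoneme, start, end), ...] into
    per-channel keyframes an animator can refine further. The alignment is
    assumed to come from an upstream audio + transcript aligner."""
    keys = []
    for phoneme, start, end in phones:
        channel = PHONEME_TO_VISEME.get(phoneme, "Rest")
        keys.append(VisemeKey(time=start, viseme=channel, weight=1.0))
        keys.append(VisemeKey(time=end, viseme=channel, weight=0.0))
    return keys

if __name__ == "__main__":
    print(visemes_from_alignment([("M", 0.00, 0.08), ("AA", 0.08, 0.25)]))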
Journal ArticleDOI

VisemeNet: audio-driven animator-centric speech animation

TL;DR: A novel deep-learning-based approach that produces animator-centric speech motion curves driving a JALI or standard FACS-based production face-rig directly from input audio, and integrates seamlessly into existing animation pipelines.
Journal ArticleDOI

RigNet: neural rigging for articulated characters

TL;DR: RigNet, as discussed by the authors, predicts a skeleton that matches animator expectations in joint placement and topology and also estimates surface skin weights based on the predicted skeleton; it is trained on a large and diverse collection of rigged models.
Posted Content

RigNet: Neural Rigging for Articulated Characters

TL;DR: RigNet, an end-to-end automated method for producing animation rigs from input character models, predicts a skeleton that matches animator expectations in joint placement and topology and estimates surface skin weights based on the predicted skeleton.
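
As a rough sketch of the two outputs such a rigging method produces (joint positions plus per-vertex skin weights), the Python snippet below fills the skin-weight stage with a trivial inverse-distance softmax baseline just to show the expected tensor shapes; the function name and the heuristic are placeholders, not RigNet's learned graph-network predictions.

import numpy as np

def skin_weights(vertices: np.ndarray, joints: np.ndarray) -> np.ndarray:
    """Toy stand-in for the skin-weight stage: given mesh vertices (V, 3) and
    predicted joint positions (J, 3), return per-vertex skin weights (V, J)
    via a softmax over negative distance. The real system learns both the
    skeleton and these weights; this only shows the shapes a rig expects."""
    d = np.linalg.norm(vertices[:, None, :] - joints[None, :, :], axis=-1)  # (V, J)
    logits = -d / (d.mean() + 1e-8)
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    return w / w.sum(axis=1, keepdims=True)

if __name__ == "__main__":
    verts = np.random.rand(10, 3)                          # toy mesh vertices
    joints = np.array([[0.2, 0.5, 0.5], [0.8, 0.5, 0.5]])  # toy predicted joints
    weights = skin_weights(verts, joints)
    assert np.allclose(weights.sum(axis=1), 1.0)           # each vertex's weights sum to 1
    print(weights.shape)                                   # (10, 2)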
Posted Content

VisemeNet: Audio-Driven Animator-Centric Speech Animation

TL;DR: In this article, a three-stage Long Short-Term Memory (LSTM) network architecture is proposed to produce animator-centric speech motion curves that drive a JALI or standard FACS-based production face-rig directly from input audio.
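
A minimal PyTorch sketch of a stacked three-stage LSTM mapping per-frame audio features to rig activation curves is given below; the layer widths, feature dimensions, and class name are assumptions for illustration, and the published VisemeNet architecture and its intermediate stage outputs differ from this simplification.

import torch
import torch.nn as nn

class ThreeStageSpeechAnimator(nn.Module):
    """Illustrative three-stage LSTM stack: audio features -> intermediate
    representations -> per-frame rig activation curves. Layer widths and the
    exact stage outputs are placeholders, not the published specification."""
    def __init__(self, audio_dim=13, hidden=128, num_curves=32):
        super().__init__()
        self.stage1 = nn.LSTM(audio_dim, hidden, batch_first=True)  # e.g. phoneme-level features
        self.stage2 = nn.LSTM(hidden, hidden, batch_first=True)     # e.g. landmark-level features
        self.stage3 = nn.LSTM(hidden, hidden, batch_first=True)     # viseme/rig curve features
        self.head = nn.Linear(hidden, num_curves)

    def forward(self, audio_feats):             # (batch, frames, audio_dim)
        x, _ = self.stage1(audio_feats)
        x, _ = self.stage2(x)
        x, _ = self.stage3(x)
        return torch.sigmoid(self.head(x))      # (batch, frames, num_curves) in [0, 1]

if __name__ == "__main__":
    model = ThreeStageSpeechAnimator()
    mfcc = torch.randn(1, 100, 13)              # 100 frames of 13-dim MFCC-like features
    curves = model(mfcc)
    print(curves.shape)                         # torch.Size([1, 100, 32])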