Kyle Olszewski

Researcher at University of Southern California

Publications - 31
Citations - 1075

Kyle Olszewski is an academic researcher from the University of Southern California. The author has contributed to research in topics including Computer science and Rendering (computer graphics). The author has an h-index of 9 and has co-authored 19 publications receiving 603 citations. Previous affiliations of Kyle Olszewski include the Institute for Creative Technologies.

Papers
Journal Article

Facial performance sensing head-mounted display

TL;DR: Proposes a novel HMD that enables real-time 3D facial performance-driven animation suitable for social interactions in virtual worlds, along with a short calibration step that readjusts the Gaussian mixture distribution of the mapping before each use.
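
As a rough illustration of the Gaussian-mixture-based mapping mentioned in the summary, the sketch below fits a joint Gaussian mixture over concatenated sensor and blendshape vectors and conditions it on a sensor reading to predict blendshape weights (Gaussian mixture regression). The dimensions, variable names, and random stand-in calibration data are illustrative assumptions, not the paper's actual pipeline.

import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical dimensions: d_s sensor channels, d_b blendshape weights.
d_s, d_b = 8, 30
rng = np.random.default_rng(0)
X = rng.normal(size=(500, d_s + d_b))  # stand-in calibration data

gmm = GaussianMixture(n_components=4, covariance_type="full").fit(X)

def predict_blendshapes(s):
    """Condition the joint GMM on a sensor reading s and return the
    expected blendshape weights (Gaussian mixture regression)."""
    preds, resp = [], []
    for k in range(gmm.n_components):
        mu_s, mu_b = gmm.means_[k, :d_s], gmm.means_[k, d_s:]
        S_ss = gmm.covariances_[k, :d_s, :d_s]
        S_bs = gmm.covariances_[k, d_s:, :d_s]
        diff = s - mu_s
        # Conditional mean of the blendshape block given the sensor reading.
        preds.append(mu_b + S_bs @ np.linalg.solve(S_ss, diff))
        # Responsibility of component k for s (unnormalized density).
        resp.append(gmm.weights_[k]
                    * np.exp(-0.5 * diff @ np.linalg.solve(S_ss, diff))
                    / np.sqrt(np.linalg.det(S_ss)))
    resp = np.array(resp) / np.sum(resp)
    return np.sum(np.array(preds) * resp[:, None], axis=0)

blendshape_weights = predict_blendshapes(rng.normal(size=d_s))

The per-use calibration step described in the summary would correspond to readjusting such a fitted distribution with a small amount of user-specific data.
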
Journal Article

High-fidelity facial reflectance and geometry inference from an unconstrained image

TL;DR: A deep learning-based technique that infers high-quality facial reflectance and geometry from a single unconstrained image of the subject, which may contain partial occlusions and arbitrary illumination conditions; the authors demonstrate the rendering of high-fidelity 3D avatars for a variety of subjects captured under different lighting conditions.
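
The summary does not specify the network, but the general shape of such a single-image inference model can be sketched as a toy encoder-decoder in PyTorch that maps a face crop to per-pixel albedo and normal maps. The layer sizes, output parameterization, and class name (ReflectanceNet) are placeholder assumptions; the paper's actual networks are far larger and more elaborate.

import torch
import torch.nn as nn

class ReflectanceNet(nn.Module):
    """Toy encoder-decoder: a face crop in, per-pixel albedo and
    normal maps out."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 6, 4, stride=2, padding=1),  # 3 albedo + 3 normal
        )

    def forward(self, img):
        out = self.dec(self.enc(img))
        albedo, normals = out[:, :3], out[:, 3:]
        return torch.sigmoid(albedo), torch.tanh(normals)

albedo, normals = ReflectanceNet()(torch.rand(1, 3, 256, 256))
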
Journal Article

High-fidelity facial and speech animation for VR HMDs

TL;DR: Introduces a novel system that lets HMD users control a digital avatar in real time while producing plausible speech animation and emotional expressions, and evaluates its quality on a variety of subjects against state-of-the-art real-time facial tracking techniques.
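
One plausible reading of such a system, sketched below under loud assumptions: per-frame audio features and camera-derived visual features are fused by a small regressor into expression and viseme weights that drive the avatar. All dimensions and names (d_audio, d_visual, AvatarController) are hypothetical, and the real system involves far more than this fusion step.

import torch
import torch.nn as nn

class AvatarController(nn.Module):
    """Toy fusion model: concatenates hypothetical audio features and
    mouth-camera features, regresses expression/viseme weights."""
    def __init__(self, d_audio=13, d_visual=128, d_out=40):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(d_audio + d_visual, 256), nn.ReLU(),
            nn.Linear(256, d_out),
        )

    def forward(self, audio_feat, visual_feat):
        x = torch.cat([audio_feat, visual_feat], dim=-1)
        return self.mlp(x)

weights = AvatarController()(torch.rand(1, 13), torch.rand(1, 128))
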
Proceedings Article

Realistic Dynamic Facial Textures from a Single Image Using GANs

TL;DR: A deep generative network is trained to infer realistic per-frame texture deformations of the target identity from the per-frame source textures and a single target texture; the method can both animate the face and perform video face replacement on the source video using the target appearance.
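
A minimal sketch of the conditional-generator idea, assuming (hypothetically) that a source-performance texture frame and the single target texture are concatenated channel-wise as generator input; the adversarial training loop and discriminator are omitted.

import torch
import torch.nn as nn

# Toy conditional generator: source frame texture + target texture
# concatenated channel-wise (6 channels in), per-frame target texture out.
generator = nn.Sequential(
    nn.Conv2d(6, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
)

source_frame = torch.rand(1, 3, 256, 256)   # stand-in texture data
target_texture = torch.rand(1, 3, 256, 256)
output = generator(torch.cat([source_frame, target_texture], dim=1))
# In training, `output` would be scored by a discriminator
# (adversarial loss), which is omitted here.
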
Proceedings Article

Transformable Bottleneck Networks

TL;DR: Demonstrates that the bottlenecks produced by networks trained for this task contain meaningful spatial structure, allowing a variety of intuitive 3D image manipulations well beyond the rigid transformations seen during training.
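
The central mechanism, resampling a volumetric bottleneck under a 3D transformation before decoding, can be sketched in a few lines of PyTorch; the encoder and decoder are omitted, and the feature volume below is random stand-in data.

import torch
import torch.nn.functional as F

# A volumetric bottleneck (N, C, D, H, W); in a TBN this would be
# produced by an image encoder rather than sampled at random.
feat = torch.rand(1, 16, 8, 8, 8)

# Example rigid transform: rotate the volume 90 degrees about the
# vertical axis via an affine resampling grid.
theta = torch.tensor([[[0.0, 0.0, 1.0, 0.0],
                       [0.0, 1.0, 0.0, 0.0],
                       [-1.0, 0.0, 0.0, 0.0]]])
grid = F.affine_grid(theta, list(feat.shape), align_corners=False)
transformed = F.grid_sample(feat, grid, align_corners=False)

# Decoding `transformed` instead of `feat` would render the input
# from the correspondingly transformed viewpoint.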