
Etienne Vouga

Researcher at University of Texas at Austin

Publications -  63
Citations -  2398

Etienne Vouga is an academic researcher at the University of Texas at Austin. His research spans the topics of Computer science and Engineering. He has an h-index of 21 and has co-authored 48 publications receiving 1,946 citations. Previous affiliations of Etienne Vouga include Rice University and Harvard University.

Papers
Journal ArticleDOI

Programming curvature using origami tessellations

TL;DR: In this article, scale-independent elementary geometric constructions and constrained optimization algorithms are used to determine spatially modulated origami patterns that yield approximations to given surfaces of constant or varying curvature.
Journal ArticleDOI

Discrete viscous threads

TL;DR: In this paper, a continuum-based discrete model for thin threads of viscous fluid is presented, drawing on the Rayleigh analogy to elastic rods and demonstrating canonical coiling, folding, and breakup in dynamic simulations.
Journal ArticleDOI

3D self-portraits

TL;DR: An automatic pipeline is developed that allows ordinary users to capture complete, fully textured 3D models of themselves in minutes, using only a single Kinect sensor in the uncontrolled lighting environment of their own home.
Proceedings ArticleDOI

Dense Human Body Correspondences Using Convolutional Networks

TL;DR: This work uses a deep convolutional neural network to train a feature descriptor on depth-map pixels; crucially, rather than training the network to solve the shape correspondence problem directly, it is trained on a body region classification problem, modified to increase the smoothness of the learned descriptors near region boundaries.
Posted Content

Dense Human Body Correspondences Using Convolutional Networks

TL;DR: In this paper, a deep learning approach for finding dense correspondences between 3D scans of people is proposed, which requires only partial geometric information in the form of two depth maps or partial reconstructed surfaces, works for humans in arbitrary poses and wearing any clothing, does not require the two people to be scanned from similar viewpoints, and runs in real time.