Guandao Yang
Researcher at Cornell University
Publications - 21
Citations - 1167
Guandao Yang is an academic researcher at Cornell University. The author has contributed to research on topics including computer science and point clouds, has an h-index of 13, and has co-authored 18 publications receiving 650 citations.
Papers
Posted Content
PointFlow: 3D Point Cloud Generation with Continuous Normalizing Flows
TL;DR: A principled probabilistic framework generates 3D point clouds by modeling them as a distribution of distributions; the invertibility of normalizing flows enables computing the likelihood during training and allows the model to be trained in the variational inference framework.
Proceedings ArticleDOI
PointFlow: 3D Point Cloud Generation With Continuous Normalizing Flows
TL;DR: PointFlow as discussed by the authors proposes a principled probabilistic framework to generate 3D point clouds by modeling them as a distribution of distributions, where the first level is the distribution of shapes and the second level is the distribution of points given a shape.
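The two-level sampling idea can be illustrated with a minimal sketch: draw a shape latent, then push per-point Gaussian noise through an invertible map conditioned on that latent. This is not the paper's method; a hypothetical affine map stands in for the continuous normalizing flow, and the latent-to-scale/shift conditioning is an assumption made up for illustration.

```python
import numpy as np

def sample_point_cloud(n_points=1024, latent_dim=8, rng=None):
    """Toy two-level sampler in the spirit of PointFlow:
    level 1 draws a shape latent z, level 2 pushes per-point
    Gaussian noise through an invertible map conditioned on z.
    A simple affine map stands in for the continuous flow."""
    rng = np.random.default_rng(rng)
    z = rng.standard_normal(latent_dim)          # shape latent (level 1)
    # hypothetical conditioning: per-axis scale/shift derived from z
    scale = np.exp(0.1 * z[:3])
    shift = 0.1 * z[3:6]
    noise = rng.standard_normal((n_points, 3))   # per-point prior (level 2)
    points = noise * scale + shift               # invertible transform
    # invertibility gives an exact log-likelihood term via the
    # change-of-variables formula: log|det Jacobian| of the map
    log_det = np.sum(np.log(scale))
    return points, log_det

pts, log_det = sample_point_cloud(n_points=16, rng=0)
```

The tractable `log_det` term is what lets flow-based models train by maximum likelihood rather than adversarially.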
Posted Content
Learning Gradient Fields for Shape Generation.
Ruojin Cai, Guandao Yang, Hadar Averbuch-Elor, Zekun Hao, Serge Belongie, Noah Snavely, Bharath Hariharan +6 more
TL;DR: This work generates point clouds by performing stochastic gradient ascent on an unnormalized probability density, thereby moving sampled points toward the high-likelihood regions and allowing for extraction of a high-quality implicit surface.
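The gradient-ascent sampling described above can be sketched in a few lines. In the paper the gradient field is learned from data; here an analytic Gaussian log-density is substituted so the sketch is self-contained, and the step sizes and noise scale are illustrative assumptions.

```python
import numpy as np

def log_density_grad(x, mu=np.zeros(3)):
    """Gradient of an (unnormalized) Gaussian log-density centered
    at mu. The paper learns this field; here it is analytic."""
    return -(x - mu)

def langevin_ascent(points, grad_fn, steps=100, step_size=0.1,
                    noise=0.01, rng=None):
    """Move sampled points toward high-likelihood regions by noisy
    gradient ascent on the log-density (Langevin-style updates)."""
    rng = np.random.default_rng(rng)
    x = points.copy()
    for _ in range(steps):
        x += step_size * grad_fn(x) + noise * rng.standard_normal(x.shape)
    return x

rng = np.random.default_rng(0)
init = rng.standard_normal((256, 3)) * 5.0       # start far from the shape
final = langevin_ascent(init, log_density_grad, rng=1)
```

After the updates, the point set concentrates near the density's mode, i.e. the "shape" implied by the gradient field.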
Proceedings ArticleDOI
Learning to Evaluate Image Captioning
TL;DR: This paper proposes a learning-based discriminative evaluation metric that is directly trained to distinguish between human and machine-generated captions, along with a data augmentation scheme that explicitly incorporates pathological transformations as negative examples during training.
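The augmentation scheme can be sketched as generating "pathological" variants of a human caption to serve as negatives for the learned metric. The specific transformations below (word shuffling, word dropping) are illustrative assumptions, not necessarily the paper's exact set.

```python
import random

def make_pathological_negatives(caption, seed=None):
    """Generate pathological negative captions from a human caption,
    e.g. by shuffling its words or dropping the last word. A learned
    discriminative metric can then be trained to reject these."""
    rng = random.Random(seed)
    words = caption.split()
    shuffled = words[:]
    rng.shuffle(shuffled)          # word-order corruption
    dropped = words[:-1]           # content deletion
    return [" ".join(shuffled), " ".join(dropped)]

negatives = make_pathological_negatives("a dog runs on the beach", seed=0)
```

Such negatives are hard for n-gram-overlap metrics to penalize, which is the motivation for training a discriminative metric on them.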
Proceedings Article
SWALP: Stochastic Weight Averaging in Low-Precision Training
Guandao Yang, Tianyi Zhang, Polina Kirichenko, Junwen Bai, Andrew Gordon Wilson, Christopher De Sa +5 more
TL;DR: It is shown that SWALP converges arbitrarily close to the optimal solution for quadratic objectives, and to a noise ball asymptotically smaller than that of low-precision SGD in strongly convex settings.
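The quadratic-objective claim can be illustrated with a minimal sketch: SGD iterates are kept on a low-precision grid via stochastic rounding, while their running average is accumulated in full precision. The grid spacing, learning rate, noise level, and averaging schedule below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def stochastic_round(x, scale=2**-4, rng=None):
    """Stochastic rounding onto a fixed-point grid with spacing
    `scale` -- the assumed low-precision representation."""
    rng = np.random.default_rng(rng)
    q = x / scale
    floor = np.floor(q)
    return (floor + (rng.random(x.shape) < (q - floor))) * scale

def swalp_quadratic(steps=500, lr=0.05, avg_start=100, seed=0):
    """SWALP sketch on the 1-D quadratic f(w) = 0.5 * (w - 3)^2:
    low-precision SGD iterates plus a full-precision running average
    of the late iterates."""
    rng = np.random.default_rng(seed)
    w = np.array([0.0])
    avg, n = np.zeros(1), 0
    for t in range(steps):
        grad = (w - 3.0) + 0.1 * rng.standard_normal(1)  # noisy gradient
        w = stochastic_round(w - lr * grad, rng=rng)     # low-precision step
        if t >= avg_start:                               # average late iterates
            avg, n = avg + w, n + 1
    return w[0], avg[0] / n

w_low, w_swa = swalp_quadratic()
```

The low-precision iterate `w_low` bounces inside a noise ball around the optimum at 3.0, while the averaged `w_swa` sits closer to it, which is the intuition behind the convergence result.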