Tongzhou Wang
Researcher at Massachusetts Institute of Technology
Publications - 14
Citations - 1530
Tongzhou Wang is an academic researcher from the Massachusetts Institute of Technology. The author has contributed to research on topics including Rewriting and Gibbs sampling, has an h-index of 10, and has co-authored 14 publications receiving 639 citations.
Papers
Posted Content
Understanding Contrastive Representation Learning through Alignment and Uniformity on the Hypersphere
Tongzhou Wang, Phillip Isola +1 more
TL;DR: This work identifies two key properties related to the contrastive loss: alignment (closeness) of features from positive pairs, and uniformity of the induced distribution of the (normalized) features on the hypersphere.
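The two properties named in the TL;DR correspond to two losses defined in the paper: an alignment loss, the expected distance between embeddings of positive pairs, and a uniformity loss, the log of the average pairwise Gaussian potential. A minimal numpy sketch of both (the function names and the all-pairs numpy phrasing are ours; embeddings are assumed already L2-normalized onto the hypersphere):

```python
import numpy as np

def align_loss(x, y, alpha=2):
    # Mean distance (raised to alpha) between positive-pair embeddings.
    # x[i] and y[i] are assumed to be two views of the same sample.
    return np.mean(np.linalg.norm(x - y, axis=1) ** alpha)

def uniform_loss(x, t=2):
    # Log of the mean Gaussian potential over all distinct pairs;
    # lower values mean embeddings are spread more uniformly.
    sq_dists = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    i, j = np.triu_indices(len(x), k=1)
    return np.log(np.mean(np.exp(-t * sq_dists[i, j])))
```

Perfectly aligned pairs drive `align_loss` to zero, while a collapsed representation (all points identical) maximizes `uniform_loss`, so optimizing both pulls positives together and spreads the overall distribution over the sphere.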
Proceedings Article
Understanding Contrastive Representation Learning through Alignment and Uniformity on the Hypersphere
Tongzhou Wang, Phillip Isola +1 more
TL;DR: This paper identifies two key properties related to the contrastive loss: alignment (closeness) of features from positive pairs, and uniformity of the induced distribution of the (normalized) features on the hypersphere.
Proceedings Article
Learning to Synthesize a 4D RGBD Light Field from a Single Image
TL;DR: In this paper, the pipeline consists of a convolutional neural network (CNN) that estimates scene geometry, a stage that renders a Lambertian light field using that geometry, and a second CNN that predicts occluded rays and non-Lambertian effects.
Posted Content
Learning to Synthesize a 4D RGBD Light Field from a Single Image
TL;DR: This work presents a machine learning algorithm that takes as input a 2D RGB image and synthesizes a 4D RGBD light field (color and depth of the scene in each ray direction). It is unique in predicting RGBD for each light field ray and in improving unsupervised single-image depth estimation by enforcing consistency of ray depths that should intersect the same scene point.
Posted Content
Dataset Distillation
TL;DR: This paper keeps the model fixed and instead attempts to distill the knowledge from a large training dataset into a small one: it synthesizes a small number of data points that need not come from the correct data distribution but that, when trained on, approximate the model trained on the original data.
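The distillation described here is a bilevel optimization: an inner loop trains the model on the synthetic data, and an outer loop updates the synthetic data to reduce the loss on the real data. A deliberately tiny sketch of that idea, far simpler than the paper's method: a linear regression model, a single inner gradient step from a fixed initialization, and only the synthetic labels learned (inputs fixed), so the outer gradient can be written by hand. All names, sizes, and simplifications here are ours for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" regression data we want to distill.
X_real = rng.normal(size=(100, 5))
w_true = rng.normal(size=5)
y_real = X_real @ w_true

# Tiny synthetic set: 3 fixed inputs, learnable targets.
X_syn = rng.normal(size=(3, 5))
y_syn = np.zeros(3)

eta = 0.1        # inner-loop (model) learning rate
outer_lr = 0.05  # outer-loop (synthetic-label) learning rate
w0 = np.zeros(5) # fixed model initialization

def inner_step(y_s):
    # One gradient step on the synthetic squared loss from w0.
    grad = (2 / len(X_syn)) * X_syn.T @ (X_syn @ w0 - y_s)
    return w0 - eta * grad

def real_loss(w):
    return np.mean((X_real @ w - y_real) ** 2)

for _ in range(200):
    w1 = inner_step(y_syn)
    # Chain rule through the inner step: dw1/dy_syn = (2*eta/m) X_syn^T,
    # so grad_{y_syn} real_loss(w1) = (2*eta/m) X_syn @ grad_w real_loss(w1).
    g_w1 = (2 / len(X_real)) * X_real.T @ (X_real @ w1 - y_real)
    y_syn -= outer_lr * (2 * eta / len(X_syn)) * X_syn @ g_w1
```

After the outer loop, one gradient step on the 3 distilled points moves the model closer to fitting all 100 real points than the same step on untrained synthetic labels would.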