Weilong Yang
Researcher at Google
Publications - 41
Citations - 1712
Weilong Yang is an academic researcher from Google. The author has contributed to research in topics: Feature (computer vision) & Latent variable. The author has an h-index of 17 and has co-authored 39 publications receiving 1391 citations. Previous affiliations of Weilong Yang include University of Houston & Simon Fraser University.
Papers
Journal ArticleDOI
Discriminative Latent Models for Recognizing Contextual Group Activities
TL;DR: This paper proposes a novel framework for recognizing group activities that jointly captures the group activity, the individual person actions, and the interactions among them, and introduces a new feature representation called the action context (AC) descriptor.
Proceedings ArticleDOI
Recognizing human actions from still images with latent poses
Weilong Yang, Yang Wang, Greg Mori +2 more
TL;DR: This work proposes a novel approach that treats the pose of the person in the image as latent variables that will help with recognition, and shows that by inferring the latent poses, it can improve the final action recognition results.
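The latent-pose idea above can be illustrated with a small sketch: score an (image, action) pair by maximizing a linear score over candidate poses, so the best pose is inferred as a latent variable. This is an assumed simplified form for illustration, not the paper's exact model; the function and variable names are hypothetical.

```python
import numpy as np

def score_with_latent_pose(features_by_pose, weights):
    """Illustrative latent-variable scoring (assumed form).

    features_by_pose: list of feature vectors phi(image, pose, action),
    one per candidate pose. The score is max over poses of w . phi,
    i.e. the latent pose is inferred jointly with the action score.
    Returns (index of best pose, its score).
    """
    scores = [float(np.dot(weights, phi)) for phi in features_by_pose]
    best = int(np.argmax(scores))
    return best, scores[best]
```

During learning, the same max-over-poses inference would be run inside the training loop (as in latent structured models), so the pose that best explains each action label is updated along with the weights.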
Proceedings Article
Beyond Actions: Discriminative Models for Contextual Group Activities
TL;DR: The proposed model jointly captures the group activity, the individual person actions, and the interactions among them; implicitly inferring this structure during learning and inference significantly improves activity recognition performance.
Proceedings Article
Beyond Synthetic Noise: Deep Learning on Controlled Noisy Labels
TL;DR: The authors establish the first benchmark of controlled real-world label noise from the web, enabling web label noise to be studied in a controlled setting for the first time, and show that their method achieves the best result on this dataset as well as on two public benchmarks (CIFAR and WebVision).
Proceedings ArticleDOI
Regularizing Generative Adversarial Networks under Limited Data
TL;DR: LeCam-GAN proposes a regularization approach for training robust GAN models on limited data; the paper theoretically shows a connection between the regularized loss and an f-divergence, which is more robust under limited training data.
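A minimal sketch of a LeCam-style regularizer, under the assumption (consistent with the summary above) that it tracks exponential moving averages of the discriminator's outputs on real and fake batches and penalizes the squared distance between current predictions and the opposite anchor. Class and parameter names here are illustrative, not the paper's reference implementation.

```python
import numpy as np

class LeCamRegularizer:
    """Hedged sketch of a LeCam-style GAN regularizer.

    Maintains EMAs (anchors) of mean discriminator predictions on real
    and fake data, then adds a penalty pulling real predictions toward
    the fake anchor and fake predictions toward the real anchor, which
    bounds the discriminator under limited data.
    """

    def __init__(self, decay=0.99, weight=0.01):
        self.decay = decay    # EMA decay rate (assumed value)
        self.weight = weight  # regularization strength lambda (assumed value)
        self.ema_real = 0.0   # anchor for predictions on real data
        self.ema_fake = 0.0   # anchor for predictions on fake data

    def update(self, d_real, d_fake):
        # Update the anchors with the current batch means.
        self.ema_real = self.decay * self.ema_real + (1 - self.decay) * float(np.mean(d_real))
        self.ema_fake = self.decay * self.ema_fake + (1 - self.decay) * float(np.mean(d_fake))

    def penalty(self, d_real, d_fake):
        # Squared distance to the *opposite* anchor, added to the
        # discriminator loss during training.
        return self.weight * (float(np.mean((d_real - self.ema_fake) ** 2))
                              + float(np.mean((d_fake - self.ema_real) ** 2)))
```

In a training loop, `update` would be called once per discriminator step and `penalty` added to the discriminator loss; the anchors change slowly, which is what stabilizes training when data is scarce.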