Xi Chen
Researcher at University of California, Berkeley
Publications - 53
Citations - 26834
Xi Chen is an academic researcher at the University of California, Berkeley, whose research focuses on reinforcement learning and autoregressive models. The author has an h-index of 39 and has co-authored 53 publications receiving 22,393 citations.
Papers
Posted Content
Improved Techniques for Training GANs
TL;DR: The authors present a variety of new architectural features and training procedures for the generative adversarial network (GAN) framework, achieving state-of-the-art results in semi-supervised classification on MNIST, CIFAR-10, and SVHN.
Proceedings Article
Improved techniques for training GANs
TL;DR: A variety of new architectural features and training procedures are applied to the generative adversarial network (GAN) framework, achieving state-of-the-art results in semi-supervised classification on MNIST, CIFAR-10, and SVHN.
Posted Content
InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets
TL;DR: InfoGAN is a generative adversarial network that maximizes the mutual information between a small subset of the latent variables and the observation; its training procedure can be interpreted as a variation of the Wake-Sleep algorithm.
Proceedings Article
InfoGAN: interpretable representation learning by information maximizing generative adversarial nets
TL;DR: InfoGAN is an information-theoretic extension to the GAN that learns disentangled representations in a completely unsupervised manner; it also discovers visual concepts such as hair styles, the presence of eyeglasses, and emotions on the CelebA face dataset.
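The mutual-information objective the InfoGAN summaries describe is optimized through a variational lower bound, I(c; G(z, c)) >= H(c) + E[log Q(c|x)], where Q is an auxiliary network. Below is a minimal NumPy sketch of that bound for a categorical latent code; the function name is hypothetical and Q is replaced by fixed probabilities rather than a trained network, so this is an illustration of the bound, not the paper's implementation.

```python
import numpy as np

def mi_lower_bound(code_prior, q_probs):
    """Estimate H(c) + E[log Q(c|x)] for a categorical latent code.

    code_prior: prior p(c) over K categories.
    q_probs:    Q(c_true | x_i) for each sample i, i.e. the probability the
                auxiliary network assigns to the code that generated x_i.
    """
    entropy = -np.sum(code_prior * np.log(code_prior))  # H(c)
    return entropy + np.mean(np.log(q_probs))           # + E[log Q(c|x)]

K = 10
prior = np.full(K, 1.0 / K)  # uniform prior over 10 code values

# A perfectly informative Q (probability 1 on the true code) makes the
# bound tight: L_I = H(c) = log K.
perfect_q = np.ones(1000)
print(mi_lower_bound(prior, perfect_q))  # -> log(10) ≈ 2.3026

# An uninformative Q (uniform guess) yields a bound of ~0, reflecting
# that the code cannot be recovered from the observation.
uniform_q = np.full(1000, 1.0 / K)
print(mi_lower_bound(prior, uniform_q))  # -> ~0 (up to float error)
```

In the actual InfoGAN training loop, Q shares layers with the discriminator and this bound is added to the generator's objective, which is what encourages the code dimensions to capture disentangled factors of variation.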
Posted Content
Evolution Strategies as a Scalable Alternative to Reinforcement Learning.
TL;DR: This work explores the use of Evolution Strategies (ES), a class of black-box optimization algorithms, as an alternative to popular MDP-based RL techniques such as Q-learning and policy gradients, and highlights several advantages of ES as a black-box optimization technique.
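The ES approach summarized above estimates a gradient without backpropagation: sample Gaussian perturbations of the parameters, evaluate the return of each perturbed policy, and move the parameters in the return-weighted average noise direction. Here is a minimal NumPy sketch of that loop on a hypothetical toy objective; the function name and hyperparameters are illustrative and this is not the paper's distributed implementation.

```python
import numpy as np

def evolution_strategies(f, theta, sigma=0.1, alpha=0.05, pop=100,
                         iters=300, seed=0):
    """Basic ES loop: theta += alpha/(pop*sigma) * sum_i F_i * eps_i."""
    rng = np.random.default_rng(seed)
    for _ in range(iters):
        eps = rng.standard_normal((pop, theta.size))  # one noise vector per "worker"
        returns = np.array([f(theta + sigma * e) for e in eps])
        returns -= returns.mean()                     # baseline to reduce variance
        theta = theta + alpha / (pop * sigma) * eps.T @ returns
    return theta

# Toy "return": negative squared distance to a target parameter vector,
# so the optimum of f is exactly the target.
target = np.array([1.0, -2.0, 0.5])
f = lambda th: -np.sum((th - target) ** 2)

theta = evolution_strategies(f, np.zeros(3))
print(theta)  # ends up close to [1.0, -2.0, 0.5]
```

Because the update uses only scalar returns, the same loop applies unchanged when f is a full episode rollout of a policy, which is what makes ES trivially parallelizable across workers that only need to exchange scalar returns and random seeds.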