Ian Goodfellow
Researcher at Google
Publications - 139
Citations - 178656
Ian Goodfellow is an academic researcher at Google. He has contributed to research topics including artificial neural networks and the MNIST database. He has an h-index of 85 and has co-authored 137 publications receiving 135,390 citations. His previous affiliations include OpenAI and the Université de Montréal.
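The h-index cited above is defined as the largest h such that the author has h papers with at least h citations each. A minimal sketch of that computation (the citation counts below are made up for illustration, not Goodfellow's real data):

```python
def h_index(citations):
    """h-index: the largest h such that at least h papers
    have h or more citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank  # this paper still clears the threshold
        else:
            break
    return h

# Toy example: 4 papers with at least 4 citations each
print(h_index([10, 8, 5, 4, 3]))  # → 4
```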
Papers
Posted Content
MixMatch: A Holistic Approach to Semi-Supervised Learning
TL;DR: MixMatch, as discussed by the authors, guesses low-entropy labels for unlabeled examples and mixes labeled and unlabeled data using MixUp, obtaining state-of-the-art results.
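The MixUp step the summary refers to forms a convex combination of two examples and their labels, with the mixing weight drawn from a Beta distribution; MixMatch additionally keeps the result closer to the first input by taking max(λ, 1−λ). A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup(x1, y1, x2, y2, alpha=0.75):
    """MixUp: convex combination of two inputs and their one-hot labels.
    MixMatch's variant takes max(lam, 1 - lam) so the mixed example
    stays closer to the first input."""
    lam = rng.beta(alpha, alpha)
    lam = max(lam, 1 - lam)
    x = lam * x1 + (1 - lam) * x2
    y = lam * y1 + (1 - lam) * y2
    return x, y
```

Because λ ≥ 0.5 after the max, the mixed label always favors the first example's class.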
Posted Content
Theano: new features and speed improvements
Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian Goodfellow, Arnaud Bergeron, Nicolas Bouchard, David Warde-Farley, Yoshua Bengio, +8 more
TL;DR: New features and efficiency improvements to Theano are presented, along with benchmarks comparing Theano's performance to Torch7, a recently introduced machine learning library, and to RNNLM, a C++ library targeted at recurrent neural networks.
Proceedings Article
Ensemble Adversarial Training: Attacks and Defenses
TL;DR: Ensemble adversarial training as discussed by the authors augments training data with perturbations transferred from other models to improve robustness to black-box attacks, and achieves state-of-the-art performance on ImageNet.
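The core idea the summary describes is crafting adversarial perturbations on *other*, pre-trained models and adding them to the target model's training set. A minimal numerical sketch on a toy logistic model, using a fast-gradient-sign perturbation (the model, data, and sizes here are all hypothetical):

```python
import numpy as np

def fgsm(x, y, w, b, eps=0.1):
    """Fast-gradient-sign perturbation of input x against a logistic
    model (w, b): step each coordinate by eps in the direction that
    increases the cross-entropy loss."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # sigmoid prediction
    grad_x = (p - y) * w                    # d(cross-entropy)/dx
    return x + eps * np.sign(grad_x)

# Augment a toy training set with perturbations transferred
# from a static, pre-trained "source" model:
rng = np.random.default_rng(0)
w_src, b_src = rng.normal(size=4), 0.0      # hypothetical source model
X = rng.normal(size=(8, 4))
y = (X @ w_src > 0).astype(float)
X_adv = np.stack([fgsm(X[i], y[i], w_src, b_src) for i in range(len(X))])
X_train = np.concatenate([X, X_adv])        # clean + transferred adversarial examples
```

The target model would then be trained on `X_train`, decoupling adversarial-example generation from the model being defended.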
Posted Content
NIPS 2016 Tutorial: Generative Adversarial Networks
TL;DR: This report summarizes the tutorial presented by the author at NIPS 2016 on generative adversarial networks (GANs), and describes state-of-the-art image models that combine GANs with other methods.
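The GAN framework the tutorial covers can be stated as a two-player minimax game between a generator G and a discriminator D:

```latex
\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}\!\left[\log D(x)\right]
 + \mathbb{E}_{z \sim p_z}\!\left[\log\bigl(1 - D(G(z))\bigr)\right]
```

The discriminator is trained to distinguish real samples from generated ones, while the generator is trained to fool it.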
Posted Content
Intriguing properties of neural networks
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, Rob Fergus, +7 more
TL;DR: This article showed that deep neural networks learn input-output mappings that are discontinuous to a significant extent, which suggests that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks.
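A minimal numerical sketch of the discontinuity phenomenon (the paper itself finds its perturbations with an optimization procedure; the linear toy model below is only an illustration): a perturbation that is tiny in every coordinate can still flip a classifier's output, because its effect accumulates across many dimensions.

```python
import numpy as np

d = 1000
w = np.ones(d)                # hypothetical classifier weights
x = -0.01 * np.ones(d)        # input scored just below the decision boundary
eps = 0.02
x_adv = x + eps * np.sign(w)  # per-coordinate change of only 0.02

# The score moves from -10 to +10, so the predicted class flips:
print(np.sign(w @ x), np.sign(w @ x_adv))  # → -1.0 1.0
```

In high dimensions, the tiny per-coordinate change of 0.02 adds up to a score shift of 0.02 × 1000 = 20, which is what makes the mapping look discontinuous.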