Ian Goodfellow
Researcher at Google
Publications - 139
Citations - 178656
Ian Goodfellow is an academic researcher from Google. The author has contributed to research on topics including artificial neural networks and the MNIST database. The author has an h-index of 85 and has co-authored 137 publications receiving 135,390 citations. Previous affiliations of Ian Goodfellow include OpenAI and Université de Montréal.
Papers
Proceedings Article
On the challenges of physical implementations of RBMs
TL;DR: In this paper, the authors conduct software simulations to determine how harmful each of the restrictions imposed by physical RBM hardware is, and suggest that designers of new physical computing hardware and algorithms for physical computers should focus their efforts on overcoming the limitations imposed by the topology restrictions of currently existing physical computers.
Posted Content
Adversarial Examples that Fool both Computer Vision and Time-Limited Humans
Gamaleldin F. Elsayed, Shreya Shankar, Brian Cheung, Nicolas Papernot, Alex Kurakin, Ian Goodfellow, Jascha Sohl-Dickstein +6 more
TL;DR: This article showed that adversarial examples that strongly transfer across computer vision models can influence the classifications made by time-limited human observers, by matching the initial processing of the human visual system.
Posted Content
Joint Training of Deep Boltzmann Machines
TL;DR: A new method for training deep Boltzmann machines jointly is introduced; existing methods either require an initial learning pass that trains the deep Boltzmann machine greedily, one layer at a time, or do not perform well on classification tasks.
Proceedings Article
A Domain Agnostic Measure for Monitoring and Evaluating GANs
Paulina Grnarova, Kfir Y. Levy, Aurelien Lucchi, Nathanaël Perraudin, Ian Goodfellow, Thomas Hofmann, Andreas Krause +6 more
TL;DR: This work uses the notion of duality gap from game theory to propose a measure that addresses both relative assessment of different GAN models and monitoring the progress of a single model throughout training at a low computational cost.
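As a sketch of the idea (assuming the standard minimax GAN value function $V(G, D)$ between generator $G$ and discriminator $D$), the duality gap compares the current pair $(G, D)$ against each player's best response:

$$\mathrm{DG}(G, D) = \max_{D'} V(G, D') - \min_{G'} V(G', D)$$

This quantity is nonnegative and reaches zero at a Nash equilibrium, which is what makes it usable as a training-progress monitor; in practice the inner maximization and minimization are approximated by a few gradient steps rather than solved exactly.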
Posted Content
Adversarial Reprogramming of Neural Networks
TL;DR: In this article, the authors introduce a method to reprogram the target model to perform a task chosen by the attacker without the attacker needing to specify or compute the desired output for each test-time input.