SciSpace - formerly Typeset

Ian Goodfellow

Researcher at Google

Publications: 139
Citations: 178,656

Ian Goodfellow is an academic researcher from Google. The author has contributed to research in topics including artificial neural networks and the MNIST database. The author has an h-index of 85 and has co-authored 137 publications receiving 135,390 citations. Previous affiliations of Ian Goodfellow include OpenAI and the Université de Montréal.

Papers
Proceedings Article

Realistic Evaluation of Deep Semi-Supervised Learning Algorithms

TL;DR: This work creates a unified reimplementation and evaluation platform for various widely used SSL techniques and finds that the performance of simple baselines which do not use unlabeled data is often underreported, that SSL methods differ in their sensitivity to the amount of labeled and unlabeled data, and that performance can degrade substantially when the unlabeled dataset contains out-of-class examples.
Posted Content

Adversarial Training Methods for Semi-Supervised Text Classification

TL;DR: This work extends adversarial and virtual adversarial training to the text domain by applying perturbations to the word embeddings in a recurrent neural network rather than to the original input itself.
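The perturbation rule summarized above can be sketched in a few lines of numpy. This is a hedged illustration, not the authors' implementation: the `adversarial_perturbation` helper, the epsilon value, and the toy embeddings and gradient are all invented for the example. The core idea is that the perturbation is the loss gradient with respect to the word embeddings, L2-normalized and scaled by a small budget.

```python
import numpy as np

def adversarial_perturbation(grad, eps=0.1):
    """Return r_adv = eps * g / ||g||_2 for an embedding gradient g.

    A sketch of the normalized-gradient perturbation applied to word
    embeddings (assumed helper, not the paper's code).
    """
    norm = np.linalg.norm(grad)
    if norm == 0.0:  # no gradient signal -> no perturbation
        return np.zeros_like(grad)
    return eps * grad / norm

# Hypothetical (sequence_len, embed_dim) embedding matrix and loss gradient.
embeddings = np.ones((4, 3))
grad = np.arange(12, dtype=float).reshape(4, 3)  # stand-in gradient values

r_adv = adversarial_perturbation(grad, eps=0.1)
perturbed = embeddings + r_adv  # the model is then trained on these
```

Because the perturbation is normalized before scaling, its L2 norm equals the budget `eps` regardless of the gradient's magnitude.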
Proceedings Article

Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data

TL;DR: Papernot et al. propose Private Aggregation of Teacher Ensembles (PATE), which combines multiple models trained on disjoint datasets, such as records from different subsets of users.
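The aggregation step at the heart of PATE can be sketched as a noisy vote count: each teacher model labels a query, Laplace noise is added to the per-class vote counts, and the noisy argmax becomes the label used to train the student. This is a hypothetical numpy illustration, not the authors' code; the `noisy_aggregate` helper, the vote vector, and the noise scale are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

def noisy_aggregate(teacher_votes, num_classes, noise_scale, rng):
    """PATE-style noisy-max aggregation (a sketch, not the authors' code):
    count the teachers' votes per class, add Laplace noise to each count,
    and return the class with the highest noisy count."""
    counts = np.bincount(teacher_votes, minlength=num_classes).astype(float)
    counts += rng.laplace(scale=noise_scale, size=num_classes)
    return int(np.argmax(counts))

# 10 teachers, 3 classes; 7 of the teachers vote for class 2.
votes = np.array([2, 2, 2, 2, 2, 2, 2, 0, 1, 1])
label = noisy_aggregate(votes, num_classes=3, noise_scale=0.5, rng=rng)
```

The Laplace noise is what provides the differential-privacy guarantee: with a large vote margin the noisy argmax almost always agrees with the plurality, while close votes are randomized.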
Posted Content

Adversarial Autoencoders

TL;DR: The adversarial autoencoder (AAE) uses generative adversarial networks (GANs) to perform variational inference by matching the aggregated posterior of the autoencoder's hidden code vector to an arbitrary prior distribution, which ensures that generating from any part of the prior space yields meaningful samples.
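The prior-matching objective summarized above can be sketched with a toy discriminator over code vectors. This is a minimal numpy illustration under stated assumptions, not the paper's implementation: the linear discriminator, the stand-in "posterior" samples, and both loss expressions are invented for the example. The discriminator is trained to separate prior samples from encoder outputs, while the encoder is penalized for being detectable.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Stand-in samples from the aggregated posterior q(z) (here just a shifted
# Gaussian, playing the role of encoder outputs) and from the prior p(z).
z_posterior = rng.normal(loc=2.0, scale=1.0, size=(256, 8))
z_prior = rng.normal(loc=0.0, scale=1.0, size=(256, 8))

# A toy linear discriminator: positive logit means "looks like a prior sample".
w = rng.normal(size=8) * 0.1
b = 0.0

def disc_logit(z):
    return z @ w + b

# Discriminator loss: classify prior samples as 1, posterior samples as 0.
d_loss = -(np.log(sigmoid(disc_logit(z_prior)) + 1e-9).mean()
           + np.log(1.0 - sigmoid(disc_logit(z_posterior)) + 1e-9).mean())

# Encoder ("generator") loss: fool the discriminator, which pushes the
# aggregated posterior q(z) toward the prior p(z).
g_loss = -np.log(sigmoid(disc_logit(z_posterior)) + 1e-9).mean()
```

Alternating gradient steps on these two losses is what replaces the KL regularizer of a variational autoencoder with an adversarial one.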
Posted Content

MaskGAN: Better Text Generation via Filling in the ______

TL;DR: This work introduces an actor-critic conditional GAN that fills in missing text conditioned on the surrounding context and shows, qualitatively and quantitatively, that this produces more realistic conditional and unconditional text samples than a maximum-likelihood-trained model.