Ilya Sutskever

Researcher at OpenAI

Publications - 137
Citations - 294,374

Ilya Sutskever is an academic researcher at OpenAI. He has contributed to research on topics including artificial neural networks and reinforcement learning. He has an h-index of 75 and has co-authored 131 publications receiving 235,539 citations. Previous affiliations of Ilya Sutskever include Google and the University of Toronto.

Papers
Proceedings Article

Intriguing properties of neural networks

TL;DR: It is found that there is no distinction between individual high-level units and random linear combinations of high-level units, according to various methods of unit analysis, suggesting that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks.
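
One way to picture that analysis: retrieve the inputs whose high-layer representations project most strongly onto a single unit's direction versus a random direction in the same space, and compare how semantically coherent the two retrieved sets look. The sketch below does only the retrieval step on stand-in activations; `phi`, the dimensions, and the random direction are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

def top_activating(phi, direction, k=5):
    """Indices of the k inputs whose high-layer representation projects
    most strongly onto `direction`. The paper's observation is that inputs
    retrieved with a natural-basis direction e_i and with a random direction
    v look comparably semantically coherent."""
    scores = phi @ direction
    return np.argsort(scores)[::-1][:k]

rng = np.random.default_rng(0)
phi = rng.normal(size=(1000, 256))                  # stand-in for last-layer activations of 1000 inputs
e_i = np.eye(256)[7]                                # a single "natural basis" unit
v = rng.normal(size=256)
v /= np.linalg.norm(v)                              # a random direction in the same space
print(top_activating(phi, e_i), top_activating(phi, v))
```
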
Posted Content

Improving neural networks by preventing co-adaptation of feature detectors

TL;DR: The authors randomly omit half of the feature detectors on each training case to prevent complex co-adaptations in which a feature detector is only helpful in the context of several other specific feature detectors.
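
As a rough illustration of that idea, the sketch below applies a random keep/drop mask to one layer's feature detectors for a single training case. It uses the "inverted dropout" convention (rescaling the kept units at training time), which differs slightly from the paper's recipe of halving the outgoing weights at test time; the function name and shapes are illustrative.

```python
import numpy as np

def dropout_forward(activations, drop_prob=0.5, rng=None):
    """Randomly zero out hidden units for one training case (inverted-dropout sketch).

    Kept units are rescaled by 1 / (1 - drop_prob) so their expected value
    matches test time, when no units are dropped."""
    rng = rng or np.random.default_rng()
    mask = rng.random(activations.shape) >= drop_prob   # keep roughly half the feature detectors
    return activations * mask / (1.0 - drop_prob), mask

# Example: a layer of 8 feature detectors for one training case.
h = np.array([0.2, 1.3, 0.0, 0.7, 2.1, 0.5, 0.9, 1.1])
h_dropped, mask = dropout_forward(h, drop_prob=0.5)
```
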
Proceedings Article

On the importance of initialization and momentum in deep learning

TL;DR: It is shown that when stochastic gradient descent with momentum uses a well-designed random initialization and a particular type of slowly increasing schedule for the momentum parameter, it can train both DNNs and RNNs to levels of performance that were previously achievable only with Hessian-Free optimization.
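
A minimal sketch of the recipe the TL;DR describes: stochastic gradient descent with classical momentum, where the momentum coefficient is slowly ramped toward its maximum. The ramp formula, learning rate, and toy objective below are illustrative stand-ins, not the exact schedule or initialization studied in the paper.

```python
import numpy as np

def train_momentum(grad_fn, w0, steps=1000, lr=0.01, mu_max=0.99):
    """SGD with classical momentum and a slowly increasing momentum coefficient.

    grad_fn(w) returns the gradient at w; the ramp below is illustrative."""
    w, v = w0.copy(), np.zeros_like(w0)
    for t in range(1, steps + 1):
        mu = min(mu_max, 1.0 - 3.0 / (t + 5))   # slow ramp toward mu_max (illustrative schedule)
        v = mu * v - lr * grad_fn(w)            # velocity: a running blend of past gradients
        w = w + v                               # the Nesterov variant would use grad_fn(w + mu * v)
    return w

# Toy quadratic example: minimize 0.5 * ||w||^2, whose gradient is w itself.
w_final = train_momentum(lambda w: w, np.array([5.0, -3.0]))
```
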
Posted Content

InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets

TL;DR: InfoGAN is a generative adversarial network that also maximizes the mutual information between a small subset of the latent variables and the observation, an objective that can be interpreted as a variation of the Wake-Sleep algorithm.
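
The sketch below computes the categorical-code part of the variational lower bound on that mutual information, E[log Q(c|x)], which InfoGAN adds (scaled by a weight) to the generator objective; the constant entropy term H(c) is omitted. Names such as `q_logits` and the toy batch are assumptions made for illustration.

```python
import numpy as np

def info_lower_bound(q_logits, c_onehot):
    """Variational lower bound on I(c; G(z, c)) for a categorical latent code,
    up to the constant H(c).

    q_logits: (batch, K) outputs of an auxiliary recognition network Q(c|x).
    c_onehot: (batch, K) one-hot codes that were fed to the generator."""
    shifted = q_logits - q_logits.max(axis=1, keepdims=True)                 # for numerical stability
    log_q = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))     # log-softmax
    return (c_onehot * log_q).sum(axis=1).mean()                             # E[log Q(c|x)]

# Toy batch: 4 samples, 3-way categorical code.
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 3))
codes = np.eye(3)[[0, 2, 1, 0]]
print(info_lower_bound(logits, codes))
```
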
Posted Content

Recurrent Neural Network Regularization

TL;DR: This paper shows how to correctly apply dropout to LSTMs and demonstrates that doing so substantially reduces overfitting on a variety of tasks.
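
The key recipe is to apply dropout only to the non-recurrent (layer-to-layer) connections, leaving the recurrent state path untouched. The sketch below shows one time step of a stacked RNN with that placement; a plain tanh cell stands in for the LSTM to keep the example short, and all shapes and names are illustrative.

```python
import numpy as np

def dropout(x, p, rng):
    """Inverted dropout; p is the drop probability."""
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

def stacked_rnn_step(x_t, h_prev, Wx, Wh, drop_p, rng):
    """One time step of a stacked tanh RNN with dropout applied only to the
    non-recurrent (layer-to-layer) inputs; the recurrent path h_prev[l] -> h[l]
    is never dropped."""
    h, inp = [], x_t
    for l, (wx, wh) in enumerate(zip(Wx, Wh)):
        inp = dropout(inp, drop_p, rng)               # dropped: input arriving from the layer below
        h_l = np.tanh(inp @ wx + h_prev[l] @ wh)      # not dropped: recurrent state
        h.append(h_l)
        inp = h_l
    return h

# Toy setup: input of size 10, two hidden layers of size 20.
rng = np.random.default_rng(0)
sizes = [10, 20, 20]
Wx = [rng.normal(size=(sizes[l], sizes[l + 1])) * 0.1 for l in range(2)]
Wh = [rng.normal(size=(sizes[l + 1], sizes[l + 1])) * 0.1 for l in range(2)]
h0 = [np.zeros(sizes[l + 1]) for l in range(2)]
h1 = stacked_rnn_step(rng.normal(size=10), h0, Wx, Wh, drop_p=0.5, rng=rng)
```
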