Open Access · Journal Article
Dropout: a simple way to prevent neural networks from overfitting
TLDR
It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.

Abstract
Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different "thinned" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
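As a rough illustration of the mechanism described in the abstract (a minimal sketch, not the authors' implementation), the snippet below drops hidden units at random during training and, at test time, scales the retained activations by the keep probability, which is equivalent to using a single unthinned network with proportionally smaller weights. The keep probability and layer shape are arbitrary example values.

    import numpy as np

    rng = np.random.default_rng(0)

    def dropout_forward(activations, keep_prob, training):
        """Dropout applied to one layer's activations.

        Training: each unit is kept with probability keep_prob and set to
        zero otherwise, so every minibatch effectively trains a different
        "thinned" network. Test: no units are dropped; scaling by keep_prob
        approximates averaging the predictions of all thinned networks.
        """
        if training:
            mask = rng.random(activations.shape) < keep_prob
            return activations * mask
        return activations * keep_prob

    # Hypothetical usage on a hidden layer h of shape (batch, units):
    h = rng.standard_normal((4, 8))
    h_train = dropout_forward(h, keep_prob=0.5, training=True)
    h_test = dropout_forward(h, keep_prob=0.5, training=False)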
Citations
Journal Article
iPrivacy: Image Privacy Protection by Identifying Sensitive Objects via Deep Multi-Task Learning
TL;DR: Massive social images and their privacy settings are leveraged to learn object-privacy relatedness and to automatically identify a set of privacy-sensitive object classes, and a deep multi-task learning algorithm is developed to detect these sensitive objects for image privacy protection.
Journal Article
Deep Learning for Automated Skeletal Bone Age Assessment in X-Ray Images
TL;DR: This paper proposes and tests several deep learning approaches to assess skeletal bone age automatically and shows an average discrepancy between manual and automatic evaluation of about 0.8 years, which is state-of-the-art performance.
Proceedings Article
Model Based Reinforcement Learning for Atari
Łukasz Kaiser, Mohammad Babaeizadeh, Piotr Milos, Blazej Osinski, Roy H. Campbell, Konrad Czechowski, Dumitru Erhan, Chelsea Finn, Piotr Kozakowski, Sergey Levine, Afroz Mohiuddin, Ryan Sepassi, George Tucker, Henryk Michalewski +13 more
TL;DR: Simulated Policy Learning (SimPLe), a complete model-based deep RL algorithm based on video prediction models, is described and a comparison of several model architectures is presented, including a novel architecture that yields the best results in the authors' setting.
Proceedings Article
Long-Term Feature Banks for Detailed Video Understanding
TL;DR: In this article, a long-term feature bank is proposed to augment state-of-the-art video models that otherwise would only view short clips of 2-5 seconds, enabling existing video models to relate the present to the past, and put events in context.
Proceedings Article
FreeLB: Enhanced Adversarial Training for Natural Language Understanding
TL;DR: A novel adversarial training algorithm, FreeLB, is proposed that promotes higher invariance in the embedding space by adding adversarial perturbations to word embeddings and minimizing the resultant adversarial risk inside different regions around input samples.
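As a loose illustration only (FreeLB itself takes several ascent steps and accumulates gradients across them, which is not reproduced here), the sketch below applies a single adversarial perturbation to precomputed word embeddings; the assumption that the model accepts embeddings directly, and the epsilon value, are hypothetical.

    import torch

    def adversarial_embedding_loss(model, embeds, labels, loss_fn, epsilon=1e-2):
        """One-step adversarial perturbation in embedding space (simplified)."""
        delta = torch.zeros_like(embeds, requires_grad=True)
        loss = loss_fn(model(embeds + delta), labels)
        grad, = torch.autograd.grad(loss, delta)
        # Move the perturbation in the direction that increases the loss,
        # keeping it inside a small ball around the original embeddings.
        delta_adv = epsilon * grad / (grad.norm() + 1e-12)
        return loss_fn(model(embeds + delta_adv), labels)

Backpropagating the returned loss (typically added to the clean loss) is what minimizes the adversarial risk around each input sample.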
References
Proceedings Article
ImageNet Classification with Deep Convolutional Neural Networks
TL;DR: A deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax is shown to achieve state-of-the-art performance on ImageNet classification.
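For concreteness, here is a hedged PyTorch sketch of an AlexNet-style network matching that description: five convolutional layers with max-pooling after the first, second, and fifth, then three fully-connected layers producing 1000 class scores (the softmax is usually folded into the cross-entropy loss). Channel counts follow the original paper approximately; details such as local response normalization are omitted, and a 3x227x227 input is assumed.

    import torch.nn as nn

    alexnet_like = nn.Sequential(
        nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(),
        nn.MaxPool2d(kernel_size=3, stride=2),
        nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(),
        nn.MaxPool2d(kernel_size=3, stride=2),
        nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(kernel_size=3, stride=2),
        nn.Flatten(),
        nn.Dropout(0.5), nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
        nn.Dropout(0.5), nn.Linear(4096, 4096), nn.ReLU(),
        nn.Linear(4096, 1000),  # 1000-way scores; softmax applied in the loss
    )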
Journal Article
Regression Shrinkage and Selection via the Lasso
TL;DR: A new method for estimation in linear models called the lasso, which minimizes the residual sum of squares subject to the sum of the absolute values of the coefficients being less than a constant, is proposed.
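In symbols (a standard statement of the estimator; the notation is supplied here for illustration and is not taken from this page), the lasso solves

    \hat{\beta} \;=\; \arg\min_{\beta}\; \sum_{i=1}^{n} \Big( y_i - \beta_0 - \sum_{j=1}^{p} x_{ij}\,\beta_j \Big)^{2}
    \qquad \text{subject to} \qquad \sum_{j=1}^{p} \lvert \beta_j \rvert \le t,

where t >= 0 is the constant bounding the sum of the absolute values of the coefficients; making t small shrinks some coefficients exactly to zero, which is what couples shrinkage with variable selection.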
Journal Article
Reducing the Dimensionality of Data with Neural Networks
TL;DR: In this article, an effective way of initializing the weights is described that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool for reducing the dimensionality of data.
Journal Article
A fast learning algorithm for deep belief nets
TL;DR: A fast, greedy algorithm is derived that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory.
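A minimal sketch of the layer-at-a-time idea (not the paper's full procedure: the undirected associative memory formed by the top two layers and the subsequent fine-tuning are omitted, and the layer sizes, learning rate, and epoch count below are arbitrary): each restricted Boltzmann machine is trained with one step of contrastive divergence, and its hidden activations become the training data for the next layer.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def train_rbm(data, n_hidden, epochs=5, lr=0.05):
        """Train one restricted Boltzmann machine with CD-1."""
        n_visible = data.shape[1]
        W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)
        for _ in range(epochs):
            for v0 in data:
                p_h0 = sigmoid(v0 @ W + b_h)
                h0 = (rng.random(n_hidden) < p_h0).astype(float)
                v1 = sigmoid(h0 @ W.T + b_v)      # reconstructed visible units
                p_h1 = sigmoid(v1 @ W + b_h)
                W += lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
                b_v += lr * (v0 - v1)
                b_h += lr * (p_h0 - p_h1)
        return W, b_h

    def greedy_pretrain(data, layer_sizes):
        """Learn the network one layer at a time: each trained RBM's hidden
        activations become the data for the next layer's RBM."""
        layers, x = [], data
        for n_hidden in layer_sizes:
            W, b_h = train_rbm(x, n_hidden)
            layers.append((W, b_h))
            x = sigmoid(x @ W + b_h)
        return layers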
Dissertation
Learning Multiple Layers of Features from Tiny Images
TL;DR: This dissertation describes how to train a multi-layer generative model of natural images using a dataset of millions of tiny colour images.