Open Access · Posted Content

Distilling the Knowledge in a Neural Network

TLDR
This work shows that the acoustic model of a heavily used commercial system can be significantly improved by distilling the knowledge in an ensemble of models into a single model, and it introduces a new type of ensemble composed of one or more full models and many specialist models that learn to distinguish fine-grained classes that the full models confuse.
Abstract
A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.
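As a concrete illustration of the compression technique the abstract describes, the following is a minimal sketch of a distillation-style training objective, assuming PyTorch; the temperature, mixing weight, and toy logits are illustrative choices rather than values taken from the paper.

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
        """Blend of soft-target matching and ordinary cross-entropy (hedged sketch)."""
        # Soften both distributions with a temperature-scaled softmax.
        soft_targets = F.softmax(teacher_logits / T, dim=1)
        log_student = F.log_softmax(student_logits / T, dim=1)
        # KL divergence between the softened teacher and student distributions;
        # the T*T factor keeps the soft-target gradients on a comparable scale as T varies.
        soft_loss = F.kl_div(log_student, soft_targets, reduction="batchmean") * (T * T)
        # Standard cross-entropy against the hard labels.
        hard_loss = F.cross_entropy(student_logits, labels)
        return alpha * soft_loss + (1.0 - alpha) * hard_loss

    # Toy usage with random logits for a 10-class problem.
    student_logits = torch.randn(4, 10)
    teacher_logits = torch.randn(4, 10)  # stand-in for the averaged predictions of an ensemble
    labels = torch.tensor([3, 1, 7, 0])
    print(distillation_loss(student_logits, teacher_logits, labels))

In practice the teacher distribution would come from the trained ensemble and the student would be the single deployable model; here both are random tensors purely to keep the snippet self-contained and runnable.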


Citations
Journal Article

Image Super-Resolution as a Defense Against Adversarial Attacks

TL;DR: In this article, deep image restoration networks learn mapping functions that bring off-the-manifold adversarial samples back onto the natural image manifold, restoring classification to the correct classes.
Proceedings Article

Improved Techniques for Training Adaptive Deep Networks

TL;DR: This paper considers a typical adaptive deep network with multiple intermediate classifiers and presents three techniques to improve its training: a Gradient Equilibrium algorithm that resolves conflicts between the learning objectives of the different classifiers, and an Inline Subnetwork Collaboration approach together with a One-for-all Knowledge Distillation algorithm that enhance collaboration among the classifiers.
Proceedings Article

Neural Compatibility Modeling with Attentive Knowledge Distillation

TL;DR: This paper presents a neural compatibility modeling scheme with attentive knowledge distillation, based on a teacher-student network, for complementary clothing matching, integrating advanced deep neural networks with rich fashion domain knowledge.
Proceedings Article

Pretrained Language Models for Biomedical and Clinical Tasks: Understanding and Extending the State-of-the-Art

TL;DR: A large-scale study across 18 established biomedical and clinical NLP tasks determines which of several popular open-source biomedical and clinical NLP models work well in different settings, and applies recent advances in pretraining to train new biomedical language models.
Posted Content

Deep learning in bioinformatics: introduction, application, and perspective in big data era

TL;DR: This review provides an accessible introduction to deep learning together with concrete examples and implementations of its representative applications in bioinformatics.
References
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: The authors achieve state-of-the-art ImageNet classification performance with a deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
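To make the five-convolutional-layer, three-fully-connected-layer structure described in this entry concrete, here is a minimal sketch in PyTorch; the channel counts, kernel sizes, and 224x224 input follow the commonly published AlexNet-style configuration and are illustrative, not the authors' original implementation.

    import torch
    import torch.nn as nn

    # Five convolutional layers (some followed by max-pooling) and three
    # fully-connected layers ending in a 1000-way output, for 3x224x224 input.
    alexnet_like = nn.Sequential(
        nn.Conv2d(3, 96, kernel_size=11, stride=4, padding=2), nn.ReLU(),
        nn.MaxPool2d(kernel_size=3, stride=2),
        nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(),
        nn.MaxPool2d(kernel_size=3, stride=2),
        nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(kernel_size=3, stride=2),
        nn.Flatten(),
        nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
        nn.Linear(4096, 4096), nn.ReLU(),
        nn.Linear(4096, 1000),  # the final 1000-way softmax is applied inside the loss
    )

    logits = alexnet_like(torch.randn(1, 3, 224, 224))
    print(logits.shape)  # torch.Size([1, 1000])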
Journal Article

Dropout: a simple way to prevent neural networks from overfitting

TL;DR: It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
Journal Article

Deep Neural Networks for Acoustic Modeling in Speech Recognition: The Shared Views of Four Research Groups

TL;DR: This article provides an overview of progress and represents the shared views of four research groups that have had recent successes in using DNNs for acoustic modeling in speech recognition.
Posted Content

Improving neural networks by preventing co-adaptation of feature detectors

TL;DR: The authors randomly omit half of the feature detectors on each training case to prevent complex co-adaptations in which a feature detector is only helpful in the context of several other specific feature detectors.
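A minimal sketch of the random omission summarized above, assuming PyTorch; the layer size is a hypothetical example, and the 0.5 drop rate mirrors the "half of the feature detectors" description.

    import torch

    torch.manual_seed(0)
    hidden = torch.randn(32, 128)           # hypothetical hidden-layer activations for a mini-batch
    keep = torch.rand_like(hidden) > 0.5    # independently keep each feature detector with prob 0.5
    dropped = hidden * keep                 # omitted detectors contribute nothing on this training case
    # At test time the paper uses all detectors but halves their outgoing weights,
    # so the expected input to the next layer matches what was seen during training.
    print(dropped.count_nonzero().item(), "of", dropped.numel(), "activations kept")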
Book Chapter

Ensemble Methods in Machine Learning

TL;DR: Some previous studies comparing ensemble methods are reviewed, and new experiments are presented to uncover the reasons why AdaBoost does not rapidly overfit.