Open Access Proceedings Article

Discriminative Autoencoder

TL;DR
The proposed discriminative autoencoder outperforms state-of-the-art representation-learning tools in classification accuracy on the breast cancer histopathology image sets MITOS and AMIDA and on several benchmark image datasets.
Abstract
Cross-dataset classification (where a classifier trained on an annotated image set A is used to test similar images from a set B that lacks training images) is important for many classification problems, especially in biomedical imaging. We propose a discriminative autoencoder that addresses this challenge. Our autoencoder learns an encoder and decoder such that the distances between representations of the same class are minimized while the distances between representations of different classes are maximized. We derive a fast algorithm that solves this problem with the augmented-Lagrangian Alternating Direction Method of Multipliers (ADMM), a faster alternative to the back-propagation used in standard autoencoders. The proposed method outperforms state-of-the-art representation-learning tools in classification accuracy on the breast cancer histopathology image sets MITOS and AMIDA and on several benchmark image datasets.
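The objective described in the abstract can be sketched with a toy example. The code below is an illustrative simplification, not the paper's method: it assumes a linear encoder/decoder, synthetic two-class Gaussian data in place of the histopathology images, a hinge on cross-class code distances so the "maximize" term stays bounded, and plain gradient descent in place of the paper's ADMM solver.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in data: two Gaussian classes in 10-D (purely illustrative).
n_per, d_in, d_code = 20, 10, 3
X = np.vstack([rng.normal(0.0, 1.0, (n_per, d_in)),
               rng.normal(1.5, 1.0, (n_per, d_in))])
X = (X - X.mean(0)) / X.std(0)               # standardise features
y = np.repeat([0, 1], n_per)
n = len(y)

same = (y[:, None] == y[None, :]).astype(float)
diff = 1.0 - same

W_enc = rng.normal(0, 0.1, (d_in, d_code))   # encoder weights
W_dec = rng.normal(0, 0.1, (d_code, d_in))   # decoder weights
lam, lr, margin = 0.1, 0.1, 4.0

for _ in range(200):
    Z = X @ W_enc                            # codes
    R = Z @ W_dec - X                        # reconstruction residual
    # Gradients of the reconstruction term mean(R**2).
    g_dec = (2.0 / R.size) * Z.T @ R
    g_Z = (2.0 / R.size) * R @ W_dec.T
    # Pairwise squared distances between codes.
    D = ((Z[:, None] - Z[None, :]) ** 2).sum(-1)
    # Pull same-class codes together; push different-class codes apart
    # until they clear the margin (the hinge keeps the push bounded).
    active = diff * (D < margin)
    Wq = same / same.sum() - active / diff.sum()
    # For a symmetric weight matrix Wq, the gradient of
    # sum_ij Wq_ij * ||z_i - z_j||^2 w.r.t. z_i is 4 * sum_j Wq_ij (z_i - z_j).
    s = Wq.sum(1, keepdims=True)
    g_Z += lam * 4.0 * (s * Z - Wq @ Z)
    W_enc -= lr * (X.T @ g_Z)
    W_dec -= lr * g_dec

Z = X @ W_enc
D = ((Z[:, None] - Z[None, :]) ** 2).sum(-1)
intra = D[same == 1].mean()
inter = D[same == 0].mean()
print(f"intra={intra:.3f}  inter={inter:.3f}")
```

On this toy data the mean intra-class code distance ends up below the mean inter-class distance, which is the property the discriminative term is designed to enforce; the paper obtains it via an ADMM decomposition rather than the gradient steps used here.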


Citations
Journal Article

Discriminative ensemble learning for few-shot chest x-ray diagnosis.

TL;DR: The proposed discriminative ensemble learning method for few-shot diagnosis of diseases and conditions from chest x-rays is modular and easily adaptable to new tasks, requiring the training of only the saliency-based classifier.
Journal Article

Learning image features with fewer labels using a semi-supervised deep convolutional network

TL;DR: A novel semi-supervised training strategy comprising a convolutional network and an autoencoder with a joint classification and reconstruction loss is presented, and the learned feature embedding is shown to improve when unlabelled data are included in training.
Journal Article

DSAE-Impute: Learning Discriminative Stacked Autoencoders for Imputing Single-cell RNA-seq Data

TL;DR: DSAE-Impute embeds discriminative cell similarity to refine the feature representation of stacked autoencoders and learns the scRNA-seq expression patterns through layer-by-layer training to achieve accurate imputation.
Patent

Learning method, learning program, and learning device

TL;DR: In this article, a learning device is configured to inherit and reuse the features of augmented training data in order to improve the accuracy of a deep learning model.
References
Journal Article

Random Forests

TL;DR: Internal estimates monitor error, strength, and correlation; these are used to show the response to increasing the number of features used in the forest, and the method is also applicable to regression.
Journal Article

Gradient-based learning applied to document recognition

TL;DR: In this article, a graph transformer network (GTN) is proposed that can synthesize a complex decision surface able to classify high-dimensional patterns such as handwritten characters.
Journal Article

Learning representations by back-propagating errors

TL;DR: Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector, which helps to represent important features of the task domain.
Book

Distributed Optimization and Statistical Learning Via the Alternating Direction Method of Multipliers

TL;DR: It is argued that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas.
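The ADMM update pattern this monograph describes is easiest to see on a small worked problem. The sketch below is illustrative only, not the paper's discriminative-autoencoder derivation: it applies ADMM to the lasso, minimizing 0.5*||Ax - b||^2 + lam*||x||_1 by splitting the smooth and l1 parts across variables x and z with the consensus constraint x = z.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic sparse regression problem (illustrative data).
m, d, lam, rho = 30, 10, 0.1, 1.0
A = rng.normal(size=(m, d))
x_true = np.zeros(d)
x_true[:3] = [1.0, -2.0, 0.5]                # only 3 nonzero coefficients
b = A @ x_true + 0.01 * rng.normal(size=m)

x = np.zeros(d)
z = np.zeros(d)                              # carries the l1 part
u = np.zeros(d)                              # scaled dual variable
P = np.linalg.inv(A.T @ A + rho * np.eye(d)) # cache the x-update solve

for _ in range(200):
    x = P @ (A.T @ b + rho * (z - u))        # x-update: ridge-like solve
    v = x + u
    z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # soft threshold
    u = u + x - z                            # dual update on the gap x - z

print(np.round(z, 2))
```

Each sub-step is cheap and closed-form, which is the property that makes ADMM attractive as an alternative to back-propagation in the discriminative autoencoder above: the coupled objective splits into simpler per-variable updates.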
Journal Article

Reducing the Dimensionality of Data with Neural Networks

TL;DR: In this article, an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data is described.