Book ChapterDOI

Associative Deep Clustering - Training a Classification Network with no Labels

TLDR
In this article, the authors propose an end-to-end clustering training schedule for neural networks that is direct, i.e., the output is a probability distribution over cluster memberships.
Abstract
We propose a novel end-to-end clustering training schedule for neural networks that is direct, i.e., the output is a probability distribution over cluster memberships. A neural network maps images to embeddings. We introduce centroid variables that have the same shape as image embeddings. These variables are jointly optimized with the network’s parameters. This is achieved by a cost function that associates the centroid variables with embeddings of input images. Finally, an additional layer maps embeddings to logits, allowing for the direct estimation of the respective cluster membership. Unlike other methods, this does not require any additional classifier to be trained on the embeddings in a separate step. The proposed approach achieves state-of-the-art results in unsupervised classification and we provide an extensive ablation study to demonstrate its capabilities.
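As a rough illustration of the described setup, here is a minimal sketch assuming a PyTorch implementation. The hard-assignment association term is a simplified stand-in for the paper's cost function, and all names (embed_net, centroids, to_logits) and shapes are hypothetical, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

K, D = 10, 64                                   # clusters, embedding dimension

embed_net = nn.Sequential(                      # maps images to D-dim embeddings
    nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU(), nn.Linear(256, D)
)
centroids = nn.Parameter(torch.randn(K, D))     # centroid variables, same shape as embeddings
to_logits = nn.Linear(D, K)                     # additional layer: embeddings -> logits

# Centroid variables are optimized jointly with the network's parameters.
opt = torch.optim.Adam(
    list(embed_net.parameters()) + [centroids] + list(to_logits.parameters()), lr=1e-3
)

def train_step(images):
    z = embed_net(images)                       # embeddings (B, D)
    sim = z @ centroids.t()                     # embedding-centroid similarities (B, K)
    target = sim.argmax(dim=1).detach()         # hard association with nearest centroid
    assoc_loss = F.cross_entropy(sim, target)   # associate centroids with embeddings
    logits = to_logits(z)
    cls_loss = F.cross_entropy(logits, target)  # train the direct cluster-membership output
    loss = assoc_loss + cls_loss
    opt.zero_grad(); loss.backward(); opt.step()
    return F.softmax(logits, dim=1)             # probability distribution over clusters
```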


Citations
Proceedings ArticleDOI

Invariant Information Clustering for Unsupervised Image Classification and Segmentation

TL;DR: IIC learns a neural network classifier from scratch, given only unlabeled data samples, and achieves state-of-the-art results across eight unsupervised clustering benchmarks spanning image classification and segmentation.
Posted Content

Contrastive Clustering

TL;DR: A one-stage online clustering method called Contrastive Clustering (CC) that explicitly performs instance- and cluster-level contrastive learning and remarkably outperforms 17 competitive clustering methods on six challenging image benchmarks.
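The two levels can be read as contrasting the rows (instances) of the feature matrix and the columns (clusters) of the soft-assignment matrix. A hedged sketch of that reading, assuming an NT-Xent-style loss; this is not the authors' reference implementation:

```python
import torch
import torch.nn.functional as F

def nt_xent(a, b, tau=0.5):
    """Contrast each row of `a` with the matching row of `b` against all others."""
    a, b = F.normalize(a, dim=1), F.normalize(b, dim=1)
    logits = a @ b.t() / tau                 # pairwise similarities
    targets = torch.arange(a.size(0))        # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

def contrastive_clustering_loss(z1, z2, p1, p2):
    # z1, z2: instance features of two augmented views, shape (B, D)
    # p1, p2: soft cluster assignments of the two views, shape (B, K)
    instance_level = nt_xent(z1, z2)         # contrast rows (instances)
    cluster_level = nt_xent(p1.t(), p2.t())  # contrast columns (clusters)
    return instance_level + cluster_level
```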
Posted Content

SCAN: Learning to Classify Images without Labels

TL;DR: This paper deviates from recent works and advocates a two-step approach in which feature learning and clustering are decoupled; it achieves promising results on ImageNet and outperforms several semi-supervised learning methods in the low-data regime without using any ground-truth annotations.
Proceedings Article

Stacked capsule autoencoders

TL;DR: This work introduces the Stacked Capsule Autoencoder (SCAE), an unsupervised capsule autoencoder that explicitly uses geometric relationships between parts to reason about objects; object-capsule presences turn out to be highly informative of the object class, leading to state-of-the-art results for unsupervised classification on SVHN and MNIST.
Book ChapterDOI

SCAN: Learning to Classify Images Without Labels

TL;DR: This paper proposes a two-step approach in which feature learning and clustering are decoupled, using the obtained semantically meaningful features as a prior in a learnable clustering approach.
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors propose a residual learning framework that eases the training of networks substantially deeper than those used previously; the resulting networks won 1st place in the ILSVRC 2015 classification task.
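The central mechanism is that each block learns a residual function F(x) and outputs F(x) + x through an identity shortcut. A minimal PyTorch-style sketch, assuming equal input and output channel counts for brevity:

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))  # the block learns the residual F(x)
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)                 # identity shortcut: output is F(x) + x
```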
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
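Concretely, the method keeps exponential moving averages of the gradient and its elementwise square (the lower-order moments), corrects their initialization bias, and takes a per-parameter scaled step. A plain-NumPy sketch of a single update, with hyperparameter defaults as in the paper:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad        # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2   # second-moment (uncentered variance) estimate
    m_hat = m / (1 - b1 ** t)           # bias correction for zero initialization
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)  # per-parameter scaled step
    return theta, m, v
```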
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: This paper achieves state-of-the-art ImageNet classification with a deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
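Read literally, that stack can be sketched as below, assuming 227×227 RGB inputs; details such as the original two-GPU split and local response normalization are omitted:

```python
import torch.nn as nn

alexnet_style = nn.Sequential(
    # five convolutional layers, some followed by max-pooling
    nn.Conv2d(3, 96, 11, stride=4), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Conv2d(96, 256, 5, padding=2), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Conv2d(256, 384, 3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, 3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, 3, padding=1), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Flatten(),
    # three fully-connected layers
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(), nn.Dropout(),
    nn.Linear(4096, 4096), nn.ReLU(), nn.Dropout(),
    nn.Linear(4096, 1000),   # logits for the final 1000-way softmax
)
```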
Proceedings ArticleDOI

Going deeper with convolutions

TL;DR: Inception is a deep convolutional neural network architecture that set a new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Journal ArticleDOI

Generative Adversarial Nets

TL;DR: A new framework for estimating generative models via an adversarial process, in which two models are trained simultaneously: a generative model G that captures the data distribution and a discriminative model D that estimates the probability that a sample came from the training data rather than from G.
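In code, the adversarial process alternates a discriminator update and a generator update. The sketch below uses the common non-saturating generator loss rather than the original minimax form, and the tiny fully-connected networks are placeholders:

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))  # generator: noise z -> sample
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))   # discriminator: sample -> logit
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def gan_step(real):                                # real: data batch of shape (B, 2)
    b = real.size(0)
    fake = G(torch.randn(b, 16))
    # D learns to score real samples high and generated samples low
    d_loss = bce(D(real), torch.ones(b, 1)) + bce(D(fake.detach()), torch.zeros(b, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # G learns to make D classify its samples as coming from the training data
    g_loss = bce(D(fake), torch.ones(b, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```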