Open Access · Posted Content

Self-supervised Pretraining of Visual Features in the Wild

TLDR
Recently, self-supervised learning methods like MoCo, SimCLR, BYOL and SwAV have reduced the gap with supervised methods, as mentioned in this paper, but only on the highly curated ImageNet dataset; this work tests the premise that self-supervised learning can learn from any random image and from any unbounded dataset.
Abstract
Recently, self-supervised learning methods like MoCo, SimCLR, BYOL and SwAV have reduced the gap with supervised methods. These results have been achieved in a controlled environment, that is, the highly curated ImageNet dataset. However, the premise of self-supervised learning is that it can learn from any random image and from any unbounded dataset. In this work, we explore whether self-supervision lives up to its expectations by training large models on random, uncurated images with no supervision. Our final SElf-supERvised (SEER) model, a RegNetY with 1.3B parameters trained on 1B random images with 512 GPUs, achieves 84.2% top-1 accuracy, surpassing the best self-supervised pretrained model by 1% and confirming that self-supervised learning works in a real world setting. Interestingly, we also observe that self-supervised models are good few-shot learners, achieving 77.9% top-1 with access to only 10% of ImageNet. Code: this https URL
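
The abstract names SwAV among the methods SEER builds on; the SEER paper pretrains with the SwAV swapped-prediction objective. As a minimal sketch of that loss, assuming L2-normalized embeddings and prototypes (function names and hyperparameters below are illustrative, not the official VISSL implementation):

import torch
import torch.nn.functional as F

@torch.no_grad()
def sinkhorn(scores, eps=0.05, iters=3):
    """Turn prototype scores into soft cluster assignments (Sinkhorn-Knopp)."""
    q = torch.exp(scores / eps).t()               # prototypes x batch
    q /= q.sum()
    n_protos, n_batch = q.shape
    for _ in range(iters):
        q /= q.sum(dim=1, keepdim=True); q /= n_protos   # normalize rows
        q /= q.sum(dim=0, keepdim=True); q /= n_batch    # normalize columns
    return (q * n_batch).t()                      # batch x prototypes

def swav_loss(z1, z2, prototypes, temp=0.1):
    """Swapped prediction: each view's assignment supervises the other view."""
    p1, p2 = z1 @ prototypes.t(), z2 @ prototypes.t()   # scores per view
    q1, q2 = sinkhorn(p1), sinkhorn(p2)                 # detached assignments
    l1 = -(q2 * F.log_softmax(p1 / temp, dim=1)).sum(dim=1).mean()
    l2 = -(q1 * F.log_softmax(p2 / temp, dim=1)).sum(dim=1).mean()
    return 0.5 * (l1 + l2)

Because the Sinkhorn assignments are computed under no_grad, only the swapped cross-entropy terms backpropagate, which is what lets the objective scale to uncurated data without labels.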


Citations
Journal Article · DOI

Self-supervised Learning: Generative or Contrastive.

TL;DR: This survey looks into new self-supervised learning methods for representation learning in computer vision, natural language processing, and graph learning, and comprehensively reviews the existing empirical methods in three main categories according to their objectives.
Posted Content

Emerging Properties in Self-Supervised Vision Transformers

TL;DR: In this paper, the authors show that self-supervised learning provides Vision Transformers (ViT) with properties that stand out compared to convolutional networks (convnets). Beyond the fact that adapting self-supervised methods to this architecture works particularly well, they make the following observations: first, self-supervised ViT features contain explicit information about the semantic segmentation of an image, which does not emerge as clearly with supervised ViTs, nor with convnets.
Journal Article · DOI

Artificial intelligence and machine learning for medical imaging: A technology review.

TL;DR: Artificial intelligence (AI) has recently become a very popular buzzword as a consequence of disruptive technical advances and impressive experimental results, notably in the field of image analysis and processing, as discussed by the authors.
Journal Article · DOI

Review on self-supervised image recognition using deep neural networks

TL;DR: Self-supervised learning, as discussed by the authors, is a form of unsupervised deep learning that allows the network to learn rich visual features that help in performing downstream computer vision tasks such as image classification, object detection, and image segmentation.
Posted Content · DOI

Self-Supervised Deep-Learning Encodes High-Resolution Features of Protein Subcellular Localization

TL;DR: In this article, a deep learning-based approach for fully self-supervised protein localization profiling and clustering is presented, which does not require pre-existing knowledge, categories, or annotations.
References
Posted Content

Fixing the train-test resolution discrepancy: FixEfficientNet

TL;DR: The train-test resolution fix is advantageously combined with recent training recipes from the literature; with the same number of parameters, the resulting FixEfficientNet significantly outperforms the initial architecture and establishes a new state of the art for ImageNet with a single crop.
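
The strategy referenced here trains at a lower resolution and then briefly fine-tunes at the test resolution so apparent object sizes match. A minimal sketch under that reading, with illustrative resolutions and an EfficientNet-B0 stand-in (not the paper's exact model or schedule):

import torchvision
from torchvision import transforms

# Train-time crops at 224 px; test-time pipeline at a higher
# resolution (320 px here, illustrative), where objects appear larger.
train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.ToTensor(),
])
test_tf = transforms.Compose([
    transforms.Resize(356),
    transforms.CenterCrop(320),
    transforms.ToTensor(),
])

model = torchvision.models.efficientnet_b0(weights=None)
# 1) Pretrain with train_tf at 224x224 as usual.
# 2) Fine-tune briefly with test_tf at 320x320, typically freezing
#    the trunk and updating only the final layers:
for p in model.features.parameters():
    p.requires_grad = False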
Proceedings Article · DOI

ClusterFit: Improving Generalization of Visual Representations

TL;DR: ClusterFit, as mentioned in this paper, improves the robustness of the visual representations learned during pre-training: features from a pre-trained network are clustered with k-means on a new dataset, and the network is re-trained using the cluster assignments as pseudo-labels.
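
Since the TL;DR describes a concrete two-step recipe, a minimal sketch of the pseudo-labeling step may help; the function name, k, and the scikit-learn k-means choice are illustrative assumptions, not the paper's implementation:

import numpy as np
from sklearn.cluster import KMeans

def pseudo_labels_from_features(features, k=1000, seed=0):
    """Cluster pre-trained features; each image's pseudo-label is its cluster id."""
    km = KMeans(n_clusters=k, random_state=seed, n_init=10)
    return km.fit_predict(features)           # shape: (num_images,)

# Usage: extract `features` (num_images x dim) with the pre-trained
# network, then train a network on (image, pseudo_label) pairs with
# standard cross-entropy, per the ClusterFit recipe described above.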
Posted Content

Milking CowMask for Semi-Supervised Image Classification

TL;DR: A novel mask-based augmentation method called CowMask is presented and used to provide perturbations for semi-supervised consistency regularization, achieving a state-of-the-art result on ImageNet with 10% labeled data.
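
A rough sketch of how such a cow-patterned mask can be generated, assuming a smoothed-noise-plus-threshold construction; sigma, mask_frac, and the SciPy-based implementation are illustrative, not the paper's exact recipe:

import numpy as np
from scipy.ndimage import gaussian_filter

def cow_mask(shape, sigma=8.0, mask_frac=0.5, rng=None):
    """Smooth Gaussian noise, then threshold so ~mask_frac of pixels are masked."""
    rng = rng or np.random.default_rng()
    noise = gaussian_filter(rng.standard_normal(shape), sigma)
    thresh = np.quantile(noise, mask_frac)    # mask_frac of pixels fall below
    return (noise > thresh).astype(np.float32)

# For consistency regularization, mix two unlabeled images with the
# mask and train the model to produce the matching mix of predictions
# (mask has shape (H, W); [..., None] broadcasts over HWC channels):
# mixed = mask[..., None] * img_a + (1.0 - mask[..., None]) * img_b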