Open Access · Posted Content
Mine Your Own vieW: Self-Supervised Learning Through Across-Sample Prediction
Mehdi Azabou, Mohammad Gheshlaghi Azar, Ran Liu, Chi-Heng Lin, Erik C. B. Johnson, Kiran Bhaskaran-Nair, Max Dabagia, Bernardo Avila-Pires, Lindsey Kitchell, Keith B. Hengen, William Gray-Roncal, Michal Valko, Eva L. Dyer
TLDR
Mine Your Own vieW (MYOW) is a self-supervised learning approach that looks within the dataset to define diverse targets for prediction by mining views: it finds samples that are neighbors in the representation space of the network and then predicts, from one sample's latent representation, the representation of a nearby sample.
Abstract:
State-of-the-art methods for self-supervised learning (SSL) build representations by maximizing the similarity between different transformed "views" of a sample. Without sufficient diversity in the transformations used to create views, however, it can be difficult to overcome nuisance variables in the data and build rich representations. This motivates the use of the dataset itself to find similar, yet distinct, samples to serve as views for one another. In this paper, we introduce Mine Your Own vieW (MYOW), a new approach for self-supervised learning that looks within the dataset to define diverse targets for prediction. The idea behind our approach is to actively mine views, finding samples that are neighbors in the representation space of the network, and then predict, from one sample's latent representation, the representation of a nearby sample. After showing the promise of MYOW on benchmarks used in computer vision, we highlight the power of this idea in a novel application in neuroscience where SSL has yet to be applied. When tested on multi-unit neural recordings, we find that MYOW outperforms other self-supervised approaches in all examples (in some cases by more than 10%), and often surpasses the supervised baseline. With MYOW, we show that it is possible to harness the diversity of the data to build rich views and leverage self-supervision in new domains where augmentations are limited or unknown.
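The view-mining step described in the abstract can be illustrated with a minimal numpy sketch: embed a batch, find each sample's nearest neighbors in the representation space, and pick one neighbor to serve as the mined view. This is an assumption-laden toy, not the authors' implementation; the function name `mine_views` and the choice of cosine similarity are hypothetical.

```python
import numpy as np

def mine_views(embeddings, k=3):
    """For each sample, find its k nearest neighbors in the network's
    representation space (cosine similarity) and return one randomly
    chosen neighbor index to serve as the mined view. Hypothetical sketch."""
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T                          # pairwise cosine similarities
    np.fill_diagonal(sim, -np.inf)         # a sample cannot be its own view
    knn = np.argsort(sim, axis=1)[:, -k:]  # indices of the k nearest neighbors
    rng = np.random.default_rng(0)
    picks = knn[np.arange(len(z)), rng.integers(0, k, size=len(z))]
    return picks

# toy batch: two tight clusters in 2-D
emb = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
views = mine_views(emb, k=1)
print(views)  # → [1 0 3 2]: each mined view comes from the sample's own cluster
```

In MYOW the mined neighbor's latent representation then becomes a prediction target for the sample's own latent, analogous to how augmented views are used in other SSL methods.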
Citations
Posted Content
With a Little Help from My Friends: Nearest-Neighbor Contrastive Learning of Visual Representations
TL;DR: Nearest-Neighbor Contrastive Learning of visual Representations (NNCLR) samples the nearest neighbors from the dataset in the latent space and treats them as positives, providing more semantic variation than pre-defined transformations.
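The nearest-neighbor positive lookup at the heart of NNCLR can be sketched as follows, under the assumption of a support queue of past embeddings; the helper name `nn_positive` is hypothetical and this is not the paper's actual code.

```python
import numpy as np

def nn_positive(batch_z, support_queue):
    """Swap each embedding's positive for its nearest neighbor (by cosine
    similarity) in a support queue of past embeddings, NNCLR-style."""
    b = batch_z / np.linalg.norm(batch_z, axis=1, keepdims=True)
    q = support_queue / np.linalg.norm(support_queue, axis=1, keepdims=True)
    nn_idx = (b @ q.T).argmax(axis=1)  # nearest support element per sample
    return support_queue[nn_idx]

queue = np.array([[1.0, 0.0], [0.0, 1.0]])   # past embeddings
batch = np.array([[0.8, 0.2], [0.1, 0.9]])   # current batch embeddings
positives = nn_positive(batch, queue)
print(positives)  # each row is the queue entry nearest to the batch sample
```

The returned rows then replace the usual augmentation-derived positives in the contrastive loss.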
Posted Content · DOI
Drop, Swap, and Generate: A Self-Supervised Approach for Generating Neural Activity
Ran Liu, Mehdi Azabou, Max Dabagia, Chi-Heng Lin, Mohammad Gheshlaghi Azar, Keith B. Hengen, Michal Valko, Eva L. Dyer
TL;DR: Swap-VAE is an unsupervised approach for learning disentangled representations of neural activity; it combines a generative modeling framework with an instance-specific alignment loss that tries to maximize the representational similarity between transformed views of the input (brain state).
Posted Content · DOI
Parallel inference of hierarchical latent dynamics in two-photon calcium imaging of neuronal populations
TL;DR: VaLPACa uses variational ladder autoencoders to disentangle deeper- and shallower-level dynamics, incorporating a ladder architecture that can infer a hierarchy of dynamical systems.
Posted Content
On Feature Decorrelation in Self-Supervised Learning
TL;DR: The authors verify the existence of complete collapse and identify another, usually overlooked, reachable collapse pattern, namely dimensional collapse; they connect dimensional collapse with strong correlations between axes and take this connection as a strong motivation for feature decorrelation.
Posted Content
Unsupervised Object-Level Representation Learning from Scene Images
TL;DR: Zhang et al. leverage image-level self-supervised pre-training as a prior to discover object-level semantic correspondence, thereby realizing object-level representation learning from scene images.
References
Proceedings ArticleDOI
Deep Residual Learning for Image Recognition
TL;DR: The authors propose a residual learning framework to ease the training of networks substantially deeper than those used previously; the approach won 1st place on the ILSVRC 2015 classification task.
Dissertation
Learning Multiple Layers of Features from Tiny Images
TL;DR: The authors describe how to train a multi-layer generative model of natural images using a dataset of millions of tiny colour images.
Posted Content
A Simple Framework for Contrastive Learning of Visual Representations
TL;DR: It is shown that the composition of data augmentations plays a critical role in defining effective predictive tasks, that introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and that contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning.
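The contrastive objective this line of work builds on, the normalized-temperature cross-entropy (NT-Xent) loss over pairs of augmented views, can be written as a minimal numpy sketch. This is an illustrative re-derivation, not the paper's code; the pairing convention (row i pairs with row i+n) is an assumption.

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent loss over two batches of augmented-view embeddings:
    each embedding should be most similar to its partner view among
    all 2n candidates, measured by temperature-scaled cosine similarity."""
    z = np.concatenate([z1, z2])
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / tau
    n = len(z1)
    np.fill_diagonal(sim, -np.inf)  # exclude self-similarity from the softmax
    # the positive partner of row i is row (i + n) mod 2n
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logits = sim - sim.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()

z1 = np.array([[1.0, 0.0], [0.0, 1.0]])   # view 1 of two samples
z2 = np.array([[0.9, 0.1], [0.1, 0.9]])   # view 2 of the same samples
loss = nt_xent(z1, z2)
print(loss)  # small positive value: positives are already near their partners
```

Lowering `tau` sharpens the softmax, penalizing hard negatives more strongly; this is one of the temperature effects studied in the contrastive-learning literature.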
Posted Content
Representation Learning with Contrastive Predictive Coding
TL;DR: This work proposes a universal unsupervised learning approach to extract useful representations from high-dimensional data, which it calls Contrastive Predictive Coding, and demonstrates that the approach is able to learn useful representations achieving strong performance on four distinct domains: speech, images, text and reinforcement learning in 3D environments.
Journal ArticleDOI
Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects.
Rajesh P. N. Rao, Dana H. Ballard
TL;DR: Results suggest that rather than being exclusively feedforward phenomena, nonclassical surround effects in the visual cortex may also result from cortico-cortical feedback as a consequence of the visual system using an efficient hierarchical strategy for encoding natural images.