Open Access Journal Article

Unsupervised learning of invariant representations

TLDR
The theory offers novel unsupervised learning algorithms for "deep" architectures for image and speech recognition, and it conjectures that the main computational goal of the ventral stream of visual cortex is to provide a hierarchical representation of new objects/images which is invariant to transformations, stable, and selective for recognition.
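As a rough illustration of the idea summarized above (not the paper's exact algorithm), an invariant yet selective signature can be built by taking a signal's dot products with stored templates and all of their transformed versions, then pooling over the transformations. The toy transformation group, template count, and pooled statistics below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def shift(x, k):
    """Circularly shift a 1-D signal by k samples (a toy transformation group)."""
    return np.roll(x, k)

# Illustrative setup: random 1-D "templates"; the group is all circular shifts.
signal_len, n_templates = 32, 8
templates = rng.standard_normal((n_templates, signal_len))
shifts = range(signal_len)

def signature(x):
    """Pool dot products of x with every shifted version of every template.

    Shifting x only permutes the set of dot products with a template's orbit,
    so permutation-insensitive statistics (mean, max) give a shift-invariant
    yet template-dependent (hence selective) description of x.
    """
    feats = []
    for t in templates:
        dots = np.array([x @ shift(t, k) for k in shifts])
        feats.extend([dots.mean(), dots.max()])
    return np.array(feats)

x = rng.standard_normal(signal_len)
print(np.allclose(signature(x), signature(shift(x, 5))))  # True: shift-invariant
```

Because shifting the input only permutes the set of dot products with each template's orbit, any pooling that ignores ordering yields the same signature for the original and the shifted signal.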
About
This article was published in Theoretical Computer Science on 2016-06-20 and is currently open access. It has received 104 citations to date and focuses on the topics of Unsupervised learning and Semi-supervised learning.


Citations
Journal Article

Building machines that learn and think like people.

TL;DR: This review of recent progress in cognitive science suggests that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn and how they learn it.
Book Chapter

Safety Verification of Deep Neural Networks

TL;DR: A novel automated verification framework for feed-forward multi-layer neural networks based on Satisfiability Modulo Theory (SMT) is developed, which defines safety for an individual decision in terms of invariance of the classification within a small neighbourhood of the original image.
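The safety property described in this summary, invariance of the classification throughout a small neighbourhood of the input, can be illustrated with a simple sampling-based check. This is only a sketch of the property under assumed names; the paper's framework instead performs an exhaustive, SMT-based search over discretized manipulations, so it can prove safety rather than merely fail to find a counterexample.

```python
import numpy as np

def decision_is_invariant(classify, x, epsilon, n_samples=1000, seed=0):
    """Empirically check that classify() returns the same label at every sampled
    point of the L-infinity ball of radius epsilon around x (inputs kept in [0, 1]).

    False is a genuine counterexample; True only means none was found among the
    samples, which is weaker than the exhaustive guarantee an SMT search gives.
    """
    rng = np.random.default_rng(seed)
    label = classify(x)
    for _ in range(n_samples):
        perturbed = np.clip(x + rng.uniform(-epsilon, epsilon, size=x.shape), 0.0, 1.0)
        if classify(perturbed) != label:
            return False
    return True

# Usage with a toy linear classifier (purely an assumption for illustration).
w, b = np.array([1.0, -2.0]), 0.1
toy_classify = lambda v: int(w @ v + b > 0)
print(decision_is_invariant(toy_classify, np.array([0.5, 0.2]), epsilon=0.05))  # True
```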
Journal Article

Unmasking Clever Hans Predictors and Assessing What Machines Really Learn

TL;DR: The authors investigate how nonlinear machine learning methods arrive at their decisions in order to assess the dependability of their decision making, and propose a semi-automated Spectral Relevance Analysis that provides a practically effective way of characterizing and validating the behavior of nonlinear learning machines.
Journal Article

Neural scene representation and rendering

TL;DR: The Generative Query Network (GQN) is introduced, a framework within which machines learn to represent scenes using only their own sensors, demonstrating representation learning without human labels or domain knowledge.
Journal Article

Toward an Integration of Deep Learning and Neuroscience.

TL;DR: In this paper, the authors argue that a range of implementations of credit assignment through multiple layers of neurons are compatible with our current knowledge of neural circuitry, and that the brain's specialized systems can be interpreted as enabling efficient optimization for specific problem classes.
References
Journal Article

A logical calculus of the ideas immanent in nervous activity

TL;DR: In this article, it is shown that many particular choices among possible neurophysiological assumptions are equivalent, in the sense that for every net behaving under one assumption, there exists another net which behaves under another and gives the same results, although perhaps not in the same time.
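For concreteness, a McCulloch-Pitts unit outputs 1 exactly when the weighted sum of its binary inputs reaches its threshold; the sketch below (with illustrative weights) shows two differently parameterized units that realize the same logical function, in the spirit of the equivalence result summarized above.

```python
def mp_unit(inputs, weights, threshold):
    """McCulloch-Pitts neuron: fires (returns 1) iff the weighted sum of its
    binary inputs reaches the threshold."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

# Two differently parameterized units computing the same logical AND,
# illustrating that distinct nets can exhibit identical behaviour.
and_a = lambda x1, x2: mp_unit([x1, x2], weights=[1, 1], threshold=2)
and_b = lambda x1, x2: mp_unit([x1, x2], weights=[2, 2], threshold=4)

for x1 in (0, 1):
    for x2 in (0, 1):
        assert and_a(x1, x2) == and_b(x1, x2) == (x1 and x2)
print("both parameterizations implement AND")
```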
Journal Article

Receptive fields, binocular interaction and functional architecture in the cat's visual cortex

TL;DR: The authors use single-cell recording to examine receptive fields of a more complex type and to make additional observations on binocular interaction; this approach is necessary for understanding the behaviour of individual cells, but it does not address the relationship of one cell to its neighbours.
Journal Article

Backpropagation applied to handwritten zip code recognition

TL;DR: This paper demonstrates how constraints from the task domain can be integrated into a backpropagation network through the architecture of the network; the approach is successfully applied to the recognition of handwritten zip code digits provided by the U.S. Postal Service.
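The key architectural constraint in this line of work is weight sharing: one small set of weights is replicated across all image positions, drastically reducing the number of free parameters relative to a fully connected layer. Below is a minimal NumPy sketch of such a shared-weight (convolutional) map with illustrative sizes; the actual zip code network stacked several such layers, followed by fully connected layers, and was trained with backpropagation.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Apply one shared-weight (convolutional) filter at every valid position.

    Every output unit reuses the same kernel, encoding the prior that useful
    local features (edges, strokes) may appear anywhere in the digit image.
    """
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Illustrative sizes: a 16x16 input patch and a single 5x5 kernel.
rng = np.random.default_rng(0)
image = rng.standard_normal((16, 16))
kernel = rng.standard_normal((5, 5))
feature_map = np.tanh(conv2d_valid(image, kernel))         # 12x12 feature map
print(feature_map.shape, "free parameters in this map:", kernel.size)
```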
Journal Article

Emergence of simple-cell receptive field properties by learning a sparse code for natural images

TL;DR: It is shown that a learning algorithm that attempts to find sparse linear codes for natural scenes will develop a complete family of localized, oriented, bandpass receptive fields, similar to those found in the primary visual cortex.
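The learning principle summarized above can be phrased as minimizing reconstruction error plus a sparseness penalty on the coefficients. The sketch below shows only the inner step, inferring a sparse code for a fixed dictionary via iterative shrinkage-thresholding (ISTA, a later standard solver rather than the paper's original procedure); the full algorithm alternates such coding steps with updates of the basis functions over many natural image patches. Sizes and the random dictionary are assumptions for illustration.

```python
import numpy as np

def sparse_code(x, D, lam=0.1, n_iter=200):
    """Find coefficients a minimizing 0.5 * ||x - D a||^2 + lam * ||a||_1
    by iterative shrinkage-thresholding (ISTA); D has unit-norm columns."""
    L = np.linalg.norm(D, 2) ** 2                 # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)                  # gradient of the reconstruction term
        z = a - grad / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return a

# Toy usage with an assumed random overcomplete dictionary (not learned basis functions).
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)
patch = 2.0 * D[:, 3] + 0.01 * rng.standard_normal(64)   # roughly two times one column
a = sparse_code(patch, D, lam=0.05)
print("nonzero coefficients:", np.count_nonzero(np.abs(a) > 1e-3))
```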
Journal Article

Neocognitron: A Self Organizing Neural Network Model for a Mechanism of Pattern Recognition Unaffected by Shift in Position

TL;DR: A neural network model for a mechanism of visual pattern recognition that is self-organized by "learning without a teacher" and acquires the ability to recognize stimulus patterns based on the geometrical similarity of their shapes, unaffected by their positions.