On the relation of slow feature analysis and Laplacian eigenmaps
TLDR
It is shown that LEMs are closely related to slow feature analysis (SFA), a biologically inspired, unsupervised learning algorithm originally designed for learning invariant visual representations, and that SFA can be interpreted as a function approximation of LEMs, where the topological neighborhoods required for LEMs are implicitly defined by the temporal structure of the data.
Abstract:
The past decade has seen a rise of interest in Laplacian eigenmaps (LEMs) for nonlinear dimensionality reduction. LEMs have been used in spectral clustering, in semi-supervised learning, and for providing efficient state representations for reinforcement learning. Here, we show that LEMs are closely related to slow feature analysis (SFA), a biologically inspired, unsupervised learning algorithm originally designed for learning invariant visual representations. We show that SFA can be interpreted as a function approximation of LEMs, where the topological neighborhoods required for LEMs are implicitly defined by the temporal structure of the data. Based on this relation, we propose a generalization of SFA to arbitrary neighborhood relations and demonstrate its applicability for spectral clustering. Finally, we review previous work with the goal of providing a unifying view on SFA and LEMs.
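The core idea of the abstract — that SFA's temporal structure plays the role of the neighborhood graph in LEMs — can be illustrated with a minimal linear-SFA sketch. This is an illustrative toy (function name and details are my own, not from the paper): the input is whitened, and the slowest directions are found as the smallest-eigenvalue directions of the covariance of temporal differences, where consecutive samples act as implicit graph neighbors.

```python
import numpy as np

def linear_sfa(x, n_components=2):
    """Minimal linear Slow Feature Analysis sketch.

    x : array of shape (T, D), a multivariate time series.
    Returns the n_components slowest output signals.
    """
    # Center and whiten the input so the unit-variance constraint holds.
    x = x - x.mean(axis=0)
    cov = x.T @ x / len(x)
    eigval, eigvec = np.linalg.eigh(cov)
    keep = eigval > 1e-10                     # drop degenerate directions
    whitener = eigvec[:, keep] / np.sqrt(eigval[keep])
    z = x @ whitener

    # Temporal differences act like graph edges between consecutive
    # samples -- the implicit LEM neighborhood described in the abstract.
    dz = np.diff(z, axis=0)
    dcov = dz.T @ dz / len(dz)

    # Slowest features = smallest-eigenvalue directions of dcov
    # (np.linalg.eigh returns eigenvalues in ascending order).
    dval, dvec = np.linalg.eigh(dcov)
    return z @ dvec[:, :n_components]
```

On a linear mixture of a slow and a fast sinusoid, the first output recovers the slow source up to sign and scale.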
Citations
Proceedings Article
A Laplacian Framework for option discovery in reinforcement learning
TL;DR: This paper addresses the option discovery problem by showing how proto-value functions (PVFs) implicitly define options: it introduces eigenpurposes, intrinsic reward functions derived from the learned representations that traverse the principal directions of the state space.
Patent
Methods and apparatus for autonomous robotic control
TL;DR: The OpenSense technology as discussed by the authors uses neurally inspired processing to identify and locate objects in a robot's environment, which enables the robot to navigate its environment more quickly and with lower computational and power requirements.
Journal ArticleDOI
Learning Structures: Predictive Representations, Replay, and Generalization
TL;DR: Evidence showing that capturing structures as predictive representations updated via replay offers a neurally plausible account of human behavior and the neural representations of predictive cognitive maps is reviewed.
Proceedings Article
Eigenoption Discovery through the Deep Successor Representation
TL;DR: This paper proposes an algorithm that discovers eigenoptions while learning non-linear state representations from raw pixels, and exploits recent successes in the deep reinforcement learning literature and the equivalence between proto-value functions and the successor representation.
References
Journal ArticleDOI
Normalized cuts and image segmentation
Jianbo Shi, Jitendra Malik
TL;DR: This work treats image segmentation as a graph partitioning problem and proposes a novel global criterion, the normalized cut, for segmenting the graph, which measures both the total dissimilarity between the different groups as well as the total similarity within the groups.
Journal ArticleDOI
A tutorial on spectral clustering
TL;DR: In this article, the authors present the most common spectral clustering algorithms, derive them from scratch by several different approaches, and discuss the advantages and disadvantages of these algorithms.
Proceedings Article
On Spectral Clustering: Analysis and an algorithm
TL;DR: A simple spectral clustering algorithm that can be implemented using a few lines of Matlab is presented, and tools from matrix perturbation theory are used to analyze the algorithm and give conditions under which it can be expected to do well.
Journal ArticleDOI
Laplacian Eigenmaps for dimensionality reduction and data representation
Mikhail Belkin, Partha Niyogi
TL;DR: In this article, the authors proposed a geometrically motivated algorithm for representing high-dimensional data, based on the correspondence between the graph Laplacian, the Laplace-Beltrami operator on the manifold, and the connections to the heat equation.
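The Laplacian-eigenmaps construction summarized in the reference above can be sketched in a few lines. This is a hedged toy implementation under simplifying assumptions (binary symmetric k-NN weights rather than heat-kernel weights; the function name and parameters are illustrative, not from Belkin and Niyogi's paper): build the graph, form the Laplacian L = D - W, and embed with the eigenvectors of the smallest nonzero generalized eigenvalues of Lv = λDv.

```python
import numpy as np

def laplacian_eigenmaps(x, n_neighbors=10, n_components=2):
    """Minimal Laplacian eigenmaps sketch with a binary symmetric
    k-NN graph. x : array of shape (n, D). Returns the embedding."""
    # Pairwise squared distances and symmetric k-NN adjacency.
    n = len(x)
    d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(d2, axis=1)[:, 1:n_neighbors + 1]   # skip self
    W = np.zeros((n, n))
    W[np.repeat(np.arange(n), n_neighbors), idx.ravel()] = 1.0
    W = np.maximum(W, W.T)                               # symmetrize

    deg = W.sum(axis=1)
    L = np.diag(deg) - W

    # Solve L v = lambda D v via the symmetric normalization
    # D^{-1/2} L D^{-1/2}; eigenvalues come back in ascending order.
    d_inv_sqrt = 1.0 / np.sqrt(deg)
    Ln = d_inv_sqrt[:, None] * L * d_inv_sqrt[None, :]
    eigval, eigvec = np.linalg.eigh(Ln)

    # Drop the trivial constant eigenvector (eigenvalue 0).
    return eigvec[:, 1:n_components + 1] * d_inv_sqrt[:, None]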
Related Papers (5)
Slow feature analysis: unsupervised learning of invariances
Proto-value Functions: A Laplacian Framework for Learning Representation and Control in Markov Decision Processes
Sridhar Mahadevan, Mauro Maggioni
Laplacian Eigenmaps for dimensionality reduction and data representation
Mikhail Belkin, Partha Niyogi