Proceedings ArticleDOI

Attention: A machine learning perspective

TLDR
A statistical machine learning model of top-down, task-driven attention based on the notion of `gist'; the classifier equipped with the attention mechanism performs almost as well as one that has access to all low-level features and clearly improves over a simple `random attention' alternative.
Abstract
We review a statistical machine learning model of top-down, task-driven attention based on the notion of ‘gist’. In this framework the task is represented as a classification problem with two sets of features: a gist of coarse-grained global features and a larger set of low-level local features. Attention is modeled as the choice process over the low-level features given the gist. The model takes its point of departure in a classical information-theoretic framework for experimental design, which requires evaluation of marginalized and conditional distributions. By implementing the classifier as a Gaussian discrete mixture, marginalization and conditioning are straightforward, and we obtain a relatively simple expression for the feature-dependent information gain, the top-down saliency. Since the top-down attention mechanism is cast as a simple classification problem, the strategy can be evaluated directly by estimating error rates on a test data set. We illustrate the attention mechanism in a simple simulated visual domain in which attention chooses among nine patches and a binary pattern has to be classified. The classifier equipped with the attention mechanism performs almost as well as one that has access to all low-level features and clearly improves over a simple ‘random attention’ alternative.
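The abstract describes the mechanism in words only. Below is a minimal, hypothetical Python sketch of the idea, not the paper's implementation: it substitutes a naive-Bayes Gaussian classifier for the paper's Gaussian discrete mixture, uses a single scalar gist feature and nine simulated patch features, and takes a Monte Carlo estimate of the expected reduction in class entropy as the top-down saliency for selecting one patch. All names, parameter values, and the simulated data are illustrative assumptions.

```python
# Hypothetical sketch (not the paper's implementation): a naive-Bayes Gaussian
# classifier with a coarse "gist" feature and nine "patch" features, where the
# top-down saliency is the expected reduction in class entropy from observing a
# patch, estimated by Monte Carlo. Data and parameters are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
N_PATCHES = 9

def simulate(n):
    """Binary task; only patches 2 and 5 carry class information; gist is a noisy global mean."""
    y = rng.integers(0, 2, size=n)
    class_mean = np.where(y[:, None] == 1, 0.8, -0.8)                 # (n, 1)
    informative = np.zeros(N_PATCHES); informative[[2, 5]] = 1.0       # (9,)
    x = informative * class_mean + rng.normal(0.0, 1.0, size=(n, N_PATCHES))
    gist = x.mean(axis=1) + rng.normal(0.0, 0.5, size=n)
    return y, gist, x

def fit_gaussians(values, y):
    """Per-class mean and std for each feature (naive-Bayes style)."""
    return [(values[y == c].mean(axis=0), values[y == c].std(axis=0) + 1e-6) for c in (0, 1)]

def loglik(v, mu, sd):
    return -0.5 * ((v - mu) / sd) ** 2 - np.log(sd)

def normalize(logp):
    p = np.exp(logp - logp.max())
    return np.clip(p / p.sum(), 1e-12, 1.0)

def entropy(p):
    return float(-(p * np.log(p)).sum())

y_tr, g_tr, x_tr = simulate(2000)
gist_params = fit_gaussians(g_tr, y_tr)
patch_params = fit_gaussians(x_tr, y_tr)
log_prior = np.log([np.mean(y_tr == 0), np.mean(y_tr == 1)])

def posterior_from_gist(g):
    return normalize(log_prior + np.array([loglik(g, *gist_params[c]) for c in (0, 1)]))

def update(post, value, j):
    return normalize(np.log(post) + np.array(
        [loglik(value, patch_params[c][0][j], patch_params[c][1][j]) for c in (0, 1)]))

def expected_gain(post, j, n_samples=100):
    """Monte Carlo estimate of the expected entropy reduction from observing patch j."""
    gain = 0.0
    for _ in range(n_samples):
        c = rng.choice(2, p=post)                                       # imagine a class ...
        v = rng.normal(patch_params[c][0][j], patch_params[c][1][j])    # ... and a patch value
        gain += entropy(post) - entropy(update(post, v, j))
    return gain / n_samples

def classify(g, x, strategy):
    post = posterior_from_gist(g)
    if strategy == "all":                                               # access to every patch
        for j in range(N_PATCHES):
            post = update(post, x[j], j)
        return int(post.argmax())
    if strategy == "attend":                                            # top-down saliency picks one patch
        j = int(np.argmax([expected_gain(post, j) for j in range(N_PATCHES)]))
    else:                                                               # "random" attention baseline
        j = int(rng.integers(N_PATCHES))
    return int(update(post, x[j], j).argmax())

y_te, g_te, x_te = simulate(300)
for strategy in ("all", "attend", "random"):
    acc = np.mean([classify(g_te[i], x_te[i], strategy) == y_te[i] for i in range(len(y_te))])
    print(f"{strategy:>6}: test accuracy = {acc:.3f}")
```

The key step is expected_gain: with Gaussian class-conditional densities, conditioning on an imagined patch observation is a one-line Bayes update, mirroring the easy marginalization and conditioning the abstract attributes to the Gaussian discrete mixture. The sketch is intended to illustrate the qualitative ordering reported above: attending to the single most informative patch comes close to using all patches and beats random patch selection.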


Citations

Active mobile robot localization by entropy minimization

TL;DR: In this article, an active localization approach for mobile robots is proposed that provides rational criteria for choosing the robot's motion direction (exploration) and the pointing direction of its sensors so as to localize the robot most efficiently.
Book

Attention in Cognitive Systems. Theories and Systems from an Interdisciplinary Viewpoint: 4th International Workshop on Attention in Cognitive Systems, WAPCV 2007, Hyderabad, India, January 8, 2007, Revised Selected Papers

TL;DR: The volume presents work on simulation and formal analysis of visual attention in cognitive systems as well as applications of attentive vision, including simultaneous robot localization and mapping based on a visual attention system.
References
Journal ArticleDOI

Learning the parts of objects by non-negative matrix factorization

TL;DR: An algorithm for non-negative matrix factorization is demonstrated that is able to learn parts of faces and semantic features of text, in contrast to other methods that learn holistic, not parts-based, representations.

Journal ArticleDOI

The "independent components" of natural scenes are edge filters.

TL;DR: It is shown that a new unsupervised learning algorithm based on information maximization, a nonlinear "infomax" network, when applied to an ensemble of natural scenes, produces sets of visual filters that are localized and oriented.
Journal ArticleDOI

Contextual guidance of eye movements and attention in real-world scenes: the role of global features in object search.

TL;DR: An original approach to attentional guidance by global scene context is presented that combines bottom-up saliency, scene context, and top-down mechanisms at an early stage of visual processing and predicts the image regions likely to be fixated by human observers performing natural search tasks in real-world scenes.
Journal ArticleDOI

On a Measure of the Information Provided by an Experiment

TL;DR: In this paper, a measure of the information provided by an experiment is introduced; it is derived from the work of Shannon and involves the knowledge available prior to performing the experiment, expressed through a prior probability distribution over the parameter space.
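For reference, this measure can be written compactly in standard notation (an assumed rendering, not quoted from the paper): the expected information from an experiment is the average Kullback-Leibler divergence from the prior to the posterior over the parameter, which equals the mutual information between parameter and data; the feature-dependent information gain in the abstract above is presumably this quantity adapted to the classification setting.

```latex
% Lindley's expected information from an experiment \mathcal{E} with prior p(\theta)
% and sampling distribution p(x | \theta): the mean Kullback-Leibler divergence from
% prior to posterior, i.e. the mutual information between \theta and the data x.
\[
  I(\mathcal{E})
  \;=\; \mathbb{E}_{x}\!\left[\,\int p(\theta \mid x)\,
        \log\frac{p(\theta \mid x)}{p(\theta)}\,\mathrm{d}\theta\right]
  \;=\; I(\theta;\, x).
\]
```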