Book ChapterDOI

Functional object class detection based on learned affordance cues

TLDR
This paper proposes a system for the detection of functional object classes, based on a representation of visually distinct hints on object affordances (affordance cues), which spans the complete range from tutor-driven acquisition of affordance cues, through learning of corresponding object models, to detecting novel instances of functional object classes in real images.
Abstract
Current approaches to visual object class detection mainly focus on the recognition of basic level categories, such as cars, motorbikes, mugs and bottles. Although these approaches have demonstrated impressive performance in terms of recognition, their restriction to these categories seems inadequate in the context of embodied, cognitive agents. Here, distinguishing objects according to functional aspects based on object affordances is important in order to enable the manipulation of physical objects and the interaction between those objects and the cognitive agent. In this paper, we propose a system for the detection of functional object classes, based on a representation of visually distinct hints on object affordances (affordance cues). It spans the complete range from tutor-driven acquisition of affordance cues, through learning of corresponding object models, to detecting novel instances of functional object classes in real images.
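The abstract describes a three-stage pipeline (tutor-driven cue acquisition, model learning, detection). The following Python sketch only illustrates how such a pipeline could be organised; every name and data structure in it is hypothetical and not taken from the paper.

# Purely illustrative sketch of a three-stage affordance-cue pipeline.
# All class, function and field names here are hypothetical, not the authors' implementation.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AffordanceCue:
    label: str               # affordance label supplied by the tutor, e.g. "handle-graspable"
    descriptor: List[float]  # local appearance descriptor extracted around the cue
    offset: tuple            # cue position relative to the object centre

@dataclass
class FunctionalClassModel:
    name: str
    cues: List[AffordanceCue] = field(default_factory=list)

def learn_object_model(name: str, cues: List[AffordanceCue]) -> FunctionalClassModel:
    """Stage 2: aggregate tutor-acquired cues into a functional class model."""
    return FunctionalClassModel(name=name, cues=list(cues))

def detect(model: FunctionalClassModel, image_cues: List[AffordanceCue], min_matches: int = 2) -> bool:
    """Stage 3 (toy version): declare a detection when enough cues with the model's labels are found."""
    labels = {c.label for c in model.cues}
    hits = sum(1 for c in image_cues if c.label in labels)
    return hits >= min_matches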


Citations
Journal ArticleDOI

Data-Driven Grasp Synthesis—A Survey

TL;DR: Reviews the work on data-driven grasp synthesis and the methodologies for sampling and ranking candidate grasps, providing an overview of the different methodologies and drawing a parallel to classical approaches that rely on analytic formulations.
Journal ArticleDOI

An overview of 3D object grasp synthesis algorithms

TL;DR: This overview presents computational algorithms for generating 3D object grasps with autonomous multi-fingered robotic hands by focusing on analytical as well as empirical grasp synthesis approaches.
Journal ArticleDOI

50 Years of object recognition: Directions forward

TL;DR: It is argued that the next step in the evolution of object recognition algorithms will require radical and bold steps forward in terms of the object representations, as well as the learning and inference algorithms used.
Proceedings ArticleDOI

What makes a chair a chair

TL;DR: This novel approach “imagines” an actor performing an action typical for the target object class, instead of relying purely on the visual object appearance, and treats function as a cue complementary to appearance rather than as a consideration applied after appearance-based detection.
References
Journal ArticleDOI

Distinctive Image Features from Scale-Invariant Keypoints

TL;DR: This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene and can robustly identify objects among clutter and occlusion while achieving near real-time performance.
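Since the main paper builds its affordance cues on local features of this kind, it may help to see how such matching looks in practice. A minimal sketch using the SIFT implementation in OpenCV, assuming opencv-python >= 4.4 is installed; the image file names are placeholders for two views of the same object.

# Minimal SIFT matching sketch with OpenCV (placeholder file names, illustrative only).
import cv2

img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)   # keypoints and 128-D descriptors
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)                       # two nearest neighbours per descriptor
good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe's ratio test
print(len(good), "reliable matches between the two views")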
Proceedings Article

Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data

TL;DR: This work presents iterative parameter estimation algorithms for conditional random fields and compares the performance of the resulting models to HMMs and MEMMs on synthetic and natural-language data.
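Once the parameters of a linear-chain CRF have been estimated, the most likely label sequence for a new input is recovered by Viterbi decoding. A minimal NumPy sketch, assuming per-position unary scores and a label-transition matrix are already given; the random scores in the usage line are placeholders.

# Viterbi decoding for a linear-chain CRF (illustrative sketch; assumes the
# unary and transition scores have already been estimated).
import numpy as np

def viterbi(unary, trans):
    """unary: (T, K) per-position label scores; trans: (K, K) transition scores.
    Returns the highest-scoring label sequence of length T."""
    T, K = unary.shape
    score = np.zeros((T, K))
    back = np.zeros((T, K), dtype=int)
    score[0] = unary[0]
    for t in range(1, T):
        cand = score[t - 1][:, None] + trans + unary[t][None, :]  # score of prev label i -> current label j
        back[t] = cand.argmax(axis=0)                             # best predecessor for each current label
        score[t] = cand.max(axis=0)
    path = [int(score[-1].argmax())]
    for t in range(T - 1, 0, -1):                                 # follow back-pointers to the start
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy usage: 4 positions, 3 labels, random placeholder scores.
rng = np.random.default_rng(0)
print(viterbi(rng.normal(size=(4, 3)), rng.normal(size=(3, 3))))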
Proceedings ArticleDOI

Beyond Bags of Features: Spatial Pyramid Matching for Recognizing Natural Scene Categories

TL;DR: This paper presents a method for recognizing scene categories based on approximate global geometric correspondence that exceeds the state of the art on the Caltech-101 database and achieves high accuracy on a large database of fifteen natural scene categories.
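The pyramid representation itself is straightforward to reproduce: quantized local features are histogrammed over increasingly fine grids and the per-level histograms are concatenated with level-dependent weights. A NumPy sketch under the assumption that feature positions are normalized to [0, 1) and codeword indices have already been computed from a visual vocabulary.

# Spatial pyramid histogram sketch (NumPy, illustrative only).
import numpy as np

def spatial_pyramid(xy, words, n_words, levels=2):
    """xy: (N, 2) positions in [0, 1); words: (N,) codeword indices.
    Returns the concatenated, weighted pyramid histogram."""
    feats = []
    for l in range(levels + 1):
        cells = 2 ** l                                                        # cells per axis at this level
        weight = 1.0 / 2 ** levels if l == 0 else 1.0 / 2 ** (levels - l + 1)  # coarse levels weighted less
        cell_idx = np.floor(xy * cells).astype(int)                           # (N, 2) cell coordinates
        for cx in range(cells):
            for cy in range(cells):
                in_cell = (cell_idx[:, 0] == cx) & (cell_idx[:, 1] == cy)
                hist = np.bincount(words[in_cell], minlength=n_words)
                feats.append(weight * hist)
    return np.concatenate(feats)

# Toy usage: 100 random features, 20-word vocabulary, a 3-level pyramid.
rng = np.random.default_rng(0)
desc = spatial_pyramid(rng.random((100, 2)), rng.integers(0, 20, 100), n_words=20)
print(desc.shape)  # (20 * (1 + 4 + 16),) = (420,)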
Journal ArticleDOI

Basic objects in natural categories

TL;DR: In this paper, the authors define basic objects as those categories which carry the most information, possess the highest category cue validity, and are thus the most differentiated from one another.
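Cue validity, as used here, is the conditional probability of a category given a cue, P(category | cue), and a category's cue validity is the sum of the validities of its cues. A toy Python sketch with invented counts, purely to make the quantity concrete.

# Toy illustration of cue validity: P(category | cue), estimated from co-occurrence counts.
# The counts below are invented purely for illustration.
counts = {                     # counts[cue][category]
    "has_wings":    {"bird": 90, "chair": 0,  "car": 2},
    "has_legs":     {"bird": 90, "chair": 80, "car": 0},
    "has_backrest": {"bird": 0,  "chair": 75, "car": 40},
}

def cue_validity(cue, category):
    row = counts[cue]
    return row[category] / sum(row.values())

# Category cue validity: sum of the cue validities over the category's cues.
chair_validity = sum(cue_validity(c, "chair") for c in ("has_legs", "has_backrest"))
print(round(cue_validity("has_backrest", "chair"), 3))  # ~0.652
print(round(chair_validity, 3))                         # ~1.123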