Author

David Vernon

Bio: David Vernon is an academic researcher from the University of Skövde. The author has contributed to research in the topics of cognitive robotics and cognition. The author has an h-index of 25 and has co-authored 64 publications receiving 3,401 citations. Previous affiliations of David Vernon include the University of Genoa and the Istituto Italiano di Tecnologia.


Papers
Proceedings ArticleDOI
19 Aug 2008
TL;DR: The iCub is a humanoid robot for research in embodied cognition; it will be able to crawl on all fours and sit up to manipulate objects, and its hands have been designed to support sophisticated manipulation skills.
Abstract: We report on the iCub, a humanoid robot for research in embodied cognition. At 104 cm tall, the iCub is the size of a three-and-a-half-year-old child. It will be able to crawl on all fours and sit up to manipulate objects. Its hands have been designed to support sophisticated manipulation skills. The iCub is distributed as Open Source following the GPL/FDL licenses. The entire design is available for download from the project homepage and repository (http://www.robotcub.org). In the following, we will concentrate on the description of the hardware and software systems. The scientific objectives of the project and its philosophical underpinning are described extensively elsewhere [1].

573 citations
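
The abstract above notes that both the iCub's hardware and software are distributed as open source, and the iCub's software stack is built around the YARP middleware. YARP itself is not named in the abstract, and the simulator port name used below is an assumption, so treat this only as a minimal sketch of reading one joint-state sample from an iCub simulator over a YARP port:

    #include <yarp/os/Network.h>
    #include <yarp/os/BufferedPort.h>
    #include <yarp/os/Bottle.h>
    #include <cstdio>

    int main() {
        yarp::os::Network yarp;                         // initialise the YARP network
        yarp::os::BufferedPort<yarp::os::Bottle> port;  // port that will receive encoder values
        port.open("/example/head-state:i");

        // "/icubSim/head/state:o" is assumed to be the simulator's head encoder output port.
        if (!yarp::os::Network::connect("/icubSim/head/state:o", "/example/head-state:i")) {
            std::printf("could not connect to the iCub simulator\n");
            return 1;
        }

        yarp::os::Bottle* state = port.read();          // blocking read of one encoder sample
        if (state != nullptr) {
            std::printf("head encoders: %s\n", state->toString().c_str());
        }
        return 0;
    }

The same pattern (open a local port, connect it to a named robot port, read) applies to any of the robot's sensor streams; only the port names change.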

Journal ArticleDOI
TL;DR: The iCub, designed to support collaborative research in cognitive development through autonomous exploration and social interaction, is described; it has attracted a growing community of users and developers.

549 citations

Journal ArticleDOI
TL;DR: A broad survey of the various paradigms of cognition, addressing cognitivist (physical symbol systems) approaches, emergent systems approaches, encompassing connectionist, dynamical, and enactive systems, and also efforts to combine the two in hybrid systems.
Abstract: This survey presents an overview of the autonomous development of mental capabilities in computational agents. It does so based on a characterization of cognitive systems as systems which exhibit adaptive, anticipatory, and purposive goal-directed behavior. We present a broad survey of the various paradigms of cognition, addressing cognitivist (physical symbol systems) approaches, emergent systems approaches, encompassing connectionist, dynamical, and enactive systems, and also efforts to combine the two in hybrid systems. We then review several cognitive architectures drawn from these paradigms. In each of these areas, we highlight the implications and attendant problems of adopting a developmental approach, from both phylogenetic and ontogenetic points of view. We conclude with a summary of the key architectural features that systems capable of autonomous development of mental capabilities should exhibit.

423 citations

Journal ArticleDOI
TL;DR: The design of the mechanisms and structures forming the basic 'body' of the iCub is described, and kinematic structures, dynamic design criteria, actuator specification and selection, and detailed mechanical and electronic design are considered.
Abstract: The development of robotic cognition and the advancement of understanding of human cognition form two of the current greatest challenges in robotics and neuroscience, respectively. The RobotCub project aims to develop an embodied robotic child (iCub) with the physical (height 90 cm and mass less than 23 kg) and ultimately cognitive abilities of a 2.5-year-old human child. The iCub will be a freely available open system which can be used by scientists in all cognate disciplines from developmental psychology to epigenetic robotics to enhance understanding of cognitive systems through the study of cognitive development. The iCub will be open both in software and, more importantly, in all aspects of the hardware and mechanical design. In this paper the design of the mechanisms and structures forming the basic 'body' of the iCub is described. The paper considers kinematic structures, dynamic design criteria, actuator specification and selection, and detailed mechanical and electronic design. The paper concludes...

279 citations

Book
01 Aug 1991
Abstract: An introduction to computer vision; illumination and fixturing; sensors; image acquisition and representation; fundamentals of digital image processing; image analysis; the segmentation problem; 2-D shape description and recognition; 3-D object representations; robot programming and robot vision; bin picking; trends and aspirations.

178 citations
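
The chapter list above covers the fundamentals of digital image processing and the segmentation problem. As a purely illustrative sketch (none of this code comes from the book, and the pixel values are made up), the simplest form of segmentation is fixed global thresholding of a grayscale image:

    #include <array>
    #include <cstdint>
    #include <cstdio>

    int main() {
        // A tiny 4x4 grayscale "image"; the values are illustrative only.
        std::array<std::array<std::uint8_t, 4>, 4> image = {{
            {{ 12,  40, 200, 220}},
            {{ 30,  35, 210, 215}},
            {{ 25, 180, 190,  20}},
            {{ 15, 175,  30,  10}}
        }};

        const std::uint8_t threshold = 128;   // fixed global threshold

        // Label each pixel as object (1) or background (0).
        for (const auto& row : image) {
            for (std::uint8_t pixel : row) {
                std::printf("%d ", pixel >= threshold ? 1 : 0);
            }
            std::printf("\n");
        }
        return 0;
    }
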


Cited by
01 Nov 2008

2,686 citations

Journal Article
TL;DR: In this article, the authors argue that approaches in which the brain produces an internal representation of the world, whose activation is assumed to give rise to the experience of seeing, leave unexplained how such a detailed internal representation might produce visual consciousness, and they propose instead that seeing is a way of acting on and exploring the environment.
Abstract: Many current neurophysiological, psychophysical, and psychological approaches to vision rest on the idea that when we see, the brain produces an internal representation of the world. The activation of this internal representation is assumed to give rise to the experience of seeing. The problem with this kind of approach is that it leaves unexplained how the existence of such a detailed internal representation might produce visual consciousness. An alternative proposal is made here. We propose that seeing is a way of acting. It is a particular way of exploring the environment. Activity in internal representations does not generate the experience of seeing. The outside world serves as its own, external, representation. The experience of seeing occurs when the organism masters what we call the governing laws of sensorimotor contingency. The advantage of this approach is that it provides a natural and principled way of accounting for visual consciousness, and for the differences in the perceived quality of sensory experience in the different sensory modalities. Several lines of empirical evidence are brought forward in support of the theory, in particular: evidence from experiments in sensorimotor adaptation, visual "filling in," visual stability despite eye movements, change blindness, sensory substitution, and color perception.

2,271 citations

Journal ArticleDOI
The Perception of the Visual World

2,250 citations

Journal ArticleDOI
01 Feb 1980 - Nature

1,368 citations

Patent
23 Feb 2011
TL;DR: A smart phone senses audio, imagery, and/or other stimulus from a user's environment and acts autonomously to fulfill inferred or anticipated user desires; it can apply more or fewer resources to an image processing task depending on how successfully the task is proceeding or on the user's apparent interest in the task.
Abstract: A smart phone senses audio, imagery, and/or other stimulus from a user's environment, and acts autonomously to fulfill inferred or anticipated user desires. In one aspect, the detailed technology concerns phone-based cognition of a scene viewed by the phone's camera. The image processing tasks applied to the scene can be selected from among various alternatives by reference to resource costs, resource constraints, other stimulus information (e.g., audio), task substitutability, etc. The phone can apply more or less resources to an image processing task depending on how successfully the task is proceeding, or based on the user's apparent interest in the task. In some arrangements, data may be referred to the cloud for analysis, or for gleaning. Cognition, and identification of appropriate device response(s), can be aided by collateral information, such as context. A great number of other features and arrangements are also detailed.

1,056 citations
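
The patent abstract describes choosing among alternative image-processing tasks by weighing resource costs against constraints and the user's apparent interest. The structures, task names, and numbers below are hypothetical and only sketch that selection idea, not the patented method:

    #include <string>
    #include <vector>
    #include <cstdio>

    // Hypothetical description of one candidate image-processing task.
    struct Task {
        std::string name;
        double resourceCost;   // e.g. estimated milliseconds of CPU time
        double expectedValue;  // how useful the result is expected to be to the user
    };

    // Pick the highest-value task that fits within the available resource budget.
    const Task* chooseTask(const std::vector<Task>& candidates, double budget) {
        const Task* best = nullptr;
        for (const Task& t : candidates) {
            if (t.resourceCost <= budget &&
                (best == nullptr || t.expectedValue > best->expectedValue)) {
                best = &t;
            }
        }
        return best;
    }

    int main() {
        std::vector<Task> candidates = {
            {"barcode decode",  5.0, 0.4},
            {"face detection", 20.0, 0.7},
            {"full scene OCR", 80.0, 0.9},
        };
        // Budget is in the same (hypothetical) units as resourceCost.
        const Task* chosen = chooseTask(candidates, 25.0);
        if (chosen != nullptr) {
            std::printf("running: %s\n", chosen->name.c_str());
        }
        return 0;
    }

In this sketch the budget stands in for battery, CPU, or bandwidth constraints, and expectedValue stands in for the inferred user interest mentioned in the abstract.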