Author

Erhan Oztop

Bio: Erhan Oztop is an academic researcher from Özyeğin University. The author has contributed to research on robots and robot learning, has an h-index of 27, and has co-authored 103 publications receiving 2,891 citations. Previous affiliations of Erhan Oztop include the National Institute of Information and Communications Technology and the Max Planck Society.


Papers
Journal Article
TL;DR: The hand-state hypothesis is offered as a new explanation of the evolution of mirror neurons' action-recognition capability, and it is shown that the connectivity pattern of mirror neuron circuitry can be established through training, and that the resultant network can exhibit a range of novel, physiologically interesting behaviors during the process of action recognition.
Abstract: Mirror neurons within a monkey's premotor area F5 fire not only when the monkey performs a certain class of actions but also when the monkey observes another monkey (or the experimenter) perform a similar action. It has thus been argued that these neurons are crucial for understanding of actions by others. We offer the hand-state hypothesis as a new explanation of the evolution of this capability: the basic functionality of the F5 mirror system is to elaborate the appropriate feedback - what we call the hand state - for opposition-space based control of manual grasping of an object. Given this functionality, the social role of the F5 mirror system in understanding the actions of others may be seen as an exaptation gained by generalizing from one's own hand to an other's hand. In other words, mirror neurons first evolved to augment the "canonical" F5 neurons (active during self-movement based on observation of an object) by providing visual feedback on "hand state," relating the shape of the hand to the shape of the object. We then introduce the MNS1 (mirror neuron system 1) model of F5 and related brain regions. The existing Fagg-Arbib-Rizzolatti-Sakata model represents circuitry for visually guided grasping of objects, linking the anterior intraparietal area (AIP) with F5 canonical neurons. The MNS1 model extends the AIP visual pathway by also modeling pathways, directed toward F5 mirror neurons, which match arm-hand trajectories to the affordances and location of a potential target object. We present the basic schemas for the MNS1 model, then aggregate them into three "grand schemas" - visual analysis of hand state, reach and grasp, and the core mirror circuit - for each of which we present a useful implementation (a non-neural visual processing system, a multijoint 3-D kinematics simulator, and a learning neural network, respectively). With this implementation we show how the mirror system may learn to recognize actions already in the repertoire of the F5 canonical neurons. We show that the connectivity pattern of mirror neuron circuitry can be established through training, and that the resultant network can exhibit a range of novel, physiologically interesting behaviors during the process of action recognition. We train the system on the basis of final grasp but then observe the whole time course of mirror neuron activity, yielding predictions for neurophysiological experiments under conditions of spatial perturbation, altered kinematics, and ambiguous grasp execution which highlight the importance of the timing of mirror neuron activity.
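
The training protocol described in the abstract — train on the final grasp, then observe mirror activity over the whole movement — can be illustrated with a toy classifier. The sketch below is a minimal illustration of that protocol, not the published MNS1 network: the three hand-state features, the synthetic trajectory generator, and the logistic read-out are all simplifying assumptions.

```python
# Toy sketch of the MNS1 training protocol (not the published model):
# a "hand state" is a 3-D feature vector (aperture, hand-target distance,
# hand-target alignment); a logistic classifier is trained only on the
# final frame of each synthetic trajectory, then read out at every frame
# to watch "mirror" activity evolve over time.
import numpy as np

rng = np.random.default_rng(0)
T = 20  # frames per trajectory

def trajectory(grasp):  # grasp 0 = precision, 1 = power (toy labels)
    t = np.linspace(0.0, 1.0, T)
    aperture = (0.2 if grasp == 0 else 0.9) * t + 0.05 * rng.standard_normal(T)
    distance = 1.0 - t + 0.05 * rng.standard_normal(T)
    align = t + 0.05 * rng.standard_normal(T)
    return np.stack([aperture, distance, align], axis=1)  # (T, 3)

data = [(trajectory(g), g) for g in rng.integers(0, 2, 200)]

# Train logistic regression on the final frame only ("training on final grasp").
X = np.array([traj[-1] for traj, _ in data])
y = np.array([g for _, g in data])
w, b = np.zeros(3), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)

# Read out the classifier at every frame of a fresh power-grasp trajectory:
# the time course of this output is the analogue of mirror-neuron activity.
test = trajectory(1)
activity = 1.0 / (1.0 + np.exp(-(test @ w + b)))
print(np.round(activity, 2))  # rises toward 1 as the grasp unfolds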

318 citations

Journal Article
TL;DR: A meta-analysis underlines the gap between conceptual and computational models and points out the research effort required from both sides to reduce this gap.
Abstract: Neurophysiology reveals the properties of individual mirror neurons in the macaque while brain imaging reveals the presence of 'mirror systems' (not individual neurons) in the human. Current conceptual models attribute high level functions such as action understanding, imitation, and language to mirror neurons. However, only the first of these three functions is well-developed in monkeys. We thus distinguish current opinions (conceptual models) on mirror neuron function from more detailed computational models. We assess the strengths and weaknesses of current computational models in addressing the data and speculations on mirror neurons (macaque) and mirror systems (human). In particular, our mirror neuron system (MNS), mental state inference (MSI) and modular selection and identification for control (MOSAIC) models are analyzed in more detail. Conceptual models often overlook the computational requirements for posited functions, while too many computational models adopt the erroneous hypothesis that mirror neurons are interchangeable with imitation ability. Our meta-analysis underlines the gap between conceptual and computational models and points out the research effort required from both sides to reduce this gap.

270 citations

Journal Article
TL;DR: The article contributes to the quest to relate global data on brain and behavior, from PET (positron emission tomography) and fMRI, to the underpinning neural networks, using computational models of biological neural circuitry based on animal data to predict and analyze the results of human PET studies.
Abstract: The article contributes to the quest to relate global data on brain and behavior (e.g., from PET, positron emission tomography, and fMRI, functional magnetic resonance imaging) to the underpinning neural networks. Models tied to human brain imaging data often focus on a few “boxes” based on brain regions associated with exceptionally high blood flow, rather than analyzing the cooperative computation of multiple brain regions. For analysis directly at the level of such data, a schema-based model may be most appropriate. To further address neurophysiological data, the Synthetic PET imaging method uses computational models of biological neural circuitry based on animal data to predict and analyze the results of human PET studies. This technique makes use of the hypothesis that rCBF (regional cerebral blood flow) is correlated with the integrated synaptic activity in a localized brain region. We also describe the possible extension of the Synthetic PET method to fMRI. The second half of the paper then exemplifies this general research program with two case studies, one on visuo-motor processing for control of grasping (Section 3, in which the focus is on Synthetic PET) and one on the imitation of motor skills (Sections 4 and 5, with a focus on Synthetic fMRI). Our discussion of imitation pays particular attention to data on the mirror system in the monkey (neural circuitry which allows the brain to recognize actions as well as execute them). Finally, Section 6 outlines the immense challenges in integrating models of different portions of the nervous system which address detailed neurophysiological data from studies of primates and other species; summarizes key issues for developing the methodology of Synthetic Brain Imaging; and shows how comparative neuroscience and evolutionary arguments will allow us to extend Synthetic Brain Imaging even to language and other cognitive functions for which few or no animal data are available.
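
The linking hypothesis in the abstract — rCBF correlated with integrated synaptic activity in a localized region — reduces to a simple computation once synaptic activity is simulated. The sketch below illustrates it under that hypothesis only; the "AIP" label, population size, and activity levels are invented toy values, not the published simulations.

```python
# Toy sketch of the Synthetic PET idea (invented numbers, not the published
# model): predicted rCBF for a region is taken as the synaptic activity
# summed over neurons and integrated over the trial, and two task
# conditions are compared as in a PET subtraction study.
import numpy as np

rng = np.random.default_rng(1)
dt, T = 0.01, 200  # time step (s), number of steps

def synthetic_pet(synaptic):  # synaptic: (neurons, T) activity array
    # Integrated synaptic activity: sum over neurons, integrate over time.
    return np.abs(synaptic).sum(axis=0).sum() * dt

# Toy "AIP" population: more strongly driven in a grasping condition than
# in a reaching-only condition.
grasp = 1.0 + 0.3 * rng.standard_normal((50, T))
reach = 0.4 + 0.3 * rng.standard_normal((50, T))

activation = {"grasp": synthetic_pet(grasp), "reach": synthetic_pet(reach)}
print(activation)
print("predicted rCBF increase, grasp - reach:",
      round(activation["grasp"] - activation["reach"], 2))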

187 citations

Journal Article
TL;DR: A computational model of mental state inference is developed that builds upon a generic visuomanual feedback controller and implements mental simulation and mental state inference functions using circuitry that subserves sensorimotor control.
Abstract: Although we can often infer the mental states of others by observing their actions, there are currently no computational models of this remarkable ability. Here we develop a computational model of mental state inference that builds upon a generic visuomanual feedback controller, and implements mental simulation and mental state inference functions using circuitry that subserves sensorimotor control. Our goal is (1) to show that control mechanisms developed for manual manipulation are readily endowed with visual and predictive processing capabilities and thus allow a natural extension to the understanding of movements performed by others; and (2) to explain how cortical regions, in particular the parietal and premotor cortices, may be involved in such a dual mechanism. To analyze the model, we simulate tasks in which an observer watches an actor performing either a reaching or a grasping movement. The observer's goal is to estimate the 'mental state' of the actor: the goal of the reaching movement or the intention of the agent performing the grasping movement. We show that the motor modules of the observer can be used in a 'simulation mode' to infer the mental state of the actor. The simulations with different grasping and non-straight-line reaching strategies show that the mental state inference model is applicable to complex movements. Moreover, we simulate deceptive reaching, where an actor imposes false beliefs about his own mental state on an observer. The simulations show that computational elements developed for sensorimotor control are effective in inferring the mental states of others. The parallels between the model and the cortical organization of movement suggest that primates might have developed a similar resource utilization strategy for action understanding, and thus lead to testable predictions about the brain mechanisms of mental state inference.
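
The 'simulation mode' described above admits a compact illustration: replay one's own controller toward each candidate goal and attribute to the actor the goal whose simulated trajectory best matches the observation. The sketch below assumes toy 2-D kinematics and a proportional feedback controller; it illustrates the inference principle, not the paper's visuomanual controller.

```python
# Toy sketch of mental state inference by motor simulation (invented
# dynamics, not the published controller): the observer replays its own
# feedback controller toward each candidate goal and infers the goal whose
# simulated trajectory best matches the observed one.
import numpy as np

def simulate(goal, start=np.zeros(2), gain=0.2, steps=30):
    """Proportional feedback controller driving a 2-D hand to `goal`."""
    hand, path = start.astype(float), []
    for _ in range(steps):
        hand = hand + gain * (goal - hand)
        path.append(hand.copy())
    return np.array(path)

candidates = {"cup": np.array([1.0, 0.0]),
              "phone": np.array([0.0, 1.0]),
              "pen": np.array([1.0, 1.0])}

# Observed actor movement: a noisy reach toward the "phone" location.
observed = (simulate(candidates["phone"])
            + 0.02 * np.random.default_rng(2).standard_normal((30, 2)))

# Mental simulation: prediction error of each internally simulated reach.
errors = {name: np.mean((simulate(g) - observed) ** 2)
          for name, g in candidates.items()}
print(errors)
print("inferred goal:", min(errors, key=errors.get))  # -> "phone"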

185 citations

Journal Article
TL;DR: This paper proposes a method that learns to generalize parametrized motor plans by adapting a small set of global parameters, called meta-parameters, and introduces an appropriate reinforcement learning algorithm based on a kernelized version of the reward-weighted regression.
Abstract: Humans manage to adapt learned movements very quickly to new situations by generalizing learned behaviors from similar situations. In contrast, robots currently often need to re-learn the complete movement. In this chapter, we propose a method that learns to generalize parametrized motor plans by adapting a small set of global parameters, called meta-parameters. We employ reinforcement learning to learn the required meta-parameters to deal with the current situation, described by states. We introduce an appropriate reinforcement learning algorithm based on a kernelized version of the reward-weighted regression. To show its feasibility, we evaluate this algorithm on a toy example and compare it to several previous approaches. Subsequently, we apply the approach to three robot tasks, i.e., the generalization of throwing movements in darts, of hitting movements in table tennis, and of throwing balls, where the tasks are learned on several different real physical robots, i.e., a Barrett WAM, a BioRob, the JST-ICORP/SARCOS CBi, and a Kuka KR 6.
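
The core loop — explore meta-parameters, weight rollouts by reward, regress a state-to-meta-parameter mapping — can be sketched compactly. The code below uses a reward-weighted Nadaraya-Watson kernel regressor as a simplified stand-in for the paper's kernelized reward-weighted regression, on an invented 45-degree ball-throwing task; the task, reward function, and kernel bandwidth are assumptions for illustration.

```python
# Toy sketch of reward-weighted kernel regression for meta-parameters
# (a simplified stand-in for the paper's algorithm): the state is the
# target distance, the meta-parameter is launch speed at 45 degrees, and
# rollouts with higher reward get more weight at prediction time.
import numpy as np

rng = np.random.default_rng(3)
g = 9.81

def landing(speed):  # 45-degree projectile range
    return speed ** 2 / g

# Exploration: random targets with noisy launch speeds, rewarded by accuracy.
targets = rng.uniform(2.0, 10.0, 100)
speeds = np.sqrt(g * targets) + rng.normal(0.0, 0.5, 100)
rewards = np.exp(-(landing(speeds) - targets) ** 2)

def predict(target, bandwidth=0.5):
    """Reward-weighted Nadaraya-Watson prediction of the meta-parameter."""
    k = np.exp(-0.5 * ((targets - target) / bandwidth) ** 2)
    w = k * rewards
    return float(w @ speeds / w.sum())

for s in (3.0, 6.0, 9.0):
    v = predict(s)
    print(f"target {s} m -> speed {v:.2f} m/s, lands at {landing(v):.2f} m")
```

Because every rollout is kept and reweighted by its reward, accurate throws near the queried target dominate the prediction, which is the intuition behind reward-weighted regression.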

182 citations


Cited by
Journal Article
TL;DR: The authors argue and present evidence that great apes understand the basics of intentional action but still do not participate in activities involving joint intentions and attention (shared intentionality), and that human children's skills of shared intentionality develop gradually during the first 14 months of life.
Abstract: We propose that the crucial difference between human cognition and that of other species is the ability to participate with others in collaborative activities with shared goals and intentions: shared intentionality. Participation in such activities requires not only especially powerful forms of intention reading and cultural learning, but also a unique motivation to share psychological states with others and unique forms of cognitive representation for doing so. The result of participating in these activities is species-unique forms of cultural cognition and evolution, enabling everything from the creation and use of linguistic symbols to the construction of social norms and individual beliefs to the establishment of social institutions. In support of this proposal we argue and present evidence that great apes (and some children with autism) understand the basics of intentional action, but they still do not participate in activities involving joint intentions and attention (shared intentionality). Human children's skills of shared intentionality develop gradually during the first 14 months of life as two ontogenetic pathways intertwine: (1) the general ape line of understanding others as animate, goal-directed, and intentional agents; and (2) a species-unique motivation to share emotions, experience, and activities with other persons. The developmental outcome is children's ability to construct dialogic cognitive representations, which enable them to participate in earnest in the collectivity that is human cognition.

3,660 citations

Journal Article
TL;DR: This article attempts to strengthen the links between the reinforcement learning and robotics research communities by providing a survey of work on reinforcement learning for behavior generation in robots, highlighting both key challenges in robot reinforcement learning and notable successes.
Abstract: Reinforcement learning offers to robotics a framework and set of tools for the design of sophisticated and hard-to-engineer behaviors. Conversely, the challenges of robotic problems provide inspiration, impact, and validation for developments in reinforcement learning. The relationship between the disciplines has sufficient promise to be likened to that between physics and mathematics. In this article, we attempt to strengthen the links between the two research communities by providing a survey of work in reinforcement learning for behavior generation in robots. We highlight both key challenges in robot reinforcement learning and notable successes. We discuss how contributions tamed the complexity of the domain and study the role of algorithms, representations, and prior knowledge in achieving these successes. As a result, a particular focus of our paper lies on the choice between model-based and model-free methods, as well as between value-function-based and policy-search methods. By analyzing a simple problem in some detail we demonstrate how reinforcement learning approaches may be profitably applied, and we note throughout open questions and the tremendous potential for future research.

2,391 citations

Journal Article
TL;DR: In this article, the authors challenge the view that the brain produces an internal representation of the world whose activation gives rise to the experience of seeing, arguing that this view leaves unexplained how such a detailed internal representation might produce visual consciousness, and propose instead that seeing is a way of acting.
Abstract: Many current neurophysiological, psychophysical, and psychological approaches to vision rest on the idea that when we see, the brain produces an internal representation of the world. The activation of this internal representation is assumed to give rise to the experience of seeing. The problem with this kind of approach is that it leaves unexplained how the existence of such a detailed internal representation might produce visual consciousness. An alternative proposal is made here. We propose that seeing is a way of acting. It is a particular way of exploring the environment. Activity in internal representations does not generate the experience of seeing. The outside world serves as its own, external, representation. The experience of seeing occurs when the organism masters what we call the governing laws of sensorimotor contingency. The advantage of this approach is that it provides a natural and principled way of accounting for visual consciousness, and for the differences in the perceived quality of sensory experience in the different sensory modalities. Several lines of empirical evidence are brought forward in support of the theory, in particular: evidence from experiments in sensorimotor adaptation, visual "filling in," visual stability despite eye movements, change blindness, sensory substitution, and color perception.

2,271 citations

Journal Article
TL;DR: In this article, a guided policy search method is used to train deep convolutional neural network policies that map raw image observations directly to torques at the robot's motors, with supervision provided by a simple trajectory-centric reinforcement learning method.
Abstract: Policy search methods can allow robots to learn control policies for a wide range of tasks, but practical applications of policy search often require hand-engineered components for perception, state estimation, and low-level control. In this paper, we aim to answer the following question: does training the perception and control systems jointly end-to-end provide better performance than training each component separately? To this end, we develop a method that can be used to learn policies that map raw image observations directly to torques at the robot's motors. The policies are represented by deep convolutional neural networks (CNNs) with 92,000 parameters, and are trained using a guided policy search method, which transforms policy search into supervised learning, with supervision provided by a simple trajectory-centric reinforcement learning method. We evaluate our method on a range of real-world manipulation tasks that require close coordination between vision and control, such as screwing a cap onto a bottle, and present simulated comparisons to a range of prior policy search methods.
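
The recipe in the abstract — a convolutional network from pixels to torques, with policy search reduced to supervised learning — can be sketched in a few lines of PyTorch. The sketch below is a toy stand-in, not the paper's 92,000-parameter architecture: the layer sizes, the 7-joint output, and the dummy (image, teacher-torque) batch are all assumptions, with the random "teacher" torques standing in for the trajectory-centric RL supervision.

```python
# Toy sketch of an end-to-end visuomotor policy (invented architecture and
# data, far smaller than the network in the paper): a small CNN maps a raw
# image to joint torques and is trained by regression onto teacher torques,
# mirroring how guided policy search turns policy search into supervised
# learning.
import torch
import torch.nn as nn

class VisuomotorPolicy(nn.Module):
    def __init__(self, n_joints=7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, 64), nn.ReLU(),
            nn.Linear(64, n_joints),  # torque command per joint
        )

    def forward(self, image):  # image: (batch, 3, H, W)
        return self.net(image)

policy = VisuomotorPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Dummy batch standing in for (image, teacher-torque) pairs that a
# trajectory-centric RL teacher would produce in the actual method.
images = torch.randn(16, 3, 64, 64)
teacher_torques = torch.randn(16, 7)

for _ in range(10):  # supervised policy training
    loss = nn.functional.mse_loss(policy(images), teacher_torques)
    opt.zero_grad()
    loss.backward()
    opt.step()
print("final imitation loss:", float(loss))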

1,934 citations