Journal Article

Learning to Perceive the World as Probabilistic or Deterministic via Interaction With Others: A Neuro-Robotics Experiment

TL;DR
Two different ways of treating uncertainty about perceptual events in learning, namely, probabilistic modeling and deterministic modeling, contribute to the development of different dynamic neuronal structures governing the two types of behavior generation schemes.
Abstract
We suggest that different behavior generation schemes, such as sensory reflex behavior and intentional proactive behavior, can be developed by a newly proposed dynamic neural network model, named stochastic multiple timescale recurrent neural network (S-MTRNN). The model learns to predict subsequent sensory inputs, generating both their means and their uncertainty levels in terms of variance (or inverse precision) by utilizing its multiple timescale property. The model was employed in robot learning experiments in which one robot, controlled by the S-MTRNN, was required to interact with another robot under uncertainty about the other's behavior. The experimental results show that sensory reflex behavior based on probabilistic prediction self-organizes when learning proceeds without a precise specification of initial conditions, whereas intentional proactive behavior with deterministic prediction emerges when precise initial conditions are available. The results also show that, when unanticipated behavior of the other robot was perceived, the behavioral context was revised adequately during sensory reflex behavior generation by adaptation of the internal neural dynamics in response to the sensory inputs. During intentional proactive behavior generation, by contrast, adequate revision of the behavioral context required an error regression scheme in which the internal neural activity was modified in the direction that minimizes prediction error. These results indicate that two different ways of treating uncertainty about perceptual events in learning, namely probabilistic modeling and deterministic modeling, contribute to the development of different dynamic neuronal structures governing the two behavior generation schemes.
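
As a reading aid, the following is a minimal, illustrative sketch in Python/NumPy of the two mechanisms the abstract describes: a recurrent network with fast and slow context units that outputs both a predicted mean and a variance (inverse precision) for the next sensory input, and an error regression step that adapts internal neural activity to reduce prediction error. Every name here (step, error_regression, the weight matrices, the time constants tau_fast and tau_slow) and the simplified leaky-integrator dynamics are assumptions made for illustration; this is a sketch of the general idea, not the authors' S-MTRNN implementation.

# Minimal sketch (NumPy) of an MTRNN-like step with a Gaussian output and an
# error-regression-style state adaptation. Hypothetical names and dynamics;
# NOT the paper's S-MTRNN implementation.
import numpy as np

rng = np.random.default_rng(0)

# Dimensions and time constants (fast units react quickly, slow units slowly).
n_in, n_fast, n_slow = 4, 16, 8
tau_fast, tau_slow = 2.0, 20.0

# Randomly initialized weights stand in for trained parameters.
W_in = rng.normal(0, 0.3, (n_fast, n_in))
W_ff = rng.normal(0, 0.3, (n_fast, n_fast))
W_fs = rng.normal(0, 0.3, (n_fast, n_slow))
W_sf = rng.normal(0, 0.3, (n_slow, n_fast))
W_ss = rng.normal(0, 0.3, (n_slow, n_slow))
W_mu = rng.normal(0, 0.3, (n_in, n_fast))   # predicted mean of next input
W_lv = rng.normal(0, 0.3, (n_in, n_fast))   # predicted log-variance

def step(u_fast, u_slow, x):
    """One leaky-integrator update of fast/slow context units given input x."""
    h_fast, h_slow = np.tanh(u_fast), np.tanh(u_slow)
    u_fast = (1 - 1/tau_fast) * u_fast + (1/tau_fast) * (
        W_in @ x + W_ff @ h_fast + W_fs @ h_slow)
    u_slow = (1 - 1/tau_slow) * u_slow + (1/tau_slow) * (
        W_sf @ h_fast + W_ss @ h_slow)
    mu = np.tanh(W_mu @ np.tanh(u_fast))     # mean of the next sensory input
    log_var = W_lv @ np.tanh(u_fast)         # uncertainty (inverse precision)
    return u_fast, u_slow, mu, log_var

def nll(x_next, mu, log_var):
    """Gaussian negative log-likelihood: precision-weighted prediction error."""
    return 0.5 * np.sum(log_var + (x_next - mu) ** 2 / np.exp(log_var))

def error_regression(u_fast, u_slow, x, x_next, lr=0.1, iters=20, eps=1e-4):
    """Adapt internal state (here the slow units) to reduce prediction error,
    using a finite-difference gradient for brevity."""
    for _ in range(iters):
        grad = np.zeros_like(u_slow)
        for i in range(len(u_slow)):
            d = np.zeros_like(u_slow)
            d[i] = eps
            hi = nll(x_next, *step(u_fast, u_slow + d, x)[2:])
            lo = nll(x_next, *step(u_fast, u_slow - d, x)[2:])
            grad[i] = (hi - lo) / (2 * eps)
        u_slow = u_slow - lr * grad
    return u_slow

# Toy usage: predict, observe an unexpected input, then revise the context.
u_fast, u_slow = np.zeros(n_fast), np.zeros(n_slow)
x, x_next = rng.normal(size=n_in), rng.normal(size=n_in)
_, _, mu, log_var = step(u_fast, u_slow, x)
print("error before:", nll(x_next, mu, log_var))
u_slow = error_regression(u_fast, u_slow, x, x_next)
_, _, mu, log_var = step(u_fast, u_slow, x)
print("error after :", nll(x_next, mu, log_var))

Under these assumptions, the precision weighting means that input dimensions predicted with large variance contribute little to the error signal, which mirrors the contrast drawn in the abstract between probabilistic prediction (uncertainty absorbed into wide variance, with behavior driven reflexively by incoming sensation) and deterministic prediction (narrow variance, with unexpected inputs handled by error regression over the internal state).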


Citations
Journal Article

Affordances in Psychology, Neuroscience, and Robotics: A Survey

TL;DR: The main definitions and formalizations of affordance theory are discussed, the most significant supporting evidence from psychology and neuroscience is reported, and the most relevant applications of the concept in robotics are reviewed.
Journal Article

Weight-Adapted Convolution Neural Network for Facial Expression Recognition in Human–Robot Interaction

TL;DR: Experimental results show that the recognition accuracies of the WACNN are superior to those of state-of-the-art methods that aim to extract discriminative features, such as the local directional ternary pattern and the weighted mixture deep neural network (DNN).
Journal Article

A Novel Predictive-Coding-Inspired Variational RNN Model for Online Prediction and Recognition

TL;DR: This article introduces PV-RNN, a variational RNN inspired by predictive coding ideas, which learns to extract the probabilistic structures hidden in fluctuating temporal patterns.
Journal Article

Dealing With Large-Scale Spatio-Temporal Patterns in Imitative Interaction Between a Robot and a Human by Using the Predictive Coding Framework

TL;DR: The findings suggest that the error minimization principle in predictive coding could provide a primal account of mirror neuron functions, both in generating actions and in recognizing those generated by others in a social cognitive context.
Journal Article

Learning, planning, and control in a monolithic neural event inference architecture.

TL;DR: REPRISE infers the unobservable contextual event state and accompanying temporal predictive models that best explain the recently encountered sensorimotor experiences retrospectively and can exploit the learned model to induce goal-directed, model-predictive control, that is, approximate active inference.
References
Book Chapter

Learning internal representations by error propagation

TL;DR: This chapter contains sections titled: The Problem, The Generalized Delta Rule, Simulation Results, Some Further Generalizations, Conclusion.
Journal Article

Finding Structure in Time

TL;DR: A proposal along these lines, first described by Jordan (1986), which uses recurrent links to provide networks with a dynamic memory, is developed, and a method for representing lexical categories and the type/token distinction is suggested.
Journal Article

A learning algorithm for continually running fully recurrent neural networks

TL;DR: The exact form of a gradient-following learning algorithm for completely recurrent networks running in continually sampled time is derived and used as the basis for practical algorithms for temporal supervised learning tasks.
Journal Article

Why there are complementary learning systems in the hippocampus and neocortex: insights from the successes and failures of connectionist models of learning and memory.

TL;DR: The account presented here suggests that memories are first stored via synaptic changes in the hippocampal system, that these changes support reinstatement of recent memories in the neocortex, that neocortical synapses change a little on each reinstatement, and that remote memory is based on accumulated neocortical changes.
Journal Article

Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects.

TL;DR: Results suggest that rather than being exclusively feedforward phenomena, nonclassical surround effects in the visual cortex may also result from cortico-cortical feedback as a consequence of the visual system using an efficient hierarchical strategy for encoding natural images.