Author

Jun Tani

Other affiliations: University of Tokyo, KAIST, Chiyoda Corporation
Bio: Jun Tani is an academic researcher at the Okinawa Institute of Science and Technology. He has contributed to research on recurrent neural networks and artificial neural networks, has an h-index of 36, and has co-authored 253 publications receiving 5,684 citations. His previous affiliations include the University of Tokyo and KAIST.


Papers
Journal Article
TL;DR: The proposed network model coordinates the physical body of a humanoid robot through high-dimensional sensori-motor control and successfully situates itself within a physical environment, suggesting that it is not only the spatial connections between neurons but also the timescales of neural activity that serve as important mechanisms leading to functional hierarchy in neural systems.
Abstract: It is generally thought that skilled behavior in human beings results from a functional hierarchy of the motor control system, within which reusable motor primitives are flexibly integrated into various sensori-motor sequence patterns. The underlying neural mechanisms governing the way in which continuous sensori-motor flows are segmented into primitives and the way in which series of primitives are integrated into various behavior sequences have, however, not yet been clarified. In earlier studies, this functional hierarchy has been realized through the use of explicit hierarchical structure, with local modules representing motor primitives in the lower level and a higher module representing sequences of primitives switched via additional mechanisms such as gate-selecting. When sequences contain similarities and overlap, however, a conflict arises in such earlier models between generalization and segmentation, induced by this separated modular structure. To address this issue, we propose a different type of neural network model. The current model neither makes use of separate local modules to represent primitives nor introduces explicit hierarchical structure. Rather than forcing architectural hierarchy onto the system, functional hierarchy emerges through a form of self-organization that is based on two distinct types of neurons, each with different time properties (“multiple timescales”). Through the introduction of multiple timescales, continuous sequences of behavior are segmented into reusable primitives, and the primitives, in turn, are flexibly integrated into novel sequences. In experiments, the proposed network model, coordinating the physical body of a humanoid robot through high-dimensional sensori-motor control, also successfully situated itself within a physical environment. Our results suggest that it is not only the spatial connections between neurons but also the timescales of neural activity that act as important mechanisms leading to functional hierarchy in neural systems.
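
As a rough illustration of the "multiple timescales" mechanism described above, the sketch below implements a leaky-integrator RNN whose units fall into a fast group and a slow group with different time constants. The class name MTRNN, the sizes, the time constants, and the random (untrained) weights are all illustrative assumptions, not the paper's trained model.

```python
import numpy as np

class MTRNN:
    """Minimal continuous-time RNN with per-unit time constants.

    Units with a small tau change quickly (candidate "primitive" dynamics);
    units with a large tau change slowly (candidate "sequencing" dynamics).
    """

    def __init__(self, n_fast=20, n_slow=5, n_in=4,
                 tau_fast=2.0, tau_slow=50.0, seed=0):
        rng = np.random.default_rng(seed)
        n = n_fast + n_slow
        self.tau = np.concatenate([np.full(n_fast, tau_fast),
                                   np.full(n_slow, tau_slow)])
        self.W = rng.normal(0.0, 1.0 / np.sqrt(n), (n, n))  # recurrent weights
        self.W_in = rng.normal(0.0, 0.1, (n, n_in))         # input weights
        self.u = np.zeros(n)                                # membrane potentials

    def step(self, x):
        # Leaky-integrator update: fast units track the immediate
        # sensori-motor flow, slow units integrate over long windows.
        drive = self.W @ np.tanh(self.u) + self.W_in @ x
        self.u += (drive - self.u) / self.tau
        return np.tanh(self.u)

net = MTRNN()
for t in range(100):
    y = net.step(np.sin(0.3 * t) * np.ones(4))  # toy sensory input
```

With training (omitted here), the slow units' activity would come to index which primitive the fast units are currently unfolding, which is the sense in which hierarchy emerges from timescales rather than from modular structure.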

481 citations

Journal Article
01 Jun 1996
TL;DR: This paper discusses how a behavior-based robot can construct a "symbolic process" that accounts for its deliberative thinking processes using models of the environment and shows that the robot is capable of learning grammatical structure hidden in the geometry of the workspace from the local sensory inputs through its navigational experiences.
Abstract: This paper discusses how a behavior-based robot can construct a "symbolic process" that accounts for its deliberative thinking processes using models of the environment. The paper focuses on two essential problems: one is the symbol grounding problem, and the other is how the internal symbolic processes can be situated with respect to the behavioral contexts. We investigate these problems by applying a dynamical systems approach to the robot navigation learning problem. Our formulation, based on a forward modeling scheme using recurrent neural learning, shows that the robot is capable of learning grammatical structure hidden in the geometry of the workspace from local sensory inputs through its navigational experiences. Furthermore, the robot is capable of generating diverse action plans to reach an arbitrary goal using the acquired forward model, which incorporates chaotic dynamics. The essential claim is that the internal symbolic process, being embedded in the attractor, is grounded since it is self-organized solely through interaction with the physical world. It is also shown that structural stability arises in the interaction between the neural dynamics and the environmental dynamics, which accounts for the situatedness of the internal symbolic process. The experimental results using a mobile robot, equipped with a local sensor consisting of a laser range finder, verify our claims.
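
A minimal sketch of the forward-modeling idea, assuming an untrained Elman-style network: the model predicts the next sensation from the current sensation, action, and recurrent context, and plans by mentally rolling candidate action sequences forward. The random-sampling plan search and every name and size here are illustrative stand-ins, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(1)
H, S, A = 16, 3, 2  # hidden, sensory, and action dimensions (illustrative)

# Forward model: (context, sensation, action) -> (next context, next sensation)
W_h = rng.normal(0.0, 0.3, (H, H + S + A))
W_o = rng.normal(0.0, 0.3, (S, H))

def forward(h, s, a):
    h_next = np.tanh(W_h @ np.concatenate([h, s, a]))
    return h_next, np.tanh(W_o @ h_next)  # predicted next sensation

def rollout(h, s, actions):
    """Mentally simulate an action sequence with the learned model."""
    for a in actions:
        h, s = forward(h, s, a)
    return s

# Goal-directed planning: keep the candidate plan whose simulated outcome
# lands closest to the goal sensation.
goal = np.array([0.5, -0.2, 0.1])
h0, s0 = np.zeros(H), np.zeros(S)
plans = [rng.uniform(-1, 1, (4, A)) for _ in range(64)]
best = min(plans, key=lambda p: np.linalg.norm(rollout(h0, s0, p) - goal))
```

The point of the sketch is the planning loop: once the forward model is learned from navigation experience, goal-reaching reduces to searching the model's internal dynamics rather than the physical world.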

363 citations

Proceedings Article
01 Sep 1998
TL;DR: In this article, a mixture of recurrent neural net (RNN) experts is proposed to learn an internal model of the world structurally by focusing on the problem of behavior-based articulation.
Abstract: This paper describes how agents can learn an internal model of the world structurally by focusing on the problem of behavior-based articulation. We develop an on-line learning scheme, the so-called mixture of recurrent neural net (RNN) experts, in which a set of RNN modules become self-organized as experts on multiple levels, in order to account for the different categories of sensory-motor flow which the robot experiences. Autonomous switching of activated modules in the lower level actually represents the articulation of the sensory-motor flow. In the meantime, a set of RNNs in the higher level competes to learn the sequences of module switching in the lower level, by which articulation at a further, more abstract level can be achieved. The proposed scheme was examined through simulation experiments involving the navigation learning problem. Our dynamical system analysis clarified the mechanism of the articulation. The possible correspondence between the articulation mechanism and the attention switching mechanism in thalamo-cortical loops is also discussed.
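
The lower-level competition can be sketched as prediction-error-based gating: each expert RNN predicts the incoming sensation, the expert that predicts best wins the current segment, and the winner sequence articulates the sensory-motor flow. The class ExpertRNN, the network sizes, and the softmin temperature beta are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(2)
N_EXPERTS, H, S = 3, 8, 2  # illustrative sizes

class ExpertRNN:
    """One expert: a small recurrent net predicting the next sensation."""
    def __init__(self):
        self.W = rng.normal(0.0, 0.4, (H, H + S))
        self.V = rng.normal(0.0, 0.4, (S, H))
        self.h = np.zeros(H)

    def predict(self, s):
        self.h = np.tanh(self.W @ np.concatenate([self.h, s]))
        return np.tanh(self.V @ self.h)

experts = [ExpertRNN() for _ in range(N_EXPERTS)]

def gate_step(s_t, s_next, beta=5.0):
    """Soft gating: experts that predict the incoming sensation best win.

    The winner index over time segments the flow; the switching sequence
    itself is what a higher-level RNN would learn to predict.
    """
    errors = np.array([np.linalg.norm(e.predict(s_t) - s_next)
                       for e in experts])
    gates = np.exp(-beta * errors)
    return gates / gates.sum(), int(errors.argmin())
```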

247 citations

Journal Article
TL;DR: An on-line learning scheme is developed, the so-called mixture of recurrent neural net (RNN) experts, in which a set of RNN modules become self-organized as experts on multiple levels, in order to account for the different categories of sensory-motor flow which the robot experiences.

236 citations

Journal Article
TL;DR: This paper reviews a connectionist model, the recurrent neural network with parametric biases (RNNPB), in which multiple behavior schemata can be learned by a single network in a distributed manner, and explains how self-organizing internal structures can contribute to generalization in learning and to diversity in behavior generation under the proposed distributed representation scheme.
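
A minimal sketch of the parametric-bias idea, assuming a single untrained network: a small PB vector fed into the recurrent state selects which trajectory the shared weights unfold, so multiple behavior schemata live in one set of weights. In the full RNNPB the PB values are themselves adapted by regressing prediction error during recognition; that step is omitted here, and all names and sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
H, S, PB = 12, 2, 2  # hidden units, sensory dim, parametric-bias dim

W_h = rng.normal(0.0, 0.3, (H, H + S + PB))
W_o = rng.normal(0.0, 0.3, (S, H))

def generate(pb, s0, steps):
    """Closed-loop generation: a fixed PB vector modulates the shared
    dynamics, selecting one learned behavior pattern."""
    h, s, out = np.zeros(H), s0, []
    for _ in range(steps):
        h = np.tanh(W_h @ np.concatenate([h, s, pb]))
        s = np.tanh(W_o @ h)
        out.append(s)
    return np.array(out)

# Different PB values drive different trajectories from the same weights:
traj_a = generate(np.array([+0.8, -0.8]), np.zeros(S), 50)
traj_b = generate(np.array([-0.8, +0.8]), np.zeros(S), 50)
```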

236 citations


Cited by
Journal Article

08 Dec 2001, BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one, which seemed an odd beast at first, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: Probability distributions and linear models for regression and classification are presented in this book, along with a discussion of combining models in the context of machine learning and classification.
Abstract: Contents: Probability Distributions; Linear Models for Regression; Linear Models for Classification; Neural Networks; Kernel Methods; Sparse Kernel Machines; Graphical Models; Mixture Models and EM; Approximate Inference; Sampling Methods; Continuous Latent Variables; Sequential Data; Combining Models.

10,141 citations

Journal Article
06 Jun 1986, JAMA
TL;DR: The editors have done a masterful job of weaving together the biologic, the behavioral, and the clinical sciences into a single tapestry in which everyone from the molecular biologist to the practicing psychiatrist can find and appreciate his or her own research.
Abstract: I have developed "tennis elbow" from lugging this book around the past four weeks, but it is worth the pain, the effort, and the aspirin. It is also worth the (relatively speaking) bargain price. Including appendixes, this book contains 894 pages of text. The entire panorama of the neural sciences is surveyed and examined, and it is comprehensive in its scope, from genomes to social behaviors. The editors explicitly state that the book is designed as "an introductory text for students of biology, behavior, and medicine," but it is hard to imagine any audience, interested in any fragment of neuroscience at any level of sophistication, that would not enjoy this book. The editors have done a masterful job of weaving together the biologic, the behavioral, and the clinical sciences into a single tapestry in which everyone from the molecular biologist to the practicing psychiatrist can find and appreciate his or her own research.

7,563 citations

Journal Article
08 Sep 1978, Science

5,182 citations