Author

Mitsuo Kawato

Bio: Mitsuo Kawato is an academic researcher from the Nara Institute of Science and Technology. The author has contributed to research in topics including Motor learning and Artificial neural network. The author has an h-index of 86, has co-authored 422 publications, and has received 35,640 citations. Previous affiliations of Mitsuo Kawato include Hokkaido University and Osaka University.


Papers
Journal Article
TL;DR: The 'minimum variance model' is another major recent advance in the computational theory of motor control; converging evidence strongly suggests that both kinematic and dynamic internal models are utilized in movement planning and control.

2,469 citations

Journal Article
TL;DR: This review focuses on the possibility that the cerebellum contains an internal model or models of the motor apparatus, the necessity of such a model, and the evidence, based on the ocular following response, that inverse models are found within the cerebellar circuitry.

2,147 citations

Journal Article
TL;DR: A modular approach to motor learning and control is proposed, based on multiple pairs of inverse (controller) and forward (predictor) models; the scheme can simultaneously learn the multiple inverse models necessary for control and learn how to select the inverse model appropriate for a given environment (a minimal sketch of this selection idea follows this entry).

2,101 citations
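
The selection idea summarized above can be illustrated with a small numerical sketch: each forward model predicts the next state, and the modules whose predictions turn out most accurate receive the largest share of control. This is only a toy illustration, not the paper's architecture; the Gaussian responsibility signal, the temperature `sigma`, the toy linear models, and the function names `responsibilities` and `blended_command` are all assumptions made here.

```python
import numpy as np

# Minimal sketch of responsibility-weighted selection among paired
# forward (predictor) and inverse (controller) models. The Gaussian
# likelihood, the temperature sigma, and the toy linear models are
# assumptions for illustration only.

def responsibilities(prediction_errors, sigma=1.0):
    """Modules whose forward models predict well receive larger weights."""
    errors = np.asarray(prediction_errors, dtype=float)
    likelihood = np.exp(-errors ** 2 / (2.0 * sigma ** 2))
    return likelihood / likelihood.sum()

def blended_command(x, x_observed_next, x_desired, forward_models, inverse_models):
    """Blend inverse-model commands according to forward-model prediction accuracy."""
    errors = [abs(f(x) - x_observed_next) for f in forward_models]
    lam = responsibilities(errors)
    commands = np.array([g(x, x_desired) for g in inverse_models])
    return float(lam @ commands), lam

# Toy usage: two hypothetical contexts (e.g. light vs. heavy manipulated object),
# each with its own forward/inverse pair.
forward_models = [lambda x: 0.9 * x, lambda x: 0.5 * x]
inverse_models = [lambda x, xd: xd - 0.9 * x, lambda x, xd: xd - 0.5 * x]

u, lam = blended_command(x=1.0, x_observed_next=0.9, x_desired=1.2,
                         forward_models=forward_models, inverse_models=inverse_models)
print("responsibilities:", lam, "blended command:", u)
```

Because the observed next state matches the first forward model, the first inverse model dominates the blended command; in an unfamiliar context the weighting would shift toward whichever module predicts best.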

Journal Article
TL;DR: In this article, the authors propose a mathematical model which accounts for the formation of hand trajectories by defining an objective function, a measure of performance for any possible movement: the square of the rate of change of torque integrated over the entire movement.
Abstract: In this paper, we study trajectory planning and control in voluntary human arm movements. When a hand is moved to a target, the central nervous system must select one specific trajectory among an infinite number of possible trajectories that lead to the target position. First, we discuss what criterion is adopted for trajectory determination. Several researchers measured the hand trajectories of skilled movements and found common invariant features. For example, when moving the hand between a pair of targets, subjects tended to generate roughly straight hand paths with bell-shaped speed profiles. On the basis of these observations and dynamic optimization theory, we propose a mathematical model which accounts for the formation of hand trajectories. This model is formulated by defining an objective function, a measure of performance for any possible movement: the square of the rate of change of torque integrated over the entire movement. That is, the objective function $C_T$ is defined as follows: $$C_T = \frac{1}{2}\int_0^{t_f} \sum_{i=1}^{n} \left(\frac{\mathrm{d}z_i}{\mathrm{d}t}\right)^2 \mathrm{d}t,$$ where $z_i$ is the torque generated by the i-th actuator (muscle) out of n actuators, and $t_f$ is the movement time. Since this objective function critically depends on the complex nonlinear dynamics of the musculoskeletal system, it is very difficult to determine the unique trajectory which yields the best performance. We overcome this difficulty by developing an iterative scheme with which the optimal trajectory and the associated motor command are simultaneously computed. To evaluate our model, human hand trajectories were experimentally measured under various behavioral situations. These results supported the idea that the human hand trajectory is planned and controlled in accordance with the minimum torque-change criterion. (A short numerical sketch of evaluating this objective follows this entry.)

1,584 citations
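
As a concrete illustration of the objective function above, the short sketch below evaluates C_T numerically for a sampled torque trajectory using finite differences. The sampling grid, the rectangle-rule integration, and the bell-shaped example torques are assumptions chosen here for illustration; they are not data or code from the paper.

```python
import numpy as np

# Numerically evaluate the minimum-torque-change objective
#   C_T = 1/2 * integral_0^{t_f} sum_i (dz_i/dt)^2 dt
# for a sampled torque trajectory z(t) of shape (timesteps, n_actuators).
# The grid and the example torque profiles are illustrative assumptions.

def torque_change_cost(z, dt):
    """Rectangle-rule approximation of C_T from sampled torques z and step dt."""
    dz_dt = np.gradient(z, dt, axis=0)   # finite-difference rate of torque change
    return 0.5 * np.sum(dz_dt ** 2) * dt

# Example: smooth torque profiles for two actuators over a 1-second movement.
t_f, dt = 1.0, 0.001
t = np.arange(0.0, t_f + dt, dt)
z = np.column_stack([np.sin(np.pi * t / t_f), 0.5 * np.sin(np.pi * t / t_f)])
print("C_T =", torque_change_cost(z, dt))
```

In the paper's framework, the planned trajectory is the one whose associated torques minimize this cost subject to the arm's nonlinear dynamics; the sketch only shows how the cost itself is evaluated for a given candidate.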

Journal Article
TL;DR: A hierarchical neural network model is proposed that accounts for the learning and control capability of the CNS and provides a promising parallel-distributed control scheme for a large-scale complex object whose dynamics are only partially known.
Abstract: In order to control voluntary movements, the central nervous system (CNS) must solve the following three computational problems at different levels: the determination of a desired trajectory in visual coordinates, the transformation of its coordinates to body coordinates, and the generation of the motor command. Based on physiological knowledge and previous models, we propose a hierarchical neural network model which accounts for the generation of the motor command. In our model, the association cortex provides the motor cortex with the desired trajectory in body coordinates, where the motor command is then calculated by means of long-loop sensory feedback. Within the spinocerebellum-magnocellular red nucleus system, an internal neural model of the dynamics of the musculoskeletal system is acquired with practice, because of heterosynaptic plasticity, while monitoring the motor command and the results of movement. Internal feedback control with this dynamical model updates the motor command by predicting a possible error of movement. Within the cerebrocerebellum-parvocellular red nucleus system, an internal neural model of the inverse dynamics of the musculoskeletal system is acquired while monitoring the desired trajectory and the motor command. The inverse-dynamics model substitutes for other brain regions in the complex computation of the motor command. The dynamics and inverse-dynamics models are realized by a parallel distributed neural network, which comprises many subsystems computing various nonlinear transformations of input signals and a neuron with heterosynaptic plasticity (that is, changes of synaptic weights are assumed proportional to a product of two kinds of synaptic inputs). Control and learning performance of the model was investigated by computer simulation, in which a robotic manipulator was used as the controlled system, with the following results: (1) Both the dynamics and the inverse-dynamics models were acquired during control of movements. (2) As motor learning proceeded, the inverse-dynamics model gradually took the place of external feedback as the main controller. Concomitantly, overall control performance became much better. (3) Once the neural network model learned to control some movement, it could control quite different and faster movements. (4) The neural network model worked well even when only very limited information about the fundamental dynamical structure of the controlled system was available. Consequently, the model not only accounts for the learning and control capability of the CNS, but also provides a promising parallel-distributed control scheme for a large-scale complex object whose dynamics are only partially known. (A simplified sketch of this feedback-to-feedforward handover follows this entry.)

1,508 citations
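
The gradual handover from external feedback to the learned inverse-dynamics model described above is commonly referred to as feedback-error learning. The sketch below is a heavily simplified scalar illustration of that idea, in which the feedback command serves as the teaching signal for a linear feedforward model; the plant, feedback gain, learning rate, and feature choice are all assumptions made here and do not reproduce the paper's network.

```python
import numpy as np

# Simplified scalar sketch of the feedback-error-learning idea: a linear
# feedforward "inverse model" (weights W) is trained with the feedback
# command as its error signal, so it gradually takes over from feedback.
# The plant, feedback gain, learning rate, and features are assumptions.

def plant(x, u):
    """Hypothetical first-order plant standing in for the musculoskeletal system."""
    return 0.8 * x + 0.5 * u

W = np.zeros(2)          # feedforward weights on features [x, x_desired]
Kp, lr = 1.0, 0.05       # feedback gain and learning rate (assumed values)

x = 0.0
for step in range(5000):
    x_des = np.sin(0.01 * step)          # slowly varying desired trajectory
    features = np.array([x, x_des])
    u_ff = W @ features                  # feedforward (learned) command
    u_fb = Kp * (x_des - x)              # long-loop feedback command
    # Weight change proportional to the product of two inputs (presynaptic
    # features and the error-carrying feedback command), in the spirit of
    # the heterosynaptic learning rule described above.
    W += lr * u_fb * features
    x = plant(x, u_ff + u_fb)

print("learned feedforward weights:", W)
```

As the weights converge, the feedback term shrinks toward zero and the feedforward model carries most of the command, which is the qualitative behavior reported in result (2) of the abstract.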


Cited by
28 Jul 2005
TL;DR: Plasmodium falciparum erythrocyte membrane protein 1 (PfEMP1) interacts with one or more receptors on infected erythrocytes, dendritic cells, and the placenta, playing a key role in adhesion and immune evasion.
Abstract: Antigenic variation enables many pathogenic microorganisms to evade host immune responses. Plasmodium falciparum erythrocyte membrane protein 1 (PfEMP1), expressed on the surface of infected erythrocytes, interacts with one or more receptors on infected erythrocytes, endothelial cells, dendritic cells, and the placenta, playing a key role in adhesion and immune evasion. The var gene family encodes roughly 60 members per haploid genome, and switching transcription among different var gene variants provides the molecular basis for antigenic variation.

18,940 citations

Journal Article
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviewing deep supervised learning, unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks.

14,635 citations

01 Jan 2016
Using Multivariate Statistics

14,604 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: Probability distributions and linear models for regression and classification are presented, along with a discussion of combining models in the context of machine learning.
Abstract: Probability Distributions; Linear Models for Regression; Linear Models for Classification; Neural Networks; Kernel Methods; Sparse Kernel Machines; Graphical Models; Mixture Models and EM; Approximate Inference; Sampling Methods; Continuous Latent Variables; Sequential Data; Combining Models.

10,141 citations