Showing papers in "Neural Networks in 2003"
••
TL;DR: Hierarchical generative models enable the learning of empirical priors, eschewing assumptions about the causes of sensory input that are built into non-hierarchical models but unnecessary in a hierarchical context.
628 citations
••
TL;DR: Empirical results on a large (20,000-instance) speech recognition task and on 26 other learning tasks demonstrate that convergence can be reached significantly faster using on-line training than batch training, with no apparent difference in accuracy.
419 citations
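The on-line-versus-batch contrast can be illustrated with a minimal sketch (a hypothetical toy linear-regression task, not the paper's speech-recognition setup): on-line training makes one update per example, so it takes many more steps per pass through the data than a single full-batch gradient step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy task: linear regression on synthetic data.
n, d = 1000, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.01 * rng.normal(size=n)

def mse(w):
    return float(np.mean((X @ w - y) ** 2))

def batch_train(epochs, lr=0.1):
    """One gradient step per epoch, computed over the full batch."""
    w = np.zeros(d)
    for _ in range(epochs):
        w -= lr * 2 * X.T @ (X @ w - y) / n
    return w

def online_train(epochs, lr=0.1):
    """One gradient step per example (on-line / stochastic training)."""
    w = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            w -= lr * 2 * X[i] * (X[i] @ w - y[i])
    return w

print(mse(batch_train(2)), mse(online_train(2)))  # on-line reaches a far lower error per epoch
```

On this toy problem the on-line learner is essentially converged after one or two passes, while full-batch descent is still shrinking its error geometrically, which mirrors the paper's convergence-speed finding.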
••
TL;DR: It is suggested that the phasic and tonic components of dopamine neuron firing can encode the signal required for meta-learning of reinforcement learning.
240 citations
••
TL;DR: This work decomposes the recorded spectral-domain signals into independent components by a complex infomax ICA algorithm, leading to a model of convolutive signal superposition, in contrast with the commonly used instantaneous mixing model.
208 citations
••
TL;DR: The fading equalization problem can be successfully solved with a single complex-valued neuron with the highest generalization ability, and the XOR problem and the symmetry-detection problem are also solved.
185 citations
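The XOR claim can be made concrete with a hand-built sketch (the weights below are picked for illustration, not taken from the paper): a single neuron with complex weights and a phase-sensitive activation separates the XOR classes, something no single real-valued threshold unit can do.

```python
# Single complex-valued neuron realizing XOR. Weights are a hand-picked
# illustration of the idea, not the paper's trained values.
W1, W2, B = 1.0, 1.0j, -0.5 - 0.5j

def complex_neuron_xor(x1, x2):
    s = W1 * x1 + W2 * x2 + B            # complex pre-activation
    # Phase-sensitive activation: class 1 when s falls in quadrant II or IV.
    return 1 if s.real * s.imag < 0 else 0

truth = {(a, b): complex_neuron_xor(a, b) for a in (0, 1) for b in (0, 1)}
print(truth)
```

Encoding one input on the real axis and the other on the imaginary axis sends the four input patterns to the four quadrants of the complex plane, where the sign of Re(s)·Im(s) picks out exactly the XOR-true pairs.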
••
TL;DR: This article describes several new extensions of the standard SOM, developed in the past few years: the growing SOM, magnification control, and generalized relevance learning vector quantization, and demonstrates their effect on both low-dimensional traditional multi-spectral imagery and approximately 200-dimensional hyperspectral imagery.
180 citations
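For context, the standard SOM that the listed extensions (growing SOM, magnification control, generalized relevance LVQ) build on can be sketched in a few lines; here a 1-D chain of units organizes over 2-D data, and all parameter choices are ad hoc illustrations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal standard SOM: a 1-D chain of 10 units organizing over 2-D data.
n_units = 10
weights = rng.uniform(size=(n_units, 2))        # initial codebook vectors
data = rng.uniform(size=(500, 2))
positions = np.arange(n_units)                  # grid coordinates of the units

def train_som(w0, data, epochs=20, lr0=0.5, sigma0=3.0):
    w = w0.copy()
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)             # decaying learning rate
        sigma = max(sigma0 * (1 - t / epochs), 0.5)  # shrinking neighbourhood
        for x in data:
            bmu = int(np.argmin(np.sum((w - x) ** 2, axis=1)))  # best-matching unit
            h = np.exp(-((positions - bmu) ** 2) / (2 * sigma ** 2))
            w += lr * h[:, None] * (x - w)      # pull BMU's neighbourhood toward x
    return w

def quantization_error(w):
    d2 = np.sum((data[:, None, :] - w[None, :, :]) ** 2, axis=2)
    return float(np.mean(np.min(d2, axis=1)))

trained = train_som(weights, data)
print(quantization_error(weights), quantization_error(trained))
```

The article's growing SOM adds units during training instead of fixing `n_units`, and magnification control alters how the codebook density tracks the input density; neither extension is attempted in this sketch.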
••
TL;DR: It is concluded that the interaction between the bottom-up process of recalling the past and the top-down process of predicting the future enables both robust and flexible situated behavior.
159 citations
••
TL;DR: This paper shows that the mixture model can attain more precise predictions than regular statistical models when Bayesian estimation is applied in statistical inference.
129 citations
••
TL;DR: It is shown that large ensembles of (neural network) models, obtained e.g. in bootstrapping or sampling from (Bayesian) probability distributions, can be effectively summarized by a relatively small number of representative models.
122 citations
••
TL;DR: A broad class of neural network (NN) applications dealing with the remote measurements of geophysical (physical, chemical, and biological) parameters of the oceans, atmosphere, and land surface is presented in this article.
119 citations
••
TL;DR: A new Self-Organising NN approach, called the Co-Adaptive Net, is presented; it involves not only unsupervised learning to train neurons, but also allows neurons to co-operate and compete amongst themselves depending on their situation.
••
TL;DR: A set of experiments is presented which are unsolvable by classical recurrent networks but which are solved elegantly, robustly, and quickly by LSTM combined with Kalman filters.
••
TL;DR: In this paper, the authors propose a set of indices that characterize how a network node participates in a larger network and what roles it may take given the specific sub-network of interest.
••
TL;DR: In this study, the intelligent optimal control problem is treated as a nonlinear optimization with dynamic equality constraints, with a DNN serving as a control-trajectory priming system; the resulting algorithm operates as an auto-trainer for the DNN and generates optimal feed-forward control trajectories in significantly fewer iterations.
••
TL;DR: The NN approach introduced in this paper can provide numerically efficient solutions to a wide range of problems in numerical models where lengthy, complicated calculations, which describe physical, chemical and/or biological processes, must be repeated frequently.
••
TL;DR: A new adaptive training method is presented that modifies both the structure of the network (the number of nodes in the hidden layer) and the output weights as the algorithm proceeds, making it suitable for modeling dynamical time-varying systems.
••
TL;DR: It is shown that RBFs are not required to be integrable for RBF networks to be universal approximators: such networks can uniformly approximate any continuous function on a compact set provided that the radial basis activation function is continuous almost everywhere, locally essentially bounded, and not a polynomial.
••
TL;DR: In this article, a pinning control method focused on the chaotic neural network is proposed, and computer simulation proves that the chaos in the chaotic neural network can be controlled with this method and that the states of the network can converge to one of its stored patterns if the control strength and the pinning density are chosen suitably.
••
TL;DR: The Sensor Exploitation Group of MIT Lincoln Laboratory incorporated an early version of the ARTMAP neural network as the recognition engine of a hierarchical system for fusion and data mining of registered geospatial images, setting a standard for a variety of spatial data mining tasks.
••
TL;DR: A Spatial Number Network, or SpaN, model is developed to explain how these shared numerical capabilities are computed using a spatial representation of number quantities in the Where cortical processing stream, notably the inferior parietal cortex.
••
TL;DR: Dual extended Kalman filtering (DEKF) is used for this dual estimation, and it is shown how the proposed DEKF can remove unimportant weights from a trained RNN.
••
TL;DR: It is proved that for the implementation of an arbitrary Boolean function of n-variables by using SCA, [3L/2] hidden neurons are necessary and sufficient, where L is the number of unit spheres contained in the chosen USC of the n-dimensional HS.
••
TL;DR: Modular reward is implemented for a multiple model-based reinforcement learning (MMRL) architecture, and its effectiveness is shown in simulations of a pursuit task with hidden states and a continuous-time non-linear control task.
••
TL;DR: Empirical results show that the polynomial harmonic version phGMDH outperforms the previous GMDH, a Neurofuzzy GMDH and traditional MLP neural networks on time series modeling tasks.
••
TL;DR: Several neuroinformatics tools included in an on-line knowledge management system, the NeuroHomology Database, are developed, equipped with inference engines both to relate and translate information across equivalent cortical maps and to evaluate degrees of homology for brain regions of interest in different species.
••
TL;DR: It is shown that the continuous attractor network models developed here are able to demonstrate the key features of motor function, and that the use of 'trace' learning rules, which incorporate a form of temporal average of recent cell activity, underlies the networks' ability to learn temporal sequences of behaviour.
••
TL;DR: There exist cross-generational changes in brain shape: the younger generation has a shorter and wider brain than the older generation. The effect of aging on the volume of gray matter and white matter is determined by voxel-based morphometry.
••
TL;DR: An overview of independent component analysis, an emerging signal processing technique based on neural networks, is presented, with the aim to provide an up-to-date survey of the theoretical streams in this discipline and of the current applications in the engineering area.
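A minimal fixed-point ICA sketch (FastICA-style, on a synthetic instantaneous two-source mixture; all data, mixing weights, and parameters here are made up for illustration) gives a concrete feel for the technique surveyed:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic demo: two independent non-Gaussian sources, instantaneously mixed.
t = np.linspace(0, 8, 2000)
S = np.vstack([np.sign(np.sin(3 * t)), rng.laplace(size=t.size)])
A = np.array([[1.0, 0.6], [0.4, 1.0]])          # mixing matrix (arbitrary)
X = A @ S

# Center and whiten the mixtures.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / X.shape[1])
Z = E @ np.diag(d ** -0.5) @ E.T @ X

# Fixed-point iteration (tanh nonlinearity), extracting one component at a time.
W = np.zeros((2, 2))
for i in range(2):
    w = rng.normal(size=2)
    w /= np.linalg.norm(w)
    for _ in range(200):
        g = np.tanh(Z.T @ w)
        w_new = Z @ g / Z.shape[1] - (1 - g ** 2).mean() * w
        w_new -= W[:i].T @ (W[:i] @ w_new)      # deflation: orthogonal to found rows
        w_new /= np.linalg.norm(w_new)
        done = abs(abs(w_new @ w) - 1) < 1e-10
        w = w_new
        if done:
            break
    W[i] = w

Y = W @ Z   # recovered sources, up to permutation, sign, and scale
match = np.abs(np.corrcoef(S, Y))[:2, 2:].max(axis=1)
print(match)
```

Each true source ends up strongly correlated with one recovered component, which is the separation guarantee (up to permutation and scaling) that ICA offers for instantaneous mixtures of non-Gaussian sources.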
••
TL;DR: This review is aimed at both astronomers and computer scientists (who often know little about potentially interesting applications), and focuses their attention on some of the most interesting fields of application, namely: object extraction and classification, time series analysis, noise identification, and data mining.
••
TL;DR: The main results here reported include the short time prediction of the concentration of hydrocarbons in the local air, the comparison between different methods based on fuzzy neural systems, and the proposal of local models of non-linear interactions among traffic, atmospheric and pollution data.