Showing papers in "Neurocomputing in 1995"
••
TL;DR: Results drawn from this research show that the Taguchi method provides an effective means to enhance the performance of the neural network in terms of the speed of learning and the accuracy of recall.
208 citations
••
TL;DR: This method of determining input features is shown to result in more accurate, faster-training multilayer perceptron classifiers.
161 citations
••
TL;DR: This paper extends earlier results on adaptive control and estimation of nonlinear systems using Gaussian radial basis functions to the on-line generation of irregularly sampled networks, using tools from multiresolution analysis and wavelet theory to yield much more compact and efficient system representations.
130 citations
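The entry above builds on Gaussian radial basis function networks. As a hedged illustration of the underlying representation only (not the paper's adaptive-control or wavelet machinery), here is a minimal Gaussian RBF fit with fixed, regularly spaced centres; the centre count, width, ridge term, and target function are illustrative assumptions:

```python
import numpy as np

# Fit f(x) = sin(x) on [0, 2*pi] with a small Gaussian RBF network.
x_train = np.linspace(0, 2 * np.pi, 40)
y_train = np.sin(x_train)

centres = np.linspace(0, 2 * np.pi, 10)   # fixed, regularly spaced centres
width = 0.8                               # shared Gaussian width (assumed)

def design(x):
    # Gaussian basis functions phi_j(x) = exp(-(x - c_j)^2 / (2 * width^2))
    return np.exp(-(x[:, None] - centres[None, :]) ** 2 / (2 * width ** 2))

Phi = design(x_train)
# Ridge-regularised least squares for the linear output weights
w = np.linalg.solve(Phi.T @ Phi + 1e-6 * np.eye(len(centres)), Phi.T @ y_train)

x_test = np.linspace(0, 2 * np.pi, 100)
err = np.max(np.abs(design(x_test) @ w - np.sin(x_test)))
```

With only ten fixed centres the fit is already close over the whole interval, which is the compactness property the entry's wavelet-based refinement aims to push further for irregularly sampled inputs.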
••
TL;DR: The approach uses functional-link neural network implementations, which have several beneficial properties that give them advantages over the more common generalized delta rule implementations; these are coordinated to provide real-time learning of the optimal control path.
89 citations
••
TL;DR: This approach joins two forms of learning, neural networks and rough sets, and aims to improve the overall classification effectiveness of the learned objects' descriptions and to refine the dependency factors of the rules.
75 citations
••
TL;DR: The methods presented in this paper are general enough to be applicable regardless of how many satellite signals are being processed by the receiver, and the computational benefit of neural network-based satellite selection is discussed.
71 citations
••
TL;DR: The performance of the presented Stochastic Estimator Learning Automaton (SELA) is superior to all previous well-known S-model ergodic schemes, and it is proved that SELA is ϵ-optimal in every S-model random environment.
69 citations
••
TL;DR: A memory-based approach to robot learning is explored, using memory-based neural networks to learn models of the task to be performed, and how this approach has been used to enable a robot to learn a difficult juggling task.
64 citations
••
60 citations
••
TL;DR: Some important DOs and DON'Ts are gathered for researchers in order to improve the often poor benchmarking of new neural learning algorithms.
51 citations
••
TL;DR: A competitive neural network model is used to improve the initialization phase of a parallel route construction heuristic for the vehicle routing problem with time windows, and the network converges towards the centroids of clusters of customers, when such clusters are present.
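The entry above relies on a competitive network converging to the centroids of customer clusters. The following winner-take-all sketch shows that convergence behaviour on synthetic data; the data, learning rate, and initialisation are assumptions, and the paper's actual network and vehicle-routing heuristic are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two well-separated synthetic "customer" clusters (illustrative data)
true_centres = np.array([[0.0, 0.0], [5.0, 5.0]])
X = np.concatenate([c + 0.3 * rng.standard_normal((200, 2)) for c in true_centres])
rng.shuffle(X)

# Initialise one unit on a data point and one on the point farthest from it,
# so each cluster receives a unit
w = np.stack([X[0], X[np.argmax(np.linalg.norm(X - X[0], axis=1))]])
eta = 0.05
for _ in range(10):                                     # a few passes over the data
    for x in X:
        k = np.argmin(np.linalg.norm(w - x, axis=1))    # winning (nearest) unit
        w[k] += eta * (x - w[k])                        # move winner toward input

# Each weight vector should now sit near one cluster centroid
dists = [min(np.linalg.norm(w - c, axis=1)) for c in true_centres]
```

When clusters are present, each unit settles near a centroid, which is exactly the property the entry exploits to seed the route-construction heuristic.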
••
TL;DR: A novel neural network architecture for image recognition and classification, called an image recognition neural network (IRNN), is designed to recognize an object or to estimate an attribute of an object.
••
TL;DR: The ability of the network to solve for constrained paths is illustrated with both a graph theoretic example and a scenario involving an unmanned vehicle that must travel a constrained path through a real terrain area containing artificially generated keep out zones.
••
TL;DR: Results presented substantiate the feasibility of using neural networks in robust nonlinear adaptive control of spacecraft and demonstrate the capabilities of neuro-controllers using a nonlinear model of the Space Station Freedom.
••
TL;DR: The Tank-Hopfield linear programming network is modified to solve mixed integer linear programming with the addition of step-function amplifiers to avoid the traditional problems associated with most Hopfield networks using quadratic energy functions.
••
TL;DR: In order to reduce the network complexity, a new approach is proposed which makes it possible to construct, simply and effectively, a neural network containing only a single artificial neuron with an on-chip adaptive learning algorithm.
••
TL;DR: A neural network architecture for incremental learning called ‘Neural network based on Distance between Patterns’ (NDP), which differs from conventional radial basis function neural networks in the area of incremental learning and is shown to be effective in experiments on image recognition.
••
TL;DR: Key stability results are proved in the paper along with illustrative examples to show the effectiveness of applying such a technique and other practical considerations.
••
TL;DR: Direct reinforcement learning techniques are discussed and their role in learning control is examined by relating them to similar adaptive control methods, and several examples are presented to illustrate the power and utility of direct reinforcement learning techniques for learning control.
••
TL;DR: It is proved rigorously that the continuous-time differential equations corresponding to this proposed PCA algorithm will converge to the principal eigenvectors of the autocorrelation matrix of the input signals, with norm equal to that of the initial weight vector.
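The entry above analyses a PCA learning rule whose dynamics converge to principal eigenvectors of the input autocorrelation matrix. As a hedged sketch of the general idea, here is Oja's classic single-unit rule (not necessarily the algorithm analysed in the paper); the data distribution and learning rate are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated 2-D inputs with a known autocorrelation matrix C (illustrative)
C = np.array([[3.0, 1.0], [1.0, 2.0]])
L = np.linalg.cholesky(C)
X = rng.standard_normal((5000, 2)) @ L.T

w = np.array([1.0, 0.0])              # initial weight vector
eta = 0.005
for x in X:
    y = w @ x                         # neuron output
    w = w + eta * y * (x - y * w)     # Oja's rule: Hebbian term + normalisation

# Compare with the true principal eigenvector of C
eigvals, eigvecs = np.linalg.eigh(C)
v = eigvecs[:, -1]
cos = abs(w @ v) / np.linalg.norm(w)  # alignment with the principal direction
```

After one pass over the data, the weight vector is closely aligned with the principal eigenvector and has approximately unit norm, the discrete-time analogue of the convergence property the entry proves for the continuous-time dynamics.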
••
TL;DR: It is demonstrated that recurrent networks can entertain models with colored noise and that they can perform state estimation and the capability of the networks is illustrated by examples where underlying periodic attractors are learnt by observing noisy trajectories.
••
TL;DR: A connectionist simulation of familiar face recognition by humans, which incorporates aspects of the Bruce and Young cognitive model, is discussed in detail.
••
TL;DR: This proposed neural network is satisfactory for real-time signal-processing applications in which it is desired to obtain, as fast as possible, the eigenvector corresponding to the largest eigenvalue of a positive matrix.
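The entry above targets the largest eigenvalue and its eigenvector for a positive matrix. A minimal non-neural baseline for the same computation is plain power iteration (the matrix here is an illustrative assumption; this is not the paper's network):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])        # entrywise-positive matrix (illustrative)

v = np.ones(2)                    # positive start vector
for _ in range(100):
    v = A @ v                     # repeated multiplication amplifies the
    v /= np.linalg.norm(v)        # dominant eigen-direction; renormalise

lam = v @ A @ v                   # Rayleigh quotient: largest-eigenvalue estimate
```

For this matrix the eigenvalues are 5 and 2, so the iterate converges rapidly to the dominant eigenvector; a neural formulation like the entry's aims at the same fixed point with dynamics suited to fast analogue or parallel hardware.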
••
TL;DR: It is shown that, in terms of the classification error, overfitting does occur for certain representations used to encode the discrete attributes in neural networks.
••
TL;DR: The number of learning parameters in the conclusion part of mean-value-based functional reasoning is shown to be reduced drastically, compared with the usual functional reasoning and with simplified reasoning.
••
TL;DR: Systematic analysis of the sensitivity of the classification results to various simulation parameters showed that the Cascade-Correlation network is able to perform satisfactorily in an extremely noisy environment.
••
TL;DR: Comparative performance analysis of the proposed MFA algorithm with two well-known heuristics, simulated annealing and Kernighan-Lin, indicates that MFA is a successful alternative heuristic for the circuit partitioning problem.