
Showing papers in "Neurocomputing in 1997"


Journal ArticleDOI
TL;DR: It has been verified experimentally that when nonlinear Principal Component Analysis (PCA) learning rules are used for the weights of a neural layer, the neurons have signal separation capabilities and can be used for image and speech signal separation.

237 citations
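The abstract does not state the exact learning rule used; as an illustration of the general idea, a minimal sketch of one common nonlinear PCA subspace rule (ΔW = μ[g(y)xᵀ − g(y)yᵀW] with g = tanh, applied to whitened mixtures) on hypothetical demo signals:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two source signals (hypothetical demo data, not from the paper)
t = np.linspace(0, 8 * np.pi, 2000)
S = np.vstack([np.sin(t), np.sign(np.sin(3 * t))])   # sources, shape (2, T)
A = rng.normal(size=(2, 2))                          # unknown mixing matrix
X = A @ S                                            # observed mixtures

# Whiten the mixtures (standard preprocessing for this family of rules)
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
X = E @ np.diag(d ** -0.5) @ E.T @ X

# Nonlinear PCA subspace rule: dW = mu * (g(y) x^T - g(y) y^T W), g = tanh
W = rng.normal(scale=0.1, size=(2, 2))
mu = 0.01
for epoch in range(20):
    for x in X.T:
        y = W @ x
        g = np.tanh(y)
        W += mu * (np.outer(g, x) - np.outer(g, y) @ W)

Y = W @ X  # recovered signals (up to permutation, sign and scale)
```

The nonlinearity is what gives the layer separation (rather than mere subspace) capabilities; with a linear g the rule reduces to ordinary PCA subspace learning.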


Journal ArticleDOI
TL;DR: The search for redundant input data components proposed in the paper is based on the concept of sensitivity in linearized models; the mappings considered are R^I → R^K with continuous and differentiable outputs.

178 citations
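The core idea — inputs to which a linearized model is insensitive are candidates for removal — can be sketched on synthetic data. Here a linear fit stands in for the linearization; for a trained network one would use the Jacobian of the outputs with respect to the inputs instead (the data and setup below are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Data where input 2 is redundant: y depends only on inputs 0 and 1
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + 0.01 * rng.normal(size=500)

# Linearized model: fit y ~ X w by least squares
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Sensitivity of the output to each input component
sensitivity = np.abs(w)
redundant = int(np.argmin(sensitivity))  # input 2 has near-zero sensitivity
```

Inputs whose sensitivity falls below a chosen threshold would then be dropped and the model refit.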


Journal ArticleDOI
TL;DR: NeuroLinear is a system for extracting oblique decision rules from neural networks trained for pattern classification; it is effective in extracting compact, comprehensible rules with high predictive accuracy from neural networks.

134 citations


Journal ArticleDOI
TL;DR: It is shown that all existing RNN architectures can be considered special cases of the general RNN architecture, and how these existing architectures can be transformed into the general RNN architecture.

109 citations


Journal ArticleDOI
TL;DR: A simple feed-forward network was trained to recognise the relationship between the input and output values of the FK problem, and was able to provide the solution with an average error of about 1.0° and 2.0 mm.

84 citations


Journal ArticleDOI
TL;DR: By combining the concepts of self-organization and topographic mapping with those of multiscale image segmentation, the HSOM alleviates the shortcomings of the conventional SOM in the context of image segmentation.

75 citations


Journal ArticleDOI
TL;DR: The objective is to significantly reduce network size by pruning larger networks, while also preserving generalization, that is, to arrive at pruned networks which generalize as well as, or even better than, their unpruned counterparts.

72 citations
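The abstract does not describe the pruning criterion; as a generic illustration of the idea, a minimal sketch of magnitude-based pruning (a common baseline, not necessarily this paper's method) on a hypothetical weight matrix:

```python
import numpy as np

rng = np.random.default_rng(1)

# A (hypothetical) trained weight matrix with many near-zero entries
W = rng.normal(size=(8, 8)) * (rng.random((8, 8)) < 0.3)

def prune_by_magnitude(W, fraction):
    """Zero out the smallest-magnitude fraction of the weights."""
    k = int(W.size * fraction)
    if k == 0:
        return W.copy()
    thresh = np.sort(np.abs(W), axis=None)[k - 1]
    Wp = W.copy()
    Wp[np.abs(Wp) <= thresh] = 0.0
    return Wp

Wp = prune_by_magnitude(W, 0.75)
sparsity = float(np.mean(Wp == 0))  # at least 75% of weights removed
```

In practice one would retrain after pruning and check validation error to verify that generalization is preserved.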


Journal ArticleDOI
TL;DR: This work develops artificial neural network group theory, then shows how neural network groups are able to approximate any kind of piecewise continuous function, to any degree of accuracy; this is illustrated by way of an ANN expert system for rainfall estimation.

61 citations


Journal ArticleDOI
TL;DR: With the optimal initial weights determined, the initial error is substantially smaller and the number of iterations required to reach the error criterion is therefore reduced; the time required for the initialisation process is negligible compared to the training process.

54 citations
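The abstract leaves the initialisation scheme unspecified; one cheap way to obtain a small initial error, shown here purely as an illustration, is to solve for the output-layer weights by least squares on the hidden activations (all names and data below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

# Fixed random hidden layer
W1 = rng.normal(size=(2, 20))
b1 = rng.normal(size=20)
H = np.tanh(X @ W1 + b1)

# Random output weights vs. least-squares-initialised output weights
w_rand = rng.normal(size=20)
w_ls, *_ = np.linalg.lstsq(H, y, rcond=None)

err_rand = np.mean((H @ w_rand - y) ** 2)
err_ls = np.mean((H @ w_ls - y) ** 2)
# Training then starts from err_ls, which is far below err_rand,
# so fewer iterations are needed to reach the error criterion.
```

The one-off least-squares solve costs far less than iterative training, which matches the claim that initialisation time is negligible.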


Journal ArticleDOI
TL;DR: A distributed implementation of the targeting problem in reaching movements is presented, which uses a pair of interacting topology-representing networks that support a relaxation process with terminal attractor dynamics, implementing the flow-line tracking in the force fields.

54 citations


Journal ArticleDOI
TL;DR: A new model for the design of Fuzzy Inference Neural Network (FINN), which can automatically partition an input-output pattern space and can extract fuzzy if-then rules from numerical data is proposed.

Journal ArticleDOI
TL;DR: This work compares the performance of the two cost functions using both a controlled simulation experiment and a non-trivial application in estimating stock returns on the basis of multiple factor exposures and shows that in both cases the DLS procedure gives significantly better results.

Journal ArticleDOI
TL;DR: A self-generating modular neural network architecture for supervised learning that can yield both good performance and significantly faster training is presented.

Journal ArticleDOI
TL;DR: This work presents a pruning method lprune that automatically adapts the pruning strength to the evolution of weights and loss of generalization during training, and requires no algorithm parameter adjustment by the user.

Journal ArticleDOI
TL;DR: A multifeature scheme for terrain classification in SAR image analysis, in which different neural classifiers, trained on different features of the same sample space, are combined using a non-linear ensemble method.

Journal ArticleDOI
Junhong Nie1
TL;DR: A fuzzy-neural approach to the prediction of nonlinear time series in which only a single learning epoch is needed, providing a useful paradigm for situations where fast learning is critical.

Journal ArticleDOI
TL;DR: A constructive algorithm for a general recurrent neural network is proposed based on radial basis functions and it is shown that this algorithm is globally convergent.

Journal ArticleDOI
TL;DR: It is shown that the N-bit parity problem is solvable by a standard feedforward neural network and a solution can be easily obtained by solving a system of linear equations.
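One well-known construction (offered here as an illustration, not necessarily the paper's exact solution) uses N threshold hidden units, where unit j fires when at least j inputs are one; the alternating-sign output weights can indeed be read off from a linear system. Exhaustive check included:

```python
import numpy as np

def parity_net(x):
    """Feedforward net computing N-bit parity with N threshold hidden units.

    Hidden unit j fires when at least j inputs are 1; the output weights
    alternate +1/-1, so the weighted sum equals the parity of the input.
    """
    x = np.asarray(x)
    n = x.size
    s = x.sum()
    hidden = (s >= np.arange(1, n + 1)).astype(int)   # N threshold units
    w_out = np.array([(-1) ** j for j in range(n)])   # +1, -1, +1, ...
    return int(hidden @ w_out)

# Exhaustive check for N = 4: all 16 input patterns
for bits in range(16):
    x = [(bits >> i) & 1 for i in range(4)]
    assert parity_net(x) == sum(x) % 2
```

If s inputs are one, the first s hidden units fire and the output is 1 − 1 + 1 − … (s terms), which is 1 for odd s and 0 for even s.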

Journal ArticleDOI
TL;DR: A genetic algorithm is used for selecting the initial seed points (prototypes, kernels) for a Radial Basis Function (RBF) classifier, serving as a condensing technique that can lead to a small subset which still retains the relevant classification information.
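The abstract gives no algorithmic details; as an illustration of the general scheme, a minimal GA over binary selection masks, with nearest-prototype accuracy as fitness and a small penalty on subset size (data, operators, and parameters are all hypothetical choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-class toy data (stand-in for the paper's experiments)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

def fitness(mask):
    """Nearest-prototype accuracy using the samples selected by `mask`."""
    if mask.sum() == 0:
        return 0.0
    P, Py = X[mask], y[mask]
    d = ((X[:, None, :] - P[None, :, :]) ** 2).sum(-1)
    return float(np.mean(Py[d.argmin(1)] == y))

# A minimal genetic algorithm over binary selection masks
pop = rng.random((20, len(X))) < 0.05               # start with ~5% selected
for gen in range(30):
    fit = np.array([fitness(m) - 0.001 * m.sum() for m in pop])  # penalise size
    order = np.argsort(fit)[::-1]
    parents = pop[order[:10]]                       # truncation selection
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(len(X))
        child = np.concatenate([a[:cut], b[cut:]])  # one-point crossover
        flip = rng.random(len(X)) < 0.01            # bit-flip mutation
        children.append(np.logical_xor(child, flip))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]    # condensed prototype set
```

The selected samples would then serve as the initial RBF centres, to be refined by the usual RBF training.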

Journal ArticleDOI
TL;DR: The correct and unique pointwise generalization capability of the new so-called similarity network topology is proved and illustrated using two examples.

Journal ArticleDOI
TL;DR: A novel multi-layer perceptron neural network architecture selection and weight training algorithm for classification problems that improves the effectiveness of Backprop and enables Backprop to solve a new class of problems, i.e., problems with areas of low mean-squared error.

Journal ArticleDOI
TL;DR: The test results presented exhibit the capability of the recurrent neural network model to capture the complex dynamics of the system, yielding accurate predictions of the system response in a non-linear, complex dynamic system characterized by a large number of state variables.

Journal ArticleDOI
TL;DR: An efficient control law that exploits the approximation capabilities of RBF networks and the stabilizing properties of a servo loop is developed and a stability analysis for the proposed system is carried out.

Journal ArticleDOI
TL;DR: In this paper, the authors attempt to measure the "neural complexity" of any regular set of binary strings, that is, to quantify the size of a recurrent continuous-valued neural network that is needed for correctly classifying the given regular set.

Journal ArticleDOI
TL;DR: The use of Fourier-type activation functions in fully recurrent neural networks; the main theoretical advantage is that, in principle, the problem of recovering internal coefficients from input/output data is solvable in closed form.

Journal ArticleDOI
TL;DR: This paper assumes epoch-based training and derives a new error function having several desirable properties absent from the traditional sum-of-squares error function; it also argues for skip (shortcut) connections where appropriate, and for a bipolar sigmoid yielding values over the [−1, 1] interval.

Journal ArticleDOI
TL;DR: Simulations on the Traveling Salesman Problem (TSP) have shown that the proposed chaotic neural network can converge to the global minimum or its approximate solutions more efficiently than the Hopfield network.


Journal ArticleDOI
TL;DR: A new recurrent dynamic neural network approach to noisy signal representation and processing problems, which solves for the sets of representation coefficients required to model a given signal in terms of elementary basis signals.

Journal ArticleDOI
TL;DR: This paper proves that, in the case of local minima free error functions, terminal attractor algorithms guarantee that the optimal solution is reached in a number of steps that is independent of the cost function.