
Showing papers in "Neurocomputing in 1991"


Journal ArticleDOI
TL;DR: The theory and practice of the multilayer perceptron are reviewed, addressing a range of issues that are important when applying this approach to practical problems.

428 citations
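As a reminder of the model class the review above addresses, below is a minimal sketch of a multilayer perceptron trained by plain backpropagation on the XOR task; the 2-4-1 architecture, sigmoid units, learning rate and task are illustrative assumptions, not details taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # XOR inputs (assumed task)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(scale=0.5, size=(2, 4))   # input -> hidden weights (4 hidden units assumed)
    b1 = np.zeros(4)
    W2 = rng.normal(scale=0.5, size=(4, 1))   # hidden -> output weights
    b2 = np.zeros(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 0.5                                  # learning rate (assumed)
    for epoch in range(10000):
        h = sigmoid(X @ W1 + b1)              # forward pass
        out = sigmoid(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)   # backward pass, squared-error loss
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

    print(np.round(out, 2))                   # ideally close to [0, 1, 1, 0]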


Journal ArticleDOI
TL;DR: This approach has several significant advantages over conventional forecasting methods such as regression and Box-Jenkins; besides its simplicity, a major advantage is that it requires no assumptions about the underlying function or model to be used.

158 citations
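The windowing idea behind such neural forecasters can be sketched as follows: the network is trained to map the last p observations to the next value, with no assumed model form. The synthetic series, window length p, network size and the use of scikit-learn's MLPRegressor are assumptions made only for illustration.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    t = np.arange(300)
    series = np.sin(2 * np.pi * t / 25) + 0.1 * rng.normal(size=t.size)   # assumed series

    p = 6                                            # lag window length (assumed)
    X = np.array([series[i:i + p] for i in range(len(series) - p)])
    y = series[p:]

    model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000, random_state=0)
    model.fit(X[:250], y[:250])                      # train on the first part of the series
    print("test MSE:", np.mean((model.predict(X[250:]) - y[250:]) ** 2))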


Journal ArticleDOI
C.H. de Groot1, D. Würtz1
TL;DR: As the results show, the principle of parsimony becomes evident in the forecasting process: models with fewer parameters generally produce better results.

131 citations
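A toy illustration of the parsimony principle noted above: models with more free parameters may fit the training window better yet forecast worse out of sample. Polynomial trend models are used here purely as a stand-in for the compared network models, which are not reproduced.

    import numpy as np

    rng = np.random.default_rng(2)
    t = np.arange(60, dtype=float)
    y = 0.05 * t + np.sin(t / 5) + 0.2 * rng.normal(size=t.size)   # assumed series
    train, test = slice(0, 45), slice(45, 60)

    for degree in (1, 3, 9, 15):                     # increasing number of parameters
        coeffs = np.polyfit(t[train], y[train], degree)
        mse = float(np.mean((np.polyval(coeffs, t[test]) - y[test]) ** 2))
        print(f"degree {degree:2d} ({degree + 1} parameters): out-of-sample MSE = {mse:.3g}")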


Journal ArticleDOI
TL;DR: Abduction not only classifies the distinct type of reasoning performed when neural networks are applied, but also gives a logical framework for expanding current neural network research to include network concepts not constrained by neuron analogies.

67 citations


Journal ArticleDOI
TL;DR: Peculiarities of designing a multiprocessor computer system for analyzing composite images on the basis of universal homogeneous neural networks with close nonlinear coupling are considered, and some possible patterns of collective activity in these neural networks are investigated.

28 citations


Journal ArticleDOI
S. Amato1, Bruno Apolloni1, G. Caporali1, U. Madesani1, Anna Maria Zanaboni1 
TL;DR: Two ways of embedding simulated annealing into the usual gradient descent method are examined, with the aim of reaching good minima of the error function.

27 citations
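One way such an embedding can look is sketched below, under assumptions: an ordinary gradient step is followed by a random perturbation of the weights that is accepted with the Metropolis rule at a decreasing temperature. The toy error surface, step sizes and cooling schedule are illustrative and are not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(3)

    def loss(w):                                   # toy error surface with many minima (assumed)
        return np.sum(w ** 2) + np.sum(np.sin(3 * w))

    def grad(w):
        return 2 * w + 3 * np.cos(3 * w)

    w = rng.normal(size=5)
    lr, T = 0.05, 1.0                              # learning rate and initial temperature (assumed)
    for step in range(2000):
        w = w - lr * grad(w)                       # usual gradient descent step
        proposal = w + rng.normal(scale=0.1, size=w.shape)
        delta = loss(proposal) - loss(w)
        if delta < 0 or rng.random() < np.exp(-delta / T):
            w = proposal                           # annealing move accepted (Metropolis rule)
        T *= 0.995                                 # cooling schedule

    print("final loss:", round(float(loss(w)), 4))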


Journal ArticleDOI
TL;DR: A model of machine learning in engineering design, called PERHID, is presented based on the perceptron learning algorithm with a two-layer neural network, and its learning is compared with that of the previously developed single-layer perceptron.

27 citations
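For reference, the classical single-layer perceptron learning rule that PERHID builds on can be sketched as below; the two-layer extension and the engineering-design data of the paper are not reproduced, and the AND task is an illustrative assumption.

    import numpy as np

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 0, 0, 1])               # linearly separable target (AND, assumed)

    w = np.zeros(2)
    b = 0.0
    for epoch in range(20):
        for xi, ti in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            w += (ti - pred) * xi             # perceptron weight update on mistakes
            b += (ti - pred)

    print([1 if xi @ w + b > 0 else 0 for xi in X])   # -> [0, 0, 0, 1]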


Journal ArticleDOI
TL;DR: The paper suggests a concept of neural gates that are similar to the processing elements in ANNs but generalized to handle various types of information, such as fuzzy, probabilistic and Boolean information, together.

21 citations
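Very loosely, a neural gate of the kind suggested above might be sketched as a single processing element whose combination rule can be switched between Boolean, probabilistic and fuzzy conjunction. The specific rules below (crisp AND, product rule, minimum t-norm) are common textbook choices assumed for illustration, not the paper's formulation.

    def neural_gate(a, b, mode="boolean"):
        """One processing element whose conjunction rule depends on the information type."""
        if mode == "boolean":         # crisp logic
            return float(bool(a) and bool(b))
        if mode == "probabilistic":   # product rule (independence assumed)
            return a * b
        if mode == "fuzzy":           # fuzzy AND as the minimum t-norm
            return min(a, b)
        raise ValueError(mode)

    print(neural_gate(1, 0),
          neural_gate(0.8, 0.5, "probabilistic"),
          neural_gate(0.8, 0.5, "fuzzy"))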


Journal ArticleDOI
TL;DR: This work demonstrates that system identification can be performed using feed-forward neural networks, and that the prediction horizon is limited by the number of nodes in the hidden layer and should be much smaller than the characteristic time of the process.

21 citations
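The iterated-prediction setting can be sketched as follows: a one-step-ahead predictor x(t+1) = f(x(t), ..., x(t-p+1)) is fitted with a feed-forward network and then iterated on its own outputs, which is where the limited prediction horizon shows up. The chaotic logistic map, the lag p and the network size are assumptions for illustration.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    x = np.empty(500)
    x[0] = 0.3
    for t in range(499):
        x[t + 1] = 3.7 * x[t] * (1 - x[t])          # chaotic logistic map (assumed system)

    p = 3                                            # lag window (assumed)
    X = np.array([x[i:i + p] for i in range(len(x) - p)])
    y = x[p:]
    net = MLPRegressor(hidden_layer_sizes=(12,), max_iter=5000, random_state=0).fit(X, y)

    window = list(x[-p:])                            # iterate the one-step predictor
    for step in range(5):
        window.append(float(net.predict([window[-p:]])[0]))
    print(window[-5:])                               # multi-step predictions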


Journal ArticleDOI
TL;DR: A technique for solving CNF-SAT by means of a class of simulated neural networks trained through a supervised procedure is presented, and the results of significant tests are described.

19 citations
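The paper's supervised training procedure is not reproduced here; the sketch below only illustrates the usual neural-style encoding of CNF-SAT as energy minimization, where the energy is the number of unsatisfied clauses and the state is improved by simple variable flips. The example formula is an assumption.

    import random

    # clauses as tuples of literals: i stands for x_i, -i for NOT x_i (assumed example formula)
    clauses = [(1, 2), (-1, 3), (-2, -3), (2, 3)]
    n_vars = 3

    def energy(assign):
        """Number of unsatisfied clauses under the assignment."""
        return sum(1 for clause in clauses
                   if not any((lit > 0) == assign[abs(lit)] for lit in clause))

    random.seed(0)
    assign = {i: random.random() < 0.5 for i in range(1, n_vars + 1)}
    for step in range(200):
        if energy(assign) == 0:                 # all clauses satisfied
            break
        v = random.randint(1, n_vars)
        flipped = dict(assign)
        flipped[v] = not flipped[v]
        if energy(flipped) <= energy(assign):   # accept non-increasing flips
            assign = flipped

    print(assign, "unsatisfied clauses:", energy(assign))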


Journal ArticleDOI
TL;DR: The goal of the algorithm is to find a draw pattern for the game of ‘Hip’, in which neither player may mark all four corners of a square.
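The draw condition referred to above can be made concrete with a small checker: a draw pattern is a two-colouring of the board in which neither player's marks contain the four corners of any square, tilted squares included. The checker below is only an illustration of that constraint, not the paper's neural algorithm.

    import itertools

    def has_square(cells):
        """Return True if any four cells form the corners of a square (any orientation)."""
        cells = set(cells)
        for (r1, c1), (r2, c2) in itertools.combinations(cells, 2):
            dr, dc = r2 - r1, c2 - c1
            # treat the pair as one edge and look for the two remaining corners
            # on either side of that edge
            if ((r1 + dc, c1 - dr) in cells and (r2 + dc, c2 - dr) in cells) or \
               ((r1 - dc, c1 + dr) in cells and (r2 - dc, c2 + dr) in cells):
                return True
        return False

    print(has_square({(0, 0), (0, 2), (2, 0), (2, 2)}))   # True: axis-aligned square
    print(has_square({(0, 1), (1, 2), (2, 1), (1, 0)}))   # True: tilted square
    print(has_square({(0, 0), (0, 1), (1, 2), (3, 3)}))   # False: no square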

Journal ArticleDOI
TL;DR: A simulation method on transputers for backpropagation networks of any feed-forward topology is described, together with the problems caused by parallelizing the algorithms and the performance achievable with different transputer topologies.

Journal ArticleDOI
Petri A. Jokinen1
TL;DR: The dynamically capacity allocating (DCA) network model is able to learn incrementally as more information becomes available and to avoid the spatially unselective forgetting of commonly used learning algorithms.
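The DCA model itself is not reproduced here; as a loose stand-in, the sketch below shows the generic idea of capacity allocation: a new local unit is added only when a sample is both poorly predicted and far from existing units, so knowledge stored in other units is not overwritten. All thresholds and the target function are assumptions.

    import numpy as np

    centers, weights = [], []                        # local (radial) units allocated so far
    width, dist_thresh, err_thresh = 0.3, 0.25, 0.1  # all thresholds assumed

    def predict(x):
        if not centers:
            return 0.0
        acts = np.exp(-((x - np.array(centers)) ** 2) / width ** 2)
        return float(np.dot(weights, acts))

    rng = np.random.default_rng(5)
    for x in rng.uniform(0, 1, size=200):            # samples arrive one at a time
        target = np.sin(2 * np.pi * x)               # assumed target function
        err = target - predict(x)
        far = not centers or min(abs(x - c) for c in centers) > dist_thresh
        if abs(err) > err_thresh and far:            # allocate capacity only where it is needed
            centers.append(float(x))
            weights.append(float(err))

    print("units allocated:", len(centers))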


Journal ArticleDOI
TL;DR: It is shown that the particular functionality of the computing units in connectionist networks plays only a negligible role for the emergent overall behavior of large networks.

Journal ArticleDOI
TL;DR: A new per-step minimization method for relaxation is introduced and compared favorably in performance with other relaxation methods, and a modified training procedure that requires neither a global norm operation nor division is proposed.

Journal ArticleDOI
TL;DR: This new modification of the structure of the Hopfield neural network employs a special feedback loop, called delayed self-feedback, to improve system stability, and is tested on the VLSI layer assignment problem.
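A very loose sketch of the idea named above, under assumptions: each unit of a Hopfield-style network additionally receives its own state from d steps earlier through a delayed self-feedback term. The random weights, delay and feedback gain are illustrative, and the paper's VLSI layer-assignment encoding is not reproduced.

    import numpy as np

    rng = np.random.default_rng(4)
    n, d, gain = 8, 2, 0.5                     # network size, delay, feedback gain (assumed)
    W = rng.normal(size=(n, n))
    W = (W + W.T) / 2                          # symmetric coupling weights
    np.fill_diagonal(W, 0.0)

    state = np.where(rng.normal(size=n) >= 0, 1.0, -1.0)
    history = [state.copy() for _ in range(d + 1)]   # buffer of past states for the delay

    for t in range(50):
        delayed = history[0]                   # each unit's own state from d steps back
        field = W @ state + gain * delayed     # delayed self-feedback added to the local field
        state = np.where(field >= 0, 1.0, -1.0)
        history.append(state.copy())
        history.pop(0)

    print(state)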

Journal ArticleDOI
TL;DR: The ergodic EDN_RP scheme has the best speed of convergence and the highest accuracy among all previous discretized two-action reward-penalty schemes, and is ε-optimal in all environments whose minimum penalty probability is less than 0.5.
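The specific ergodic scheme of the paper is not reproduced here; the sketch below only shows a generic discretized two-action reward-penalty automaton of the same family, in which the action probability moves on a grid of step 1/N, toward the chosen action on reward and away from it on penalty. The environment's penalty probabilities are assumptions.

    import random

    random.seed(0)
    N = 50                                   # resolution of the discretized probability (assumed)
    step = 1.0 / N
    penalty_prob = [0.2, 0.6]                # assumed environment: action 0 is the better one
    p0 = 0.5                                 # probability of choosing action 0

    for t in range(5000):
        action = 0 if random.random() < p0 else 1
        penalized = random.random() < penalty_prob[action]
        toward_0 = (action == 0) != penalized    # reward pulls toward the chosen action,
                                                 # penalty pushes away from it
        p0 = min(1.0 - step, p0 + step) if toward_0 else max(step, p0 - step)

    print("P(action 0) after learning:", round(p0, 2))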

Journal ArticleDOI
TL;DR: Using a certain preprocessing, the possibility of detecting computer viruses with the help of neural networks is investigated; even heavily modified variants of viruses and self-modifying viruses are found.

Journal ArticleDOI
TL;DR: One possible approach to speech recognition in the family of modern Indo-European languages, especially Slovak, is described, covering the process of parrot-like perception by a neural net and learning and recognition by a probabilistic neural net.

Journal ArticleDOI
TL;DR: The developed combination of networks forms a unique, internally consistent tool for biomolecular spectra analysis.

Journal ArticleDOI
TL;DR: The aim of this report is to give a short overview on state-of-the-art neural network VLSI activities.

Journal ArticleDOI
TL;DR: A new invariant filter based on Kohonen's idea of self-organization is proposed, which transforms 2-D images into a set of features invariant to changes in position, rotation and scale.

Journal ArticleDOI
TL;DR: A new algorithm for selecting initial conditions, called k-greedy, is presented; it is heuristically motivated by arguments based on the standard greedy algorithm and is substantially better, in both speed and the final objective-function value attained, than other neural network methods in the literature.



Journal ArticleDOI
TL;DR: The point-by-point, direct and dense read-out supports instantaneous wave-front sampling, which provides direction-finding capability as well as the possibility of wave-front correction for sharper image formation when coupled in situ with a neurocomputer.


Journal ArticleDOI
TL;DR: The relationship between the properties of intensively studied artificial neural networks and those of real biological neural networks is discussed, indicating that in biological neural nets memories would work on the basis of more complex attractors.

Journal ArticleDOI
TL;DR: This contribution describes a new software package that supports the creation and exploration of artificial neural network models.