
Showing papers in "Neurocomputing in 1994"


Journal ArticleDOI
TL;DR: The learning and generalization characteristics of the random vector version of the Functional-link net are explored and compared with those attainable with the GDR algorithm, and it seems that 'overtraining' occurs for stochastic mappings.

876 citations
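The random-vector functional-link idea (freeze the hidden-layer weights at random values and train only the output layer) can be sketched in a few lines. This is an illustrative toy, not the paper's setup: the tanh enhancement nodes, the least-squares output fit, and all sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rvfl_fit(X, y, n_hidden=50):
    """Fit only the output weights; hidden weights stay random and frozen."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # fixed random projection
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # random hidden features
    Z = np.hstack([H, X])                         # hidden features plus raw inputs
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)  # train output layer by least squares
    return W, b, beta

def rvfl_predict(X, W, b, beta):
    Z = np.hstack([np.tanh(X @ W + b), X])
    return Z @ beta

# toy regression: learn y = sin(x) on [-3, 3]
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, beta = rvfl_fit(X, y)
err = np.mean((rvfl_predict(X, W, b, beta) - y) ** 2)
```

Because only the linear output layer is trained, the whole "training" step is a single least-squares solve rather than an iterative gradient descent.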


Journal ArticleDOI
TL;DR: A system based on Kohonen's SOM (Self-Organizing Map) for protein classification according to Circular Dichroism (CD) spectra is described, and proteins with different secondary structures are clearly separated through a completely unsupervised training process.

131 citations
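A minimal Kohonen SOM sketch, showing how a completely unsupervised map can separate two well-separated clusters (standing in here for CD spectra of different secondary-structure classes). The 1-D map, Gaussian neighborhood, and decay schedules are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

def train_som(data, n_units=10, epochs=50, lr=0.5, sigma=2.0):
    """Train a 1-D self-organizing map with a Gaussian neighborhood."""
    weights = rng.normal(size=(n_units, data.shape[1]))
    for _ in range(epochs):
        for x in data:
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best-matching unit
            d = np.abs(np.arange(n_units) - bmu)                  # grid distance to BMU
            h = np.exp(-d ** 2 / (2 * sigma ** 2))                # neighborhood function
            weights += lr * h[:, None] * (x - weights)
        lr *= 0.95
        sigma *= 0.95
    return weights

# two well-separated synthetic clusters stand in for two structural classes
a = rng.normal(0.0, 0.1, size=(30, 4))
b = rng.normal(3.0, 0.1, size=(30, 4))
w = train_som(np.vstack([a, b]))

# after unsupervised training, the two classes map onto disjoint units
bmu_a = {int(np.argmin(np.linalg.norm(w - x, axis=1))) for x in a}
bmu_b = {int(np.argmin(np.linalg.norm(w - x, axis=1))) for x in b}
```

No labels are used anywhere in training; the separation emerges purely from the input statistics, which is the point of the CD-spectra result.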


Journal ArticleDOI
TL;DR: According to computer simulations using the mirror symmetry problem, the Weights power method has shown the best performance with respect to size reduction, generalization performance, and the amount of computation required.

80 citations


Journal ArticleDOI
TL;DR: A multilayer neural network development environment, called ANNDE, is presented for implementing effective learning algorithms for the domain of engineering design using the object-oriented programming paradigm.

66 citations


Journal ArticleDOI
TL;DR: It is shown that this number of iterations can be greatly reduced if, after each iteration step, the weights are changed by a simple mutation, as is done in genetic algorithms.

50 citations
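The idea of following each gradient step with a GA-style mutation that is kept only when it lowers the error can be sketched on a toy least-squares problem. The linear model, step size, and mutation scale below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(2)

def loss(w, X, y):
    return np.mean((X @ w - y) ** 2)

def grad(w, X, y):
    return 2 * X.T @ (X @ w - y) / len(y)

# toy data: a linear target the model can represent exactly
X = rng.normal(size=(100, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true

w = np.zeros(3)
for _ in range(200):
    w -= 0.1 * grad(w, X, y)                     # ordinary gradient step
    trial = w + rng.normal(scale=0.01, size=3)   # GA-style random mutation
    if loss(trial, X, y) < loss(w, X, y):        # keep the mutation only if it helps
        w = trial
```

Because a mutation is accepted only when it improves the error, the extra step can never make training worse; at best it jumps the weights out of shallow flat regions that slow plain gradient descent.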


Journal ArticleDOI
TL;DR: It is shown that with careful programming and slight modification to the algorithm, it is possible to achieve a performance gain up to 50 times when compared with conventional implementation.

39 citations


Journal ArticleDOI
TL;DR: A new algorithm (affine shaker) is proposed that uses stochastic search based on function values and affine transformations of the local search region that can be interesting for special-purpose VLSI implementations and for non-differentiable functions.

39 citations
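A rough sketch of the affine-shaker idea: search using only function values, sampling within a local region that is enlarged after a success and contracted after a failure. The axis-aligned box and the fixed expansion factor are simplifying assumptions; the paper's method transforms the search region with general affine maps.

```python
import numpy as np

rng = np.random.default_rng(3)

def shaker(f, x0, steps=500, rho=1.3):
    """Derivative-free local search driven only by function values."""
    x = np.asarray(x0, dtype=float)
    box = np.ones_like(x)          # per-coordinate extent of the search region
    fx = f(x)
    for _ in range(steps):
        d = rng.uniform(-box, box)  # trial displacement inside the region
        fd = f(x + d)
        if fd < fx:
            x, fx = x + d, fd
            box *= rho              # success: enlarge the region
        else:
            box /= rho              # failure: contract the region
    return x, fx

# works on a non-differentiable objective, |x| + |y|
x, fx = shaker(lambda v: np.abs(v).sum(), [2.0, -3.0])
```

Since the method needs no gradients, it applies directly to non-differentiable objectives, which is one of the use cases the abstract highlights.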


Journal ArticleDOI
TL;DR: The results showed that the Multilayer Perceptrons networks were superior in memory usage and classification time, however, they suffered from long training time and the error rate was slightly higher than that of Radial Basis Function networks.

38 citations


Journal ArticleDOI
TL;DR: An application of the LASSO model to a phoneme recognition problem and the encoding of supervision data during learning lead to results significantly better than those with the standard use of a self-organizing map, and almost as good as those achieved with a multilayer perceptron.

23 citations


Journal ArticleDOI
TL;DR: It is shown that batch-mode methods are currently more advanced; however, problems with very large training sets or in non-stationary environments will require the development of successful sequential approaches.

18 citations


Journal ArticleDOI
TL;DR: A comparison of ‘plain backpropagation’ to other standard optimization algorithms shows a clear advantage of higher order methods with respect to quality of solution as well as execution times.

Journal ArticleDOI
TL;DR: A neural scheme for computing optical flow from the intensity image and its time derivative is presented; the scheme is accurate on natural scenes and avoids some critical shortcomings of current neural schemes for optical flow computation.

Journal ArticleDOI
TL;DR: This paper proposes an efficient approach to TC using neural networks with Backward Lateral Inhibitions (BLI) for constraint management; the neural network model with BLI is well-adapted to this task.

Journal ArticleDOI
TL;DR: The proposed Time-Delay ART has many features such as its self-organization, classification, and memorization abilities for spatio-temporal patterns, and can deal with many sequences of different lengths.


Journal ArticleDOI
TL;DR: The dynamic OPDF search process, combined with an adaptive stratified sampling technique, completes the modifications of the basic method and is applied to layered artificial neural networks of generalized, fully interconnected, continuous perceptrons.

Journal ArticleDOI
TL;DR: An off-line trained and supervised neural network is proposed to decode convolutional codes one block at a time, using a Hamming neural network together with a winner-take-all circuit at each stage to select the decoded sequence.
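The matching-plus-selection stage of a Hamming network with a winner-take-all circuit can be sketched as follows. The trivial repetition code and the bipolar scoring are illustrative assumptions for the sketch, not the paper's convolutional-code setup.

```python
import numpy as np

# stored codewords of a toy length-5 repetition code (an assumption,
# standing in for the candidate sequences at one decoding stage)
codewords = np.array([[0, 0, 0, 0, 0],
                      [1, 1, 1, 1, 1]])

def hamming_decode(received):
    """Score each codeword by bit agreement, then pick the winner."""
    bipolar = 2 * np.asarray(received) - 1   # map {0,1} -> {-1,+1}
    W = 2 * codewords - 1                    # matching-layer weights
    scores = W @ bipolar                     # agreement = n - 2 * Hamming distance
    winner = int(np.argmax(scores))          # winner-take-all selection
    return codewords[winner]

# a received block with two bit errors still decodes to the nearer codeword
out = hamming_decode([1, 1, 0, 1, 0])
```

The matching layer computes a correlation that is an affine function of Hamming distance, so the winner-take-all stage selects the minimum-distance codeword without ever computing distances explicitly.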

Journal ArticleDOI
TL;DR: The use of a new quality function, instead of classification score, makes it possible to avoid the use of Monte Carlo events, with their inherent misleading simplifications and inaccuracies, to directly optimize the desired quantity, the significance of source detection, and to obtain the complicated nonlinear boundaries of the γ-cluster.

Journal ArticleDOI
TL;DR: An application of neural networks to decision making is presented; the networks are trained on data collected by checking the ecosystem every day for chemical pollutants exceeding a normal threshold.


Journal ArticleDOI
TL;DR: The causes of this phenomenon are explored by studying the internal behavior of the auto-associative network as it learns to reconstruct the input vectors as linear combinations of intrinsic basis vectors each of which is defined by the weights of connections fanning out from a single hidden unit to the output layer.


Journal ArticleDOI
TL;DR: Recurrent connections in dynamic networks are modeled on actual vestibular commissural fibers, which are surgically accessible; this anatomical feature enables clear, experimentally testable predictions to be derived from the dynamic models, involving the behavior of VOR neurons following lesions of the vestibular commissures.

Journal ArticleDOI
TL;DR: A mechanism is proposed to reduce the effects of noise and other defects present in documents reproduced by fax transmission or copy machines and to improve the print quality for a large set of fonts without training based on a neural network model.

Journal ArticleDOI
TL;DR: This study investigates the precision and range requirements for weights in feed-forward neural network classifiers using backpropagation training with simulated signals where the authors could control the difficulty of the problem.

Journal ArticleDOI
TL;DR: A new design technique that builds a feedforward net for an arbitrary set of binary associations by decomposing the mapping into a cascade of the so-called primitive operations, which can be readily employed in constructing feedforward nets for binary association problems of extremely large size.

Journal ArticleDOI
TL;DR: Several different backpropagation networks are presented to solve the satisfiability of conjunctive normal form Boolean clauses (CNF-SAT), a well-known and very important NP-hard problem.

Journal ArticleDOI
TL;DR: A general convergence theorem that unifies the Hopfield-Cohen-Grossberg and Werbos-Rumelhart theorems is given, and a nonlinear Hebbian learning rule capable of self-determining ANN architectures is derived.