
Showing papers on "Backpropagation published in 1970"


Journal ArticleDOI
TL;DR: A modification of the classical quasi-Newton method that takes the structure of the network into account; numerical results show a clear gain in computational time without increasing the memory requirement.
Abstract: The backpropagation algorithm is the most popular procedure to train self-learning feedforward neural networks. However, the rate of convergence of this algorithm is slow because the backpropagation algorithm is mainly a steepest descent method. Several researchers have proposed other approaches to improve the rate of convergence: conjugate gradient methods, dynamic modification of learning parameters, full quasi-Newton or Newton methods, stochastic methods, etc. Quasi-Newton methods were criticized because they require significant computation time and memory space to perform the update of the Hessian matrix. This paper proposes a modification to the classical approach of the quasi-Newton method that takes into account the structure of the network. With this modification, the size of the problem is not proportional to the total number of weights but depends on the number of neurons in each layer. The modified quasi-Newton method is tested on two examples and is compared to classical approaches. The numerical results show that this approach represents a clear gain in terms of computational time without increasing the requirement of memory space.
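The following sketch illustrates the block-wise idea in Python, assuming NumPy and flattened per-layer weight vectors. It is a simplified per-layer variant for illustration, not the paper's exact algorithm (which partitions by the neurons of each level): one BFGS inverse-Hessian block is kept per layer, so stored matrices scale with individual layers rather than with the total number of weights, while the gradients are assumed to come from ordinary backpropagation.

```python
# A minimal sketch, assuming NumPy and flattened per-layer weight vectors.
import numpy as np

def bfgs_update(H, s, y, eps=1e-10):
    """Standard BFGS update of an inverse-Hessian approximation H."""
    sy = float(s @ y)
    if sy < eps:                      # skip if the curvature condition fails
        return H
    rho = 1.0 / sy
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

class BlockQuasiNewton:
    """Keeps one inverse-Hessian block per layer, so memory scales with the
    largest block rather than with the total number of network weights."""
    def __init__(self, block_sizes):
        self.H = [np.eye(n) for n in block_sizes]
        self.prev_w = [None] * len(block_sizes)
        self.prev_g = [None] * len(block_sizes)

    def step(self, weights, grads, lr=0.1):
        # weights, grads: lists of flattened per-layer vectors from backprop
        new_weights = []
        for k, (w, g) in enumerate(zip(weights, grads)):
            if self.prev_w[k] is not None:
                s = w - self.prev_w[k]          # change in weights
                y = g - self.prev_g[k]          # change in gradient
                self.H[k] = bfgs_update(self.H[k], s, y)
            self.prev_w[k], self.prev_g[k] = w.copy(), g.copy()
            new_weights.append(w - lr * self.H[k] @ g)
        return new_weights
```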

6 citations


Journal ArticleDOI
TL;DR: It is felt that the neural network shows superior performance in fault diagnosis, whereas conventional techniques such as spectral analysis require complete processing of an input signal to reach a diagnosis.
Abstract: A method is presented for multiple fault diagnosis by means of an Artificial Neural Network (ANN). The major advantage of using an ANN as opposed to any other technique for fault diagnosis in condition-based maintenance is that the network produces an immediate decision with minimal computation for a given input vector, whereas conventional techniques like spectral analysis require complete processing of an input signal to reach a diagnosis. The basic strategy is to train a neural network to recognize the behavior of the machine condition as well as the behavior of the possible system faults. A multi-layer feedforward network with the backpropagation learning algorithm is used in this paper. The network is trained on examples with a known input vector of vibration signatures and a known output vector of memberships of the possible faults. Field data of a lubricating oil pump for a residual gas compressor from an LPG recovery plant are used for training and testing the network. For diagnostic purposes, five different states are considered. The correct classification rate during training and testing is very high. On the basis of the results presented, it is felt that the application of the neural network shows superior performance in fault diagnosis.
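As an illustration of the strategy described above, the sketch below trains a small one-hidden-layer feedforward network with plain backpropagation to map a vibration-signature vector to membership values for a handful of fault classes; the layer sizes, learning rate, and random stand-in data are assumptions made for the example, not the paper's field data.

```python
# A minimal sketch of backpropagation for multi-fault membership outputs.
import numpy as np

rng = np.random.default_rng(0)
n_features, n_hidden, n_faults = 8, 12, 5   # e.g. five machine states/faults

W1 = rng.normal(0, 0.5, (n_features, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, (n_hidden, n_faults));   b2 = np.zeros(n_faults)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Dummy training set standing in for measured vibration signatures (X)
# and known fault-membership vectors (Y).
X = rng.normal(size=(40, n_features))
Y = (rng.random((40, n_faults)) > 0.5).astype(float)

lr = 0.5
for epoch in range(2000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    y_hat = sigmoid(h @ W2 + b2)
    # backward pass (squared-error loss)
    d_out = (y_hat - Y) * y_hat * (1 - y_hat)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_hid / len(X); b1 -= lr * d_hid.mean(axis=0)

# At diagnosis time a single forward pass yields the fault memberships,
# which is what gives the immediate, low-computation decision.
print(np.round(sigmoid(sigmoid(X[:1] @ W1 + b1) @ W2 + b2), 2))
```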

3 citations


01 Jan 1970
TL;DR: A method for estimating the water level at Sungai Bedup in Sarawak is presented; it makes use of an Artificial Neural Network (ANN), a tool capable of modeling various nonlinear hydrological processes.
Abstract: A method for estimating the water level at Sungai Bedup in Sarawak is presented here. The method makes use of an Artificial Neural Network (ANN), a tool capable of modeling various nonlinear hydrological processes. The ANN was chosen for its ability to generalize patterns in imprecise, noisy, and ambiguous input and output data sets. In this study, networks were developed to forecast the daily water level for the Sungai Bedup station. Specially designed networks were simulated in MATLAB 6.5 using data obtained from the Drainage and Irrigation Department. Various training parameters were considered to achieve the best result. A recurrent ANN trained with the backpropagation algorithm was adopted for this study.
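The sketch below shows one way such a recurrent backpropagation forecaster can be framed, using a synthetic daily series and an Elman-style hidden state. For brevity the gradient is truncated to the current time step rather than propagated through time, so this is an illustrative simplification of the study's setup, not its actual MATLAB model.

```python
# A minimal sketch: next-day water-level forecasting with a simple recurrent
# network trained by (truncated) backpropagation; the series is synthetic.
import numpy as np

rng = np.random.default_rng(1)
series = np.sin(np.linspace(0, 20, 400)) + 0.1 * rng.normal(size=400)  # stand-in daily levels

n_hidden, lr = 10, 0.05
Wx = rng.normal(0, 0.3, (1, n_hidden))          # input -> hidden
Wh = rng.normal(0, 0.3, (n_hidden, n_hidden))   # hidden -> hidden (recurrence)
Wo = rng.normal(0, 0.3, (n_hidden, 1))          # hidden -> output
tanh = np.tanh

for epoch in range(30):
    h = np.zeros(n_hidden)
    for t in range(len(series) - 1):
        x, target = series[t:t + 1], series[t + 1]
        h_new = tanh(x @ Wx + h @ Wh)            # recurrent hidden state
        y = (h_new @ Wo).item()                  # one-step-ahead prediction
        err = y - target
        # backpropagate through the current step only (truncated)
        dWo = np.outer(h_new, err)
        dh = err * Wo[:, 0] * (1 - h_new**2)
        Wo -= lr * dWo
        Wx -= lr * np.outer(x, dh)
        Wh -= lr * np.outer(h, dh)
        h = h_new

# Forecast for the day after the last observation.
print("next-day forecast:", (tanh(series[-1:] @ Wx + h @ Wh) @ Wo).item())
```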

2 citations


Journal ArticleDOI
TL;DR: This communication shows a new approach to inverse problems that uses a neural network and an expert system to determine the external heat transfer given observation of the temperature history at one or more interior points.
Abstract: Inverse problems are problems of determining causes from knowledge of their effects. The object of the inverse heat conduction problem is to determine the external heat transfer (the cause) given observations of the temperature history at one or more interior points (the effect). This communication presents a new approach to inverse problems that uses a neural network and an expert system. The examples shown in this paper were computed using backpropagation software (the neural network) and a system based on Lukasiewicz's many-valued logic (the expert system). Neural networks evolved from the effort to model the functioning of the human brain, while expert systems take the place of expert knowledge in several areas.
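A hedged sketch of that cause-from-effect mapping is shown below: a backpropagation regressor (here scikit-learn's MLPRegressor, standing in for the paper's backpropagation software) is trained to map a sampled interior temperature history to the boundary heat flux that produced it. The toy forward model and all parameters are assumptions made for illustration; the expert-system component is not sketched.

```python
# A minimal sketch of learning the inverse mapping: temperature history -> heat flux.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def toy_forward_model(q, n_times=10):
    """Stand-in for the direct heat-conduction solution: maps a boundary
    heat flux q to a noisy temperature history at one interior point."""
    t = np.linspace(0.1, 1.0, n_times)
    return q * (1 - np.exp(-t)) + 0.01 * rng.normal(size=n_times)

fluxes = rng.uniform(0.5, 5.0, size=200)                       # causes
histories = np.array([toy_forward_model(q) for q in fluxes])   # effects

# Train the inverse model on (effect, cause) pairs.
inverse_net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0)
inverse_net.fit(histories, fluxes)

print(inverse_net.predict(histories[:3]), fluxes[:3])
```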

2 citations


Journal Article
TL;DR: Feed-forward backpropagation (FFBP) with the TRAINBR training function, LEARNGD adaption learning function, and SSE performance function was the final network type tried; the minimum error and a 0.96 correlation coefficient were the end results.
Abstract: Artificial neural networks (ANNs) are the result of academic investigations that use mathematical formulations to model nervous system operations. Neural networks (NNs) represent a meaningfully different approach to using computers, one that learns patterns and relationships in data. In this paper, the compressive strength (CS) of lightweight concrete with 0, 20, 30, and 50 percent scoria in place of sand, and with different water-cement ratios and cement contents, has been studied for 288 cylindrical samples. Out of these, 36 samples were randomly selected for this study. The CS of these samples has been used to train ANNs for CS prediction to obtain the optimum value. The ANNs have been built with MATLAB software, with the minimum training error and the maximum correlation coefficient as the final goals. For these reasons, feed-forward backpropagation (FFBP) with the TRAINBR training function, LEARNGD adaption learning function, and SSE performance function was the final network type tried. The final FFBP network was 3-10-1 (3 inputs, 10 neurons in the hidden layer, and 1 output), which gave the minimum error and a 0.96 correlation coefficient.
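For orientation, the sketch below builds a rough Python analogue of the 3-10-1 network: three inputs (assumed here to be scoria replacement, water-cement ratio, and cement content), ten hidden neurons, and one compressive-strength output. Scikit-learn's MLPRegressor has no exact counterpart of MATLAB's TRAINBR/LEARNGD/SSE settings, and the placeholder data are synthetic, so this only mirrors the topology, not the study's results.

```python
# A rough 3-10-1 analogue with synthetic placeholder data (not the study's samples).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Three assumed inputs per sample: scoria replacement (%), water-cement ratio,
# cement content (kg/m^3); one output: compressive strength (MPa).
X = np.column_stack([
    rng.choice([0, 20, 30, 50], size=36),
    rng.uniform(0.35, 0.55, size=36),
    rng.uniform(300, 450, size=36),
])
y = (40 - 0.1 * X[:, 0] - 30 * (X[:, 1] - 0.4) + 0.02 * (X[:, 2] - 350)
     + rng.normal(0, 1.5, size=36))              # toy strength values

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), max_iter=10000, random_state=0),
)
model.fit(X, y)
print("R^2 on training data:", round(model.score(X, y), 3))
```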

2 citations


Journal ArticleDOI
TL;DR: Two hierarchical diagnostic approaches based on hierarchically ordered BPNs and elliptical neural networks are developed to overcome some of the limitations of standard BPNs; their applicability and reliability are tested and compared using a hydrocarbon chlorination plant troubleshooting simulator.
Abstract: Backpropagation Neural Networks (BPNs) with linear activation functions have some shortcomings, including long training times, difficulty in determining the number of hidden-layer neurons, and extrapolation problems, which reduce their applicability for real-time fault diagnosis of complex processes. Two hierarchical diagnostic approaches that are based on hierarchically ordered BPNs and elliptical neural networks are developed to overcome some of these limitations. Their applicability and reliability are tested and compared using a hydrocarbon chlorination plant troubleshooting simulator.
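The hierarchical ordering can be pictured as in the sketch below: a top-level backpropagation classifier first assigns a measurement to a fault family, and a second-level network specialised to that family identifies the specific fault. The data, the two-level grouping, and the layer sizes are illustrative assumptions, not the paper's plant simulator or its elliptical networks.

```python
# A minimal sketch of hierarchically ordered backpropagation classifiers.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_features = 6

# Toy process measurements labelled with (fault family, specific fault).
X = rng.normal(size=(300, n_features))
family = (X[:, 0] > 0).astype(int)                  # two fault families
specific = family * 2 + (X[:, 1] > 0).astype(int)   # two faults per family

# Top-level network: measurement -> fault family.
top_level = MLPClassifier(hidden_layer_sizes=(10,), max_iter=3000, random_state=0)
top_level.fit(X, family)

# Second-level networks: one per family, measurement -> specific fault.
second_level = {}
for fam in np.unique(family):
    idx = family == fam
    net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=3000, random_state=0)
    net.fit(X[idx], specific[idx])
    second_level[fam] = net

def diagnose(x):
    fam = int(top_level.predict(x.reshape(1, -1))[0])
    return fam, int(second_level[fam].predict(x.reshape(1, -1))[0])

print(diagnose(X[0]))
```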

1 citation