Showing papers by "Ivan Petrović published in 1998"


Journal Article
TL;DR: A new Newton-type learning algorithm is proposed as a modification of the popular Levenberg-Marquardt learning algorithm and is compared with the original regarding convergence speed and computational complexity on four nonlinear test functions.
Abstract:  Multilayer perceptrons (MLP) are the most often used neural networks in function approximation applications. They learn by modifying the strength of interconnections between neurons, according to some specified rule called learning algorithm. Many different learning algorithms have been reported in the literature. The majority of them are based on gradient numerical optimization methods such as the steepest descent, conjugate gradient, quasi-Newton and Newton methods. In this paper we have proposed a new Newton-type learning algorithm which is a modification of the popular Levenberg-Marquardt learning algorithm. The algorithm has been compared with the original Levenberg-Marquardt algorithm regarding the convergence speed and the computation complexity on four nonlinear test functions. Also the effects of the data sets size and extremely high accuracy requirements on the efficiency of the algorithms have been analyzed. To provide the algorithms comparison as objective as possible, both algorithms were implemented on the same manner and the network weights were initialized equally for both of them. The proposed algorithm exhibited better performances in all test cases.

8 citations


Proceedings ArticleDOI
18 May 1998
TL;DR: In this paper, the basic principles of the self-tuning generalized predictive controller (GPC) and the self-tuning pole placement controller are presented, and a laboratory shell-and-tube heat exchanger is used to test the properties of the presented controllers.
Abstract: The basic principles of the self-tuning generalized predictive controller (GPC) and the self-tuning pole placement controller are presented. A laboratory shell-and-tube heat exchanger is used to test the properties of the presented controllers. The self-tuning GPC was compared with the self-tuning pole placement controller regarding parameter adjustment complexity, reference and disturbance step responses, and robustness to differences between the real process and its mathematical model.
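
For orientation, the standard GPC (as formulated by Clarke et al.) minimises a receding-horizon cost at every sampling instant; the paper's exact horizons and weighting are not stated here, but the cost has the form

J = \sum_{j=N_1}^{N_2} \left[ \hat{y}(t+j \mid t) - w(t+j) \right]^2 + \lambda \sum_{j=1}^{N_u} \left[ \Delta u(t+j-1) \right]^2,

where \hat{y}(t+j \mid t) is the predicted output, w(t+j) the reference trajectory, \Delta u the control increments, N_1 and N_2 the prediction horizons, N_u the control horizon and \lambda the control weighting.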

7 citations


Journal ArticleDOI
TL;DR: In this paper, a dynamical mathematical model of a water supply plant based on lumped parameters is described, and a concept of its control system is proposed that ensures output pressure stabilisation under conditions of variable water consumption and variable water resources.
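
As a rough illustration of what such a lumped-parameter model contains (the paper's actual equations are not reproduced here), each water well i can be described by a mass balance of the form

A_i \frac{dh_i}{dt} = q_{\mathrm{in},i}(t) - q_{\mathrm{pump},i}(t),

where h_i is the accumulation level, A_i the equivalent surface area, and q_{\mathrm{in},i} and q_{\mathrm{pump},i} the inflow and pumped outflow of well i.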

2 citations


Journal Article
TL;DR: Two new cascade-correlation learning networks (CCLNS1 and CCLNS2) are proposed, which enable smoothing of the error surface and exhibit much better performance than the original CCLN.
Abstract: A cascade correlation learning network (CCLN) is a popular supervised learning architecture that gradually grows hidden neurons with fixed nonlinear activation functions, adding them to the network one by one during the course of training. Because the activation functions are fixed, the cascaded connections from the existing neurons to the new candidate neuron are required to approximate high-order nonlinearity. The major drawback of a CCLN is that its error surface is very jagged and unsmooth, because the maximum correlation criterion consistently pushes the hidden neurons to their saturated extreme values instead of keeping them in their active region. To alleviate this drawback of the original CCLN, two new cascade-correlation learning networks (CCLNS1 and CCLNS2) are proposed, which enable smoothing of the error surface. Smoothing is performed by (re)training the gains of the hidden neurons' activation functions. In CCLNS1 smoothing is enabled by using the sign functions of the neurons' outputs in the cascaded connections, while in CCLNS2 each hidden neuron has two activation functions: a fixed one for the cascaded connections and a trainable one for the connections to the neurons in the output layer. The performance of the network structures is tested by training them to approximate three nonlinear functions. Both proposed structures exhibit much better performance than the original CCLN, with CCLNS1 giving slightly better results than CCLNS2.

1 citation


Journal Article
TL;DR: In this paper, a water supply plant control system is proposed that ensures output pressure stabilisation under conditions of variable water consumption and variable water resources; it consists of inner control loops that control the flow of each pump station and of a superimposed control loop that controls the output pressure of the water supply plant.
Abstract: A concept of a water supply plant control system is proposed that ensures output pressure stabilisation under conditions of variable water consumption and variable water resources. The control system consists of inner control loops that control the flow of each pump station and of a superimposed control loop that controls the output pressure of the water supply plant. The pressure controller calculates the cumulative reference value of the water flow for all controlled pumps, and from this value an algorithm for flow distribution calculates the reference flow for each inner loop, taking into account the level of water accumulation in each water well. Control system performance was investigated by simulations of the water supply plant Mala Mlaka near Zagreb. It has been shown that the system exhibits more robust behaviour if a fuzzy logic controller is used for the output pressure instead of a conventional PI controller.
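
A minimal sketch of the flow-distribution step described above: the pressure controller's cumulative flow reference is split into per-pump references. Weighting the split by the water level in each well is an assumption made here for illustration; the abstract does not give the actual distribution algorithm.

def distribute_flow(q_total, well_levels, q_max):
    # Split the cumulative flow reference q_total among the pumps in
    # proportion to each well's accumulation level, clipped to each
    # pump's maximum flow q_max[i].
    total_level = sum(well_levels)
    refs = []
    for level, qmax in zip(well_levels, q_max):
        share = q_total * level / total_level if total_level > 0 else 0.0
        refs.append(min(share, qmax))
    return refs

# Example: 300 units of flow split across three wells.
print(distribute_flow(300.0, [4.0, 2.0, 2.0], [200.0, 100.0, 100.0]))
# -> [150.0, 75.0, 75.0]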

1 citation