Author

Colin F. N. Cowan

Bio: Colin F. N. Cowan is an academic researcher from the University of Edinburgh. The author has contributed to research in topics: Adaptive filter & Recursive least squares filter. The author has an h-index of 21 and has co-authored 50 publications receiving 4,971 citations.

Papers
Journal Article • DOI
TL;DR: The authors propose an alternative learning procedure based on the orthogonal least-squares method, which provides a simple and efficient means for fitting radial basis function networks.
Abstract: The radial basis function network offers a viable alternative to the two-layer neural network in many applications of signal processing. A common learning algorithm for radial basis function networks is based on first choosing randomly some data points as radial basis function centers and then using singular-value decomposition to solve for the weights of the network. Such a procedure has several drawbacks, and, in particular, an arbitrary selection of centers is clearly unsatisfactory. The authors propose an alternative learning procedure based on the orthogonal least-squares method. The procedure chooses radial basis function centers one by one in a rational way until an adequate network has been constructed. In the algorithm, each selected center maximizes the increment to the explained variance or energy of the desired output and does not suffer numerical ill-conditioning problems. The orthogonal least-squares learning strategy provides a simple and efficient means for fitting radial basis function networks. This is illustrated using examples taken from two different signal processing applications.

3,414 citations
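The selection step described in this abstract admits a compact implementation. Below is a minimal sketch of the greedy orthogonal least-squares procedure, assuming Gaussian basis functions with a fixed width and treating every training sample as a candidate center; the function names, stopping tolerance, and toy data are illustrative assumptions, not the paper's own code.

```python
import numpy as np

def gaussian_design(X, centers, width):
    """Design matrix: one Gaussian basis function per candidate center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def ols_select_centers(X, y, width=1.0, tol=0.99):
    """Greedy orthogonal least-squares selection of RBF centers.

    Each step picks the candidate basis vector whose orthogonalized
    component explains the largest share of the output energy, and
    stops once the selected set accounts for `tol` of that energy.
    """
    P = gaussian_design(X, X, width)        # every sample is a candidate center
    selected, basis = [], []
    output_energy = y @ y
    explained = 0.0
    for _ in range(len(X)):
        best_err, best_j, best_w = -1.0, None, None
        for j in range(P.shape[1]):
            if j in selected:
                continue
            w = P[:, j].copy()
            for q in basis:                 # Gram-Schmidt against chosen basis
                w -= (q @ P[:, j]) / (q @ q) * q
            if w @ w < 1e-12:               # degenerate column, skip it
                continue
            err = (w @ y) ** 2 / (w @ w)    # energy explained by this column
            if err > best_err:
                best_err, best_j, best_w = err, j, w
        if best_j is None:
            break
        selected.append(best_j)
        basis.append(best_w)
        explained += best_err
        if explained / output_energy >= tol:
            break
    return X[selected]

# Fit the output weights for the selected centers by ordinary least squares.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
centers = ols_select_centers(X, y, width=1.0, tol=0.99)
Phi = gaussian_design(X, centers, 1.0)
weights, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print(len(centers), "centers selected")
```

Each candidate column is orthogonalized against the basis already chosen, so its contribution to the explained output energy can be scored independently of the other selected centers; this is what sidesteps the ill-conditioning that plagues randomly chosen centers.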

Journal Article • DOI
TL;DR: In this paper, a practical algorithm for identifying NARMAX models based on radial basis functions from noise-corrupted data is presented; such models can represent a wide class of discrete-time non-linear systems.
Abstract: A wide class of discrete-time non-linear systems can be represented by the nonlinear autoregressive moving average (NARMAX) model with exogenous inputs. This paper develops a practical algorithm for identifying NARMAX models based on radial basis functions from noise-corrupted data. The algorithm consists of an iterative orthogonal-forward-regression routine coupled with model validity tests. The orthogonal-forward-regression routine selects parsimonious radial-basis-function models, while the model validity tests measure the quality of fit. The modelling of a liquid-level system and of an automotive diesel engine is included to demonstrate the effectiveness of the identification procedure.

312 citations
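As a rough illustration of the model structure, the sketch below assembles NARMAX-style lagged regressors from the input, output, and residual sequences, with a crude residual-whiteness check standing in for the paper's model validity tests; the helper names, default lag orders, and confidence band are assumptions for illustration.

```python
import numpy as np

def narmax_regressors(u, y, e, nu=2, ny=2, ne=1):
    """Stack lagged input, output, and residual terms as NARMAX regressors.

    Rows are time steps; columns are y(t-1..ny), u(t-1..nu), e(t-1..ne).
    The residual (moving-average) terms come from a previous fit, so the
    routine is typically iterated: fit, recompute residuals, refit.
    """
    start = max(nu, ny, ne)
    cols = [y[start - k : len(y) - k] for k in range(1, ny + 1)]
    cols += [u[start - k : len(u) - k] for k in range(1, nu + 1)]
    cols += [e[start - k : len(e) - k] for k in range(1, ne + 1)]
    return np.column_stack(cols), y[start:]

def whiteness_test(residuals, max_lag=20):
    """Crude model validity check: residual autocorrelations should sit
    inside the ~95% confidence band for white noise."""
    r = residuals - residuals.mean()
    acf = np.array([(r[:-k] @ r[k:]) / (r @ r) for k in range(1, max_lag + 1)])
    band = 1.96 / np.sqrt(len(r))
    return np.all(np.abs(acf) < band), acf

# Typical use: start with e = np.zeros_like(y), fit the model by least
# squares on the regressors, recompute e from the residuals, and iterate
# until whiteness_test(...) passes.
```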

Journal Article • DOI
TL;DR: It is shown that difficulties associated with channel non-linearities and additive noise correlation can be overcome by the use of equalizers employing a multi-layer perceptron structure, providing further evidence that the neural network approach proposed recently by Gibson et al. is a general solution to the problem of equalization in digital communications systems.

270 citations
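To make the setting concrete, here is a hedged sketch of such an equalizer: BPSK symbols pass through a hypothetical dispersive channel with a mild memoryless non-linearity, and an MLP classifies each transmitted symbol from a tapped-delay window of received samples. The channel model, tap count, and hidden-layer sizes are illustrative assumptions, and scikit-learn's MLPClassifier stands in for the network used in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
symbols = rng.choice([-1.0, 1.0], size=5000)            # BPSK source

# Hypothetical dispersive channel with a mild memoryless non-linearity.
lin = np.convolve(symbols, [0.5, 1.0, 0.5], mode="same")
received = lin + 0.2 * lin ** 3 + 0.1 * rng.standard_normal(len(lin))

# Tapped-delay-line equalizer input: a window of received samples is
# classified into the transmitted symbol at the window centre.
taps, delay = 5, 2
X = np.array([received[i : i + taps] for i in range(len(received) - taps)])
d = (symbols[delay : delay + len(X)] > 0).astype(int)   # desired symbols

mlp = MLPClassifier(hidden_layer_sizes=(9, 3), max_iter=500, random_state=0)
mlp.fit(X[:4000], d[:4000])
print("symbol error rate:", 1.0 - mlp.score(X[4000:], d[4000:]))
```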

Journal Article • DOI
TL;DR: A new recursive prediction error algorithm is derived for the training of feedforward layered neural networks that enables the weights in each neuron of the network to be updated in an efficient parallel manner and has better convergence properties than the classical back propagation algorithm.
Abstract: A new recursive prediction error algorithm is derived for the training of feedforward layered neural networks. The algorithm enables the weights in each neuron of the network to be updated in an efficient parallel manner and has better convergence properties than the classical back propagation algorithm. The relationship between this new parallel algorithm and other existing learning algorithms is discussed. Examples taken from the fields of communication channel equalization and nonlinear systems modelling are used to demonstrate the superior performance of the new algorithm compared with the back propagation routine.

152 citations
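The flavour of a recursive prediction error update can be conveyed by the simplified single-neuron sketch below, which applies an RLS-style Gauss-Newton gain to each prediction error; in the paper's algorithm an update of this kind runs for every neuron in parallel, and everything here (logistic unit, forgetting factor, initialization) is an illustrative assumption rather than the exact recursion.

```python
import numpy as np

def rpe_train_neuron(X, d, lam=0.99, delta=100.0):
    """Recursive prediction-error (Gauss-Newton) update for one logistic
    neuron: a simplified, single-unit illustration of the per-sample
    covariance-based weight update."""
    n = X.shape[1]
    w = np.zeros(n)
    P = delta * np.eye(n)                     # inverse-Hessian estimate
    for x, target in zip(X, d):
        y = 1.0 / (1.0 + np.exp(-(w @ x)))    # neuron output
        psi = y * (1.0 - y) * x               # gradient of output w.r.t. w
        err = target - y                      # prediction error
        k = P @ psi / (lam + psi @ P @ psi)   # Gauss-Newton gain
        w += k * err
        P = (P - np.outer(k, psi @ P)) / lam  # covariance update
    return w

# e.g. w = rpe_train_neuron(X, d) for inputs X of shape (N, n)
# and 0/1 targets d of length N.
```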

Journal Article • DOI
TL;DR: In this article, a forward regression algorithm based on an orthogonal decomposition of the regression matrix is employed to select a suitable set of radial basis function centers from a large number of possible candidates.
Abstract: This paper investigates the identification of discrete-time non-linear systems using radial basis functions. A forward regression algorithm based on an orthogonal decomposition of the regression matrix is employed to select a suitable set of radial basis function centers from a large number of possible candidates, and this provides, for the first time, a fully automatic selection procedure for identifying parsimonious radial basis function models of structure-unknown non-linear systems. The relationship between neural networks and radial basis functions is discussed, and the application of the algorithm to real data is included to demonstrate the effectiveness of this approach.

150 citations


Cited by
Book
01 Jan 1995
TL;DR: This is the first comprehensive treatment of feed-forward neural networks from the perspective of statistical pattern recognition, and is designed as a text, with over 100 exercises, to benefit anyone involved in the fields of neural computation and pattern recognition.
Abstract: From the Publisher: This is the first comprehensive treatment of feed-forward neural networks from the perspective of statistical pattern recognition. After introducing the basic concepts, the book examines techniques for modelling probability density functions and the properties and merits of the multi-layer perceptron and radial basis function network models. Also covered are various forms of error functions, principal algorithms for error function minimization, learning and generalization in neural networks, and Bayesian techniques and their applications. Designed as a text, with over 100 exercises, this fully up-to-date work will benefit anyone involved in the fields of neural computation and pattern recognition.

19,056 citations

Journal Article • DOI
01 May 1993
TL;DR: The architecture and learning procedure underlying ANFIS (adaptive-network-based fuzzy inference system) are presented; ANFIS is a fuzzy inference system implemented in the framework of adaptive networks.
Abstract: The architecture and learning procedure underlying ANFIS (adaptive-network-based fuzzy inference system) is presented, which is a fuzzy inference system implemented in the framework of adaptive networks. By using a hybrid learning procedure, the proposed ANFIS can construct an input-output mapping based on both human knowledge (in the form of fuzzy if-then rules) and stipulated input-output data pairs. In the simulation, the ANFIS architecture is employed to model nonlinear functions, identify nonlinear components on-line in a control system, and predict a chaotic time series, all yielding remarkable results. Comparisons with artificial neural networks and earlier work on fuzzy modeling are listed and discussed. Other extensions of the proposed ANFIS and promising applications to automatic control and signal processing are also suggested.

15,085 citations
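A minimal sketch of the least-squares half of that hybrid procedure, for a one-input first-order Sugeno system: with the premise (membership) parameters held fixed, the network output is linear in the consequent parameters, so they can be solved in a single least-squares step. The membership centers, widths, and toy data here are assumptions for illustration.

```python
import numpy as np

def anfis_forward(x, centers, sigmas, theta):
    """One-input, first-order Sugeno inference: Gaussian memberships,
    normalized firing strengths, linear consequents y_i = p_i*x + r_i."""
    w = np.exp(-0.5 * ((x[:, None] - centers) / sigmas) ** 2)  # layers 1-2
    wn = w / w.sum(axis=1, keepdims=True)                      # layer 3
    f = theta[:, 0] * x[:, None] + theta[:, 1]                 # layer 4
    return (wn * f).sum(axis=1)                                # layer 5

# Least-squares pass: with the premises fixed, the output is linear in
# the consequent parameters, so one lstsq call solves for all of them.
rng = np.random.default_rng(2)
x = rng.uniform(-2, 2, 400)
y = np.tanh(x) + 0.05 * rng.standard_normal(400)

centers = np.array([-1.5, 0.0, 1.5])     # assumed premise parameters
sigmas = np.array([1.0, 1.0, 1.0])
w = np.exp(-0.5 * ((x[:, None] - centers) / sigmas) ** 2)
wn = w / w.sum(axis=1, keepdims=True)
A = np.hstack([wn * x[:, None], wn])     # columns: wn_i*x (p_i), wn_i (r_i)
sol, *_ = np.linalg.lstsq(A, y, rcond=None)
theta = sol.reshape(2, -1).T             # rows [p_i, r_i] per rule
rmse = np.sqrt(np.mean((anfis_forward(x, centers, sigmas, theta) - y) ** 2))
print("fit RMSE:", rmse)
```

In the full hybrid procedure the premise parameters would then be refined by gradient descent on the same error, alternating with this least-squares pass.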

Christopher M. Bishop
01 Jan 2006
TL;DR: This text covers probability distributions, linear models for regression and classification, neural networks, kernel methods, graphical models, and approximate inference, closing with a discussion of combining models.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

Journal Article • DOI
TL;DR: The authors propose an alternative learning procedure based on the orthogonal least-squares method, which provides a simple and efficient means for fitting radial basis function networks.
Abstract: The radial basis function network offers a viable alternative to the two-layer neural network in many applications of signal processing. A common learning algorithm for radial basis function networks is based on first choosing randomly some data points as radial basis function centers and then using singular-value decomposition to solve for the weights of the network. Such a procedure has several drawbacks, and, in particular, an arbitrary selection of centers is clearly unsatisfactory. The authors propose an alternative learning procedure based on the orthogonal least-squares method. The procedure chooses radial basis function centers one by one in a rational way until an adequate network has been constructed. In the algorithm, each selected center maximizes the increment to the explained variance or energy of the desired output and does not suffer numerical ill-conditioning problems. The orthogonal least-squares learning strategy provides a simple and efficient means for fitting radial basis function networks. This is illustrated using examples taken from two different signal processing applications.

3,414 citations

Journal Article • DOI
02 Apr 2004 - Science
TL;DR: A method for learning nonlinear systems, echo state networks (ESNs), is presented; ESNs employ artificial recurrent neural networks in a way that has recently been proposed independently as a learning mechanism in biological brains.
Abstract: We present a method for learning nonlinear systems, echo state networks (ESNs). ESNs employ artificial recurrent neural networks in a way that has recently been proposed independently as a learning mechanism in biological brains. The learning method is computationally efficient and easy to use. On a benchmark task of predicting a chaotic time series, accuracy is improved by a factor of 2400 over previous techniques. The potential for engineering applications is illustrated by equalizing a communication channel, where the signal error rate is improved by two orders of magnitude.

3,122 citations
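A minimal sketch of the idea: a fixed random tanh reservoir is driven by the input, and only a linear readout is trained, here by ridge regression. The reservoir size, spectral radius, regularization, and the toy one-step prediction task are illustrative choices, not the paper's benchmark.

```python
import numpy as np

def train_esn(u, target, n_res=200, rho=0.9, washout=100, ridge=1e-6, seed=3):
    """Minimal echo state network: a fixed random reservoir driven by the
    input; only the linear readout is trained, by ridge regression."""
    rng = np.random.default_rng(seed)
    W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
    W = rng.standard_normal((n_res, n_res))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))   # set spectral radius
    x = np.zeros(n_res)
    states = []
    for ut in u:
        x = np.tanh(W @ x + W_in[:, 0] * ut)          # reservoir update
        states.append(x.copy())
    S = np.array(states)[washout:]                    # drop initial transient
    T = target[washout:]
    # Ridge-regression readout: W_out = (S^T S + ridge*I)^-1 S^T T
    W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ T)
    return W_out, S @ W_out

# Example: one-step-ahead prediction of a sine wave.
t = np.arange(2000)
u = np.sin(0.2 * t)
target = np.roll(u, -1)                               # next-sample target
W_out, pred = train_esn(u[:-1], target[:-1])
print("train MSE:", np.mean((pred - target[100:-1]) ** 2))
```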