Topic

Hierarchical RBF

About: Hierarchical RBF is a research topic. Over its lifetime, 599 publications have been published within this topic, receiving 21,653 citations.


Papers
Journal ArticleDOI
TL;DR: It is proved that RBF networks having one hidden layer are capable of universal approximation, and that a certain class of RBF networks with the same smoothing factor in each kernel node is broad enough for universal approximation.
Abstract: There have been several recent studies concerning feedforward networks and the problem of approximating arbitrary functionals of a finite number of real variables. Some of these studies deal with cases in which the hidden-layer nonlinearity is not a sigmoid. This was motivated by successful applications of feedforward networks with nonsigmoidal hidden-layer units. This paper reports on a related study of radial-basis-function (RBF) networks, and it is proved that RBF networks having one hidden layer are capable of universal approximation. Here the emphasis is on the case of typical RBF networks, and the results show that a certain class of RBF networks with the same smoothing factor in each kernel node is broad enough for universal approximation.
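As a concrete illustration of the network class the theorem covers, here is a minimal sketch of a one-hidden-layer Gaussian RBF network in which every kernel node shares a single smoothing factor, with the output weights fit by linear least squares. The target function, center placement, and all sizes below are illustrative assumptions, not the paper's construction.

```python
# Minimal sketch (assumed toy setup): a one-hidden-layer Gaussian RBF network
# where every kernel node shares the same smoothing factor sigma, the class the
# paper shows is broad enough for universal approximation.
import numpy as np

def rbf_design_matrix(x, centers, sigma):
    """Gaussian activations: Phi[i, j] = exp(-(x_i - c_j)^2 / (2 * sigma^2))."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * sigma ** 2))

rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 1.0, 200)
y_train = np.sin(2.0 * np.pi * x_train)          # smooth target to approximate

centers = np.linspace(0.0, 1.0, 25)              # fixed, evenly spaced kernel nodes
sigma = 0.05                                     # one smoothing factor for all nodes

Phi = rbf_design_matrix(x_train, centers, sigma)
weights, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)   # linear output layer

x_test = np.linspace(0.0, 1.0, 101)
y_hat = rbf_design_matrix(x_test, centers, sigma) @ weights
print(np.max(np.abs(y_hat - np.sin(2.0 * np.pi * x_test))))  # small sup-norm error
```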

3,755 citations

Journal Article
TL;DR: The relationship between 'learning' in adaptive layered networks and the fitting of data with high-dimensional surfaces is discussed, leading naturally to a picture of 'generalization' in terms of interpolation between known data points and suggesting a rational approach to the theory of such networks.
Abstract: The relationship between 'learning' in adaptive layered networks and the fitting of data with high-dimensional surfaces is discussed. This leads naturally to a picture of 'generalization' in terms of interpolation between known data points and suggests a rational approach to the theory of such networks. A class of adaptive networks is identified which makes the interpolation scheme explicit. This class has the property that learning is equivalent to the solution of a set of linear equations. These networks thus represent nonlinear relationships while having a guaranteed learning rule.
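The central observation, that once the radial units are fixed, learning reduces to a single linear solve, can be sketched in a few lines. The kernel, width, and data below are illustrative assumptions, not taken from the paper.

```python
# Sketch: exact interpolation of the training data as one linear solve.
import numpy as np

def kernel(r, width=0.15):
    return np.exp(-(r / width) ** 2)             # Gaussian radial unit

x = np.linspace(0.0, 1.0, 10)                    # known data points
y = np.cos(3.0 * x)

A = kernel(np.abs(x[:, None] - x[None, :]))      # A[i, j] = phi(|x_i - x_j|)
w = np.linalg.solve(A, y)                        # "learning" = one linear solve

def f(q):                                        # interpolant built on the data
    return kernel(np.abs(q[:, None] - x[None, :])) @ w

print(np.max(np.abs(f(x) - y)))                  # ~0: training points reproduced
```

Centers placed on the data points make the interpolation matrix square, and for a Gaussian kernel on distinct points it is positive definite, which is what guarantees the learning rule.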

3,538 citations

Journal ArticleDOI
TL;DR: The authors propose an alternative learning procedure based on the orthogonal least-squares method, which provides a simple and efficient means for fitting radial basis function networks.
Abstract: The radial basis function network offers a viable alternative to the two-layer neural network in many applications of signal processing. A common learning algorithm for radial basis function networks is based on first choosing some data points at random as radial basis function centers and then using singular-value decomposition to solve for the weights of the network. Such a procedure has several drawbacks, and, in particular, an arbitrary selection of centers is clearly unsatisfactory. The authors propose an alternative learning procedure based on the orthogonal least-squares method. The procedure chooses radial basis function centers one by one in a rational way until an adequate network has been constructed. In the algorithm, each selected center maximizes the increment to the explained variance or energy of the desired output and does not suffer numerical ill-conditioning problems. The orthogonal least-squares learning strategy provides a simple and efficient means for fitting radial basis function networks. This is illustrated using examples taken from two different signal processing applications.
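An illustrative rendering of the greedy selection idea follows. This is not the authors' implementation: the classical OLS algorithm uses an efficient Gram-Schmidt recursion, whereas the sketch below re-orthogonalizes by brute force, and the data and kernel width are assumptions.

```python
# Sketch of OLS center selection: every data point is a candidate center, and
# each step keeps the center whose orthogonalized regressor explains the most
# remaining output energy.
import numpy as np

def kernel(r, width=0.5):
    return np.exp(-(r / width) ** 2)

rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, 100)
y = np.tanh(3.0 * x) + 0.05 * rng.standard_normal(100)

P = kernel(np.abs(x[:, None] - x[None, :]))      # candidate regressors, one per point
selected, Q = [], []

for _ in range(8):                               # grow the network center by center
    best_gain, best_j, best_q = -1.0, -1, None
    for j in range(P.shape[1]):
        if j in selected:
            continue
        q = P[:, j].copy()
        for qk in Q:                             # orthogonalize against chosen ones
            q -= (qk @ P[:, j]) / (qk @ qk) * qk
        gain = (q @ y) ** 2 / (q @ q)            # error-reduction ratio (up to scale)
        if gain > best_gain:
            best_gain, best_j, best_q = gain, j, q
    selected.append(best_j)
    Q.append(best_q)

w, *_ = np.linalg.lstsq(P[:, selected], y, rcond=None)   # final output weights
print("chosen centers:", np.round(np.sort(np.asarray(x)[selected]), 3))
```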

3,414 citations

Proceedings ArticleDOI
01 Aug 2001
TL;DR: It is shown that the RBF representation has advantages for mesh simplification and remeshing applications, and a greedy algorithm in the fitting process reduces the number of RBF centers required to represent a surface and results in significant compression and further computational advantages.
Abstract: We use polyharmonic Radial Basis Functions (RBFs) to reconstruct smooth, manifold surfaces from point-cloud data and to repair incomplete meshes. An object's surface is defined implicitly as the zero set of an RBF fitted to the given surface data. Fast methods for fitting and evaluating RBFs allow us to model large data sets, consisting of millions of surface points, by a single RBF, previously an impossible task. A greedy algorithm in the fitting process reduces the number of RBF centers required to represent a surface and results in significant compression and further computational advantages. The energy-minimisation characterisation of polyharmonic splines results in a “smoothest” interpolant. This scale-independent characterisation is well-suited to reconstructing surfaces from non-uniformly sampled data. Holes are smoothly filled and surfaces smoothly extrapolated. We use a non-interpolating approximation when the data is noisy. The functional representation is in effect a solid model, which means that gradients and surface normals can be determined analytically. This helps generate uniform meshes, and we show that the RBF representation has advantages for mesh simplification and remeshing applications. Results are presented for real-world rangefinder data.
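The implicit-surface formulation can be sketched on a toy example. Everything below (the sampled sphere, the normal offsets, the dense linear solve) is an assumption for illustration; the paper's contribution is precisely the fast fitting and evaluation methods that avoid this dense O(N^3) solve.

```python
# Toy sketch: the surface is the zero set of a biharmonic RBF, phi(r) = r, plus
# an affine polynomial. On-surface points are constrained to 0; points offset
# along the normals are constrained to +/- the offset distance.
import numpy as np

rng = np.random.default_rng(2)
pts = rng.standard_normal((80, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)   # samples of the unit sphere
eps = 0.1                                           # sphere normals = the points
X = np.vstack([pts, (1 + eps) * pts, (1 - eps) * pts])
d = np.concatenate([np.zeros(80), np.full(80, eps), np.full(80, -eps)])

r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
P = np.hstack([np.ones((len(X), 1)), X])            # affine polynomial part
A = np.block([[r, P], [P.T, np.zeros((4, 4))]])     # standard RBF saddle system
sol = np.linalg.solve(A, np.concatenate([d, np.zeros(4)]))
w, c = sol[:len(X)], sol[len(X):]

def f(q):   # implicit function; gradients/normals could be written analytically
    return np.linalg.norm(q - X, axis=1) @ w + c[0] + q @ c[1:]

print(f(np.array([1.0, 0.0, 0.0])))                 # ~0: on the surface
print(f(np.array([0.0, 0.0, 0.0])))                 # negative: inside the surface
```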

1,958 citations

Journal ArticleDOI
TL;DR: A novel paradigm is proposed whereby data information is encapsulated in determining the structure and initial parameters of the RBF neural classifier before learning takes place, so that the dimension of the search space is drastically reduced during gradient-based learning.
Abstract: A general and efficient design approach using a radial basis function (RBF) neural classifier to cope with small training sets of high dimension, which is a problem frequently encountered in face recognition, is presented. In order to avoid overfitting and reduce the computational burden, face features are first extracted by the principal component analysis (PCA) method. Then, the resulting features are further processed by the Fisher's linear discriminant (FLD) technique to acquire lower-dimensional discriminant patterns. A novel paradigm is proposed whereby data information is encapsulated in determining the structure and initial parameters of the RBF neural classifier before learning takes place. A hybrid learning algorithm is used to train the RBF neural networks so that the dimension of the search space is drastically reduced during gradient-based learning. Simulation results on the ORL database show that the system achieves excellent performance in terms of both classification error rates and learning efficiency.
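A hedged sketch of the PCA-then-FLD-then-RBF pipeline the abstract describes, with scikit-learn's digits dataset standing in for ORL and class-mean centers standing in for the paper's exact data-driven initialization:

```python
# Sketch of the pipeline: PCA features, FLD discriminant patterns, then an RBF
# classifier whose structure/parameters are initialized from the data (here:
# one center per class at the class mean, width set from within-class spread).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_digits(return_X_y=True)
Z = PCA(n_components=40).fit_transform(X)                           # PCA features
F = LinearDiscriminantAnalysis(n_components=9).fit_transform(Z, y)  # FLD features

classes = np.unique(y)
centers = np.stack([F[y == c].mean(axis=0) for c in classes])  # one center per class
sigma = np.mean(np.linalg.norm(F - centers[y], axis=1))        # data-driven width

def rbf_layer(feats):
    d2 = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

# Toy stand-in for the paper's hybrid learning: solve the linear output layer
# directly against one-hot targets instead of running gradient descent.
T = np.eye(len(classes))[y]
W, *_ = np.linalg.lstsq(rbf_layer(F), T, rcond=None)
pred = (rbf_layer(F) @ W).argmax(axis=1)
print("training accuracy:", round(float((pred == y).mean()), 3))
```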

656 citations


Network Information
Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations, 77% related
Image segmentation: 79.6K papers, 1.8M citations, 76% related
Support vector machine: 73.6K papers, 1.7M citations, 75% related
Artificial neural network: 207K papers, 4.5M citations, 73% related
Image processing: 229.9K papers, 3.5M citations, 73% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    4
2022    5
2018    3
2017    10
2016    14
2015    16