Journal ISSN: 1019-7168

Advances in Computational Mathematics 

Springer Science+Business Media
About: Advances in Computational Mathematics is an academic journal published by Springer Science+Business Media. The journal publishes primarily in the areas of the finite element method and mathematics. Its ISSN is 1019-7168. Over its lifetime it has published 1513 papers, which have received 49374 citations.


Papers
Journal Article
TL;DR: A new discussion of the complex branches of W, an asymptotic expansion valid for all branches, an efficient numerical procedure for evaluating the function to arbitrary precision, and a method for the symbolic integration of expressions containing W are presented.
Abstract: The Lambert W function is defined to be the multivalued inverse of the function w → w e^w. It has many applications in pure and applied mathematics, some of which are briefly described here. We present a new discussion of the complex branches of W, an asymptotic expansion valid for all branches, an efficient numerical procedure for evaluating the function to arbitrary precision, and a method for the symbolic integration of expressions containing W.

5,591 citations
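The paper above treats arbitrary-precision evaluation of W; as a much simpler illustration of the defining equation w e^w = x, a double-precision Newton iteration for the principal branch W0 might look like the following sketch (the function name, starting guess, and tolerance are ad hoc choices, not the paper's algorithm):

import math

def lambert_w0(x, tol=1e-12, max_iter=50):
    # Principal branch W0: solve f(w) = w*exp(w) - x = 0 by Newton's method.
    # Real solutions require x >= -1/e; the starting guess targets x >= 0.
    if x < -1.0 / math.e:
        raise ValueError("W0(x) is real only for x >= -1/e")
    w = math.log(1.0 + x) if x >= 0 else x  # crude initial guess
    for _ in range(max_iter):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1.0))  # f / f'
        w -= step
        if abs(step) < tol * (1.0 + abs(w)):
            return w
    return w

# W(1) is the omega constant ~0.5671432904; check the defining identity.
w = lambert_w0(1.0)
print(w, w * math.exp(w))  # prints ~0.5671432904 and 1.0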

Journal Article
TL;DR: A new class of positive definite and compactly supported radial functions, which consist of a univariate polynomial within their support, is constructed, and the functions are proved to be of minimal degree and unique up to a constant factor.
Abstract: We construct a new class of positive definite and compactly supported radial functions which consist of a univariate polynomial within their support. For given smoothness and space dimension it is proved that they are of minimal degree and unique up to a constant factor. Finally, we establish connections between already known functions of this kind.

2,495 citations
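A well-known member of this family is Wendland's C^2 function phi(r) = (1 - r)_+^4 (4r + 1), positive definite on R^d for d <= 3. The snippet below (an illustration, not code from the paper) evaluates it and checks numerically that an interpolation matrix built from it is positive definite:

import numpy as np

def wendland_c2(r):
    # Wendland's C^2 compactly supported function for d <= 3 (up to a
    # constant factor): phi(r) = (1 - r)_+^4 (4r + 1), zero for r >= 1.
    r = np.asarray(r, dtype=float)
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

# Interpolation matrix on scattered points; entries vanish whenever two
# points are farther apart than the support radius, so large matrices
# of this kind are sparse.
rng = np.random.default_rng(0)
pts = rng.random((50, 3))
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
A = wendland_c2(dist)
print(np.linalg.eigvalsh(A).min())  # positive: A is positive definite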

Journal Article
TL;DR: Both formulations of regularization and Support Vector Machines are reviewed in the context of Vapnik's theory of statistical learning which provides a general foundation for the learning problem, combining functional analysis and statistics.
Abstract: Regularization Networks and Support Vector Machines are techniques for solving certain problems of learning from examples – in particular, the regression problem of approximating a multivariate function from sparse data. Radial Basis Functions, for example, are a special case of both regularization and Support Vector Machines. We review both formulations in the context of Vapnik's theory of statistical learning, which provides a general foundation for the learning problem, combining functional analysis and statistics. The emphasis is on regression: classification is treated as a special case.

1,305 citations
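The simplest instance of the regularization framework reviewed above is kernel ridge regression: minimize the empirical squared error plus lam * ||f||_K^2 over a reproducing kernel Hilbert space, where the representer theorem reduces the problem to a linear solve. A minimal sketch with a Gaussian kernel follows (lam and gamma are arbitrary illustration values; the paper's treatment is far more general):

import numpy as np

def fit_regularization_network(X, y, lam=1e-2, gamma=10.0):
    # Minimize sum_i (y_i - f(x_i))^2 + lam * ||f||_K^2 for the Gaussian
    # kernel K(x, z) = exp(-gamma |x - z|^2). By the representer theorem
    # f(x) = sum_j c_j K(x, x_j), with coefficients from (K + lam*I) c = y.
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    c = np.linalg.solve(np.exp(-gamma * sq) + lam * np.eye(len(X)), y)
    def f(Z):
        sq = np.sum((Z[:, None, :] - X[None, :, :]) ** 2, axis=-1)
        return np.exp(-gamma * sq) @ c
    return f

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, (40, 1))
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.standard_normal(40)
f = fit_regularization_network(X, y)
print(np.max(np.abs(f(X) - y)))  # small but nonzero: lam smooths the fit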

Journal Article
TL;DR: Techniques by which MFS-type methods are extended to certain classes of non-trivial problems and adapted for the solution of inhomogeneous problems are outlined.
Abstract: The aim of this paper is to describe the development of the method of fundamental solutions (MFS) and related methods over the last three decades. Several applications of MFS-type methods are presented. Techniques by which such methods are extended to certain classes of non-trivial problems and adapted for the solution of inhomogeneous problems are also outlined.

958 citations
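The basic idea of the MFS can be shown on a toy Dirichlet problem for Laplace's equation on the unit disk: approximate the solution by a combination of fundamental solutions G(x, s) = -log|x - s| / (2 pi) centered at source points placed outside the domain, and fit the boundary data by collocation. A minimal sketch under these assumptions (the source radius, point counts, and test problem are arbitrary choices, not taken from the survey):

import numpy as np

n, R = 40, 2.0
t = 2.0 * np.pi * np.arange(n) / n
bdry = np.c_[np.cos(t), np.sin(t)]      # collocation points on |x| = 1
src = R * np.c_[np.cos(t), np.sin(t)]   # source points on |s| = R > 1

def G(x, s):
    # Fundamental solution of the 2D Laplacian; each column is harmonic
    # inside the disk because the sources lie outside it.
    d = np.linalg.norm(x[:, None, :] - s[None, :, :], axis=-1)
    return -np.log(d) / (2.0 * np.pi)

g = bdry[:, 0] * bdry[:, 1]             # boundary data g = x*y (harmonic)
coef = np.linalg.lstsq(G(bdry, src), g, rcond=None)[0]

# The exact solution of this Dirichlet problem is u = x*y; compare the
# MFS approximation with it at an interior point.
p = np.array([[0.3, 0.4]])
print(float(G(p, src) @ coef), 0.3 * 0.4)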

Journal Article
Shmuel Rippa
TL;DR: It is shown, numerically, that the value of the optimal c (the value of c that minimizes the interpolation error) depends on the number and distribution of data points, on the data vector, and on the precision of the computation.
Abstract: The accuracy of many schemes for interpolating scattered data with radial basis functions depends on a shape parameter c of the radial basis function. In this paper we study the effect of c on the quality of fit of the multiquadric, inverse multiquadric and Gaussian interpolants. We show, numerically, that the value of the optimal c (the value of c that minimizes the interpolation error) depends on the number and distribution of data points, on the data vector, and on the precision of the computation. We present an algorithm for selecting a good value for c that implicitly takes all the above considerations into account. The algorithm selects c by minimizing a cost function that imitates the error between the radial interpolant and the (unknown) function from which the data vector was sampled. The cost function is defined by taking some norm of the error vector E = (E_1, ..., E_N)^T, where E_k = f_k - S_k(x_k) and S_k is the interpolant to the reduced data set obtained by removing the point x_k and the corresponding data value f_k from the original data set. The cost function can be defined for any radial basis function and any dimension. We present the results of many numerical experiments involving interpolation of two-dimensional data sets by the multiquadric, inverse multiquadric and Gaussian interpolants, and we show that our algorithm consistently produces good values for the parameter c.

872 citations
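The cost function described in the abstract can be implemented directly from its definition: remove one data point at a time, interpolate the rest, and record the error at the removed point. The sketch below does this for the multiquadric phi(r) = sqrt(r^2 + c^2); it follows the definition literally, while the paper derives a much cheaper way to evaluate the same quantity:

import numpy as np

def loocv_cost(X, f, c):
    # Leave-one-out cost: E_k = f_k - S_k(x_k), where S_k interpolates the
    # data with (x_k, f_k) removed, using the multiquadric basis function.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    A = np.sqrt(D ** 2 + c ** 2)
    E = np.empty(len(f))
    for k in range(len(f)):
        keep = np.arange(len(f)) != k
        coef = np.linalg.solve(A[np.ix_(keep, keep)], f[keep])
        E[k] = f[k] - A[k, keep] @ coef
    return np.linalg.norm(E)

rng = np.random.default_rng(2)
X = rng.random((30, 2))
f = np.sin(4.0 * X[:, 0]) * X[:, 1]
for c in (0.05, 0.2, 0.8, 3.2):
    print(c, loocv_cost(X, f, c))  # choose the c with the smallest cost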

Performance metrics
No. of papers from the Journal in previous years
Year    Papers
2023    40
2022    110
2021    83
2020    83
2019    129
2018    73