
Showing papers in "Advances in Computational Mathematics in 2000"


Journal ArticleDOI
TL;DR: Both formulations of regularization and Support Vector Machines are reviewed in the context of Vapnik's theory of statistical learning which provides a general foundation for the learning problem, combining functional analysis and statistics.
Abstract: Regularization Networks and Support Vector Machines are techniques for solving certain problems of learning from examples – in particular, the regression problem of approximating a multivariate function from sparse data. Radial Basis Functions, for example, are a special case of both regularization and Support Vector Machines. We review both formulations in the context of Vapnik's theory of statistical learning which provides a general foundation for the learning problem, combining functional analysis and statistics. The emphasis is on regression: classification is treated as a special case.

1,305 citations
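
For concreteness, a minimal sketch of the regularization-network setting discussed above: kernel ridge regression with a Gaussian radial basis function kernel, the RBF special case mentioned in the abstract. The kernel width `sigma`, regularization weight `lam`, and the toy data are illustrative choices, not values from the paper.

```python
# Minimal regularization-network sketch: kernel ridge regression with a
# Gaussian (RBF) kernel.  Fit by solving (K + lam*I) c = y, predict via K(x, X) c.
import numpy as np

def rbf_kernel(A, B, sigma=0.5):
    # Pairwise Gaussian kernel values between rows of A and rows of B.
    d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-d2 / (2.0 * sigma**2))

def fit(X, y, lam=1e-3, sigma=0.5):
    K = rbf_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)   # coefficients c

def predict(X_train, c, X_new, sigma=0.5):
    return rbf_kernel(X_new, X_train, sigma) @ c

# Regression from sparse data: recover sin(2*pi*x) from 20 noisy samples.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(20, 1))
y = np.sin(2.0 * np.pi * X[:, 0]) + 0.05 * rng.standard_normal(20)
c = fit(X, y)
X_test = np.linspace(0.0, 1.0, 5).reshape(-1, 1)
print(predict(X, c, X_test))
```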


Journal ArticleDOI
TL;DR: This is a survey of the main results on multivariate polynomial interpolation in the last twenty-five years, a period of time when the subject experienced its most rapid development.
Abstract: This is a survey of the main results on multivariate polynomial interpolation in the last twenty-five years, a period of time when the subject experienced its most rapid development. The problem is considered from two different points of view: the construction of data points which allow unique interpolation for given interpolation spaces as well as the converse. In addition, one section is devoted to error formulas and another to connections with computer algebra. An extensive list of references is also included.

783 citations
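
One side of the survey's central question, whether a given point set admits unique interpolation from a given polynomial space, can be checked numerically through a Vandermonde-type matrix. The sketch below does this for bivariate polynomials of total degree at most n; the monomial basis and the example point sets are illustrative choices, not taken from the survey.

```python
# Unisolvence check: a point set admits unique interpolation from the space of
# bivariate polynomials of total degree <= n iff the Vandermonde-type matrix in
# the monomial basis has full column rank.
import numpy as np

def monomial_exponents(n):
    # All exponent pairs (i, j) with i + j <= n, ordered by total degree.
    return [(i, d - i) for d in range(n + 1) for i in range(d + 1)]

def vandermonde(points, n):
    exps = monomial_exponents(n)
    return np.array([[x**i * y**j for (i, j) in exps] for (x, y) in points], dtype=float)

n = 2                                                     # space dimension (n+1)(n+2)/2 = 6
good = [(0, 0), (1, 0), (2, 0), (0, 1), (1, 1), (0, 2)]   # principal lattice: unisolvent
bad = [(t, 0) for t in range(6)]                          # six collinear points: not unisolvent

for pts in (good, bad):
    V = vandermonde(pts, n)
    print(np.linalg.matrix_rank(V) == V.shape[1])         # True iff interpolation is unique
```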


Journal ArticleDOI
TL;DR: The error bounds show that the polynomial interpolation on a d-dimensional cube, where d is large, is universal, i.e., almost optimal for many different function spaces.
Abstract: We study polynomial interpolation on a d-dimensional cube, where d is large. We suggest to use the least solution at sparse grids with the extrema of the Chebyshev polynomials. The polynomial exactness of this method is almost optimal. Our error bounds show that the method is universal, i.e., almost optimal for many different function spaces. We report on numerical experiments for d = 10 using up to 652 065 interpolation points.

697 citations
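
The one-dimensional ingredient of this construction, interpolation at the extrema of the Chebyshev polynomials, can be sketched as follows; the barycentric evaluation and the Runge-function test are illustrative, and the sparse-grid (least solution) combination over the d coordinates is not reproduced here.

```python
# 1D building block: interpolation at the extrema of the Chebyshev polynomials
# (Chebyshev-Gauss-Lobatto points) on [-1, 1], evaluated with the barycentric
# formula.  The sparse grid combines such 1D rules over the d coordinates.
import numpy as np

def cheb_extrema(m):
    # x_k = cos(k*pi/(m-1)), k = 0, ..., m-1  (m >= 2 nodes).
    return np.cos(np.pi * np.arange(m) / (m - 1))

def barycentric_weights(m):
    # Closed-form barycentric weights for Chebyshev extrema: (-1)^k, halved at the ends.
    w = (-1.0) ** np.arange(m)
    w[0] *= 0.5
    w[-1] *= 0.5
    return w

def interpolate(f, m, x_eval):
    x, w = cheb_extrema(m), barycentric_weights(m)
    fx = f(x)
    out = np.empty_like(x_eval)
    for idx, xe in enumerate(x_eval):
        diff = xe - x
        hit = np.isclose(diff, 0.0)
        if hit.any():
            out[idx] = fx[hit][0]          # evaluation point coincides with a node
        else:
            out[idx] = np.sum(w * fx / diff) / np.sum(w / diff)
    return out

f = lambda x: 1.0 / (1.0 + 25.0 * x**2)    # Runge's function
x_eval = np.linspace(-0.9, 0.9, 7)
print(np.max(np.abs(interpolate(f, 33, x_eval) - f(x_eval))))
```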


Journal ArticleDOI
TL;DR: The convergence theorems are proved and estimates for the rate of approximation are given by means of these algorithms, which apply to approximation from an arbitrary dictionary in a Hilbert space.
Abstract: Theoretical greedy type algorithms are studied: a Weak Greedy Algorithm, a Weak Orthogonal Greedy Algorithm, and a Weak Relaxed Greedy Algorithm. These algorithms are defined by weaker assumptions than their analogs, the Pure Greedy Algorithm, the Orthogonal Greedy Algorithm, and the Relaxed Greedy Algorithm. The weaker assumptions make these new algorithms more suitable for practical implementation. We prove the convergence theorems and also give estimates for the rate of approximation by means of these algorithms. The convergence and the estimates apply to approximation from an arbitrary dictionary in a Hilbert space.

222 citations
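
A rough sketch of the Weak Orthogonal Greedy Algorithm in the simplest Hilbert-space setting, a finite-dimensional space with a finite dictionary. The constant weakness parameter t, the random dictionary, and the tie-breaking rule below are illustrative simplifications, not the paper's general formulation.

```python
# Weak Orthogonal Greedy Algorithm in R^n (a Hilbert space) with a finite
# dictionary D of unit vectors.  At each step an element whose inner product
# with the residual is at least t times the best available one is accepted
# (constant weakness sequence t_m = t), and the approximant is the orthogonal
# projection of f onto the span of the selected elements.
import numpy as np

def weak_orthogonal_greedy(f, D, steps, t=0.5, seed=0):
    rng = np.random.default_rng(seed)
    residual, selected = f.copy(), []
    for _ in range(steps):
        scores = np.abs(D.T @ residual)                 # |<residual, g>| for every g in D
        ok = np.flatnonzero(scores >= t * scores.max())
        selected.append(int(rng.choice(ok)))            # any "t-weakly" optimal element
        B = D[:, selected]                              # selected elements as columns
        coef, *_ = np.linalg.lstsq(B, f, rcond=None)    # orthogonal projection of f
        residual = f - B @ coef
    return residual, selected

rng = np.random.default_rng(1)
n, dict_size = 50, 400
D = rng.standard_normal((n, dict_size))
D /= np.linalg.norm(D, axis=0)                          # normalize dictionary elements
f = rng.standard_normal(n)
for m in (1, 5, 20, 50):
    r, _ = weak_orthogonal_greedy(f, D, m)
    print(m, np.linalg.norm(r))                         # residual norm decreases with m
```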


Journal ArticleDOI
TL;DR: It is proved that integration is strongly tractable for some weighted Korobov and Sobolev spaces as well as for the Hilbert space whose reproducing kernel corresponds to the covariance function of the isotropic Wiener measure.
Abstract: We study multivariate integration and approximation for various classes of functions of d variables with arbitrary d. We consider algorithms that use function evaluations as the information about the function. We are mainly interested in verifying when integration and approximation are tractable and strongly tractable. Tractability means that the minimal number of function evaluations needed to reduce the initial error by a factor of ε is bounded by C(d)ε^{-p} for some exponent p independent of d and some function C(d). Strong tractability means that C(d) can be made independent of d. The ε-exponents of tractability and strong tractability are defined as the smallest powers of ε^{-1} in these bounds. We prove that integration is strongly tractable for some weighted Korobov and Sobolev spaces as well as for the Hilbert space whose reproducing kernel corresponds to the covariance function of the isotropic Wiener measure. We obtain bounds on the ε-exponents, and for some cases we find their exact values. For some weighted Korobov and Sobolev spaces, the strong ε-exponent is the same as the ε-exponent for d = 1, whereas for the third space it is 2. For approximation we also consider algorithms that use general evaluations given by arbitrary continuous linear functionals as the information about the function. Our main result is that the ε-exponents are the same for general and function evaluations. This holds under the assumption that the orthonormal eigenfunctions of the covariance operator have uniformly bounded L∞ norms. This assumption holds for spaces with shift-invariant kernels. Examples of such spaces include weighted Korobov spaces. For a space with non-shift-invariant kernel, we construct the corresponding space with shift-invariant kernel and show that integration and approximation for the non-shift-invariant kernel are no harder than the corresponding problems with the shift-invariant kernel. If we apply this construction to a weighted Sobolev space, whose kernel is non-shift-invariant, then we obtain the corresponding Korobov space. This enables us to derive the results for weighted Sobolev spaces.

119 citations
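
The tractability notions used above can be restated compactly. The following is a sketch in standard notation, with n(ε, d) denoting the minimal number of function evaluations needed to reduce the initial error by a factor of ε for the d-variate problem.

```latex
% Tractability notions used in the abstract above.  n(\varepsilon, d) is the
% minimal number of function evaluations needed to reduce the initial error
% by a factor of \varepsilon for the d-variate problem.
\[
  \textbf{Tractability:}\quad
  n(\varepsilon, d) \;\le\; C(d)\,\varepsilon^{-p}
  \quad\text{for some exponent $p$ independent of $d$.}
\]
\[
  \textbf{Strong tractability:}\quad
  n(\varepsilon, d) \;\le\; C\,\varepsilon^{-p}
  \quad\text{with $C$ independent of $d$.}
\]
\[
  \text{The (strong) $\varepsilon$-exponent is the smallest admissible power,}\quad
  p^{*} \;=\; \inf\{\, p : \text{the corresponding bound holds} \,\}.
\]
```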


Journal ArticleDOI
TL;DR: The H-basis concept allows, similarly to the Gröbner basis concept, a reformulation of nonlinear problems in terms of linear algebra and it is proved that n polynomials in n variables are, under mild conditions, already H-bases.
Abstract: The H-basis concept allows, similarly to the Gröbner basis concept, a reformulation of nonlinear problems in terms of linear algebra. We exhibit parallels of the two concepts, show properties of H-bases, discuss their construction and uniqueness questions, and prove that n polynomials in n variables are, under mild conditions, already H-bases. We apply H-bases to the solution of polynomial systems by the eigenmethod and to multivariate interpolation.

75 citations


Journal ArticleDOI
TL;DR: It is proved that for any f from the closure of the convex hull of D the error of m-term approximation by WCGA satisfies an explicit rate estimate, and similar results are obtained for the Weak Relaxed Greedy Algorithm and its modification.
Abstract: We study efficiency of approximation and convergence of two greedy type algorithms in uniformly smooth Banach spaces. The Weak Chebyshev Greedy Algorithm (WCGA) is defined for an arbitrary dictionary D and provides nonlinear m-term approximation with regard to D. This algorithm is defined inductively with the mth step consisting of two basic substeps: (1) selection of an mth element φ_m^c from D, and (2) construction of an m-term approximant G_m^c. We include the name of Chebyshev in the name of this algorithm because at substep (2) the approximant G_m^c is chosen as the best approximant from Span(φ_1^c,...,φ_m^c). The term Weak Greedy Algorithm indicates that at each substep (1) we choose φ_m^c as an element of D that satisfies some condition which is “t_m-times weaker” than the condition for φ_m^c to be optimal (t_m = 1). We obtain error estimates for Banach spaces with modulus of smoothness ρ(u) ≤ γu^q, 1 < q ≤ 2.

69 citations


Journal ArticleDOI
TL;DR: A class of trigonometric polynomial frames suitable for detecting the location of discontinuities of derivatives of a periodic function, given either finitely many Fourier coefficients of the function, or the samples of thefunction at uniform or scattered data points is developed.
Abstract: We discuss the problem of detecting the location of discontinuities of derivatives of a periodic function, given either finitely many Fourier coefficients of the function, or the samples of the function at uniform or scattered data points. Using the general theory, we develop a class of trigonometric polynomial frames suitable for this purpose. Our methods also help us to analyze the capabilities of periodic spline wavelets, trigonometric polynomial wavelets, and some of the classical summability methods in the theory of Fourier series.

62 citations


Journal ArticleDOI
TL;DR: This work considers Smolyak's construction for the numerical integration over the d‐dimensional unit cube and error bounds are derived for the one‐dimensional case, which lead by a recursion formula to error bounds for higher dimensional integration.
Abstract: We consider Smolyak's construction for the numerical integration over the d‐dimensional unit cube. The underlying class of integrands is a tensor product space consisting of functions that are analytic in the Cartesian product of ellipses. The Kronrod–Patterson quadrature formulae are proposed as the corresponding basic sequence and this choice is compared with Clenshaw–Curtis quadrature formulae. First, error bounds are derived for the one‐dimensional case, which lead by a recursion formula to error bounds for higher dimensional integration. The applicability of these bounds is shown by examples from frequently used test packages. Finally, numerical experiments are reported.

58 citations
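
For reference, a sketch of the one-dimensional Clenshaw–Curtis rule that the paper uses as a point of comparison; the Kronrod–Patterson sequence and the Smolyak recursion over dimensions are not reproduced, and the node count and test integrand are illustrative.

```python
# One-dimensional Clenshaw-Curtis rule on [-1, 1] with n+1 nodes (a translation
# of the standard construction of nodes and weights).
import numpy as np

def clenshaw_curtis(n):
    theta = np.pi * np.arange(n + 1) / n
    x = np.cos(theta)                         # Chebyshev extrema as nodes
    w = np.zeros(n + 1)
    v = np.ones(n - 1)
    if n % 2 == 0:
        w[0] = w[n] = 1.0 / (n**2 - 1)
        for k in range(1, n // 2):
            v -= 2.0 * np.cos(2 * k * theta[1:n]) / (4 * k**2 - 1)
        v -= np.cos(n * theta[1:n]) / (n**2 - 1)
    else:
        w[0] = w[n] = 1.0 / n**2
        for k in range(1, (n - 1) // 2 + 1):
            v -= 2.0 * np.cos(2 * k * theta[1:n]) / (4 * k**2 - 1)
    w[1:n] = 2.0 * v / n
    return x, w

# Integrate an entire function (analytic in any ellipse): exp(x) over [-1, 1].
x, w = clenshaw_curtis(8)
print(w @ np.exp(x), np.exp(1.0) - np.exp(-1.0))   # rapid convergence for analytic f
```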


Journal ArticleDOI
TL;DR: This work considers the problem of approximating the Sobolev class of functions by neural networks with a single hidden layer with a probabilistic approach, based on the Radon and wavelet transforms, and establishes both upper and lower bounds.
Abstract: We consider the problem of approximating the Sobolev class of functions by neural networks with a single hidden layer, establishing both upper and lower bounds. The upper bound uses a probabilistic approach, based on the Radon and wavelet transforms, and yields similar rates to those derived recently under more restrictive conditions on the activation function. Moreover, the construction using the Radon and wavelet transforms seems very natural to the problem. Additionally, geometrical arguments are used to establish lower bounds for two types of commonly used activation functions. The results demonstrate the tightness of the bounds, up to a factor logarithmic in the number of nodes of the neural network.

55 citations
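
The object being studied, a single-hidden-layer network approximant, can be illustrated with a toy fit. The sigmoidal activation, the random inner weights, and the least-squares fit of the outer weights below are illustrative choices and do not reproduce the paper's Radon/wavelet construction or its bounds.

```python
# Toy single-hidden-layer approximant: random sigmoidal ridge functions
# sigma(a.x + b) with the outer weights fitted by linear least squares.
import numpy as np

rng = np.random.default_rng(1)
d, width, n_samples = 2, 200, 1000

def target(X):
    # A smooth test function of d = 2 variables.
    return np.sin(np.pi * X[:, 0]) * np.exp(-X[:, 1] ** 2)

A = rng.standard_normal((width, d))                 # inner directions
b = rng.uniform(-2.0, 2.0, size=width)              # inner biases
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

X = rng.uniform(-1.0, 1.0, size=(n_samples, d))
H = sigmoid(X @ A.T + b)                            # hidden-layer outputs, shape (n, width)
c, *_ = np.linalg.lstsq(H, target(X), rcond=None)   # outer weights

X_test = rng.uniform(-1.0, 1.0, size=(2000, d))
err = np.max(np.abs(sigmoid(X_test @ A.T + b) @ c - target(X_test)))
print(f"max error on test points: {err:.3e}")
```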


Journal ArticleDOI
TL;DR: Several algorithms are proposed to construct multivariate biorthogonal wavelets with any general dilation matrix such that the wavelet filters have any prescribed number of vanishing moments.
Abstract: We present a concrete method to build discrete biorthogonal systems such that the wavelet filters have any number of vanishing moments. Several algorithms are proposed to construct multivariate biorthogonal wavelets with any general dilation matrix and arbitrary order of vanishing moments. Examples are provided to illustrate the general theory and the advantages of the algorithms.

Journal ArticleDOI
TL;DR: A class of polynomial frames is introduced for analyzing data on the surface of the unit sphere of a Euclidean space; the frames are well localized and stable with respect to all the Lp norms.
Abstract: We introduce a class of polynomial frames suitable for analyzing data on the surface of the unit sphere of a Euclidean space. Our frames consist of polynomials, but are well localized, and are stable with respect to all the L p norms. The frames belonging to higher and higher scale wavelet spaces have more and more vanishing moments.

Journal ArticleDOI
TL;DR: Two examples of the construction of symmetric/antisymmetric orthogonal multiwavelets of multiplicity 3 with dilation factor 2 and multiplicity 2 with dilation factor 3 are presented to demonstrate the use of these parameterizations of orthogonal multifilter banks.
Abstract: A complete parameterization for the m‐channel FIR orthogonal multifilter banks is provided based on the lattice structure of the paraunitary systems. Two forms of complete factorization of the m‐channel FIR orthogonal multifilter banks for symmetric/antisymmetric scaling functions and multiwavelets with the same symmetric center (1/2)(1 + γ + γ/(m − 1)) for some nonnegative integer γ are obtained. For the case of multiplicity 2 and dilation factor m = 2, the result of the factorization shows that if the scaling function Φ and multiwavelet Ψ are symmetric/antisymmetric about the same symmetric center γ + 1/2 for some nonnegative integer γ, then one of the components of Φ (respectively Ψ) is symmetric and the other is antisymmetric. Two examples of the construction of symmetric/antisymmetric orthogonal multiwavelets of multiplicity 3 with dilation factor 2 and multiplicity 2 with dilation factor 3 are presented to demonstrate the use of these parameterizations of orthogonal multifilter banks.

Journal ArticleDOI
TL;DR: An analog of the classical Bernstein operator is introduced and it is shown that generalized Bernstein polynomials of a continuous function converge to this function, and a convergence result is also proved for degree elevation of the generalized polynomials.
Abstract: A class of generalized polynomials is considered consisting of the null spaces of certain differential operators with constant coefficients. This class strictly contains ordinary polynomials and appropriately scaled trigonometric polynomials. An analog of the classical Bernstein operator is introduced and it is shown that generalized Bernstein polynomials of a continuous function converge to this function. A convergence result is also proved for degree elevation of the generalized polynomials. Moreover, the geometric nature of these functions is discussed and a connection with certain rational parametric curves is established.
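
For orientation, a sketch of the classical Bernstein operator that the paper generalizes; the generalized operator built from null spaces of differential operators is not reproduced, and the test function is an arbitrary choice.

```python
# Classical Bernstein operator on [0, 1]:
#   (B_n f)(x) = sum_{k=0..n} f(k/n) * C(n, k) * x^k * (1 - x)^(n - k).
import numpy as np
from math import comb

def bernstein(f, n, x):
    x = np.asarray(x, dtype=float)
    k = np.arange(n + 1)
    binom = np.array([comb(n, kk) for kk in k], dtype=float)
    basis = binom * x[:, None]**k * (1.0 - x[:, None])**(n - k)
    return basis @ f(k / n)

f = lambda t: np.abs(t - 0.5)                 # continuous but not smooth at 1/2
x = np.linspace(0.0, 1.0, 5)
for n in (10, 50, 200):
    print(n, np.max(np.abs(bernstein(f, n, x) - f(x))))   # uniform convergence as n grows
```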

Journal ArticleDOI
TL;DR: It is shown that training neural networks and similar network approximation techniques are equivalent to least-squares collocation for a corresponding integral equation with mollified data.
Abstract: This paper is devoted to the analysis of network approximation in the framework of approximation and regularization theory. It is shown that training neural networks and similar network approximation techniques are equivalent to least-squares collocation for a corresponding integral equation with mollified data. Results about convergence and convergence rates for exact data are derived based upon well-known convergence results about least-squares collocation. Finally, the stability properties with respect to errors in the data are examined and stability bounds are obtained, which yield rules for the choice of the number of network elements.

Journal ArticleDOI
TL;DR: A novel concept, the logarithmic Gauss maps of plane curves, plays a key role in this process, furnishing geometrical insights that parallel those associated with the “ordinary” Gauss map.
Abstract: Minkowski geometric algebra is concerned with the complex sets populated by the sums and products of all pairs of complex numbers selected from given complex‐set operands. Whereas Minkowski sums (under vector addition in Rn) have been extensively studied, from both the theoretical and computational perspective, Minkowski products in R2 (induced by the multiplication of complex numbers) have remained relatively unexplored. The complex logarithm reveals a close relation between Minkowski sums and products, thereby allowing algorithms for the latter to be derived through natural adaptations of those for the former. A novel concept, the logarithmic Gauss maps of plane curves, plays a key role in this process, furnishing geometrical insights that parallel those associated with the “ordinary” Gauss map. As a natural generalization of Minkowski sums and products, the computation of “implicitly‐defined” complex sets (populated by general functions of values drawn from given sets) is also considered. By interpreting them as one‐parameter families of curves, whose envelopes contain the set boundaries, algorithms for evaluating such sets are sketched.
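
The basic definitions can be illustrated on finite samples. The sketch below forms the Minkowski sum and product of two sampled circles and checks the logarithm relation the abstract mentions; the sampled-set representation is only an illustration, not the paper's curve and envelope algorithms.

```python
# Minkowski sum and product of two complex sets, illustrated on finite samples:
#   A (+) B = {a + b : a in A, b in B},    A (x) B = {a * b : a in A, b in B}.
import numpy as np

def sample_circle(center, radius, n=200):
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return center + radius * np.exp(1j * t)

A = sample_circle(2.0 + 1.0j, 0.5)
B = sample_circle(-1.0 + 0.5j, 0.3)

mink_sum = (A[:, None] + B[None, :]).ravel()    # all pairwise sums
mink_prod = (A[:, None] * B[None, :]).ravel()   # all pairwise products

# The complex logarithm turns Minkowski products into Minkowski sums:
via_log = np.exp(np.log(A)[:, None] + np.log(B)[None, :]).ravel()
print(np.allclose(mink_prod, via_log))          # True (neither circle encloses 0)
```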

Journal ArticleDOI
TL;DR: An approach is described to the numerical solution of order conditions for Runge–Kutta methods whose solutions evolve on a given manifold based on least squares minimization using the Levenberg–Marquardt algorithm.
Abstract: An approach is described to the numerical solution of order conditions for Runge–Kutta methods whose solutions evolve on a given manifold. This approach is based on least squares minimization using the Levenberg–Marquardt algorithm. Methods of order four and five are constructed and numerical experiments are presented which confirm that the derived methods have the expected order of accuracy.
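
The idea of solving order conditions by least squares can be illustrated on a much simpler problem than the manifold case treated in the paper: the classical order-3 conditions for an explicit 3-stage Runge–Kutta method with the abscissae fixed. The setup below (fixed c2, c3, and SciPy's Levenberg–Marquardt driver) is an illustrative stand-in, not the paper's system of conditions.

```python
# Order conditions solved by least squares (Levenberg-Marquardt), on a toy
# problem: the classical order-3 conditions for an explicit 3-stage method with
# abscissae fixed at c2 = 1/2, c3 = 1 (then a21 = c2 and a31 = c3 - a32).
import numpy as np
from scipy.optimize import least_squares

c2, c3 = 0.5, 1.0

def order_conditions(p):
    b1, b2, b3, a32 = p
    return np.array([
        b1 + b2 + b3 - 1.0,                   # sum of weights = 1
        b2 * c2 + b3 * c3 - 1.0 / 2.0,        # b . c   = 1/2
        b2 * c2**2 + b3 * c3**2 - 1.0 / 3.0,  # b . c^2 = 1/3
        b3 * a32 * c2 - 1.0 / 6.0,            # b . A c = 1/6
    ])

sol = least_squares(order_conditions, x0=np.ones(4), method="lm")
print(sol.x)                                    # unique solution: b = (1/6, 2/3, 1/6), a32 = 2
print(np.max(np.abs(order_conditions(sol.x))))  # residual of the order conditions
```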

Journal ArticleDOI
TL;DR: A class V of sequences is found such that the condition τ ∉ V is necessary and sufficient for convergence of the weak greedy algorithm with weakness sequence τ for each f and all Hilbert spaces H and dictionaries D.
Abstract: We find a class V of sequences such that the condition τ ∉ V is necessary and sufficient for convergence of the weak greedy algorithm with weakness sequence τ for each f and all Hilbert spaces H and dictionaries D. We denote by V the class of sequences x = {x_k}_{k=1}^∞, x_k ≥ 0, k = 1, 2, ..., with the following property: there exists a sequence 0 = q_0 < q_1 < ...

Journal ArticleDOI
Yuan Xu
TL;DR: In this article, the authors discuss polynomial interpolation in several variables from a polynomial ideal point of view, and show that for each monomial order there is a unique polynomial that interpolates on the points in the variety.
Abstract: We discuss polynomial interpolation in several variables from a polynomial ideal point of view. One of the results states that if I is a real polynomial ideal with real variety and if its codimension is equal to the cardinality of its variety, then for each monomial order there is a unique polynomial that interpolates on the points in the variety. The result is motivated by the problem of constructing cubature formulae, and it leads to a theorem on cubature formulae which can be considered an extension of Gaussian quadrature formulae to several variables.

Journal ArticleDOI
TL;DR: Current research on high dimensional numerical integration centers on three issues: lower bounds (showing that multivariate integration can be intractable and suffer the curse of dimension), the existence of efficient algorithms, and the construction of efficient algorithms.
Abstract: There has been an increasing interest in studying high dimensional numerical integration. This problem occurs in many applications. In particle physics and in finance the number of variables can be hundreds or even thousands. Also path integrals, for example, with respect to the Wiener measure, are important. Formally, the dimension is infinity. Path integrals can also be needed since they are often the solution of partial differential equations given, for example, by the Feynman–Kac formula. Most of the current research on high dimensional numerical integration studies one (or more) of the following issues:
• Lower bounds. For certain classes of functions one can prove exponential lower bounds on the cost of computing an approximation to a multivariate integral. This means that multivariate integration is intractable and suffers the curse of dimension. This is in contrast to discrete complexity theory, where it is only believed, not proved, that certain problems are intractable. Some lower bounds are for a natural class of algorithms (such as quadrature formulas with positive weights or quadrature formulas that yield a sparse grid), other lower bounds are for arbitrary algorithms.
• The existence of efficient algorithms. Often such results are proved by probabilistic methods, i.e., the proof is non-constructive. For certain classes of functions we know that the number of sample points (a good measure for the cost of algorithms) must increase only polynomially with the dimension. Such problems are called tractable. An obvious (but often very difficult) question is whether we can give constructive proofs of these results.
• The construction of efficient algorithms. In some cases one can prove that multivariate integration is tractable by constructing polynomial-time algorithms. Also polynomial exactness, numerical results and the discussion of implementation details are of interest.
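
As a small illustration of the tractability theme in the issues listed above: plain Monte Carlo has error of order n^{-1/2} independently of the dimension for integrands of bounded variance, whereas a tensor-product rule needs m^d points. The toy integrand below is an arbitrary choice.

```python
# Why plain Monte Carlo escapes the curse of dimension for integrands of
# bounded variance: its error behaves like sigma/sqrt(n) independently of d.
# Toy integrand on [0,1]^d:
#   f(x) = prod_j (1 + (x_j - 1/2)/j),   exact integral = 1, variance bounded in d.
import numpy as np

def f(X):
    d = X.shape[1]
    return np.prod(1.0 + (X - 0.5) / np.arange(1, d + 1), axis=1)

rng = np.random.default_rng(0)
n = 10_000
for d in (5, 50, 500):
    X = rng.random((n, d))
    print(f"d = {d:3d}   MC estimate with {n} points: {f(X).mean():.4f}   (exact: 1)")
```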

Journal ArticleDOI
TL;DR: This paper considers the Pocklington integro–differential equation for the current induced on a straight, thin wire by an incident harmonic electromagnetic field and obtains a coercive or Gårding type inequality for the associated operator.
Abstract: In this paper we consider the Pocklington integro–differential equation for the current induced on a straight, thin wire by an incident harmonic electromagnetic field. We show that this problem is well posed in suitable fractional order Sobolev spaces and obtain a coercive or Gårding type inequality for the associated operator. Combining this coercive inequality with a standard abstract formulation of the Galerkin method we obtain rigorous convergence results for Galerkin type numerical solutions of Pocklington's equation, and we demonstrate that certain convergence rates hold for these methods.

Journal ArticleDOI
TL;DR: An algorithm is derived for generating the information needed to pass efficiently between multi-indices of neighboring degrees for the construction and evaluation of interpolating polynomials and in the construction of good bases for polynomial ideals.
Abstract: An algorithm is derived for generating the information needed to pass efficiently between multi-indices of neighboring degrees, of use in the construction and evaluation of interpolating polynomials and in the construction of good bases for polynomial ideals.
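
The objects being organized here are the multi-indices of each total degree. A plain reference enumeration is sketched below, with no attempt to reproduce the paper's data structure for passing between neighboring degrees.

```python
# Reference enumeration of the multi-indices of total degree n in d variables.
def multi_indices(d, n):
    """All tuples (a_1, ..., a_d) of nonnegative integers with a_1 + ... + a_d = n."""
    if d == 1:
        return [(n,)]
    return [(k,) + rest for k in range(n + 1) for rest in multi_indices(d - 1, n - k)]

# Degrees 0, 1, 2 in three variables; the count at degree n is C(n + d - 1, d - 1).
for n in range(3):
    print(n, multi_indices(3, n))
```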

Journal ArticleDOI
TL;DR: A fast method for evaluating multiplications K_n v, avoiding the need to evaluate K_n explicitly and using fewer than O(n²) operations, is presented, demonstrating that there is a large speedup when compared to a direct approach to discretization and solution of the radiosity equation.
Abstract: A “fast matrix–vector multiplication method” is proposed for iteratively solving discretizations of the radiosity equation (I − K)u = E. The method is illustrated by applying it to a discretization based on the centroid collocation method. A convergence analysis is given for this discretization, yielding a discretized linear system (I − K_n)u_n = E_n. The main contribution of the paper is the presentation of a fast method for evaluating multiplications K_n v, avoiding the need to evaluate K_n explicitly and using fewer than O(n²) operations. A detailed numerical example concludes the paper, and it illustrates that there is a large speedup when compared to a direct approach to discretization and solution of the radiosity equation. The paper is restricted to the surface S being unoccluded, a restriction to be removed in a later paper.
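
The general pattern behind such a method, an iterative solver driven by a user-supplied matrix–vector product, can be sketched as follows. The matvec used here is a cheap stand-in, not the paper's fast evaluation of the radiosity kernel, and GMRES is just one of several possible iterative solvers.

```python
# Pattern behind the method: solve (I - K_n) u_n = E_n with an iterative solver
# that only needs the action v -> K_n v, never the matrix K_n itself.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

n = 2000
rng = np.random.default_rng(0)
diag = 0.25 * rng.random(n)                 # stand-in "kernel": diagonal plus rank-one part
col = rng.random(n) / (4 * n)

def K_matvec(v):
    # O(n) stand-in for a fast evaluation of K_n v (NOT the radiosity kernel).
    return diag * v + col * np.sum(v)

A = LinearOperator((n, n), matvec=lambda v: np.ravel(v) - K_matvec(np.ravel(v)))  # I - K_n
E = np.ones(n)
u, info = gmres(A, E)
print(info, np.max(np.abs(u - K_matvec(u) - E)))   # info = 0 and a small residual expected
```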

Journal ArticleDOI
TL;DR: Bounds on the exponents of sparse grids for L2‐discrepancy and average case d‐dimensional integration with respect to the Wiener sheet measure are studied to show that sparse grids provide a rather poor exponent.
Abstract: We study bounds on the exponents of sparse grids for L2‐discrepancy and average case d‐dimensional integration with respect to the Wiener sheet measure. Our main result is that the minimal exponent of sparse grids for these problems is bounded from below by 2.1933. This shows that sparse grids provide a rather poor exponent since, due to Wasilkowski and Woźniakowski [16], the minimal exponent of L2‐discrepancy of arbitrary point sets is at most 1.4778. The proof of the latter, however, is non‐constructive. The best known constructive upper bound is still obtained by a particular sparse grid and equal to 2.4526....

Journal ArticleDOI
TL;DR: Sets of solvability of Hermite multivariate interpolation problems with the sum of multiplicities less than or equal to 2n + 1 are characterized, where n is the degree of the polynomial space.
Abstract: In this paper we characterize sets of solvability of Hermite multivariate interpolation problems with the sum of multiplicities less than or equal to 2n + 1, where n is the degree of the polynomial space. This can be viewed as a natural generalization of a well-known result of Severi (1921).

Journal ArticleDOI
Michael S. Floater, Tom Lyche
TL;DR: It is shown how A_n can be viewed as an inverse to the Bernstein polynomial operator and that the derivatives A_n(g)^(r) converge uniformly to g^(r) at the rate 1/n for all r.
Abstract: It is well known that the degree‐raised Bernstein–Bézier coefficients of degree n of a polynomial g converge to g at the rate 1/n. In this paper we consider the polynomial A_n(g) of degree ≤ n interpolating the coefficients. We show how A_n can be viewed as an inverse to the Bernstein polynomial operator and that the derivatives A_n(g)^(r) converge uniformly to g^(r) at the rate 1/n for all r. We also give an asymptotic expansion of Voronovskaya type for A_n(g) and discuss some shape preserving properties of this polynomial.
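
The degree-raising step referred to above is easy to state and test. The sketch below elevates the Bernstein–Bézier coefficients of a polynomial repeatedly and observes the 1/n convergence of the coefficients to the function values; the interpolating polynomial A_n(g) itself and its Voronovskaya expansion are not constructed.

```python
# Degree raising of Bernstein-Bezier coefficients on [0, 1]:
#   c_i = (i/(m+1)) * b_{i-1} + (1 - i/(m+1)) * b_i     (degree m -> m+1).
# The raised coefficients of a polynomial g approach the values g(i/N) at rate 1/N.
import numpy as np

def elevate(b):
    m = len(b) - 1
    c = np.empty(m + 2)
    c[0], c[m + 1] = b[0], b[m]
    i = np.arange(1, m + 1)
    c[1:m + 1] = (i / (m + 1)) * b[i - 1] + (1.0 - i / (m + 1)) * b[i]
    return c

g = lambda t: t**2
b = np.array([0.0, 0.0, 1.0])                 # Bezier coefficients of g(t) = t^2, degree 2
for target_degree in (10, 100, 1000):
    c = b.copy()
    while len(c) - 1 < target_degree:
        c = elevate(c)
    N = len(c) - 1
    print(N, np.max(np.abs(c - g(np.arange(N + 1) / N))))   # error decays like 1/N
```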

Journal ArticleDOI
TL;DR: It is proved that all the forms (linear functionals) arising are third degree forms, and an introduction to third degree forms is provided.
Abstract: We deal with the 2-orthogonal, 2-symmetric self-associated sequence (2-orthogonal Tchebychev polynomials) and its cubic components. We prove that all the forms (linear functionals) arising are third degree forms. Therefore, an introduction to third degree forms is provided. We look for the connection between these components, which are 2-orthogonal with respect to the functional vector ᵗ(w_0^μ, w_1^μ), and orthogonal sequences with respect to w_0^μ, μ = 0, 1, 2. Associated forms (w_0^μ)^(1) and their inverses (w_0^μ)^(-1) are also studied through the symmetrized w_0^μ, μ = 0, 1, 2. Further, we give integral representations for some of these forms.

Journal ArticleDOI
TL;DR: It is shown that the uncertainty products of uniformly local, uniformly regular and uniformly stable scaling functions and wavelets are uniformly bounded from above by a constant.
Abstract: This paper is on the angle–frequency localization of periodic scaling functions and wavelets. It is shown that the uncertainty products of uniformly local, uniformly regular and uniformly stable scaling functions and wavelets are uniformly bounded from above by a constant. Results for the construction of such scaling functions and wavelets are also obtained. As an illustration, scaling functions and wavelets associated with a family of generalized periodic splines are studied. This family is generated by periodic weighted convolutions, and it includes the well‐known periodic B‐splines and trigonometric B‐splines.

Journal ArticleDOI
Yongsun Kim, Sungyun Lee
TL;DR: A modified version of the Mini finite element (or the Mini* finite element) for the Stokes problem in ℝ2 orℝ3 is analyzed and the stability is verified with the aid of the macroelement technique introduced by Stenberg.
Abstract: We analyze a modified version of the Mini finite element (or the Mini* finite element) for the Stokes problem in ℝ2 or ℝ3. The cross‐grid element of order one in ℝ3 is also analyzed. The stability is verified with the aid of the macroelement technique introduced by Stenberg. Each of these methods converges with first order in h as the Mini element does. Numerical tests are given for the Mini* element in comparison with the Mini element when Ω is a unit square in ℝ2.

Journal ArticleDOI
TL;DR: In this paper, the authors construct A-stable and L-stable diagonally implicit Runge–Kutta methods of which the diagonal vector in the Butcher matrix has a minimal maximum norm.
Abstract: We construct A‐stable and L‐stable diagonally implicit Runge–Kutta methods of which the diagonal vector in the Butcher matrix has a minimal maximum norm. If the implicit Runge–Kutta relations are iteratively solved by means of the approximately factorized Newton process, then such iterated Runge–Kutta methods are suitable methods for integrating shallow water problems in the sense that the stability boundary is relatively large and that the usually quite fine vertical resolution of the discretized spatial domain is not involved in the stability condition.
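
A- and L-stability of a diagonally implicit method can be checked numerically through its stability function R(z) = 1 + z b^T (I − zA)^{-1} 1. The tableau used below is the well-known two-stage, order-2, L-stable SDIRK method with γ = 1 − 1/√2, used only as a stand-in and not one of the minimal-max-norm methods constructed in the paper.

```python
# Numerical check of A- and L-stability via the stability function
#   R(z) = 1 + z * b^T (I - z*A)^{-1} * 1.
import numpy as np

gamma = 1.0 - 1.0 / np.sqrt(2.0)
A = np.array([[gamma, 0.0], [1.0 - gamma, gamma]])   # 2-stage SDIRK tableau (stand-in)
b = np.array([1.0 - gamma, gamma])
e = np.ones(2)

def R(z):
    return 1.0 + z * (b @ np.linalg.solve(np.eye(2) - z * A, e))

# A-stability: |R(iy)| <= 1 on the (sampled) imaginary axis; with analyticity in
# the left half-plane this gives |R| <= 1 there by the maximum principle.
on_axis = max(abs(R(1j * y)) for y in np.linspace(-100.0, 100.0, 2001))
# L-stability: R(z) -> 0 as Re z -> -infinity (checked at a very negative z).
print(f"max |R(iy)| sampled: {on_axis:.6f}    |R(-1e8)| = {abs(R(-1e8)):.2e}")
```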