Journal ArticleDOI

A note on total degree polynomial optimization by Chebyshev grids

01 Jan 2018-Optimization Letters (Springer Berlin Heidelberg)-Vol. 12, Iss: 1, pp 63-71
TL;DR: Using the approximation theory notions of polynomial mesh and Dubiner distance in a compact set, error estimates for total degree polynomial optimization on Chebyshev grids of the hypercube are derived.
Abstract: Using the approximation theory notions of polynomial mesh and Dubiner distance in a compact set, we derive error estimates for total degree polynomial optimization on Chebyshev grids of the hypercube.
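To make the grid-based approach concrete, here is a minimal sketch (not the paper's exact construction or constants): the minimum of a bivariate polynomial over the square is approximated by the discrete minimum over a tensor Chebyshev grid, with a hypothetical refinement factor m trading grid size against accuracy.

```python
# Minimal sketch, not the paper's exact mesh or constants: approximate the
# minimum of a bivariate polynomial of total degree n on [-1,1]^2 by the
# discrete minimum over a tensor Chebyshev grid. The factor m is a
# hypothetical tuning parameter trading grid size against accuracy.
import numpy as np

def chebyshev_points(l):
    """Zeros of the Chebyshev polynomial T_l on [-1, 1]."""
    k = np.arange(l)
    return np.cos((2 * k + 1) * np.pi / (2 * l))

def chebyshev_grid_min(p, n, m=4):
    """Discrete minimum of p (total degree <= n) over an (m*n) x (m*n) grid."""
    x = chebyshev_points(m * n)
    X, Y = np.meshgrid(x, x)
    values = p(X, Y)
    i = np.unravel_index(np.argmin(values), values.shape)
    return values[i], (X[i], Y[i])

# Example: a total-degree-4 polynomial on the square.
p = lambda x, y: x**4 - 2 * x**2 * y + y**2 + 0.5 * x
print(chebyshev_grid_min(p, n=4))
```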


Citations
Journal ArticleDOI
TL;DR: Norming meshes for polynomial optimization are constructed on general convex bodies via the classical Markov inequality, and on smooth convex bodies via a tangential Markov inequality; the constructions rest on the Bieberbach volume inequality, the Leichtweiss inequality on affine breadth eccentricity, and the Rolling Ball Theorem.
Abstract: We construct norming meshes for polynomial optimization by the classical Markov inequality on general convex bodies in $$\mathbb{R}^d$$, and by a tangential Markov inequality via an estimate of the Dubiner distance on smooth convex bodies. These allow one to compute a $$(1-\varepsilon)$$-approximation to the minimum of any polynomial of degree not exceeding n by $$\mathcal{O}\left((n/\sqrt{\varepsilon})^{\alpha d}\right)$$ samples, with $$\alpha=2$$ in the general case, and $$\alpha=1$$ in the smooth case. Such constructions are based on three cornerstones of convex geometry: the Bieberbach volume inequality and the Leichtweiss inequality on the affine breadth eccentricity, and the Rolling Ball Theorem, respectively.
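As a rough numerical illustration of the cardinality bound (the figures below are only an example worked out from the stated rate, not taken from the paper): for $$d=2$$, degree $$n=10$$ and tolerance $$\varepsilon=0.01$$, one gets $$\left(n/\sqrt{\varepsilon}\right)^{\alpha d}=(10/0.1)^{2\alpha}=100^{2\alpha}$$, i.e. on the order of $$10^4$$ samples in the smooth case ($$\alpha=1$$) against $$10^8$$ in the general case ($$\alpha=2$$), up to the constants hidden in the $$\mathcal{O}(\cdot)$$.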

11 citations

Journal ArticleDOI
TL;DR: It is shown that Lasserre measure-based hierarchies for polynomial optimization can be implemented by directly computing the discrete minimum at a suitable set of algebraic quadrature nodes.
Abstract: We show that Lasserre measure-based hierarchies for polynomial optimization can be implemented by directly computing the discrete minimum at a suitable set of algebraic quadrature nodes. The sampling cardinality can be much lower than in other approaches based on grids or norming meshes. In this way, all the vast literature on multivariate algebraic quadrature becomes relevant to polynomial optimization.
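As a hedged illustration of the final "discrete minimum at quadrature nodes" step only (the node set below, a tensor Gauss-Legendre grid on the square, is a stand-in; the paper's quadrature formulas and cardinalities may differ):

```python
# Hedged sketch of the discrete-minimum step only: evaluate the polynomial at
# the nodes of a positive algebraic quadrature formula and take the smallest
# value. Tensor Gauss-Legendre nodes on [-1,1]^2 are used here purely for
# illustration; the paper's node sets and cardinalities may differ.
import numpy as np

def quadrature_node_min(p, k):
    """Discrete minimum of p over the k x k Gauss-Legendre product grid."""
    nodes, _ = np.polynomial.legendre.leggauss(k)   # 1D Gauss-Legendre nodes
    X, Y = np.meshgrid(nodes, nodes)
    return p(X, Y).min()

p = lambda x, y: (x - 0.3)**2 + (y + 0.1)**2 - 0.2 * x * y
print(quadrature_node_min(p, k=20))
```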

10 citations

Journal ArticleDOI
TL;DR: It is shown that the notion of polynomial mesh (norming set), used to provide discretizations of a compact set that are nearly optimal for certain approximation theoretic purposes, can also be used to obtain finitely supported near G-optimal designs for polynomial regression.
Abstract: We show that the notion of polynomial mesh (norming set), used to provide discretizations of a compact set nearly optimal for certain approximation theoretic purposes, can also be used to obtain finitely supported near G-optimal designs for polynomial regression. We approximate such designs by a standard multiplicative algorithm, followed by measure concentration via Caratheodory-Tchakaloff compression.
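For orientation, here is a minimal sketch of a standard multiplicative (Titterington-type) iteration for near D/G-optimal design weights on a finite candidate mesh; it illustrates the kind of algorithm the abstract refers to, is not the authors' code, and omits the final Caratheodory-Tchakaloff compression step.

```python
# Sketch of a standard multiplicative algorithm for near D/G-optimal design
# weights on a finite mesh (by Kiefer-Wolfowitz equivalence, D- and G-optimal
# designs coincide). Not the authors' implementation; the measure
# concentration via Caratheodory-Tchakaloff compression is omitted.
import numpy as np

def multiplicative_design(V, iters=500):
    """V: (M, p) matrix of p basis functions evaluated at M mesh points.
    Returns mesh weights w (summing to 1) that are approximately G-optimal."""
    M, p = V.shape
    w = np.full(M, 1.0 / M)
    for _ in range(iters):
        G = V.T @ (w[:, None] * V)                            # information matrix
        d = np.einsum('ij,jk,ik->i', V, np.linalg.inv(G), V)  # variance function
        w *= d / p                                            # multiplicative update
        w /= w.sum()
    return w

# Example: degree-3 polynomial regression on a univariate Chebyshev mesh.
x = np.cos(np.linspace(0.0, np.pi, 60))
V = np.polynomial.chebyshev.chebvander(x, 3)                  # basis up to degree 3
print(multiplicative_design(V).round(3))
```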

8 citations

Journal ArticleDOI
TL;DR: The notion of Dubiner distance is extended from algebraic to trigonometric polynomials on subintervals of the period, and its explicit form is obtained by the Szegő variant of Videnskii inequality, which allows improving previous estimates for Chebyshev-like trigonometric norming meshes.
Abstract: We extend the notion of Dubiner distance from algebraic to trigonometric polynomials on subintervals of the period, and we obtain its explicit form by the Szegő variant of Videnskii inequality. This allows us to improve previous estimates for Chebyshev-like trigonometric norming meshes, and suggests a possible use of such meshes in the framework of multivariate polynomial optimization on regions defined by circular arcs.

8 citations


Cites background or methods from "A note on total degree polynomial optimization by Chebyshev grids"

  • ...It is proved in [15] and the proof is essentially outlined also in [1]....


  • ...(28) As already observed in [15, 20] for mesh-based polynomial optimization on cubes, this is a sort of brute-force approach, that could be useful (in low dimension and with relatively small degrees) when a rough estimate of the extremal values is sought without resorting to more sophisticated optimization methods, or conversely as a starting guess for such methods....


  • ...Using the trigonometric Dubiner distance, a similar improvement can be obtained also for the constants of the general Jacobi-like norming meshes in [15]....


  • ...This is exactly the error bound obtainable for algebraic polynomial optimization on a real interval by approximately mn Chebyshev nodes, in view of the classical Ehlich-Zeller estimates in [10] (see also [4] and [15, 20] with the references therein).... (A sketch of this bound is given after these excerpts.)

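A hedged reconstruction of the standard argument behind the bound mentioned in the last excerpt (the precise statements and constants in [10, 15, 20] may differ): if X is the set of the l zeros of T_l with l > n and p is a polynomial of degree at most n, the Ehlich-Zeller inequality gives $$\max_{[-1,1]}|p|\le \frac{1}{\cos\theta}\,\max_{X}|p|,\qquad \theta=\frac{n\pi}{2l};$$ applying it to the nonnegative polynomial $$\max p - p$$ yields $$\min_{X}p-\min_{[-1,1]}p\le \bigl(1-\cos\theta\bigr)\Bigl(\max_{[-1,1]}p-\min_{[-1,1]}p\Bigr),$$ so with $$l\approx mn$$ nodes one has $$\theta\approx \pi/(2m)$$ and an error of order $$1/m^{2}$$ relative to the range of p.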

Journal Article
TL;DR: In this paper, it was shown that L∞-norming sets for finite-dimensional multivariate function spaces on compact sets are stable under small perturbations, which implies stability of interpolation operator norms (Lebesgue constants) in spaces of algebraic and trigonometric polynomials.
Abstract: We prove that L∞-norming sets for finite-dimensional multivariate function spaces on compact sets are stable under small perturbations. This implies stability of interpolation operator norms (Lebesgue constants) in spaces of algebraic and trigonometric polynomials. 2010 AMS subject classification: 41A10, 41A63, 42A15, 65D05.

6 citations

References
Journal ArticleDOI
TL;DR: A new trust region approach for minimizing nonlinear functions subject to simple bounds is proposed, in which an improved step is obtained by minimizing a quadratic model subject only to an ellipsoidal constraint; the iterates generated by these methods are always strictly feasible.
Abstract: We propose a new trust region approach for minimizing nonlinear functions subject to simple bounds. By choosing an appropriate quadratic model and scaling matrix at each iteration, we show that it is not necessary to solve a quadratic programming subproblem, with linear inequalities, to obtain an improved step using the trust region idea. Instead, a solution to a trust region subproblem is defined by minimizing a quadratic function subject only to an ellipsoidal constraint. The iterates generated by these methods are always strictly feasible. Our proposed methods reduce to a standard trust region approach for the unconstrained problem when there are no upper or lower bounds on the variables. Global and quadratic convergence of the methods is established; preliminary numerical experiments are reported.
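In generic notation (ours, not the paper's), the ellipsoid-constrained subproblem described above takes the form $$\min_{s\in\mathbb{R}^{d}}\; g_k^{\mathsf T}s+\tfrac{1}{2}\,s^{\mathsf T}B_k s\quad \text{subject to}\quad \Vert D_k s\Vert_2\le \Delta_k,$$ where $$g_k$$ is the gradient, $$B_k$$ the Hessian of the quadratic model, $$D_k$$ the iteration-dependent scaling matrix, and $$\Delta_k$$ the trust-region radius; in the approach described above, the scaling matrix encodes the distance to the bounds and the iterates remain strictly feasible.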

3,026 citations

Journal ArticleDOI
TL;DR: In the numerical treatment of a real continuous function f on a compact interval J, f is first replaced by a polynomial R of high order n that reproduces it almost exactly; R is then approximated by a polynomial Q of lower order, and the maximum of the error polynomial P = Q − R is estimated numerically from its values at suitable grid points.
Abstract: In the numerical treatment of a real continuous function f on a compact interval J one often proceeds as follows. First, f is replaced by a polynomial R that reproduces f practically exactly but has a high order n. Then R is approximated by a polynomial Q of lower order. It is often useful or necessary to determine numerically the maximum absolute value of the error polynomial P = Q − R. To this end, one evaluates P at certain grid points and concludes with estimates of the following type.

151 citations


"A note on total degree polynomial o..." refers background in this paper

  • ...Remark 1 In the one-dimensional case with K = [−1, 1], taking as X the set of l Chebyshev points (the zeros of T_l(x) = cos(l arccos(x)), l > n), (5) is a well-known inequality obtained by Ehlich and Zeller in 1964, where θ = nπ/(2l); see [15] and [6], where the case of l+1 Chebyshev-Lobatto points is also considered....


Journal ArticleDOI
TL;DR: An object-oriented MATLAB system is described that extends the capabilities of Chebfun to smooth functions of two variables defined on rectangles by using iterative Gaussian elimination with complete pivoting to form “chebfun2” objects representing low rank approximations.
Abstract: An object-oriented MATLAB system is described that extends the capabilities of Chebfun to smooth functions of two variables defined on rectangles. Functions are approximated to essentially machine precision by using iterative Gaussian elimination with complete pivoting to form “chebfun2” objects representing low rank approximations. Operations such as integration, differentiation, function evaluation, and transforms are particularly efficient. Global optimization, the singular value decomposition, and rootfinding are also extended to chebfun2 objects. Numerical applications are presented.
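A minimal matrix analogue of the elimination idea described above (a sketch only: Chebfun2 works with bivariate functions through Chebyshev interpolants rather than a fixed sample matrix):

```python
# Matrix analogue of iterative Gaussian elimination with complete pivoting for
# low-rank approximation: repeatedly subtract the rank-1 "cross" defined by the
# largest remaining entry of the residual. Chebfun2 applies this idea to
# functions via Chebyshev interpolation; here we only use a fixed sample grid.
import numpy as np

def ge_low_rank(A, tol=1e-10, max_rank=50):
    R = A.astype(float).copy()
    cols, rows = [], []
    for _ in range(max_rank):
        i, j = np.unravel_index(np.argmax(np.abs(R)), R.shape)  # complete pivoting
        pivot = R[i, j]
        if abs(pivot) < tol:
            break
        cols.append(R[:, j] / pivot)
        rows.append(R[i, :].copy())
        R = R - np.outer(cols[-1], rows[-1])                    # eliminate the cross
    return np.array(cols).T, np.array(rows)                     # A ~ C @ Rk

# Example: samples of a smooth bivariate function on an 80 x 80 grid.
x = np.linspace(-1.0, 1.0, 80)
F = np.exp(-(x[:, None]**2 + x[None, :]**2)) * np.cos(x[:, None] * x[None, :])
C, Rk = ge_low_rank(F, tol=1e-8)
print(C.shape[1], np.max(np.abs(F - C @ Rk)))                   # numerical rank, error
```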

130 citations


"A note on total degree polynomial o..." refers methods in this paper

  • ...On the other hand, polynomial optimization on Chebyshev grids seems to have been studied essentially only via tensor-product polynomial spaces, see [16, 23] and also [22], where it is used by the functions min2 and max2 within the Matlab package Chebfun2 (square) (and more recently Chebfun3 for the cube, see [17])....


  • ..., the product of two univariate polynomials), it is evaluated at a Chebyshev n1 × n2 grid, and the discrete optimum used as a starting guess for a superlinearly convergent constrained trust region method, based on [8]; see [22] for a more detailed discussion.... (A sketch of this two-stage strategy is given after these excerpts.)

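A hedged analogue of the two-stage strategy described in these excerpts (the grid sizes and the use of SciPy's trust-constr solver are illustrative choices, not Chebfun2's internals):

```python
# Sketch of the two-stage strategy: evaluate on a coarse Chebyshev grid, then
# refine the best grid point with a bound-constrained trust-region solver.
# SciPy's trust-constr method stands in for the trust-region scheme of [8];
# it is not the same algorithm.
import numpy as np
from scipy.optimize import Bounds, minimize

def two_stage_min(f, n1=33, n2=33):
    x = np.cos(np.pi * np.arange(n1) / (n1 - 1))     # Chebyshev-Lobatto points
    y = np.cos(np.pi * np.arange(n2) / (n2 - 1))
    X, Y = np.meshgrid(x, y)
    V = f(X, Y)
    i = np.unravel_index(np.argmin(V), V.shape)
    x0 = np.array([X[i], Y[i]])                      # discrete optimum as seed
    res = minimize(lambda z: f(z[0], z[1]), x0, method="trust-constr",
                   bounds=Bounds([-1.0, -1.0], [1.0, 1.0]))
    return res.x, res.fun

f = lambda x, y: np.cos(5 * x) * np.sin(4 * y) + 0.3 * x * y
print(two_stage_min(f))
```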

Journal ArticleDOI
TL;DR: Uniform approximation of differentiable or analytic functions of one or several variables on a compact set K by a sequence of discrete least squares polynomials is studied; if K satisfies a Markov inequality, point evaluations on standard discretization grids provide nearly optimal approximants.
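A minimal sketch of discrete least squares polynomial approximation from point evaluations on a standard grid, in the spirit of the construction summarized above (the univariate setting and the Chebyshev basis below are illustrative choices):

```python
# Sketch: discrete least squares approximation of a smooth function by a
# polynomial of degree n, using point evaluations on a uniform grid and a
# Chebyshev basis for conditioning. Illustrative of the general idea only.
import numpy as np

def discrete_ls_fit(f, n, grid):
    V = np.polynomial.chebyshev.chebvander(grid, n)   # basis values on the grid
    coeffs, *_ = np.linalg.lstsq(V, f(grid), rcond=None)
    return np.polynomial.chebyshev.Chebyshev(coeffs)

f = np.exp
grid = np.linspace(-1.0, 1.0, 200)
p = discrete_ls_fit(f, n=10, grid=grid)
xs = np.linspace(-1.0, 1.0, 1000)
print(np.max(np.abs(f(xs) - p(xs))))                  # near-uniform error
```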

120 citations


"A note on total degree polynomial o..." refers methods in this paper

  • ...The notion of polynomial mesh was introduced in the seminal paper [7] and then used from both the theoretical and the computational point of view....


Journal ArticleDOI
TL;DR: This work considers the computational complexity of optimizing various classes of continuous functions over a simplex, hypercube or sphere and reviews known approximation results as well as negative results.
Abstract: We consider the computational complexity of optimizing various classes of continuous functions over a simplex, hypercube or sphere. These relatively simple optimization problems arise naturally from diverse applications. We review known approximation results as well as negative (inapproximability) results from the recent literature.

88 citations