
Markov inequalities, Dubiner distance, norming meshes and polynomial optimization on convex bodies

01 Jan 2019-Optimization Letters (Springer Verlag)-Vol. 13, Iss: 6, pp 1325-1343
TL;DR: The norming-mesh constructions are based on three cornerstones of convex geometry: the Bieberbach volume inequality and the Leichtweiss inequality on the affine breadth eccentricity, and the Rolling Ball Theorem, respectively.
Abstract: We construct norming meshes for polynomial optimization by the classical Markov inequality on general convex bodies in $${\mathbb {R}}^d$$ , and by a tangential Markov inequality via an estimate of the Dubiner distance on smooth convex bodies. These allow us to compute a $$(1-\varepsilon )$$ -approximation to the minimum of any polynomial of degree not exceeding n by $${\mathcal {O}}\left( (n/\sqrt{\varepsilon })^{\alpha d}\right) $$ samples, with $$\alpha =2$$ in the general case, and $$\alpha =1$$ in the smooth case. Such constructions are based on three cornerstones of convex geometry: the Bieberbach volume inequality and the Leichtweiss inequality on the affine breadth eccentricity, and the Rolling Ball Theorem, respectively.

Summary (2 min read)

1 Introduction

  • Sampling methods, typically on suitable grids, are one of the possible approaches in the vast literature on polynomial optimization theory, cf., e.g., [10, 11, 36] with the references therein.
  • All these notions can be given more generally for K ⊂ C^d, but the authors restrict to real compact sets. Polynomial meshes were formally introduced in the seminal paper [9] as a tool for studying the uniform convergence of discrete least squares polynomial approximation, and have since been studied from both the theoretical and the computational point of view throughout a series of papers.
  • This opens the way for a computational use of polynomial meshes in the framework of polynomial optimization, in view of the general elementary estimate given below.
  • A similar approach, though essentially in a tensor-product framework, was adopted also in [36].

3 General convex bodies

  • The bound (6) is clearly an overestimate that is attained only in special cases, for example with K = [0, L]^d.
  • From (13) and (15) the authors finally obtain an approximate bound on the cardinality card(A_n(ε)).
  • Indeed, a deep result of convex geometry (the Leichtweiss inequality [20]) asserts that, given the Loewner minimal volume ellipsoid enclosing a convex body K, and considering the regular affine transformation ψ that maps this ellipsoid into the unit Euclidean ball, one has diam(K′)/w(K′) ≤ √d, where K′ = ψ(K) (17); cf. also [15].
  • For an overview on the computation of Loewner ellipsoids the authors quote e.g. [30], with the references therein.
  • The authors stress that the cardinality estimate does not depend on the shape of the convex body.
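The Loewner ellipsoid mentioned above is computable in practice (the paper points to [30] for an overview). As our own illustration, not code from the paper, a standard route is Khachiyan's barycentric-coordinate ascent for the minimum-volume enclosing ellipsoid of a point cloud, e.g. the vertices of a polytope approximating K; all names below are ours:

```python
import numpy as np

def mvee(points, tol=1e-6):
    """Approximate the Loewner (minimum-volume enclosing) ellipsoid
    {x : (x - c)^T A (x - c) <= 1} of a point cloud, via Khachiyan's algorithm."""
    P = np.asarray(points, dtype=float)          # shape (N, d)
    N, d = P.shape
    Q = np.vstack([P.T, np.ones(N)])             # lift to homogeneous coords, (d+1, N)
    u = np.full(N, 1.0 / N)                      # barycentric weights on the points
    err = tol + 1.0
    while err > tol:
        X = Q @ np.diag(u) @ Q.T                 # (d+1, d+1) moment matrix
        M = np.einsum('ij,jk,ki->i', Q.T, np.linalg.inv(X), Q)
        j = np.argmax(M)                         # farthest point in the X-metric
        step = (M[j] - d - 1.0) / ((d + 1.0) * (M[j] - 1.0))
        new_u = (1.0 - step) * u
        new_u[j] += step
        err = np.linalg.norm(new_u - u)
        u = new_u
    c = P.T @ u                                  # ellipsoid center
    S = P.T @ np.diag(u) @ P - np.outer(c, c)
    A = np.linalg.inv(S) / d                     # ellipsoid shape matrix
    return A, c
```

Mapping K by the Cholesky factor of A (after centering at c) sends the ellipsoid to the unit ball, giving a representative K′ = ψ(K) whose aspect ratio is bounded as in (17).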

4 Smooth convex bodies

  • The norming meshes constructed in the previous sections by standard Markov inequalities are ultimately related to (affinely mapped) uniform grids.
  • The authors modify and improve the construction by tangential Markov inequalities and estimates of the Dubiner distance, obtaining nonuniform norming meshes of much lower cardinality.
  • It can be proved that good interpolation points for degree n on some standard real compact sets are spaced proportionally to 1/n in such a distance, like the Morrow-Patterson and the Padua interpolation points on the square [8], or the Fekete points on the cube or ball (in any dimension), cf. [6, 5] and references therein.
  • Unfortunately, the Dubiner distance is known analytically ([5] and references therein) only on the d-dimensional cube, ball and on the sphere S^{d-1} (where it turns out to be the geodesic distance).
  • More recently it has been computed in the case of univariate trigonometric polynomials (even on subintervals of the period); cf. [6, 34].
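For intuition, on K = [-1, 1] the Dubiner distance reduces to dub(x, y) = |arccos x - arccos y| (the univariate case of the cube formula), and Chebyshev-Lobatto points are exactly equispaced in this metric, i.e. spaced proportionally to 1/n as stated above. The snippet below is our own illustration, not code from the paper:

```python
import numpy as np

def dub_interval(x, y):
    # Dubiner distance on [-1, 1]: |arccos(x) - arccos(y)|
    return abs(np.arccos(x) - np.arccos(y))

n = 8
lobatto = np.cos(np.pi * np.arange(n + 1) / n)   # Chebyshev-Lobatto points
gaps = [dub_interval(lobatto[k], lobatto[k + 1]) for k in range(n)]
# all gaps equal pi/n: the points are equispaced in the Dubiner metric,
# hence their covering radius in that metric is pi/(2n)
```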

5 A numerical example

  • The advantage of using the Dubiner distance is that the mesh constant becomes 1/cos(θ(ε)), which ensures an error ε (relative to the polynomial range) in mesh-based polynomial optimization by O(n^2/ε) samples (notice also that for d = 2, using the general approach of Proposition 3, the authors would use O(n^4/ε^2) samples).
  • The authors see that the error behavior is consistent with Proposition 6 and quite satisfactory.
  • As expected, it scales linearly with ε, and moreover is below the estimate ε by at least two orders of magnitude (the latter phenomenon has already been observed in other numerical examples on polynomial optimization by norming meshes, cf. [28, 33]).
  • On the other hand, it could be useful not only by its direct application, but also to generate starting guesses for more sophisticated optimization procedures.
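The sample-count gap quoted above can be made concrete with a back-of-the-envelope comparison for d = 2; this is our own illustration of the stated asymptotics with all constants omitted, not the paper's actual mesh cardinalities:

```python
def samples_general(n, eps):
    # general convex case, d = 2: O((n^2/eps)^2) = O(n^4/eps^2) samples
    return (n**2 / eps) ** 2

def samples_smooth(n, eps):
    # smooth convex case via the Dubiner distance, d = 2: O(n^2/eps) samples
    return n**2 / eps

# halving the tolerance doubles the smooth count but quadruples the general one;
# the ratio of the two counts is n^2/eps, which blows up as eps -> 0
n = 4
ratios = [samples_general(n, eps) / samples_smooth(n, eps) for eps in (0.2, 0.01)]
```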


Original Citation: Markov inequalities, Dubiner distance, norming meshes and polynomial optimization on convex bodies
Publisher: Springer Verlag
DOI of the published version: 10.1007/s11590-018-1377-0
Availability: this version is available at 11577/3289818 since 2020-02-09T16:16:25Z
Terms of use: Open Access. This article is made available under terms and conditions applicable to Open Access Guidelines, as described at http://www.unipd.it/download/file/fid/55401 (Italian only)
Università degli Studi di Padova, Padua Research Archive (Institutional Repository)
(Article begins on next page)

Markov inequalities, Dubiner distance, norming meshes and polynomial optimization on convex bodies

Federico Piazzon and Marco Vianello (1)
Department of Mathematics, University of Padova, Italy
November 22, 2018

Abstract

We construct norming meshes for polynomial optimization by the classical Markov inequality on general convex bodies in R^d, and by a tangential Markov inequality via an estimate of the Dubiner distance on smooth convex bodies. These allow us to compute a (1-ε)-approximation to the minimum of any polynomial of degree not exceeding n by O((n/√ε)^(αd)) samples, with α = 2 in the general case, and α = 1 in the smooth case. Such constructions are based on three cornerstones of convex geometry: the Bieberbach volume inequality and the Leichtweiss inequality on the affine breadth eccentricity, and the Rolling Ball Theorem, respectively.

2010 AMS subject classification: 41A17, 65K05, 90C26.
Keywords: polynomial optimization, norming mesh, Markov inequality, tangential Markov inequality, Dubiner distance, convex bodies.
1 Introduction

Sampling methods, typically on suitable grids, are one of the possible approaches in the vast literature on polynomial optimization theory, cf., e.g., [10, 11, 36] with the references therein. In this paper we extend to the general framework of convex bodies our previous work on sampling methods for polynomial optimization, based on the multivariate approximation theory notions of norming mesh and Dubiner distance, cf. [28, 27, 32, 33, 34].

Polynomial inequalities based on the notion of norming mesh have recently been playing a relevant role in multivariate approximation theory, as well as in its computational applications. We recall that a polynomial (norming) mesh of a polynomial determining compact set K ⊂ R^d (i.e., a polynomial vanishing on K vanishes everywhere) is a sequence of finite subsets A_n ⊂ K such that

‖p‖_K ≤ C ‖p‖_{A_n} ,  ∀p ∈ P^d_n ,  (1)

for some C > 1 independent of p and n, where card(A_n) = O(n^s), s ≥ d. Here and below we denote by P^d_n the subspace of d-variate real polynomials of total degree not exceeding n, and by ‖f‖_X the sup-norm of a bounded real function on a discrete or continuous compact set X ⊂ R^d.

(Work partially supported by the DOR funds and the biennial project BIRD163015 of the University of Padova, and by the GNCS-INdAM. This research has been accomplished within the RITA "Research ITalian network on Approximation".)
(1) Corresponding author: marcov@math.unipd.it
Observe that A_n is P^d_n-determining, and consequently card(A_n) ≥ dim(P^d_n) = binom(n+d, d) ~ n^d/d! (d fixed, n → ∞). A polynomial mesh is termed optimal when s = d. All these notions can be given more generally for K ⊂ C^d, but we restrict here to real compact sets.
Polynomial meshes were formally introduced in the seminal paper [9] as a tool for studying the uniform convergence of discrete least squares polynomial approximation, and have then been studied from both the theoretical and the computational point of view throughout a series of papers. Among their features, we recall for example that the property of being a polynomial mesh is stable under invertible affine transformations and small perturbations (see [13, 25]). Also, given polynomial meshes A^1_n and A^2_n for the compact sets K_1 and K_2, respectively, the sequences of sets A^1_n ∪ A^2_n and A^1_n × A^2_n are polynomial meshes for K_1 ∪ K_2 and K_1 × K_2, with the constants being the maximum and the product of the constants of K_1 and K_2, respectively. Moreover, if T is a polynomial map of degree not greater than k and A_n is a polynomial mesh for the compact set K ⊂ R^d, then T(A_kn) is an admissible mesh for the compact set T(K).
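The product rule just stated can be checked numerically. As our own illustration (the one-dimensional meshes are the Chebyshev-like grids of [28], with constant C = 1/cos(π/(2m)) by the Ehlich-Zeller inequality), we build a product mesh for the square and verify the norming inequality for a random polynomial of total degree n:

```python
import numpy as np

n, m = 5, 4
C1 = 1.0 / np.cos(np.pi / (2 * m))                        # 1-D mesh constant, cf. [28]
mesh1d = np.cos(np.pi * np.arange(m * n + 1) / (m * n))   # Chebyshev-Lobatto grid

# product mesh for K1 x K2 = [-1,1]^2, with constant C1 * C1
X, Y = np.meshgrid(mesh1d, mesh1d)

rng = np.random.default_rng(0)
terms = [(i, j) for i in range(n + 1) for j in range(n + 1 - i)]  # total degree <= n
coeffs = rng.standard_normal(len(terms))

def p(x, y):
    return sum(c * x**i * y**j for (i, j), c in zip(terms, coeffs))

# dense grid as a lower bound for the sup-norm over the square
t = np.linspace(-1.0, 1.0, 301)
FX, FY = np.meshgrid(t, t)
sup_K = np.max(np.abs(p(FX, FY)))        # <= ||p||_K
sup_mesh = np.max(np.abs(p(X, Y)))
# norming inequality for the product mesh: ||p||_K <= C1^2 * ||p||_mesh
```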
Polynomial meshes have been constructed by different analytical and geometrical techniques on various classes of compact sets, such as Markov and subanalytic sets, polytopes, convex and starlike bodies; we refer the reader, e.g., to [3, 9, 16, 23, 25, 29] and the references therein for a comprehensive view of construction methods.

Since polynomial meshes were first introduced in the framework of discrete least squares, their most direct application is in the approximation of functions and data. As a consequence, polynomial meshes can be used as a tool for spectral methods for the solution of PDEs, see [37, 38]. Perhaps more surprisingly, near optimal interpolation arrays can be extracted from an admissible mesh by standard numerical linear algebra tools [3]. Note that the problem of finding unisolvent interpolation arrays with slowly increasing (e.g., polynomial in the degree) Lebesgue constant on a given compact set K ⊂ R^d is very hard to attack numerically, even for small values of d > 1. Lastly, we mention that polynomial meshes are the key ingredient for the approximation algorithms proposed in [24], where the numerical approximation of the main quantities of pluripotential theory (a nonlinear potential theory in C^d, d > 1) is studied.

In many instances, by suitably increasing the mesh cardinality it is possible to let C → 1, where C is the "constant" of the polynomial mesh in (1). This opens the way to a computational use of polynomial meshes in the framework of polynomial optimization, in view of the general elementary estimate given below. It is however worth mentioning that, in view of the exponential dependence on d of the cardinality of the meshes, this approach is attractive only for low dimensional problems, e.g. d = 2, 3.
Proposition 1 (cf. [32]). Let {A_n} be a polynomial mesh of a compact set K ⊂ R^d. Then the following polynomial minimization error estimate holds:

min_{x∈A_n} p(x) − min_{x∈K} p(x) ≤ (C − 1) ( max_{x∈K} p(x) − min_{x∈K} p(x) ) .  (2)

Proof. Consider the polynomial q(x) = p(x) − max_{x∈K} p(x) ∈ P^d_n, which is nonpositive in K. We have that ‖q‖_K = |min_{x∈K} p(x) − max_{x∈K} p(x)| = max_{x∈K} p(x) − min_{x∈K} p(x), and ‖q‖_{A_n} = |min_{x∈A_n} p(x) − max_{x∈K} p(x)| = max_{x∈K} p(x) − min_{x∈A_n} p(x). Then by (1)

min_{x∈A_n} p(x) − min_{x∈K} p(x) = ‖q‖_K − ‖q‖_{A_n} ≤ (C − 1) ‖q‖_{A_n} ≤ (C − 1) ‖q‖_K = (C − 1) ( max_{x∈K} p(x) − min_{x∈K} p(x) ) .
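Estimate (2) is easy to check numerically. Below is our own sanity check on K = [-1, 1], using a Chebyshev-like norming grid with constant C = 1/cos(π/(2m)) as in [28] (via the Ehlich-Zeller inequality); the random test polynomial is our own choice:

```python
import numpy as np

# K = [-1, 1], norming mesh with m*n + 1 Chebyshev-Lobatto points, C = 1/cos(pi/(2m))
n, m = 6, 3
C = 1.0 / np.cos(np.pi / (2 * m))
mesh = np.cos(np.pi * np.arange(m * n + 1) / (m * n))

rng = np.random.default_rng(1)
p = np.polynomial.Polynomial(rng.standard_normal(n + 1))  # random degree-n polynomial

fine = np.linspace(-1.0, 1.0, 100001)   # dense reference grid, stand-in for K
p_fine = p(fine)
min_K, max_K = p_fine.min(), p_fine.max()
min_mesh = p(mesh).min()

# Proposition 1:  min_mesh - min_K <= (C - 1) * (max_K - min_K)
gap = min_mesh - min_K
bound = (C - 1.0) * (max_K - min_K)
```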
Notice that the error estimate in (2) is relative to the range of p, a usual requirement in polynomial optimization; cf., e.g., [10]. Clearly, by the arbitrariness of the polynomial, taking −p instead of p we can obtain the same estimate for the discrete approximation to the maximum of p.

The discrete optimization suggested by Proposition 1 has already been used in special instances, for example on Chebyshev-like grids with (mn + 1)^d points in d-dimensional boxes. Such grids (which are nonuniform) turn out to be polynomial meshes for total degree polynomials, with C = 1/cos(π/(2m)), as has been shown in [28] by resorting to the notion of Dubiner distance [6], so that C − 1 = O(1/m^2). A similar approach, though essentially in a tensor-product framework, was adopted also in [36]. In [33, 34], the method is applied to polynomial optimization on the 2-dimensional sphere and torus.

On the other hand, polynomial optimization on uniform rational grids is a well-known procedure on standard compact sets (hypercube, simplex), cf. e.g. [10, 11] with the references therein.
In Sections 2 and 3 we present a general approach to polynomial optimization on norming meshes of Markov compact sets and then of general convex bodies, constructed starting from sufficiently dense uniform grids. To this purpose, we adapt and refine an approximation theoretic construction of Calvi and Levenberg [9], based on the fulfillment of a classical Markov polynomial inequality, and we resort to some deep results of convex geometry, the Bieberbach volume inequality and the Leichtweiss inequality on affine breadth eccentricity. We get a (1 − ε)-approximation to the minimum of a polynomial of degree not exceeding n by O((n^2/ε)^d) samples.

In Section 4, we modify and improve the construction on convex bodies with C^2-boundary via the approximation theoretic notion of Dubiner distance, providing an original estimate for such a distance by another cornerstone of convex geometry, the Rolling Ball Theorem, together with a recent deep result by Totik on the Szegő version of Bernstein-like inequalities. In such a way we obtain a (1 − ε)-approximation by O((n/√ε)^d) nonuniform samples.
2 Markov compact sets

Following [9], we now show a general discretization procedure that allows one to construct a polynomial mesh on any compact set admitting a Markov polynomial inequality (such sets are often called Markov compact sets). Given positive scalars r, M > 0, a compact set K is said to admit a Markov inequality of exponent r and constant M if, for every n ∈ N, we have

‖∇p‖_K ≤ M n^r ‖p‖_K ,  ∀p ∈ P^d_n ,  (3)

where ‖∇p‖_K = max_{x∈K} ‖∇p(x)‖_2, with ‖·‖_2 denoting the Euclidean norm of d-dimensional vectors. For example, with d = 1 and K = [−1, 1] we have r = 2 and M = 1. The Markov exponent can be r = 1 only on real algebraic manifolds without boundary [7], for example on the sphere S^{d−1}. The exponent is r = 2 on compact domains with Lipschitz boundary, or more generally on domains satisfying a uniform interior cone condition; cf. [12, §6.4]. In the special case of a convex body, we have

r = 2 ,  M = 4/w(K) ,  (4)

where w(K) is the width of the convex body (the minimal distance between parallel supporting hyperplanes); on centrally symmetric bodies the numerator 4 can be replaced by 2, cf. [35]. We refer the reader, e.g., to [2, 9] with the references therein for a general view on Markov polynomial inequalities.
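The classical case d = 1, K = [−1, 1], r = 2, M = 1 can be checked directly: Markov's inequality is sharp for the Chebyshev polynomial T_n, whose derivative attains n^2 at the endpoints. The following self-contained check is our own illustration:

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

n = 7
Tn = Chebyshev.basis(n)            # Chebyshev polynomial T_n on [-1, 1]
dTn = Tn.deriv()

xs = np.linspace(-1.0, 1.0, 20001)
sup_p = np.max(np.abs(Tn(xs)))     # ||T_n||_K = 1
sup_dp = np.max(np.abs(dTn(xs)))   # ||T_n'||_K = n^2, attained at x = +-1
# Markov inequality (3) with M = 1, r = 2: sup_dp <= n^2 * sup_p, equality for T_n
```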
For the reader’s convenience, we state and prove the following result, which is, in the real case, essentially Theorem 5 of [9].

Proposition 2. Let K ⊂ R^d be a compact set satisfying (3), and let L be the maximal length of the convex hulls of its projections on the Cartesian axes. Then, for any fixed ε ∈ (0, 1), K possesses a polynomial mesh {A_n(ε)}_{n∈N} such that, for any n ∈ N,

‖p‖_K ≤ (1 + ε) ‖p‖_{A_n(ε)} ,  ∀p ∈ P^d_n ,  (5)

with

card(A_n(ε)) ≤ ( ⌈ √d L M n^r / g(ε) ⌉ )^d ,  (6)

where g(ε) = σ(ε) = ε/(1 + ε) for K convex, and g(ε) = σ(ε) exp(−√d σ(ε)) for K nonconvex.
Before proving Proposition 2, we observe that by Proposition 1 we immediately get

min_{x∈A_n(ε)} p(x) − min_{x∈K} p(x) ≤ ε ( max_{x∈K} p(x) − min_{x∈K} p(x) ) .  (7)

The usual way to express an inequality like (7) is to say that min_{x∈A_n(ε)} p(x) is a (1 − ε)-approximation to min_{x∈K} p(x); see, e.g., [10].
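To get a feel for the cardinality bound (6), the following arithmetic sketch (our own; for a convex body we use r = 2 and M = 4/w(K) from (4), here on the unit disk) evaluates the right-hand side of (6) for a few tolerances:

```python
import math

def mesh_cardinality_bound(d, L, M, n, r, eps):
    # Proposition 2: card(A_n(eps)) <= ceil(sqrt(d) * L * M * n^r / g(eps))^d,
    # with g(eps) = eps / (1 + eps) for a convex body
    g = eps / (1.0 + eps)
    return math.ceil(math.sqrt(d) * L * M * n**r / g) ** d

# convex body in R^2: r = 2 and M = 4 / w(K); take the unit disk, w(K) = 2, L = 2
d, L, M, r = 2, 2.0, 4.0 / 2.0, 2
for eps in (0.2, 0.1, 0.05):
    card = mesh_cardinality_bound(d, L, M, 5, r, eps)
    # cardinality grows like (n^2/eps)^d, matching the O((n^2/eps)^d) sample count
```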
Proof of Proposition 2. We first assume K to be convex. Let us pick, for any n ∈ N and ε ∈ (0, 1), a uniform coordinate grid on R^d of step σ(ε)/(√d M n^r). Let us denote by B_i, i ∈ I := {1, 2, . . . , S(n, ε)}, the (clearly finite) collection of the boxes of the grid intersecting K, and let us pick y_i ∈ K ∩ B_i, i ∈ I. We set A_n(ε) = {y_i}_{i∈I}. The estimate (6) immediately follows from K ⊆ v + [0, L]^d for a suitable vector v ∈ R^d.
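The construction in the proof is straightforward to implement. Below is a minimal sketch of ours on the unit disk (d = 2, w(K) = 2, so M = 2 by (4)), with one simplification flagged in the comments: we keep the grid nodes lying in K, rather than picking a point of K in every intersecting box. The test polynomial and all names are our own:

```python
import numpy as np

d, n, eps = 2, 4, 0.2
M, r = 2.0, 2                        # convex body: r = 2, M = 4/w(K); unit disk, w = 2
sigma = eps / (1.0 + eps)
h = sigma / (np.sqrt(d) * M * n**r)  # grid step from the proof of Proposition 2

# simplification: use the grid nodes inside K as the representatives y_i
# (the proof picks y_i in K for every box intersecting K, boundary boxes included)
t = np.arange(-1.0, 1.0 + h, h)
X, Y = np.meshgrid(t, t)
keep = X**2 + Y**2 <= 1.0
mx, my = X[keep], Y[keep]

def p(x, y):                         # a test polynomial of total degree 4 = n
    return (x**2 + y**2 - 0.5)**2 - 0.3 * x

# dense polar sampling of the disk as a stand-in for the true range of p on K
rr, th = np.meshgrid(np.linspace(0.0, 1.0, 400), np.linspace(0.0, 2*np.pi, 1200))
ref = p(rr*np.cos(th), rr*np.sin(th))
min_K, max_K = ref.min(), ref.max()
min_mesh = p(mx, my).min()
# estimate (7): min_mesh - min_K <= eps * (max_K - min_K)
```

In practice the observed error is far below the bound ε (relative to the range of p), consistent with the remark in Section 5 that the error is often below the estimate by orders of magnitude.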



Frequently Asked Questions
Q1. What are the contributions in this paper?

In this paper, the authors extend to the general framework of convex bodies their previous work on sampling methods for polynomial optimization, based on the multivariate approximation theory notions of norming mesh and Dubiner distance.

In the numerical example, the grid-based mesh of Proposition 3 (n = 4, ε = 0.2) has about 19000 points, whereas the Dubiner-like mesh A_n(ε) (i.e., the mesh constructed by Proposition 6) consists of about 1100 points. Moving to ε = 0.01 with n fixed, the grid-based mesh of Proposition 3 has more than 5 million points, whereas the Dubiner-like one has about 23000. The latter has been obtained by a Matlab code for polynomial mesh generation on smooth 2-dimensional convex bodies, which computes numerically the boundary curve length and curvature (the rolling ball radius ρ is the reciprocal of the maximal curvature), and then uses an approximate arclength parametrization to compute a geodesic grid with the required density; the code is available at [13].

The authors can then search, in the equivalence class of convex bodies generated from K by invertible affine transformations, for a representative K′ with bounded aspect ratio diam(K′)/w(K′).

Note that this algorithm can be generalized to higher dimension d > 2; however, this requires solving O((n^2/ε)^(d−1)) nonlinear equations as n^2/ε → ∞.

Note that, using the Minkowski functional, one can define the radial projection onto ∂K by setting x′ := x/φ_K(x) ∈ ∂K, ∀x ∈ R^d. (32)