Proceedings ArticleDOI

Near optimal polynomial regression on norming meshes

TL;DR: This work connects the approximation theoretic notions of polynomial norming mesh and Tchakaloff-like quadrature to the statistical theory of optimal designs, obtaining near optimal polynomial regression at a near optimal number of sampling locations on domains with different shapes.
Abstract: We connect the approximation theoretic notions of polynomial norming mesh and Tchakaloff-like quadrature to the statistical theory of optimal designs, obtaining near optimal polynomial regression at a near optimal number of sampling locations on domains with different shapes.
Citations
Journal ArticleDOI
01 Dec 1994-Metrika

117 citations

Journal ArticleDOI
TL;DR: Findings show the usefulness of using WAFP based PC expansion to quantify uncertainty and analyze sensitivity of an oxyhemoglobin dissociation response model; applying these techniques could help analyze the fidelity of other relevant models in preparation for clinical application.
Abstract: Performing uncertainty quantification (UQ) and sensitivity analysis (SA) is vital when developing a patient-specific physiological model because it can quantify model output uncertainty and estimate the effect of each of the model's input parameters on the mathematical model. By providing this information, UQ and SA act as diagnostic tools to evaluate model fidelity and compare model characteristics with expert knowledge and real world observation. Computational efficiency is an important part of UQ and SA methods and thus optimization is an active area of research. In this work, we investigate a new efficient sampling method for least-squares polynomial approximation, weighted approximate Fekete points (WAFP). We analyze the performance of this method by demonstrating its utility in stochastic analysis of a cardiovascular model that estimates changes in oxyhemoglobin saturation response. Polynomial chaos (PC) expansion using WAFP produced results similar to the more standard Monte Carlo approach in quantifying uncertainty and identifying the most influential model inputs (including input interactions) when modeling oxyhemoglobin saturation, while being far more efficient. These findings show the usefulness of using WAFP based PC expansion to quantify uncertainty and analyze sensitivity of an oxyhemoglobin dissociation response model. Applying these techniques could help analyze the fidelity of other relevant models in preparation for clinical application.

12 citations
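The study above builds a least-squares polynomial surrogate from few model evaluations and compares it with Monte Carlo. As an illustration only (not the WAFP code from the paper; the toy model `f`, the sampling, and the degree are assumptions), a minimal sketch of why such a surrogate is efficient:

```python
import numpy as np

# Toy response standing in for the physiological model (an assumption,
# not the oxyhemoglobin model from the paper).
def f(x):
    return np.tanh(2.0 * x) + 0.1 * x**2

rng = np.random.default_rng(0)

# Plain Monte Carlo estimate of the output mean, uniform input on [-1, 1].
x_mc = rng.uniform(-1.0, 1.0, 100_000)
mc_mean = f(x_mc).mean()

# Least-squares polynomial (PC-style) surrogate built from far fewer samples.
x_fit = rng.uniform(-1.0, 1.0, 50)
V = np.polynomial.legendre.legvander(x_fit, 8)        # Legendre design matrix
coef, *_ = np.linalg.lstsq(V, f(x_fit), rcond=None)

# Legendre polynomials are orthogonal on [-1, 1] under the uniform weight,
# so the surrogate's mean is simply its degree-0 coefficient.
pc_mean = coef[0]
```

With 50 model evaluations the surrogate's mean estimate agrees closely with the 100 000-sample Monte Carlo estimate, which is the kind of efficiency gain the abstract reports (here on a toy problem, not their model).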

Journal ArticleDOI
09 Jul 2020
TL;DR: In this paper, a numerical package is provided for the computation of a near G-optimal polynomial regression design of degree m on a finite design space X ⊂ R^d, by a few iterations of a basic multiplicative algorithm followed by Tchakaloff-like compression of the discrete measure keeping the reached G-efficiency, via an accelerated version of the Lawson-Hanson algorithm for Non-Negative Least Squares (NNLS) problems.
Abstract: We provide a numerical package for the computation of a d-variate near G-optimal polynomial regression design of degree m on a finite design space X ⊂ R^d, by a few iterations of a basic multiplicative algorithm followed by Tchakaloff-like compression of the discrete measure keeping the reached G-efficiency, via an accelerated version of the Lawson-Hanson algorithm for Non-Negative Least Squares (NNLS) problems. This package can solve on a personal computer large-scale problems where card(X) × dim(P^d_{2m}) is up to 10^8–10^9, with dim(P^d_{2m}) = (2m+d choose d) = (2m+d choose 2m). Several numerical tests are presented on complex shapes in d = 3 and on hypercubes in d > 3.

7 citations
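The basic multiplicative algorithm mentioned in the abstract can be sketched in a few lines. This is an illustrative reimplementation under assumptions (a 1D Chebyshev basis on random points, a fixed iteration count, a Titterington-type update), not the package's code:

```python
import numpy as np

# Sketch of a basic multiplicative algorithm for near D-/G-optimal design
# weights on a finite design space.
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, 400)                    # finite design space
deg = 5
Phi = np.polynomial.chebyshev.chebvander(X, deg)   # polynomial basis
N = Phi.shape[1]                                   # dimension of the space

w = np.full(len(X), 1.0 / len(X))                  # start from uniform design
for _ in range(2000):
    M = Phi.T @ (w[:, None] * Phi)                 # information (Gram) matrix
    d = np.einsum('ij,jk,ik->i', Phi, np.linalg.inv(M), Phi)
    w *= d / N                                     # multiplicative update

# Recompute the variance function at the final weights and report G-efficiency.
M = Phi.T @ (w[:, None] * Phi)
d = np.einsum('ij,jk,ik->i', Phi, np.linalg.inv(M), Phi)
g_eff = N / d.max()                                # approaches 1 at optimality
```

Since sum_i w_i d_i = trace(M^{-1} M) = N, the update keeps the weights summing to one, and by the equivalence theorem the G-efficiency N / max_i d_i increases toward 1.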


Cites methods from "Near optimal polynomial regression ..."

  • ...When a G-efficiency very close to 1 is needed, one could resort to more sophisticated multiplicative algorithms, see for example, References [9,10]....


  • ...Hypercubes: Chebyshev Grids In a recent paper [19], a connection has been studied between the statistical notion of G-optimal design and the approximation theoretic notion of admissible mesh for multivariate polynomial approximation, deeply studied in the last decade after Reference [13] (see, e.g., References [27,28] with the references therein)....


  • ...Following References [18,19], we can however effectively compute a design which has the same G-efficiency of u(k) but a support with a cardinality not exceeding N_{2m} = dim(P^d_{2m}(X)), where in many applications N_{2m} ≪ card(X), obtaining a remarkable compression of the near optimal design....


  • ...On the approximation theory side we may quote, for example, References [11,12]....


  • ...Minimization of the polynomial model could then be accomplished by popular methods developed in the growing research field of Polynomial Optimization, such as Lasserre’s SOS (Sum of Squares) and measure-based hierarchies, and other recent methods; cf., for example, References [34–36] with the references therein....

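The Tchakaloff-like compression quoted in the snippets above (finding a smaller-support design with the same degree-2m moments via NNLS) can be sketched as follows. The setup here (1D Chebyshev basis, uniform starting design, SciPy's Lawson-Hanson solver rather than the accelerated version in the paper) is an assumption for illustration:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
X = rng.uniform(-1.0, 1.0, 500)                    # finite design space
m = 3
V = np.polynomial.chebyshev.chebvander(X, 2 * m)   # degree-2m basis, N_2m cols

w = np.full(len(X), 1.0 / len(X))                  # original design weights
b = V.T @ w                                        # its degree-2m moments

# Lawson-Hanson NNLS finds a nonnegative u matching the moments; its
# active-set solution has at most N_2m nonzero weights, compressing the
# design support while preserving every moment up to degree 2m.
u, residual = nnls(V.T, b)
support = np.count_nonzero(u > 1e-12)
```

Because the compressed design matches all moments up to degree 2m, it induces the same least-squares projection onto polynomials of degree m (hence the same G-efficiency), while the support drops from card(X) = 500 to at most N_2m = 7 points.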

References
Book
01 Jan 1992
TL;DR: In this book, the authors present the theory and applications of optimum experimental design, including experiments with both qualitative and quantitative factors, blocking of response surface designs, restricted region designs, failure of the experiment and design augmentation, and discrimination between models.
Abstract: Part I. Fundamentals: Introduction; Some key ideas; Experimental strategies; The choice of a model; Models and least squares; Criteria for a good experiment; Standard designs; The analysis of experiments. Part II. Theory and applications: Optimum design theory; Criteria of optimality; Experiments with both qualitative and quantitative factors; Blocking response surface designs; Restricted region designs; Failure of the experiment and design augmentation; Non-linear models; Optimum Bayesian design; Discrimination between models; Composite design criteria; Further topics.

1,437 citations

Journal ArticleDOI
TL;DR: In this article, the authors consider the problem of defining probability measures with finite support, i.e., measures that assign probability one to a set consisting of a finite number of points.
Abstract: Let f1, …, fk be linearly independent real functions on a space X, such that the range R of (f1, …, fk) is a compact set in k-dimensional Euclidean space. (This will happen, for example, if the fi are continuous and X is a compact topological space.) Let S be any Borel field of subsets of X which includes X and all sets which consist of a finite number of points, and let C = {e} be any class of probability measures on S which includes all probability measures with finite support (that is, which assign probability one to a set consisting of a finite number of points), and which are such that is defined. In all that follows we consider only probability measures e which are in C.

872 citations


"Near optimal polynomial regression ..." refers methods in this paper

  • ...A cornerstone of optimal design theory, the well-known Kiefer-Wolfowitz General Equivalence Theorem [12], says that the difficult min-max problem (5) is equivalent to the much simpler maximization max_µ det(G^µ_n), where G^µ_n = ( ∫_K q_i(x) q_j(x) dµ )_{1≤i,j≤N} (6) is the Gram matrix of µ in a fixed polynomial basis {q_i} (also called the information matrix in statistics)....


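The quoted equivalence can be checked numerically on a classical textbook example (my choice, not taken from the paper): quadratic regression on [-1, 1], whose D-optimal design puts weight 1/3 on each of -1, 0, 1. At that design the variance function d(x, µ) attains its maximum value N, as the theorem predicts:

```python
import numpy as np

# Classical example: basis {1, x, x^2}; D-optimal design = 1/3 on each of {-1, 0, 1}.
supp = np.array([-1.0, 0.0, 1.0])
w = np.full(3, 1.0 / 3.0)

def phi(x):
    return np.vander(np.atleast_1d(x), 3, increasing=True)  # rows (1, x, x^2)

P = phi(supp)
M = P.T @ (w[:, None] * P)                 # information (Gram) matrix of the design
Minv = np.linalg.inv(M)

x = np.linspace(-1.0, 1.0, 10_001)
Q = phi(x)
d = np.einsum('ij,jk,ik->i', Q, Minv, Q)   # variance function d(x, mu)
# Kiefer-Wolfowitz: at the D-optimal design, max_x d(x, mu) = N = 3,
# attained exactly at the support points -1, 0, 1.
```

For any non-optimal design the maximum of d over the interval strictly exceeds 3, which is what makes the equivalence a practical optimality certificate.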

Journal ArticleDOI

541 citations


"Near optimal polynomial regression ..." refers background in this paper

  • ...The celebrated Carathéodory Theorem on conical finite-dimensional linear combinations [9] ensures that such a solution exists and has no more than N_{2n} nonzero components....


Journal ArticleDOI
TL;DR: Uniform approximation of differentiable or analytic functions of one or several variables on a compact set K by a sequence of discrete least squares polynomials is studied; if K satisfies a Markov inequality, point evaluations on standard discretization grids provide nearly optimal approximants.

120 citations

Journal ArticleDOI
TL;DR: Using the concept of Geometric Weakly Admissible Meshes (see §2 below) together with an algorithm based on the classical QR factorization of matrices, the authors compute efficient points for discrete multivariate least squares approximation and Lagrange interpolation.
Abstract: Using the concept of Geometric Weakly Admissible Meshes (see §2 below) together with an algorithm based on the classical QR factorization of matrices, we compute efficient points for discrete multivariate least squares approximation and Lagrange interpolation.

82 citations