Author

Teresa M. Lebair

Bio: Teresa M. Lebair is an academic researcher from the University of Maryland, College Park. The author has contributed to research on the topics of convex optimization and convex combination, has an h-index of 2, and has co-authored 5 publications receiving 21 citations. Previous affiliations of Teresa M. Lebair include the University of Maryland, Baltimore County and Johns Hopkins University.

Papers
Journal ArticleDOI
TL;DR: Global convergence of an algorithm for shape-restricted smoothing splines subject to general polyhedral control constraints is shown, using techniques from nonsmooth analysis and polyhedral theory.

18 citations

Journal ArticleDOI
TL;DR: In this article, a two-stage estimator with optimal asymptotic performance was proposed, in which the first stage is an unconstrained estimator and the second stage is constructed by projecting it onto a cone of monotone splines.
Abstract: This paper considers $k$-monotone estimation and the related asymptotic performance analysis over a suitable Hölder class for general $k$. A novel two-stage $k$-monotone B-spline estimator is proposed: in the first stage, an unconstrained estimator with optimal asymptotic performance is considered; in the second stage, a $k$-monotone B-spline estimator is constructed (roughly) by projecting the unconstrained estimator onto a cone of $k$-monotone splines. To study the asymptotic performance of the second-stage estimator under the sup-norm and other risks, a critical uniform Lipschitz property for the $k$-monotone B-spline estimator is established under the $\ell_{\infty}$-norm. This property uniformly bounds the Lipschitz constants associated with the mapping from a (weighted) first-stage input vector to the B-spline coefficients of the second-stage $k$-monotone estimator, independent of the sample size and the number of knots. This result is then exploited to analyze the second-stage estimator performance and develop convergence rates under the sup-norm, pointwise, and $L_{p}$-norm (with $p\in [1,\infty)$) risks. By employing recent results in $k$-monotone estimation minimax lower bound theory, we show that these convergence rates are optimal.

5 citations
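For intuition on the second-stage projection: in the simplest monotone case (k = 1), a B-spline is nondecreasing whenever its coefficient vector is nondecreasing, so projecting onto the cone roughly amounts to an isotonic regression of the first-stage coefficients. A minimal sketch via the pool-adjacent-violators algorithm (illustrative only; the paper's estimator uses a weighted projection onto the full cone of k-monotone splines):

```python
def pava(y):
    """Pool-adjacent-violators: least-squares projection of y onto the
    cone of nondecreasing vectors {x : x[0] <= x[1] <= ... <= x[n-1]}."""
    blocks = []  # each block holds [sum of pooled values, count]
    for v in y:
        blocks.append([float(v), 1])
        # merge backwards while adjacent block means violate monotonicity
        while len(blocks) > 1 and blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    fit = []
    for s, c in blocks:
        fit.extend([s / c] * c)  # each pooled block gets its common mean
    return fit
```

For instance, `pava([3, 1, 2])` pools all three values into `[2.0, 2.0, 2.0]`. In the paper's setting the projection acts on (weighted) B-spline coefficients rather than raw data, and for general k the constraint involves higher-order coefficient differences.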

Journal ArticleDOI
TL;DR: The present technical note provides the first rigorous justification of the optimal minimax risk for convex estimation on the entire interval of interest in the sup-norm.
Abstract: Estimation of convex functions finds broad applications in science and engineering; however, the convex shape constraint complicates the asymptotic performance analysis of such estimators. This technical note is devoted to the minimax optimal estimation of univariate convex functions in a given Hölder class. Particularly, a minimax lower bound in the supremum norm (or simply sup-norm) is established by constructing a novel family of piecewise quadratic convex functions in the Hölder class. This result, along with a recent result on the minimax upper bound, gives rise to the optimal rate of convergence for the minimax sup-norm risk of convex functions with the Hölder order between one and two. The present technical note provides the first rigorous justification of the optimal minimax risk for convex estimation on the entire interval of interest in the sup-norm.

2 citations

Proceedings ArticleDOI
20 Mar 2019
TL;DR: This paper develops minimax lower bounds for k-monotone regression problems under the sup-norm for general k by constructing a family of k-monotone piecewise polynomial functions (or hypotheses) belonging to suitable Hölder and Sobolev classes.
Abstract: Belonging to the framework of shape-constrained estimation, k-monotone estimation refers to the nonparametric estimation of univariate k-monotone functions, e.g., monotone and convex functions. This paper develops minimax lower bounds for k-monotone regression problems under the sup-norm for general k by constructing a family of k-monotone piecewise polynomial functions (or hypotheses) belonging to suitable Hölder and Sobolev classes. After establishing that these hypotheses satisfy several properties, we employ results from general minimax lower bound theory to obtain the desired k-monotone regression minimax lower bound. Implications and extensions are also discussed.

Posted Content
TL;DR: In this article, the minimax optimal estimation of univariate convex functions from the Hölder class is studied in the framework of shape-constrained nonparametric estimation, and the optimal rate of convergence for the minimax sup-norm risk of convex functions with the Hölder order between one and two is established in two steps.
Abstract: Estimation of convex functions finds broad applications in engineering and science, while the convex shape constraint gives rise to numerous challenges in asymptotic performance analysis. This paper is devoted to minimax optimal estimation of univariate convex functions from the Hölder class in the framework of shape-constrained nonparametric estimation. Particularly, the paper establishes the optimal rate of convergence in two steps for the minimax sup-norm risk of convex functions with the Hölder order between one and two. In the first step, by applying information-theoretic results on probability measure distances, we establish the minimax lower bound under the supremum norm by constructing a novel family of piecewise quadratic convex functions in the Hölder class. In the second step, we develop a penalized convex spline estimator and establish the minimax upper bound under the supremum norm. Due to the convex shape constraint, the optimality conditions of penalized convex splines are characterized by nonsmooth complementarity conditions. By exploiting complementarity methods, a critical uniform Lipschitz property of optimal spline coefficients in the infinity norm is established. This property, along with asymptotic estimation techniques, leads to uniform bounds for the bias and stochastic errors on the entire interval of interest. This further yields the optimal rate of convergence by choosing a suitable number of knots and penalty value. The present paper provides the first rigorous justification of the optimal minimax risk for convex estimation under the supremum norm.

Cited by
Posted Content
13 May 2008
TL;DR: In this paper, the authors consider the estimation of a monotone regression function at a fixed point by the least-squares (Grenander) estimator and show that this estimator is locally asymptotically minimax, in the sense that, for each $f_0$, the attained rate of the probabilistic error is uniform over a shrinking $L^2$-neighborhood of $f_0$.
Abstract: In this paper, we will consider the estimation of a monotone regression (or density) function at a fixed point by the least-squares (Grenander) estimator. We will show that this estimator is locally asymptotically minimax, in the sense that, for each $f_0$, the attained rate of the probabilistic error is uniform over a shrinking $L^2$-neighborhood of $f_0$ and there is no estimator that attains a significantly better uniform rate over these shrinking neighborhoods. Therefore, it adapts to the individual underlying function, not to a smoothness class of functions. We also give general conditions under which we can calculate a (non-standard) limiting distribution for the estimator.

20 citations
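The least-squares monotone fit underlying the Grenander-type estimator can be computed as the left-hand slopes of the greatest convex minorant of the cumulative-sum diagram. A minimal numpy sketch for the regression case with equal weights (variable names are my own):

```python
import numpy as np

def monotone_lsq(y):
    """Least-squares nondecreasing fit to y via the greatest convex
    minorant (GCM) of the cumulative-sum diagram; the fitted values
    are the GCM's left-hand slopes."""
    n = len(y)
    # cusum diagram points P_i = (i, y_1 + ... + y_i), with P_0 = (0, 0)
    P = np.column_stack([np.arange(n + 1), np.concatenate([[0.0], np.cumsum(y)])])
    hull = [0]  # vertex indices of the lower convex hull (the GCM)
    for i in range(1, n + 1):
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = P[hull[-2]], P[hull[-1]]
            x3, y3 = P[i]
            # drop the last vertex if it lies on or above the chord hull[-2] -> i
            if (y2 - y1) * (x3 - x1) >= (y3 - y1) * (x2 - x1):
                hull.pop()
            else:
                break
        hull.append(i)
    fit = np.empty(n)
    for a, b in zip(hull[:-1], hull[1:]):
        fit[a:b] = (P[b, 1] - P[a, 1]) / (b - a)  # constant slope on each segment
    return fit
```

This returns the same values as pool-adjacent-violators isotonic regression, e.g. `monotone_lsq([3, 1, 2])` gives `[2, 2, 2]`.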

Posted Content
TL;DR: In this paper, the authors developed a method to compute the penalized least squares estimators (PLSEs) of the parametric and nonparametric components given independent and identically distributed (i.i.d.) data.
Abstract: We consider estimation and inference in a single index regression model with an unknown but smooth link function. In contrast to the standard approach of using kernels or regression splines, we use smoothing splines to estimate the smooth link function. We develop a method to compute the penalized least squares estimators (PLSEs) of the parametric and the nonparametric components given independent and identically distributed (i.i.d.) data. We prove the consistency and find the rates of convergence of the estimators. We establish asymptotic normality under mild assumptions and prove asymptotic efficiency of the parametric component under homoscedastic errors. A finite sample simulation corroborates our asymptotic theory. We also analyze a car mileage data set and an ozone concentration data set. The identifiability and existence of the PLSEs are also investigated.

18 citations

Journal ArticleDOI
TL;DR: In this article, the authors study the solution uniqueness of an individual feasible vector of a class of convex optimization problems involving convex piecewise affine functions and subject to general polyhedral constraints.
Abstract: In this paper, we study the solution uniqueness of an individual feasible vector of a class of convex optimization problems involving convex piecewise affine functions and subject to general polyhedral constraints. This class of problems incorporates many important polyhedral constrained ℓ1 recovery problems arising from sparse optimization, such as basis pursuit, LASSO, and basis pursuit denoising, as well as polyhedral gauge recovery. By leveraging the max-formulation of convex piecewise affine functions and convex analysis tools, we develop dual-variable-based necessary and sufficient uniqueness conditions via simple and yet unifying approaches; these conditions are applied to a wide range of ℓ1 minimization problems under possible polyhedral constraints. An effective linear-program-based scheme is proposed to verify solution uniqueness conditions. The results obtained in this paper not only recover the known solution uniqueness conditions in the literature by removing restrictive assumptions but also yield new uniqueness conditions for much broader constrained ℓ1-minimization problems.

12 citations
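The paper's own LP-based verification scheme is not reproduced here, but the idea that uniqueness of an ℓ1 solution can be probed with linear programs can be illustrated with a brute-force check for basis pursuit (min ||x||_1 s.t. Ax = b): compute the optimal value, then maximize and minimize each coordinate over the optimal face; the solution is unique iff no coordinate can move. A hedged scipy sketch (2n + 1 LPs, far less efficient than a single dedicated LP):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit_unique(A, b, tol=1e-7):
    """Solve basis pursuit min ||x||_1 s.t. Ax = b as an LP, then decide
    uniqueness by min/maximizing each coordinate over the optimal face."""
    m, n = A.shape
    Aeq = np.hstack([A, -A])              # x = u - v with z = [u; v] >= 0
    bounds = [(0, None)] * (2 * n)
    sol = linprog(np.ones(2 * n), A_eq=Aeq, b_eq=b, bounds=bounds)
    opt = sol.fun                         # optimal l1 value
    xhat = sol.x[:n] - sol.x[n:]
    # optimal face: {x : Ax = b, ||x||_1 <= opt}, encoded as sum(z) <= opt
    Aub, bub = np.ones((1, 2 * n)), [opt + tol]
    spread = 0.0
    for j in range(n):
        cj = np.zeros(2 * n)
        cj[j], cj[n + j] = 1.0, -1.0      # objective x_j = u_j - v_j
        lo = linprog(cj, A_ub=Aub, b_ub=bub, A_eq=Aeq, b_eq=b, bounds=bounds).fun
        hi = -linprog(-cj, A_ub=Aub, b_ub=bub, A_eq=Aeq, b_eq=b, bounds=bounds).fun
        spread = max(spread, hi - lo)
    return xhat, spread < 1e-5
```

For A = [[2, 1]], b = [2] the minimizer (1, 0) is unique; for A = [[1, 1]], b = [1] the whole segment from (1, 0) to (0, 1) is optimal, so the check reports non-uniqueness.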

Posted ContentDOI
TL;DR: It is shown that exact recovery via constrained matching pursuit not only depends on the measurement matrix but also critically relies on the constraint set, and an important class of constraint sets is identified, called coordinate projection admissible sets, or simply CP admissible sets.
Abstract: Matching pursuit, especially its orthogonal version (OMP) and variations, is a greedy algorithm widely used in signal processing, compressed sensing, and sparse modeling. Inspired by constrained sparse signal recovery, this paper proposes a constrained matching pursuit algorithm and develops conditions for exact support and vector recovery on constraint sets via this algorithm. We show that exact recovery via constrained matching pursuit not only depends on a measurement matrix but also critically relies on a constraint set. We thus identify an important class of constraint sets, called coordinate projection admissible set, or simply CP admissible sets; analytic and geometric properties of these sets are established. We study exact vector recovery on convex, CP admissible cones for a fixed support. We provide sufficient exact recovery conditions for a general support as well as necessary and sufficient recovery conditions when a support has small size. As a byproduct, we construct a nontrivial counterexample to a renowned necessary condition of exact recovery via the OMP for a support of size three. Moreover, using the properties of convex CP admissible sets and convex optimization techniques, we establish sufficient conditions for uniform exact recovery on convex CP admissible sets in terms of the restricted isometry-like constant and the restricted orthogonality-like constant.

11 citations
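For reference, the unconstrained OMP baseline that the constrained algorithm extends can be sketched in a few lines of numpy (the paper's constrained variant additionally accounts for the constraint set at each step, which is not shown here):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select k columns of A and
    refit y on the chosen support by least squares (assumes k >= 1)."""
    n = A.shape[1]
    support, residual = [], y.astype(float)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated column
        if j not in support:
            support.append(j)
        # re-solve the least-squares fit on the current support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(n)
    x[support] = coef
    return x, support
```

With A the 4x4 identity and y = (0, 3, 0, 1), two OMP steps recover support {1, 3} and x = y exactly.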