Posted Content

Almost Sure Uniqueness of a Global Minimum Without Convexity

TL;DR: In this paper, the authors establish that the argmin of a random objective function is unique almost surely, without requiring convexity of the objective, and apply the result to a variety of statistical applications, including uniqueness of M-estimators (both classical and penalized likelihood estimators), threshold regression, and weak identification.
Abstract: This paper establishes the argmin of a random objective function to be unique almost surely. This paper first formulates a general result that proves almost sure uniqueness without convexity of the objective function. The general result is then applied to a variety of applications in statistics. Four applications are discussed, including uniqueness of M-estimators, both classical likelihood and penalized likelihood estimators, and two applications of the argmin theorem, threshold regression and weak identification.
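A toy numerical illustration of the theme (ours, not the paper's theorem): a nonconvex objective with two tied global minima, where a continuously distributed random tilt makes the argmin unique with probability one.

```python
import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(-2.0, 2.0, 4001)

def well_minima(eps):
    """Minimum of f(t) = (t^2 - 1)^2 + eps * t over the left and right well."""
    vals = (grid**2 - 1.0) ** 2 + eps * grid
    return vals[grid < 0].min(), vals[grid > 0].min()

# Without the tilt, the two wells at t = -1 and t = +1 tie exactly,
# so the argmin of the nonconvex objective is not unique.
left0, right0 = well_minima(0.0)

# With a tilt drawn from a continuous distribution, an exact tie is a
# probability-zero event, so one well is strictly lower almost surely.
eps = rng.normal()
left, right = well_minima(eps)
```

The point of the sketch: uniqueness comes from the randomness of the objective, not from any convexity assumption.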
Citations
ReportDOI
TL;DR: This paper develops optimal confidence intervals and median-unbiased estimators that are valid conditional on the target selected and so overcome this winner’s curse.
Abstract: Many empirical questions concern target parameters selected through optimization. For example, researchers may be interested in the effectiveness of the best policy found in a randomized trial, or the best-performing investment strategy based on historical data. Such settings give rise to a winner’s curse, where conventional estimates are biased and conventional confidence intervals are unreliable. This paper develops optimal confidence intervals and median-unbiased estimators that are valid conditional on the target selected and so overcome this winner’s curse. If one requires validity only on average over targets that might have been selected, we develop hybrid procedures that combine conditional and projection confidence intervals to offer further performance gains relative to existing alternatives.

30 citations
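A minimal simulation of the winner's curse the paper addresses (the setup and numbers are illustrative, not the paper's procedure): when the reported target is chosen by maximizing noisy estimates, the conventional estimate is biased upward conditional on selection.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, K, sims = 0.0, 10, 20_000   # K policies, all with the same true effect mu
winners = np.empty(sims)
for s in range(sims):
    estimates = rng.normal(loc=mu, scale=1.0, size=K)  # one noisy estimate per policy
    winners[s] = estimates.max()                       # report the "best" policy's estimate

# Even though every true effect is mu, the reported estimate is biased up:
# its expectation is E[max of K standard normals] > 0.
selection_bias = winners.mean() - mu
```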

Posted Content
TL;DR: A theoretically-motivated class of default priors on a non-parametric nuisance parameter implies computationally tractable Bayes decision rules in the limit problem, while leaving the prior on the structural parameter free to be selected by the researcher.
Abstract: This paper studies optimal decision rules, including estimators and tests, for weakly identified GMM models. We derive the limit experiment for weakly identified GMM, and propose a theoretically-motivated class of priors which give rise to quasi-Bayes decision rules as a limiting case. Together with results in the previous literature, this establishes desirable properties for the quasi-Bayes approach regardless of model identification status, and we recommend quasi-Bayes for settings where identification is a concern. We further propose weighted average power-optimal identification-robust frequentist tests and confidence sets, and prove a Bernstein-von Mises-type result for the quasi-Bayes posterior under weak identification.

7 citations
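A minimal sketch of the quasi-Bayes idea the abstract refers to, in a deliberately simple just-identified moment model (the model, flat prior, and grid are our illustrative choices, not the paper's construction): treat exp(-n Q_n(theta)) times a prior as if it were a posterior, where Q_n is a GMM objective.

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(loc=1.5, scale=1.0, size=500)
n = y.size

theta = np.linspace(0.0, 3.0, 3001)      # parameter grid
gbar = y.mean() - theta                  # sample moment for E[y - theta] = 0
Q = 0.5 * gbar**2 / y.var()              # weighted GMM objective
quasi_post = np.exp(-n * Q)              # quasi-likelihood, flat prior on the grid
quasi_post /= quasi_post.sum()

# The quasi-Bayes point estimate: the quasi-posterior mean, which in this
# well-identified toy model concentrates near the sample mean.
quasi_bayes_mean = (theta * quasi_post).sum()
```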

Journal ArticleDOI
TL;DR: A criterion is given based on the existence level that describes the behavior of the objective function as its argument approaches the extended boundary of the parameter space.
Abstract: In this paper, we give a necessary and sufficient criterion for the existence of the $\ell_p$-norm estimate for the nonlinear $\ell_p$-norm fitting problem. Our criterion is based on the existence level that describes the behavior of the objective function as its argument approaches the extended boundary of the parameter space.

1 citation
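A minimal example (ours, not the article's) of the failure mode the existence level is designed to detect: in a nonlinear least-squares fit of y_i by exp(theta * x_i) with all y_i = 0 and all x_i > 0, the objective decreases monotonically as theta goes to minus infinity, so its infimum is never attained and no estimate exists.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.zeros_like(x)

def loss(theta):
    # sum_i (y_i - exp(theta * x_i))^2 = sum_i exp(2 * theta * x_i)
    return np.sum((y - np.exp(theta * x)) ** 2)

# Strictly decreasing toward 0 as theta -> -inf: the minimizer "escapes"
# to the extended boundary of the parameter space.
values = [loss(t) for t in (0.0, -1.0, -5.0, -25.0)]
```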

References
Journal ArticleDOI
TL;DR: In this article, penalized likelihood approaches are proposed to handle variable selection problems, and it is shown that the newly proposed estimators perform as well as the oracle procedure in variable selection; namely, they work as well as if the correct submodel were known.
Abstract: Variable selection is fundamental to high-dimensional statistical modeling, including nonparametric regression. Many approaches in use are stepwise selection procedures, which can be computationally expensive and ignore stochastic errors in the variable selection process. In this article, penalized likelihood approaches are proposed to handle these kinds of problems. The proposed methods select variables and estimate coefficients simultaneously. Hence they enable us to construct confidence intervals for estimated parameters. The proposed approaches are distinguished from others in that the penalty functions are symmetric, nonconcave on (0, ∞), and have singularities at the origin to produce sparse solutions. Furthermore, the penalty functions should be bounded by a constant to reduce bias and satisfy certain conditions to yield continuous solutions. A new algorithm is proposed for optimizing penalized likelihood functions. The proposed ideas are widely applicable. They are readily applied to a variety of ...

8,314 citations
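One concrete member of the penalty class described above is the SCAD penalty of Fan and Li: symmetric, singular at the origin (producing sparsity), and constant beyond a threshold (reducing bias on large coefficients). A sketch, with the customary choice a = 3.7:

```python
import numpy as np

def scad(theta, lam, a=3.7):
    """SCAD penalty, evaluated elementwise."""
    t = np.abs(theta)
    return np.where(
        t <= lam,
        lam * t,                                              # l_1-like near zero -> sparsity
        np.where(
            t <= a * lam,
            (2 * a * lam * t - t**2 - lam**2) / (2 * (a - 1)),  # quadratic transition
            lam**2 * (a + 1) / 2,                             # constant tail -> low bias
        ),
    )
```

Because the penalty is bounded by the constant lam^2 (a + 1) / 2, large coefficients incur only a fixed charge, which is exactly the bias-reduction property the abstract highlights.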

Book
01 Jan 1989
TL;DR: This book develops recursive methods for economic dynamics, covering deterministic and stochastic models of optimal growth, dynamic programming under certainty and uncertainty, and the contraction-mapping tools that underpin them.
Abstract (table of contents):
I. THE RECURSIVE APPROACH
1. Introduction
2. An Overview: 2.1 A Deterministic Model of Optimal Growth 2.2 A Stochastic Model of Optimal Growth 2.3 Competitive Equilibrium Growth 2.4 Conclusions and Plans
II. DETERMINISTIC MODELS
3. Mathematical Preliminaries: 3.1 Metric Spaces and Normed Vector Spaces 3.2 The Contraction Mapping Theorem 3.3 The Theorem of the Maximum
4. Dynamic Programming under Certainty: 4.1 The Principle of Optimality 4.2 Bounded Returns 4.3 Constant Returns to Scale 4.4 Unbounded Returns 4.5 Euler Equations
5. Applications of Dynamic Programming under Certainty: 5.1 The One-Sector Model of Optimal Growth 5.2 A "Cake-Eating" Problem 5.3 Optimal Growth with Linear Utility 5.4 Growth with Technical Progress 5.5 A Tree-Cutting Problem 5.6 Learning by Doing 5.7 Human Capital Accumulation 5.8 Growth with Human Capital 5.9 Investment with Convex Costs 5.10 Investment with Constant Returns 5.11 Recursive Preferences 5.12 Theory of the Consumer with Recursive Preferences 5.13 A Pareto Problem with Recursive Preferences 5.14 An (s, S) Inventory Problem 5.15 The Inventory Problem in Continuous Time 5.16 A Seller with Unknown Demand 5.17 A Consumption-Savings Problem
6. Deterministic Dynamics: 6.1 One-Dimensional Examples 6.2 Global Stability: Liapounov Functions 6.3 Linear Systems and Linear Approximations 6.4 Euler Equations 6.5 Applications
III. STOCHASTIC MODELS
7. Measure Theory and Integration: 7.1 Measurable Spaces 7.2 Measures 7.3 Measurable Functions 7.4 Integration 7.5 Product Spaces 7.6 The Monotone Class Lemma

2,991 citations
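The contraction-mapping machinery of Part II can be sketched with a standard value-function-iteration exercise (the cake-eating setup below, including the grid, log utility, terminal rule, and discount factor, is our illustrative choice, not the book's): because the Bellman operator is a beta-contraction in the sup norm, successive iterates converge geometrically.

```python
import numpy as np

beta = 0.9
k_grid = np.linspace(0.05, 1.0, 200)   # remaining cake sizes

def bellman(V):
    """One application of the Bellman operator for the cake-eating problem."""
    TV = np.empty_like(V)
    for i, k in enumerate(k_grid):
        kp = k_grid[k_grid < k]        # feasible next-period cake sizes
        c = k - kp                     # implied consumption levels
        if c.size == 0:
            TV[i] = np.log(k)          # smallest cake: eat it all
        else:
            Vp = V[: kp.size]
            TV[i] = max(np.log(k), (np.log(c) + beta * Vp).max())
    return TV

V = np.zeros_like(k_grid)
gaps = []                              # sup-norm distances between iterates
for _ in range(5):
    TV = bellman(V)
    gaps.append(np.abs(TV - V).max())
    V = TV
```

The contraction property guarantees each sup-norm gap is at most beta times the previous one, which is what the assertions below check.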


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a penalized linear unbiased selection (PLUS) algorithm, which computes multiple exact local minimizers of a possibly nonconvex penalized loss function in a certain main branch of the graph of critical points of the loss.
Abstract: We propose MC+, a fast, continuous, nearly unbiased and accurate method of penalized variable selection in high-dimensional linear regression. The LASSO is fast and continuous, but biased. The bias of the LASSO may prevent consistent variable selection. Subset selection is unbiased but computationally costly. The MC+ has two elements: a minimax concave penalty (MCP) and a penalized linear unbiased selection (PLUS) algorithm. The MCP provides the convexity of the penalized loss in sparse regions to the greatest extent given certain thresholds for variable selection and unbiasedness. The PLUS computes multiple exact local minimizers of a possibly nonconvex penalized loss function in a certain main branch of the graph of critical points of the penalized loss. Its output is a continuous piecewise linear path encompassing from the origin for infinite penalty to a least squares solution for zero penalty. We prove that at a universal penalty level, the MC+ has high probability of matching the signs of the unknowns, and thus correct selection, without assuming the strong irrepresentable condition required by the LASSO. This selection consistency applies to the case of p≫n, and is proved to hold for exactly the MC+ solution among possibly many local minimizers. We prove that the MC+ attains certain minimax convergence rates in probability for the estimation of regression coefficients in lr balls. We use the SURE method to derive degrees of freedom and Cp-type risk estimates for general penalized LSE, including the LASSO and MC+ estimators, and prove their unbiasedness. Based on the estimated degrees of freedom, we propose an estimator of the noise level for proper choice of the penalty level. For full rank designs and general sub-quadratic penalties, we provide necessary and sufficient conditions for the continuity of the penalized LSE. 
Simulation results overwhelmingly support our claim of superior variable selection properties and demonstrate the computational efficiency of the proposed method.

2,382 citations
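The minimax concave penalty at the core of MC+ has a simple closed form: it interpolates from an l_1 slope at the origin down to slope zero, then stays constant so large coefficients are left nearly unbiased. A sketch (gamma = 3 is an illustrative choice, not a recommendation from the paper):

```python
import numpy as np

def mcp(t, lam, gamma=3.0):
    """Minimax concave penalty (MCP), evaluated elementwise."""
    a = np.abs(t)
    return np.where(
        a <= gamma * lam,
        lam * a - a**2 / (2 * gamma),   # concave rise from the l_1 slope at zero
        0.5 * gamma * lam**2,           # flat tail beyond gamma * lam
    )
```

The parameter gamma controls the trade-off the abstract describes: smaller gamma gives a flatter tail sooner (less bias) at the cost of less convexity in the penalized loss.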

Journal ArticleDOI
TL;DR: In this paper, a simple, regenerative, optimal stopping model of bus-engine replacement is proposed to describe the behavior of Harold Zurcher, superintendent of maintenance at the Madison (Wisconsin) Metropolitan Bus Company.
Abstract: This paper formulates a simple, regenerative, optimal-stopping model of bus-engine replacement to describe the behavior of Harold Zurcher, superintendent of maintenance at the Madison (Wisconsin) Metropolitan Bus Company. Admittedly, few people are likely to take particular interest in Harold Zurcher and bus engine replacement per se. The author focuses on a specific individual and capital good because it provides a simple, concrete framework to illustrate two ideas: (1) a "bottom-up" approach for modeling replacement investment and (2) a "nested fixed point" algorithm for estimating dynamic programming models of discrete choice.

1,815 citations
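The inner loop of a nested-fixed-point scheme can be sketched as a contraction fixed point for the expected value function of the keep/replace problem under extreme-value taste shocks (all parameter values, the state space, and the transition rule below are illustrative choices of ours, not the paper's estimates):

```python
import numpy as np

beta, RC, c = 0.95, 10.0, 0.5   # discount factor, replacement cost, per-mileage cost
N = 30                          # discretized mileage states 0..N-1

V = np.zeros(N)
for _ in range(2000):
    # keep: pay the mileage-dependent cost and move up one state
    keep = -c * np.arange(N) + beta * V[np.minimum(np.arange(N) + 1, N - 1)]
    # replace: pay RC and regenerate at mileage zero
    replace = -RC + beta * V[0]
    # with T1EV shocks, the expected value is the log-sum-exp over choices
    V_new = np.logaddexp(keep, replace)
    if np.abs(V_new - V).max() < 1e-10:
        V = V_new
        break
    V = V_new

# implied logit probability of replacement, rising with mileage
p_replace = 1.0 / (1.0 + np.exp(keep - replace))
```

In a full nested-fixed-point estimator, an outer loop would search over (RC, c) to maximize the likelihood of observed replacement decisions, re-solving this fixed point at each trial parameter.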