Open Access · Journal Article · DOI

Optimal Smoothing in Single-index Models

Wolfgang Karl Härdle, +2 more
01 Mar 1993 · Vol. 21, Iss. 1, pp. 157–178
TLDR
In this article, a simple empirical rule is proposed for selecting the bandwidth appropriate to single-index models; the rule is evaluated in a small simulation study and in an application to binary response models.
Abstract
Single-index models generalize linear regression. They have applications in a variety of fields, such as discrete choice analysis in econometrics and dose response models in biometrics, where high-dimensional regression models are often employed. Single-index models are similar to the first step of projection pursuit regression, a dimension-reduction method. In both cases the orientation vector can be estimated root-n consistently, even if the unknown univariate function (or nonparametric link function) is assumed to come from a large smoothness class. However, as we show in the present paper, the similarities end there. In particular, the amount of smoothing necessary for root-n consistent orientation estimation is very different in the two cases. We suggest a simple, empirical rule for selecting the bandwidth appropriate to single-index models. This rule is studied in a small simulation study and in an application to binary response models.
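As a hedged illustration of the model class described above (this is not the paper's bandwidth rule — the data, the rule-of-thumb bandwidth, and all names below are assumptions for illustration): the sketch simulates data from a single-index model y = g(xᵀθ) + ε and recovers the orientation θ by minimizing a leave-one-out Nadaraya–Watson residual sum of squares over candidate unit directions.

```python
import numpy as np

# Illustrative single-index fit (not the paper's method): profile a
# leave-one-out Nadaraya-Watson smoother over candidate orientations.
rng = np.random.default_rng(0)
n = 400
X = rng.normal(size=(n, 2))
theta_true = np.array([3.0, 4.0]) / 5.0        # unit-norm orientation
g = lambda u: np.sin(1.5 * u)                  # "unknown" link, used only to simulate
y = g(X @ theta_true) + 0.1 * rng.normal(size=n)

def nw_fit(u, y, h):
    """Leave-one-out Nadaraya-Watson estimate of E[y | u] at the sample points."""
    K = np.exp(-0.5 * ((u[:, None] - u[None, :]) / h) ** 2)
    np.fill_diagonal(K, 0.0)                   # leave-one-out to avoid a trivial fit
    return (K @ y) / np.maximum(K.sum(axis=1), 1e-12)

def profile_rss(theta, h):
    u = X @ (theta / np.linalg.norm(theta))
    return np.mean((y - nw_fit(u, y, h)) ** 2)

# Crude grid search over the angle parameterizing a unit vector in R^2;
# the bandwidth is a generic rule-of-thumb choice (an assumption here).
angles = np.linspace(0, np.pi, 181)
h = 1.06 * n ** (-1 / 5)
best = min(angles, key=lambda a: profile_rss(np.array([np.cos(a), np.sin(a)]), h))
theta_hat = np.array([np.cos(best), np.sin(best)])
print(theta_hat)   # close to theta_true up to sign
```

The orientation is identified only up to sign and scale, which is why the search runs over unit vectors on a half-circle.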



Citations
Journal Article · DOI

Semiparametric least squares (sls) and weighted sls estimation of single-index models

TL;DR: For the class of single-index models, this article constructed a semiparametric estimator of the coefficients, identified up to a multiplicative constant, that exhibits √n-consistency and asymptotic normality.
Journal Article · DOI

Generalized Partially Linear Single-Index Models

TL;DR: The generalized partially linear single-index model (GPLSIM), as discussed by the authors, is a semiparametric generalized linear model for regression of a response Y on predictors (X, Z), with a conditional mean function based on a linear combination of X and Z, where η0(·) is an unknown function.
Journal Article · DOI

An adaptive estimation of dimension reduction space

TL;DR: The (conditional) minimum average variance estimation (MAVE) method is proposed, which is applicable to a wide range of models, with fewer restrictions on the distribution of the covariates, to the extent that even time series can be included.
Book

High-Dimensional Statistics: A Non-Asymptotic Viewpoint

TL;DR: This book provides a self-contained introduction to the area of high-dimensional statistics, aimed at the first-year graduate level, and includes chapters that are focused on core methodology and theory - including tail bounds, concentration inequalities, uniform laws and empirical process, and random matrices.
Journal Article · DOI

Penalized Spline Estimation for Partially Linear Single-Index Models

TL;DR: In this article, the authors proposed penalized spline (P-spline) estimation of η0(·) in partially linear single-index models, where the mean function has the form η0(α0ᵀx) + β0ᵀz.
References
Journal Article · DOI

Generalized Linear Models

TL;DR: In this paper, the authors used iterative weighted linear regression to obtain maximum likelihood estimates of the parameters with observations distributed according to some exponential family and systematic effects that can be made linear by a suitable transformation.
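The iterative weighted linear regression idea summarized above can be sketched for the logistic case. This is a generic IRLS (iteratively reweighted least squares) illustration on simulated data — an assumption-laden sketch, not code from the cited paper.

```python
import numpy as np

# Minimal IRLS sketch for a GLM: logistic regression with the canonical
# logit link. Data and all names are illustrative assumptions.
rng = np.random.default_rng(1)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # intercept + 2 covariates
beta_true = np.array([-0.5, 1.0, -2.0])
p = 1.0 / (1.0 + np.exp(-(X @ beta_true)))
y = rng.binomial(1, p)

beta = np.zeros(3)
for _ in range(25):                             # Newton steps via weighted least squares
    eta = X @ beta
    mu = 1.0 / (1.0 + np.exp(-eta))             # fitted probabilities
    w = mu * (1.0 - mu)                         # GLM working weights
    z = eta + (y - mu) / np.maximum(w, 1e-12)   # working response
    WX = X * w[:, None]
    beta = np.linalg.solve(X.T @ WX, X.T @ (w * z))  # weighted least squares update

print(beta)   # maximum likelihood estimate, near beta_true
```

Each pass is an ordinary weighted least-squares fit of the working response z on X, which is exactly the "iterative weighted linear regression" device the summary refers to.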
Journal Article · DOI

Applied Nonparametric Regression.

Peter M. Robinson, +1 more
- 01 Nov 1991 - 
TL;DR: Applied Nonparametric Regression as mentioned in this paper is the first book to bring together in one place the techniques for regression curve smoothing involving more than one variable, including kernel smoothing, spline smoothing and orthogonal polynomials.
Journal Article · DOI

Semiparametric least squares (sls) and weighted sls estimation of single-index models

TL;DR: For the class of single-index models, this article constructed a semiparametric estimator of the coefficients, identified up to a multiplicative constant, that exhibits √n-consistency and asymptotic normality.
Book

Smoothing Techniques : With Implementation in S

TL;DR: Topics include the kernel estimate as a frequency-counting curve, the histogram as a maximum likelihood estimate, keeping the kernel bias the same, and keeping the support of the kernel the same.
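The histogram-versus-kernel comparison in the summary above can be sketched directly: both average "bump" functions over the data, and the kernel estimate replaces the bin indicator with a smooth kernel. The bandwidth rule and all names below are illustrative assumptions, not taken from the book.

```python
import numpy as np

# Histogram and Gaussian kernel density estimates on the same grid.
rng = np.random.default_rng(2)
x = rng.normal(size=300)
grid = np.linspace(-4, 4, 201)

def hist_density(x, grid, h):
    """Histogram as a density estimate with bin width h, bins anchored at -4."""
    edges = np.arange(-4, 4 + h, h)
    counts, _ = np.histogram(x, bins=edges)
    dens = counts / (len(x) * h)                # normalize counts to a density
    idx = np.clip(np.searchsorted(edges, grid, side="right") - 1, 0, len(dens) - 1)
    return dens[idx]

def kde(x, grid, h):
    """Gaussian kernel density estimate with bandwidth h."""
    K = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2) / np.sqrt(2 * np.pi)
    return K.mean(axis=1) / h

h = 1.06 * x.std() * len(x) ** (-1 / 5)         # a common rule-of-thumb bandwidth
f_hist = hist_density(x, grid, 0.5)
f_hat = kde(x, grid, h)
print(f_hat.max())   # roughly 0.4 for standard normal data
```

Plotting `f_hist` and `f_hat` side by side shows the step-function histogram and its smooth kernel counterpart estimating the same underlying density.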