Topic

Sufficient dimension reduction

About: Sufficient dimension reduction is a research topic. Over its lifetime, 700 publications have been published within this topic, receiving 53,420 citations. The topic is also known as: SDR.


Papers
Journal ArticleDOI
TL;DR: In this paper, the problem of selecting one of a number of models of different dimensions is treated by finding its Bayes solution, and evaluating the leading terms of its asymptotic expansion.
Abstract: The problem of selecting one of a number of models of different dimensions is treated by finding its Bayes solution, and evaluating the leading terms of its asymptotic expansion. These terms are a valid large-sample criterion beyond the Bayesian context, since they do not depend on the a priori distribution.

38,681 citations
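
The criterion derived in this paper is what is now known as the Bayesian information criterion (BIC): for a model with k parameters fitted to n observations at maximized log-likelihood ln L̂, the leading terms of the asymptotic expansion give BIC = k ln n − 2 ln L̂, and the model minimizing it is selected. Below is a minimal sketch in Python; the `bic` helper and the polynomial-degree example are illustrative, not from the paper.

```python
import numpy as np

def bic(log_likelihood: float, n_params: int, n_obs: int) -> float:
    """Schwarz's criterion: leading terms of the Bayes solution's
    asymptotic expansion; smaller is better."""
    return n_params * np.log(n_obs) - 2.0 * log_likelihood

# Illustrative example: choose the degree of a polynomial regression.
rng = np.random.default_rng(0)
n = 200
x = rng.uniform(-2, 2, n)
y = 1.0 + 0.5 * x - 0.8 * x**2 + rng.normal(0, 0.3, n)

for degree in range(1, 5):
    coefs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coefs, x)
    sigma2 = resid @ resid / n
    # Gaussian log-likelihood evaluated at the MLE of sigma^2
    ll = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    k = degree + 2  # polynomial coefficients plus the noise variance
    print(degree, round(bic(ll, k, n), 1))
```

On data generated from a quadratic, the degree-2 fit should attain the smallest BIC; the k ln n penalty is what keeps the higher-degree models from winning on in-sample fit alone.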

Journal ArticleDOI
TL;DR: In this article, sliced inverse regression (SIR) is proposed to reduce the dimension of the input variable without going through any parametric or nonparametric model-fitting process.
Abstract: Modern advances in computing power have greatly widened scientists' scope in gathering and investigating information from many variables, information which might have been ignored in the past. Yet to effectively scan a large pool of variables is not an easy task, although our ability to interact with data has been much enhanced by recent innovations in dynamic graphics. In this article, we propose a novel data-analytic tool, sliced inverse regression (SIR), for reducing the dimension of the input variable x without going through any parametric or nonparametric model-fitting process. This method explores the simplicity of the inverse view of regression; that is, instead of regressing the univariate output variable y against the multivariate x, we regress x against y. Forward regression and inverse regression are connected by a theorem that motivates this method. The theoretical properties of SIR are investigated under a model of the form y = f(β_1 x, …, β_K x, ε), where the β_k's are the unknown...

2,158 citations
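
The SIR recipe in the abstract reduces to a few linear-algebra steps: standardize x, partition the sample into slices on the order statistics of y, average the standardized x within each slice, and take the top eigenvectors of the weighted covariance of those slice means. A minimal sketch of that standard algorithm follows; the function name `sir` and the synthetic data are illustrative.

```python
import numpy as np

def sir(X: np.ndarray, y: np.ndarray, n_slices: int = 10, n_dirs: int = 2):
    """Sliced inverse regression: eigen-decompose the weighted
    covariance of within-slice means of the standardized predictors."""
    n, p = X.shape
    mu = X.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(X, rowvar=False))
    Sigma_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    Z = (X - mu) @ Sigma_inv_sqrt          # standardized predictors
    order = np.argsort(y)                  # slice on order statistics of y
    M = np.zeros((p, p))
    for idx in np.array_split(order, n_slices):
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    w, v = np.linalg.eigh(M)               # top eigenvectors span the e.d.r. space
    dirs = Sigma_inv_sqrt @ v[:, ::-1][:, :n_dirs]   # back to original scale
    return dirs / np.linalg.norm(dirs, axis=0)

# Illustrative example: y depends on x only through one linear combination.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
beta = np.array([1.0, -1.0, 0.0, 0.0, 0.0]) / np.sqrt(2)
y = np.sin(X @ beta) + 0.1 * rng.normal(size=500)
print(sir(X, y, n_dirs=1).ravel())  # roughly proportional to beta
```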

Journal ArticleDOI
TL;DR: The (conditional) minimum average variance estimation (MAVE) method is proposed, which is applicable to a wide range of models, with fewer restrictions on the distribution of the covariates, to the extent that even time series can be included.
Abstract: Searching for an effective dimension reduction space is an important problem in regression, especially for high dimensional data. We propose an adaptive approach based on semiparametric models, which we call the (conditional) minimum average variance estimation (MAVE) method, within quite a general setting. The MAVE method has the following advantages. Most existing methods must undersmooth the nonparametric link function estimator to achieve a faster rate of consistency for the estimator of the parameters (than for that of the nonparametric function). In contrast, a faster consistency rate can be achieved by the MAVE method even without undersmoothing the nonparametric link function estimator. The MAVE method is applicable to a wide range of models, with fewer restrictions on the distribution of the covariates, to the extent that even time series can be included. Because of the faster rate of consistency for the parameter estimators, it is possible for us to estimate the dimension of the space consistently. The relationship of the MAVE method with other methods is also investigated. In particular, a simple outer product gradient estimator is proposed as an initial estimator. In addition to theoretical results, we demonstrate the efficacy of the MAVE method for high dimensional data sets through simulation. Two real data sets are analysed by using the MAVE approach.

787 citations
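
The outer product gradient (OPG) initial estimator mentioned in the abstract can be sketched directly: estimate the gradient of the regression function at each sample point by weighted local-linear regression, then take the leading eigenvectors of the averaged outer products of those gradients. This is a rough illustration under a fixed Gaussian kernel, not the refined MAVE iteration itself; the name `opg` and the bandwidth choice are assumptions.

```python
import numpy as np

def opg(X: np.ndarray, y: np.ndarray, n_dirs: int = 1, bandwidth: float = 1.0):
    """Outer product of gradients: local-linear gradient estimates are
    averaged as outer products; leading eigenvectors span the estimated
    dimension reduction space. Bandwidth needs tuning in practice."""
    n, p = X.shape
    M = np.zeros((p, p))
    for i in range(n):
        d = X - X[i]
        w = np.exp(-0.5 * (d ** 2).sum(axis=1) / bandwidth ** 2)  # kernel weights
        A = np.hstack([np.ones((n, 1)), d])        # local-linear design matrix
        AtW = A.T * w                              # A' diag(w)
        beta, *_ = np.linalg.lstsq(AtW @ A, AtW @ y, rcond=None)
        g = beta[1:]                               # gradient estimate at X[i]
        M += np.outer(g, g) / n
    w_eig, v = np.linalg.eigh(M)
    return v[:, ::-1][:, :n_dirs]

# Illustrative example: a symmetric single-index function.
rng = np.random.default_rng(2)
X = rng.normal(size=(300, 4))
y = (X[:, 0] + X[:, 1]) ** 2 + 0.1 * rng.normal(size=300)
print(opg(X, y, n_dirs=1, bandwidth=1.5).ravel())  # ~ (1, 1, 0, 0)/sqrt(2)
```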

Journal ArticleDOI
TL;DR: Another method for finding effective dimension reduction (edr) directions is introduced, as guidance on what to plot when studying the relationship between a high-dimensional input variable and an output variable.
Abstract: Modern graphical tools have enhanced our ability to learn many things from data directly. With much user-friendly graphical software available, we are encouraged to plot a lot more often than before. The benefits from direct interaction with graphics have been enormous. But trailing behind these high-tech advances is the issue of appropriate guidance on what to plot. There are too many directions to project a high-dimensional data set and unguided plotting can be time-consuming and fruitless. In a recent article, Li set up a statistical framework for study on this issue, based on a notion of effective dimension reduction (edr) directions. They are the directions to project a high dimensional input variable for the purpose of effectively viewing and studying its relationship with an output variable. A methodology, sliced inverse regression, was introduced and shown to be useful in finding edr directions. This article introduces another method for finding edr directions. It begins with the observat...

618 citations
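
The method introduced in this comment on Li's article is known as sliced average variance estimation (SAVE). Where SIR uses within-slice means, SAVE uses within-slice covariances, so it can recover directions along which y depends on x symmetrically, which slice means miss. A minimal sketch of the standard formulation follows; the function name `save` and the test data are illustrative.

```python
import numpy as np

def save(X: np.ndarray, y: np.ndarray, n_slices: int = 10, n_dirs: int = 2):
    """Sliced average variance estimation: average (I - Cov(Z | slice))^2
    over slices of y and take the leading eigenvectors."""
    n, p = X.shape
    mu = X.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(X, rowvar=False))
    Sigma_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    Z = (X - mu) @ Sigma_inv_sqrt          # standardized predictors
    order = np.argsort(y)
    M = np.zeros((p, p))
    for idx in np.array_split(order, n_slices):
        C = np.eye(p) - np.cov(Z[idx], rowvar=False)
        M += (len(idx) / n) * (C @ C)
    w, v = np.linalg.eigh(M)
    dirs = Sigma_inv_sqrt @ v[:, ::-1][:, :n_dirs]   # back to original scale
    return dirs / np.linalg.norm(dirs, axis=0)

# Illustrative example: SIR misses y = x1^2 (slice means vanish by
# symmetry), but the within-slice variances still carry the signal.
rng = np.random.default_rng(3)
X = rng.normal(size=(500, 4))
y = X[:, 0] ** 2 + 0.1 * rng.normal(size=500)
print(save(X, y, n_dirs=1).ravel())  # roughly (±1, 0, 0, 0)
```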


Network Information
Related Topics (5)
Multivariate statistics
18.4K papers, 1M citations
82% related
Estimator
97.3K papers, 2.6M citations
81% related
Linear model
19K papers, 1M citations
80% related
Sample size determination
21.3K papers, 961.4K citations
78% related
Statistical hypothesis testing
19.5K papers, 1M citations
78% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2025    1
2023    21
2022    56
2021    61
2020    56
2019    47