Penalized Composite Quasi-Likelihood for Ultrahigh-Dimensional Variable Selection
Citations
7 citations
Cites background or methods from "Penalized Composite Quasi-Likelihoo..."
...This idea is motivated by Bradic, Fan, and Wang (2011), which attempted to produce a robust and efficient estimator for high-dimensional linear regression by minimising composite loss functions simultaneously....
[...]
...However, different from a direct extension of Bradic et al. (2011) to nonparametric regression, we drop the finite second-moment assumption and combine the squared loss with multiple quantile loss functions for symmetric errors and pick weights to optimise the asymptotic efficiency....
[...]
6 citations
Cites background from "Penalized Composite Quasi-Likelihoo..."
...An intermediary step of the method required the estimation of a weighted least squares version of Lasso in which weights are estimated....
[...]
...Other very important work includes Chaudhuri [7], Chaudhuri, Doksum and Samarov [8], Härdle, Ritov, and Song [12], Cattaneo, Crump, and Jansson [6], and Kong, Linton, and Xia [17], among others, but this work focused on local, non-series, methods....
[...]
...The second step attempts to properly partial out the confounding factors z from the treatment by estimating a suitable residual via the heteroskedastic Lasso [8, 1]....
[...]
...For instance, the Lasso can be replaced by the Dantzig selector, SCAD, the square-root Lasso, the associated post-model-selection estimators, or others....
[...]
...[8] Probal Chaudhuri, Kjell Doksum, and Alexander Samarov....
[...]
6 citations
Cites methods from "Penalized Composite Quasi-Likelihoo..."
...[7] proposed a robust and efficient penalized composite quasi-likelihood method for ultrahigh dimensional variable selection....
[...]
References
"Penalized Composite Quasi-Likelihoo..." refers background or methods in this paper
...(16) can be recast as a penalized weighted least-squares regression
$$\arg\min_{\beta} \sum_{i=1}^{n} \left[ w_1 \left| Y_i - X_i^T \beta \right| + w_2 \left( Y_i - X_i^T \beta \right)^2 \right] + n \sum_{j=1}^{p} \gamma_\lambda\!\left(|\hat\beta_j^{(0)}|\right) |\beta_j|,$$
which can be efficiently solved by pathwise coordinate optimization (Friedman et al., 2008) or least angle regression (Efron et al., 2004)....
[...]
...) are all nonnegative. This class of problems can be solved with fast and efficient computational algorithms such as pathwise coordinate optimization (Friedman et al., 2008) and least angle regression (Efron et al., 2004). One particular example is the combination of $L_1$ and $L_2$ regressions, in which $K = 2$, $\rho_1(t) = |t - b_0|$ and $\rho_2(t) = t^2$. Here $b_0$ denotes the median of the error distribution $\varepsilon$. If the error distribution is sym...
[...]
...$\arg\min_{\beta} \sum_{i=1}^{n} \left[ w_1 \left| Y_i - X_i^T \beta \right| + w_2 \left( Y_i - X_i^T \beta \right)^2 \right] + n \sum_{j=1}^{p} \gamma_\lambda\!\left(|\hat\beta_j^{(0)}|\right) |\beta_j|$, which can be efficiently solved by pathwise coordinate optimization (Friedman et al., 2008) or least angle regression (Efron et al., 2004). If $b_0 \neq 0$, the penalized least-squares problem (16) is somewhat different from (5) since we have an additional parameter $b_0$. Using the same arguments, and treating $b_0$ as an additional parameter ...
[...]
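The excerpts above combine a weighted $L_1$ loss and an $L_2$ loss with a sparsity penalty. The following is a minimal numerical sketch of that composite objective on synthetic data — not the paper's pathwise coordinate optimization or LAR algorithm, and with a plain Lasso-type penalty `lam * |beta|` standing in for the folded-concave weights $\gamma_\lambda(|\hat\beta_j^{(0)}|)$; the weights `w1`, `w2`, and `lam` are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, p = 200, 5
beta_true = np.array([2.0, 0.0, -1.5, 0.0, 0.0])
X = rng.standard_normal((n, p))
y = X @ beta_true + rng.standard_normal(n)  # symmetric (Gaussian) errors

w1, w2, lam = 0.5, 0.5, 0.1  # illustrative loss weights and penalty level

def composite_objective(beta):
    r = y - X @ beta
    # composite quasi-likelihood: weighted L1 + L2 losses on the residuals,
    # plus an L1 penalty (in place of the folded-concave gamma_lambda weights)
    return w1 * np.abs(r).mean() + w2 * (r ** 2).mean() + lam * np.abs(beta).sum()

# The objective is convex but nonsmooth; a derivative-free method suffices
# for this tiny demo, whereas the paper uses pathwise coordinate optimization.
res = minimize(composite_objective, np.zeros(p), method="Powell")
beta_hat = res.x
print(np.round(beta_hat, 2))
```

The recovered coefficients land near the true nonzero entries (with the usual slight shrinkage from the penalty) and close to zero elsewhere, illustrating why the recast problem in (16) is attractive: it is a single convex program over $\beta$.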