Penalized Composite Quasi-Likelihood for Ultrahigh-Dimensional Variable Selection
Citations
3 citations
Cites methods from "Penalized Composite Quasi-Likelihoo..."
...Bradic et al. (2011) suggested a data-driven method for estimating the optimal weights and showed that the efficiency of the weighted linear CQR can be improved significantly by employing proper weights. Further, Sun et al. (2013) proposed the weighted local linear CQR under general conditions on the random error....
3 citations
Cites methods from "Penalized Composite Quasi-Likelihoo..."
...In this paper, we introduce penalized quantile regression with the weighted L1-penalty (WR-Lasso) for robust regularization, as in Bradic et al. (2011). The weights are introduced to reduce the bias induced by the L1-penalty, and the freedom in choosing them allows flexible shrinkage estimation of the regression coefficients. WR-Lasso shares a similar spirit with folded-concave penalized quantile regression (Zou and Li, 2008; Wang et al., 2012), but avoids the nonconvex optimization problem. We establish conditions on the error distribution under which the WR-Lasso recovers the true underlying sparse model with asymptotic probability one. The required condition turns out to be much weaker than the sub-Gaussian assumption in Bradic et al. (2011): the only condition we impose is that the density function of the error satisfies a Lipschitz property in a neighborhood of 0....
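The WR-Lasso described in this excerpt, a quantile check loss plus a weighted L1 penalty, can be sketched with a simple proximal-subgradient solver. This is an illustrative reconstruction, not the cited authors' implementation; the step size, iteration count, synthetic data, and uniform penalty weights `d` are all assumptions made here.

```python
import numpy as np

def check_loss(r, tau=0.5):
    # Quantile check loss: rho_tau(r) = r * (tau - 1{r < 0}).
    return r * (tau - (r < 0))

def wr_lasso(X, y, lam, d, tau=0.5, step=0.01, n_iter=2000):
    """Proximal-subgradient sketch of WR-Lasso:
    minimize (1/n) sum_i rho_tau(y_i - x_i'beta) + lam * sum_j d_j |beta_j|."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        r = y - X @ beta
        # Subgradient of the averaged check loss.
        grad = -X.T @ (tau - (r < 0).astype(float)) / n
        beta = beta - step * grad
        # Proximal step: soft-thresholding with the weighted L1 penalty.
        beta = np.sign(beta) * np.maximum(np.abs(beta) - step * lam * d, 0.0)
    return beta

def objective(b, lam=0.05):
    return check_loss(y - X @ b).mean() + lam * np.abs(b).sum()

# Synthetic example (purely illustrative): sparse truth, symmetric noise.
rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[0], beta_true[1] = 2.0, -1.5
y = X @ beta_true + rng.standard_normal(n)
d = np.ones(p)  # uniform penalty weights, for illustration only
beta_hat = wr_lasso(X, y, lam=0.05, d=d)
```

In practice the weights `d` would be chosen adaptively (larger for coefficients believed to be zero) to mitigate the L1 bias the excerpt refers to; uniform weights recover the ordinary L1-penalized quantile regression.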
References
"Penalized Composite Quasi-Likelihoo..." refers background or methods in this paper
...(16) can be recast as a penalized weighted least-squares regression

$$\arg\min_{\beta} \sum_{i=1}^{n} \left[ \frac{w_1}{\bigl|Y_i - X_i^T \hat\beta^{(0)}\bigr|} + w_2 \right] \bigl(Y_i - X_i^T \beta\bigr)^2 + n \sum_{j=1}^{p} \gamma_\lambda\bigl(|\beta_j^{(0)}|\bigr)\, |\beta_j|,$$

which can be efficiently solved by pathwise coordinate optimization (Friedman et al., 2008) or least angle regression (Efron et al., 2004)....
[...]
...) are all nonnegative. This class of problems can be solved with fast and efficient computational algorithms such as pathwise coordinate optimization (Friedman et al., 2008) and least angle regression (Efron et al., 2004). One particular example is the combination of $L_1$ and $L_2$ regressions, in which $K = 2$, $\rho_1(t) = |t - b_0|$ and $\rho_2(t) = t^2$. Here $b_0$ denotes the median of the error distribution $\varepsilon$. If the error distribution is sym...
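The L1 + L2 combination described above is easy to state in code. A minimal sketch, assuming the symmetric-error case (so the median b0 = 0) and equal component weights by default; the data and weights are illustrative, not from the paper:

```python
import numpy as np

def composite_l1_l2(beta, X, y, w1=0.5, w2=0.5, b0=0.0):
    """Composite quasi-likelihood loss with K = 2 components:
    rho_1(t) = |t - b0| (b0: median of the error) and rho_2(t) = t^2,
    combined with nonnegative weights w1, w2."""
    r = y - X @ beta
    return np.sum(w1 * np.abs(r - b0) + w2 * r ** 2)

# Illustrative data: symmetric Gaussian error, so b0 = 0.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
beta_true = np.array([1.0, -2.0, 0.0])
y = X @ beta_true + rng.standard_normal(100)

loss_true = composite_l1_l2(beta_true, X, y)
loss_zero = composite_l1_l2(np.zeros(3), X, y)
```

Mixing the two components trades off the robustness of the absolute loss against the efficiency of the squared loss under light-tailed errors, which is the motivation for the composite class quoted above.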
[...]
...$\sum_{i=1}^{n} \left[ \frac{w_1}{\bigl|Y_i - X_i^T \hat\beta^{(0)}\bigr|} + w_2 \right] \bigl(Y_i - X_i^T \beta\bigr)^2 + n \sum_{j=1}^{p} \gamma_\lambda\bigl(|\beta_j^{(0)}|\bigr)\, |\beta_j|$, which can be efficiently solved by pathwise coordinate optimization (Friedman et al., 2008) or least angle regression (Efron et al., 2004). If $b_0 \neq 0$, the penalized least-squares problem (16) is somewhat different from (5) since we have an additional parameter $b_0$. Using the same arguments, and treating $b_0$ as an additional parameter ...
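The recast problem quoted in these excerpts, an L1-penalized weighted least squares in which the absolute residual is approximated by a squared residual weighted by the inverse absolute residual at a pilot estimate, can be sketched with coordinate descent. This is a hedged reconstruction for the symmetric case b0 = 0: the weight truncation `eps`, the scalar penalty `lam` standing in for the local weights gamma_lambda(|beta_j^(0)|), and the synthetic data are all assumptions.

```python
import numpy as np

def soft(z, t):
    # Soft-thresholding: closed-form coordinate minimizer under an L1 penalty.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def penalized_wls(X, y, beta0, w1=0.5, w2=0.5, lam=0.05, n_sweeps=50, eps=1e-2):
    """One reweighting step: |y_i - x_i'beta| is approximated by
    (y_i - x_i'beta)^2 / |y_i - x_i'beta0|, giving observation weights
    v_i = w1 / max(|y_i - x_i'beta0|, eps) + w2 (eps keeps the weights
    bounded). The resulting L1-penalized weighted least squares is then
    solved by cyclic coordinate descent."""
    n, p = X.shape
    v = w1 / np.maximum(np.abs(y - X @ beta0), eps) + w2
    beta = beta0.copy()
    for _ in range(n_sweeps):
        for j in range(p):
            r_j = y - X @ beta + X[:, j] * beta[j]   # partial residual
            num = np.sum(v * X[:, j] * r_j)
            den = np.sum(v * X[:, j] ** 2)
            beta[j] = soft(num, n * lam / 2.0) / den  # penalty n*lam*|beta_j|
    return beta

# Illustrative data and pilot estimate (ordinary least squares).
rng = np.random.default_rng(1)
n, p = 120, 8
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[0] = 3.0
y = X @ beta_true + rng.standard_normal(n)
beta0 = np.linalg.lstsq(X, y, rcond=None)[0]

def wls_objective(b, lam=0.05, w1=0.5, w2=0.5, eps=1e-2):
    v = w1 / np.maximum(np.abs(y - X @ beta0), eps) + w2
    return np.sum(v * (y - X @ b) ** 2) + n * lam * np.sum(np.abs(b))

beta_hat = penalized_wls(X, y, beta0)
```

Each coordinate update minimizes the convex surrogate exactly, so the objective is non-increasing across sweeps; this is the structure that makes pathwise coordinate optimization efficient for this class of problems.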