scispace - formally typeset

Hui Zou

Researcher at University of Minnesota

Publications -  125
Citations -  38126

Hui Zou is an academic researcher at the University of Minnesota. His work centers on topics such as estimators and the lasso (statistics). He has an h-index of 44 and has co-authored 121 publications receiving 32,120 citations. His previous affiliations include North Carolina State University and Pennsylvania State University.

Papers
Journal ArticleDOI

Regularization and variable selection via the elastic net

TL;DR: It is shown that the elastic net often outperforms the lasso while enjoying a similar sparsity of representation, and an algorithm called LARS-EN is proposed for computing elastic net regularization paths efficiently, much as the LARS algorithm does for the lasso.
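The elastic net combines an ℓ1 (lasso) and an ℓ2 (ridge) penalty on the regression coefficients. A minimal sketch of fitting this criterion with scikit-learn's `ElasticNet` (which uses coordinate descent rather than the paper's LARS-EN path algorithm; the data here are synthetic for illustration):

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
# Sparse ground truth: only 3 of 10 coefficients are nonzero.
beta = np.array([3.0, 1.5, 0, 0, 2.0, 0, 0, 0, 0, 0])
y = X @ beta + rng.normal(scale=0.5, size=100)

# alpha scales the total penalty; l1_ratio mixes the L1 and L2 terms
# (l1_ratio=1 recovers the lasso, l1_ratio=0 recovers ridge).
model = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)
```

With a strong signal and mild regularization, the fit should recover the sparse structure closely while shrinking the coefficients slightly toward zero.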
Journal ArticleDOI

The adaptive lasso and its oracle properties

TL;DR: A new version of the lasso is proposed, called the adaptive lasso, in which adaptive weights are used to penalize different coefficients in the ℓ1 penalty; the nonnegative garrote is also shown to be consistent for variable selection.
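The adaptive lasso replaces the uniform ℓ1 penalty with per-coefficient weights, typically 1/|β̂_j|^γ from an initial consistent estimator. A hedged sketch (γ = 1, OLS as the initial estimator) using the standard column-rescaling trick, so an off-the-shelf lasso solver handles the weighted penalty; the data and tuning value are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))
beta = np.array([4.0, 0, 0, 2.0, 0, 0])
y = X @ beta + rng.normal(scale=0.5, size=200)

ols = LinearRegression().fit(X, y)
w = np.abs(ols.coef_)              # adaptive weights |beta_ols_j| (gamma = 1)
# Lasso on X * w is equivalent to a lasso on X whose penalty on beta_j
# is scaled by 1 / |beta_ols_j|: large initial estimates are penalized less.
lasso = Lasso(alpha=0.1).fit(X * w, y)
beta_adaptive = lasso.coef_ * w    # map back to the original scale
```

Noise coefficients get heavy effective penalties (small initial estimates) and are driven exactly to zero, while strong signals are barely shrunk.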
Journal ArticleDOI

Sparse Principal Component Analysis

TL;DR: This work introduces a new method called sparse principal component analysis (SPCA) using the lasso (elastic net) to produce modified principal components with sparse loadings and shows that PCA can be formulated as a regression-type optimization problem.
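The regression formulation is the key idea: principal component loadings can be recovered by regressing the PC scores on the data matrix, and adding a lasso penalty to that regression produces sparse loadings. A simplified one-component sketch of this idea (not the full alternating SPCA algorithm of the paper; the data and penalty value are illustrative assumptions):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
# A single latent factor drives the first four of eight features.
f = rng.normal(size=(100, 1))
signal = f @ np.ones((1, 4)) + 0.3 * rng.normal(size=(100, 4))
X = np.hstack([signal, rng.normal(size=(100, 4))])
Xc = X - X.mean(axis=0)

# Ordinary first PC scores via the SVD.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U[:, 0] * s[0]

# Lasso regression of the PC scores on X yields sparse loadings.
sparse_loadings = Lasso(alpha=0.5).fit(Xc, scores).coef_
```

The loadings on the pure-noise features are zeroed out, giving a sparse, more interpretable component.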
Journal ArticleDOI

Multi-class AdaBoost

TL;DR: A new algorithm is proposed that naturally extends the original AdaBoost algorithm to the multiclass case without reducing it to multiple two-class problems and is extremely easy to implement and is highly competitive with the best currently available multi-class classification methods.
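The algorithm proposed here is SAMME, which scikit-learn adopted for its multi-class `AdaBoostClassifier`. A minimal usage sketch on the three-class iris dataset (the estimator count and seed are illustrative choices):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import AdaBoostClassifier

X, y = load_iris(return_X_y=True)

# Boosted decision stumps handle all three classes directly,
# with no reduction to multiple two-class problems.
clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X, y)
train_accuracy = clf.score(X, y)
```

SAMME's per-round weight update adds a log(K − 1) term, so each weak learner only needs accuracy better than 1/K (random guessing among K classes), rather than 1/2 as in binary AdaBoost.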
Journal ArticleDOI

One-step Sparse Estimates in Nonconcave Penalized Likelihood Models.

TL;DR: A new unified algorithm based on the local linear approximation for maximizing the penalized likelihood for a broad class of concave penalty functions and shows that if the regularization parameter is appropriately chosen, the one-step LLA estimates enjoy the oracle properties with good initial estimators.
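The local linear approximation replaces a concave penalty p(|β_j|) by its tangent line at an initial estimate, so one step reduces to a weighted lasso with weights p′(|β̂⁰_j|). A hedged sketch using the SCAD penalty derivative and a small hand-rolled coordinate-descent solver for the weighted lasso (the data, λ, and iteration count are illustrative assumptions):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def scad_deriv(beta, lam, a=3.7):
    """Derivative of the SCAD penalty (Fan & Li's form) at |beta|."""
    b = np.abs(beta)
    return np.where(b <= lam, lam, np.maximum(a * lam - b, 0.0) / (a - 1.0))

def weighted_lasso(X, y, w, n_iter=200):
    """Coordinate descent for (1/2n)||y - Xb||^2 + sum_j w_j |b_j|."""
    n, p = X.shape
    b = np.zeros(p)
    msq = (X ** 2).mean(axis=0)
    r = y - X @ b
    for _ in range(n_iter):
        for j in range(p):
            r = r + X[:, j] * b[j]                 # remove j's contribution
            rho = (X[:, j] * r).mean()
            b[j] = np.sign(rho) * max(abs(rho) - w[j], 0.0) / msq[j]
            r = r - X[:, j] * b[j]                 # restore with updated b_j
    return b

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 6))
beta_true = np.array([3.0, 0.0, 0.0, 1.5, 0.0, 0.0])
y = X @ beta_true + rng.normal(scale=0.5, size=200)

beta0 = LinearRegression().fit(X, y).coef_   # good initial estimator (OLS)
w = scad_deriv(beta0, lam=0.5)               # linearized SCAD penalty weights
beta_lla = weighted_lasso(X, y, w)           # one LLA step
```

Because SCAD's derivative vanishes for large arguments, strong signals end up essentially unpenalized after one step, while small initial estimates keep the full λ penalty and are set exactly to zero, matching the sparsity-plus-low-bias behavior the oracle property describes.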