Open Access · Posted Content
A note on the group lasso and a sparse group lasso
TL;DR: An efficient algorithm is derived for the resulting convex problem based on coordinate descent that can be used to solve the general form of the group lasso, with non-orthonormal model matrices.
Abstract: We consider the group lasso penalty for the linear model. We note that the standard algorithm for solving the problem assumes that the model matrices in each group are orthonormal. Here we consider a more general penalty that blends the lasso (L1) with the group lasso ("two-norm"). This penalty yields solutions that are sparse at both the group and individual feature levels. We derive an efficient algorithm for the resulting convex problem based on coordinate descent. This algorithm can also be used to solve the general form of the group lasso, with non-orthonormal model matrices.
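The abstract's blended penalty and coordinate-wise optimization can be illustrated with a short sketch. This is not the authors' algorithm: it is a simplified proximal-gradient solver, and the names `sgl_prox` and `sparse_group_lasso` are hypothetical helpers introduced here. It relies on the fact that the proximal operator of the combined L1 + two-norm penalty is elementwise soft-thresholding followed by group-wise shrinkage, which produces the sparsity at both the group and individual feature levels described above.

```python
import numpy as np

def sgl_prox(beta, groups, lam1, lam2, step):
    """Prox of step * (lam1 * ||b||_1 + lam2 * sum_g ||b_g||_2):
    soft-threshold each coefficient, then shrink each group as a whole."""
    b = np.sign(beta) * np.maximum(np.abs(beta) - step * lam1, 0.0)
    for g in groups:
        norm = np.linalg.norm(b[g])
        if norm <= step * lam2:
            b[g] = 0.0                      # entire group zeroed out
        else:
            b[g] *= 1.0 - step * lam2 / norm  # group-wise shrinkage
    return b

def sparse_group_lasso(X, y, groups, lam1=0.05, lam2=0.05, n_iter=500):
    """Proximal-gradient sketch for
    (1/2n) * ||y - X b||^2 + lam1 * ||b||_1 + lam2 * sum_g ||b_g||_2."""
    n, p = X.shape
    beta = np.zeros(p)
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n)  # 1 / Lipschitz constant
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n
        beta = sgl_prox(beta - step * grad, groups, lam1, lam2, step)
    return beta
```

On noiseless data where only the first group carries signal, the second group is driven to exactly zero while individual zero coefficients inside the active group are also thresholded, matching the two-level sparsity the abstract describes.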
Citations
Journal Article (DOI)
Robust Face Recognition via Sparse Representation
TL;DR: This work considers the problem of automatically recognizing human faces from frontal views with varying expression and illumination, as well as occlusion and disguise, and proposes a general classification algorithm for (image-based) object recognition based on a sparse representation computed by ℓ1-minimization.
Journal Article (DOI)
Feature Selection: A Data Perspective
TL;DR: This survey revisits feature selection research from a data perspective and reviews representative feature selection algorithms for conventional data, structured data, heterogeneous data, and streaming data, categorizing them into four main groups: similarity-based, information-theoretical-based, sparse-learning-based, and statistical-based.
Journal Article (DOI)
A Sparse-Group Lasso
TL;DR: A regularized model for linear regression with ℓ1 and ℓ2 penalties is introduced, and it is shown to have the desired effect of group-wise and within-group sparsity.
Journal Article (DOI)
Structured Compressed Sensing: From Theory to Applications
Marco F. Duarte, Yonina C. Eldar, +1 more
TL;DR: The prime focus is bridging theory and practice, to pinpoint the potential of structured CS strategies to emerge from the math to the hardware in compressive sensing.
Journal Article (DOI)
Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function
Peter Richtárik, Martin Takáč, +1 more
TL;DR: In this paper, a randomized block-coordinate descent method for minimizing the sum of a smooth and a simple nonsmooth block-separable convex function was developed, and it was shown that the algorithm converges linearly.
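The randomized block-coordinate scheme summarized above can be sketched in a few lines. This is a simplified illustration, not the paper's exact method: the nonsmooth block-separable part is taken to be an ℓ1 penalty, and `randomized_bcd` is a hypothetical helper named here. Each iteration picks a random block, takes a gradient step on that block scaled by its own Lipschitz constant, and applies the block's proximal (soft-thresholding) map.

```python
import numpy as np

def randomized_bcd(A, b, blocks, lam=0.1, n_iter=200, seed=0):
    """Randomized block-coordinate descent sketch for
    min_x 0.5 * ||A x - b||^2 + lam * ||x||_1 (block-separable nonsmooth part)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(A.shape[1])
    # per-block Lipschitz constants of the partial gradients
    L = [np.linalg.norm(A[:, g], 2) ** 2 for g in blocks]
    r = A @ x - b                       # residual, maintained incrementally
    for _ in range(n_iter):
        i = rng.integers(len(blocks))   # sample a block uniformly at random
        g = blocks[i]
        grad = A[:, g].T @ r            # partial gradient for this block
        new = x[g] - grad / L[i]
        # proximal (soft-thresholding) step for the block's l1 term
        new = np.sign(new) * np.maximum(np.abs(new) - lam / L[i], 0.0)
        r += A[:, g] @ (new - x[g])     # cheap residual update
        x[g] = new
    return x
```

Maintaining the residual incrementally keeps the per-iteration cost proportional to the block size rather than the full problem dimension, which is the usual motivation for block-coordinate methods on large composite problems.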
References
Journal Article (DOI)
Model selection and estimation in regression with grouped variables
Ming Yuan, Yi Lin, +1 more
TL;DR: In this paper, instead of selecting factors by stepwise backward elimination, the authors focus on the accuracy of estimation and consider extensions of the lasso, the LARS algorithm and the non-negative garrotte for factor selection.
Journal Article (DOI)
The group lasso for logistic regression
TL;DR: An efficient algorithm is presented that is especially suitable for high-dimensional problems and can also be applied to generalized linear models to solve the corresponding convex optimization problem.
Posted Content
Regularized Multivariate Regression for Identifying Master Predictors with Application to Integrative Genomics Study of Breast Cancer
TL;DR: The proposed method, remMap (REgularized Multivariate regression for identifying MAster Predictors), fits multivariate response regression models under the high-dimension-low-sample-size setting and is applied to a breast cancer study in which genome-wide RNA transcript levels and DNA copy numbers were measured for 172 tumor samples.
Journal Article (DOI)
Regularized Multivariate Regression for Identifying Master Predictors with Application to Integrative Genomics Study of Breast Cancer
TL;DR: In this article, the authors propose a new method, remMap (REgularized Multivariate regression for identifying MAster Predictors), for fitting multivariate response regression models under the high-dimension-low-sample-size setting.