Open Access Proceedings Article
Efficient L1 regularized logistic regression
Su-In Lee, Honglak Lee, Pieter Abbeel, Andrew Y. Ng
pp. 401–408
TL;DR: Theoretical results show that the proposed efficient algorithm for L1 regularized logistic regression is guaranteed to converge to the global optimum, and experiments show that it significantly outperforms standard algorithms for solving convex optimization problems.
Abstract: L1 regularized logistic regression is now a workhorse of machine learning: it is widely used for many classification problems, particularly ones with many features. L1 regularized logistic regression requires solving a convex optimization problem. However, standard algorithms for solving convex optimization problems do not scale well enough to handle the large datasets encountered in many practical settings. In this paper, we propose an efficient algorithm for L1 regularized logistic regression. Our algorithm iteratively approximates the objective function by a quadratic approximation at the current point, while maintaining the L1 constraint. In each iteration, it uses the efficient LARS (Least Angle Regression) algorithm to solve the resulting L1 constrained quadratic optimization problem. Our theoretical results show that our algorithm is guaranteed to converge to the global optimum. Our experiments show that our algorithm significantly outperforms standard algorithms for solving convex optimization problems. Moreover, our algorithm outperforms four previously published algorithms that were specifically designed to solve the L1 regularized logistic regression problem.
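The outer loop described in the abstract (a quadratic approximation of the log-loss at the current point, followed by an L1-penalized quadratic solve) can be sketched as below. This is a minimal illustration under stated assumptions, not the authors' implementation: it uses the penalized (Lagrangian) form of the L1 term and an ISTA inner solver in place of the paper's LARS step, and the function names are made up.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def soft_threshold(v, thr):
    # proximal operator of the L1 norm
    return np.sign(v) * np.maximum(np.abs(v) - thr, 0.0)

def l1_logreg_irls(X, y, lam, outer_iters=30, inner_iters=200):
    """IRLS-style loop for L1-penalized logistic regression (illustrative sketch).

    Each outer step replaces the log-loss by its quadratic (IRLS) approximation;
    the resulting L1-penalized weighted least-squares subproblem is solved here
    by ISTA, whereas the paper solves it with LARS.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(outer_iters):
        eta = X @ w
        p = sigmoid(eta)
        s = np.clip(p * (1 - p), 1e-6, None)   # IRLS weights
        z = eta + (y - p) / s                  # working response
        # inner problem: min_w 0.5 * sum_i s_i (z_i - x_i.w)^2 + lam * ||w||_1
        Xs = X * s[:, None]
        L = np.linalg.norm(Xs.T @ X, 2)        # Lipschitz constant of the gradient
        for _ in range(inner_iters):
            grad = Xs.T @ (X @ w - z)
            w = soft_threshold(w - grad / L, lam / L)
    return w
```

On a small synthetic problem with two informative features, the recovered weights pick up the correct signs while the L1 penalty shrinks the noise coordinates.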
Citations
Journal Article
On Ising models and algorithms for the construction of symptom networks in psychopathological research.
Michael J. Brusco, Douglas Steinley, Michaela Hoffman, Clintin P. Davis-Stober, Stanley Wasserman +4 more
TL;DR: This article provides a careful assessment of the conditions that underlie the Ising model, as well as specific limitations associated with the eLasso estimation algorithm, which leads to serious concerns regarding the implementation of eLasso in psychopathological research.
Proceedings Article
Fast Implementation of ℓ1 Regularized Learning Algorithms Using Gradient Descent Methods
TL;DR: It is demonstrated that l1 regularized learning problems can be easily solved by using gradient-descent techniques, and that the algorithm performs similarly or even better than other advanced algorithms in terms of computational efficiency and memory usage.
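As a toy illustration of the gradient-descent idea in this TL;DR (not the cited paper's specific method, and with squared loss standing in for a generic smooth loss), a plain subgradient step for an l1-regularized objective looks like:

```python
import numpy as np

def l1_subgradient_descent(X, y, lam, iters=3000, step0=1.0):
    """Subgradient descent for min_w 0.5*||y - Xw||^2 + lam*||w||_1 (sketch)."""
    L = np.linalg.norm(X.T @ X, 2)   # scale steps by the smooth part's Lipschitz constant
    w = np.zeros(X.shape[1])
    for k in range(1, iters + 1):
        g = X.T @ (X @ w - y) + lam * np.sign(w)   # sign(0) = 0 is a valid subgradient
        w -= (step0 / (L * np.sqrt(k))) * g        # diminishing step size
    return w
```

With a diminishing step size the iterates approach the minimizer, though more slowly than proximal methods that handle the L1 term exactly.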
L1-norm sparse Bayesian learning: theory and applications
Daniel D. Lee, Yuanqing Lin +1 more
TL;DR: A theory of l1-norm sparse Bayesian learning is established, which is able to accurately resolve the true sparseness in solutions even in very noisy data, and it provides better performance than the conventional uniform l1-norm regularization and l2-norm Bayesian sparse learning.
Proceedings Article
A principled approach to remove false alarms by modelling the context of a face detector.
TL;DR: This work proposes a model to enhance a given face classifier, by discriminating false detections (sub-windows) from true detections using the contextual information, and investigates the detection distribution around some sub-window from which features from every possible axis combination are computed.
Proceedings Article
Extracting, Ranking, and Evaluating Quality Features of Web Services through User Review Sentiment Analysis
TL;DR: This paper proposes a novel approach to extracting domain-related QoS features, ranking those features based on their interestingness, and evaluating the value of these features through sentiment analysis on user reviews, using natural language processing techniques and machine learning approaches.
References
Journal Article
Regression Shrinkage and Selection via the Lasso
TL;DR: A new method for estimation in linear models called the lasso, which minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant, is proposed.
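The constrained form stated in this TL;DR (minimize the residual sum of squares subject to an L1 budget on the coefficients) can be prototyped with projected gradient descent. This is an illustrative sketch, not the lasso paper's own algorithm; the L1-ball projection uses the standard sort-based recipe.

```python
import numpy as np

def project_l1_ball(v, t):
    """Euclidean projection of v onto the L1 ball {w : ||w||_1 <= t}."""
    if np.abs(v).sum() <= t:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]               # magnitudes, descending
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > (css - t))[0][-1]
    theta = (css[rho] - t) / (rho + 1.0)       # shrinkage threshold
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def lasso_projected_gradient(X, y, t, iters=500):
    # minimize 0.5 * ||y - Xw||^2 subject to ||w||_1 <= t
    L = np.linalg.norm(X.T @ X, 2)             # gradient Lipschitz constant
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        w = project_l1_ball(w - (X.T @ (X @ w - y)) / L, t)
    return w
```

Every iterate satisfies the L1 constraint exactly, which is the defining feature of the lasso's constrained formulation.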
Book
Convex Optimization
Stephen Boyd, Lieven Vandenberghe +1 more
TL;DR: This book gives a comprehensive introduction to convex optimization, focusing not on the theory of the optimization problems themselves but on recognizing convex optimization problems and then finding the most appropriate technique for solving them.
Book
Generalized Linear Models
Peter McCullagh, John A. Nelder +1 more
TL;DR: In this paper, a generalization of the analysis of variance is given for these models using log-likelihoods, illustrated by examples relating to four distributions: the Normal, Binomial (probit analysis, etc.), Poisson (contingency tables), and gamma (variance components).
Journal Article
Generalized Linear Models
TL;DR: This is the first book on generalized linear models written by authors not mostly associated with the biological sciences, and it is thoroughly enjoyable to read.
Related Papers (5)
Least angle regression
Bradley Efron, Trevor Hastie, Iain M. Johnstone, Robert Tibshirani, Hemant Ishwaran, Keith Knight, Jean-Michel Loubes, Pascal Massart, David Madigan, Greg Ridgeway, Saharon Rosset, Ji Zhu, Robert A. Stine, Berwin A. Turlach, Sanford Weisberg +19 more