Open Access Proceedings Article

Efficient L1 regularized logistic regression

TLDR
Theoretical results show that the proposed efficient algorithm for L1 regularized logistic regression is guaranteed to converge to the global optimum, and experiments show that it significantly outperforms standard algorithms for solving convex optimization problems.
Abstract
L1 regularized logistic regression is now a workhorse of machine learning: it is widely used for many classification problems, particularly ones with many features. L1 regularized logistic regression requires solving a convex optimization problem. However, standard algorithms for solving convex optimization problems do not scale well enough to handle the large datasets encountered in many practical settings. In this paper, we propose an efficient algorithm for L1 regularized logistic regression. Our algorithm iteratively approximates the objective function by a quadratic approximation at the current point, while maintaining the L1 constraint. In each iteration, it uses the efficient LARS (Least Angle Regression) algorithm to solve the resulting L1 constrained quadratic optimization problem. Our theoretical results show that our algorithm is guaranteed to converge to the global optimum. Our experiments show that our algorithm significantly outperforms standard algorithms for solving convex optimization problems. Moreover, our algorithm outperforms four previously published algorithms that were specifically designed to solve the L1 regularized logistic regression problem.
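The abstract describes the algorithm's two-level structure: an outer loop that replaces the logistic log-likelihood with a weighted least-squares (quadratic) approximation at the current point, and an inner solver for the resulting L1 quadratic subproblem. The sketch below, a hedged illustration rather than the authors' implementation, follows that structure but substitutes scikit-learn's penalized `Lasso` for the L1-constrained LARS step used in the paper; the function name `irls_l1_logreg` and all parameter values are assumptions for the example.

```python
# Sketch of an IRLS-style solver for L1-penalized logistic regression,
# in the spirit of the paper's outer/inner structure. Each outer step
# forms the standard IRLS quadratic approximation of the log-likelihood,
# then solves the L1 least-squares subproblem with an off-the-shelf
# lasso solver (the paper instead applies LARS to the constrained form).
import numpy as np
from sklearn.linear_model import Lasso

def irls_l1_logreg(X, y, alpha=0.01, n_iter=20):
    """y in {0, 1}; returns the coefficient vector w."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        eta = X @ w
        p = 1.0 / (1.0 + np.exp(-eta))          # predicted probabilities
        s = np.clip(p * (1.0 - p), 1e-6, None)  # IRLS weights
        z = eta + (y - p) / s                   # working response
        # Weighted least squares becomes ordinary least squares
        # after rescaling rows by sqrt(weight).
        sw = np.sqrt(s)
        Xt, zt = X * sw[:, None], z * sw
        sub = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        w = sub.fit(Xt, zt).coef_
    return w

# Toy problem: only the first two of ten features carry signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
true_w = np.array([1.5, -1.5] + [0.0] * 8)
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-(X @ true_w)))).astype(float)
w = irls_l1_logreg(X, y)
print(np.round(w, 2))
```

The L1 term drives most of the irrelevant coefficients toward (or exactly to) zero, which is the sparsity behavior that motivates L1 regularization in the first place.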



Citations
Dissertation

Alternating Optimization: Constrained Problems, Adversarial Networks, and Robust Models

Zheng Xu
TL;DR: This dissertation focuses on machine learning problems that can be formulated as minimax problems in training, and studies alternating optimization methods that serve as fast, scalable, stable, and automated solvers, including adaptive ADMM (AADMM), a fully automated solver that achieves fast practical convergence by adapting the only free parameter in ADMM.
Posted Content

FWDA: a Fast Wishart Discriminant Analysis with its Application to Electronic Health Records Data Classification

TL;DR: FWDA (Fast Wishart Discriminant Analysis) is a novel classifier that makes predictions in an ensemble way and outperforms state-of-the-art algorithms by a large margin on a large-scale EHR dataset.
Journal Article

Quality 4.0 – an evolution of Six Sigma DMAIC

TL;DR: In this paper, Quality 4.0-based innovation is guided by IADLPR2 (Identify, Acsensorize, Discover, Learn, Predict, Redesign, and Relearn), an ad hoc seven-step problem-solving approach.
Proceedings Article

Synthesis of the Perceptionally Linear Color Space Using Machine Learning Methods

TL;DR: An experiment was conducted to create a color model that closely matches human color perception, taking into account the known parameters and characteristics of the visual analyzer.
Journal Article

An effective procedure for feature subset selection in logistic regression based on information criteria

TL;DR: In this paper, a new approach combining mixed-integer programming and decomposition techniques is proposed to solve the problem of best subset selection in logistic regression; it accommodates formulations of the problem that adopt information criteria such as AIC or BIC as goodness-of-fit measures.
References
Journal Article

Regression Shrinkage and Selection via the Lasso

TL;DR: A new method for estimation in linear models, the lasso, is proposed; it minimizes the residual sum of squares subject to the sum of the absolute values of the coefficients being less than a constant.
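The summary above describes the constrained form of the lasso; a minimal sketch of the same idea, using scikit-learn's penalized (Lagrangian) form rather than the explicit constraint, assuming an illustrative toy dataset:

```python
# Lasso on a toy regression problem where only the first of five
# features carries signal; the L1 penalty zeroes out the rest.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=100)  # only feature 0 matters

model = Lasso(alpha=0.1).fit(X, y)
print(model.coef_)  # irrelevant coefficients are driven to (near) zero
```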
Book

Convex Optimization

TL;DR: This book gives a comprehensive introduction to convex optimization, with a focus not on the optimization problems themselves but on recognizing convex optimization problems and then finding the most appropriate technique for solving them.
Book

Generalized Linear Models

TL;DR: In this paper, a generalization of the analysis of variance is given for generalized linear models using log-likelihoods, illustrated by examples relating to four distributions: the Normal, binomial (probit analysis, etc.), Poisson (contingency tables), and gamma (variance components).
Journal Article

Generalized Linear Models

Eric R. Ziegel
01 Aug 2002
TL;DR: This is the first book on generalized linear models written by authors not mostly associated with the biological sciences, and it is thoroughly enjoyable to read.