Topic

Convex optimization

About: Convex optimization is a research topic. Over the lifetime, 24,906 publications have been published within this topic, receiving 908,795 citations. The topic is also known as: convex optimisation.


Papers
Proceedings ArticleDOI
18 Oct 2014
TL;DR: In this article, the authors provide new algorithms and matching lower bounds for differentially private convex empirical risk minimization assuming only that each data point's contribution to the loss function is Lipschitz and that the domain of optimization is bounded.
Abstract: Convex empirical risk minimization is a basic tool in machine learning and statistics. We provide new algorithms and matching lower bounds for differentially private convex empirical risk minimization assuming only that each data point's contribution to the loss function is Lipschitz and that the domain of optimization is bounded. We provide a separate set of algorithms and matching lower bounds for the setting in which the loss functions are known to also be strongly convex. Our algorithms run in polynomial time, and in some cases even match the optimal non-private running time (as measured by oracle complexity). We give separate algorithms (and lower bounds) for (ε, 0)- and (ε, δ)-differential privacy; perhaps surprisingly, the techniques used for designing optimal algorithms in the two cases are completely different. Our lower bounds apply even to very simple, smooth function families, such as linear and quadratic functions. This implies that algorithms from previous work can be used to obtain optimal error rates, under the additional assumption that each data point's contribution to the loss function is smooth. We show that simple approaches to smoothing arbitrary loss functions (in order to apply previous techniques) do not yield optimal error rates. In particular, optimal algorithms were not previously known for problems such as training support vector machines and the high-dimensional median.
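The paper's optimal algorithms are not reproduced here, but the setting is easy to illustrate. Below is a minimal noisy projected-gradient sketch of (ε, δ)-private convex ERM under the same two assumptions (Lipschitz per-example losses, bounded domain); the logistic loss, step size, and noise calibration are illustrative choices, not the authors' algorithm.

```python
import numpy as np

def dp_sgd_erm(X, y, epsilon, delta, T=500, L=1.0, R=1.0, seed=0):
    """Noisy projected gradient descent for convex ERM (a sketch).

    Per-example gradients are clipped to norm L (the Lipschitz bound),
    Gaussian noise calibrated to that bound is added, and iterates are
    projected onto the ball of radius R (the bounded domain). Logistic
    loss is used for illustration; hyperparameters are not tuned.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Gaussian-mechanism noise scale over T adaptive steps
    # (simple advanced-composition-style calibration; illustrative only).
    sigma = L * np.sqrt(8 * T * np.log(1 / delta)) / (n * epsilon)
    w = np.zeros(d)
    for t in range(1, T + 1):
        margins = y * (X @ w)
        # Per-example logistic-loss gradients, clipped to norm <= L.
        g = -(y / (1 + np.exp(margins)))[:, None] * X
        scale = np.maximum(np.linalg.norm(g, axis=1) / L, 1.0)
        g_bar = (g / scale[:, None]).mean(axis=0)
        w -= (R / (L * np.sqrt(t))) * (g_bar + rng.normal(0.0, sigma, d))
        # Projection keeps the iterate inside the bounded domain.
        nrm = np.linalg.norm(w)
        if nrm > R:
            w *= R / nrm
    return w
```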

587 citations

Journal ArticleDOI
TL;DR: In this paper, the adaptive Lasso estimator is proposed for Cox's proportional hazards model, which is based on a penalized log partial likelihood with the adaptively weighted L 1 penalty on regression coefficients.
Abstract: We investigate the variable selection problem for Cox's proportional hazards model, and propose a unified model selection and estimation procedure with desired theoretical properties and computational convenience. The new method is based on a penalized log partial likelihood with the adaptively weighted L1 penalty on regression coefficients, providing what we call the adaptive Lasso estimator. The method incorporates different penalties for different coefficients: unimportant variables receive larger penalties than important ones, so that important variables tend to be retained in the selection process, whereas unimportant variables are more likely to be dropped. Theoretical properties, such as consistency and rate of convergence of the estimator, are studied. We also show that, with proper choice of regularization parameters, the proposed estimator has the oracle properties. The convex optimization nature of the method leads to an efficient algorithm. Both simulated and real examples show that the method performs competitively.
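As a rough illustration of the adaptively weighted L1 penalty, the sketch below implements the adaptive lasso on a plain linear model via the standard column-rescaling reduction to an ordinary lasso (the Cox partial likelihood itself needs survival-analysis tooling, which is omitted here). The pilot estimator and the weight exponent gamma are conventional choices, not taken from the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

def adaptive_lasso(X, y, alpha=0.1, gamma=1.0):
    """Adaptive lasso via the column-rescaling trick (a sketch).

    Weights w_j = 1 / |beta_pilot_j|^gamma penalize coefficients that
    a pilot fit deems unimportant more heavily, so they are more
    likely to be dropped while important ones are retained.
    """
    pilot = LinearRegression().fit(X, y)
    w = 1.0 / (np.abs(pilot.coef_) ** gamma + 1e-8)
    # min ||y - Xb||^2 + alpha * sum_j w_j |b_j| is equivalent to a
    # standard lasso on rescaled columns X_j / w_j.
    X_scaled = X / w
    fit = Lasso(alpha=alpha).fit(X_scaled, y)
    return fit.coef_ / w  # undo the rescaling
```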

587 citations

Proceedings ArticleDOI
13 Jun 2010
TL;DR: This paper reduces this extremely challenging optimization problem to a sequence of convex programs that minimize the sum of ℓ1-norm and nuclear norm of the two component matrices, which can be efficiently solved by scalable convex optimization techniques with guaranteed fast convergence.
Abstract: This paper studies the problem of simultaneously aligning a batch of linearly correlated images despite gross corruption (such as occlusion). Our method seeks an optimal set of image domain transformations such that the matrix of transformed images can be decomposed as the sum of a sparse matrix of errors and a low-rank matrix of recovered aligned images. We reduce this extremely challenging optimization problem to a sequence of convex programs that minimize the sum of ℓ1-norm and nuclear norm of the two component matrices, which can be efficiently solved by scalable convex optimization techniques with guaranteed fast convergence. We verify the efficacy of the proposed robust alignment algorithm with extensive experiments with both controlled and uncontrolled real data, demonstrating higher accuracy and efficiency than existing methods over a wide range of realistic misalignments and corruptions.
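The inner convex program described above is the standard sparse-plus-low-rank (robust PCA) relaxation; a minimal cvxpy sketch of that step is below. It decomposes a matrix of already-transformed images and omits the paper's linearization over the domain transformations; the regularization weight lam is a common default, not necessarily the authors' setting.

```python
import numpy as np
import cvxpy as cp

def sparse_lowrank_decompose(D, lam=None):
    """Convex decomposition step (a sketch).

    Given a matrix D of transformed, vectorized images, recover the
    aligned images A (low rank) and gross errors E (sparse) by solving
        min ||A||_* + lam * ||E||_1   s.t.   A + E = D.
    """
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))  # common default weighting
    A = cp.Variable((m, n))
    E = cp.Variable((m, n))
    prob = cp.Problem(cp.Minimize(cp.normNuc(A) + lam * cp.norm1(E)),
                      [A + E == D])
    prob.solve(solver=cp.SCS)
    return A.value, E.value
```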

582 citations

Proceedings ArticleDOI
14 Jun 2010
TL;DR: Two new algorithms to efficiently solve convex optimization problems, based on the alternating direction method of multipliers, a method from the augmented Lagrangian family, are introduced and are shown to outperform off-the-shelf methods in terms of speed and accuracy.
Abstract: Convex optimization problems are common in hyperspectral unmixing. Examples are the constrained least squares (CLS) problem used to compute the fractional abundances in a linear mixture of known spectra, the constrained basis pursuit (CBP) to find sparse (i.e., with a small number of terms) linear mixtures of spectra selected from large libraries, and the constrained basis pursuit denoising (CBPDN), which generalizes CBP to admit modeling errors. In this paper, we introduce two new algorithms to efficiently solve these optimization problems, based on the alternating direction method of multipliers, a method from the augmented Lagrangian family. The algorithms are termed SUnSAL (sparse unmixing by variable splitting and augmented Lagrangian) and C-SUnSAL (constrained SUnSAL). C-SUnSAL solves the CBP and CBPDN problems, while SUnSAL solves CLS as well as a more general version thereof, called constrained sparse regression (CSR). C-SUnSAL and SUnSAL are shown to outperform off-the-shelf methods in terms of speed and accuracy.
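The following is a minimal ADMM sketch in the spirit of SUnSAL (variable splitting plus an augmented Lagrangian) for the sparse, nonnegativity-constrained regression at its core; it is not the authors' implementation, and the penalty parameters are illustrative.

```python
import numpy as np

def sunsal_sketch(A, y, lam=1e-3, mu=1.0, iters=200):
    """ADMM for  min 0.5*||Ax - y||^2 + lam*||x||_1,  x >= 0  (a sketch).

    A holds library spectra as columns; y is one observed pixel
    spectrum; the returned vector estimates fractional abundances.
    """
    d, n = A.shape
    Aty = A.T @ y
    # Precompute the factor for the quadratic x-subproblem.
    F = np.linalg.inv(A.T @ A + mu * np.eye(n))
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    for _ in range(iters):
        x = F @ (Aty + mu * (z - u))            # quadratic subproblem
        z = np.maximum(x + u - lam / mu, 0.0)   # soft-threshold + x >= 0
        u = u + x - z                           # scaled dual update
    return z
```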

580 citations

Journal ArticleDOI
TL;DR: In this article, the optimal power flow (OPF) problem was reformulated as a semidefinite programming (SDP) model, and an interior point method (IPM) algorithm was developed to solve the SDP.
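As a sketch of the SDP reformulation idea, the snippet below replaces the rank-one voltage outer product V Vᴴ with a PSD matrix variable W, so that quadratic power-flow quantities trace(M W) become linear in W. The matrix M_p, the voltage limits, and the network size are placeholders, not a real OPF model or the paper's IPM solver.

```python
import numpy as np
import cvxpy as cp

n = 3  # buses (toy size)
rng = np.random.default_rng(0)
C = rng.standard_normal((n, n))
M_p = (C + C.T) / 2  # placeholder symmetric "injection cost" matrix

# PSD relaxation: W stands in for the rank-one matrix v v^T of bus
# voltages, with the rank constraint dropped.
W = cp.Variable((n, n), symmetric=True)
constraints = [
    W >> 0,                   # positive semidefiniteness
    cp.diag(W) <= 1.1 ** 2,   # squared voltage magnitude upper bounds
    cp.diag(W) >= 0.9 ** 2,   # squared voltage magnitude lower bounds
]
# Linear objective standing in for generation cost.
prob = cp.Problem(cp.Minimize(cp.trace(M_p @ W)), constraints)
prob.solve(solver=cp.SCS)
```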

576 citations


Network Information
Related Topics (5)
Optimization problem: 96.4K papers, 2.1M citations (94% related)
Robustness (computer science): 94.7K papers, 1.6M citations (89% related)
Linear system: 59.5K papers, 1.4M citations (88% related)
Markov chain: 51.9K papers, 1.3M citations (86% related)
Control theory: 299.6K papers, 3.1M citations (83% related)
Performance Metrics
No. of papers in the topic in previous years
Year    Papers
2023    392
2022    849
2021    1,461
2020    1,673
2019    1,677
2018    1,580