
Showing papers on "Convex optimization published in 2010"


Journal ArticleDOI
TL;DR: It is shown that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum-rank solution can be recovered by solving a convex optimization problem, namely, the minimization of the nuclear norm over the given affine space.
Abstract: The affine rank minimization problem consists of finding a matrix of minimum rank that satisfies a given system of linear equality constraints. Such problems have appeared in the literature of a diverse set of fields including system identification and control, Euclidean embedding, and collaborative filtering. Although specific instances can often be solved with specialized algorithms, the general affine rank minimization problem is NP-hard because it contains vector cardinality minimization as a special case. In this paper, we show that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum-rank solution can be recovered by solving a convex optimization problem, namely, the minimization of the nuclear norm over the given affine space. We present several random ensembles of equations where the restricted isometry property holds with overwhelming probability, provided the codimension of the subspace is sufficiently large. The techniques used in our analysis have strong parallels in the compressed sensing framework. We discuss how affine rank minimization generalizes this preexisting concept and outline a dictionary relating concepts from cardinality minimization to those of rank minimization. We also discuss several algorithmic approaches to minimizing the nuclear norm and illustrate our results with numerical examples.
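
As a hedged illustration of the convex program described above (a sketch using CVXPY and a random Gaussian measurement ensemble; the sizes and names are illustrative, not the paper's):

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, p = 10, 60                                    # n x n unknown, p measurements
X_true = rng.standard_normal((n, 2)) @ rng.standard_normal((2, n))  # rank 2
As = rng.standard_normal((p, n, n))              # random linear measurement maps
b = np.array([np.sum(Ai * X_true) for Ai in As]) # <A_i, X_true> = b_i

X = cp.Variable((n, n))
constraints = [cp.sum(cp.multiply(Ai, X)) == bi for Ai, bi in zip(As, b)]
prob = cp.Problem(cp.Minimize(cp.normNuc(X)), constraints)  # nuclear-norm heuristic
prob.solve()
print("relative error:", np.linalg.norm(X.value - X_true) / np.linalg.norm(X_true))
```

With enough generic measurements relative to the rank, the nuclear-norm minimizer coincides with the minimum-rank solution, which is the recovery phenomenon the paper establishes.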

3,432 citations


Journal ArticleDOI
TL;DR: This paper shows that, under certain incoherence assumptions on the singular vectors of the matrix, recovery is possible by solving a convenient convex program as soon as the number of entries is on the order of the information theoretic limit (up to logarithmic factors).
Abstract: This paper is concerned with the problem of recovering an unknown matrix from a small fraction of its entries. This is known as the matrix completion problem, and comes up in a great number of applications, including the famous Netflix Prize and other similar questions in collaborative filtering. In general, accurate recovery of a matrix from a small number of entries is impossible, but the knowledge that the unknown matrix has low rank radically changes this premise, making the search for solutions meaningful. This paper presents optimality results quantifying the minimum number of entries needed to recover a matrix of rank r exactly by any method whatsoever (the information theoretic limit). More importantly, the paper shows that, under certain incoherence assumptions on the singular vectors of the matrix, recovery is possible by solving a convenient convex program as soon as the number of entries is on the order of the information theoretic limit (up to logarithmic factors). This convex program simply finds, among all matrices consistent with the observed entries, the one with minimum nuclear norm. As an example, we show that on the order of nr log(n) samples are needed to recover a random n x n matrix of rank r by any method, and that nuclear norm minimization succeeds as soon as the number of entries is of the form nr polylog(n).
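
A minimal sketch of that convex program (CVXPY; the mask, sizes, and sampling rate are illustrative assumptions):

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)
n, r = 20, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-r target
W = (rng.random((n, n)) < 0.6).astype(float)                   # 1 = entry observed

X = cp.Variable((n, n))
prob = cp.Problem(cp.Minimize(cp.normNuc(X)),         # minimum nuclear norm...
                  [cp.multiply(W, X) == W * M])       # ...consistent with samples
prob.solve()
print("relative error:", np.linalg.norm(X.value - M) / np.linalg.norm(M))
```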

2,241 citations


Journal ArticleDOI
26 Apr 2010
TL;DR: This paper surveys the novel literature on matrix completion and introduces novel results showing that matrix completion is provably accurate even when the few observed entries are corrupted with a small amount of noise, and shows that, in practice, nuclear-norm minimization accurately fills in the many missing entries of large low-rank matrices from just a few noisy samples.
Abstract: On the heels of compressed sensing, a new field has very recently emerged. This field addresses a broad range of problems of significant practical interest, namely, the recovery of a data matrix from what appears to be incomplete, and perhaps even corrupted, information. In its simplest form, the problem is to recover a matrix from a small sample of its entries. It comes up in many areas of science and engineering, including collaborative filtering, machine learning, control, remote sensing, and computer vision, to name a few. This paper surveys the novel literature on matrix completion, which shows that under some suitable conditions, one can recover an unknown low-rank matrix from a nearly minimal set of entries by solving a simple convex optimization problem, namely, nuclear-norm minimization subject to data constraints. Further, this paper introduces novel results showing that matrix completion is provably accurate even when the few observed entries are corrupted with a small amount of noise. A typical result is that one can recover an unknown n x n matrix of low rank r from just about nr log²(n) noisy samples with an error that is proportional to the noise level. We present numerical results that complement our quantitative analysis and show that, in practice, nuclear-norm minimization accurately fills in the many missing entries of large low-rank matrices from just a few noisy samples. Some analogies between matrix completion and compressed sensing are discussed throughout.
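
Sketched in the same CVXPY style (an assumption, not the paper's code), the noisy variant simply relaxes exact agreement on the observed entries to a Frobenius-norm tolerance at the noise level:

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(2)
n, r, sigma = 20, 2, 0.05
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
W = (rng.random((n, n)) < 0.6).astype(float)          # sampling mask
Y = W * (M + sigma * rng.standard_normal((n, n)))     # noisy observed entries

X = cp.Variable((n, n))
delta = sigma * np.sqrt(W.sum())                      # tolerance ~ noise level
prob = cp.Problem(cp.Minimize(cp.normNuc(X)),
                  [cp.norm(cp.multiply(W, X) - Y, "fro") <= delta])
prob.solve()
print("relative error:", np.linalg.norm(X.value - M) / np.linalg.norm(M))
```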

1,623 citations


Journal ArticleDOI
TL;DR: A collection of methods for improving the speed of MPC, using online optimization, which can compute the control action on the order of 100 times faster than a method that uses a generic optimizer.
Abstract: A widely recognized shortcoming of model predictive control (MPC) is that it can usually only be used in applications with slow dynamics, where the sample time is measured in seconds or minutes. A well-known technique for implementing fast MPC is to compute the entire control law offline, in which case the online controller can be implemented as a lookup table. This method works well for systems with small state and input dimensions (say, no more than five), few constraints, and short time horizons. In this paper, we describe a collection of methods for improving the speed of MPC, using online optimization. These custom methods, which exploit the particular structure of the MPC problem, can compute the control action on the order of 100 times faster than a method that uses a generic optimizer. As an example, our method computes the control actions for a problem with 12 states, 3 controls, and horizon of 30 time steps (which entails solving a quadratic program with 450 variables and 1284 constraints) in around 5 ms, allowing MPC to be carried out at 200 Hz.
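
For concreteness, here is a generic modeling-layer formulation of an MPC problem with the dimensions quoted above (CVXPY, with made-up stable dynamics); the paper's structured custom solvers attack exactly this kind of QP roughly 100 times faster than handing it to a generic solver like this:

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(3)
nx, nu, T = 12, 3, 30                             # states, controls, horizon
A = rng.standard_normal((nx, nx))
A *= 0.95 / np.max(np.abs(np.linalg.eigvals(A)))  # scale to a stable system
B = rng.standard_normal((nx, nu))
x0 = rng.standard_normal(nx)

x, u = cp.Variable((nx, T + 1)), cp.Variable((nu, T))
cost, constr = 0, [x[:, 0] == x0]
for t in range(T):
    cost += cp.sum_squares(x[:, t]) + cp.sum_squares(u[:, t])
    constr += [x[:, t + 1] == A @ x[:, t] + B @ u[:, t],   # dynamics
               cp.norm(u[:, t], "inf") <= 1]               # input limits
cp.Problem(cp.Minimize(cost), constr).solve()
print("first control move:", u.value[:, 0])
```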

1,369 citations


Journal ArticleDOI
TL;DR: A new fast algorithm is proposed for solving one of the standard formulations of image restoration and reconstruction: an unconstrained optimization problem whose objective includes an l2 data-fidelity term and a nonsmooth regularizer.
Abstract: We propose a new fast algorithm for solving one of the standard formulations of image restoration and reconstruction which consists of an unconstrained optimization problem where the objective includes an l2 data-fidelity term and a nonsmooth regularizer. This formulation allows either wavelet-based (with orthogonal or frame-based representations) or total-variation regularization. Our approach is based on a variable splitting to obtain an equivalent constrained optimization formulation, which is then addressed with an augmented Lagrangian method. The proposed algorithm is an instance of the so-called alternating direction method of multipliers, for which convergence has been proved. Experiments on a set of image restoration and reconstruction benchmark problems show that the proposed algorithm is faster than the current state-of-the-art methods.
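
A bare-bones numpy sketch of the variable-splitting/ADMM idea, specialized to an l1 regularizer so the proximal step is a closed-form soft threshold (the paper's frame-based and total-variation cases swap in other proximal maps); all names and parameters here are illustrative:

```python
import numpy as np

def admm_l1(A, y, lam, rho=1.0, iters=200):
    """Solve min_x 0.5*||A x - y||^2 + lam*||x||_1 by splitting x = z."""
    n = A.shape[1]
    x, z, w = np.zeros(n), np.zeros(n), np.zeros(n)
    AtA, Aty = A.T @ A, A.T @ y
    L = np.linalg.cholesky(AtA + rho * np.eye(n))   # factor once, reuse
    for _ in range(iters):
        rhs = Aty + rho * (z - w)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # quadratic step
        v = x + w
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # shrinkage
        w += x - z                                   # dual update
    return z

A = np.random.default_rng(4).standard_normal((30, 80))
x_hat = admm_l1(A, A[:, :5].sum(axis=1), lam=0.1)    # recover a 5-sparse signal
```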

1,211 citations


Journal ArticleDOI
TL;DR: These methods are specialized for quantum states that are fairly pure, offer a significant performance improvement on large quantum systems, and are able to reconstruct an unknown density matrix of dimension d and rank r using O(rd log² d) measurement settings, compared to standard methods that require d² settings.
Abstract: We establish methods for quantum state tomography based on compressed sensing. These methods are specialized for quantum states that are fairly pure, and they offer a significant performance improvement on large quantum systems. In particular, they are able to reconstruct an unknown density matrix of dimension d and rank r using O(rd log² d) measurement settings, compared to standard methods that require d² settings. Our methods have several features that make them amenable to experimental implementation: they require only simple Pauli measurements, use fast convex optimization, are stable against noise, and can be applied to states that are only approximately low rank. The acquired data can be used to certify that the state is indeed close to pure, so no a priori assumptions are needed.

1,084 citations


Journal ArticleDOI
TL;DR: This paper considers a distributed multi-agent network system where the goal is to minimize a sum of convex objective functions of the agents subject to a common convex constraint set, and investigates the effects of stochastic subgradient errors on the convergence of the algorithm.
Abstract: We consider a distributed multi-agent network system where the goal is to minimize a sum of convex objective functions of the agents subject to a common convex constraint set. Each agent maintains an iterate sequence and communicates the iterates to its neighbors. Then, each agent combines weighted averages of the received iterates with its own iterate, and adjusts the iterate by using subgradient information (known with stochastic errors) of its own function and by projecting onto the constraint set.
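
A toy numpy sketch of the iteration just described (the mixing weights, constraint set, and step-size rule below are my assumptions, chosen for simplicity):

```python
import numpy as np

def project_ball(x, radius=1.0):                  # example constraint set
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else radius * x / nrm

def distributed_subgradient(subgrads, W, x0, steps=500):
    """subgrads: per-agent subgradient oracles; W: doubly stochastic mixing
    matrix (W[i, j] > 0 iff agent j is a neighbor of agent i)."""
    X = np.array(x0, dtype=float)                 # one row per agent
    for k in range(1, steps + 1):
        mixed = W @ X                             # combine neighbors' iterates
        step = 1.0 / np.sqrt(k)                   # diminishing step size
        X = np.stack([project_ball(mixed[i] - step * g(mixed[i]))
                      for i, g in enumerate(subgrads)])
    return X.mean(axis=0)

# two agents minimizing |x - 1| + |x + 1| over the unit ball (any x in [-1, 1] is optimal)
W = np.full((2, 2), 0.5)
print(distributed_subgradient([lambda x: np.sign(x - 1), lambda x: np.sign(x + 1)],
                              W, np.zeros((2, 1))))
```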

857 citations


Journal ArticleDOI
TL;DR: The regularized orthogonal matching pursuit (ROMP) algorithm as mentioned in this paper combines the speed and ease of implementation of the greedy methods with the strong guarantees of the convex programming methods.
Abstract: We demonstrate a simple greedy algorithm that can reliably recover a vector v ∈ ℝ^d from incomplete and inaccurate measurements x = Φv + e. Here, Φ is an N × d measurement matrix with N ≪ d, and e is an error vector.
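
For orientation, here is the plain orthogonal matching pursuit loop that ROMP refines (ROMP adds a regularized coefficient-grouping rule when selecting coordinates, omitted here); a sketch, not the paper's algorithm verbatim:

```python
import numpy as np

def omp(Phi, x, sparsity):
    """Greedily recover an (approximately) sparse v from x ~ Phi @ v."""
    residual, support = x.copy(), []
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(Phi.T @ residual)))   # most correlated column
        support.append(j)
        v_s, *_ = np.linalg.lstsq(Phi[:, support], x, rcond=None)
        residual = x - Phi[:, support] @ v_s           # re-fit on the support
    v = np.zeros(Phi.shape[1])
    v[support] = v_s
    return v
```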

743 citations


Journal ArticleDOI
TL;DR: This work develops and analyzes M-estimation methods for divergence functionals and the likelihood ratios of two probability distributions based on a nonasymptotic variational characterization of f-divergences, which allows the problem of estimating divergences to be tackled via convex empirical risk optimization.
Abstract: We develop and analyze M-estimation methods for divergence functionals and the likelihood ratios of two probability distributions. Our method is based on a nonasymptotic variational characterization of f-divergences, which allows the problem of estimating divergences to be tackled via convex empirical risk optimization. The resulting estimators are simple to implement, requiring only the solution of standard convex programs. We present an analysis of consistency and convergence for these estimators. Given conditions only on the ratios of densities, we show that our estimators can achieve optimal minimax rates for the likelihood ratio and the divergence functionals in certain regimes. We derive an efficient optimization algorithm for computing our estimates, and illustrate their convergence behavior and practical viability by simulations.

729 citations


Journal ArticleDOI
TL;DR: This work generalizes the primal-dual hybrid gradient (PDHG) algorithm to a broader class of convex optimization problems, and surveys several closely related methods and explains the connections to PDHG.
Abstract: We generalize the primal-dual hybrid gradient (PDHG) algorithm proposed by Zhu and Chan in [An Efficient Primal-Dual Hybrid Gradient Algorithm for Total Variation Image Restoration, CAM Report 08-34, UCLA, Los Angeles, CA, 2008] to a broader class of convex optimization problems. In addition, we survey several closely related methods and explain the connections to PDHG. We point out convergence results for a modified version of PDHG that has a similarly good empirical convergence rate for total variation (TV) minimization problems. We also prove a convergence result for PDHG applied to TV denoising with some restrictions on the PDHG step size parameters. We show how to interpret this special case as a projected averaged gradient method applied to the dual functional. We discuss the range of parameters for which these methods can be shown to converge. We also present some numerical comparisons of these algorithms applied to TV denoising, TV deblurring, and constrained ℓ1 minimization problems.
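
A minimal PDHG iteration (without extrapolation) for 1D TV denoising, min_x ½||x − y||² + λ||Dx||₁, sketched in numpy; the dense difference matrix and the step sizes are illustrative choices, not the paper's recommendations:

```python
import numpy as np

def pdhg_tv1d(y, lam, tau=0.25, sigma=0.25, iters=500):
    n = len(y)
    D = np.diff(np.eye(n), axis=0)             # forward differences, (n-1) x n
    x, p = y.copy(), np.zeros(n - 1)
    for _ in range(iters):
        p = np.clip(p + sigma * (D @ x), -lam, lam)      # dual: project onto box
        x = (x - tau * (D.T @ p) + tau * y) / (1 + tau)  # primal: prox of fidelity
    return x

y = np.r_[np.zeros(50), np.ones(50)] + 0.1 * np.random.default_rng(5).standard_normal(100)
x = pdhg_tv1d(y, lam=0.5)                      # piecewise-constant estimate
```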

Journal ArticleDOI
TL;DR: A modular framework is presented to obtain an approximate solution to the problem that is distributionally robust and more flexible than the standard technique of using linear rules.
Abstract: In this paper we focus on a linear optimization problem with uncertainties, having expectations in the objective and in the set of constraints. We present a modular framework to obtain an approximate solution to the problem that is distributionally robust and more flexible than the standard technique of using linear rules. Our framework first affinely extends the set of primitive uncertainties to generate new linear decision rules of larger dimension, which are therefore more flexible. Next, we develop new piecewise-linear decision rules that allow a more flexible reformulation of the original problem. The reformulated problem will generally contain terms with expectations on the positive parts of the recourse variables. Finally, we convert the uncertain linear program into a deterministic convex program by constructing distributionally robust bounds on these expectations. These bounds are constructed by first using different pieces of information on the distribution of the underlying uncertainties to develop separate bounds and then integrating them into a combined bound that is better than each of the individual bounds.

Journal ArticleDOI
TL;DR: This paper proposes the use of the alternating direction method - a classic approach for optimization problems with separable variables - for signal reconstruction from partial Fourier measurements; the resulting algorithm runs very fast (typically in a few seconds on a laptop) because it requires a small number of iterations.
Abstract: Recent compressive sensing results show that it is possible to accurately reconstruct certain compressible signals from relatively few linear measurements via solving nonsmooth convex optimization problems. In this paper, we propose the use of the alternating direction method - a classic approach for optimization problems with separable variables (D. Gabay and B. Mercier, "A dual algorithm for the solution of nonlinear variational problems via finite-element approximations," Computers & Mathematics with Applications, vol. 2, pp. 17-40, 1976; R. Glowinski and A. Marrocco, "Sur l'approximation par éléments finis d'ordre un, et la résolution par pénalisation-dualité d'une classe de problèmes de Dirichlet non linéaires," Rev. Française d'Aut. Inf. Rech. Opér., vol. R-2, pp. 41-76, 1975) - for signal reconstruction from partial Fourier (i.e., incomplete frequency) measurements. Signals are reconstructed as minimizers of the sum of three terms corresponding to total variation, the ℓ1-norm of a certain transform, and least squares data fitting. Our algorithm, called RecPF and published online, runs very fast (typically in a few seconds on a laptop) because it requires a small number of iterations, each involving simple shrinkages and two fast Fourier transforms (or alternatively discrete cosine transforms when measurements are in the corresponding domain). RecPF was compared with two state-of-the-art algorithms on recovering magnetic resonance images, and the results show that it is highly efficient, stable, and robust.

Proceedings ArticleDOI
13 Jun 2010
TL;DR: This paper reduces this extremely challenging optimization problem to a sequence of convex programs that minimize the sum of ℓ1-norm and nuclear norm of the two component matrices, which can be efficiently solved by scalable convex optimization techniques with guaranteed fast convergence.
Abstract: This paper studies the problem of simultaneously aligning a batch of linearly correlated images despite gross corruption (such as occlusion). Our method seeks an optimal set of image domain transformations such that the matrix of transformed images can be decomposed as the sum of a sparse matrix of errors and a low-rank matrix of recovered aligned images. We reduce this extremely challenging optimization problem to a sequence of convex programs that minimize the sum of l1-norm and nuclear norm of the two component matrices, which can be efficiently solved by scalable convex optimization techniques with guaranteed fast convergence. We verify the efficacy of the proposed robust alignment algorithm with extensive experiments with both controlled and uncontrolled real data, demonstrating higher accuracy and efficiency than existing methods over a wide range of realistic misalignments and corruptions.

Proceedings ArticleDOI
14 Jun 2010
TL;DR: Two new algorithms to efficiently solve convex optimization problems, based on the alternating direction method of multipliers, a method from the augmented Lagrangian family, are introduced and are shown to outperform off-the-shelf methods in terms of speed and accuracy.
Abstract: Convex optimization problems are common in hyperspectral unmixing. Examples are the constrained least squares (CLS) problem used to compute the fractional abundances in a linear mixture of known spectra, the constrained basis pursuit (CBP) to find sparse (i.e., with a small number of terms) linear mixtures of spectra, selected from large libraries, and the constrained basis pursuit denoising (CBPDN), which is a generalization of BP to admit modeling errors. In this paper, we introduce two new algorithms to efficiently solve these optimization problems, based on the alternating direction method of multipliers, a method from the augmented Lagrangian family. The algorithms are termed SUnSAL (sparse unmixing by variable splitting and augmented Lagrangian) and C-SUnSAL (constrained SUnSAL). C-SUnSAL solves the CBP and CBPDN problems, while SUnSAL solves CLS as well as a more general version thereof, called constrained sparse regression (CSR). C-SUnSAL and SUnSAL are shown to outperform off-the-shelf methods in terms of speed and accuracy.
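
As a point of reference, the CLS subproblem can be posed directly in a generic modeling layer (a CVXPY sketch with synthetic spectra; SUnSAL's contribution is solving such problems much faster via ADMM):

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(6)
bands, members = 50, 4
E = rng.random((bands, members))                 # known endmember spectra
a_true = np.array([0.5, 0.3, 0.2, 0.0])          # fractional abundances
y = E @ a_true + 0.01 * rng.standard_normal(bands)

a = cp.Variable(members)
prob = cp.Problem(cp.Minimize(cp.sum_squares(E @ a - y)),
                  [a >= 0, cp.sum(a) == 1])      # nonnegative, sum-to-one
prob.solve()
print("estimated abundances:", np.round(a.value, 3))
```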

Journal ArticleDOI
TL;DR: It is demonstrated that convex optimization provides an indispensable set of tools for beamforming, enabling rigorous formulation and effective solution of both long-standing and emerging design problems.
Abstract: In this article, an overview of advanced convex optimization approaches to multisensor beamforming is presented, and connections are drawn between different types of optimization-based beamformers that apply to a broad class of receive, transmit, and network beamformer design problems. It is demonstrated that convex optimization provides an indispensable set of tools for beamforming, enabling rigorous formulation and effective solution of both long-standing and emerging design problems.
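
One concrete instance of the designs surveyed, sketched with synthetic data (the array model, sample covariance, and CVXPY usage are my assumptions): the classical MVDR receive beamformer cast as a second-order cone program.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(7)
n = 8                                            # sensors in a uniform array
a = np.exp(1j * np.pi * np.arange(n) * np.sin(0.3))      # steering vector
snap = rng.standard_normal((n, 200)) + 1j * rng.standard_normal((n, 200))
R = snap @ snap.conj().T / 200 + 1e-3 * np.eye(n)        # sample covariance
L = np.linalg.cholesky(R)                        # R = L L^H

w = cp.Variable(n, complex=True)
prob = cp.Problem(cp.Minimize(cp.norm(L.conj().T @ w)),  # sqrt of output power
                  [a.conj() @ w == 1])           # distortionless response
prob.solve()
print("output power:", prob.value ** 2)
```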

Journal ArticleDOI
TL;DR: It is shown that the approximate convex problem solved at each inner iteration can be cast as a conic quadratic programming problem; hence, large-scale TTD problems can be efficiently solved by the proposed method.
Abstract: We describe a general scheme for solving nonconvex optimization problems, where in each iteration the nonconvex feasible set is approximated by an inner convex approximation. The latter is defined using an upper bound on the nonconvex constraint functions. Under appropriate conditions, a monotone convergence to a KKT point is established. The scheme is applied to truss topology design (TTD) problems, where the nonconvex constraints are associated with bounds on displacements and stresses. It is shown that the approximate convex problem solved at each inner iteration can be cast as a conic quadratic programming problem; hence, large-scale TTD problems can be efficiently solved by the proposed method.

Journal Article
Tong Zhang
TL;DR: A multi-stage convex relaxation scheme for solving problems with non-convex objective functions with sparse regularization is presented and it is shown that the local solution obtained by this procedure is superior to the global solution of the standard L1 convex relaxation for learning sparse targets.
Abstract: We consider learning formulations with non-convex objective functions that often occur in practical applications. There are two approaches to this problem: heuristic methods, such as gradient descent, that only find a local minimum (a drawback of this approach is the lack of a theoretical guarantee showing that the local minimum gives a good solution); and convex relaxation, such as L1-regularization, that solves the problem under some conditions but often leads to a sub-optimal solution in reality. This paper tries to remedy the above gap between theory and practice. In particular, we present a multi-stage convex relaxation scheme for solving problems with non-convex objective functions. For learning formulations with sparse regularization, we analyze the behavior of a specific multi-stage relaxation scheme. Under appropriate conditions, we show that the local solution obtained by this procedure is superior to the global solution of the standard L1 convex relaxation for learning sparse targets.
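
A hedged sketch of one multi-stage scheme in this spirit (a capped-L1 reweighting rule layered on a lasso subproblem, via CVXPY; the threshold and stage count are illustrative):

```python
import cvxpy as cp
import numpy as np

def multistage_l1(A, y, lam, theta=0.1, stages=4):
    n = A.shape[1]
    weights = np.ones(n)                         # stage 1 = standard lasso
    for _ in range(stages):
        x = cp.Variable(n)
        obj = 0.5 * cp.sum_squares(A @ x - y) \
              + lam * cp.sum(cp.multiply(weights, cp.abs(x)))
        cp.Problem(cp.Minimize(obj)).solve()
        weights = (np.abs(x.value) < theta).astype(float)  # stop penalizing large coefficients
    return x.value

rng = np.random.default_rng(8)
A = rng.standard_normal((40, 100))
x_hat = multistage_l1(A, A[:, :3].sum(axis=1), lam=0.1)    # 3-sparse target
```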

DissertationDOI
01 Jan 2010
TL;DR: The principle of maximum causal entropy is introduced, a general technique for applying information theory to decision-theoretic, game-theoretic, and control settings where relevant information is sequentially revealed over time.
Abstract: Predicting human behavior from a small amount of training examples is a challenging machine learning problem. In this thesis, we introduce the principle of maximum causal entropy, a general technique for applying information theory to decision-theoretic, game-theoretic, and control settings where relevant information is sequentially revealed over time. This approach guarantees decision-theoretic performance by matching purposeful measures of behavior (Abbeel & Ng, 2004), and/or enforces game-theoretic rationality constraints (Aumann, 1974), while otherwise being as uncertain as possible, which minimizes worst-case predictive log-loss (Grunwald & Dawid, 2003). We derive probabilistic models for decision, control, and multi-player game settings using this approach. We then develop corresponding algorithms for efficient inference that include relaxations of the Bellman equation (Bellman, 1957), and simple learning algorithms based on convex optimization. We apply the models and algorithms to a number of behavior prediction tasks. Specifically, we present empirical evaluations of the approach in the domains of vehicle route preference modeling using over 100,000 miles of collected taxi driving data, pedestrian motion modeling from weeks of indoor movement data, and robust prediction of game play in stochastic multi-player games.

Proceedings ArticleDOI
13 Jun 2010
TL;DR: A novel method for face recognition from image sets that combines the kernel trick and robust methods that discard input points far from the fitted model, thus handling complex and nonlinear manifolds of face images.
Abstract: We introduce a novel method for face recognition from image sets. In our setting each test and training example is a set of images of an individual's face, not just a single image, so recognition decisions need to be based on comparisons of image sets. Methods for this have two main aspects: the models used to represent the individual image sets; and the similarity metric used to compare the models. Here, we represent images as points in a linear or affine feature space and characterize each image set by a convex geometric region (the affine or convex hull) spanned by its feature points. Set dissimilarity is measured by geometric distances (distances of closest approach) between convex models. To reduce the influence of outliers we use robust methods to discard input points that are far from the fitted model. The kernel trick allows the approach to be extended to implicit feature mappings, thus handling complex and nonlinear manifolds of face images. Experiments on two public face datasets show that our proposed methods outperform a number of existing state-of-the-art ones.
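
The convex-hull dissimilarity described above reduces to a small QP; a CVXPY sketch on synthetic feature points (not the paper's kernelized or robust variants):

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(9)
X = rng.standard_normal((64, 10))                # columns: set-1 feature points
Y = 2.0 + rng.standard_normal((64, 12))          # columns: set-2 feature points

u, v = cp.Variable(X.shape[1]), cp.Variable(Y.shape[1])
prob = cp.Problem(cp.Minimize(cp.norm(X @ u - Y @ v)),   # closest approach
                  [u >= 0, cp.sum(u) == 1,       # point in hull of set 1
                   v >= 0, cp.sum(v) == 1])      # point in hull of set 2
prob.solve()
print("convex-hull distance:", prob.value)
```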

Journal ArticleDOI
TL;DR: A feedback motion-planning algorithm which uses rigorously computed stability regions to build a sparse tree of LQR-stabilized trajectories and proves the property of probabilistic coverage.
Abstract: Advances in the direct computation of Lyapunov functions using convex optimization make it possible to efficiently evaluate regions of attraction for smooth non-linear systems. Here we present a feedback motion-planning algorithm which uses rigorously computed stability regions to build a sparse tree of LQR-stabilized trajectories. The region of attraction of this non-linear feedback policy “probabilistically covers” the entire controllable subset of state space, verifying that all initial conditions that are capable of reaching the goal will reach the goal. We numerically investigate the properties of this systematic non-linear feedback design algorithm on simple non-linear systems, prove the property of probabilistic coverage, and discuss extensions and implementation details of the basic algorithm.

Proceedings ArticleDOI
13 Jun 2010
TL;DR: In this article, a convex program, named Principal Component Pursuit (PCP), is proposed to recover the low-rank matrix from a high-dimensional data matrix despite both small entry-wise noise and gross sparse errors.
Abstract: In this paper, we study the problem of recovering a low-rank matrix (the principal components) from a high-dimensional data matrix despite both small entry-wise noise and gross sparse errors. Recently, it has been shown that a convex program, named Principal Component Pursuit (PCP), can recover the low-rank matrix when the data matrix is corrupted by gross sparse errors. We further prove that the solution to a related convex program (a relaxed PCP) gives an estimate of the low-rank matrix that is simultaneously stable to small entry-wise noise and robust to gross sparse errors. More precisely, our result shows that the proposed convex program recovers the low-rank matrix even though a positive fraction of its entries are arbitrarily corrupted, with an error bound proportional to the noise level. We present simulation results to support our result and demonstrate that the new convex program accurately recovers the principal components (the low-rank matrix) under quite broad conditions. To our knowledge, this is the first result that shows the classical Principal Component Analysis (PCA), optimal for small i.i.d. noise, can be made robust to gross sparse errors; or the first that shows the newly proposed PCP can be made stable to small entry-wise perturbations.
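
A sketch of the relaxed PCP program analyzed above (CVXPY; the weight λ = 1/√n follows the usual PCP convention, and the data are synthetic):

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(10)
n = 15
M = (rng.standard_normal((n, 2)) @ rng.standard_normal((2, n))  # low rank
     + 10.0 * (rng.random((n, n)) < 0.05)                       # gross sparse errors
     + 0.01 * rng.standard_normal((n, n)))                      # small noise

L, S = cp.Variable((n, n)), cp.Variable((n, n))
lam, delta = 1 / np.sqrt(n), 0.01 * n            # tolerance ~ noise level
prob = cp.Problem(cp.Minimize(cp.normNuc(L) + lam * cp.sum(cp.abs(S))),
                  [cp.norm(M - L - S, "fro") <= delta])
prob.solve()
```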

Journal ArticleDOI
TL;DR: It is shown here that time-domain probing of a multipath channel with a random binary sequence, along with utilization of CS reconstruction techniques, can provide significant improvements in estimation accuracy compared to traditional least-squares based linear channel estimation strategies.
Abstract: Compressed sensing (CS) has recently emerged as a powerful signal acquisition paradigm. In essence, CS enables the recovery of high-dimensional sparse signals from relatively few linear observations in the form of projections onto a collection of test vectors. Existing results show that if the entries of the test vectors are independent realizations of certain zero-mean random variables, then with high probability the unknown signals can be recovered by solving a tractable convex optimization. This work extends CS theory to settings where the entries of the test vectors exhibit structured statistical dependencies. It follows that CS can be effectively utilized in linear, time-invariant system identification problems provided the impulse response of the system is (approximately or exactly) sparse. An immediate application is in wireless multipath channel estimation. It is shown here that time-domain probing of a multipath channel with a random binary sequence, along with utilization of CS reconstruction techniques, can provide significant improvements in estimation accuracy compared to traditional least-squares based linear channel estimation strategies. Abstract extensions of the main results are also discussed, where the theory of equitable graph coloring is employed to establish the utility of CS in settings where the test vectors exhibit more general statistical dependencies.

Journal ArticleDOI
TL;DR: This paper proposes an approach to deconvolving Poissonian images based upon the alternating direction method of multipliers (ADMM), which belongs to the family of augmented Lagrangian algorithms.
Abstract: Much research has been devoted to the problem of restoring Poissonian images, namely for medical and astronomical applications. However, the restoration of these images using state-of-the-art regularizers (such as those based upon multiscale representations or total variation) is still an active research area, since the associated optimization problems are quite challenging. In this paper, we propose an approach to deconvolving Poissonian images, which is based upon an alternating direction optimization method. The standard regularization [or maximum a posteriori (MAP)] restoration criterion, which combines the Poisson log-likelihood with a (nonsmooth) convex regularizer (log-prior), leads to hard optimization problems: the log-likelihood is nonquadratic and nonseparable, the regularizer is nonsmooth, and there is a nonnegativity constraint. Using standard convex analysis tools, we present sufficient conditions for existence and uniqueness of solutions of these optimization problems, for several types of regularizers: total-variation, frame-based analysis, and frame-based synthesis. We attack these problems with an instance of the alternating direction method of multipliers (ADMM), which belongs to the family of augmented Lagrangian algorithms. We study sufficient conditions for convergence and show that these are satisfied, either under total-variation or frame-based (analysis and synthesis) regularization. The resulting algorithms are shown to outperform alternative state-of-the-art methods, both in terms of speed and restoration accuracy.

Journal ArticleDOI
TL;DR: In this paper, the authors discuss iterative methods for solving the split feasibility problem in the setting of infinite-dimensional Hilbert spaces, where regularization and iterative algorithms are also introduced to find the minimum norm solution of the SFP.
Abstract: The split feasibility problem (SFP) (Censor and Elfving 1994 Numer. Algorithms 8 221–39) is to find a point x* with the property that x* ∈ C and Ax* ∈ Q, where C and Q are nonempty closed convex subsets of the real Hilbert spaces H1 and H2, respectively, and A is a bounded linear operator from H1 to H2. The SFP models inverse problems arising from phase retrieval (Censor and Elfving 1994 Numer. Algorithms 8 221–39) and intensity-modulated radiation therapy (Censor et al 2005 Inverse Problems 21 2071–84). In this paper we discuss iterative methods for solving the SFP in the setting of infinite-dimensional Hilbert spaces. The CQ algorithm of Byrne (2002 Inverse Problems 18 441–53, 2004 Inverse Problems 20 103–20) is indeed a special case of the gradient-projection algorithm in convex minimization and has only weak convergence in general in the infinite-dimensional setting. We mainly use fixed point algorithms to study the SFP. A relaxed CQ algorithm is introduced which involves only projections onto half-spaces, so that the algorithm is implementable. Both regularization and iterative algorithms are also introduced to find the minimum-norm solution of the SFP.
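
A small numpy sketch of Byrne's CQ iteration, x_{k+1} = P_C(x_k − γAᵀ(Ax_k − P_Q(Ax_k))), with C and Q chosen as Euclidean balls so both projections are explicit (the data are illustrative):

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, gamma, iters=500):
    x = x0.copy()
    for _ in range(iters):
        Ax = A @ x
        x = proj_C(x - gamma * A.T @ (Ax - proj_Q(Ax)))  # gradient projection
    return x

def ball(c, r):                                   # projection onto a ball
    return lambda z: c + (z - c) * min(1.0, r / max(np.linalg.norm(z - c), 1e-12))

A = np.array([[2.0, 0.0], [0.0, 1.0]])
gamma = 1.0 / np.linalg.norm(A, 2) ** 2           # gamma in (0, 2/||A||^2)
x = cq_algorithm(A, ball(np.zeros(2), 1.0), ball(np.array([1.0, 0.0]), 0.5),
                 np.ones(2), gamma)
print("x in C, Ax in Q:", x, A @ x)
```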

Journal ArticleDOI
TL;DR: A broad view of iterative-shrinkage algorithms is given, several of them are derived, accelerations based on sequential subspace optimization, the fast iterative soft-thresholding algorithm, and the conjugate gradient method are shown, and their potential in various applications, such as compressed sensing, computed tomography, and deblurring, is discussed.
Abstract: Sparse, redundant representations offer a powerful emerging model for signals. This model approximates a data source as a linear combination of few atoms from a prespecified and over-complete dictionary. Often such models are fit to data by solving mixed ℓ1-ℓ2 convex optimization problems. Iterative-shrinkage algorithms constitute a new family of highly effective numerical methods for handling these problems, surpassing traditional optimization techniques. In this article, we give a broad view of this group of methods, derive some of them, show accelerations based on the sequential subspace optimization (SESOP), fast iterative soft-thresholding algorithm (FISTA) and the conjugate gradient (CG) method, present a comparative performance study, and discuss their potential in various applications, such as compressed sensing, computed tomography, and deblurring.
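
The basic shrinkage iteration the article surveys, sketched for the ℓ1-ℓ2 problem min_x ½||Ax − y||² + λ||x||₁ (plain ISTA; the accelerations discussed, such as FISTA or SESOP, layer on top of this step):

```python
import numpy as np

def ista(A, y, lam, iters=300):
    L = np.linalg.norm(A, 2) ** 2                # Lipschitz constant of gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - y) / L            # gradient step on smooth part
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x
```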

Journal ArticleDOI
TL;DR: Algorithms are developed to train support vector machines when training data are distributed across different nodes and their communication to a centralized processing unit is prohibited due to, for example, communication complexity, scalability, or privacy reasons.
Abstract: This paper develops algorithms to train support vector machines when training data are distributed across different nodes, and their communication to a centralized processing unit is prohibited due to, for example, communication complexity, scalability, or privacy reasons. To accomplish this goal, the centralized linear SVM problem is cast as a set of decentralized convex optimization sub-problems (one per node) with consensus constraints on the wanted classifier parameters. Using the alternating direction method of multipliers, fully distributed training algorithms are obtained without exchanging training data among nodes. Different from existing incremental approaches, the overhead associated with inter-node communications is fixed and solely dependent on the network topology rather than the size of the training sets available per node. Important generalizations to train nonlinear SVMs in a distributed fashion are also developed along with sequential variants capable of online processing. Simulated tests illustrate the performance of the novel algorithms.
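
A hedged sketch of the consensus formulation (each node's subproblem here is solved with CVXPY for clarity; the paper derives dedicated ADMM updates instead, and all names and parameters below are illustrative):

```python
import cvxpy as cp
import numpy as np

def consensus_svm(parts, C=1.0, rho=1.0, iters=30):
    """parts: list of local datasets (X_i, y_i); returns a shared (w, b)."""
    d = parts[0][0].shape[1]
    zbar = np.zeros(d + 1)                        # consensus copy of [w, b]
    us = [np.zeros(d + 1) for _ in parts]         # scaled dual variables
    vs = [np.zeros(d + 1) for _ in parts]
    for _ in range(iters):
        for i, (X, y) in enumerate(parts):        # local hinge-loss fits
            v = cp.Variable(d + 1)
            hinge = cp.sum(cp.pos(1 - cp.multiply(y, X @ v[:d] + v[d])))
            obj = (0.5 * cp.sum_squares(v[:d]) / len(parts) + C * hinge
                   + rho / 2 * cp.sum_squares(v - zbar + us[i]))
            cp.Problem(cp.Minimize(obj)).solve()
            vs[i] = v.value
        zbar = np.mean([v + u for v, u in zip(vs, us)], axis=0)  # consensus
        us = [u + v - zbar for v, u in zip(vs, us)]              # dual ascent
    return zbar[:d], zbar[d]
```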

Journal ArticleDOI
TL;DR: Krasovskii's method for finding Lyapunov functions and recently obtained extensions of the LaSalle invariance principle for hybrid systems are used to obtain stability proofs of primal-dual laws in different scenarios, and applications to cross-layer network optimization are exhibited.

Journal ArticleDOI
TL;DR: This survey aims to provide the reader with a significant overview of the LMI techniques that are used in control systems for tackling optimization problems over polynomials, describing approaches such as decomposition in sum of squares, Positivstellensatz, theory of moments, Pólya's theorem, and matrix dilation.
Abstract: Numerous tasks in control systems involve optimization problems over polynomials, and unfortunately these problems are in general nonconvex. In order to cope with this difficulty, linear matrix inequality (LMI) techniques have been introduced because they allow one to obtain bounds to the sought solution by solving convex optimization problems and because the conservatism of these bounds can be decreased in general by suitably increasing the size of the problems. This survey aims to provide the reader with a significant overview of the LMI techniques that are used in control systems for tackling optimization problems over polynomials, describing approaches such as decomposition in sum of squares, Positivstellensatz, theory of moments, Pólya's theorem, and matrix dilation. Moreover, it aims to provide a collection of the essential problems in control systems where these LMI techniques are used, such as stability and performance investigations in nonlinear systems, uncertain systems, time-delay systems, and genetic regulatory networks. It is expected that this survey may be a concise useful reference for all readers.
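
As a minimal example of the LMI machinery the survey covers (illustrative system; CVXPY assumed as the SDP modeling layer): certifying stability of x' = Ax by finding P ≻ 0 with AᵀP + PA ≺ 0.

```python
import cvxpy as cp
import numpy as np

A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])                      # a stable test system

P = cp.Variable((2, 2), symmetric=True)
eps = 1e-3                                       # enforce strict definiteness
lmi = [P >> eps * np.eye(2),
       A.T @ P + P @ A << -eps * np.eye(2)]      # Lyapunov inequality
cp.Problem(cp.Minimize(0), lmi).solve()
print("Lyapunov certificate P:\n", P.value)
```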

Proceedings Article
01 Dec 2010
TL;DR: This work presents a new method for regularized convex optimization that unifies previously known first-order algorithms, such as the projected gradient method, mirror descent, and forward-backward splitting, and derives specific instantiations of this method for commonly used regularization functions, such as l1, mixed norm, and trace-norm.
Abstract: We present a new method for regularized convex optimization and analyze it under both online and stochastic optimization settings. In addition to unifying previously known first-order algorithms, such as the projected gradient method, mirror descent, and forward-backward splitting, our method yields new analysis and algorithms. We also derive specific instantiations of our method for commonly used regularization functions, such as l1, mixed norm, and trace-norm.