
Showing papers on "Convex optimization published in 2011"


Book
23 May 2011
TL;DR: It is argued that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas.
Abstract: Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for l1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.

17,433 citations
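As a concrete illustration of the splitting scheme reviewed above, here is a minimal ADMM sketch for the lasso, minimize (1/2)||Ax - b||^2 + lam*||x||_1, via the split x = z. The problem sizes, lam, and rho are arbitrary assumptions made for the example, not values from the review.

```python
import numpy as np

def soft_threshold(v, kappa):
    """Elementwise soft-thresholding, the proximal operator of kappa*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
    """ADMM for (1/2)||Ax - b||^2 + lam*||x||_1 using the consensus split x = z."""
    m, n = A.shape
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)      # u is the scaled dual variable
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))       # cached factorization for the x-update
    Atb = A.T @ b
    for _ in range(n_iter):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))  # x-update (ridge solve)
        z = soft_threshold(x + u, lam / rho)                                # z-update (prox of l1)
        u = u + x - z                                                       # dual update
    return z

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 30))
    x_true = np.zeros(30); x_true[:5] = rng.standard_normal(5)
    b = A @ x_true + 0.01 * rng.standard_normal(100)
    x_hat = admm_lasso(A, b, lam=0.1)
    print("nonzeros recovered:", np.flatnonzero(np.round(x_hat, 3)))
```

The x-update is a smooth least-squares solve and the z-update is a cheap proximal step, which is exactly the structure that makes ADMM attractive for the distributed settings discussed in the review.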


Journal ArticleDOI
TL;DR: A first-order primal-dual algorithm for non-smooth convex optimization problems with known saddle-point structure can achieve O(1/N^2) convergence on problems where the primal or the dual objective is uniformly convex, and linear convergence, i.e. O(ω^N) for some ω ∈ (0,1), on smooth problems.
Abstract: In this paper we study a first-order primal-dual algorithm for non-smooth convex optimization problems with known saddle-point structure. We prove convergence to a saddle-point with rate O(1/N) in finite dimensions for the complete class of problems. We further show accelerations of the proposed algorithm to yield improved rates on problems with some degree of smoothness. In particular we show that we can achieve O(1/N^2) convergence on problems where the primal or the dual objective is uniformly convex, and we can show linear convergence, i.e. O(ω^N) for some ω ∈ (0,1), on smooth problems. The wide applicability of the proposed algorithm is demonstrated on several imaging problems such as image denoising, image deconvolution, image inpainting, motion estimation and multi-label image segmentation.

4,487 citations
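To make the saddle-point structure concrete, below is a small sketch of the primal-dual iteration applied to TV (ROF) denoising, min_u ||grad u||_1 + (lam/2)||u - f||^2. The step sizes and lam are illustrative assumptions, and the discrete gradient/divergence pair is the usual forward-difference discretization; this is only an example of the iteration, not the paper's code.

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with Neumann boundary conditions."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Negative adjoint of grad, so that <grad u, p> = -<u, div p>."""
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def tv_denoise(f, lam=8.0, n_iter=300):
    """Primal-dual iteration for min_u ||grad u||_1 + (lam/2)*||u - f||^2."""
    tau = sigma = 1.0 / np.sqrt(8.0)          # tau*sigma*||grad||^2 <= 1 since ||grad||^2 <= 8
    u = f.copy(); u_bar = f.copy()
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(u_bar)                  # dual ascent step on p ...
        px += sigma * gx; py += sigma * gy
        norm = np.maximum(1.0, np.sqrt(px**2 + py**2))
        px /= norm; py /= norm                # ... followed by projection onto the pointwise unit ball
        u_old = u
        u = (u + tau * div(px, py) + tau * lam * f) / (1.0 + tau * lam)  # primal proximal step
        u_bar = 2.0 * u - u_old               # over-relaxation
    return u

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
    noisy = clean + 0.2 * rng.standard_normal(clean.shape)
    out = tv_denoise(noisy, lam=8.0)
    print("noisy MSE %.4f -> denoised MSE %.4f" % (((noisy - clean)**2).mean(), ((out - clean)**2).mean()))
```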


Book
26 Apr 2011
TL;DR: This book provides a largely self-contained account of the main results of convex analysis and optimization in Hilbert space, and a concise exposition of related constructive fixed point theory that allows for a wide range of algorithms to construct solutions to problems in optimization, equilibrium theory, monotone inclusions, variational inequalities, and convex feasibility.
Abstract: This book provides a largely self-contained account of the main results of convex analysis and optimization in Hilbert space. A concise exposition of related constructive fixed point theory is presented, that allows for a wide range of algorithms to construct solutions to problems in optimization, equilibrium theory, monotone inclusions, variational inequalities, best approximation theory, and convex feasibility. The book is accessible to a broad audience, and reaches out in particular to applied scientists and engineers, to whom these tools have become indispensable.

3,905 citations
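One of the constructive fixed point schemes covered by treatments of this kind is the method of alternating projections for convex feasibility. The sketch below, with two arbitrarily chosen sets (a halfspace and a Euclidean ball), is only an illustration of the idea, not material from the book.

```python
import numpy as np

def project_halfspace(x, a, b):
    """Projection onto the halfspace {x : <a, x> <= b}."""
    viol = a @ x - b
    return x if viol <= 0 else x - (viol / (a @ a)) * a

def project_ball(x, center, radius):
    """Projection onto the Euclidean ball B(center, radius)."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

# Alternating projections (POCS): when the two closed convex sets intersect,
# the iterates converge to a point in the intersection.
a, b = np.array([1.0, 1.0]), 1.0
center, radius = np.array([2.0, 0.0]), 2.0
x = np.array([5.0, 5.0])
for _ in range(100):
    x = project_ball(project_halfspace(x, a, b), center, radius)
print(x, a @ x - b, np.linalg.norm(x - center) - radius)
```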


Book ChapterDOI
01 Jan 2011
TL;DR: The basic properties of proximity operators which are relevant to signal processing and optimization methods based on these operators are reviewed and proximal splitting methods are shown to capture and extend several well-known algorithms in a unifying framework.
Abstract: The proximity operator of a convex function is a natural extension of the notion of a projection operator onto a convex set. This tool, which plays a central role in the analysis and the numerical solution of convex optimization problems, has recently been introduced in the arena of inverse problems and, especially, in signal processing, where it has become increasingly important. In this paper, we review the basic properties of proximity operators which are relevant to signal processing and present optimization methods based on these operators. These proximal splitting methods are shown to capture and extend several well-known algorithms in a unifying framework. Applications of proximal methods in signal recovery and synthesis are discussed.

1,942 citations
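For instance, the proximity operator of t*||.||_1 is elementwise soft-thresholding, and plugging it into forward-backward splitting (one of the proximal splitting methods surveyed) gives the familiar ISTA iteration. The problem data and lam below are arbitrary assumptions for illustration.

```python
import numpy as np

def prox_l1(v, t):
    """prox_{t*||.||_1}(v): elementwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(A, b, lam, n_iter=500):
    """Forward-backward splitting (ISTA) for (1/2)||Ax - b||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1/L, with L the Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = prox_l1(x - step * A.T @ (A @ x - b), step * lam)  # forward (gradient) then backward (prox)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 20))
x_true = np.zeros(20); x_true[[2, 7, 11]] = [1.5, -2.0, 0.7]
b = A @ x_true + 0.01 * rng.standard_normal(60)
print(np.round(forward_backward(A, b, lam=0.2), 2))
```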


Journal ArticleDOI
TL;DR: Fixed point and Bregman iterative algorithms for nuclear norm minimization are proposed, with a convergence proof for the first; combined with a homotopy approach and an approximate SVD, this yields a very fast, robust and powerful algorithm, which the authors call FPCA (Fixed Point Continuation with Approximate SVD), that can solve very large matrix rank minimization problems.
Abstract: The linearly constrained matrix rank minimization problem is widely applicable in many fields such as control, signal processing and system identification. The tightest convex relaxation of this problem is the linearly constrained nuclear norm minimization. Although the latter can be cast as a semidefinite programming problem, such an approach is computationally expensive to solve when the matrices are large. In this paper, we propose fixed point and Bregman iterative algorithms for solving the nuclear norm minimization problem and prove convergence of the first of these algorithms. By using a homotopy approach together with an approximate singular value decomposition procedure, we get a very fast, robust and powerful algorithm, which we call FPCA (Fixed Point Continuation with Approximate SVD), that can solve very large matrix rank minimization problems (the code can be downloaded from http://www.columbia.edu/~sm2756/FPCA.htm for non-commercial use). Our numerical results on randomly generated and real matrix completion problems demonstrate that this algorithm is much faster and provides much better recoverability than semidefinite programming solvers such as SDPT3. For example, our algorithm can recover 1000 × 1000 matrices of rank 50 with a relative error of 10^{-5} in about 3 min by sampling only 20% of the elements. We know of no other method that achieves as good recoverability. Numerical experiments on online recommendation, DNA microarray data set and image inpainting problems demonstrate the effectiveness of our algorithms.

1,099 citations
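A heavily simplified sketch of the fixed point (shrinkage) iteration at the core of such nuclear norm solvers is given below. It omits the continuation and approximate-SVD components of FPCA and uses arbitrary toy dimensions, so it should be read as an illustration of singular value thresholding rather than as the authors' code.

```python
import numpy as np

def svd_shrink(Y, tau):
    """Proximal operator of tau*||.||_* : shrink the singular values of Y by tau."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def complete(M_obs, mask, mu=1.0, step=1.0, n_iter=300):
    """Fixed-point iteration X <- shrink(X - step*grad, step*mu) for
    min_X mu*||X||_* + (1/2)*||P_Omega(X - M)||_F^2."""
    X = np.zeros_like(M_obs)
    for _ in range(n_iter):
        grad = mask * (X - M_obs)                 # gradient of the data-fit term on observed entries
        X = svd_shrink(X - step * grad, step * mu)
    return X

rng = np.random.default_rng(0)
L = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 50))   # rank-5 ground truth
mask = rng.random((50, 50)) < 0.4                                  # observe 40% of the entries
X_hat = complete(mask * L, mask, mu=1.0)
print("relative error: %.3f" % (np.linalg.norm(X_hat - L) / np.linalg.norm(L)))
```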


Journal ArticleDOI
TL;DR: This paper shows that reformulating that step as a constrained flow optimization results in a convex problem and takes advantage of its particular structure to solve it using the k-shortest paths algorithm, which is very fast.
Abstract: Multi-object tracking can be achieved by detecting objects in individual frames and then linking detections across frames. Such an approach can be made very robust to the occasional detection failure: If an object is not detected in a frame but is in previous and following ones, a correct trajectory will nevertheless be produced. By contrast, a false-positive detection in a few frames will be ignored. However, when dealing with a multiple target problem, the linking step results in a difficult optimization problem in the space of all possible families of trajectories. This is usually dealt with by sampling or greedy search based on variants of Dynamic Programming which can easily miss the global optimum. In this paper, we show that reformulating that step as a constrained flow optimization results in a convex problem. We take advantage of its particular structure to solve it using the k-shortest paths algorithm, which is very fast. This new approach is far simpler formally and algorithmically than existing techniques and lets us demonstrate excellent performance in two very different contexts.

1,076 citations
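A hedged sketch of the kind of flow relaxation involved, in generic notation not taken from the paper: each trajectory corresponds to one unit of flow pushed from a source to a sink through detection nodes, and because the constraint matrix of such network-flow linear programs is totally unimodular, the relaxed convex problem admits integral optimal solutions.

```latex
\begin{aligned}
\min_{f}\;\; & \sum_{(i,j)\in E} c_{ij}\, f_{ij}\\
\text{s.t.}\;\; & \sum_{j} f_{ij} \;=\; \sum_{k} f_{ki} \;\le\; 1 \qquad \text{for every detection node } i,\\
 & f_{ij} \ge 0 \qquad \text{for every edge } (i,j)\in E,
\end{aligned}
```

Here c_{ij} encodes detection and transition costs (e.g. negative log-likelihoods), and limiting each detection node to one unit of flow keeps trajectories from sharing detections.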


Posted Content
TL;DR: In this article, the authors prove that if the vectors z_i are sampled independently and uniformly at random on the unit sphere, then the signal x can be recovered exactly (up to a global phase factor) by solving a convenient semidefinite program.
Abstract: Suppose we wish to recover a signal x in C^n from m intensity measurements of the form |<x, z_i>|^2, i = 1, 2,..., m; that is, from data in which phase information is missing. We prove that if the vectors z_i are sampled independently and uniformly at random on the unit sphere, then the signal x can be recovered exactly (up to a global phase factor) by solving a convenient semidefinite program---a trace-norm minimization problem; this holds with large probability provided that m is on the order of n log n, and without any assumption about the signal whatsoever. This novel result demonstrates that in some instances, the combinatorial phase retrieval problem can be solved by convex programming techniques. Finally, we also prove that our methodology is robust vis a vis additive noise.

878 citations
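The lifting behind the semidefinite program can be written compactly: since |<x, z_i>|^2 = z_i^* (x x^*) z_i, the quadratic measurements become linear in the matrix variable X = x x^*, and the rank constraint is dropped in favor of the trace (nuclear) norm. A sketch of the resulting program, in notation of my own choosing:

```latex
\begin{aligned}
\min_{X \in \mathbb{C}^{n\times n}} \;\; & \operatorname{tr}(X)\\
\text{s.t.}\;\; & z_i^{*} X z_i = b_i, \quad i = 1,\dots,m,\\
 & X \succeq 0,
\end{aligned}
```

where b_i are the measured intensities; if the optimal X has rank one, factoring X = x x^* recovers the signal up to a global phase.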


Journal Article
TL;DR: In this paper, rank-sparsity incoherence is defined as an uncertainty principle between the sparsity pattern of a matrix and its row and column spaces, and used to characterize both fundamental identifiability as well as (deterministic) sufficient conditions for exact recovery.
Abstract: Suppose we are given a matrix that is formed by adding an unknown sparse matrix to an unknown low-rank matrix. Our goal is to decompose the given matrix into its sparse and low-rank components. Such a problem arises in a number of applications in model and system identification and is intractable to solve in general. In this paper we consider a convex optimization formulation for splitting the specified matrix into its components by minimizing a linear combination of the l1 norm and the nuclear norm of the components. We develop a notion of rank-sparsity incoherence, expressed as an uncertainty principle between the sparsity pattern of a matrix and its row and column spaces, and we use it to characterize both fundamental identifiability as well as (deterministic) sufficient conditions for exact recovery. Our analysis is geometric in nature with the tangent spaces to the algebraic varieties of sparse and low-rank matrices playing a prominent role. When the sparse and low-rank matrices are drawn from certain natural random ensembles, we show that the sufficient conditions for exact recovery are satisfied with high probability. We conclude with simulation results on synthetic matrix decomposition problems.

845 citations
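The convex program referred to in the abstract, minimizing a linear combination of the l1 norm and the nuclear norm subject to the components summing to the given matrix, can be sketched as follows (the trade-off parameter γ and the symbol C for the given matrix are notation of mine):

```latex
\min_{L,\,S} \;\; \gamma \,\|S\|_{1} + \|L\|_{*}
\qquad \text{s.t.} \qquad L + S = C,
```

where ||S||_1 is the entrywise l1 norm of the sparse component and ||L||_* the nuclear norm of the low-rank component.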


Journal ArticleDOI
TL;DR: In this article, an augmented Lagrangian method is proposed to deal with a variety of ill-posed linear inverse problems in imaging, including deconvolution and reconstruction from compressive observations (such as MRI), using either total-variation or wavelet-based regularization.
Abstract: We propose a new fast algorithm for solving one of the standard approaches to ill-posed linear inverse problems (IPLIP), where a (possibly nonsmooth) regularizer is minimized under the constraint that the solution explains the observations sufficiently well. Although the regularizer and constraint are usually convex, several particular features of these problems (huge dimensionality, nonsmoothness) preclude the use of off-the-shelf optimization tools and have stimulated a considerable amount of research. In this paper, we propose a new efficient algorithm to handle one class of constrained problems (often known as basis pursuit denoising) tailored to image recovery applications. The proposed algorithm, which belongs to the family of augmented Lagrangian methods, can be used to deal with a variety of imaging IPLIP, including deconvolution and reconstruction from compressive observations (such as MRI), using either total-variation or wavelet-based (or, more generally, frame-based) regularization. The proposed algorithm is an instance of the so-called alternating direction method of multipliers, for which convergence sufficient conditions are known; we show that these conditions are satisfied by the proposed algorithm. Experiments on a set of image restoration and reconstruction benchmark problems show that the proposed algorithm is a strong contender for the state-of-the-art.

835 citations
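The class of constrained problems targeted, a regularizer minimized subject to the observations being explained sufficiently well, has the generic form below, where φ may be a total-variation or frame-based penalty and ε reflects the noise level:

```latex
\min_{x} \;\; \phi(x) \qquad \text{s.t.} \qquad \|A x - y\|_{2} \le \varepsilon ,
```

which is then handled by variable splitting and an augmented Lagrangian (alternating direction) scheme, as described in the abstract above.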


Journal ArticleDOI
TL;DR: This paper uses the n-rank of a tensor as a sparsity measure and considers the low-n-rank tensor recovery problem, i.e. the problem of finding the tensor of the lowest n-rank that fulfills some linear constraints.
Abstract: In this paper we consider sparsity on a tensor level, as given by the n-rank of a tensor. In an important sparse-vector approximation problem (compressed sensing) and the low-rank matrix recovery problem, using a convex relaxation technique proved to be a valuable solution strategy. Here, we will adapt these techniques to the tensor setting. We use the n-rank of a tensor as a sparsity measure and consider the low-n-rank tensor recovery problem, i.e. the problem of finding the tensor of the lowest n-rank that fulfills some linear constraints. We introduce a tractable convex relaxation of the n-rank and propose efficient algorithms to solve the low-n-rank tensor recovery problem numerically. The algorithms are based on the Douglas–Rachford splitting technique and its dual variant, the alternating direction method of multipliers.

814 citations
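A sketch of the kind of relaxation used in this line of work (the notation is assumed, not quoted from the paper): the n-rank, i.e. the vector of ranks of the mode-i unfoldings X_(i), is replaced by the sum of their nuclear norms, giving a tractable convex recovery problem:

```latex
\min_{\mathcal{X}} \;\; \sum_{i=1}^{n} \|X_{(i)}\|_{*}
\qquad \text{s.t.} \qquad \mathcal{A}(\mathcal{X}) = b ,
```

where X_(i) denotes the mode-i unfolding of the tensor and 𝒜 the linear measurement operator.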


Posted Content
TL;DR: In this article, the authors present from a general perspective optimization tools and techniques dedicated to such sparsity-inducing penalties, including proximal methods, block-coordinate descent, reweighted $\ell_2$-penalized techniques, working-set and homotopy methods, as well as non-convex formulations and extensions.
Abstract: Sparse estimation methods are aimed at using or obtaining parsimonious representations of data or models. They were first dedicated to linear variable selection but numerous extensions have now emerged such as structured sparsity or kernel selection. It turns out that many of the related estimation problems can be cast as convex optimization problems by regularizing the empirical risk with appropriate non-smooth norms. The goal of this paper is to present from a general perspective optimization tools and techniques dedicated to such sparsity-inducing penalties. We cover proximal methods, block-coordinate descent, reweighted $\ell_2$-penalized techniques, working-set and homotopy methods, as well as non-convex formulations and extensions, and provide an extensive set of experiments to compare various algorithms from a computational point of view.

Book
23 Dec 2011
TL;DR: This monograph covers proximal methods, block-coordinate descent, reweighted l2-penalized techniques, working-set and homotopy methods, as well as non-convex formulations and extensions, and provides an extensive set of experiments to compare various algorithms from a computational point of view.
Abstract: Sparse estimation methods are aimed at using or obtaining parsimonious representations of data or models. They were first dedicated to linear variable selection but numerous extensions have now emerged such as structured sparsity or kernel selection. It turns out that many of the related estimation problems can be cast as convex optimization problems by regularizing the empirical risk with appropriate nonsmooth norms. The goal of this monograph is to present from a general perspective optimization tools and techniques dedicated to such sparsity-inducing penalties. We cover proximal methods, block-coordinate descent, reweighted l2-penalized techniques, working-set and homotopy methods, as well as non-convex formulations and extensions, and provide an extensive set of experiments to compare various algorithms from a computational point of view.
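As one concrete instance of the block-coordinate descent methods covered, here is a minimal cyclic coordinate descent sketch for the l1-penalized least-squares (lasso) problem; the data and lam are arbitrary assumptions made for the example.

```python
import numpy as np

def lasso_cd(A, b, lam, n_sweeps=100):
    """Cyclic coordinate descent for (1/2)||Ax - b||^2 + lam*||x||_1."""
    n = A.shape[1]
    x = np.zeros(n)
    col_sq = (A ** 2).sum(axis=0)                    # precomputed ||a_j||^2
    r = b - A @ x                                    # running residual
    for _ in range(n_sweeps):
        for j in range(n):
            r += A[:, j] * x[j]                      # remove coordinate j from the residual
            rho = A[:, j] @ r                        # partial correlation with the residual
            x[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]   # 1-D soft-threshold update
            r -= A[:, j] * x[j]                      # put coordinate j back
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((80, 25))
x_true = np.zeros(25); x_true[[1, 5, 9]] = [2.0, -1.0, 0.5]
b = A @ x_true + 0.01 * rng.standard_normal(80)
print(np.round(lasso_cd(A, b, lam=0.5), 2))
```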

Proceedings Article
12 Dec 2011
TL;DR: This work shows that both the basic proximal-gradient method and the accelerated proximal-gradient method achieve the same convergence rate as in the error-free case, provided that the errors decrease at appropriate rates.
Abstract: We consider the problem of optimizing the sum of a smooth convex function and a non-smooth convex function using proximal-gradient methods, where an error is present in the calculation of the gradient of the smooth term or in the proximity operator with respect to the non-smooth term. We show that both the basic proximal-gradient method and the accelerated proximal-gradient method achieve the same convergence rate as in the error-free case, provided that the errors decrease at appropriate rates. Using these rates, we perform as well as or better than a carefully chosen fixed error level on a set of structured sparsity problems.
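A toy sketch of the setting: a proximal-gradient (ISTA-style) iteration in which artificial noise is injected into the gradient with a magnitude that decays across iterations, loosely illustrating the "errors decrease at appropriate rates" condition. The decay schedule, problem, and noise model are my assumptions, not the paper's experiments.

```python
import numpy as np

def prox_l1(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inexact_prox_grad(A, b, lam, n_iter=300, err0=1.0, decay=2.0):
    """Proximal-gradient iteration with a decaying error added to the gradient."""
    rng = np.random.default_rng(0)
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for k in range(1, n_iter + 1):
        g = A.T @ (A @ x - b)
        e = rng.standard_normal(x.size)
        g_noisy = g + (err0 / k ** decay) * e / np.linalg.norm(e)   # gradient error of size O(1/k^decay)
        x = prox_l1(x - step * g_noisy, step * lam)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 20))
x_true = np.zeros(20); x_true[[0, 3]] = [1.0, -1.5]
b = A @ x_true
print(np.round(inexact_prox_grad(A, b, lam=0.1), 2))
```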

Proceedings ArticleDOI
06 Nov 2011
TL;DR: This paper proposes simple and easy-to-compute diagonal preconditioners for the first-order primal-dual algorithm, for which convergence is guaranteed without the need to compute any step size parameters.
Abstract: In this paper we study preconditioning techniques for the first-order primal-dual algorithm proposed in [5]. In particular, we propose simple and easy to compute diagonal preconditioners for which convergence of the algorithm is guaranteed without the need to compute any step size parameters. As a by-product, we show that for a certain instance of the preconditioning, the proposed algorithm is equivalent to the old and widely unknown alternating step method for monotropic programming [7]. We show numerical results on general linear programming problems and a few standard computer vision problems. In all examples, the preconditioned algorithm significantly outperforms the algorithm of [5].
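A hedged sketch of one natural family of diagonal preconditioners built from the rows and columns of the operator K: primal steps tau_j = 1 / sum_i |K_ij|^(2-alpha) and dual steps sigma_i = 1 / sum_j |K_ij|^alpha for alpha in [0, 2], which satisfy the primal-dual step-size condition without computing ||K||. This form is stated from memory of this line of work, not copied from the paper, so the exact choice used there may differ.

```python
import numpy as np

def diag_preconditioners(K, alpha=1.0, eps=1e-12):
    """Diagonal primal/dual step sizes from the absolute row/column sums of K.

    tau[j]   = 1 / sum_i |K[i, j]|**(2 - alpha)   (primal steps)
    sigma[i] = 1 / sum_j |K[i, j]|**alpha         (dual steps)
    """
    absK = np.abs(K)
    tau = 1.0 / np.maximum(np.sum(absK ** (2.0 - alpha), axis=0), eps)
    sigma = 1.0 / np.maximum(np.sum(absK ** alpha, axis=1), eps)
    return tau, sigma

K = np.array([[1.0, -2.0, 0.0],
              [0.0,  3.0, 1.0]])
tau, sigma = diag_preconditioners(K, alpha=1.0)
print(tau, sigma)
# These replace the scalar step sizes in the primal-dual iteration, i.e. the scalar
# tau and sigma become the diagonal matrices diag(tau) and diag(sigma).
```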

Proceedings ArticleDOI
01 Dec 2011
TL;DR: A general, control-system-based approach is obtained that makes it possible to analyze and design (distributed) optimization systems converging to the solution of given convex optimization problems, and the natural tracking and adaptation capabilities of such systems under changing constraints are demonstrated.
Abstract: In this paper, we want to study how natural and engineered systems could perform complex optimizations with limited computational and communication capabilities. We adopt a continuous-time dynamical system view rooted in early work on optimization and more recently in network protocol design, and merge it with the dynamic view of distributed averaging systems. We obtain a general approach, based on the control system viewpoint, that allows us to analyze and design (distributed) optimization systems converging to the solution of given convex optimization problems. The control system viewpoint provides many insights and new directions of research. We apply the framework to a distributed optimal location problem and demonstrate the natural tracking and adaptation capabilities of the system to changing constraints.

Journal ArticleDOI
TL;DR: This paper proposes a secret transmit beamforming approach using a quality-of-service (QoS)-based perspective, and proves that SDR can exactly solve the design problems for a practically representative class of problem instances; e.g., when the intended receiver's instantaneous CSI is known.
Abstract: Secure transmission techniques have been receiving growing attention in recent years, as a viable, powerful alternative for blocking eavesdropping attempts in an open wireless medium. This paper proposes a secret transmit beamforming approach using a quality-of-service (QoS)-based perspective. Specifically, we establish design formulations that: i) constrain the maximum allowable signal-to-interference-and-noise ratios (SINRs) of the eavesdroppers, and that ii) provide the intended receiver with a satisfactory SINR through either a guaranteed SINR constraint or SINR maximization. The proposed designs incorporate a relatively new idea called artificial noise (AN), where a suitable amount of AN is added to the transmitted signal to confuse the eavesdroppers. Our designs advocate joint optimization of the transmit weights and AN spatial distribution in accordance with the channel state information (CSI) of the intended receiver and eavesdroppers. Our formulated design problems are shown to be NP-hard in general. We deal with this difficulty by using semidefinite relaxation (SDR), an approximation technique based on convex optimization. Interestingly, we prove that SDR can exactly solve the design problems for a practically representative class of problem instances; e.g., when the intended receiver's instantaneous CSI is known. Extensions to the colluding-eavesdropper scenario and the multi-intended-receiver scenario are also examined. Extensive simulation results illustrate that the proposed AN-aided designs can yield significant power savings or SINR enhancement compared to some other methods.
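To sketch the semidefinite relaxation step in generic form (this is the standard SDR device, not the paper's exact design problem): a quadratically constrained beamforming design in the weight vector w is lifted to the matrix W = w w^H, every quadratic form becomes linear in W, and the nonconvex rank-one constraint is dropped:

```latex
w^{H} R\, w \;=\; \operatorname{tr}\!\big(R\, w w^{H}\big) \;=\; \operatorname{tr}(R W),
\qquad
\{\,W \succeq 0,\ \operatorname{rank}(W) = 1\,\} \;\;\longrightarrow\;\; W \succeq 0 .
```

The relaxed problem is a convex SDP; when its optimal W happens to be rank one, the relaxation is exact and the beamformer w is recovered from the principal eigenvector of W.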

Journal ArticleDOI
TL;DR: It is proved that, if constraints in the SP problem are optimally removed (i.e., one deletes those constraints leading to the largest possible cost improvement), then a precise optimality link to the original chance-constrained problem CCP also holds.
Abstract: In this paper, we study the link between a Chance-Constrained optimization Problem (CCP) and its sample counterpart (SP). SP has a finite number, say N, of sampled constraints. Further, some of these sampled constraints, say k, are discarded, and the final solution is indicated by $x^{\ast}_{N,k}$. Extending previous results on the feasibility of sample convex optimization programs, we establish the feasibility of $x^{\ast}_{N,k}$ for the initial CCP problem. Constraint removal allows one to improve the cost function at the price of a decreased feasibility. The cost improvement can be inspected directly from the optimization result, while the theory developed here makes it possible to keep control of the other side of the coin, the feasibility of the obtained solution. In this way, trading feasibility for performance is put on solid mathematical grounds in this paper. The feasibility result obtained here applies to a vast class of chance-constrained optimization problems, and has the distinctive feature that it holds true irrespective of the algorithm used to discard k constraints in the SP problem. For constraint discarding, one can thus, e.g., resort to one of the many methods introduced in the literature to solve chance-constrained problems with discrete distribution, or even use a greedy algorithm, which is computationally very undemanding, and the feasibility result remains intact. We further prove that, if constraints in the SP problem are optimally removed (i.e., one deletes those constraints leading to the largest possible cost improvement), then a precise optimality link to the original chance-constrained problem CCP also holds.

Journal ArticleDOI
TL;DR: A convergence and rate of convergence analysis of a variety of incremental methods, including some that involve randomization in the selection of components, and applications in a few contexts, including signal processing and inference/machine learning are discussed.
Abstract: We consider the minimization of a sum $\sum_{i=1}^m f_i(x)$ consisting of a large number of convex component functions f_i. For this problem, incremental methods consisting of gradient or subgradient iterations applied to single components have proved very effective. We propose new incremental methods, consisting of proximal iterations applied to single components, as well as combinations of gradient, subgradient, and proximal iterations. We provide a convergence and rate of convergence analysis of a variety of such methods, including some that involve randomization in the selection of components. We also discuss applications in a few contexts, including signal processing and inference/machine learning.
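In the simplest purely proximal variant, a single component f_{i_k} is selected (cyclically or at random) at iteration k and a proximal step is taken on that component alone; a sketch of the basic iteration:

```latex
x_{k+1} \;=\; \operatorname{prox}_{\alpha_k f_{i_k}}(x_k)
\;=\; \arg\min_{y}\;\Big\{ f_{i_k}(y) + \tfrac{1}{2\alpha_k}\,\|y - x_k\|^{2} \Big\},
```

with the hybrid variants obtained by replacing the proximal step on some components with (sub)gradient steps.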


Journal ArticleDOI
TL;DR: This work proposes a distributed subgradient method that uses averaging algorithms for locally sharing information among the agents for cooperatively minimizing the sum of convex functions, where the functions represent local objective functions of the agents.
Abstract: We consider the problem of cooperatively minimizing the sum of convex functions, where the functions represent local objective functions of the agents. We assume that each agent has information about its local function, and communicates with the other agents over a time-varying network topology. For this problem, we propose a distributed subgradient method that uses averaging algorithms for locally sharing information among the agents. In contrast to previous works on multi-agent optimization that make worst-case assumptions about the connectivity of the agents (such as bounded communication intervals between nodes), we assume that links fail according to a given stochastic process. Under the assumption that the link failures are independent and identically distributed over time (possibly correlated across links), we provide almost sure convergence results for our subgradient algorithm.
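A toy simulation of the "average then step" structure: agents hold quadratic local objectives f_i(x) = (1/2)||x - c_i||^2, mix their iterates through a doubly stochastic weight matrix, and take a diminishing gradient step. The network, weights, and step sizes are invented for illustration, and the stochastic link-failure process of the paper is not modeled here.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim = 5, 2
c = rng.standard_normal((n_agents, dim))          # each agent's private target; the optimum is mean(c)

# Doubly stochastic mixing matrix for a ring network.
W = np.eye(n_agents) * 0.5
for i in range(n_agents):
    W[i, (i - 1) % n_agents] += 0.25
    W[i, (i + 1) % n_agents] += 0.25

x = rng.standard_normal((n_agents, dim))          # initial local estimates
for k in range(1, 2001):
    step = 1.0 / k                                # diminishing step size
    grad = x - c                                  # gradient of each agent's local objective
    x = W @ x - step * grad                       # consensus averaging followed by a local step

print("consensus estimates:\n", np.round(x, 3))
print("optimum (mean of c): ", np.round(c.mean(axis=0), 3))
```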

Journal ArticleDOI
TL;DR: This paper studies two problems which often occur in wireless sensor network applications, reaching consensus on local variables and cooperatively solving a convex optimization problem, and provides diminishing step size algorithms which guarantee asymptotic convergence for both.
Abstract: In this paper, we study two problems which often occur in various applications arising in wireless sensor networks. These are the problem of reaching an agreement on the value of local variables in a network of computational agents and the problem of cooperative solution to a convex optimization problem, where the objective function is the aggregate sum of local convex objective functions. We incorporate the presence of a random communication graph between the agents in our model as a more realistic abstraction of the gossip and broadcast communication protocols of a wireless network. An added ingredient is the presence of local constraint sets to which the local variables of each agent are constrained. Our model allows for the objective functions to be nondifferentiable and accommodates the presence of noisy communication links and subgradient errors. For the consensus problem we provide a diminishing step size algorithm which guarantees asymptotic convergence. The distributed optimization algorithm uses two diminishing step size sequences to account for communication noise and subgradient errors. We establish conditions on these step sizes under which we can achieve the dual task of reaching consensus and convergence to the optimal set with probability one. In both cases we consider the constant step size behavior of the algorithm and establish asymptotic error bounds.

Journal ArticleDOI
TL;DR: This correspondence studies cooperative jamming to increase the physical layer security of a wiretap fading channel via distributed relays and shows that the optimization problem can be solved using a combination of convex optimization and a one-dimensional search.
Abstract: This correspondence studies cooperative jamming (CJ) to increase the physical layer security of a wiretap fading channel via distributed relays. We first provide the feasibility conditions for the positivity of the secrecy rate and then show that the optimization problem can be solved using a combination of convex optimization and a one-dimensional search. Distributed implementation to realize the CJ solution and extension to deal with per-group relay power constraints are discussed.

BookDOI
01 Jun 2011
TL;DR: The material presented provides a survey of the state-of-the-art theory and practice in fixed-point algorithms, identifying emerging problems driven by applications, and discussing new approaches for solving these problems.
Abstract: "Fixed-Point Algorithms for Inverse Problems in Science and Engineering" presents some of the most recent work from top-notch researchers studying projection and other first-order fixed-point algorithms in several areas of mathematics and the applied sciences. The material presented provides a survey of the state-of-the-art theory and practice in fixed-point algorithms, identifying emerging problems driven by applications, and discussing new approaches for solving these problems. This book incorporates diverse perspectives from broad-ranging areas of research including variational analysis, numerical linear algebra, biotechnology, materials science, computational solid-state physics, and chemistry. Topics presented include: Theory of fixed-point algorithms: convex analysis, convex optimization, subdifferential calculus, nonsmooth analysis, proximal point methods, projection methods, resolvent and related fixed-point theoretic methods, and monotone operator theory. Numerical analysis of fixed-point algorithms: choice of step lengths, of weights, of blocks for block-iterative and parallel methods, and of relaxation parameters; regularization of ill-posed problems; numerical comparison of various methods. Areas of application: engineering (image and signal reconstruction and decompression problems), computer tomography and radiation treatment planning (convex feasibility problems), astronomy (adaptive optics), crystallography (molecular structure reconstruction), computational chemistry (molecular structure simulation) and other areas. Because of the variety of applications presented, this book can easily serve as a basis for new and innovative research and collaboration.

Posted Content
TL;DR: Submodular functions are relevant to machine learning for at least two reasons: (1) some problems may be expressed directly as the optimization of submodular functions and (2) the Lovász extension of submodular functions provides a useful set of regularization functions for supervised and unsupervised learning, as discussed by the authors.
Abstract: Submodular functions are relevant to machine learning for at least two reasons: (1) some problems may be expressed directly as the optimization of submodular functions and (2) the Lovász extension of submodular functions provides a useful set of regularization functions for supervised and unsupervised learning. In this monograph, we present the theory of submodular functions from a convex analysis perspective, presenting tight links between certain polyhedra, combinatorial optimization and convex optimization problems. In particular, we show how submodular function minimization is equivalent to solving a wide variety of convex optimization problems. This allows the derivation of new efficient algorithms for approximate and exact submodular function minimization with theoretical guarantees and good practical performance. By listing many examples of submodular functions, we review various applications to machine learning, such as clustering, experimental design, sensor placement, graphical model structure learning or subset selection, as well as a family of structured sparsity-inducing norms that can be derived and used from submodular functions.
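A small sketch of the Lovász extension mentioned in the abstract: given a set function F with F(∅) = 0, its extension at w is obtained by sorting the coordinates of w in decreasing order and accumulating the marginal gains of F along that order; for submodular F this extension is convex, and on {0,1}^n indicator vectors it agrees with F. The example set function below (a simple coverage function) is my own choice for illustration.

```python
import numpy as np

def lovasz_extension(F, w):
    """Evaluate the Lovasz extension of a set function F (with F(set()) == 0) at w."""
    order = np.argsort(-np.asarray(w, dtype=float))   # coordinates of w in decreasing order
    value, selected = 0.0, set()
    prev = F(selected)
    for j in order:
        selected = selected | {int(j)}
        cur = F(selected)
        value += w[j] * (cur - prev)                  # marginal gain of F weighted by w_j
        prev = cur
    return value

# Example: a coverage function F(S) = |union of the sets covered by the elements of S| (submodular).
covers = [{0, 1}, {1, 2}, {3}]
F = lambda S: len(set().union(*(covers[i] for i in S))) if S else 0
print(lovasz_extension(F, [1.0, 0.0, 0.0]))   # equals F({0}) = 2 on an indicator vector
print(lovasz_extension(F, [0.5, 0.4, 0.1]))   # value at a fractional point
```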

Posted Content
TL;DR: In this paper, the authors propose scaled sparse linear regression to jointly estimate the regression coefficients and the noise level in a linear model, via a convex minimization of a penalized joint loss function.
Abstract: Scaled sparse linear regression jointly estimates the regression coefficients and noise level in a linear model. It chooses an equilibrium with a sparse regression method by iteratively estimating the noise level via the mean residual square and scaling the penalty in proportion to the estimated noise level. The iterative algorithm costs little beyond the computation of a path or grid of the sparse regression estimator for penalty levels above a proper threshold. For the scaled lasso, the algorithm is a gradient descent in a convex minimization of a penalized joint loss function for the regression coefficients and noise level. Under mild regularity conditions, we prove that the scaled lasso simultaneously yields an estimator for the noise level and an estimated coefficient vector satisfying certain oracle inequalities for prediction, the estimation of the noise level and the regression coefficients. These inequalities provide sufficient conditions for the consistency and asymptotic normality of the noise level estimator, including certain cases where the number of variables is of greater order than the sample size. Parallel results are provided for the least squares estimation after model selection by the scaled lasso. Numerical results demonstrate the superior performance of the proposed methods over an earlier proposal of joint convex minimization.
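A rough sketch of the alternation described in the abstract, using scikit-learn's Lasso as the sparse regression subroutine: estimate the noise level from the mean residual square, rescale the penalty in proportion to it, and refit until the noise estimate stabilizes. The choice of lam0 and the stopping rule are illustrative assumptions, not the paper's prescription.

```python
import numpy as np
from sklearn.linear_model import Lasso

def scaled_lasso(X, y, lam0=None, n_iter=20, tol=1e-6):
    """Alternate between a lasso fit with penalty sigma*lam0 and a noise-level update."""
    n, p = X.shape
    if lam0 is None:
        lam0 = np.sqrt(2.0 * np.log(p) / n)            # a common 'universal' penalty level (assumption)
    sigma = np.std(y)                                   # crude initial noise estimate
    beta = np.zeros(p)
    for _ in range(n_iter):
        model = Lasso(alpha=sigma * lam0, fit_intercept=False)
        beta = model.fit(X, y).coef_
        sigma_new = np.sqrt(np.mean((y - X @ beta) ** 2))   # noise level via the mean residual square
        if abs(sigma_new - sigma) < tol:
            sigma = sigma_new
            break
        sigma = sigma_new
    return beta, sigma

rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.standard_normal((n, p))
beta_true = np.zeros(p); beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.5 * rng.standard_normal(n)
beta_hat, sigma_hat = scaled_lasso(X, y)
print("estimated noise level:", round(float(sigma_hat), 3))
print("support recovered:", np.flatnonzero(np.round(beta_hat, 2)))
```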

Journal ArticleDOI
TL;DR: An asynchronous broadcast-based algorithm is proposed in which the communications over the network are subject to random link failures, and the convergence properties of the algorithm are investigated for a diminishing (random) stepsize and a constant stepsize.
Abstract: We consider a distributed multi-agent network system where each agent has its own convex objective function, which can be evaluated with stochastic errors. The problem consists of minimizing the sum of the agent functions over a commonly known constraint set, but without a central coordinator and without agents sharing the explicit form of their objectives. We propose an asynchronous broadcast-based algorithm where the communications over the network are subject to random link failures. We investigate the convergence properties of the algorithm for a diminishing (random) stepsize and a constant stepsize, where each agent chooses its own stepsize independently of the other agents. Under some standard conditions on the gradient errors, we establish almost sure convergence of the method to an optimal point for diminishing stepsize. For constant stepsize, we establish some error bounds on the expected distance from the optimal point and the expected function value. We also provide numerical results.

Journal ArticleDOI
TL;DR: The problem of robust mode-dependent delayed state feedback H∞ control is investigated for a class of uncertain time-delay systems with Markovian switching parameters and mixed discrete, neutral, and distributed delays using Lyapunov-Krasovskii functional theory.
Abstract: The problem of robust mode-dependent delayed state feedback H∞ control is investigated for a class of uncertain time-delay systems with Markovian switching parameters and mixed discrete, neutral, and distributed delays. Based on the Lyapunov-Krasovskii functional theory, new required sufficient conditions are established in terms of delay-dependent linear matrix inequalities for the stochastic stability and stabilization of the considered system using some free matrices. The desired control is derived based on a convex optimization method such that the resulting closed-loop system is stochastically stable and satisfies a prescribed level of H∞ performance, simultaneously. Finally, two numerical examples are given to illustrate the effectiveness of our approach.

Journal ArticleDOI
TL;DR: A new step-size adaptation method for compressed-sensing-based CT reconstruction is presented and evaluated on the problems of angular undersampling, data loss due to metal implants, limited-view-angle tomography and interior tomography, in comparison with an existing ASD-POCS implementation.
Abstract: In computed tomography there are different situations where reconstruction has to be performed with limited raw data. In the past few years it has been shown that algorithms which are based on compressed sensing theory are able to handle incomplete datasets quite well. As a cost function, these algorithms use the l1-norm of the image after it has been transformed by a sparsifying transformation. This leads to an inequality-constrained convex optimization problem. Due to the large size of the optimization problem, some heuristic optimization algorithms have been proposed in the past few years. The most popular way is optimizing the raw data and sparsity cost functions separately in an alternating manner. In this paper we will follow this strategy and present a new method to adapt these optimization steps. Compared to existing methods which perform similarly, the proposed method needs no a priori knowledge about the raw data consistency. It is ensured that the algorithm converges to the lowest possible value of the raw data cost function, while holding the sparsity constraint at a low value. This is achieved by transferring the step-size determination of both optimization procedures into the raw data domain, where they are adapted to each other. To evaluate the algorithm, we process measured clinical datasets. To cover a wide field of possible applications, we focus on the problems of angular undersampling, data loss due to metal implants, limited-view-angle tomography and interior tomography. In all cases the presented method reaches convergence within less than 25 iteration steps, while using a constant set of algorithm control parameters. The image artifacts caused by incomplete raw data are mostly removed without introducing new effects like staircasing. All scenarios are compared to an existing implementation of the ASD-POCS algorithm, which realizes the step-size adaptation in a different way. Additional prior information as proposed by the PICCS algorithm can be incorporated easily into the optimization process.

Posted Content
TL;DR: This work brings together and notably extends various types of structured monotone inclusion problems and their solution methods; the application to convex minimization problems is given special attention.
Abstract: We propose a primal-dual splitting algorithm for solving monotone inclusions involving a mixture of sums, linear compositions, and parallel sums of set-valued and Lipschitzian operators. An important feature of the algorithm is that the Lipschitzian operators present in the formulation can be processed individually via explicit steps, while the set-valued operators are processed individually via their resolvents. In addition, the algorithm is highly parallel in that most of its steps can be executed simultaneously. This work brings together and notably extends various types of structured monotone inclusion problems and their solution methods. The application to convex minimization problems is given special attention.

Journal ArticleDOI
TL;DR: A chance-constrained approach is presented that plans the future probabilistic distribution of the vehicle state so that the probability of failure is below a specified threshold; a customized solution method is introduced that returns almost-optimal solutions along with a hard bound on the level of suboptimality.
Abstract: Autonomous vehicles need to plan trajectories to a specified goal that avoid obstacles. For robust execution, we must take into account uncertainty, which arises due to uncertain localization, modeling errors, and disturbances. Prior work handled the case of set-bounded uncertainty. We present here a chance-constrained approach, which uses instead a probabilistic representation of uncertainty. The new approach plans the future probabilistic distribution of the vehicle state so that the probability of failure is below a specified threshold. Failure occurs when the vehicle collides with an obstacle or leaves an operator-specified region. The key idea behind the approach is to use bounds on the probability of collision to show that, for linear-Gaussian systems, we can approximate the nonconvex chance-constrained optimization problem as a disjunctive convex program. This can be solved to global optimality using branch-and-bound techniques. In order to improve computation time, we introduce a customized solution method that returns almost-optimal solutions along with a hard bound on the level of suboptimality. We present an empirical validation with an aircraft obstacle avoidance example.