
Showing papers on "Convex optimization published in 2003"


Journal ArticleDOI
TL;DR: This article obtains parallel results in a more general setting, where the dictionary D can arise from two or several bases, frames, or even less structured systems, and sketches three applications: separating linear features from planar ones in 3D data, noncooperative multiuser encoding, and identification of over-complete independent component models.
Abstract: Given a dictionary D = {dk} of vectors dk, we seek to represent a signal S as a linear combination S = ∑k γ(k)dk, with scalar coefficients γ(k). In particular, we aim for the sparsest representation possible. In general, this requires a combinatorial optimization process. Previous work considered the special case where D is an overcomplete system consisting of exactly two orthobases and has shown that, under a condition of mutual incoherence of the two bases, and assuming that S has a sufficiently sparse representation, this representation is unique and can be found by solving a convex optimization problem: specifically, minimizing the l1 norm of the coefficients γ. In this article, we obtain parallel results in a more general setting, where the dictionary D can arise from two or several bases, frames, or even less structured systems. We sketch three applications: separating linear features from planar ones in 3D data, noncooperative multiuser encoding, and identification of over-complete independent component models.
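The l1 relaxation the paper builds on can be written as a linear program; a minimal sketch, with a toy dictionary and signal invented for illustration (not from the paper):

```python
import numpy as np
from scipy.optimize import linprog

# Sketch of the l1 relaxation: min ||gamma||_1  s.t.  D @ gamma = S,
# recast as a linear program with gamma = u - v and u, v >= 0.
def basis_pursuit(D, S):
    m, n = D.shape
    c = np.ones(2 * n)                 # sum(u) + sum(v) equals ||gamma||_1
    res = linprog(c, A_eq=np.hstack([D, -D]), b_eq=S,
                  bounds=[(0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]

# Toy overcomplete dictionary: the 2x2 identity plus one extra atom.
D = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
S = np.array([2.0, 2.0])
gamma = basis_pursuit(D, S)            # sparsest choice uses the third atom alone
```

Here the third atom alone reproduces S, so the l1 minimizer is the 1-sparse coefficient vector (0, 0, 2) rather than the 2-sparse alternative (2, 2, 0).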

3,158 citations


Proceedings Article
21 Aug 2003
TL;DR: An algorithm for online convex programming is introduced and shown to be a generalization of infinitesimal gradient ascent, and the results here imply that generalized infinitesimal gradient ascent (GIGA) is universally consistent.
Abstract: Convex programming involves a convex set F ⊆ Rn and a convex cost function c : F → R. The goal of convex programming is to find a point in F which minimizes c. In online convex programming, the convex set is known in advance, but in each step of some repeated optimization problem, one must select a point in F before seeing the cost function for that step. This can be used to model factory production, farm production, and many other industrial optimization problems where one is unaware of the value of the items produced until they have already been constructed. We introduce an algorithm for this domain. We also apply this algorithm to repeated games, and show that it is really a generalization of infinitesimal gradient ascent, and the results here imply that generalized infinitesimal gradient ascent (GIGA) is universally consistent.
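The greedy-projection idea is simple to sketch: play a point, observe the cost, step along its gradient, and project back onto the feasible set. The feasible set, cost sequence, and step-size schedule below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Online projected gradient descent with step size eta_t = 1/sqrt(t).
def project_unit_ball(x):              # F = unit Euclidean ball (illustrative)
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

x = np.zeros(2)
total = 0.0
z = np.array([1.0, 0.0])               # repeated cost c_t(x) = ||x - z||^2
for t in range(1, 51):
    total += float(np.sum((x - z) ** 2))   # cost revealed after playing x
    grad = 2.0 * (x - z)
    x = project_unit_ball(x - grad / np.sqrt(t))
# the play matches the best fixed point z after a single corrective step,
# so regret against the best fixed decision stays bounded here
```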

2,273 citations


Journal ArticleDOI
TL;DR: It is proved that the set of all Lambertian reflectance functions (the mapping from surface normals to intensities) obtained with arbitrary distant light sources lies close to a 9D linear subspace, implying that, in general, the set of images of a convex Lambertian object obtained under a wide variety of lighting conditions can be approximated accurately by a low-dimensional linear subspace, explaining prior empirical results.
Abstract: We prove that the set of all Lambertian reflectance functions (the mapping from surface normals to intensities) obtained with arbitrary distant light sources lies close to a 9D linear subspace. This implies that, in general, the set of images of a convex Lambertian object obtained under a wide variety of lighting conditions can be approximated accurately by a low-dimensional linear subspace, explaining prior empirical results. We also provide a simple analytic characterization of this linear space. We obtain these results by representing lighting using spherical harmonics and describing the effects of Lambertian materials as the analog of a convolution. These results allow us to construct algorithms for object recognition based on linear methods as well as algorithms that use convex optimization to enforce nonnegative lighting functions. We also show a simple way to enforce nonnegative lighting when the images of an object lie near a 4D linear space. We apply these algorithms to perform face recognition by finding the 3D model that best matches a 2D query image.
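The 9D subspace in question is spanned by the real spherical harmonics up to second order evaluated at the surface normal; a sketch of that basis using the standard normalization constants (the function name and layout are for illustration):

```python
import numpy as np

# First nine real spherical harmonics at a unit normal n = (x, y, z).
def sh9(n):
    x, y, z = n
    return np.array([
        0.282095,                                   # order 0
        0.488603 * y, 0.488603 * z, 0.488603 * x,   # order 1
        1.092548 * x * y, 1.092548 * y * z,         # order 2
        0.315392 * (3 * z * z - 1),
        1.092548 * x * z, 0.546274 * (x * x - y * y),
    ])

h = sh9((0.0, 0.0, 1.0))   # basis values at the "north pole" normal
```

Stacking sh9(n) over all pixel normals gives a 9-column matrix whose span approximates the image set under arbitrary distant lighting.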

1,634 citations


Journal ArticleDOI
TL;DR: SOCP formulations are given for four examples, including the convex quadratically constrained quadratic programming (QCQP) problem and problems involving fractional quadratic functions; many of the problems presented in the survey paper of Vandenberghe and Boyd as examples of SDPs can in fact be formulated as SOCPs and should be solved as such.
Abstract: Second-order cone programming (SOCP) problems are convex optimization problems in which a linear function is minimized over the intersection of an affine linear manifold with the Cartesian product of second-order (Lorentz) cones. Linear programs, convex quadratic programs and quadratically constrained convex quadratic programs can all be formulated as SOCP problems, as can many other problems that do not fall into these three categories. These latter problems model applications from a broad range of fields from engineering, control and finance to robust optimization and combinatorial optimization. On the other hand semidefinite programming (SDP)—that is the optimization problem over the intersection of an affine set and the cone of positive semidefinite matrices—includes SOCP as a special case. Therefore, SOCP falls between linear (LP) and quadratic (QP) programming and SDP. Like LP, QP and SDP problems, SOCP problems can be solved in polynomial time by interior point methods. The computational effort per iteration required by these methods to solve SOCP problems is greater than that required to solve LP and QP problems but less than that required to solve SDP’s of similar size and structure. Because the set of feasible solutions for an SOCP problem is not polyhedral as it is for LP and QP problems, it is not readily apparent how to develop a simplex or simplex-like method for SOCP. While SOCP problems can be solved as SDP problems, doing so is not advisable both on numerical grounds and computational complexity concerns. For instance, many of the problems presented in the survey paper of Vandenberghe and Boyd [VB96] as examples of SDPs can in fact be formulated as SOCPs and should be solved as such. In §2, 3 below we give SOCP formulations for four of these examples: the convex quadratically constrained quadratic programming (QCQP) problem, problems involving fractional quadratic functions.
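The standard trick behind such reformulations embeds a convex quadratic constraint into a second-order cone via the identity ||x||^2 <= t iff ||(2x, t-1)||_2 <= t+1. A quick numeric spot check on invented random points:

```python
import numpy as np

# Equivalence of a convex quadratic constraint and its SOC reformulation.
def quad_holds(x, t):
    return float(np.dot(x, x)) <= t

def soc_holds(x, t):
    return float(np.linalg.norm(np.append(2.0 * np.asarray(x), t - 1.0))) <= t + 1.0

rng = np.random.default_rng(0)
agree = all(quad_holds(x, t) == soc_holds(x, t)
            for x, t in ((rng.normal(size=3), rng.uniform(0, 10))
                         for _ in range(1000)))
```

Squaring both sides of the cone constraint gives 4||x||^2 + (t-1)^2 <= (t+1)^2, which simplifies exactly to ||x||^2 <= t.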

1,535 citations


Journal ArticleDOI
TL;DR: It is proved that all expectation-maximization algorithms and classes of Legendre minimization and variational bounding algorithms can be reexpressed in terms of CCCP.
Abstract: The concave-convex procedure (CCCP) is a way to construct discrete-time iterative dynamical systems that are guaranteed to decrease global optimization and energy functions monotonically. This procedure can be applied to almost any optimization problem, and many existing algorithms can be interpreted in terms of it. In particular, we prove that all expectation-maximization algorithms and classes of Legendre minimization and variational bounding algorithms can be reexpressed in terms of CCCP. We show that many existing neural network and mean-field theory algorithms are also examples of CCCP. The generalized iterative scaling algorithm and Sinkhorn's algorithm can also be expressed as CCCP by changing variables. CCCP can be used both as a new way to understand, and prove the convergence of, existing optimization algorithms and as a procedure for generating new algorithms.
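A one-dimensional toy run of the procedure (the function and its convex/concave split are invented for illustration): write f(x) = x^4 - 2x^2 as the convex part x^4 plus the concave part -2x^2, linearize the concave part at the current iterate, and minimize the resulting convex function:

```python
# CCCP on f(x) = x**4 - 2*x**2: each step minimizes x**4 + (-4*x_k)*x,
# the convex part plus the linearized concave part, giving x = x_k**(1/3).
f = lambda x: x ** 4 - 2 * x ** 2
x = 2.0
values = [f(x)]
for _ in range(60):
    x = x ** (1.0 / 3.0)       # closed-form inner minimization: 4x^3 = 4x_k
    values.append(f(x))
# f(x_k) decreases monotonically (up to rounding) toward the minimum f(1) = -1
```

The monotone decrease is exactly the guarantee the abstract describes; here the iterates converge to the minimizer x = 1.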

1,253 citations


Journal ArticleDOI
TL;DR: This paper addresses the joint design of transmit and receive beamforming or linear processing for multicarrier multiple-input multiple-output (MIMO) channels under a variety of design criteria by developing a unified framework based on considering two families of objective functions that embrace most reasonable criteria to design a communication system.
Abstract: This paper addresses the joint design of transmit and receive beamforming or linear processing (commonly termed linear precoding at the transmitter and equalization at the receiver) for multicarrier multiple-input multiple-output (MIMO) channels under a variety of design criteria. Instead of considering each design criterion in a separate way, we generalize the existing results by developing a unified framework based on considering two families of objective functions that embrace most reasonable criteria to design a communication system: Schur-concave and Schur-convex functions. Once the optimal structure of the transmit-receive processing is known, the design problem simplifies and can be formulated within the powerful framework of convex optimization theory, in which a great number of interesting design criteria can be easily accommodated and efficiently solved, even though closed-form expressions may not exist. From this perspective, we analyze a variety of design criteria, and in particular, we derive optimal beamvectors in the sense of having minimum average bit error rate (BER). Additional constraints on the peak-to-average ratio (PAR) or on the signal dynamic range are easily included in the design. We propose two multilevel water-filling practical solutions that perform very close to the optimal in terms of average BER with a low implementation complexity. If cooperation among the processing operating at different carriers is allowed, the performance improves significantly. Interestingly, with carrier cooperation, it turns out that the exact optimal solution in terms of average BER can be obtained in closed form.
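The classical single-level water-filling that the paper's multilevel solutions build on can be sketched as follows; the channel gains, power budget, and bisection tolerance are illustrative assumptions:

```python
import numpy as np

# Water-filling: allocate total power P across subchannel gains g as
# p_i = max(0, mu - 1/g_i), with the water level mu found by bisection.
def waterfill(g, P, iters=100):
    lo, hi = 0.0, P + np.max(1.0 / g)
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        p = np.maximum(0.0, mu - 1.0 / g)
        lo, hi = (mu, hi) if p.sum() < P else (lo, mu)
    return np.maximum(0.0, 0.5 * (lo + hi) - 1.0 / g)

g = np.array([2.0, 1.0, 0.5])
p = waterfill(g, P=3.0)       # stronger subchannels receive more power
```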

1,243 citations


Journal ArticleDOI
TL;DR: It is shown that the mirror descent algorithm (MDA) can be viewed as a nonlinear projected-subgradient type method, derived from using a general distance-like function instead of the usual Euclidean squared distance, and that convergence and efficiency estimates can be derived in a simple way.
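As a concrete special case, taking the distance-like function to be the entropy over the probability simplex turns the projected-subgradient step into a multiplicative (exponentiated-gradient) update; the cost vector and step size below are invented for illustration:

```python
import numpy as np

# Entropic mirror descent on the simplex: x <- x * exp(-eta * g) / Z,
# here minimizing the linear cost c @ x (so the gradient g is just c).
c = np.array([3.0, 1.0, 2.0])
eta = 0.5
x = np.ones(3) / 3
for _ in range(200):
    x = x * np.exp(-eta * c)
    x /= x.sum()
# the iterates concentrate on the simplex vertex minimizing c @ x
```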

1,183 citations


Book
01 Jan 2003
TL;DR: A graduate-level introduction to polynomial-time interior-point methods in nonlinear optimization, built on the general theory of self-concordant functions.
Abstract: It was in the middle of the 1980s, when the seminal paper by Karmarkar opened a new epoch in nonlinear optimization. The importance of this paper, containing a new polynomial-time algorithm for linear optimization problems, was not only in its complexity bound. At that time, the most surprising feature of this algorithm was that the theoretical prediction of its high efficiency was supported by excellent computational results. This unusual fact dramatically changed the style and directions of the research in nonlinear optimization. Thereafter it became more and more common that the new methods were provided with a complexity analysis, which was considered a better justification of their efficiency than computational experiments. In a new rapidly developing field, which got the name "polynomial-time interior-point methods", such a justification was obligatory. After almost fifteen years of intensive research, the main results of this development started to appear in monographs [12, 14, 16, 17, 18, 19]. Approximately at that time the author was asked to prepare a new course on nonlinear optimization for graduate students. The idea was to create a course which would reflect the new developments in the field. Actually, this was a major challenge. At the time only the theory of interior-point methods for linear optimization was polished enough to be explained to students. The general theory of self-concordant functions had appeared in print only once in the form of research monograph [12].

1,064 citations


Journal ArticleDOI
TL;DR: This study sheds light on the good performance of some recently proposed linear classification methods including boosting and support vector machines and shows their limitations and suggests possible improvements.
Abstract: We study how closely the optimal Bayes error rate can be approximately reached using a classification algorithm that computes a classifier by minimizing a convex upper bound of the classification error function. The measurement of closeness is characterized by the loss function used in the estimation. We show that such a classification scheme can be generally regarded as a (nonmaximum-likelihood) conditional in-class probability estimate, and we use this analysis to compare various convex loss functions that have appeared in the literature. Furthermore, the theoretical insight allows us to design good loss functions with desirable properties. Another aspect of our analysis is to demonstrate the consistency of certain classification methods using convex risk minimization. This study sheds light on the good performance of some recently proposed linear classification methods including boosting and support vector machines. It also shows their limitations and suggests possible improvements.
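The surrogate-loss picture the analysis relies on is easy to see numerically: the hinge (support vector machines), scaled logistic, and exponential (boosting) losses are convex upper bounds of the 0-1 loss as functions of the margin m = y*f(x). A quick spot check (not code from the paper):

```python
import numpy as np

# Convex surrogates vs. the 0-1 loss on the margin axis.
m = np.linspace(-2.0, 2.0, 401)
zero_one = (m <= 0).astype(float)
hinge = np.maximum(0.0, 1.0 - m)           # SVM loss
logistic = np.log2(1.0 + np.exp(-m))       # logistic loss, scaled to bound 0-1
exponential = np.exp(-m)                   # boosting (exponential) loss
bounds_hold = (np.all(hinge >= zero_one)
               and np.all(logistic >= zero_one)
               and np.all(exponential >= zero_one))
```

Minimizing any of these convex upper bounds is the "convex risk minimization" scheme whose consistency the paper analyzes.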

826 citations


Proceedings ArticleDOI
04 Jun 2003
TL;DR: A heuristic for minimizing the rank of a positive semidefinite matrix over a convex set using the logarithm of the determinant as a smooth approximation for rank is presented and readily extended to handle general matrices.
Abstract: We present a heuristic for minimizing the rank of a positive semidefinite matrix over a convex set. We use the logarithm of the determinant as a smooth approximation for rank, and locally minimize this function to obtain a sequence of trace minimization problems. We then present a lemma that relates the rank of any general matrix to that of a corresponding positive semidefinite one. Using this, we readily extend the proposed heuristic to handle general matrices. We examine the vector case as a special case, where the heuristic reduces to an iterative ℓ1-norm minimization technique. As practical applications of the rank minimization problem and our heuristic, we consider two examples: minimum-order system realization with time-domain constraints, and finding lowest-dimension embedding of points in a Euclidean space from noisy distance data.
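In the vector case the heuristic becomes iteratively reweighted ℓ1 minimization; a minimal sketch on an invented toy instance, solving each weighted ℓ1 problem as a linear program:

```python
import numpy as np
from scipy.optimize import linprog

# Vector special case of the log-det heuristic: iteratively reweighted
# l1 minimization for a sparse solution of A @ x = b.
def reweighted_l1(A, b, iters=5, delta=1e-6):
    m, n = A.shape
    w = np.ones(n)
    x = np.zeros(n)
    for _ in range(iters):
        # min sum_i w_i * |x_i|  s.t.  A x = b, via x = u - v with u, v >= 0
        res = linprog(np.concatenate([w, w]), A_eq=np.hstack([A, -A]),
                      b_eq=b, bounds=[(0, None)] * (2 * n))
        x = res.x[:n] - res.x[n:]
        w = 1.0 / (np.abs(x) + delta)   # weights from the log-sum surrogate
    return x

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
b = np.array([3.0, 3.0])
x = reweighted_l1(A, b)                 # finds the 1-sparse solution (0, 0, 3)
```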

614 citations


BookDOI
01 Feb 2003
TL;DR: This thesis treats analysis and design of piecewise linear control systems, and it is shown how Lyapunov functions with a discontinuous dependence on the discrete state can be computed via convex optimization.
Abstract: This thesis treats analysis and design of piecewise linear control systems. Piecewise linear systems capture many of the most common nonlinearities in engineering systems, and they can also be used for approximation of other nonlinear systems. Several aspects of linear systems with quadratic constraints are generalized to piecewise linear systems with piecewise quadratic constraints. It is shown how uncertainty models for linear systems can be extended to piecewise linear systems, and how these extensions give insight into the classical trade-offs between fidelity and complexity of a model. Stability of piecewise linear systems is investigated using piecewise quadratic Lyapunov functions. Piecewise quadratic Lyapunov functions are much more powerful than the commonly used quadratic Lyapunov functions. It is shown how piecewise quadratic Lyapunov functions can be computed via convex optimization in terms of linear matrix inequalities. The computations are based on a compact parameterization of continuous piecewise quadratic functions and conditional analysis using the S-procedure. A unifying framework for computation of a variety of Lyapunov functions via convex optimization is established based on this parameterization. Systems with attractive sliding modes and systems with bounded regions of attraction are also treated. Dissipativity analysis and optimal control problems with piecewise quadratic cost functions are solved via convex optimization. The basic results are extended to fuzzy systems, hybrid systems and smooth nonlinear systems. It is shown how Lyapunov functions with a discontinuous dependence on the discrete state can be computed via convex optimization. An automated procedure for increasing the flexibility of the Lyapunov function candidate is suggested based on linear programming duality. A Matlab toolbox that implements several of the results derived in the thesis is presented.
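For intuition, the non-piecewise special case of this machinery reduces to the classical Lyapunov equation, which needs no LMI solver; the stable system matrix below is an invented example, not from the thesis:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Quadratic Lyapunov function V(x) = x' P x for a stable linear system
# xdot = A x, from the Lyapunov equation A' P + P A = -Q with Q > 0.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
Q = np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)  # solves (A.T) P + P (A.T).T = -Q
eigs = np.linalg.eigvalsh(P)
# P positive definite certifies stability; the piecewise quadratic case
# replaces the single P by one P_i per region, coupled through LMIs.
```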

Book ChapterDOI
01 Jan 2003
TL;DR: A new transistor sizing algorithm, which couples synchronous timing analysis with convex optimization techniques, is presented, which shows that any point found to be locally optimal is certain to be globally optimal.
Abstract: A new transistor sizing algorithm, which couples synchronous timing analysis with convex optimization techniques, is presented. Let A be the sum of transistor sizes, T the longest delay through the circuit, and K a positive constant. Using a distributed RC model, each of the following three programs is shown to be convex: 1) minimize A subject to T < K; 2) minimize T subject to A < K; 3) minimize A·T^K. The convex equations describing T are a particular class of functions called posynomials. Convex programs have many pleasant properties, and chief among these is the fact that any point found to be locally optimal is certain to be globally optimal. TILOS (Timed Logic Synthesizer) is a program that sizes transistors in CMOS circuits. Preliminary results of TILOS's transistor sizing algorithm are presented.
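The convexity claim rests on the posynomial structure: substituting each size w_i = exp(y_i) turns every monomial into the exponential of an affine function of y, hence convex. A numeric spot check of midpoint convexity on an invented two-variable posynomial:

```python
import numpy as np

# f(w1, w2) = w1/w2 + w2 in log variables y: each monomial -> exp(affine in y).
def g(y):
    return np.exp(y[0] - y[1]) + np.exp(y[1])

rng = np.random.default_rng(1)
midpoint_convex = all(
    g((a + b) / 2) <= (g(a) + g(b)) / 2 + 1e-9
    for a, b in (rng.normal(size=(2, 2)) for _ in range(200))
)
```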

Journal ArticleDOI
TL;DR: A theoretical analysis shows that the proposed method provides results that are better than, or at least equal to, those of methods presented in the literature, and the proposed design method is applied to the control of an inverted pendulum.
Abstract: Relaxed conditions for stability of nonlinear, continuous and discrete-time systems given by fuzzy models are presented. A theoretical analysis shows that the proposed methods provide results that are better than, or at least equal to, those of methods presented in the literature. Numerical results exemplify this fact. These results are also used for the design of fuzzy regulators and observers. The nonlinear systems are represented by fuzzy models proposed by Takagi and Sugeno (1985). The stability analysis and the design of controllers are described by linear matrix inequalities, which can be solved efficiently using convex programming techniques. The specification of the decay rate and constraints on the control input and output are also discussed.

Journal ArticleDOI
TL;DR: In this paper, a robust stabilization problem for a class of multi-input and multi-output (MIMO) discrete-time nonlinear systems with both state and control inputs containing non-linear perturbations is discussed.
Abstract: This paper discusses a robust stabilization problem for a class of multi-input and multi-output (MIMO) discrete-time non-linear systems with both state and control inputs containing non-linear perturbations. The problem is solved via static output feedback and dynamic output feedback, respectively. A unified approach is used to cast the problem into a convex optimization problem involving linear matrix inequalities (LMIs); all the controllers robustly stabilize the systems and maximize the bound on the non-linear perturbations. This paper also extends the output feedback centralized design approach to a class of discrete-time MIMO non-linear decentralized systems, for which both robust static and dynamic output feedback controllers are obtained.

Proceedings ArticleDOI
25 Aug 2003
TL;DR: This paper presents a new approach to traffic matrix estimation using a regularization based on "entropy penalization", which chooses the traffic matrix consistent with the measured data that is information-theoretically closest to a model in which source/destination pairs are stochastically independent.
Abstract: Traffic matrices are required inputs for many IP network management tasks: for instance, capacity planning, traffic engineering and network reliability analysis. However, it is difficult to measure these matrices directly, and so there has been recent interest in inferring traffic matrices from link measurements and other more easily measured data. Typically, this inference problem is ill-posed, as it involves significantly more unknowns than data. Experience in many scientific and engineering fields has shown that it is essential to approach such ill-posed problems via "regularization". This paper presents a new approach to traffic matrix estimation using a regularization based on "entropy penalization". Our solution chooses the traffic matrix consistent with the measured data that is information-theoretically closest to a model in which source/destination pairs are stochastically independent. We use fast algorithms based on modern convex optimization theory to solve for our traffic matrices. We evaluate the algorithm with real backbone traffic and routing data, and demonstrate that it is fast, accurate, robust, and flexible.
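For a toy network with consistent marginal (link-load) measurements, the entropy-penalized estimate can be computed by iterative proportional fitting, which converges to the traffic matrix KL-closest to the independence model; all numbers below are invented:

```python
import numpy as np

# KL projection of an independence (gravity) prior onto measured
# row/column totals, via iterative proportional fitting.
prior = np.outer([0.6, 0.4], [0.7, 0.3])      # independence model
row = np.array([10.0, 6.0])                   # measured ingress loads
col = np.array([9.0, 7.0])                    # measured egress loads
T = prior * (row.sum() / prior.sum())
for _ in range(200):
    T *= (row / T.sum(axis=1))[:, None]       # match row sums
    T *= (col / T.sum(axis=0))[None, :]       # match column sums
```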

01 Jan 2003
TL;DR: It is shown how tools from modern nonlinear control theory can be used to synthesize finite horizon MPC controllers with guaranteed stability, and more importantly, how some of the technical assumptions in the literature can be dispensed with by using a slightly more complex controller.
Abstract: Controlling a system with control and state constraints is one of the most important problems in control theory, but also one of the most challenging. Another important but just as demanding topic is robustness against uncertainties in a controlled system. One of the most successful approaches, both in theory and practice, to control constrained systems is model predictive control (MPC). The basic idea in MPC is to repeatedly solve optimization problems on-line to find an optimal input to the controlled system. In recent years, much effort has been spent to incorporate the robustness problem into this framework. The main part of the thesis revolves around minimax formulations of MPC for uncertain constrained linear discrete-time systems. A minimax strategy in MPC means that worst-case performance with respect to uncertainties is optimized. Unfortunately, many minimax MPC formulations yield intractable optimization problems with exponential complexity. Minimax algorithms for a number of uncertainty models are derived in the thesis. These include systems with bounded external additive disturbances, systems with uncertain gain, and systems described with linear fractional transformations. The central theme in the different algorithms is semidefinite relaxations. This means that the minimax problems are written as uncertain semidefinite programs, and then conservatively approximated using robust optimization theory. The result is an optimization problem with polynomial complexity. The use of semidefinite relaxations enables a framework that allows extensions of the basic algorithms, such as joint minimax control and estimation, and approximation of closed-loop minimax MPC using a convex programming framework.
Additional topics include development of an efficient optimization algorithm to solve the resulting semidefinite programs and connections between deterministic minimax MPC and stochastic risk-sensitive control. The remaining part of the thesis is devoted to stability issues in MPC for continuous-time nonlinear unconstrained systems. While stability of MPC for unconstrained linear systems essentially is solved with the linear quadratic controller, no such simple solution exists in the nonlinear case. It is shown how tools from modern nonlinear control theory can be used to synthesize finite horizon MPC controllers with guaranteed stability, and more importantly, how some of the technical assumptions in the literature can be dispensed with by using a slightly more complex controller.

Journal ArticleDOI
TL;DR: PENNON, a computer program for the solution of convex nonlinear and semidefinite programming (NLP-SDP) problems, implements a generalized version of the Augmented Lagrangian method originally introduced by Ben-Tal and Zibulevsky for convex NLP problems.
Abstract: We introduce a computer program PENNON for the solution of problems of convex Nonlinear and Semidefinite Programming (NLP-SDP). The algorithm used in PENNON is a generalized version of the Augmented Lagrangian method, originally introduced by Ben-Tal and Zibulevsky for convex NLP problems. We present a generalization of this algorithm to convex NLP-SDP problems, as implemented in PENNON, and give details of its implementation. The code can also solve second-order conic programming (SOCP) problems, as well as problems with a mixture of SDP, SOCP and NLP constraints. Results of extensive numerical tests and comparison with other optimization codes are presented. The test examples show that PENNON is particularly suitable for large sparse problems.
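The classical augmented Lagrangian iteration that PENNON generalizes is easy to sketch on a toy equality-constrained problem; the problem data and the closed-form inner solve below are invented for illustration:

```python
import numpy as np

# Augmented Lagrangian for: minimize x1^2 + x2^2  s.t.  h(x) = x1 + x2 - 1 = 0.
# Inner step minimizes L(x) = f(x) + lam*h(x) + (rho/2)*h(x)^2; then update lam.
lam, rho = 0.0, 10.0
x = np.zeros(2)
for _ in range(50):
    # by symmetry the inner minimizer has x1 = x2 = (rho - lam) / (2 + 2*rho)
    x = np.full(2, (rho - lam) / (2 + 2 * rho))
    lam += rho * (x.sum() - 1.0)           # multiplier update
# converges to the optimum x = (1/2, 1/2) with multiplier lam = -1
```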

Journal ArticleDOI
TL;DR: It is shown that there is a unique spectral density Φ which minimizes this Kullback-Leibler distance, that this optimal approximant is of the form Ψ/Q, where the "correction term" Q is a rational spectral density function, and that the coefficients of Q can be obtained numerically by solving a suitable convex optimization problem.
Abstract: We introduce a Kullback-Leibler (1968)-type distance between spectral density functions of stationary stochastic processes and solve the problem of optimal approximation of a given spectral density Ψ by one that is consistent with prescribed second-order statistics. In general, such statistics are expressed as the state covariance of a linear filter driven by a stochastic process whose spectral density is sought. In this context, we show (i) that there is a unique spectral density Φ which minimizes this Kullback-Leibler distance, (ii) that this optimal approximant is of the form Ψ/Q, where the "correction term" Q is a rational spectral density function, and (iii) that the coefficients of Q can be obtained numerically by solving a suitable convex optimization problem. In the special case where Ψ = 1, the convex functional becomes quadratic and the solution is then specified by linear equations.

BookDOI
01 Jan 2003
TL;DR: An edited volume whose contributions include the log-quadratic proximal (LQP) methodology in convex optimization algorithms and variational inequalities, as well as methods for solving large-scale fixed charge network flow problems.
Abstract: On Vector Quasi-Equilibrium Problems.- 1. Introduction.- 2. Preliminaries.- 3. Existence Results.- 4. Some Applications.- References.- The Log-Quadratic Proximal Methodology in Convex Optimization Algorithms and Variational Inequalities.- 1. Introduction.- 2. Lagrangians and Proximal Methods.- 2.1. The quadratic augmented Lagrangian.- 2.2. Proximal Minimization Algorithms.- 2.3. Entropic Proximal Methods and Modified Lagrangians.- 2.4. Difficulties with Entropic Proximal Methods.- 2.5. Toward Solutions to Difficulties.- 3. The Logarithmic-Quadratic Proximal Framework.- 3.1. The LQ-Function and its Conjugate: Basic Properties.- 3.2. The Logarithmic-Quadratic Proximal Minimization.- 4. The LQP in Action.- 4.1. Primal LQP for Variational Inequalities over Polyhedra.- 4.2. Lagrangian Methods for convex optimization and variational inequalities.- 4.3. Dual and Primal-Dual Decomposition schemes.- 4.4. Primal Decomposition: Block Gauss-Seidel Schemes for Linearly constrained Problems.- 4.5. Convex Feasibility Problems.- 4.6. Bundle Methods in Nonsmooth Optimization.- References.- The Continuum Model of Transportation Problem.- 1. Introduction.- 2. Calculus of the solution.- References.- The Economic Model for Demand-Supply Problems.- 1. Introduction.- 2. The first phase: formalization of the equilibrium.- 3. The second phase: formalization of the equilibrium.- 4. The dependence of the second phase on the first one.- 5. General model.- 6. Example.- References.- Constrained Problems of Calculus of Variations Via Penalization Technique.- 1. Introduction.- 2. Statement of the problem.- 3. An equivalent statement of the problem.- 4. Local minima.- 5. Penalty functions.- 6. Exact penalty functions.- 6.1. Properties of the function ?.- 6.2. Properties of the function G.- 6.3. The rate of descent of the function ?.- 6.4. An Exact Penalty function.- 7. Necessary conditions for an Extremum.- 7.1. Necessary conditions generated by classical variations.- 7.2. 
Discussion and Remarks.- References.- Variational Problems with Constraints Involving Higher-Order Derivatives.- 1. Introduction.- 2. Statement of the problem.- 3. An equivalent statement of the problem.- 4. Local minima.- 5. Properties of the function ?.- 5.1. A classical variation of z.- 5.2. The case z ? Z.- 5.3. The case z ? Z.- 6. Exact penalty functions.- 6.1. Properties of the function G.- 6.2. An Exact Penalty function.- 7. Necessary conditions for an Extremum.- References.- On the strong solvability of a unilateral boundary value problem for Nonlinear Parabolic Operators in the Plane.- 1. Introduction.- 2. Hypotheses and results.- 3. Preliminary results.- 4. Proof of the theorems.- References.- Solving a Special Class of Discrete Optimal Control Problems Via a Parallel Interior-Point Method.- 1. Introduction.- 2. Framework of the Method.- 3. Global convergence.- 4. A special class of discrete optimal control problems.- 5. Numerical experiments.- 6. Conclusions.- References.- Solving Large Scale Fixed Charge Network Flow Problems.- 1. Introduction.- 2. Problem Definition and Formulation.- 3. Solution Procedure.- 3.1. The DSSP.- 3.2. Local Search.- 4. Computational Results.- 5. Concluding Remarks.- References.- Variable Projection Methods for Large-Scale Quadratic Optimization in Data Analysis Applications.- 1. Introduction.- 2. Large QP Problems in Training Support Vector Machines.- 3. Numerical Solution of Image Restoration Problem.- 4. A Bivariate Interpolation Problem.- 5. Conclusions.- References.- Strong solvability of boundary value problems in elasticity with Unilateral Constraints.- 1. Introduction.- 2. Basic assumptions and main results.- 3. Preliminary results.- 4. Proof of the theorems.- References.- Time Dependent Variational Inequalities - Some Recent Trends.- 1. Introduction.- 2. Time - an additional parameter in variational inequalities.- 2.1. Time-dependent variational inequalities and quasi-variational inequalities.- 2.2. Some classic results on the differentiability of the projection on closed convex subsets in Hilbert space.- 2.3. Time-dependent variational inequalities with memory terms.- 3. Ordinary Differential Inclusions with Convex Constraints: Sweeping Processes.- 3.1. Moving convex sets and systems with hysteresis.- 3.2.
Sweeping processes and generalizations.- 4. Projected dynamical systems.- 4.1. Differentiability of the projection onto closed convex subsets revisited.- 4.2. Projected dynamical systems and stationarity.- 4.3. Well-posedness for projected dynamical systems.- 5. Some Asymptotic Results.- 5.1. Some classical results.- 5.2. An asymptotic result for full discretization.- 5.3. Some convergence results for continuous-time subgradient procedures for convex optimization.- References.- On the Contractibility of the Efficient and Weakly Efficient Sets in R2.- 1. Introduction.- 2. Preliminaries.- 3. Topological structure of the efficient sets of compact convex sets.- 4. Example.- References.- Existence Theorems for a Class of Variational Inequalities and Applications to a Continuous Model of Transportation.- 1. Introduction.- 2. Continuous transportation model.- 3. Existence Theorem.- References.- On Auxiliary Principle for Equilibrium Problems.- 1. Introduction.- 2. The auxiliary equilibrium problem.- 3. The auxiliary problem principle.- 4. Applications to variational inequalities and optimization problems.- 5. Concluding remarks.- References.- Multicriteria Spatial Price Networks: Statics and Dynamics.- 1. Introduction.- 2. The Multicriteria Spatial Price Model.- 3. Qualitative Properties.- 4. The Dynamics.- 5. The Discrete-Time Algorithm.- 6. Numerical Examples.- 7. Summary and Conclusions.- References.- Non regular data in unilateral variational problems.- 1. Introduction.- 2. The approach by truncation and approximation.- 3. Renormalized formulation.- 4. Multivalued operators and more general measures.- 5. Uniqueness and convergence.- References.- Equilibrium Concepts in Transportation Networks: Generalized Wardrop Conditions and Variational Formulations.- 1. Introduction.- 2. Equilibrium model in a traffic network.- References.- Variational Geometry and Equilibrium.- 1. Introduction.- 2. Variational Inequalities and Normals to Convex Sets.- 3. 
Quasi-Variational Inequalities and Normals to General Sets.- 4. Calculus and Solution Perturbations.- 5. Application to an Equilibrium Model with Aggregation.- References.- On the Calculation of Equilibrium in Time Dependent Traffic Networks.- 1. Introduction.- 2. Calculation of Equilibria.- 3. The algorithm.- 4. Applications and Examples.- 5. Conclusions.- References.- Mechanical Equilibrium and Equilibrium Systems.- 1. Introduction.- 2. Physical motivation.- 3. Statement of the mechanical force equilibrium problem.- 4. The principle of virtual work.- 5. Characterization of the constraints.- 6. Quasi-variational inequalities (QVI).- 7. Principle of virtual work in force fields under scleronomic and holonomic constraints.- 8. Dual form of the principle of virtual work in force field under scleronomic and holonomic constraints.- 9. Procedure for solving mechanical equilibrium problems.- 10. Existence of solutions.- References.- False Numerical Convergence in Some Generalized Newton Methods.- 1. Introduction.- 2. Some generalized Newton methods.- 3. False numerical convergence.- 4. An example.- 5. Avoiding false numerical convergence.- References.- Distance to the Solution Set of an Inequality with an Increasing Function.- 1. Introduction.- 2. Preliminaries.- 3. Distance to the solution set of the inequality with an arbitrary increasing function.- 4. Distance to the solution set of the inequality with an ICAR function.- 5. Inequalities with an increasing function defined on the entire space.- 6. Inequalities with a topical function.- References.- Transportation Networks with Capacity Constraints.- 1. Introduction.- 2. Wardrop's generalized equilibrium condition.- 3. A triangular network.- 4. More about generalized equilibrium principle.- 5. Capacity constraints and paradox.- References.

Journal ArticleDOI
TL;DR: Several important problems in control theory can be reformulated as semidefinite programming problems, i.e., minimization of a linear objective subject to linear matrix inequality constraints, yielding new results or new proofs for existing results from control theory.
Abstract: Several important problems in control theory can be reformulated as semidefinite programming problems, i.e., minimization of a linear objective subject to linear matrix inequality (LMI) constraints. From convex optimization duality theory, conditions for infeasibility of the LMIs, as well as dual optimization problems, can be formulated. These can in turn be reinterpreted in control or system theoretic terms, often yielding new results or new proofs for existing results from control theory. We explore such connections for a few problems associated with linear time-invariant systems.
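To make the LMI reformulation concrete: for a linear time-invariant system x' = Ax, feasibility of the Lyapunov LMI A^T P + P A < 0 with P > 0 can be settled by solving the Lyapunov equation A^T P + P A = -Q for some fixed Q > 0. The sketch below is illustrative only (the 2x2 system and the choice Q = I are arbitrary, not taken from the paper): it solves the equation as a small linear system and checks positive definiteness of P by Sylvester's criterion.

```python
# Feasibility of the Lyapunov LMI  A^T P + P A < 0, P > 0  for a stable
# 2x2 LTI system, checked by solving A^T P + P A = -Q for a fixed Q > 0.
# The (1,1), (1,2), (2,2) entries give a 3x3 linear system in (p11, p12, p22).

def lyapunov_2x2(a, q):
    """Solve A^T P + P A = -Q for symmetric P, with 2x2 A and symmetric Q."""
    (a11, a12), (a21, a22) = a
    (q11, q12), (_, q22) = q
    # Coefficient matrix for unknowns x = (p11, p12, p22).
    M = [
        [2 * a11, 2 * a21, 0.0],        # (1,1): 2*a11*p11 + 2*a21*p12
        [a12, a11 + a22, a21],          # (1,2): a12*p11 + (a11+a22)*p12 + a21*p22
        [0.0, 2 * a12, 2 * a22],        # (2,2): 2*a12*p12 + 2*a22*p22
    ]
    b = [-q11, -q12, -q22]
    # Gaussian elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 3):
                M[r][c] -= f * M[col][c]
            b[r] -= f * b[col]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (b[r] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    p11, p12, p22 = x
    return [[p11, p12], [p12, p22]]

A = [[0.0, 1.0], [-2.0, -3.0]]   # stable: eigenvalues -1 and -2
P = lyapunov_2x2(A, [[1.0, 0.0], [0.0, 1.0]])
# Sylvester's criterion: P > 0 iff p11 > 0 and det P > 0.
assert P[0][0] > 0 and P[0][0] * P[1][1] - P[0][1] ** 2 > 0
```

Since A is stable, a positive definite solution P exists, certifying feasibility of the LMI; for unstable A the same computation would return an indefinite P.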

Journal ArticleDOI
TL;DR: A new projection-based method, termed the hybrid projection-reflection (HPR) algorithm, is introduced for solving phase-retrieval problems featuring nonnegativity constraints in the object domain, motivated by properties of the HPR algorithm for convex constraints.
Abstract: The phase-retrieval problem, fundamental in applied physics and engineering, addresses the question of how to determine the phase of a complex-valued function from modulus data and additional a priori information. Recently we identified two important methods for phase retrieval, namely, Fienup's basic input-output and hybrid input-output (HIO) algorithms, with classical convex projection methods and suggested that further connections between convex optimization and phase retrieval should be explored. Following up on this work, we introduce a new projection-based method, termed the hybrid projection-reflection (HPR) algorithm, for solving phase-retrieval problems featuring nonnegativity constraints in the object domain. Motivated by properties of the HPR algorithm for convex constraints, we recommend an error measure studied by Fienup more than 20 years ago. This error measure, which has received little attention in the literature, lends itself to an easily implementable stopping criterion. In numerical experiments we found the HPR algorithm to be a competitive alternative to the HIO algorithm and the stopping criterion to be reliable and robust.
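For convex constraint sets, the projection methods the authors connect to phase retrieval reduce to very simple iterations. Below is a minimal sketch of the classical alternating-projection scheme on two convex sets in the plane; the unit ball, the halfspace, and the starting point are arbitrary illustrative choices, not the nonconvex modulus constraints of actual phase retrieval.

```python
import math

# Alternating projections x <- P_A(P_B(x)) onto two closed convex sets
# A (unit ball) and B (halfspace {x : x0 + x1 >= 1}) converge to a point
# in the intersection A ∩ B when it is nonempty.

def proj_ball(x, r=1.0):
    n = math.hypot(x[0], x[1])
    return x if n <= r else (x[0] * r / n, x[1] * r / n)

def proj_halfspace(x):
    s = x[0] + x[1]
    if s >= 1.0:
        return x
    d = (1.0 - s) / 2.0           # move equally along the normal (1, 1)
    return (x[0] + d, x[1] + d)

x = (3.0, -2.0)
for _ in range(200):
    x = proj_ball(proj_halfspace(x))
# x now lies (numerically) in both sets: ||x|| <= 1 and x0 + x1 >= 1.
```

The HIO/HPR algorithms replace one of the projections by a relaxed reflection step, which is what makes them effective on the nonconvex phase-retrieval constraints.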

Book ChapterDOI
01 Jan 2003
TL;DR: In this article, the auxiliary problem principle introduced by Cohen is extended to a general equilibrium problem, and applications to variational inequalities and to convex optimization problems are analysed.
Abstract: The auxiliary problem principle introduced by Cohen is extended to a general equilibrium problem. In particular, applications to variational inequalities and to convex optimization problems are analysed.
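In its simplest instance the auxiliary problem principle is easy to sketch (illustrative data, not from the chapter): with the quadratic auxiliary function h(y) = y^2/2, each auxiliary subproblem for a variational inequality collapses to a projected step, shown here for a strongly monotone operator on an interval.

```python
# Auxiliary problem principle for the variational inequality
#   find x* in C with F(x*) * (y - x*) >= 0 for all y in C = [1, 2],
# with F(x) = 2x - 1 (strongly monotone).  With quadratic auxiliary
# function, the subproblem argmin_{y in C} F(x_k)*y + (y - x_k)^2/(2*lam)
# is exactly a projected step.

def project(x, lo=1.0, hi=2.0):
    return min(max(x, lo), hi)

def F(x):
    return 2.0 * x - 1.0

x, lam = 2.0, 0.4
for _ in range(50):
    x = project(x - lam * F(x))
# Since F(1) = 1 > 0 and 1 is the left endpoint of C, the VI solution is x* = 1.
```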

Journal ArticleDOI
TL;DR: The results are extended to incremental subgradient methods for minimizing a sum of convex functions, which have recently been shown to be promising for various large-scale problems, including those arising from Lagrangian relaxation.
Abstract: We present a unified convergence framework for approximate subgradient methods that covers various stepsize rules (including both diminishing and nonvanishing stepsizes), convergence in objective values, and convergence to a neighborhood of the optimal set. We discuss ways of ensuring the boundedness of the iterates and give efficiency estimates. Our results are extended to incremental subgradient methods for minimizing a sum of convex functions, which have recently been shown to be promising for various large-scale problems, including those arising from Lagrangian relaxation.
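A minimal sketch of the incremental idea (illustrative data, not the authors' experiments): to minimize f(x) = sum_i |x - a_i|, a sum of convex functions whose minimizer is the median of the a_i, process one component function per step with a diminishing stepsize.

```python
# Incremental subgradient method for f(x) = sum_i |x - a_i|.
# Each inner step uses a subgradient of a single component |x - a_i|.
a = [1.0, 2.0, 4.0, 7.0, 11.0]     # minimizer of f is the median, 4
x = 0.0
for k in range(1, 3001):
    step = 1.0 / k                 # diminishing stepsize (sum diverges)
    for ai in a:
        g = 1.0 if x > ai else (-1.0 if x < ai else 0.0)  # subgradient of |x - ai|
        x -= step * g
# x oscillates in a band of width O(step) around the median and converges to it.
```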

Journal ArticleDOI
TL;DR: Comparisons with other classes of Lyapunov functions through numerical examples taken from the literature show that homogeneous polynomial Lyapunov functions (HPLFs) are a powerful tool for robustness analysis.

Journal ArticleDOI
01 Jul 2003
TL;DR: This paper considers the case in which not even the channel statistics are available; by formulating the problem within a game-theoretic framework, a robust solution under channel uncertainty is obtained, namely the uniform power allocation.
Abstract: When transmitting over multiple-input-multiple-output (MIMO) channels, there are additional degrees of freedom with respect to single-input-single-output (SISO) channels: the distribution of the available power over the transmit dimensions. If channel state information (CSI) is available, the optimum solution is well known and is based on diagonalizing the channel matrix and then distributing the power over the channel eigenmodes in a "water-filling" fashion. When CSI is not available at the transmitter, but the channel statistics are a priori known, an optimal fixed power allocation can be precomputed. This paper considers the case in which not even the channel statistics are available, obtaining a robust solution under channel uncertainty by formulating the problem within a game-theoretic framework. The payoff function of the game is the mutual information and the players are the transmitter and a malicious nature. The problem turns out to be the characterization of the capacity of a compound channel which is mathematically formulated as a maximin problem. The uniform power allocation is obtained as a robust solution (under a mild isotropy condition). The loss incurred by the uniform distribution is assessed using the duality gap concept from convex optimization theory. Interestingly, the robustness of the uniform power allocation also holds for the more general case of the multiple-access channel.
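For contrast with the robust uniform allocation, the CSI-available water-filling solution mentioned above can be sketched as follows (the eigenvalues and power budget are arbitrary illustrative values; bisection on the water level is one standard way to meet the power constraint).

```python
# Water-filling over channel eigenvalues g_i with total power P:
#   p_i = max(0, mu - 1/g_i),  with the level mu chosen so sum_i p_i = P.
def waterfill(gains, total_power, iters=100):
    lo, hi = 0.0, total_power + max(1.0 / g for g in gains)
    for _ in range(iters):
        mu = (lo + hi) / 2.0
        used = sum(max(0.0, mu - 1.0 / g) for g in gains)
        if used > total_power:
            hi = mu                # level too high: power budget exceeded
        else:
            lo = mu
    return [max(0.0, mu - 1.0 / g) for g in gains]

p = waterfill([2.0, 1.0, 0.25], 1.0)
# Strong eigenmodes get more power; the weakest mode (gain 0.25) gets none.
```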

Proceedings ArticleDOI
04 Jun 2003
TL;DR: The analysis yields several improvements over previous methods and opens up new possibilities, including the possibility of treating nonlinear vector fields and/or switching surfaces and parametric robustness analysis in a unified way.
Abstract: This paper presents a method for stability analysis of switched and hybrid systems using polynomial and piecewise polynomial Lyapunov functions. Computation of such functions can be performed using convex optimization, based on the sum of squares decomposition of multivariate polynomials. The analysis yields several improvements over previous methods and opens up new possibilities, including the possibility of treating nonlinear vector fields and/or switching surfaces and parametric robustness analysis in a unified way.
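A drastically simplified special case of the paper's setting (a quadratic rather than polynomial Lyapunov function, checked by hand rather than by sum-of-squares programming): verifying that V(x) = x^T x is a common Lyapunov function for two linear modes of a switched system, which holds iff each A_i + A_i^T is negative definite.

```python
def sym_neg_def_2x2(m):
    """True iff the symmetric part of the 2x2 matrix m is negative definite."""
    s11 = m[0][0]
    s12 = (m[0][1] + m[1][0]) / 2.0
    s22 = m[1][1]
    # Sylvester's criterion applied to -S: s11 < 0 and det S > 0.
    return s11 < 0 and s11 * s22 - s12 ** 2 > 0

# Two stable modes of a switched linear system x' = A_i x (illustrative).
A1 = [[-1.0, 0.0], [1.0, -1.0]]
A2 = [[-1.0, 1.0], [0.0, -1.0]]
# V(x) = x^T x decreases along both modes iff A_i + A_i^T < 0 for i = 1, 2,
# so arbitrary switching between the modes is stable.
assert sym_neg_def_2x2(A1) and sym_neg_def_2x2(A2)
```

The sum-of-squares machinery in the paper automates exactly this kind of certificate search for polynomial V and nonlinear or piecewise dynamics.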

Journal ArticleDOI
TL;DR: A new method is introduced for large-scale convex constrained optimization and is a generalization of the Spectral Projected Gradient method (SPG), but can be used when projections are difficult to compute.
Abstract: A new method is introduced for large-scale convex constrained optimization. The general model algorithm involves, at each iteration, the approximate minimization of a convex quadratic on the feasible set of the original problem and global convergence is obtained by means of nonmonotone line searches. A specific algorithm, the Inexact Spectral Projected Gradient method (ISPG), is implemented using inexact projections computed by Dykstra's alternating projection method and generates interior iterates. The ISPG method is a generalization of the Spectral Projected Gradient method (SPG), but can be used when projections are difficult to compute. Numerical results for constrained least-squares rectangular matrix problems are presented.
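Dykstra's alternating projection method, which the ISPG implementation uses for its inexact projections, can be sketched on a toy pair of convex sets (the box, halfspace, and starting point are arbitrary illustrative choices). Unlike plain alternating projections, Dykstra's correction terms make the iterates converge to the projection of the starting point onto the intersection, not just to some point in it.

```python
# Dykstra's algorithm: project x0 = (1, 1) onto A ∩ B, where
# A = [0, 1]^2 (box) and B = {x : x0 + x1 <= 0.5} (halfspace).

def proj_box(v):
    return [min(1.0, max(0.0, c)) for c in v]

def proj_halfspace(v):
    s = v[0] + v[1]
    if s <= 0.5:
        return list(v)
    d = (s - 0.5) / 2.0
    return [v[0] - d, v[1] - d]

x = [1.0, 1.0]                     # point to be projected
p = [0.0, 0.0]                     # correction term for set A
q = [0.0, 0.0]                     # correction term for set B
for _ in range(500):
    y = proj_box([x[i] + p[i] for i in range(2)])
    p = [x[i] + p[i] - y[i] for i in range(2)]
    x = proj_halfspace([y[i] + q[i] for i in range(2)])
    q = [y[i] + q[i] - x[i] for i in range(2)]
# The true projection of (1, 1) onto the intersection is (0.25, 0.25).
```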

Journal ArticleDOI
TL;DR: A recent survey of applications, theoretical results and various algorithmic approaches for the sum-of-ratios problem is provided.
Abstract: One of the most difficult fractional programs encountered so far is the sum-of-ratios problem. Contrary to earlier expectations it is much more removed from convex programming than other multi-ratio problems analyzed before. It really should be viewed in the context of global optimization. It proves to be essentially NP-hard in spite of its special structure under the usual assumptions on numerators and denominators. The article provides a recent survey of applications, theoretical results and various algorithmic approaches for this challenging problem.
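The difficulty is easy to see in miniature (illustrative instance, not from the survey): even a one-dimensional sum of two ratios of affine functions can violate convexity, so convex programming machinery does not apply directly.

```python
# A sum of two ratios of affine functions on [0, 1] that is nonconvex:
# midpoint convexity f((a+b)/2) <= (f(a)+f(b))/2 fails.
def f(x):
    return x / (0.1 + x) + (1.0 - x) / (1.1 - x)

a, b = 0.0, 1.0
mid = f((a + b) / 2.0)             # f(0.5) = 5/3
avg = (f(a) + f(b)) / 2.0          # (f(0) + f(1))/2 = 10/11
# Convexity would force mid <= avg; here mid > avg, and the endpoints are
# two distinct local minima while the center is a local maximum.
assert mid > avg
```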

Journal ArticleDOI
TL;DR: In this article, it is shown that the deterministic past history of the Universe can be uniquely reconstructed from knowledge of the present mass density field, the latter being inferred from the three-dimensional distribution of luminous matter, assumed to be tracing the distribution of dark matter up to a known bias.
Abstract: We show that the deterministic past history of the Universe can be uniquely reconstructed from knowledge of the present mass density field, the latter being inferred from the three-dimensional distribution of luminous matter, assumed to be tracing the distribution of dark matter up to a known bias. Reconstruction ceases to be unique below those scales – a few Mpc – where multistreaming becomes significant. Above 6 h⁻¹ Mpc we propose and implement an effective Monge–Ampère–Kantorovich method of unique reconstruction. At such scales the Zel'dovich approximation is well satisfied and reconstruction becomes an instance of optimal mass transportation, a problem which goes back to Monge. After discretization into N point masses one obtains an assignment problem that can be handled by effective algorithms with not more than O(N³) time complexity and reasonable CPU time requirements. Testing against N-body cosmological simulations gives over 60 per cent of exactly reconstructed points. We apply several interrelated tools from optimization theory that were not used in cosmological reconstruction before, such as the Monge–Ampère equation, its relation to the mass transportation problem, the Kantorovich duality and the auction algorithm for optimal assignment. A self-contained discussion of relevant notions and techniques is provided.
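After discretization, the reconstruction becomes an optimal assignment problem. The sketch below solves a toy 3x3 instance by brute force over permutations (the cost matrix is arbitrary; the auction algorithm used in the paper is what makes realistic N tractable, since brute force is O(N!)).

```python
from itertools import permutations

# Optimal assignment: pair N "present" points with N "initial" points so
# that the total cost (e.g. squared displacement) is minimized.
def best_assignment(cost):
    n = len(cost)
    best, best_perm = float("inf"), None
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best:
            best, best_perm = c, perm
    return best, best_perm

cost = [
    [4.0, 1.0, 3.0],
    [2.0, 0.0, 5.0],
    [3.0, 2.0, 2.0],
]
total, perm = best_assignment(cost)
# Optimal pairing: point 0 -> 1, point 1 -> 0, point 2 -> 2, total cost 5.
```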

Journal ArticleDOI
Tong Zhang
TL;DR: A greedy algorithm for a class of convex optimization problems is presented, motivated by function approximation using a sparse combination of basis functions and some of its variants; a bound on the rate of approximate minimization is derived.
Abstract: A greedy algorithm for a class of convex optimization problems is presented. The algorithm is motivated from function approximation using a sparse combination of basis functions as well as some of its variants. We derive a bound on the rate of approximate minimization for this algorithm, and present examples of its application. Our analysis generalizes a number of earlier studies.
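A minimal sketch in the spirit of such greedy schemes (a matching-pursuit-style selection over a small dictionary; the atoms and target are arbitrary illustrative choices, and this is not the paper's exact algorithm): at each step, add the single atom whose best coefficient most reduces the least-squares residual.

```python
# Greedy selection of basis functions for least-squares approximation:
# repeatedly pick the dictionary atom with the largest residual reduction.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def greedy_fit(target, atoms, steps):
    residual = list(target)
    coeffs = [0.0] * len(atoms)
    for _ in range(steps):
        # Reduction from atom j with optimal coefficient is <r, a_j>^2 / ||a_j||^2.
        k = max(range(len(atoms)),
                key=lambda j: dot(residual, atoms[j]) ** 2 / dot(atoms[j], atoms[j]))
        c = dot(residual, atoms[k]) / dot(atoms[k], atoms[k])
        coeffs[k] += c
        residual = [r - c * a for r, a in zip(residual, atoms[k])]
    return coeffs, residual

atoms = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [1.0, 1.0, 1.0]]
target = [2.0, 3.0, 0.0]
coeffs, residual = greedy_fit(target, atoms, steps=6)
# Two greedy steps suffice here: target = 2*atom0 + 3*atom1, residual -> 0.
```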