
Showing papers in "Siam Journal on Optimization in 2008"


Journal ArticleDOI
TL;DR: It is intended to demonstrate that a properly modified SA approach can be competitive and even significantly outperform the SAA method for a certain class of convex stochastic problems.
Abstract: In this paper we consider optimization problems where the objective function is given in the form of an expectation. A basic difficulty of solving such stochastic optimization problems is that the involved multidimensional integrals (expectations) cannot be computed with high accuracy. The aim of this paper is to compare two computational approaches based on Monte Carlo sampling techniques, namely, the stochastic approximation (SA) and the sample average approximation (SAA) methods. Both approaches, the SA and SAA methods, have a long history. Current opinion is that the SAA method can efficiently use a specific (say, linear) structure of the considered problem, while the SA approach is a crude subgradient method, which often performs poorly in practice. We intend to demonstrate that a properly modified SA approach can be competitive and even significantly outperform the SAA method for a certain class of convex stochastic problems. We extend the analysis to the case of convex-concave stochastic saddle point problems and present (in our opinion highly encouraging) results of numerical experiments.
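One ingredient behind the "properly modified" SA discussed above is averaging of the iterates. The sketch below is a hedged illustration of projected SA with a running average on an invented one-dimensional toy problem; the function names, stepsize, and sample counts are illustrative assumptions, not the paper's setup.

```python
import random

def robust_sa(grad_sample, x0, n_iters, step, proj=lambda x: x):
    """Projected stochastic approximation with iterate averaging,
    one of the modifications used in robust SA schemes."""
    x = x0
    avg = 0.0
    for t in range(1, n_iters + 1):
        g = grad_sample(x)          # unbiased stochastic (sub)gradient sample
        x = proj(x - step * g)      # SA step (projection is identity here)
        avg += (x - avg) / t        # running average of the iterates
    return avg

random.seed(0)
# toy problem: minimise E[(x - xi)^2] with xi ~ N(1, 1); the optimum is x* = 1
grad = lambda x: 2.0 * (x - random.gauss(1.0, 1.0))
x_bar = robust_sa(grad, x0=0.0, n_iters=20000, step=0.01)
```

The averaged iterate `x_bar` lands near the true minimizer even though individual SA iterates keep fluctuating with the noise.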

2,346 citations


Journal ArticleDOI
TL;DR: The structure of optimal solution sets is studied, finite convergence for important quantities is proved, and $q$-linear convergence rates for the fixed-point algorithm applied to problems with $f(x)$ convex, but not necessarily strictly convex are established.
Abstract: We present a framework for solving the large-scale $\ell_1$-regularized convex minimization problem:\[ \min\|x\|_1+\mu f(x). \] Our approach is based on two powerful algorithmic ideas: operator-splitting and continuation. Operator-splitting results in a fixed-point algorithm for any given scalar $\mu$; continuation refers to approximately following the path traced by the optimal value of $x$ as $\mu$ increases. In this paper, we study the structure of optimal solution sets, prove finite convergence for important quantities, and establish $q$-linear convergence rates for the fixed-point algorithm applied to problems with $f(x)$ convex, but not necessarily strictly convex. The continuation framework, motivated by our convergence results, is demonstrated to facilitate the construction of practical algorithms.
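As a hedged sketch of the operator-splitting plus continuation idea (not the authors' implementation; the toy instance, parameters, and names below are invented), the fixed-point map for $\min\|x\|_1+\mu f(x)$ with $f(x)=\frac12\|Ax-b\|^2$ is a soft-thresholded gradient step, and continuation sweeps $\mu$ upward with warm starts:

```python
def soft(v, t):
    """Componentwise soft-thresholding: the prox operator of t*||.||_1."""
    return [max(abs(vi) - t, 0.0) * (1.0 if vi > 0 else -1.0) for vi in v]

def fpc(A, b, mu_path, L, iters=200):
    """Fixed-point continuation sketch for min ||x||_1 + mu*(1/2)||Ax-b||^2:
    operator splitting gives x <- soft(x - tau*mu*A^T(Ax - b), tau);
    continuation sweeps mu upward, warm-starting each stage."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for mu in mu_path:
        tau = 1.0 / (mu * L)      # stepsize valid for this mu (L >= ||A^T A||)
        for _ in range(iters):
            r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
            g = [mu * sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
            x = soft([x[j] - tau * g[j] for j in range(n)], tau)
    return x

# toy instance: A = I, so the final solution is soft(b, 1/mu) for the last mu
A = [[1.0, 0.0], [0.0, 1.0]]
b = [1.0, 0.2]
x = fpc(A, b, mu_path=[1.0, 4.0, 16.0], L=1.0)
```

With `A` the identity, each continuation stage reaches its fixed point in one step, which makes the shrinkage structure of the solution easy to see.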

912 citations


Journal ArticleDOI
TL;DR: This work studies approximations of optimization problems with probabilistic constraints in which the original distribution of the underlying random vector is replaced with an empirical distribution obtained from a random sample to obtain a lower bound to the true optimal value.
Abstract: We study approximations of optimization problems with probabilistic constraints in which the original distribution of the underlying random vector is replaced with an empirical distribution obtained from a random sample. We show that such a sample approximation problem with a risk level larger than the required risk level will yield a lower bound to the true optimal value with probability approaching one exponentially fast. This leads to an a priori estimate of the sample size required to have high confidence that the sample approximation will yield a lower bound. We then provide conditions under which solving a sample approximation problem with a risk level smaller than the required risk level will yield feasible solutions to the original problem with high probability. Once again, we obtain a priori estimates on the sample size required to obtain high confidence that the sample approximation problem will yield a feasible solution to the original problem. Finally, we present numerical illustrations of how these results can be used to obtain feasible solutions and optimality bounds for optimization problems with probabilistic constraints.
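The lower-bound/feasibility trade-off can be illustrated on the simplest chance constraint, $\min\{x : P(\xi > x)\le\alpha\}$, whose sample approximation is just an order statistic. The distribution, sample size, and risk levels below are illustrative assumptions, not the paper's experiments.

```python
import random

def sample_chance_quantile(samples, gamma):
    """Sample approximation of min{x : P(xi > x) <= gamma}:
    allow violation on at most floor(gamma*N) of the N sampled scenarios."""
    s = sorted(samples)
    k = int(gamma * len(s))       # number of scenarios allowed to violate
    return s[len(s) - 1 - k]      # smallest sampled x violated by <= k samples

random.seed(1)
xi = [random.random() for _ in range(2000)]   # xi ~ Uniform(0, 1)
alpha = 0.10                                  # required risk level; optimum = 0.9
# a *larger* sampled risk level tends to give a lower bound on the true optimum
lower = sample_chance_quantile(xi, gamma=0.15)
# a *smaller* sampled risk level tends to give a feasible (conservative) point
safe = sample_chance_quantile(xi, gamma=0.05)
```

Here the true optimal value is the 0.9-quantile (0.9); `lower` falls below it and `safe` above it, matching the two directions of the approximation results described above.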

568 citations


Journal ArticleDOI
TL;DR: It is proven that the feasibility of the randomized solutions for all other convex programs can be bounded based on the feasibility for the prototype class of fully-supported problems, which means that all fully-supported problems share the same feasibility properties.
Abstract: Many optimization problems are naturally delivered in an uncertain framework, and one would like to exercise prudence against the uncertainty elements present in the problem. In previous contributions, it has been shown that solutions to uncertain convex programs that bear a high probability to satisfy uncertain constraints can be obtained at low computational cost through constraint randomization. In this paper, we establish new feasibility results for randomized algorithms. Specifically, the exact feasibility for the class of the so-called fully-supported problems is obtained. It turns out that all fully-supported problems share the same feasibility properties, revealing a deep kinship among problems of this class. It is further proven that the feasibility of the randomized solutions for all other convex programs can be bounded based on the feasibility for the prototype class of fully-supported problems. The feasibility result of this paper outperforms previous bounds and is not improvable because it is exact for fully-supported problems.

559 citations


Journal ArticleDOI
TL;DR: This work provides estimates on the primal infeasibility and primal suboptimality of the generated approximate primal solutions and provides a basis for analyzing the trade-offs between the desired level of error and the selection of the stepsize value.
Abstract: In this paper, we study methods for generating approximate primal solutions as a byproduct of subgradient methods applied to the Lagrangian dual of a primal convex (possibly nondifferentiable) constrained optimization problem. Our work is motivated by constrained primal problems with a favorable dual problem structure that leads to efficient implementation of dual subgradient methods, such as the recent resource allocation problems in large-scale networks. For such problems, we propose and analyze dual subgradient methods that use averaging schemes to generate approximate primal optimal solutions. These algorithms use a constant stepsize in view of its simplicity and practical significance. We provide estimates on the primal infeasibility and primal suboptimality of the generated approximate primal solutions. These estimates are given per iteration, thus providing a basis for analyzing the trade-offs between the desired level of error and the selection of the stepsize value. Our analysis relies on the Slater condition and the inherited boundedness properties of the dual problem under this condition. It also relies on the boundedness of subgradients, which is ensured by assuming the compactness of the constraint set.
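A minimal sketch of the idea, under invented toy data: a dual subgradient method with constant stepsize whose *averaged* primal iterates approximate a primal solution. The problem below (a small QP over a box, so the Slater condition holds and subgradients stay bounded) is an illustrative assumption, not one of the paper's network examples.

```python
def dual_subgradient_averaging(step, iters):
    """Dual subgradient method with primal averaging (constant stepsize) on:
        min x1^2 + x2^2   s.t.  x1 + x2 >= 1,   x in [-2, 2]^2.
    The box keeps the dual subgradients bounded."""
    clip = lambda v: max(-2.0, min(2.0, v))
    lam = 0.0
    avg = [0.0, 0.0]
    for t in range(1, iters + 1):
        # primal minimiser of the Lagrangian x1^2 + x2^2 + lam*(1 - x1 - x2)
        x = [clip(lam / 2.0), clip(lam / 2.0)]
        g = 1.0 - x[0] - x[1]             # subgradient of the dual function
        lam = max(0.0, lam + step * g)    # projected dual ascent step
        avg = [a + (xi - a) / t for a, xi in zip(avg, x)]  # primal averaging
    return avg

x_bar = dual_subgradient_averaging(step=0.1, iters=5000)
```

The individual primal iterates `x` need not be feasible, but their running average converges to the primal optimum $(0.5, 0.5)$, which is the phenomenon the per-iteration estimates in the abstract quantify.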

437 citations


Journal ArticleDOI
TL;DR: It is shown that the optimal complexity of Nesterov's smooth first-order optimization algorithm is preserved when the gradient is computed only up to a small, uniformly bounded error.
Abstract: We show that the optimal complexity of Nesterov's smooth first-order optimization algorithm is preserved when the gradient is computed only up to a small, uniformly bounded error. In applications of this method to semidefinite programs, this means in some instances computing only a few leading eigenvalues of the current iterate instead of a full matrix exponential, which significantly reduces the method's computational cost. This also allows sparse problems to be solved efficiently using sparse maximum eigenvalue packages.
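The robustness claim can be sketched numerically: run an accelerated (Nesterov-style) gradient method while corrupting each gradient with a small, uniformly bounded error. The quadratic test function, noise model, and parameters below are illustrative assumptions, not the semidefinite-programming application of the paper.

```python
import random

def agd_inexact(grad, L, x0, iters, noise=0.0, seed=0):
    """Accelerated gradient method where each gradient evaluation is
    perturbed by a uniformly bounded error (each component within +/-noise)."""
    rng = random.Random(seed)
    x = list(x0)
    t = 1.0
    y = list(x0)
    for _ in range(iters):
        g = grad(y)
        # bounded perturbation standing in for an approximate gradient
        g = [gi + rng.uniform(-noise, noise) for gi in g]
        x_new = [yi - gi / L for yi, gi in zip(y, g)]
        t_new = 0.5 * (1.0 + (1.0 + 4.0 * t * t) ** 0.5)
        y = [xn + (t - 1.0) / t_new * (xn - xo) for xn, xo in zip(x_new, x)]
        x, t = x_new, t_new
    return x

Q = [1.0, 10.0]                            # diagonal quadratic, so L = 10
grad = lambda v: [q * vi for q, vi in zip(Q, v)]
x = agd_inexact(grad, L=10.0, x0=[5.0, 5.0], iters=300, noise=1e-3)
f_val = 0.5 * sum(q * vi * vi for q, vi in zip(Q, x))
```

Despite the perturbed gradients, the objective is driven close to its minimum value 0, consistent with the claim that a small uniform gradient error does not destroy the method's fast convergence.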

203 citations


Journal ArticleDOI
TL;DR: A class of trust-region methods is presented for solving unconstrained nonlinear and possibly nonconvex discretized optimization problems, like those arising in systems governed by partial differential equations, leading to true multilevel/multiscale optimization methods reminiscent of multigrid methods in linear algebra and the solution of partial differential equations.
Abstract: A class of trust-region methods is presented for solving unconstrained nonlinear and possibly nonconvex discretized optimization problems, like those arising in systems governed by partial differential equations. The algorithms in this class make use of the discretization level as a means of speeding up the computation of the step. This use is recursive, leading to true multilevel/multiscale optimization methods reminiscent of multigrid methods in linear algebra and the solution of partial differential equations. A simple algorithm of the class is then described and its numerical performance is shown to be promising. This observation then motivates a proof of global convergence to first-order stationary points on the fine grid that is valid for all algorithms in the class.

197 citations


Journal ArticleDOI
TL;DR: This paper proposes methods to further relax the SDP relaxation, more precisely, to relax the single semidefinite matrix cone into a set of small-size semidefinite submatrix cones, which it calls a sub-SDP (SSDP) approach.
Abstract: Recently, a semidefinite programming (SDP) relaxation approach has been proposed to solve the sensor network localization problem. Although it achieves high accuracy in estimating the sensor locations, the speed of the SDP approach is not satisfactory for practical applications. In this paper we propose methods to further relax the SDP relaxation, more precisely, to relax the single semidefinite matrix cone into a set of small-size semidefinite submatrix cones, which we call a sub-SDP (SSDP) approach. We present two such relaxations. Although they are weaker than the original SDP relaxation, they retain the key theoretical property, and numerical experiments show that they are both efficient and accurate. The speed of the SSDP is even faster than that of other approaches based on weaker relaxations. The SSDP approach may also pave a way to efficiently solving general SDP problems without sacrificing the solution quality.

178 citations


Journal ArticleDOI
TL;DR: A class of nonlinear operators in Banach spaces is proposed that contains the classes of firmly nonexpansive mappings in Hilbert spaces and resolvents of maximal monotone operators inBanach spaces.
Abstract: A class of nonlinear operators in Banach spaces is proposed. We call each operator in this class a firmly nonexpansive-type mapping. This class contains the classes of firmly nonexpansive mappings in Hilbert spaces and resolvents of maximal monotone operators in Banach spaces. We study the existence and approximation of fixed points of firmly nonexpansive-type mappings in Banach spaces.

174 citations


Journal ArticleDOI
TL;DR: A new algorithm is proposed for the biobjective optimization (BOP) problem; it generates an approximation of the Pareto front by solving a series of single-objective formulations of BOP.
Abstract: This work deals with bound constrained multiobjective optimization (MOP) of nonsmooth functions for problems where the structure of the objective functions either cannot be exploited or is absent. Typical situations arise when the functions are computed as the result of a computer simulation. We first present definitions and optimality conditions as well as two families of single-objective formulations of MOP. Next, we propose a new algorithm for the biobjective optimization (BOP) problem (i.e., MOP with two objective functions). The property that Pareto points may be ordered in BOP but not in MOP is exploited by our algorithm, which generates an approximation of the Pareto front by solving a series of single-objective formulations of BOP. These single-objective problems are solved using the recent mesh adaptive direct search (MADS) algorithm for nonsmooth optimization. The Pareto front approximation is shown to satisfy some first-order necessary optimality conditions based on the Clarke calculus. Finally, the algorithm is tested on problems from the literature designed to illustrate specific difficulties encountered in biobjective optimization, such as a nonconvex or disjoint Pareto front, local Pareto fronts, or a nonuniform Pareto front.

154 citations


Journal ArticleDOI
TL;DR: This work proposes a decentralized, asynchronous gradient-descent method that is suitable for implementation in the case where the communication between agents is described in terms of a dynamic network and shows how to accommodate nonnegativity constraints on the resources using the results derived.
Abstract: We consider the problem of n agents that share m common resources. The objective is to derive an optimal allocation that maximizes a global objective expressed as a separable concave objective function. We propose a decentralized, asynchronous gradient-descent method that is suitable for implementation in the case where the communication between agents is described in terms of a dynamic network. This communication model accommodates situations such as mobile agents and communication failures. The method is shown to converge provided that the objective function has Lipschitz-continuous gradients. We further consider a randomized version of the same algorithm for the case where the objective function is nondifferentiable but has bounded subgradients. We show that both algorithms converge to near-optimal solutions and derive convergence rates in terms of the magnitude of the gradient of the objective function. We show how to accommodate nonnegativity constraints on the resources using the results derived.

Journal ArticleDOI
TL;DR: The theory is developed for elliptic distributed controls in domains up to dimension three and problems of elliptic boundary control and parabolic distributed control are discussed in spatial domains of dimension two and one, respectively.
Abstract: Second-order sufficient optimality conditions are established for the optimal control of semilinear elliptic and parabolic equations with pointwise constraints on the control and the state. In contrast to former publications on this subject, the cone of critical directions is the smallest possible in the sense that the second-order sufficient conditions are the closest to the associated necessary ones. The theory is developed for elliptic distributed controls in domains up to dimension three. Moreover, problems of elliptic boundary control and parabolic distributed control are discussed in spatial domains of dimension two and one, respectively.

Journal ArticleDOI
TL;DR: The recently introduced proximal average of two convex functions is a convex function with many useful properties; its basic properties with respect to the standard convex-analytical notions are provided.
Abstract: The recently introduced proximal average of two convex functions is a convex function with many useful properties. In this paper, we introduce and systematically study the proximal average for finitely many convex functions. The basic properties of the proximal average with respect to the standard convex-analytical notions (domain, Fenchel conjugate, subdifferential, proximal mapping, epi-continuity, and others) are provided and illustrated by several examples.

Journal ArticleDOI
TL;DR: A parameter-free integer-programming-based algorithm is presented for the global resolution of a linear program with linear complementarity constraints (LPCC); computational results establish that the algorithm can handle infeasible, unbounded, and solvable LPCCs effectively.
Abstract: This paper presents a parameter-free integer-programming-based algorithm for the global resolution of a linear program with linear complementarity constraints (LPCC). The cornerstone of the algorithm is a minimax integer program formulation that characterizes and provides certificates for the three outcomes—infeasibility, unboundedness, or solvability—of an LPCC. An extreme point/ray generation scheme in the spirit of Benders decomposition is developed, from which valid inequalities in the form of satisfiability constraints are obtained. The feasibility problem of these inequalities and the carefully guided linear-programming relaxations of the LPCC are the workhorses of the algorithm, which also employs a specialized procedure for the sparsification of the satisfiability cuts. We establish the finite termination of the algorithm and report computational results using the algorithm for solving randomly generated LPCCs of reasonable sizes. The results establish that the algorithm can handle infeasible, unbounded, and solvable LPCCs effectively.

Journal ArticleDOI
TL;DR: It is shown that, under convexity, the hierarchy of semidefinite relaxations for polynomial optimization simplifies and has finite convergence, a highly desirable feature as convex problems are in principle easier to solve.
Abstract: We review several (and provide new) results on the theory of moments, sums of squares, and basic semialgebraic sets when convexity is present. In particular, we show that, under convexity, the hierarchy of semidefinite relaxations for polynomial optimization simplifies and has finite convergence, a highly desirable feature as convex problems are in principle easier to solve. In addition, if a basic semialgebraic set $\mathbf{K}$ is convex but its defining polynomials are not, we provide two algebraic certificates of convexity which can be checked numerically. The second is simpler and holds if a sufficient (and almost necessary) condition is satisfied; it also provides a new condition for $\mathbf{K}$ to have semidefinite representation. For this we use (and extend) some of the recent results from the author and Helton and Nie [Math. Program., to appear]. Finally, we show that, when restricting to a certain class of convex polynomials, the celebrated Jensen's inequality in convex analysis can be extended to linear functionals that are not necessarily probability measures.

Journal ArticleDOI
TL;DR: This strategy builds upon a combination of techniques from two-stage stochastic programming and level-set-based shape optimization and usage of linear elasticity and quadratic objective functions to obtain a computational cost which scales linearly in the number of linearly independent applied forces.
Abstract: We present an algorithm for shape optimization under stochastic loading and representative numerical results. Our strategy builds upon a combination of techniques from two-stage stochastic programming and level-set-based shape optimization. In particular, usage of linear elasticity and quadratic objective functions permits us to obtain a computational cost which scales linearly in the number of linearly independent applied forces, which often is much smaller than the number of different realizations of the stochastic forces. Numerical computations are performed using a level set method with composite finite elements both in two and in three spatial dimensions.

Journal ArticleDOI
TL;DR: A globalization framework is proposed that ensures the convergence of adaptive interior methods, and convergence failures of the Mehrotra predictor-corrector algorithm are examined.
Abstract: This paper considers strategies for selecting the barrier parameter at every iteration of an interior-point method for nonlinear programming. Numerical experiments suggest that heuristic adaptive choices, such as Mehrotra's probing procedure, outperform monotone strategies that hold the barrier parameter fixed until a barrier optimality test is satisfied. A new adaptive strategy is proposed based on the minimization of a quality function. The paper also proposes a globalization framework that ensures the convergence of adaptive interior methods, and examines convergence failures of the Mehrotra predictor-corrector algorithm. The barrier update strategies proposed in this paper are applicable to a wide class of interior methods and are tested in the two distinct algorithmic frameworks provided by the ipopt and knitro software packages.

Journal ArticleDOI
TL;DR: This work introduces some new notions of constraint qualifications in terms of the epigraphs of the conjugates of these functions and studies relationships between these new constraint qualifications and other well-known constraint qualifications.
Abstract: For an inequality system defined by an infinite family of proper convex functions, we introduce some new notions of constraint qualifications in terms of the epigraphs of the conjugates of these functions and study relationships between these new constraint qualifications and other well-known constraint qualifications including the basic constraint qualification studied by Hiriart-Urruty and Lemaréchal and by Li, Nahak, and Singer. Extensions of known results to more general settings are presented, and applications to particular important problems, such as conic programming and approximation theory, are also studied.

Journal ArticleDOI
TL;DR: A new method is presented for the numerical solution of nonlinear multiobjective optimization problems with an arbitrary partial ordering in the objective space induced by a closed pointed convex cone; it is based on the well-known scalarization approach by Pascoletti and Serafini and adaptively controls the scalarization parameters using new sensitivity results.
Abstract: This paper presents a new method for the numerical solution of nonlinear multiobjective optimization problems with an arbitrary partial ordering in the objective space induced by a closed pointed convex cone. This algorithm is based on the well-known scalarization approach by Pascoletti and Serafini and adaptively controls the scalarization parameters using new sensitivity results. The computed image points give a nearly equidistant approximation of the whole Pareto surface. The effectiveness of this new method is demonstrated with various test problems and an applied problem from medicine.

Journal ArticleDOI
TL;DR: In this article, a smooth optimization approach is proposed for nonsmooth strictly concave maximization problems whose objective functions admit smooth convex minimization reformulations, and it is shown that the resulting approach has ${\cal O}(1/{\sqrt{\epsilon}})$ iteration complexity for finding an $\epsilon$-optimal solution to both primal and dual problems.
Abstract: In this paper we first study a smooth optimization approach for solving a class of nonsmooth strictly concave maximization problems whose objective functions admit smooth convex minimization reformulations. In particular, we apply Nesterov's smooth optimization technique [Y. E. Nesterov, Dokl. Akad. Nauk SSSR, 269 (1983), pp. 543-547; Y. E. Nesterov, Math. Programming, 103 (2005), pp. 127-152] to their dual counterparts that are smooth convex problems. It is shown that the resulting approach has ${\cal O}(1/{\sqrt{\epsilon}})$ iteration complexity for finding an $\epsilon$-optimal solution to both primal and dual problems. We then discuss the application of this approach to sparse covariance selection that is approximately solved as an $l_1$-norm penalized maximum likelihood estimation problem, and also propose a variant of this approach which has substantially outperformed the latter one in our computational experiments. We finally compare the performance of these approaches with other first-order methods, namely, Nesterov's ${\cal O}(1/\epsilon)$ smooth approximation scheme and block-coordinate descent method studied in [A. d'Aspremont, O. Banerjee, and L. El Ghaoui, SIAM J. Matrix Anal. Appl., 30 (2008), pp. 56-66; J. Friedman, T. Hastie, and R. Tibshirani, Biostatistics, 9 (2008), pp. 432-441] for sparse covariance selection on a set of randomly generated instances. It shows that our smooth optimization approach substantially outperforms the first method above, and moreover, its variant substantially outperforms both methods above.

Journal ArticleDOI
TL;DR: The second algorithm asymptotically exhibits linear convergence, indicating that it indeed terminates faster with smaller core sets than the first one; the existence of a core set of size $O(1/\epsilon)$ is also established for a much wider class of input sets.
Abstract: Given ${\cal A} := \{a^1,\dots,a^m\} \subset \mathbb{R}^n$ and $\epsilon > 0$, we propose and analyze two algorithms for the problem of computing a $(1 + \epsilon)$-approximation to the radius of the minimum enclosing ball of ${\cal A}$. The first algorithm is closely related to the Frank-Wolfe algorithm with a proper initialization applied to the dual formulation of the minimum enclosing ball problem. We establish that this algorithm converges in $O(1/\epsilon)$ iterations with an overall complexity bound of $O(mn/\epsilon)$ arithmetic operations. In addition, the algorithm returns a “core set” of size $O(1/\epsilon)$, which is independent of both $m$ and $n$. The latter algorithm is obtained by incorporating “away” steps into the former one at each iteration and achieves the same asymptotic complexity bound as the first one. While the asymptotic bound on the size of the core set returned by the second algorithm also remains the same as the first one, the latter algorithm has the potential to compute even smaller core sets in practice, since, in contrast to the former one, it allows “dropping” points from the working core set at each iteration. Our analysis reveals that the leading terms in the asymptotic complexity analysis are reasonably small. In contrast to the first algorithm, we also establish that the second algorithm asymptotically exhibits linear convergence, which provides further insight into our computational results, indicating that the latter algorithm indeed terminates faster with smaller core sets in comparison with the first one. We also discuss how our algorithms can be extended to compute an approximation to the minimum enclosing ball of more general input sets without sacrificing the iteration complexity and the bound on the core set size. In particular, we establish the existence of a core set of size $O(1/\epsilon)$ for a much wider class of input sets. We adopt the real number model of computation in our analysis.
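A common textbook variant of this Frank-Wolfe connection is the farthest-point update with step $1/(k+1)$, where the points touched along the way form a candidate core set. The sketch below illustrates that idea on an invented planar instance; it is not necessarily the exact update or initialization analyzed in the paper.

```python
def meb_approx(points, iters):
    """Frank-Wolfe-style iteration for the minimum enclosing ball:
    repeatedly move the center toward the current farthest input point
    with step 1/(k+1); the touched points form a candidate core set."""
    c = list(points[0])
    core = {0}
    for k in range(1, iters + 1):
        far = max(range(len(points)),
                  key=lambda i: sum((pi - ci) ** 2
                                    for pi, ci in zip(points[i], c)))
        core.add(far)
        step = 1.0 / (k + 1)
        c = [ci + step * (pi - ci) for ci, pi in zip(c, points[far])]
    r = max(sum((pi - ci) ** 2 for pi, ci in zip(p, c)) for p in points) ** 0.5
    return c, r, core

# toy instance: four points on the unit circle plus an interior point;
# the true minimum enclosing ball is centered at the origin with radius 1
pts = [(1.0, 0.0), (-1.0, 0.0), (0.0, 1.0), (0.0, -1.0), (0.2, 0.3)]
c, r, core = meb_approx(pts, iters=2000)
```

After enough iterations the center and radius approximate the true ball to within a $(1+\epsilon)$ factor, and the interior point never enters the core set.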

Journal ArticleDOI
TL;DR: New integer and linear programming formulations for optimization under first- and second-order stochastic dominance constraints, respectively, are presented; they are more compact than existing formulations, and relaxing integrality in the first-order formulation yields a second-order formulation, demonstrating the tightness of this formulation.
Abstract: Stochastic dominance constraints allow a decision maker to manage risk in an optimization setting by requiring his or her decision to yield a random outcome which stochastically dominates a reference random outcome. We present new integer and linear programming formulations for optimization under first- and second-order stochastic dominance constraints, respectively. These formulations are more compact than existing formulations, and relaxing integrality in the first-order formulation yields a second-order formulation, demonstrating the tightness of this formulation. We also present a specialized branching strategy and heuristics which can be used with the new first-order formulation. Computational tests illustrate the potential benefits of the new formulations.
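For intuition about the constraint itself (not the paper's formulations), second-order dominance between discrete random outcomes can be checked by comparing expected shortfalls at every threshold. The sample data below are an invented illustration.

```python
def dominates_ssd(y, z):
    """Empirical second-order stochastic dominance check for equally likely
    outcomes: Y >=_(2) Z iff E[(eta - Y)+] <= E[(eta - Z)+] for all eta.
    Checking eta at the combined outcome values suffices, because both
    shortfall functions are piecewise linear with knots at those values."""
    def shortfall(sample, eta):
        return sum(max(eta - s, 0.0) for s in sample) / len(sample)
    thresholds = sorted(set(y) | set(z))
    return all(shortfall(y, e) <= shortfall(z, e) + 1e-12 for e in thresholds)

y = [2.0, 3.0, 4.0]   # candidate outcome
z = [1.0, 3.0, 5.0]   # reference outcome: same mean, but riskier
```

Here `y` dominates `z` (same mean, less spread) while the reverse fails, which is exactly the kind of relation a dominance-constrained decision is required to satisfy against a reference outcome.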

Journal ArticleDOI
TL;DR: The relationship between the optimal value of a homogeneous quadratic optimization problem and its semidefinite programming (SDP) relaxation is studied; for the minimization model, the ratio between the optimal value and its SDP relaxation is upper bounded by $O(m^2)$ in the real case and by $O(m)$ in the complex case.
Abstract: This paper studies the relationship between the optimal value of a homogeneous quadratic optimization problem and its semidefinite programming (SDP) relaxation. We consider two quadratic optimization models: (1) $\min \{ x^* C x \mid x^* A_k x \ge 1, k=0,1,\ldots,m, x\in\mathbb{F}^n\}$ and (2) $\max \{ x^* C x \mid x^* A_k x \le 1, k=0,1,\ldots,m, x\in\mathbb{F}^n\}$, where $\mathbb{F}$ is either the real field $\mathbb{R}$ or the complex field $\mathbb{C}$, and $A_k,C$ are symmetric matrices. For the minimization model (1), we prove that if the matrix $C$ and all but one of the $A_k$'s are positive semidefinite, then the ratio between the optimal value of (1) and its SDP relaxation is upper bounded by $O(m^2)$ when $\mathbb{F}=\mathbb{R}$, and by $O(m)$ when $\mathbb{F}=\mathbb{C}$. Moreover, when two or more of the $A_k$'s are indefinite, this ratio can be arbitrarily large. For the maximization model (2), we show that if $C$ and at most one of the $A_k$'s are indefinite while other $A_k$'s are positive semidefinite, then the ratio between the optimal value of (2) and its SDP relaxation is bounded from below by $O(1/\log m)$ for both the real and the complex case. This result improves the bound based on the so-called approximate S-Lemma of Ben-Tal, Nemirovski, and Roos [SIAM J. Optim., 13 (2002), pp. 535-560]. When two or more of the $A_k$'s in (2) are indefinite, we derive a general bound in terms of the problem data and the SDP solution. For both optimization models, we present examples to show that the derived approximation bounds are essentially tight.

Journal ArticleDOI
TL;DR: This paper presents a new iterative scheme that utilizes the conjugate gradient direction and demonstrates the effectiveness, performance, and convergence of the proposed algorithm.
Abstract: In this paper, we discuss the convex optimization problem over the fixed point set of a nonexpansive mapping. The main objective of the paper is to accelerate the hybrid steepest descent method for the problem. To this goal, we present a new iterative scheme that utilizes the conjugate gradient direction. Its convergence to the solution is guaranteed under certain assumptions. In order to demonstrate the effectiveness, performance, and convergence of our proposed algorithm, we present numerical comparisons of the algorithm with the existing algorithm.
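A minimal sketch of the underlying hybrid steepest descent iteration (without the paper's conjugate-gradient acceleration): minimize a smooth convex $f$ over the fixed-point set of a nonexpansive map $T$ via $x \leftarrow T(x) - \lambda_n \nabla f(T(x))$ with diminishing $\lambda_n$. The map, objective, and parameters below are illustrative assumptions.

```python
def hybrid_steepest_descent(T, grad_f, x0, iters):
    """Hybrid steepest descent sketch: minimise smooth convex f over Fix(T)
    for a nonexpansive map T, using diminishing stepsizes lam_n = 1/n."""
    x = list(x0)
    for n in range(1, iters + 1):
        y = T(x)                       # nonexpansive step toward Fix(T)
        g = grad_f(y)
        lam = 1.0 / n                  # diminishing stepsize
        x = [yi - lam * gi for yi, gi in zip(y, g)]
    return x

# T: projection onto the unit disk (nonexpansive; Fix(T) = the disk)
def proj_disk(v):
    norm = (v[0] ** 2 + v[1] ** 2) ** 0.5
    return list(v) if norm <= 1.0 else [vi / norm for vi in v]

# f(x) = ||x - a||^2 with a = (3, 4) outside the disk;
# the minimiser of f over the disk is a/||a|| = (0.6, 0.8)
a = [3.0, 4.0]
grad = lambda v: [2.0 * (vi - ai) for vi, ai in zip(v, a)]
x = hybrid_steepest_descent(proj_disk, grad, x0=[0.0, 0.0], iters=3000)
```

The iterates settle on the boundary point of the disk nearest to `a`, i.e., the solution of the constrained problem over the fixed-point set; the paper's contribution is to accelerate exactly this kind of scheme with conjugate gradient directions.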

Journal ArticleDOI
TL;DR: A rule to calculate the subdifferential set of the pointwise supremum of an arbitrary family of convex functions defined on a real locally convex topological vector space is provided.
Abstract: We provide a rule to calculate the subdifferential set of the pointwise supremum of an arbitrary family of convex functions defined on a real locally convex topological vector space. Our formula is given exclusively in terms of the data functions and does not require any assumption either on the index set on which the supremum is taken or on the involved functions. Some other calculus rules, namely chain rule formulas of standard type, are obtained from our main result via new and direct proofs.

Journal ArticleDOI
TL;DR: This work studies the convergence of an iterative projection/reflection algorithm originally proposed for solving phase retrieval problems in optics, and investigates the asymptotic behavior of the RAAR algorithm for the general problem of finding points that achieve the minimum distance between two closed convex sets in a Hilbert space with empty intersection.
Abstract: We study the convergence of an iterative projection/reflection algorithm originally proposed for solving what are known as phase retrieval problems in optics. There are two features that frustrate any analysis of iterative methods for solving the phase retrieval problem: nonconvexity and infeasibility. The algorithm that we developed, called relaxed averaged alternating reflections (RAAR), was designed primarily to address infeasibility, though our strategy has advantages for nonconvex problems as well. In the present work we investigate the asymptotic behavior of the RAAR algorithm for the general problem of finding points that achieve the minimum distance between two closed convex sets in a Hilbert space with empty intersection, and for the problem of finding points that achieve a local minimum distance between one closed convex set and a closed prox-regular set, also possibly nonintersecting. The nonconvex theory includes and expands prior results limited to convex sets with nonempty intersection. To place the RAAR algorithm in context, we develop parallel statements about the standard alternating projections algorithm and gradient descent. All of the various algorithms are unified as instances of iterated averaged alternating proximal reflectors applied to a sum of regularized maximal monotone mappings.
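The RAAR analysis itself is delicate, but the inconsistent-feasibility setting it addresses is easy to demonstrate with the standard alternating projections baseline that the paper develops parallel statements for: between two disjoint closed convex sets, the iterates approach the pair of points realizing the minimum distance. The two-disk instance below is an invented illustration.

```python
def alternating_projections(pA, pB, x0, iters):
    """Standard alternating projections; for two disjoint closed convex sets
    the pair (P_A(x), x) approaches the minimum-distance points of A and B."""
    x = list(x0)
    for _ in range(iters):
        x = pB(pA(x))
    return pA(x), x

def proj_disk(center, radius):
    """Euclidean projection onto a closed disk."""
    def p(v):
        d = [vi - ci for vi, ci in zip(v, center)]
        n = sum(di * di for di in d) ** 0.5
        if n <= radius:
            return list(v)
        return [ci + radius * di / n for ci, di in zip(center, d)]
    return p

# two unit disks with centers 4 apart: empty intersection, gap 2,
# nearest points (1, 0) in A and (3, 0) in B
pA = proj_disk((0.0, 0.0), 1.0)
pB = proj_disk((4.0, 0.0), 1.0)
a_pt, b_pt = alternating_projections(pA, pB, x0=[0.3, 2.0], iters=200)
```

Even though the feasibility problem has no solution, the iteration converges to the best-approximation pair; RAAR is designed to behave well in precisely this regime, with the further extension to one set being nonconvex (prox-regular).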

Journal ArticleDOI
TL;DR: A necessary and sufficient condition is presented to characterize when the CDT subproblem and its Lagrangian dual admit no duality gap (i.e., when strong duality holds); the condition reveals that a positive duality gap is actually rare for CDT subproblems in general.
Abstract: In this paper, we consider the problem of minimizing a nonconvex quadratic function, subject to two quadratic inequality constraints. As an application, such a quadratic program plays an important role in the trust region method for nonlinear optimization; such a problem is known as the Celis, Dennis, and Tapia (CDT) subproblem in the literature. The Lagrangian dual of the CDT subproblem is a semidefinite program (SDP), hence convex and solvable. However, a positive duality gap may exist between the CDT subproblem and its Lagrangian dual because the CDT subproblem itself is nonconvex. In this paper, we present a necessary and sufficient condition characterizing when the CDT subproblem and its Lagrangian dual admit no duality gap (i.e., when strong duality holds). This necessary and sufficient condition is easily verifiable and involves only one (any) optimal solution of the SDP relaxation for the CDT subproblem. Moreover, the condition reveals that a positive duality gap is actually rare for CDT subproblems in general. In addition, if strong duality holds, then an optimal solution for the CDT problem can be retrieved from an optimal solution of the SDP relaxation by means of a matrix rank-one decomposition procedure. The same analysis is extended to the framework where the necessary and sufficient condition is presented in terms of the Lagrangian multipliers at a KKT point. Furthermore, we show that the condition is numerically easy to work with approximately.
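In one common formulation (the notation here is illustrative, not taken from the paper), the CDT subproblem and the lifted SDP relaxation underlying its Lagrangian dual can be written as:

```latex
% CDT subproblem: a possibly nonconvex quadratic objective under two
% quadratic inequality constraints.
\min_{x \in \mathbb{R}^n} \; x^{\top} Q x + 2 q^{\top} x
\quad \text{s.t.} \quad
x^{\top} Q_i x + 2 q_i^{\top} x + c_i \le 0, \quad i = 1, 2.

% SDP relaxation: introduce X in place of x x^{\top} and relax the
% rank-one equality X = x x^{\top} to X \succeq x x^{\top}.
\min_{x,\, X} \; \mathrm{tr}(Q X) + 2 q^{\top} x
\quad \text{s.t.} \quad
\mathrm{tr}(Q_i X) + 2 q_i^{\top} x + c_i \le 0, \; i = 1, 2,
\qquad
\begin{pmatrix} X & x \\ x^{\top} & 1 \end{pmatrix} \succeq 0.
```

When the relaxation happens to return a rank-one matrix X = x x^{\top}, the relaxation is exact; the rank-one decomposition procedure mentioned in the abstract addresses recovering a feasible x in the higher-rank case.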

Journal ArticleDOI
TL;DR: Certain ideas and recent results are surveyed, some new, which have been or can be productively used in studies relating to variational analysis and nonsmooth optimization.
Abstract: The word “tame” is used in the title in the same context as in expressions like “convex optimization,” “nonsmooth optimization,” etc.: as a reference to the class of objects involved in the formulation of optimization problems. Definable and tame functions and mappings associated with various o-minimal structures (e.g., semilinear, semialgebraic, globally subanalytic, and others) have a number of remarkable properties which make them an attractive domain for various applications. This relates both to the power of the results that can be obtained and to the power of the available analytic techniques. The paper surveys certain ideas and recent results, some new, which have been or (hopefully) can be productively used in studies relating to variational analysis and nonsmooth optimization.

Journal ArticleDOI
TL;DR: An algorithm for large-scale equality constrained optimization based on a characterization of inexact sequential quadratic programming (SQP) steps that can ensure global convergence is presented.
Abstract: We present an algorithm for large-scale equality constrained optimization. The method is based on a characterization of inexact sequential quadratic programming (SQP) steps that can ensure global convergence. Inexact SQP methods are needed for large-scale applications in which the iteration matrix cannot be explicitly formed or factored and the linear systems that arise must be solved using iterative linear algebra techniques. We address how to determine when a given inexact step makes sufficient progress toward a solution of the nonlinear program, as measured by an exact penalty function. The method is globalized by a line search. An analysis of the global convergence properties of the algorithm and numerical results are presented.
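To make the underlying iteration concrete, here is a minimal full-step SQP sketch on a toy equality-constrained problem; the problem, starting point, and exact KKT solve are illustrative assumptions and not the paper's inexact method, which would solve the KKT system only approximately with an iterative solver.

```python
import numpy as np

# Toy problem: minimize f(x) = x1^2 + x2^2 subject to c(x) = x1*x2 - 1 = 0.
# The minimizer is (1, 1) with Lagrange multiplier -2 (for L = f + lam*c).
f_grad = lambda x: 2.0 * x
c_val  = lambda x: x[0] * x[1] - 1.0
c_grad = lambda x: np.array([x[1], x[0]])
c_hess = np.array([[0.0, 1.0], [1.0, 0.0]])   # constant Hessian of c

x = np.array([2.0, 1.0])
lam = 0.0
for _ in range(20):
    H = 2.0 * np.eye(2) + lam * c_hess        # Hessian of the Lagrangian
    A = c_grad(x).reshape(1, 2)               # constraint Jacobian
    # Each SQP iteration solves the KKT system of a local quadratic model:
    #   [H  A^T] [d      ]   [-grad f]
    #   [A   0 ] [lam_new] = [-c     ]
    # An inexact SQP method replaces this direct solve with a truncated
    # iterative solve plus a test that the step makes sufficient progress.
    KKT = np.block([[H, A.T], [A, np.zeros((1, 1))]])
    rhs = -np.concatenate([f_grad(x), [c_val(x)]])
    sol = np.linalg.solve(KKT, rhs)
    x = x + sol[:2]
    lam = sol[2]
```

The full-step Newton-type iteration shown here converges quadratically near a nondegenerate solution; the line-search globalization described in the abstract is what handles steps taken far from a solution.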

Journal ArticleDOI
TL;DR: By using a technique developed by Magnanti, some duality results are derived for the optimization problem with cone constraints and its Lagrange dual problem, and it is shown that a duality result recently given in the literature has self-contradictory assumptions.
Abstract: We give some new regularity conditions for Fenchel duality in separated locally convex vector spaces, written in terms of the notions of quasi interior and quasi-relative interior, respectively. We also provide an example of a convex optimization problem to which the classical generalized interior-point conditions given so far in the literature cannot be applied, while the one given by us is applicable. By using a technique developed by Magnanti, we derive some duality results for the optimization problem with cone constraints and its Lagrange dual problem, and we show that a duality result recently given in the literature for this pair of problems has self-contradictory assumptions.
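For reference, the Fenchel duality scheme to which such regularity conditions apply is standard: for proper convex functions f and g and a continuous linear map A between the spaces, a regularity condition (classically a generalized interior-point condition; in the paper, one stated via quasi-relative interiors) guarantees a zero duality gap and dual attainment:

```latex
% Fenchel duality: under a suitable regularity condition, equality holds
% and the supremum on the right-hand side is attained (hence \max).
\inf_{x \in X} \bigl\{ f(x) + g(Ax) \bigr\}
  = \max_{y^{*} \in Y^{*}} \bigl\{ -f^{*}(A^{*} y^{*}) - g^{*}(-y^{*}) \bigr\}.
```

Without such a condition the inequality "inf ≥ sup" still holds, but a positive gap is possible even for convex data, which is why finding weaker verifiable regularity conditions is of interest.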