
Showing papers in "Optimization Methods & Software in 2019"


Journal ArticleDOI
TL;DR: This paper analyses the rate of convergence of gradient descent for smooth unconstrained multiobjective optimization, which is shown to be the same as for gradient descent in single-objective optimization and to correspond to appropriate worst-case complexity bounds.
Abstract: A number of first-order methods have been proposed for smooth multiobjective optimization for which some form of convergence to first-order criticality has been proved. Such convergence is global i...

67 citations
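
The multiobjective steepest-descent step behind this kind of analysis can be sketched for two objectives: the search direction is the negative of the minimum-norm point in the convex hull of the two gradients, which vanishes exactly at first-order (Pareto) critical points. This is a minimal illustration under assumed data (two quadratic objectives, a fixed step size), not the paper's exact algorithm.

```python
import numpy as np

def common_descent_direction(g1, g2):
    """Negative of the minimum-norm point in conv{g1, g2}: a descent
    direction for both objectives, zero at Pareto-critical points."""
    diff = g1 - g2
    denom = diff @ diff
    lam = 0.0 if denom == 0.0 else np.clip(-(diff @ g2) / denom, 0.0, 1.0)
    return -(lam * g1 + (1.0 - lam) * g2)

# Two convex objectives on R^2: f1(x) = ||x - a||^2, f2(x) = ||x - b||^2;
# the Pareto set is the segment [a, b].
a, b = np.array([1.0, 0.0]), np.array([-1.0, 0.0])
x = np.array([0.0, 2.0])
for _ in range(200):
    d = common_descent_direction(2 * (x - a), 2 * (x - b))
    x = x + 0.1 * d
```

Starting above the segment, the iterates descend onto the Pareto set (here the point (0, 0) by symmetry).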


Journal ArticleDOI
TL;DR: This work considers the problem of splitting a market area into a given number of price zones such that the resulting market design yields welfare-optimal outcomes, and presents an extended Karush-Kuhn-Tucker transformation approach as well as a generalized Benders approach that both yield globally optimal solutions.
Abstract: Mathematical modelling of market design issues in liberalized electricity markets often leads to mixed-integer nonlinear multilevel optimization problems for which no general-purpose solvers exist ...

60 citations


Journal ArticleDOI
TL;DR: Methods that automatically tune the acceleration coefficients online are introduced and their convergence is established, made possible by considering classes of fixed-point iterations over averaged operators that encompass gradient methods, ADMM, primal-dual algorithms and so on.
Abstract: We propose generic acceleration schemes for a wide class of optimization and iterative schemes based on relaxation and inertia. In particular, we introduce methods that automatically tune the acceleration coefficients online and establish their convergence. This is made possible by considering classes of fixed-point iterations over averaged operators which encompass gradient methods, ADMM (Alternating Direction Method of Multipliers), primal dual algorithms and so on.

52 citations
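
The relaxation-plus-inertia template that the paper accelerates can be sketched on a fixed-point iteration over an averaged operator; here the operator is a gradient step on a convex quadratic, and the fixed coefficients `alpha` and `beta` are illustrative assumptions rather than the paper's online tuning.

```python
import numpy as np

def inertial_km(T, x0, alpha=0.5, beta=0.3, iters=500):
    """Krasnosel'skii-Mann with inertia:
    y_k = x_k + beta*(x_k - x_{k-1}); x_{k+1} = (1-alpha)*y_k + alpha*T(y_k)."""
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        y = x + beta * (x - x_prev)
        x_prev, x = x, (1 - alpha) * y + alpha * T(y)
    return x

# Averaged operator: a gradient step on 0.5*(x-c)^T A (x-c); fixed point c.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
c = np.array([1.0, -1.0])
T = lambda x: x - 0.4 * (A @ (x - c))
x = inertial_km(T, np.zeros(2))
```

With these parameters the error recursion is a stable linear system, so the iterates converge to the fixed point c.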


Journal ArticleDOI
TL;DR: An iterative scheme for solving the split common null point problem in the framework of Banach spaces is proposed, and a strong convergence theorem for the sequences generated by the scheme is proved under suitable conditions.
Abstract: In this work, we study the split common null point problem in the framework of Banach spaces. We propose an iterative scheme for solving the problem and then prove a strong convergence theorem for the sequences generated by our iterative scheme under suitable conditions. We finally provide some numerical examples to support the main theorem.

34 citations


Journal ArticleDOI
TL;DR: A modified Krasnosel'skiĭ–Mann algorithm is proposed in connection with the determination of a fixed point of a nonexpansive mapping and strong convergence of the iteratively generated sequence to the minimal norm solution of the problem is shown.
Abstract: Proximal splitting algorithms for monotone inclusions (and convex optimization problems) in Hilbert spaces share the common feature to guarantee for the generated sequences in general weak convergence to a solution. In order to achieve strong convergence, one usually needs to impose more restrictive properties for the involved operators, like strong monotonicity (respectively, strong convexity for optimization problems). In this paper, we propose a modified Krasnosel'skiĭ-Mann algorithm in connection with the determination of a fixed point of a nonexpansive mapping and show strong convergence of the iteratively generated sequence to the minimal norm solution of the problem. Relying on this, we derive a forward-backward and a Douglas-Rachford algorithm, both endowed with Tikhonov regularization terms, which generate iterates that strongly converge to the minimal norm solution of the set of zeros of the sum of two maximally monotone operators. Furthermore, we formulate strong convergent primal-dual algorithms of forward-backward and Douglas-Rachford-type for highly structured monotone inclusion problems involving parallel-sums and compositions with linear operators. The resulting iterative schemes are particularized to the solving of convex minimization problems. The theoretical results are illustrated by numerical experiments on the split feasibility problem in infinite dimensional spaces.

32 citations


Journal ArticleDOI
TL;DR: It is shown that using a curvature-adaptive step size in the BFGS method (and quasi-Newton methods in the Broyden convex class other than the DFP method) results in superlinear convergence for strongly convex self-concordant functions.
Abstract: We consider the use of a curvature-adaptive step size in gradient-based iterative methods, including quasi-Newton methods, for minimizing self-concordant functions, extending an approach first prop...

31 citations
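
A classical curvature-adaptive step for self-concordant minimization is the damped Newton step t = 1/(1 + λ(x)), where λ(x) is the Newton decrement; this is the kind of step the paper extends to quasi-Newton directions. A one-dimensional sketch (the barrier function below is an assumption for the demo, not taken from the paper):

```python
import math

def damped_newton(grad, hess, x0, iters=50):
    """Newton's method with the curvature-adaptive step t = 1/(1 + lambda),
    lambda the Newton decrement; for self-concordant f this guarantees
    decrease and keeps iterates in the domain without a line search."""
    x = x0
    for _ in range(iters):
        g, h = grad(x), hess(x)
        lam = abs(g) / math.sqrt(h)          # Newton decrement in 1-D
        x = x - (1.0 / (1.0 + lam)) * g / h  # damped Newton step
    return x

# Self-concordant barrier f(x) = -log(x) - log(1 - x), minimized at x = 0.5.
grad = lambda x: -1.0 / x + 1.0 / (1.0 - x)
hess = lambda x: 1.0 / x ** 2 + 1.0 / (1.0 - x) ** 2
x_star = damped_newton(grad, hess, 0.1)
```

Once the decrement is small the step approaches a full Newton step, recovering fast local convergence.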


Journal ArticleDOI
TL;DR: In this article, a complete iteration complexity analysis of inexact first-order Lagrangian and penalty methods for solving cone-constrained convex problems that may or may not have optimal Lagrange multipliers that close the duality gap is presented.
Abstract: In this paper we present a complete iteration complexity analysis of inexact first-order Lagrangian and penalty methods for solving cone-constrained convex problems that may or may not have optimal Lagrange multipliers that close the duality gap. We first assume the existence of optimal Lagrange multipliers and study primal–dual first-order methods based on inexact information and augmented Lagrangian smoothing or Nesterov-type smoothing. For inexact (fast) gradient augmented Lagrangian methods, we derive an overall computational complexity of O(1/ϵ) projections onto a simple primal set in order to attain an ϵ-optimal solution of the conic convex problem. For the inexact fast gradient method combined with Nesterov-type smoothing, we derive computational complexity O(1/ϵ^{3/2}) projections onto the same set. Then, we assume that optimal Lagrange multipliers might not exist for the cone-constrained convex problem, and analyse the fast gradient method for solving penalty reformulations of the problem. For the ...

31 citations
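
The inexact augmented Lagrangian template analysed here can be sketched on a toy equality-constrained problem, with the inner subproblem deliberately solved only approximately by a fixed number of gradient steps; all problem data and parameters below are assumptions for the demo.

```python
import numpy as np

def inexact_augmented_lagrangian(n=4, rho=5.0, outer=30, inner=50, lr=0.05):
    """Augmented Lagrangian for min 0.5*||x||^2 s.t. sum(x) = 1:
    inexact inner minimization of L(x, mu) by gradient steps,
    then multiplier update mu <- mu + rho*(sum(x) - 1)."""
    x, mu = np.zeros(n), 0.0
    for _ in range(outer):
        for _ in range(inner):                    # inexact inner solve
            r = x.sum() - 1.0
            grad = x + (mu + rho * r) * np.ones(n)
            x = x - lr * grad
        mu = mu + rho * (x.sum() - 1.0)           # multiplier update
    return x

x = inexact_augmented_lagrangian()
```

The KKT solution is x_i = 1/n = 0.25, which the outer loop reaches linearly despite the inexact inner solves.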


Journal ArticleDOI
TL;DR: This paper studies how to compute all real solutions of the tensor complementarity problem, if there are finitely many, as a sequence of polynomial optimization problems.
Abstract: In this paper, we study how to compute all real solutions of the tensor complementarity problem, if there are finitely many. We formulate the problem as a sequence of polynomial optimization probl...

25 citations


Journal ArticleDOI
TL;DR: This paper proposes a generic algorithmic framework for stochastic proximal quasi-Newton (SPQN) methods to solve non-convex composite optimization problems, together with a modified self-scaling symmetric rank-one update incorporated into the framework, called the stochastic symmetric rank-one method.
Abstract: In this paper, we propose a generic algorithmic framework for stochastic proximal quasi-Newton (SPQN) methods to solve non-convex composite optimization problems. Stochastic second-order informatio...

25 citations


Journal ArticleDOI
TL;DR: This paper proposes an extension of MIQCR which applies to any QCQP and proposes to solve by a branch-and-bound algorithm based on the relaxation of the additional quadratic constraints and of the integrality constraints.
Abstract: The class of mixed-integer quadratically constrained quadratic programs (QCQP) consists of minimizing a quadratic function under quadratic constraints where the variables could be integer or continuous. On a previous paper we introduced a method called MIQCR for solving QCQPs with the following restriction: all quadratic sub-functions of purely continuous variables are already convex. In this paper, we propose an extension of MIQCR which applies to any QCQP. Let (P) be a QCQP. Our approach to solve (P) is first to build an equivalent mixed-integer quadratic problem (P∗). This equivalent problem (P∗) has a quadratic convex objective function, linear constraints, and additional variables y that are meant to satisfy the additional quadratic constraints y=xxT, where x are the initial variables of problem (P). We then propose to solve (P∗) by a branch-and-bound algorithm based on the relaxation of the additional quadratic constraints and of the integrality constraints. This type of branching is known as spatia...

24 citations


Journal ArticleDOI
TL;DR: The diverse features of the eight solvers included in the FOM MATLAB toolbox for solving convex optimization problems using first-order methods are illustrated through a collection of examples of different nature.
Abstract: This paper presents the FOM MATLAB toolbox for solving convex optimization problems using first-order methods. The diverse features of the eight solvers included in the package are illustrated thro...
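
A representative first-order method of the kind such a toolbox packages is the proximal gradient (ISTA) iteration for composite problems. The Python sketch below applies it to the lasso; it is an illustration of the method class, not FOM's actual API.

```python
import numpy as np

def proximal_gradient(A, b, lam=0.1, step=None, iters=500):
    """ISTA for min 0.5*||Ax - b||^2 + lam*||x||_1: a gradient step on the
    smooth part followed by the prox of the l1 term (soft-thresholding)."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1/L, L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)                     # gradient of smooth part
        z = x - step * g
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = A @ np.array([1.0, 0.0, -2.0, 0.0, 0.5])
x = proximal_gradient(A, b)
```

With step size 1/L the objective decreases monotonically, so the result improves on the zero initializer.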

Journal ArticleDOI
TL;DR: In this article, a non-intrusive framework for integrating existing unsteady PDE solvers into a parallel-in-time simultaneous optimization algorithm is presented, where the time-parallelization of the PDE algorithm is assumed to be linear.
Abstract: This paper presents a non-intrusive framework for integrating existing unsteady partial differential equation (PDE) solvers into a parallel-in-time simultaneous optimization algorithm. The time-par...

Journal ArticleDOI
TL;DR: As it turns out, adapted versions of Scholtes' global relaxation scheme as well as the relaxation scheme of Steffensen and Ulbrich only find W-stationary points of switching-constrained optimization problems in general.
Abstract: Switching-constrained optimization problems form a difficult class of mathematical programmes since their feasible set is almost disconnected while standard constraint qualifications are likely to ...

Journal ArticleDOI
TL;DR: A Halpern-type proximal point algorithm for approximating a common solution of monotone inclusion problem, minimization problem (MP) and fixed point problem is introduced and a strong convergence theorem is proved.
Abstract: In this paper, a Halpern-type proximal point algorithm for approximating a common solution of monotone inclusion problem, minimization problem (MP) and fixed point problem is introduced. Using our ...
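
A Halpern-type proximal point iteration anchors each step at a fixed reference point u with vanishing weights, which is what yields strong (not merely weak) convergence. The one-dimensional resolvent below (soft-thresholding, the prox of |x|) is an illustrative assumption, not the paper's composite operator.

```python
def halpern_proximal_point(prox, u, iters=2000):
    """Halpern-type iteration x_{k+1} = a_k*u + (1 - a_k)*prox(x_k) with
    a_k -> 0; converges strongly to the fixed point of prox nearest u."""
    x = u
    for k in range(iters):
        a = 1.0 / (k + 2)
        x = a * u + (1 - a) * prox(x)
    return x

# prox of f(x) = |x| with unit step is soft-thresholding; unique minimizer 0.
soft = lambda x: max(abs(x) - 1.0, 0.0) * (1 if x > 0 else -1)
x = halpern_proximal_point(soft, 5.0)
```

The anchoring weights decay like 1/k, so the iterate approaches the fixed point 0 at that rate.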

Journal ArticleDOI
TL;DR: The freely distributed Maple library SPECTRA (Semidefinite Programming solved Exactly with Computational Tools of Real Algebra) solves linear matrix inequalities with symbolic computation in exact arithmetic and is targeted at small-size, possibly degenerate problems for which symbolic infeasibility or feasibility certificates are required.
Abstract: This document describes our freely distributed Maple library SPECTRA, for Semidefinite Programming solved Exactly with Computational Tools of Real Algebra. It solves linear matrix inequalities with symbolic computation in exact arithmetic and is targeted at small-size, possibly degenerate problems for which symbolic infeasibility or feasibility certificates are required.

Journal ArticleDOI
TL;DR: An algorithm for studying the split common fixed point and null problem for demicontractive operators and maximal monotone operators in real Hilbert spaces is introduced and a strong convergence result is established under some suitable conditions.
Abstract: In this article, we consider a split common fixed point and null point problem which includes the split common fixed point problem, the split common null problem and other problems related to the fixed point problem and the null point problem. We introduce an algorithm for studying the split common fixed point and null problem for demicontractive operators and maximal monotone operators in real Hilbert spaces. We establish a strong convergence result under some suitable conditions and reduce our main result to above-mentioned problems. Moreover, we also apply our main results to the split equilibrium problem. Finally, we give numerical results to demonstrate the convergence of our algorithms.

Journal ArticleDOI
TL;DR: Numerical results on randomly generated problems are reported which show the effectiveness of the proposed approach, in particular in limiting the growth of the number of nodes in the branch-and-bound tree as the density of the underlying graph increases.
Abstract: In this paper we propose convex and LP bounds for standard quadratic programming (StQP) problems and employ them within a branch-and-bound approach. We first compare different bounding strategies for StQPs in terms both of the quality of the bound and of the computation times. It turns out that the polyhedral bounding strategy is the best one to be used within a branch-and-bound scheme. Indeed, it guarantees a good quality of the bound at the expense of a very limited computation time. The proposed branch-and-bound algorithm performs an implicit enumeration of all the KKT (stationary) points of the problem. We compare different branching strategies exploiting the structure of the problem. Numerical results on randomly generated problems (with varying density of the underlying convexity graph) are reported which show the effectiveness of the proposed approach, in particular in limiting the growth of the number of nodes in the branch-and-bound tree as the density of the underlying graph increases.

Journal ArticleDOI
TL;DR: Based on the numerical efficiency of the Hestenes–Stiefel (HS) method, a new modified HS algorithm is proposed for unconstrained optimization; the new direction satisfies the sufficient descent condition independently of the line search.
Abstract: In this paper, based on the numerical efficiency of the Hestenes–Stiefel (HS) method, a new modified HS algorithm is proposed for unconstrained optimization. The new direction satisfies the sufficient descent condition independently of the line search. Motivated by theoretical and numerical features of the three-term conjugate gradient (CG) methods proposed by Narushima et al., and similar to the Dai and Kou approach, the new direction is computed by minimizing the distance between the CG direction and the direction of the three-term CG methods proposed by Narushima et al. Under some mild conditions, we establish global convergence of the new method for general functions when the standard Wolfe line search is used. Numerical experiments on some test problems from the CUTEst collection are given to show the efficiency of the proposed method.
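
The classical Hestenes–Stiefel ingredient being modified is the CG update d_{k+1} = -g_{k+1} + beta_HS * d_k with beta_HS = (g_{k+1}^T y_k)/(d_k^T y_k), y_k = g_{k+1} - g_k. A sketch on a convex quadratic with exact line search (standing in for the paper's Wolfe search):

```python
import numpy as np

def hs_cg_quadratic(A, x, iters=20, tol=1e-10):
    """Hestenes-Stiefel CG on f(x) = 0.5*x^T A x with exact line search;
    on a quadratic this reproduces linear CG and finishes in n steps."""
    g = A @ x
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        alpha = -(g @ d) / (d @ (A @ d))     # exact step for a quadratic
        x = x + alpha * d
        g_new = A @ x
        y = g_new - g
        beta = (g_new @ y) / (d @ y)         # Hestenes-Stiefel formula
        d = -g_new + beta * d
        g = g_new
    return x

A = np.diag([1.0, 4.0])
x = hs_cg_quadratic(A, np.array([3.0, 2.0]))
```

In two dimensions the iteration terminates at the minimizer 0 after two line searches.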

Journal ArticleDOI
TL;DR: Many decision-making problems can be modelled using optimization concepts and tools; multiextremal optimization problems are characterized by several local solutions, whose number can be very large.
Abstract: Many decision-making problems can often be modelled using optimization concepts and tools. Multiextremal optimization problems are characterized by several local solutions (their number can be very...

Journal ArticleDOI
TL;DR: An algorithm is proposed that uses the L-BFGS quasi-Newton approximation of the problem's curvature together with a variant of the weak Wolfe line search to overcome the inherent shortsightedness of the gradient for a non-smooth function.
Abstract: We consider the problem of minimizing a continuous function that may be non-smooth and non-convex, subject to bound constraints. We propose an algorithm that uses the L-BFGS quasi-Newton approximation of the problem's curvature together with a variant of the weak Wolfe line search. The key ingredient of the method is an active-set selection strategy that defines the subspace in which search directions are computed. To overcome the inherent shortsightedness of the gradient for a non-smooth function, we propose two strategies. The first relies on an approximation of the ε-minimum norm subgradient, and the second uses an iterative corrective loop that augments the active set based on the resulting search directions. While theoretical convergence guarantees have been elusive even for the unconstrained case, we present numerical results on a set of standard test problems to illustrate the efficacy of our approach, using an open-source Python implementation of the proposed algorithm.
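
The L-BFGS approximation at the core of such a method is usually applied through the standard two-loop recursion, sketched below; the paper's bound-constraint handling and active-set logic are not reproduced here.

```python
import numpy as np

def lbfgs_direction(g, s_list, y_list):
    """Standard L-BFGS two-loop recursion: returns -H*g, where H is the
    inverse-Hessian approximation built from stored pairs (s_i, y_i),
    oldest first, with the usual gamma*I initial scaling."""
    q = g.copy()
    alphas = []                                  # stored newest-pair first
    for s, y in zip(reversed(s_list), reversed(y_list)):
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    if s_list:                                   # gamma = s^T y / y^T y
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):
        rho = 1.0 / (y @ s)
        q += (a - rho * (y @ q)) * s
    return -q

# Sanity check on f(x) = 0.5*x^T diag(2, 5) x: with pairs spanning R^2 the
# recursion reproduces the exact Newton direction -A^{-1} g.
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
d = lbfgs_direction(np.array([2.0, 5.0]), [e1, e2], [2 * e1, 5 * e2])
d0 = lbfgs_direction(np.array([1.0, 2.0]), [], [])
```

With an empty memory the recursion degenerates to the steepest-descent direction -g, as expected.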

Journal ArticleDOI
TL;DR: This work concerns the study of a constraint qualification for non-smooth DC-constrained optimization problems, as well as the design and convergence analysis of minimizing algorithms to address these problems.
Abstract: This work concerns the study of a constraint qualification for non-smooth DC-constrained optimization problems, as well as the design and convergence analysis of minimizing algorithms to address th...

Journal ArticleDOI
TL;DR: Four variants of the splitting method, which depend on the properties of the matrices included in the definition of EiCP, are introduced and seem to be competitive with the most efficient state-of-the-art algorithms for the solution of E iCP.
Abstract: We study splitting methods for solving the Eigenvalue Complementarity Problem (EiCP). We introduce four variants, which depend on the properties (symmetry, nonsymmetry, positive definite, negative ...

Journal ArticleDOI
TL;DR: This paper presents a dynamic regret analysis on the decentralized online convex optimization problems computed over a network of agents to distributively optimize a global function.
Abstract: This paper presents a dynamic regret analysis on the decentralized online convex optimization problems computed over a network of agents. The goal is to distributively optimize a global function wh...

Journal ArticleDOI
TL;DR: The experimental results show that the proposed algorithm is competitive with other approaches and for particular problems, the computational performance is better than the state-of-the-art algorithms.
Abstract: In this paper, we propose a non-monotone line search method for solving optimization problems on Stiefel manifold. The main novelty of our approach is that our method uses a search direction based on a linear combination of descent directions and a Barzilai–Borwein line search. The feasibility is guaranteed by projecting each iterate on the Stiefel manifold through SVD (singular value decomposition) factorizations. Some theoretical results for analysing the algorithm are presented. Finally, we provide numerical experiments for comparing our algorithm with other state-of-the-art procedures. The code is available online. The experimental results show that the proposed algorithm is competitive with other approaches and for particular problems, the computational performance is better than the state-of-the-art algorithms.
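
The SVD-based feasibility step mentioned in the abstract maps an arbitrary matrix to its nearest point on the Stiefel manifold. A sketch with one projected-gradient step on a Rayleigh-quotient-type objective (the objective, data, and step size are assumptions for the demo):

```python
import numpy as np

def stiefel_project(X):
    """Project onto the Stiefel manifold {X : X^T X = I} via the SVD:
    X = U S V^T  ->  U V^T, the nearest matrix with orthonormal columns
    in the Frobenius norm."""
    U, _, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ Vt

# One projected-gradient step for min tr(X^T A X) over the Stiefel manifold.
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
A = A + A.T                         # symmetric test matrix
X = stiefel_project(rng.standard_normal((6, 2)))
G = 2 * A @ X                       # Euclidean gradient of tr(X^T A X)
X_new = stiefel_project(X - 0.1 * G)
```

Whatever the gradient step produces, the SVD projection restores feasibility exactly, which is the guarantee the paper relies on.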

Journal ArticleDOI
TL;DR: A remarkable feature of the proposed method is that it is globally convergent even without a convexity assumption on the objective function.
Abstract: In this paper, based on the fifth-order Taylor expansion of the objective function and the modified secant equation suggested by Li and Fukushima, a new modified secant equation is presented. A new modification of the scaled memoryless BFGS preconditioned conjugate gradient algorithm is also suggested, in which the scaling parameter is computed from a two-point approximation of the new modified secant equation. A remarkable feature of the proposed method is that it is globally convergent even without a convexity assumption on the objective function. Numerical results show that the proposed modification of the scaled conjugate gradient method is efficient.
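
For context, the classical objects being modified here are the secant equation B_{k+1} s_k = y_k and the scaling parameter of memoryless BFGS; the common choice theta = s^T y / y^T y below is an illustrative stand-in for the paper's new formula, not the paper's own derivation.

```python
import numpy as np

def secant_residual(B, s, y):
    """How far a Hessian approximation B is from the secant equation B s = y."""
    return np.linalg.norm(B @ s - y)

def scaling_parameter(s, y):
    """A standard memoryless-BFGS scaling choice: theta = s^T y / y^T y."""
    return (s @ y) / (y @ y)

# On a quadratic f(x) = 0.5*x^T A x the exact Hessian satisfies the secant
# equation exactly, since y = grad(x + s) - grad(x) = A s.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
s = np.array([1.0, -1.0])
y = A @ s
```

Modified secant equations replace y with a corrected vector so that B also matches higher-order information of f.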

Journal ArticleDOI
TL;DR: A modification of a primal-dual algorithm based on a mixed augmented Lagrangian and a log-barrier penalty function is presented to quickly detect infeasibility and it is shown that under a suitable choice of the parameters along the iterations, the rate of convergence of the algorithm to an infeasible stationary point is superlinear.
Abstract: We present a modification of a primal-dual algorithm based on a mixed augmented Lagrangian and a log-barrier penalty function. The goal of this new feature is to quickly detect infeasibility. An ad...

Journal ArticleDOI
TL;DR: A globalization technique on the basis of the hyperplane projection method is applied to the BFGS method and the method applied to pseudo-monotone VIP is globally convergent in the sense that subproblems always have unique solutions, and the sequence of iterates converges to a solution to the problem without any regularity assumption.
Abstract: In this paper, we propose a globally convergent BFGS method to solve Variational Inequality Problems (VIPs). In fact, a globalization technique on the basis of the hyperplane projection method is applied to the BFGS method. The technique, which is independent of any merit function, is applicable for pseudo-monotone problems. The proposed method applies the BFGS direction and tries to reduce the distance of iterates to the solution set. This property, called Fejer monotonicity of iterates with respect to the solution set, is the basis of the convergence analysis. The method applied to pseudo-monotone VIP is globally convergent in the sense that subproblems always have unique solutions, and the sequence of iterates converges to a solution to the problem without any regularity assumption. Finally, some numerical simulations are included to evaluate the efficiency of the proposed algorithm.
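
The hyperplane projection globalization works as follows: a trial point z defines a hyperplane separating the current iterate from the solution set, and projecting onto it can only decrease the distance to any solution (Fejér monotonicity). A minimal sketch with a simple trial-point rule standing in for the paper's BFGS direction:

```python
import numpy as np

def hyperplane_projection_step(F, x, z):
    """Given a trial point z with F(z)^T (x - z) > 0, project x onto the
    separating hyperplane {u : F(z)^T (u - z) = 0}; for pseudo-monotone
    VIPs all solutions lie on the far side of this hyperplane."""
    fz = F(z)
    t = (fz @ (x - z)) / (fz @ fz)
    return x - t * fz

# VIP with F(x) = x - c on R^n has the unique solution c.
c = np.array([1.0, 2.0])
F = lambda x: x - c
x = np.array([4.0, 6.0])
for _ in range(50):
    z = x - 0.5 * F(x)              # simple projection-type trial point
    x = hyperplane_projection_step(F, x, z)
```

For this affine example the distance to the solution halves every iteration, illustrating the Fejér-monotone contraction.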

Journal ArticleDOI
TL;DR: The first author was supported by MINECO of Spain and ERDF of EU, as part of the Ramon y Cajal program (RYC-2013-13327) and the Grant MTM2014-59179-C2-1-P.
Abstract: The first author was supported by MINECO of Spain and ERDF of EU, as part of the Ramon y Cajal program (RYC-2013-13327) and the Grant MTM2014-59179-C2-1-P. The second author’s work was supported by research grant no. 2013003 of the United States-Israel Binational Science Foundation (BSF). The third author’s work was supported by the EU FP7 IRSES program STREVCOMS, grant no. PIRSES-GA-2013-612669.

Journal ArticleDOI
TL;DR: The proposed algorithm serves to bridge the gap between the needs of the data-mining community and existing state-of-the-art approaches embraced foremost by the optimization community, and can be realized with minimal modification to the CG algorithm itself with negligible storage and computational overhead.
Abstract: In this paper, we have developed a new algorithm for solving nonconvex large-scale problems. The new algorithm performs explicit matrix modifications adaptively, mimicking the implicit modifications used by trust-region methods. Thus, it shares the equivalent theoretical strength of trust-region approaches, without needing to accommodate an explicit step-size constraint. We show that the algorithm is well suited for solving very large-scale nonconvex problems whenever Hessian-vector products are available. The numerical results on the CUTEr problems demonstrate the effectiveness of this approach in the context of a line-search method for large-scale unconstrained nonconvex optimization. Moreover, applications in deep-learning problems further illustrate the usefulness of this algorithm. It does not share any of the prohibitive traits of popular matrix-free algorithms such as truncated conjugate gradient (CG) due to the difficult nature of deep-learning problems. Thus the proposed algorithm serves to bridge...

Journal ArticleDOI
TL;DR: An adaptive full Newton-step infeasible-interior-point method for solving sufficient horizontal linear complementarity problems is analysed and sufficient conditions are given for the superlinear convergence of the sequence of iterates.
Abstract: An adaptive full Newton-step infeasible-interior-point method for solving sufficient horizontal linear complementarity problems is analysed and sufficient conditions are given for the superlinear c...