
Showing papers in "Journal of Optimization Theory and Applications in 2018"


Journal ArticleDOI
TL;DR: In infinite-dimensional Hilbert spaces, it is proved that the iterative sequence generated by the extragradient method for solving pseudo-monotone variational inequalities converges weakly to a solution.
Abstract: In infinite-dimensional Hilbert spaces, we prove that the iterative sequence generated by the extragradient method for solving pseudo-monotone variational inequalities converges weakly to a solution. A class of pseudo-monotone variational inequalities is considered to illustrate the convergent behavior. The result obtained in this note extends some recent results in the literature; in particular, it gives a positive answer to a question raised in Khanh (Acta Math Vietnam 41:251-263, 2016).
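For intuition, a minimal numerical sketch of the extragradient iteration analyzed here is given below; the operator, the feasible-set projection, and the fixed step size are illustrative assumptions (the step must be small relative to the Lipschitz constant of the operator).

```python
import numpy as np

def extragradient(F, project_C, x0, lam=0.1, max_iter=1000, tol=1e-8):
    """Korpelevich extragradient: two projections and two F-evaluations per step."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = project_C(x - lam * F(x))        # predictor (extra) step
        x_new = project_C(x - lam * F(y))    # corrector step
        if np.linalg.norm(x_new - x) < tol:  # simple stopping rule
            return x_new
        x = x_new
    return x

# Toy monotone VI on the nonnegative orthant (an LCP); its solution is (1/3, 1/3).
A = np.array([[2.0, 1.0], [1.0, 2.0]])
b = np.array([-1.0, -1.0])
sol = extragradient(lambda v: A @ v + b, lambda v: np.maximum(v, 0.0), np.zeros(2))
```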

102 citations


Journal ArticleDOI
TL;DR: The existence and uniqueness theorem for the adjoint equations, which are represented by an anticipated backward stochastic differential equation with jumps and regimes, is proved, and the results are illustrated by an optimal consumption problem from a cash flow with delay and regimes.
Abstract: We study a stochastic optimal control problem for a delayed Markov regime-switching jump-diffusion model. We establish necessary and sufficient maximum principles under full and partial information for such a system. We prove the existence and uniqueness theorem for the adjoint equations, which are represented by an anticipated backward stochastic differential equation with jumps and regimes. We illustrate our results with an optimal consumption problem from a cash flow with delay and regimes.

86 citations


Journal ArticleDOI
TL;DR: In this article, a model of interbank lending and borrowing is proposed, where the evolution of log-monetary reserves of banks is described by coupled diffusions driven by controls with delay in their drifts.
Abstract: We propose a model of inter-bank lending and borrowing which takes into account clearing debt obligations. The evolution of log-monetary reserves of banks is described by coupled diffusions driven by controls with delay in their drifts. Banks are minimizing their finite-horizon objective functions which take into account a quadratic cost for lending or borrowing and a linear incentive to borrow if the reserve is low or lend if the reserve is high relative to the average capitalization of the system. As such, our problem is a finite-player linear–quadratic stochastic differential game with delay. An open-loop Nash equilibrium is obtained using a system of fully coupled forward and advanced-backward stochastic differential equations. We then describe how the delay affects liquidity and systemic risk characterized by a large number of defaults. We also derive a closed-loop Nash equilibrium using a Hamilton–Jacobi–Bellman partial differential equation approach.

59 citations


Journal ArticleDOI
Jun Yang, Hongwei Liu
TL;DR: A weak convergence theorem for the algorithm is proved without requiring additional projections or knowledge of the Lipschitz constant of the mapping; an R-linear convergence rate is obtained under a strong monotonicity assumption on the mapping.
Abstract: In this paper, we investigate and analyze classical variational inequalities with a Lipschitz continuous and monotone mapping in a real Hilbert space. The projected reflected gradient method, with varying step size, requires at most two projections onto the feasible set and one value of the mapping per iteration. We modify the method while keeping its simple structure; a weak convergence theorem for our algorithm is proved without requiring additional projections or knowledge of the Lipschitz constant of the mapping. Meanwhile, an R-linear convergence rate is obtained under a strong monotonicity assumption on the mapping. Preliminary numerical experiments are reported.
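For reference, the basic projected reflected gradient step that this paper builds on can be sketched as follows; the fixed step size is an assumption made for simplicity, whereas the paper's variant varies the step and needs no Lipschitz constant.

```python
import numpy as np

def projected_reflected_gradient(F, project_C, x0, lam=0.1, max_iter=1000, tol=1e-8):
    """One projection and one F-evaluation per iteration:
    x_{k+1} = P_C(x_k - lam * F(2*x_k - x_{k-1}))."""
    x_prev = np.asarray(x0, dtype=float)
    x = project_C(x_prev - lam * F(x_prev))   # plain projected-gradient start
    for _ in range(max_iter):
        x_new = project_C(x - lam * F(2.0 * x - x_prev))  # reflected evaluation point
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x_prev, x = x, x_new
    return x
```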

52 citations


Journal ArticleDOI
TL;DR: The exact worst-case convergence rates of the proximal gradient method are established for minimizing the sum of a smooth strongly convex function and a non-smooth convex function whose proximal operator is available.
Abstract: We study the worst-case convergence rates of the proximal gradient method for minimizing the sum of a smooth strongly convex function and a non-smooth convex function whose proximal operator is available. We establish the exact worst-case convergence rates of the proximal gradient method in this setting for any step size and for different standard performance measures: objective function accuracy, distance to optimality and residual gradient norm. The proof methodology relies on recent developments in performance estimation of first-order methods, based on semidefinite programming. In the case of the proximal gradient method, this methodology allows obtaining exact and non-asymptotic worst-case guarantees that are conceptually very simple, although apparently new. On the way, we discuss how strong convexity can be replaced by weaker assumptions, while preserving the corresponding convergence rates. We also establish that the same fixed step size policy is optimal for all three performance measures. Finally, we extend recent results on the worst-case behavior of gradient descent with exact line search to the proximal case.
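The iteration whose worst case is being characterized is the standard proximal gradient step; a minimal sketch for the l1-regularized case (where the proximal operator is soft-thresholding) follows, with step size 1/L as one admissible choice among the arbitrary step sizes the analysis covers.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(grad_f, L, mu, x0, max_iter=500):
    """x_{k+1} = prox_{g/L}(x_k - (1/L) * grad f(x_k)), here with g = mu * ||.||_1."""
    x = np.asarray(x0, dtype=float)
    step = 1.0 / L
    for _ in range(max_iter):
        x = soft_threshold(x - step * grad_f(x), step * mu)
    return x
```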

51 citations


Journal ArticleDOI
TL;DR: It is proved that the whole sequence is convergent, if it is bounded, provided that the objective function is subanalytic and continuous on its domain and one of the two Difference-of-Convex components is differentiable with locally Lipschitz derivative.
Abstract: Difference-of-Convex programming and related algorithms, which constitute the backbone of nonconvex programming and global optimization, were introduced in 1985 by Pham Dinh Tao and have been extensively developed by Le Thi Hoai An and Pham Dinh Tao since 1994, becoming by now classic and increasingly popular. The algorithm is a descent method without line search, and every limit point of its generated sequence is a critical point of the related Difference-of-Convex program. Determining its convergence rate is a challenging problem whose answer is crucial from both theoretical and practical points of view. In this work, we treat this problem for the class of Difference-of-Convex programs with subanalytic data by using the nonsmooth form of the Lojasiewicz inequality. We prove that the whole sequence is convergent, if it is bounded, provided that the objective function is subanalytic and continuous on its domain and one of the two Difference-of-Convex components is differentiable with locally Lipschitz derivative. We also establish a result on the convergence rate, which depends on the Lojasiewicz exponent of the objective function. Finally, for both classes of trust-region subproblems and nonconvex quadratic programs, we show that the Lojasiewicz exponent is one half, and thereby the algorithm applied to these Difference-of-Convex programs is root-linearly convergent.
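A hedged sketch of the generic DCA iteration discussed above: linearize the second component at the current point and solve the resulting convex subproblem. The closed-form toy splitting of a quadratic below is purely illustrative.

```python
import numpy as np

def dca(solve_convex_subproblem, subgrad_h, x0, max_iter=200, tol=1e-10):
    """DCA for min g(x) - h(x): pick y_k in the subdifferential of h at x_k,
    then minimize the convex majorant x -> g(x) - <y_k, x>."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = subgrad_h(x)
        x_new = solve_convex_subproblem(y)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

# Toy DC split of the convex quadratic 0.5*x^T A x + b^T x with
# g(x) = 0.5*rho*||x||^2 + b^T x and h(x) = 0.5*x^T (rho*I - A) x, rho >= ||A||.
A = np.diag([3.0, 1.0])
b = np.array([1.0, 1.0])
rho = 3.0
x_star = dca(lambda y: (y - b) / rho,               # closed-form convex subproblem
             lambda x: (rho * np.eye(2) - A) @ x,
             x0=np.array([5.0, 5.0]))               # converges to -A^{-1} b
```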

50 citations


Journal ArticleDOI
TL;DR: In this paper, the authors show that subgradient methods converge linearly on sharp functions that are only weakly convex, provided that the methods are initialized within a fixed tube around the solution set.
Abstract: Subgradient methods converge linearly on a convex function that grows sharply away from its solution set. In this work, we show that the same is true for sharp functions that are only weakly convex, provided that the subgradient methods are initialized within a fixed tube around the solution set. A variety of statistical and signal processing tasks come equipped with good initialization and provably lead to formulations that are both weakly convex and sharp. Therefore, in such settings, subgradient methods can serve as inexpensive local search procedures. We illustrate the proposed techniques on phase retrieval and covariance estimation problems.
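One standard scheme in this setting is a subgradient method with normalized steps of geometrically decaying length; the sketch below uses illustrative constants, whereas the admissible decay rates in the paper depend on the sharpness and weak-convexity moduli and on the quality of the initialization.

```python
import numpy as np

def geometric_subgradient(subgrad, x0, lam0=1.0, q=0.9, max_iter=200):
    """Normalized subgradient steps with geometrically decaying lengths."""
    x = np.asarray(x0, dtype=float)
    lam = lam0
    for _ in range(max_iter):
        g = subgrad(x)
        norm_g = np.linalg.norm(g)
        if norm_g == 0.0:          # stationary point: stop
            break
        x = x - lam * g / norm_g   # unit-length subgradient step
        lam *= q                   # geometric decay of the step length
    return x

# Toy sharp function f(x) = ||x||, minimizer 0; start inside a "tube" around it.
x_hat = geometric_subgradient(lambda v: v / max(np.linalg.norm(v), 1e-16),
                              x0=np.array([3.0, -4.0]))
```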

42 citations


Journal ArticleDOI
TL;DR: It is shown that the tensor variational inequality has the property of global uniqueness and solvability under some assumptions, which is different from the existing result for the general variational inequality.
Abstract: In this paper, we consider a class of variational inequalities, where the involved function is the sum of an arbitrary given vector and a homogeneous polynomial defined by a tensor; we call it the tensor variational inequality. The tensor variational inequality is a natural extension of the affine variational inequality and the tensor complementarity problem. We show that a class of multi-person noncooperative games can be formulated as a tensor variational inequality. In particular, we investigate the global uniqueness and solvability of the tensor variational inequality. To this end, we first introduce two classes of structured tensors and discuss some related properties, and then, we show that the tensor variational inequality has the property of global uniqueness and solvability under some assumptions, which is different from the existing result for the general variational inequality.
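Concretely, the operator of a tensor variational inequality has the form F(x) = A x^{m-1} + q for an m-th order tensor A and a vector q; the numpy snippet below, with a made-up diagonal tensor, shows the contraction.

```python
import numpy as np

def tensor_map(A, x):
    """Compute A x^{m-1}: contract the last m-1 indices of the m-th order
    tensor A with copies of x."""
    out = A
    for _ in range(A.ndim - 1):
        out = out @ x        # each @ contracts the current last index with x
    return out

A = np.zeros((2, 2, 2))
A[0, 0, 0] = A[1, 1, 1] = 1.0          # diagonal tensor: F_i(x) = x_i^2 + q_i
q = np.array([-1.0, -4.0])
F = lambda x: tensor_map(A, x) + q
print(F(np.array([1.0, 2.0])))         # [0. 0.]: (1, 2) solves F(x) = 0
```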

40 citations


Journal ArticleDOI
TL;DR: A new and simple iterative method, which combines Halpern’s technique and the subgradient extragradient idea, is given, and under mild and standard assumptions, the strong convergence of the algorithm is established in uniformly smooth and convex Banach spaces.
Abstract: In this paper, we study variational inequalities involving a monotone and Lipschitz continuous mapping in Banach spaces. A new and simple iterative method, which combines Halpern’s technique and the subgradient extragradient idea, is given. Under mild and standard assumptions, we establish the strong convergence of our algorithm in a uniformly smooth and convex Banach space. We also present a modification of our method using a line-search approach; this enables us to obtain strong convergence in real and reflexive Banach spaces, without prior knowledge of the Lipschitz constant. Numerical experiments illustrate the performance of our new algorithm and provide a comparison with related algorithms. Our results generalize and extend some of the existing works in Hilbert spaces to Banach spaces as well as provide an extension from weak to strong convergence.
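For intuition, a finite-dimensional Hilbert-space (R^n) sketch of the Halpern-anchored subgradient extragradient step is given below; the Banach-space method of the paper additionally involves duality mappings and generalized projections, which this simplification omits, and the anchoring coefficients 1/(k+1) are one standard choice.

```python
import numpy as np

def halpern_subgradient_extragradient(F, project_C, x0, lam=0.1, max_iter=2000):
    """Hilbert-space sketch: subgradient extragradient step (one projection onto C,
    one cheap half-space projection) followed by Halpern anchoring toward x0."""
    x_anchor = np.asarray(x0, dtype=float)
    x = x_anchor.copy()
    for k in range(1, max_iter + 1):
        y = project_C(x - lam * F(x))                 # extragradient predictor
        d = (x - lam * F(x)) - y                      # normal of the half-space T_k
        w = x - lam * F(y)
        z = w - max(0.0, np.dot(d, w - y)) / max(np.dot(d, d), 1e-16) * d
        alpha = 1.0 / (k + 1)                         # Halpern anchoring coefficient
        x = alpha * x_anchor + (1.0 - alpha) * z
    return x
```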

40 citations


Journal ArticleDOI
TL;DR: A general methodology to approximate sets of data points through Non-uniform Rational Basis Spline (NURBS) curves is provided and the effectiveness of the proposed methodology is proven through some mathematical benchmarks as well as a real-world engineering problem.
Abstract: In this paper, a general methodology to approximate sets of data points through Non-uniform Rational Basis Spline (NURBS) curves is provided. The proposed approach aims at integrating and optimizing the full set of design variables (both integer and continuous) defining the shape of the NURBS curve. To this purpose, a new formulation of the curve fitting problem is required: it is stated in the form of a constrained nonlinear programming problem by introducing a suitable constraint on the curvature of the curve. In addition, the resulting optimization problem is defined over a domain having variable dimension, wherein both the number and the value of the design variables are optimized. To deal with this class of constrained nonlinear programming problems, a global optimization hybrid tool has been employed. The optimization procedure is split into two steps: first, an improved genetic algorithm optimizes both the value and the number of design variables by means of a two-level Darwinian strategy allowing the simultaneous evolution of individuals and species; second, the optimum solution provided by the genetic algorithm constitutes the initial guess for the subsequent gradient-based optimization, which aims at improving the accuracy of the fitting curve. The effectiveness of the proposed methodology is proven through some mathematical benchmarks as well as a real-world engineering problem.
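A much-simplified two-stage sketch of the hybrid strategy described above (a global evolutionary search followed by a gradient-based polish) is shown below for fitting a fixed-dimension parametric curve by least squares; the variable-dimension Darwinian machinery, the NURBS parameterization, and the curvature constraint of the paper are all omitted, and every name is illustrative.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

t = np.linspace(0.0, 1.0, 50)
data = np.sin(2.0 * np.pi * t) + 0.05 * np.random.default_rng(0).normal(size=t.size)

def residual(p):
    a, w, phi = p                            # a small parametric curve model
    return np.sum((a * np.sin(w * t + phi) - data) ** 2)

# Stage 1: global evolutionary search over box bounds.
stage1 = differential_evolution(residual,
                                bounds=[(-2, 2), (0, 10), (-np.pi, np.pi)], seed=0)
# Stage 2: gradient-based polish started from the evolutionary optimum.
stage2 = minimize(residual, stage1.x, method="BFGS")
print(stage2.x)
```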

39 citations


Journal ArticleDOI
TL;DR: The optimality system for a class of obstacle problems with nonmonotone perturbation is given and existence of optimal pairs for the optimal control problem is obtained.
Abstract: This paper deals with the optimality system of an optimal control problem governed by a nonlinear elliptic inclusion and a nonsmooth cost functional. The system describing the state consists of a variational–hemivariational inequality, the solution mapping of which with respect to the control is proved to be weakly closed. Existence of optimal pairs for the optimal control problem is obtained. Approximation results and abstract necessary optimality conditions of first order are derived based on the adapted penalty method and nonsmooth analysis techniques. Moreover, the optimality system for a class of obstacle problems with nonmonotone perturbation is given.

Journal ArticleDOI
TL;DR: A new first-order method is derived that resembles the optimized gradient method for strongly convex quadratic problems with known function parameters, yielding a linear convergence rate that is faster than that of the analogous version of the fast gradient method.
Abstract: First-order methods with momentum, such as Nesterov’s fast gradient method, are very useful for convex optimization problems, but can exhibit undesirable oscillations yielding slow convergence rates for some applications. An adaptive restarting scheme can improve the convergence rate of the fast gradient method, when the parameter of a strongly convex cost function is unknown or when the iterates of the algorithm enter a locally strongly convex region. Recently, we introduced the optimized gradient method, a first-order algorithm that has an inexpensive per-iteration computational cost similar to that of the fast gradient method, yet has a worst-case cost function rate that is twice as fast as that of the fast gradient method and that is optimal for large-dimensional smooth convex problems. Building upon the success of accelerating the fast gradient method using adaptive restart, this paper investigates similar heuristic acceleration of the optimized gradient method. We first derive a new first-order method that resembles the optimized gradient method for strongly convex quadratic problems with known function parameters, yielding a linear convergence rate that is faster than that of the analogous version of the fast gradient method. We then provide a heuristic analysis and numerical experiments that illustrate that adaptive restart can accelerate the convergence of the optimized gradient method. Numerical results also illustrate that adaptive restart is helpful for a proximal version of the optimized gradient method for nonsmooth composite convex functions.
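For concreteness, here is a hedged sketch of the optimized gradient method's primary recursion with a gradient-based restart test in the spirit of O'Donoghue and Candès; the paper's exact restart conditions and final-iterate parameter choice may differ.

```python
import numpy as np

def ogm_with_restart(grad_f, L, x0, max_iter=500):
    """Optimized gradient method (OGM) recursion plus a heuristic gradient restart."""
    x = np.asarray(x0, dtype=float)
    y = x.copy()
    theta = 1.0
    for _ in range(max_iter):
        g = grad_f(x)
        y_new = x - g / L                                        # gradient step
        theta_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * theta**2))  # momentum parameter
        x = (y_new
             + (theta - 1.0) / theta_new * (y_new - y)   # Nesterov-type momentum
             + theta / theta_new * (y_new - x))          # extra OGM momentum term
        if np.dot(g, y_new - y) > 0.0:   # restart heuristic: momentum points uphill
            theta_new = 1.0
            x = y_new
        y, theta = y_new, theta_new
    return y
```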

Journal ArticleDOI
TL;DR: In this paper, by virtue of the image space analysis, general scalar robust optimization problems under the strictly robust counterpart are considered, in which the uncertainties are included in the objective as well as the constraints.
Abstract: In this paper, by virtue of the image space analysis, general scalar robust optimization problems under the strictly robust counterpart are considered, in which the uncertainties are included in the objective as well as the constraints. In addition, based on a new type of corrected image, an equivalent relation between the uncertain optimization problem and its image problem is established, which provides an idea to tackle minimax problems. Furthermore, theorems of the robust weak alternative as well as sufficient characterizations of robust optimality conditions are achieved in the framework of linear and nonlinear (regular) weak separation functions. Moreover, several necessary and sufficient optimality conditions, especially saddle-point sufficient optimality conditions for scalar robust optimization problems, are obtained. Finally, a simple example of finding a shortest path is included to show the effectiveness of the results derived in this paper.

Journal ArticleDOI
TL;DR: The main goal is to derive optimality conditions of the Mayer problem for differential inclusions with initial point constraints; by using a discretization method guaranteeing transition to the continuous problem, the discrete and discrete-approximation inclusions are investigated.
Abstract: The present paper studies a new class of problems of optimal control theory with Sturm–Liouville-type differential inclusions involving second-order linear self-adjoint differential operators. Our main goal is to derive the optimality conditions of the Mayer problem for differential inclusions with initial point constraints. By using the discretization method guaranteeing transition to the continuous problem, the discrete and discrete-approximation inclusions are investigated. Necessary and sufficient conditions, containing both the Euler–Lagrange and Hamiltonian-type inclusions and “transversality” conditions, are derived. The idea for obtaining optimality conditions of the Mayer problem is based on applying locally adjoint mappings. This approach provides several important equivalence results concerning locally adjoint mappings to Sturm–Liouville-type set-valued mappings. The results strengthen and generalize to the problem with a second-order non-self-adjoint differential operator; a suitable choice of coefficients then transforms this operator to the desired Sturm–Liouville-type problem. In particular, if a positive-valued, scalar function specific to Sturm–Liouville differential inclusions is identically equal to one, we have immediately the optimality conditions for the second-order discrete and differential inclusions. Furthermore, practical applications of these results are demonstrated by optimization of some “linear” optimal control problems for which the Weierstrass–Pontryagin maximum condition is obtained.

Journal ArticleDOI
TL;DR: This work introduces second-order necessary and sufficient optimality conditions for cardinality-constrained problems, under which local uniqueness of Mordukhovich stationary points is guaranteed, and uses this observation to provide an extended local convergence theory for a Scholtes-type regularization method, which guarantees the existence and convergence of iterates under suitable assumptions.
Abstract: We consider nonlinear optimization problems with cardinality constraints. Based on a continuous reformulation, we introduce second-order necessary and sufficient optimality conditions. Under such a second-order condition, we can guarantee local uniqueness of Mordukhovich stationary points. Finally, we use this observation to provide extended local convergence theory for a Scholtes-type regularization method, which guarantees the existence and convergence of iterates under suitable assumptions. This convergence theory can also be applied to other regularization schemes.

Journal ArticleDOI
TL;DR: In a Hilbert space, the convergence properties of a general class of inertial forward–backward algorithms in the presence of perturbations, approximations, errors are analyzed to show in a unifying way the robustness of these algorithms.
Abstract: In a Hilbert space, we analyze the convergence properties of a general class of inertial forward–backward algorithms in the presence of perturbations, approximations, errors. These splitting algorithms aim to solve, by rapid methods, structured convex minimization problems. The function to be minimized is the sum of a continuously differentiable convex function whose gradient is Lipschitz continuous and a proper lower semicontinuous convex function. The algorithms involve a general sequence of positive extrapolation coefficients that reflect the inertial effect and a sequence in the Hilbert space that takes into account the presence of perturbations. We obtain convergence rates for values and convergence of the iterates under conditions involving the extrapolation and perturbation sequences jointly. This extends the recent work of Attouch–Cabot which was devoted to the unperturbed case. Next, we consider the introduction into the algorithms of a Tikhonov regularization term with vanishing coefficient. In this case, when the regularization coefficient does not tend too rapidly to zero, we obtain strong ergodic convergence of the iterates to the minimum norm solution. Taking a general sequence of extrapolation coefficients makes it possible to cover a wide range of accelerated methods. In this way, we show in a unifying way the robustness of these algorithms.
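A generic member of the class under study can be sketched as follows; the extrapolation rule k/(k+3) and the way the error sequence enters the forward step are assumptions chosen for illustration.

```python
import numpy as np

def inertial_forward_backward(grad_f, prox_g, L, x0, errors=None, max_iter=500):
    """Inertial forward-backward with an explicit perturbation sequence.
    prox_g(v, t) is assumed to compute prox_{t*g}(v)."""
    x_prev = np.asarray(x0, dtype=float)
    x = x_prev.copy()
    step = 1.0 / L
    for k in range(1, max_iter + 1):
        alpha = k / (k + 3.0)                           # extrapolation coefficient
        y = x + alpha * (x - x_prev)                    # inertial step
        e = errors(k) if errors is not None else 0.0    # perturbation of the gradient
        x_prev, x = x, prox_g(y - step * (grad_f(y) + e), step)
    return x
```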

Journal ArticleDOI
Peter Ochs
TL;DR: The abstract theory in this paper applies to the inertial forward–backward splitting method: iPiano—a generalization of the Heavy-ball method, and reveals an equivalence between iPiano and inertial averaged/alternating proximal minimization and projection methods.
Abstract: A local convergence result for an abstract descent method is proved. The sequence of iterates is attracted by a local (or global) minimum, stays in its neighborhood, and converges within this neighborhood. This result allows algorithms to exploit local properties of the objective function. In particular, the abstract theory in this paper applies to the inertial forward–backward splitting method: iPiano—a generalization of the Heavy-ball method. Moreover, it reveals an equivalence between iPiano and inertial averaged/alternating proximal minimization and projection methods. Key for this equivalence is the attraction to a local minimum within a neighborhood and the fact that, for a prox-regular function, the gradient of the Moreau envelope is locally Lipschitz continuous and expressible in terms of the proximal mapping. In a numerical feasibility problem, the inertial alternating projection method significantly outperforms its non-inertial variants.
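The iPiano iteration itself is compact enough to sketch; beta = 0 recovers plain forward-backward splitting, and the admissible pair (alpha, beta) must satisfy a step-size condition tied to the Lipschitz constant of the smooth part, which this sketch does not enforce.

```python
import numpy as np

def ipiano(grad_f, prox_g, alpha, beta, x0, max_iter=500):
    """iPiano: a forward-backward step plus a Heavy-ball inertial term.
    prox_g(v, t) is assumed to compute prox_{t*g}(v)."""
    x_prev = np.asarray(x0, dtype=float)
    x = x_prev.copy()
    for _ in range(max_iter):
        x_new = prox_g(x - alpha * grad_f(x) + beta * (x - x_prev), alpha)
        x_prev, x = x, x_new
    return x
```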

Journal ArticleDOI
Bin Gao, Feng Ma
TL;DR: It is confirmed that the symmetric alternating direction method of multipliers can also be regularized with an indefinite proximal term; the global convergence of the indefinite method is proved theoretically, and its worst-case convergence rate is established in an ergodic sense.
Abstract: The proximal alternating direction method of multipliers is a popular and useful method for linearly constrained, separable convex problems, especially for the linearized case. In the literature, convergence of the proximal alternating direction method has been established under the assumption that the proximal regularization matrix is positive semi-definite. Recently, it was shown that the regularizing proximal term in the proximal alternating direction method of multipliers does not necessarily have to be positive semi-definite, without any additional assumptions. However, it remains unknown as to whether the indefinite setting is valid for the proximal version of the symmetric alternating direction method of multipliers. In this paper, we confirm that the symmetric alternating direction method of multipliers can also be regularized with an indefinite proximal term. We theoretically prove the global convergence of the indefinite method and establish its worst-case convergence rate in an ergodic sense. In addition, the generalized alternating direction method of multipliers proposed by Eckstein and Bertsekas is a special case in our discussion. Finally, we demonstrate the performance improvements achieved when using the indefinite proximal term through experimental results.
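A hedged skeleton of the symmetric ADMM under discussion, with dual updates on both sides of the z-step; the subproblem solvers are left abstract, and the (possibly indefinite) proximal regularization analyzed in the paper would live inside the x-subproblem's objective. Parameter names and default values are illustrative.

```python
import numpy as np

def symmetric_admm(x_subproblem, z_subproblem, A, B, c, rho,
                   tau=0.9, s=0.9, x0=None, z0=None, lam0=None, max_iter=300):
    """Skeleton of symmetric ADMM for min f(x) + g(z) s.t. Ax + Bz = c.
    tau and s are the two dual step factors."""
    x, z, lam = x0, z0, lam0
    for _ in range(max_iter):
        x = x_subproblem(z, lam)                      # x-step (may carry the
                                                      # indefinite proximal term)
        lam = lam - tau * rho * (A @ x + B @ z - c)   # first (intermediate) dual step
        z = z_subproblem(x, lam)                      # z-step
        lam = lam - s * rho * (A @ x + B @ z - c)     # second dual step
    return x, z, lam
```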

Journal ArticleDOI
TL;DR: It is shown that using alternated inertia yields monotonically decreasing functional values, which contrasts with usual accelerated proximal gradient methods.
Abstract: In this paper, we investigate attractive properties of the proximal gradient algorithm with inertia. Notably, we show that using alternated inertia yields monotonically decreasing functional values, which contrasts with usual accelerated proximal gradient methods. We also provide convergence rates for the algorithm with alternated inertia, based on local geometric properties of the objective function. The results are put into perspective by discussions on several extensions (strongly convex case, non-convex case, and alternated extrapolation) and illustrations on common regularized optimization problems.
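The alternation is easy to state in code: extrapolate only on every other iteration, which is what produces the monotone decrease of objective values noted above. The extrapolation rule below is an illustrative choice.

```python
import numpy as np

def alternated_inertia_pg(grad_f, prox_g, L, x0, max_iter=500):
    """Proximal gradient with inertia applied only on every other iteration.
    prox_g(v, t) is assumed to compute prox_{t*g}(v)."""
    x_prev = np.asarray(x0, dtype=float)
    x = x_prev.copy()
    step = 1.0 / L
    for k in range(max_iter):
        if k % 2 == 0:
            y = x                                       # plain, non-inertial step
        else:
            y = x + (k / (k + 3.0)) * (x - x_prev)      # inertial step
        x_prev, x = x, prox_g(y - step * grad_f(y), step)
    return x
```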

Journal ArticleDOI
TL;DR: This work solves numerically shape optimization models with Dirichlet Laplace eigenvalues, in both volume-constrained and volume-unconstrained formulations, and advocates using the more general volume expressions of Eulerian derivatives.
Abstract: We consider the numerical solution of shape optimization models with Dirichlet Laplace eigenvalues. Both volume-constrained and volume-unconstrained formulations of the model problems are presented. In contrast to the literature, which uses boundary-type Eulerian derivatives in shape gradient descent methods, we advocate the more general volume expressions of Eulerian derivatives. We present two shape gradient descent algorithms based on the volume expressions. Numerical examples are presented to show that these algorithms are more effective than those based on the boundary expressions.

Journal ArticleDOI
TL;DR: It is shown that the solution set of tensor complementarity problems has a strict lower bound, and upper bounds on the spectral radius, which depend only on the principal diagonal entries of the tensor, are obtained.
Abstract: In this paper, one of our main purposes is to prove the boundedness of the solution set of tensor complementarity problems such that the specific bounds depend only on the structural properties of such a tensor. To achieve this purpose, we first prove that this class of structured tensors is strictly semi-positive. Subsequently, strict lower and upper bounds of operator norms are given for two positively homogeneous operators. Finally, with the help of the above upper bounds, we show that the solution set of tensor complementarity problems has a strict lower bound. Furthermore, upper bounds on the spectral radius are obtained, which depend only on the principal diagonal entries of tensors.

Journal ArticleDOI
TL;DR: The continuity and convexity of the nonlinear scalarizing function for sets are shown under suitable conditions, and the upper semicontinuity and the lower semicontinuity of strongly approximate solution mappings to the parametric set optimization problems are given.
Abstract: Hernández and Rodríguez-Marín (J Math Anal Appl 325:1–18, 2007) introduced a nonlinear scalarizing function for sets, which is a generalization of the Gerstewitz function. This paper investigates some properties of the nonlinear scalarizing function for sets. The continuity and convexity of the nonlinear scalarizing function for sets are shown under suitable conditions. As applications, the upper semicontinuity and the lower semicontinuity of strongly approximate solution mappings to the parametric set optimization problems are also given.

Journal ArticleDOI
TL;DR: This work poses and solves the problem to guide a collection of weakly interacting dynamical systems (agents, particles, etc.) to a specified terminal distribution as a mean-field game problem, and relies on and extends the theory of optimal mass transport and its generalizations.
Abstract: The purpose of this work is to pose and solve the problem to guide a collection of weakly interacting dynamical systems (agents, particles, etc.) to a specified terminal distribution. This is formulated as a mean-field game problem and is discussed in both non-cooperative and cooperative game settings. In the non-cooperative games setting, a terminal cost is used to accomplish the task; we establish that the map between terminal costs and terminal probability distributions is onto. In the cooperative games setting, the goal is to find a common optimal control that would drive the distribution of the agents to a targeted one. We focus on the cases when the underlying dynamics is linear and the running cost is quadratic. Our approach relies on and extends the theory of optimal mass transport and its generalizations.

Journal ArticleDOI
TL;DR: In this paper, the image space analysis is employed to study constrained inverse vector variational inequalities and two alternative theorems are established, which lead directly to sufficient and necessary optimality conditions of the inverse vector Variational inequalities.
Abstract: In this paper, we employ the image space analysis to study constrained inverse vector variational inequalities. First, sufficient and necessary optimality conditions for constrained inverse vector variational inequalities are established by using multiobjective optimization. A continuous nonlinear function is also introduced based on the oriented distance function and projection operator. This function is proven to be a weak separation function and a regular weak separation function under different parameter sets. Then, two alternative theorems are established, which lead directly to sufficient and necessary optimality conditions of the inverse vector variational inequalities. This provides a partial answer to an open question posed in Chen et al. (J Optim Theory Appl 166:460–479, 2015).

Journal ArticleDOI
TL;DR: Various properties of the solution set for a quadratic complementarity problem, including existence, compactness and uniqueness, are studied, and several results are established from assumptions given in terms of the comprising matrices of the underlying tensor, hence easily checkable.
Abstract: In this paper, we study quadratic complementarity problems, which form a subclass of nonlinear complementarity problems with the nonlinear functions being quadratic polynomial mappings. Quadratic complementarity problems serve as an important bridge linking linear complementarity problems and nonlinear complementarity problems. Various properties on the solution set for a quadratic complementarity problem, including existence, compactness and uniqueness, are studied. Several results are established from assumptions given in terms of the comprising matrices of the underlying tensor, hence easily checkable. Examples are given to demonstrate that the results improve or generalize the corresponding quadratic complementarity problem counterparts of the well-known nonlinear complementarity problem theory and broaden the boundary knowledge of nonlinear complementarity problems as well.

Journal ArticleDOI
TL;DR: Characterizations of the copulas attaining the bounds of multivariate Kendall's tau are provided, mainly in terms of theCopula measure, but also via Kendall’s distribution function and for shuffles of copulas.
Abstract: Kendall’s tau is one of the most popular measures of concordance, and even in the multivariate case exact upper and lower bounds of Kendall’s tau are known. The present paper provides characterizations of the copulas attaining the bounds of multivariate Kendall’s tau, mainly in terms of the copula measure, but also via Kendall’s distribution function and for shuffles of copulas.

Journal ArticleDOI
TL;DR: Some suitable subsets of scalarization image space are introduced to make equivalent characterizations for upper set (lower set, set, certainly, respectively) less ordered robustness for uncertain multiobjective optimization problems.
Abstract: This paper focuses on a unified approach to characterizing different kinds of multiobjective robustness concepts. Based on linear and nonlinear scalarization results for several set order relations, together with the help of image space analysis, some suitable subsets of scalarization image space are introduced to make equivalent characterizations for upper set (lower set, set, certainly, respectively) less ordered robustness for uncertain multiobjective optimization problems. In particular, the nonlinear scalarization functional plays a significant role in computing various multiobjective robust solutions. Finally, the corresponding examples are included to show the effectiveness of the results derived in this paper.

Journal ArticleDOI
TL;DR: In this article, the extragradient method is used to minimize the sum of two functions, the first one smooth and the second convex; under the Kurdyka–Łojasiewicz assumption, the generated sequence converges to a critical point of the problem and has finite length.
Abstract: We consider the extragradient method to minimize the sum of two functions, the first one being smooth and the second being convex. Under the Kurdyka–Łojasiewicz assumption, we prove that the sequence produced by the extragradient method converges to a critical point of the problem and has finite length. The analysis is extended to the case when both functions are convex. We provide, in this case, a sublinear convergence rate, as for gradient-based methods. Furthermore, we show that the recent small-prox complexity result can be applied to this method. Considering the extragradient method is an occasion to describe an exact line search scheme for proximal decomposition methods. We provide details for the implementation of this scheme for the one-norm regularized least squares problem and demonstrate numerical results which suggest that combining nonaccelerated methods with exact line search can be a competitive choice.
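A sketch of the extragradient iteration for this composite setting, specialized to the one-norm regularized least-squares problem mentioned in the abstract; the exact line search studied in the paper is replaced by a fixed step, which is an assumption.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def extragradient_lasso(A, b, mu, lam, x0, max_iter=500):
    """Extragradient for min 0.5*||Ax - b||^2 + mu*||x||_1: a forward-backward
    predictor, then a corrector reusing the gradient at the predictor."""
    grad = lambda x: A.T @ (A @ x - b)
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = soft_threshold(x - lam * grad(x), lam * mu)   # predictor
        x = soft_threshold(x - lam * grad(y), lam * mu)   # corrector
    return x
```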

Journal ArticleDOI
TL;DR: An improved multi-parametric programming algorithm based on active-set methods is introduced in this paper to overcome computational difficulties in handling genome-scale metabolic models.
Abstract: Flux balance analysis has proven an effective tool for analyzing metabolic networks. In flux balance analysis, reaction rates and optimal pathways are ascertained by solving a linear program, in which the growth rate is maximized subject to mass-balance constraints. A variety of cell functions in response to environmental stimuli can be quantified using flux balance analysis by parameterizing the linear program with respect to extracellular conditions. However, for most large, genome-scale metabolic networks of practical interest, the resulting parametric problem has multiple and highly degenerate optimal solutions, which are computationally challenging to handle. An improved multi-parametric programming algorithm based on active-set methods is introduced in this paper to overcome these computational difficulties. Degeneracy and multiplicity are handled, respectively, by introducing generalized inverses and auxiliary objective functions into the formulation of the optimality conditions. These improvements are especially effective for metabolic networks because their stoichiometry matrices are generally sparse; thus, fast and efficient algorithms from sparse linear algebra can be leveraged to compute generalized inverses and null-space bases. We illustrate the application of our algorithm to flux balance analysis of metabolic networks by studying a reduced metabolic model of Corynebacterium glutamicum and a genome-scale model of Escherichia coli. We then demonstrate how the critical regions resulting from these studies can be associated with optimal metabolic modes and discuss the physical relevance of optimal pathways arising from various auxiliary objective functions. Achieving more than fivefold improvement in computational speed over existing multi-parametric programming tools, the proposed algorithm proves promising in handling genome-scale metabolic models.
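A toy flux balance analysis LP, solved here with scipy for illustration, shows the problem class that the multiparametric algorithm sweeps over; the tiny stoichiometric matrix is made up, whereas real genome-scale models have thousands of sparse columns, which is why the paper leans on sparse generalized inverses and null-space bases.

```python
import numpy as np
from scipy.optimize import linprog

S = np.array([[1, -1, -1,  0],       # toy stoichiometry: 2 metabolites, 4 fluxes
              [0,  1,  0, -1]])
bounds = [(0, 10)] * 4               # flux bounds (uptake limited to 10)
c = np.array([0, 0, 0, -1.0])        # maximize v4 (growth) => minimize -v4
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(res.x, -res.fun)               # optimal fluxes and growth rate
```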

Journal ArticleDOI
TL;DR: Two types of fractional local error bounds for quadratic complementarity problems are established, one is based on the natural residual function and the other on the standard violation measure of the polynomial equalities and inequalities.
Abstract: In this article, two types of fractional local error bounds for quadratic complementarity problems are established, one is based on the natural residual function and the other on the standard violation measure of the polynomial equalities and inequalities. These fractional local error bounds are given with explicit exponents. A fractional local error bound with an explicit exponent via the natural residual function is new in the tensor/polynomial complementarity problems literature. The other fractional local error bounds take into account the sparsity structures, from both the algebraic and the geometric perspectives, of the third-order tensor in a quadratic complementarity problem. They also have explicit exponents, which improve the literature significantly.
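The first of these residuals is easy to state concretely: the natural residual r(x) = min(x, F(x)) vanishes exactly at the solutions of the complementarity problem. A small numpy sketch, with the quadratic map written through an assumed third-order tensor A, matrix B, and vector c, follows.

```python
import numpy as np

def natural_residual(x, A, B, c):
    """r(x) = min(x, F(x)) componentwise, with F(x) = A x^2 + B x + c,
    where (A x^2)_i = sum_{j,k} A_{ijk} x_j x_k for a 3rd-order tensor A."""
    Fx = (A @ x) @ x + B @ x + c     # two contractions evaluate A x^2
    return np.minimum(x, Fx)
```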