
Showing papers in "Optimization Methods & Software in 2015"


Journal ArticleDOI
TL;DR: This article investigates methods to solve a fundamental task in gas transportation, namely the validation of nomination problem, and describes a two-stage approach to solve the resulting complex and numerically difficult nonconvex mixed-integer nonlinear feasibility problem.
Abstract: In this article, we investigate methods to solve a fundamental task in gas transportation, namely the validation of nomination problem: given a gas transmission network consisting of passive pipelines and active, controllable elements and given an amount of gas at every entry and exit point of the network, find operational settings for all active elements such that there exists a network state meeting all physical, technical, and legal constraints. We describe a two-stage approach to solve the resulting complex and numerically difficult nonconvex mixed-integer nonlinear feasibility problem. The first phase consists of four distinct algorithms applying mixed-integer linear, mixed-integer nonlinear, nonlinear, and complementarity-constraint methods to compute possible settings for the discrete decisions. The second phase employs a precise continuous nonlinear programming model of the gas network. Using this setup, we are able to compute high-quality solutions to real-world industrial instances that are si...

116 citations


Journal ArticleDOI
TL;DR: The presented results demonstrate that the primal network simplex and cost-scaling algorithms are the most efficient and robust in general.
Abstract: An extensive computational analysis of several algorithms for solving the minimum-cost network flow problem is conducted. Some of the considered implementations were developed by the author and are available as part of an open-source C++ optimization library called LEMON (http://lemon.cs.elte.hu/). These codes are compared to other publicly available solvers: CS2, MCF, RelaxIV, PDNET, MCFSimplex, as well as the corresponding components of the IBM ILOG CPLEX Optimization Studio and the LEDA C++ library. This evaluation, to the author's knowledge, is more comprehensive than earlier studies in terms of the range of considered implementations as well as the diversity and size of problem instances. The presented results demonstrate that the primal network simplex and cost-scaling algorithms are the most efficient and robust in general. Other methods, however, can outperform them in particular cases. The network simplex code of the author turned out to be far superior to the other implementations of this method...

96 citations
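One of the classical algorithm families benchmarked in studies like this is the successive shortest path method. As a hedged illustration only (a minimal pure-Python sketch, not code from LEMON, CS2, or any of the solvers named above), it repeatedly finds a cheapest augmenting path in the residual network with Bellman–Ford and pushes flow along it:

```python
def min_cost_flow(n, edges, s, t, flow_target):
    """Successive shortest paths. edges: list of (u, v, capacity, cost)."""
    # Residual graph: each arc stored as [to, cap, cost, index_of_reverse_arc]
    graph = [[] for _ in range(n)]
    for u, v, cap, cost in edges:
        graph[u].append([v, cap, cost, len(graph[v])])
        graph[v].append([u, 0, -cost, len(graph[u]) - 1])
    total_flow, total_cost = 0, 0
    while total_flow < flow_target:
        # Bellman-Ford shortest path (residual costs may be negative)
        INF = float("inf")
        dist = [INF] * n
        dist[s] = 0
        parent = [None] * n  # (predecessor node, arc index)
        for _ in range(n - 1):
            updated = False
            for u in range(n):
                if dist[u] == INF:
                    continue
                for i, (v, cap, cost, _) in enumerate(graph[u]):
                    if cap > 0 and dist[u] + cost < dist[v]:
                        dist[v] = dist[u] + cost
                        parent[v] = (u, i)
                        updated = True
            if not updated:
                break
        if dist[t] == INF:
            break  # requested flow value is infeasible
        # Bottleneck capacity along the path, then augment
        push = flow_target - total_flow
        v = t
        while v != s:
            u, i = parent[v]
            push = min(push, graph[u][i][1])
            v = u
        v = t
        while v != s:
            u, i = parent[v]
            graph[u][i][1] -= push
            graph[v][graph[u][i][3]][1] += push
            v = u
        total_flow += push
        total_cost += push * dist[t]
    return total_flow, total_cost
```

The polynomial-time codes compared in the paper (network simplex, cost scaling) are far more sophisticated; this sketch only fixes the problem semantics.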


Journal ArticleDOI
TL;DR: brkgaAPI is an efficient and easy-to-use object-oriented application programming interface for the algorithmic framework of biased random-key genetic algorithms; it automatically handles the large portion of problem-independent modules that are part of the framework.
Abstract: In this paper, we describe brkgaAPI, an efficient and easy-to-use object-oriented application programming interface for the algorithmic framework of biased random-key genetic algorithms. Our cross-platform library automatically handles the large portion of problem-independent modules that are part of the framework, including population management and evolutionary dynamics, leaving to the user the task of implementing a problem-dependent procedure to convert a vector of random keys into a solution to the underlying optimization problem. Our implementation is written in the C++ programming language and may benefit from shared-memory parallelism when available.

94 citations
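The division of labour the abstract describes — the framework handles population management and evolutionary dynamics, the user supplies only a decoder from random keys to a solution cost — can be sketched as follows. This is an illustrative toy, not the brkgaAPI library itself, and the parameter defaults are assumptions:

```python
import random

def brkga_minimize(decoder, n_keys, pop_size=30, elite_frac=0.2,
                   mutant_frac=0.1, rho=0.7, generations=100, seed=0):
    """Minimal biased random-key GA: chromosomes are vectors of keys in [0,1);
    the user-supplied decoder maps a key vector to a solution cost."""
    rng = random.Random(seed)
    n_elite = max(1, int(elite_frac * pop_size))
    n_mutants = max(1, int(mutant_frac * pop_size))
    pop = [[rng.random() for _ in range(n_keys)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=decoder)
        elite, rest = pop[:n_elite], pop[n_elite:]
        offspring = []
        for _ in range(pop_size - n_elite - n_mutants):
            e = rng.choice(elite)   # elite parent
            o = rng.choice(rest)    # non-elite parent
            # biased uniform crossover: take each key from the elite
            # parent with probability rho
            offspring.append([e[i] if rng.random() < rho else o[i]
                              for i in range(n_keys)])
        # fresh random "mutant" chromosomes keep diversity
        mutants = [[rng.random() for _ in range(n_keys)]
                   for _ in range(n_mutants)]
        pop = elite + offspring + mutants
    best = min(pop, key=decoder)
    return best, decoder(best)
```

A typical decoder sorts the indices by key value (argsort), turning a key vector into a permutation; the GA itself never needs to know what the keys mean.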


Journal ArticleDOI
TL;DR: It is shown here that piecewise differentiable functions are lexicographically smooth in the sense of Nesterov, and that lexicographic derivatives of these functions comprise a particular subset of both the B-subdifferential and the Clarke Jacobian.
Abstract: Numerical methods for non-smooth equation-solving and optimization often require generalized derivative information in the form of elements of the Clarke Jacobian or the B-subdifferential. It is shown here that piecewise differentiable functions are lexicographically smooth in the sense of Nesterov, and that lexicographic derivatives of these functions comprise a particular subset of both the B-subdifferential and the Clarke Jacobian. Several recently developed methods for generalized derivative evaluation of composite piecewise differentiable functions are shown to produce identical results, which are also lexicographic derivatives. A vector forward mode of automatic differentiation (AD) is presented for evaluation of these derivatives, generalizing established methods and combining their computational benefits. This forward AD mode may be applied to any finite composition of known smooth functions, piecewise differentiable functions such as the absolute value, min, and max functions, and certain non-smooth functions which are not piecewise differentiable, such as the Euclidean norm. This forward AD mode may be implemented using operator overloading, does not require storage of a computational graph, and is computationally tractable relative to the cost of a function evaluation. An implementation in C is discussed.

84 citations
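The operator-overloading forward mode can be illustrated with a toy dual-number class. The sketch below propagates a single scalar directional derivative only (for abs, the directional derivative at the kink along tangent direction ẋ is |ẋ|) and omits the full lexicographic machinery of the paper:

```python
class Dual:
    """Forward-mode AD pair (val, dot) propagated by operator overloading --
    a toy scalar analogue of the vector forward mode discussed in the paper."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def _wrap(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._wrap(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __sub__(self, o):
        o = self._wrap(o)
        return Dual(self.val - o.val, self.dot - o.dot)
    def __mul__(self, o):
        o = self._wrap(o)
        # product rule for the derivative component
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def dabs(x):
    """|x| with directional-derivative propagation: away from the kink the
    usual chain rule applies; at val == 0 the directional derivative of the
    absolute value along direction dot is |dot|."""
    if x.val > 0:
        return Dual(x.val, x.dot)
    if x.val < 0:
        return Dual(-x.val, -x.dot)
    return Dual(0.0, abs(x.dot))
```

Seeding `dot = 1.0` on the input and reading `dot` off the output gives the directional derivative of the composition, with no computational graph stored — the property the abstract highlights.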


Journal ArticleDOI
TL;DR: This paper documents the branch-and-cut framework integrated into GloMIQO 2, which addresses mixed-integer quadratically constrained quadratic programs (MIQCQP) to ε-global optimality.
Abstract: The global mixed-integer quadratic optimizer, GloMIQO, addresses mixed-integer quadratically constrained quadratic programs (MIQCQP) to ε-global optimality. This paper documents the branch-and-cut framework integrated into GloMIQO 2. Cutting planes are derived from reformulation–linearization technique equations, convex multivariable terms, αBB convexifications, and low- and high-dimensional edge-concave aggregations. Cuts are based on both individual equations and collections of nonlinear terms in MIQCQP. Novel contributions of this paper include: development of a corollary to Crama's [Concave extensions for nonlinear 0-1 maximization problems, Math. Program. 61 (1993), pp. 53–60] necessary and sufficient condition for the existence of a cut dominating the termwise relaxation of a bilinear expression; algorithmic descriptions for deriving each class of cut; presentation of a branch-and-cut framework integrating the cuts. Computational results are presented along with comparison of the GloMIQO 2 performan...

58 citations
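The termwise relaxation of a bilinear expression mentioned above is the McCormick envelope: four linear inequalities that sandwich w = x·y over a box. A small sketch of the standard formulas (illustrative only, not GloMIQO code):

```python
def mccormick_bounds(x, y, xL, xU, yL, yU):
    """Termwise McCormick under/over-estimators for w = x*y
    on the box [xL, xU] x [yL, yU]."""
    # two valid underestimators; the envelope takes their maximum
    under = max(xL * y + x * yL - xL * yL,
                xU * y + x * yU - xU * yU)
    # two valid overestimators; the envelope takes their minimum
    over = min(xU * y + x * yL - xU * yL,
               xL * y + x * yU - xL * yU)
    return under, over
```

The envelope is exact at the corners of the box, which is why cuts dominating this relaxation (the subject of the Crama corollary above) are only possible under the condition the paper characterizes.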


Journal ArticleDOI
TL;DR: Penalty decomposition (PD) methods, in which each subproblem is solved by a block coordinate descent method, are proposed for general rank minimization problems, and it is shown that any accumulation point of the sequence generated by the PD methods satisfies the first-order optimality conditions of a nonlinear reformulation of the problems.
Abstract: In this paper we consider general rank minimization problems with rank appearing either in the objective function or as a constraint. We first establish that a class of special rank minimization problems has closed-form solutions. Using this result, we then propose penalty decomposition (PD) methods for general rank minimization problems in which each subproblem is solved by a block coordinate descent method. Under some suitable assumptions, we show that any accumulation point of the sequence generated by the PD methods satisfies the first-order optimality conditions of a nonlinear reformulation of the problems. Finally, we test the performance of our methods by applying them to the matrix completion and nearest low-rank correlation matrix problems. The computational results demonstrate that our methods are generally comparable or superior to the existing methods in terms of solution quality.

55 citations
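The closed-form solutions alluded to in the abstract are singular-value truncations: by the Eckart–Young theorem, the closest rank-k matrix in Frobenius norm is obtained by keeping the k largest singular triples. A pure-Python sketch of the rank-1 case via power iteration (illustrative only, not the authors' PD implementation):

```python
import math
import random

def rank1_project(A, iters=50, seed=0):
    """Closest rank-1 matrix to A in Frobenius norm (Eckart-Young),
    computed by power iteration on A^T A for the dominant singular triple."""
    m, n = len(A), len(A[0])
    rng = random.Random(seed)
    v = [rng.random() + 0.1 for _ in range(n)]  # random positive start
    for _ in range(iters):
        u = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]  # u = A v
        v = [sum(A[i][j] * u[i] for i in range(m)) for j in range(n)]  # v = A'u
        norm = math.sqrt(sum(x * x for x in v)) or 1.0
        v = [x / norm for x in v]                                      # renormalize
    # at convergence v is the top right singular vector, so A v = sigma * u1
    u = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
    return [[u[i] * v[j] for j in range(n)] for i in range(m)]
```

In a PD method this projection would serve as the closed-form inner step, alternated with a quadratic-penalty minimization over the other block of variables.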


Journal ArticleDOI
TL;DR: An alternating direction augmented Lagrangian (ADAL) method is proposed, based on a new variable splitting approach that results in subproblems that can be solved efficiently and exactly and the global convergence of the new algorithm is established for the anisotropic TV model.
Abstract: We consider the image denoising problem using total variation (TV) regularization. This problem can be computationally challenging to solve due to the non-differentiability and non-linearity of the regularization term. We propose an alternating direction augmented Lagrangian (ADAL) method, based on a new variable splitting approach that results in subproblems that can be solved efficiently and exactly. The global convergence of the new algorithm is established for the anisotropic TV model. For the isotropic TV model, by doing further variable splitting, we are able to derive an ADAL method that is globally convergent. We compare our methods with the split Bregman method [T. Goldstein and S. Osher, The split Bregman method for l1-regularized problems, SIAM J. Imaging Sci. 2 (2009), pp. 323], which is closely related to it, and demonstrate their competitiveness in computational performance on a set of standard test images.

53 citations
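The variable-splitting idea can be sketched in one dimension: introduce z = Dx for the difference operator D, soft-threshold z, and solve a tridiagonal linear system for x. This is a generic alternating-direction sketch under assumed parameters, not the paper's two-dimensional ADAL method:

```python
def soft(v, t):
    """Soft-thresholding, the proximal operator of t*||.||_1."""
    return [max(abs(c) - t, 0.0) * (1.0 if c >= 0 else -1.0) for c in v]

def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal linear system (Thomas algorithm)."""
    n = len(diag)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = sup[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i - 1] * cp[i - 1]
        cp[i] = sup[i] / m if i < n - 1 else 0.0
        dp[i] = (rhs[i] - sub[i - 1] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def tv_denoise_1d(b, lam, rho=1.0, iters=200):
    """ADMM with the splitting z = Dx for
       min_x 0.5*||x - b||^2 + lam*||Dx||_1   (1-D anisotropic TV)."""
    n = len(b)
    z, u = [0.0] * (n - 1), [0.0] * (n - 1)
    x = list(b)
    for _ in range(iters):
        # x-update: (I + rho*D'D) x = b + rho*D'(z - u), a tridiagonal solve
        w = [z[i] - u[i] for i in range(n - 1)]
        dtw = [-w[0]] + [w[j - 1] - w[j] for j in range(1, n - 1)] + [w[n - 2]]
        rhs = [b[j] + rho * dtw[j] for j in range(n)]
        diag = [1 + rho] + [1 + 2 * rho] * (n - 2) + [1 + rho]
        x = thomas([-rho] * (n - 1), diag, [-rho] * (n - 1), rhs)
        dx = [x[i + 1] - x[i] for i in range(n - 1)]
        # z-update: exact proximal step (soft-thresholding); dual update on u
        z = soft([dx[i] + u[i] for i in range(n - 1)], lam / rho)
        u = [u[i] + dx[i] - z[i] for i in range(n - 1)]
    return x
```

Both subproblems are solved exactly — the property the abstract emphasizes for the paper's splitting.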


Journal ArticleDOI
TL;DR: This paper investigates efficient techniques for computing an appropriate canonical polyadic tensor factorization based on the Kullback–Leibler divergence function and proposes novel subproblem solvers within the standard alternating block variable approach.
Abstract: Tensor factorizations with nonnegativity constraints have found application in analysing data from cyber traffic, social networks, and other areas. We consider application data best described as being generated by a Poisson process (e.g. count data), which leads to sparse tensors that can be modelled by sparse factor matrices. In this paper, we investigate efficient techniques for computing an appropriate canonical polyadic tensor factorization based on the Kullback–Leibler divergence function. We propose novel subproblem solvers within the standard alternating block variable approach. Our new methods exploit structure and reformulate the optimization problem as small independent subproblems. We employ bound-constrained Newton and quasi-Newton methods. We compare our algorithms against other codes, demonstrating superior speed for high accuracy results and the ability to quickly find sparse solutions.

51 citations
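A two-way (matrix) stand-in for the KL-based factorization is the classical multiplicative update, which monotonically decreases the KL divergence D(V || WH). The paper's tensor methods use bound-constrained Newton-type subproblem solvers instead, so this is only a hedged illustration of the objective being minimized:

```python
import math
import random

def kl_nmf(V, r, iters=100, seed=0):
    """Multiplicative (Lee-Seung) updates minimizing the KL divergence
    D(V || WH) for a nonnegative matrix V."""
    rng = random.Random(seed)
    m, n = len(V), len(V[0])
    W = [[rng.random() + 0.5 for _ in range(r)] for _ in range(m)]
    H = [[rng.random() + 0.5 for _ in range(n)] for _ in range(r)]
    approx = lambda: [[sum(W[i][k] * H[k][j] for k in range(r))
                       for j in range(n)] for i in range(m)]
    for _ in range(iters):
        A = approx()  # current model WH, used for the H update
        H = [[H[k][j] * sum(W[i][k] * V[i][j] / A[i][j] for i in range(m))
              / sum(W[i][k] for i in range(m)) for j in range(n)]
             for k in range(r)]
        A = approx()  # refreshed model, used for the W update
        W = [[W[i][k] * sum(H[k][j] * V[i][j] / A[i][j] for j in range(n))
              / sum(H[k][j] for j in range(n)) for k in range(r)]
             for i in range(m)]
    return W, H

def kl_div(V, WH):
    """Generalized KL divergence for strictly positive V."""
    return sum(V[i][j] * math.log(V[i][j] / WH[i][j]) - V[i][j] + WH[i][j]
               for i in range(len(V)) for j in range(len(V[0])))
```

Nonnegativity is preserved automatically because every update is a product of nonnegative factors — the constraint handled by bound-constrained solvers in the paper's setting.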


Journal ArticleDOI
TL;DR: A deterministic algorithm for solving nonconvex NLPs globally using a reduced-space approach is presented; the development of McCormick relaxations of implicit functions is central to the approach, and finite convergence to ε-optimal global solutions is guaranteed.
Abstract: A deterministic algorithm for solving nonconvex NLPs globally using a reduced-space approach is presented. These problems are encountered when real-world models are involved as nonlinear equality constraints and the decision variables include the state variables of the system. By solving the model equations for the dependent state variables as implicit functions of the independent decision variables, a significant reduction in dimensionality can be obtained. As a result, the inequality constraints and objective function are implicit functions of the independent variables, which can be estimated via a fixed-point iteration. Relying on the recently developed ideas of generalized McCormick relaxations and McCormick-based relaxations of algorithms and subgradient propagation, the development of McCormick relaxations of implicit functions is presented. Using these ideas, the reduced space, implicit optimization formulation can be relaxed. When applied within a branch-and-bound framework, finite convergence to ε-optimal global solutions is guaranteed.

47 citations


Journal ArticleDOI
TL;DR: The results show that the DSAMOPSO method, the dynamic self-adaptive multi-objective particle swarm optimization, outperforms the other three methods in terms of validity and efficiency.
Abstract: In this paper, we model a multi-mode time–cost–quality trade-off project scheduling problem under generalized precedence relations using mixed-integer mathematical programming. Several solution procedures, including the classical epsilon-constraint method, the efficient epsilon-constraint method, dynamic self-adaptive multi-objective particle swarm optimization (DSAMOPSO), and the multi-start partial bound enumeration algorithm, are provided to solve the proposed model. Several test problems are simulated and solved with the four methods, and the performance of the methods is compared according to a set of accuracy and diversity comparison metrics. Additional analyses and tests are performed on the generated Pareto fronts of the solution procedures. Computational experiments are conducted to determine the validity and the efficiency of the DSAMOPSO method. The results show that this method outperforms the other three methods. We also carry out a sensitivity analysis of the DSAMOPSO algorithm to study the effects of parameter changes on the CPU time.

36 citations
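The classical epsilon-constraint procedure named above optimizes one objective while bounding the other, sweeping the bound to trace an approximate Pareto front. A sketch on a toy biobjective problem (the candidate grid and epsilon sweep are assumptions for illustration):

```python
def epsilon_constraint(f1, f2, candidates, epsilons):
    """Classical epsilon-constraint method over a finite candidate set:
    minimize f1 subject to f2 <= eps, sweeping eps to build a front."""
    front = []
    for eps in epsilons:
        feasible = [x for x in candidates if f2(x) <= eps]
        if feasible:
            x = min(feasible, key=f1)
            point = (f1(x), f2(x))
            if point not in front:
                front.append(point)
    return front
```

As the sweep tightens the bound on f2, the attainable f1 worsens, so the collected points trade the two objectives off against each other.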


Journal ArticleDOI
TL;DR: It turns out that the alternating direction method of multipliers and the restarted version of the fast gradient method are the best methods for solving decomposable QPs in terms of the number of necessary, lower level QP solutions.
Abstract: This paper aims to collect, benchmark and implement state-of-the-art decomposable convex quadratic programming (QP) methods employing duality. In order to decouple the original problem, these methods relax some constraints by introducing dual variables and apply a hierarchical optimization scheme. In the lower level of this scheme, a sequence of parametric QPs is solved in parallel, while in the high-level problem, a gradient-based method is applied to achieve an optimal dual solution. Finding the optimal dual variables is a hard problem since the dual function is not twice continuously differentiable and not strongly convex. We investigate and compare several gradient-based methods using a set of convex QPs as benchmarks. We discuss the theoretical worst-case convergence properties of the investigated methods, but we also evaluate their practical convergence behaviour. The benchmark set as well as the suite of implemented algorithms are released as open-source software. From our experiments, it turns out...
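The hierarchical scheme described above can be sketched on a two-variable toy QP: relaxing the coupling constraint with a multiplier decouples the problem into scalar subproblems with closed-form solutions (the "lower level"), and a gradient step on the dual variable forms the "high-level" update. All numbers are illustrative:

```python
def dual_decomposition(alpha=0.5, iters=100):
    """Dual decomposition for  min (x-1)^2 + (y-3)^2  s.t.  x = y.
    Relaxing x = y with multiplier lam splits the Lagrangian into two
    independent scalar QPs; gradient ascent on the dual recouples them."""
    lam = 0.0
    for _ in range(iters):
        x = 1.0 - lam / 2.0   # argmin_x (x-1)^2 + lam*x  (closed form)
        y = 3.0 + lam / 2.0   # argmin_y (y-3)^2 - lam*y  (closed form)
        lam += alpha * (x - y)  # dual (super)gradient step on the coupling
    return x, y, lam
```

At the dual optimum the subproblem solutions agree (x = y = 2), recovering the primal solution — for nonsmooth or weakly convex duals, the accelerated and restarted gradient schemes benchmarked in the paper replace this plain ascent step.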

Journal ArticleDOI
TL;DR: This study links the huge computational gains compared to state-of-the-art MILP solvers to an analysis of subproblems on the branching tree, proves polynomial runtime of the algorithm for special cases, and provides numerical evidence for efficiency by means of a numerical benchmark problem.
Abstract: We are interested in methods to solve mixed-integer nonlinear optimal control problems constrained by ordinary differential equations and combinatorial constraints on some of the control functions. To solve these problems we use a first discretize, then optimize approach to get a specially structured mixed-integer nonlinear program (MINLP). We decompose this MINLP into a nonlinear program (NLP) and a mixed-integer linear program (MILP), which is called the combinatorial integral approximation problem (CIAP). Previous results guarantee an integer gap for the MINLP depending on the objective function value of the CIAP. The focus of this study is the analysis of the CIAP and of a tailored branch-and-bound method. We link the huge computational gains compared to state-of-the-art MILP solvers to an analysis of subproblems on the branching tree. To this end we study properties of the Lagrangian relaxation of the CIAP. Special focus is given to special ordered set constraints that are present due to an outer con...
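A standard way to construct a good feasible point for such integral approximation problems is sum-up rounding: turn a relaxed control α(t) ∈ [0,1] into a binary control whose running integral tracks the relaxed one. This is the generic rounding scheme often used in this line of work, not the paper's tailored branch-and-bound:

```python
def sum_up_rounding(alpha, dt=1.0):
    """Round a relaxed control alpha (values in [0,1] on a uniform grid)
    to a binary control w whose accumulated integral stays within dt/2
    of the relaxed control's integral at every grid point."""
    w = []
    sum_alpha = 0.0  # running integral of the relaxed control
    sum_w = 0.0      # running integral of the rounded control
    for a in alpha:
        sum_alpha += a * dt
        # switch on whenever the rounded control has fallen behind
        w_i = 1 if sum_alpha - sum_w >= 0.5 * dt else 0
        w.append(w_i)
        sum_w += w_i * dt
    return w
```

A short induction shows the accumulated deviation never exceeds dt/2 for a single control, which is the kind of integer gap bound the CIAP analysis in the paper sharpens.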

Journal ArticleDOI
TL;DR: A multiobjective optimization approach is proposed to find optimal control strategies for the minimization of active infectious and persistent latent individuals, as well as the cost associated with the implementation of the control strategies.
Abstract: Mathematical modelling can help to explain the nature and dynamics of infection transmissions, as well as support a policy for implementing those strategies that are most likely to bring public health and economic benefits. The paper addresses the application of optimal control strategies in a tuberculosis model. The model consists of a system of ordinary differential equations, which considers reinfection and post-exposure interventions. We propose a multiobjective optimization approach to find optimal control strategies for the minimization of active infectious and persistent latent individuals, as well as the cost associated with the implementation of the control strategies. Optimal control strategies are investigated for different values of the model parameters. The obtained numerical results cover a whole range of the optimal control strategies, providing valuable information about the tuberculosis dynamics and showing the usefulness of the proposed approach.

Journal ArticleDOI
TL;DR: In this article, the authors compared the parallel coordinate descent (PCDM) method with the diagonal quadratic approximation method (DQAM) for minimizing the augmented Lagrangian and showed that the two methods are equivalent for feasibility problems up to the selection of a step-size parameter.
Abstract: In this paper, we study decomposition methods based on separable approximations for minimizing the augmented Lagrangian. In particular, we study and compare the diagonal quadratic approximation method (DQAM) of Mulvey and Ruszczyński [A diagonal quadratic approximation method for large scale linear programs, Oper. Res. Lett. 12 (1992), pp. 205–215] and the parallel coordinate descent method (PCDM) of Richtárik and Takáč [Parallel coordinate descent methods for big data optimization, Technical report, November 2012, arXiv:1212.0873]. We show that the two methods are equivalent for feasibility problems up to the selection of a step-size parameter. Furthermore, we prove an improved complexity bound for PCDM under strong convexity, and show that this bound is at least 8(L′/L)(ω−1)² times better than the best known bound for DQAM, where ω is the degree of partial separability and L′ and L are the maximum and average of the block Lipschitz constants of the gradient of the quadratic penalty appearing in the augmented Lagrangian.

Journal ArticleDOI
TL;DR: This paper proposes and compares different approaches, within a general fixed-point framework, for dealing with multi-user (stochastic) equilibrium assignment with variable demand (VD).
Abstract: This paper proposes and compares different approaches, within the general fixed-point framework, that make it possible to deal with multi-user (stochastic) equilibrium assignment with variable demand (VD). The aim was threefold: (i) compare the efficiency and the effectiveness of the internal and the external approaches to stochastic equilibrium assignment with VD; (ii) investigate the efficiency and the effectiveness of different algorithms based on the method of successive averages and its extensions; (iii) investigate the effects of different averaging schemes, different convergence criteria and different path choice models, such as the Multinomial Logit, C-Logit and Multinomial Probit models. Analyses were carried out with respect to a real network and considering different indicators of both efficiency and effectiveness.
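The method of successive averages (MSA) central to point (ii) damps a fixed-point map with step 1/(k+1). A sketch on a toy two-route logit loading map (all numbers are illustrative, not from the paper's network):

```python
import math

def msa(F, x0, iters=300):
    """Method of successive averages: x_{k+1} = x_k + (F(x_k) - x_k)/(k+1),
    where F(x) is the network-loading response to flows x."""
    x = list(x0)
    for k in range(1, iters + 1):
        y = F(x)
        x = [xi + (yi - xi) / (k + 1) for xi, yi in zip(x, y)]
    return x

def logit_load(x):
    """Toy stochastic loading: unit demand split over two routes by a logit
    choice model on congested costs c_i = t_i + x_i."""
    c1, c2 = 1.0 + x[0], 2.0 + x[1]
    p1 = 1.0 / (1.0 + math.exp(c1 - c2))
    return [p1, 1.0 - p1]
```

At the fixed point the loaded flows reproduce themselves, i.e. the flows are consistent with the choice model evaluated at their own congested costs — the equilibrium condition the paper's algorithms target.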

Journal ArticleDOI
TL;DR: With adaptive updates of the Lagrange multipliers, the global convergence of the proposed augmented Lagrangian trust-region method for equality constrained optimization is proved.
Abstract: In this paper we propose an augmented Lagrangian trust region method for equality constrained optimization. Different from standard augmented Lagrangian methods which minimize the augmented Lagrangian function for fixed Lagrange multiplier and penalty parameter at each iteration, the proposed method tries to minimize its second-order approximation function. We propose a new strategy for adjusting the penalty parameter. With adaptive update of Lagrange multipliers, we prove the global convergence of the proposed method. Numerical results on test problems from the CUTEr collection are also reported.
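The standard augmented Lagrangian loop that the paper modifies can be sketched on a toy equality-constrained problem; here the inner minimization uses plain gradient descent for a fixed multiplier and penalty parameter, in place of the paper's second-order trust-region model:

```python
def augmented_lagrangian(mu=10.0, outer=30, inner=200, lr=0.05):
    """Basic augmented Lagrangian method for
       min x^2 + 2*y^2   s.t.   x + y = 1.
    Inner loop: gradient descent on the AL for fixed (lam, mu).
    Outer loop: first-order multiplier update."""
    x = y = lam = 0.0
    for _ in range(outer):
        for _ in range(inner):
            c = x + y - 1.0                 # constraint violation
            gx = 2.0 * x + lam + mu * c     # d/dx of the AL
            gy = 4.0 * y + lam + mu * c     # d/dy of the AL
            x -= lr * gx
            y -= lr * gy
        lam += mu * (x + y - 1.0)           # multiplier update
    return x, y, lam
```

The analytic solution is (x, y) = (2/3, 1/3) with multiplier −4/3, and the multiplier iteration converges to it linearly; the paper's contribution is replacing the inner exact minimization with a trust-region step on a second-order approximation, plus an adaptive penalty update.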

Journal ArticleDOI
TL;DR: A trust-region algorithm is proposed for constrained optimization problems in which the derivatives of the objective function are not available; the objective is approximated by a quadratic interpolation model, which is then minimized within the intersection of the feasible set with the trust region.
Abstract: We propose a trust-region algorithm for constrained optimization problems in which the derivatives of the objective function are not available. In each iteration, the objective function is approximated by a model obtained by quadratic interpolation, which is then minimized within the intersection of the feasible set with the trust region. Since the constraints are handled in the trust-region subproblems, all the iterates are feasible even if some interpolation points are not. The rules for constructing and updating the quadratic model and the interpolation set use ideas from the BOBYQA software, a widely used algorithm for box-constrained problems. The subproblems are solved by ALGENCAN, a competitive implementation of an Augmented Lagrangian approach for general-constrained problems. Some numerical results for the Hock–Schittkowski collection are presented, followed by a performance comparison between our proposal and three derivative-free algorithms found in the literature.

Journal ArticleDOI
TL;DR: An active-set method is presented for minimizing an objective that is the sum of a convex quadratic and a regularization term; the method has the flexibility of computing a first-order proximal gradient step or a subspace CG step at each iteration.
Abstract: We present an active-set method for minimizing an objective that is the sum of a convex quadratic and a regularization term. Unlike two-phase methods that combine a first-order active-set identification step and a subspace phase consisting of a cycle of conjugate gradient (CG) iterations, the method presented here has the flexibility of computing a first-order proximal gradient step or a subspace CG step at each iteration. The decision of which type of step to perform is based on the relative magnitudes of some scaled components of the minimum norm subgradient of the objective function. The paper establishes global rates of convergence, as well as work complexity estimates for two variants of our approach, which we call the interleaved iterative soft-thresholding algorithm–conjugate gradient (ISTA–CG) method. Numerical results illustrating the behaviour of the method on a variety of test problems are presented.
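The first-order proximal gradient step referred to above is the ISTA step: a gradient step on the quadratic followed by soft-thresholding for an l1 regularizer. A self-contained sketch on a tiny problem (the CG interleaving and active-set logic of the paper are omitted):

```python
def soft(v, t):
    """Soft-thresholding, the proximal operator of t*||.||_1."""
    return [max(abs(c) - t, 0.0) * (1.0 if c >= 0 else -1.0) for c in v]

def ista(A, b, tau, step, iters=200):
    """Iterative soft-thresholding for  min 0.5*||Ax - b||^2 + tau*||x||_1.
    'step' should be at most 1/L with L the largest eigenvalue of A'A."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]  # A'r
        x = soft([x[j] - step * g[j] for j in range(n)], step * tau)
    return x
```

The scaled components of the minimum-norm subgradient that the paper uses to switch between this step and a subspace CG step are not modelled here.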

Journal ArticleDOI
TL;DR: This paper considers generalized optimal solutions to interval linear programming problems and develops necessary and sufficient conditions for checking three kinds of these newly defined optimal solutions.
Abstract: Recently, some new concepts of optimal solution to interval linear programs have been introduced, which include some existing concepts of optimal solution as special cases. This paper considers generalized optimal solutions to interval linear programming problems. The new concepts of optimal solution are introduced in a more general and unified framework. Most existing optimal solution concepts for interval linear programs in the literature are special cases of this framework. Necessary and sufficient conditions for checking three kinds of these newly defined optimal solutions are developed.

Journal ArticleDOI
TL;DR: A Lagrangean relaxation method is proposed to obtain an acceptable feasible solution of the BLPP efficiently by solving a sequence of one-level mixed-integer problems.
Abstract: Under study is a special class of mixed-integer bi-level programming problem (BLPP) which arises in several areas such as engineering, transportation and control systems. The main characteristic of BLPP is that the outer optimization (upper level) problem is constrained by an inner optimization (lower level) problem. In this paper, a Lagrangean relaxation method is proposed to obtain an acceptable feasible solution by solving a sequence of one-level mixed-integer problems. Computational results on examples taken from the literature and on randomly generated problems indicate that the method is able to solve the BLPP efficiently.

Journal ArticleDOI
TL;DR: A two-phase descent direction method for unconstrained stochastic optimization problems is proposed, and the almost sure convergence of the proposed method is established under standard assumptions for descent direction and SA methods.
Abstract: A two-phase descent direction method for unconstrained stochastic optimization problems is proposed. A line-search method with an arbitrary descent direction is used to determine the step sizes during the initial phase, and the second phase performs the stochastic approximation (SA) step sizes. The almost sure convergence of the proposed method is established under standard assumptions for descent direction and SA methods. The algorithm used for practical implementation combines a line-search quasi-Newton (QN) method, in particular the Broyden–Fletcher–Goldfarb–Shanno (BFGS) and Symmetric Rank 1 (SR1) methods, with the SA iterations. Numerical results show good performance of the proposed method for different noise levels.

Journal ArticleDOI
TL;DR: This paper intends to present a simple and effective nested strategy based on the quantum binary particle swarm optimization (QBPSO) method for solving the bi-level mathematical model of the problem.
Abstract: This paper deals with a special class of competitive facility location problems, in which two non-cooperative firms compete to capture most of a given market in order to maximize their profit. This paper presents a simple and effective nested strategy based on the quantum binary particle swarm optimization (QBPSO) method for solving the bi-level mathematical model of the problem. In the solution approach, an improvement procedure is embedded into QBPSO to increase the convergence speed and generate more accurate solutions. Taguchi's method is employed to systematically determine the optimal values of the QBPSO parameters. Finally, computational results on large-scale instances with up to 300 locations and 350 clients (more than 100,000 variables and 300,000 constraints at each level) confirmed the method's efficiency in terms of solution quality and time.

Journal ArticleDOI
TL;DR: A hybridization of the Hestenes–Stiefel and Dai–Yuan conjugate gradient methods is proposed; comparative testing demonstrates the efficiency of the proposed hybrid CG method in the sense of the Dolan–Moré performance profile.
Abstract: Following Andrei's approach of combining the conjugate gradient parameters convexly, a hybridization of the Hestenes–Stiefel (HS) and Dai–Yuan (DY) conjugate gradient (CG) methods is proposed. The hybridization parameter is computed by solving the least-squares problem of minimizing the distance between search directions of the hybrid method and a three-term conjugate gradient method proposed by Zhang et al. which possesses the sufficient descent property. Also, Powell's non-negative restriction of the HS CG parameter is employed in the hybrid method. A brief global convergence analysis is made without convexity assumption on the objective function. Comparative testing results are reported; they demonstrate efficiency of the proposed hybrid CG method in the sense of the Dolan–Moré performance profile.
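A convex combination of the HS and DY parameters can be sketched as below. On a strongly convex quadratic with exact line search both parameters coincide with the classical CG choice, so any convex combination converges in at most n steps; this is illustrative only, with an assumed fixed hybridization parameter theta rather than the paper's least-squares rule:

```python
def hybrid_cg(Q, b, theta=0.5, iters=20):
    """Nonlinear CG with beta = (1-theta)*beta_HS + theta*beta_DY,
    demonstrated on 0.5*x'Qx - b'x with exact line search."""
    n = len(b)
    matvec = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(p * q for p, q in zip(u, v))
    x = [0.0] * n
    g = [gi - bi for gi, bi in zip(matvec(Q, x), b)]   # gradient Qx - b
    d = [-gi for gi in g]
    for _ in range(iters):
        Qd = matvec(Q, d)
        dQd = dot(d, Qd)
        if dQd <= 0.0:
            break
        alpha = -dot(g, d) / dQd                        # exact line search
        x = [xi + alpha * di for xi, di in zip(x, d)]
        g_new = [gi + alpha * qi for gi, qi in zip(g, Qd)]
        if dot(g_new, g_new) < 1e-20:
            break                                       # converged
        y = [a - c for a, c in zip(g_new, g)]
        dy = dot(d, y)
        beta_hs = dot(g_new, y) / dy                    # Hestenes-Stiefel
        beta_dy = dot(g_new, g_new) / dy                # Dai-Yuan
        beta = (1.0 - theta) * beta_hs + theta * beta_dy
        d = [-gi + beta * di for gi, di in zip(g_new, d)]
        g = g_new
    return x
```

For general nonconvex objectives the paper additionally applies Powell's restriction (beta clipped at zero) and an inexact line search, neither of which this quadratic sketch needs.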

Journal ArticleDOI
TL;DR: An efficient updating rule is introduced for the parameters of Yabe and Takano's CG algorithm, and the global convergence of the new algorithm is established on uniformly convex and general functions.
Abstract: In this paper, we propose a new conjugate gradient (CG) method which belongs to the Dai–Liao family of CG methods [New conjugacy conditions and related nonlinear conjugate gradient methods, Appl. Math. Optim. 43 (2001), pp. 87–101]. Babaie-Kafaki et al. [Two new conjugate gradient methods based on modified secant equations, J. Comput. Appl. Math. 234 (2010), pp. 1374–1386] made some modifications to the Yabe and Takano CG approach [Global convergence properties of nonlinear conjugate gradient methods with modified secant condition, Comput. Optim. Appl. 28 (2004), pp. 203–225] and obtained some appealing results in theory and practice. Here, we introduce an efficient updating rule for the parameters of the Yabe and Takano CG algorithm. Under some standard assumptions, we establish the global convergence property of the newly suggested algorithm on uniformly convex and general functions. Numerical results on some test problems from the CUTEr collection show the superiority of the proposed method over some existing CG methods in practice.

Journal ArticleDOI
TL;DR: All the relationships between monotonicity conditions are investigated in the framework of the so-called abstract EP, and the analysis is further detailed for variational inequalities and linear EPs, which also include Nash EPs with quadratic payoffs.
Abstract: In recent years, many solution methods for equilibrium problems (EPs) have been developed. Several different monotonicity conditions have been exploited to prove convergence. The paper investigates all the relationships between them in the framework of the so-called abstract EP. The analysis is further detailed for variational inequalities and linear EPs, which also include Nash EPs with quadratic payoffs.

Journal ArticleDOI
TL;DR: This work studies the rate of convergence of a projected Barzilai–Borwein method, which performs the Grippo–Lampariello–Lucidi non-monotone line search along the feasible direction, for convex constrained optimization, and establishes sublinear convergence of the considered method for general convex functions.
Abstract: We study the rate of convergence of a projected Barzilai–Borwein method, which performs the Grippo–Lampariello–Lucidi (GLL) non-monotone line search along the feasible direction, for convex constrained optimization. Under mild conditions, we establish sublinear convergence of the considered method for general convex functions. Moreover, we show that the rate of convergence is R-linear for strongly convex functions. We extend the convergence results to different versions of the method based on a general GLL non-monotone line search, an adaptive non-monotone line search, and a non-monotone line search that requires that an average of the successive function values decrease.
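The projected BB iteration being analysed can be sketched as follows; for brevity the GLL non-monotone line search is omitted and the raw BB step is always accepted, which suffices on this toy box-constrained quadratic:

```python
def projected_bb(grad, x0, lo, hi, iters=50, alpha0=1.0):
    """Projected Barzilai-Borwein gradient method for box constraints
    (sketch: no line search; the BB1 step is always accepted)."""
    clip = lambda v: [min(max(vi, l), h) for vi, l, h in zip(v, lo, hi)]
    x = clip(x0)
    g = grad(x)
    alpha = alpha0
    for _ in range(iters):
        x_new = clip([xi - alpha * gi for xi, gi in zip(x, g)])
        g_new = grad(x_new)
        s = [a - c for a, c in zip(x_new, x)]
        y = [a - c for a, c in zip(g_new, g)]
        ss = sum(a * a for a in s)
        if ss == 0.0:
            break  # projected step made no progress: stationary
        sy = sum(a * c for a, c in zip(s, y))
        if sy > 0.0:
            alpha = ss / sy   # BB1 spectral step length
        x, g = x_new, g_new
    return x
```

The BB step length approximates curvature from successive iterates, which is what makes the method competitive despite using only gradients; the GLL safeguard studied in the paper is what guarantees the stated convergence rates in general.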

Journal ArticleDOI
TL;DR: A modified trust region algorithm for nonlinear equations with the trust region radii converging to zero is presented, which is very efficient for both singular problems and nonsingular problems.
Abstract: In this paper, we present a modified trust region algorithm for nonlinear equations with the trust region radii converging to zero. The algorithm calculates the Jacobian after every two computations of the step. It preserves the global convergence as the traditional trust region algorithms. Moreover, it converges nearly q-cubically under the local error bound condition, which is weaker than the nonsingularity of the Jacobian at a solution. Numerical results show that the algorithm is very efficient for both singular problems and nonsingular problems.

Journal ArticleDOI
TL;DR: This work considers the inexact restoration and the composite-step sequential quadratic programming (SQP) methods, and relates them to the so-called perturbed SQP framework, where iterations of the methods are interpreted as certain structured perturbations of the basic SQP iterations.
Abstract: We consider the inexact restoration and the composite-step sequential quadratic programming (SQP) methods, and relate them to the so-called perturbed SQP framework. In particular, iterations of the methods in question are interpreted as certain structured perturbations of the basic SQP iterations. This gives a different insight into local behaviour of those algorithms, as well as improved or different local convergence and rate of convergence results.

Journal ArticleDOI
TL;DR: It is shown that, if the sampling radii for linear interpolation are properly selected, then the new algorithm has the same convergence rate as the original gradient-based algorithm, providing a novel global rate-of-convergence result for nonsmooth convex DFO with nonsmooth convex constraints.
Abstract: We consider the minimization of a nonsmooth convex function over a compact convex set subject to a nonsmooth convex constraint. We work in the setting of derivative-free optimization (DFO), assuming that the objective and constraint functions are available through a black-box that provides function values for lower- representation of the functions. Our approach is based on a DFO adaptation of the ε-comirror algorithm [Beck et al., The CoMirror algorithm for solving nonsmooth constrained convex problems, Oper. Res. Lett. 38(6) (2010), pp. 493–498]. Algorithmic convergence hinges on the ability to accurately approximate subgradients of lower- functions, which we prove is possible through linear interpolation. We show that, if the sampling radii for linear interpolation are properly selected, then the new algorithm has the same convergence rate as the original gradient-based algorithm. This provides a novel global rate-of-convergence result for nonsmooth convex DFO with nonsmooth convex constraints. We conclude with numerical testing that demonstrates the practical feasibility of the algorithm and some directions for further research.

Journal ArticleDOI
TL;DR: An inexact affine-scaling method for large-scale bound-constrained systems of nonlinear equations which often arise in practical applications when some of the unknowns are naturally subject to constraints due to physical arguments is introduced.
Abstract: Within the framework of affine-scaling trust-region methods for bound-constrained problems, we discuss the use of an inexact dogleg method as a tool for simultaneously handling the trust-region and the bound constraints while seeking for an approximate minimizer of the model. Then, we focus on large-scale bound-constrained systems of nonlinear equations which often arise in practical applications when some of the unknowns are naturally subject to constraints due to physical arguments. We introduce an inexact affine-scaling method for such a class of problems that employs the inexact dogleg procedure. Global convergence results are established without any Lipschitz assumption on the Jacobian matrix, and locally fast convergence is shown under standard assumptions. Convergence analysis is performed without specifying the scaling matrix that is used to handle the bounds, and a rather general class of scaling matrices is allowed in actual algorithms. Numerical results showing the performance of the method are...
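The classical (exact) dogleg step on which the paper's inexact variant builds can be sketched in two dimensions; the affine scaling that handles the bound constraints is omitted, so this is only the unconstrained trust-region piece:

```python
import math

def dogleg(B, g, delta):
    """Classical dogleg step for  min g'p + 0.5*p'Bp  s.t. ||p|| <= delta,
    with B positive definite (2-D for a closed-form Newton step)."""
    det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    # full (Gauss-)Newton step pN = -B^{-1} g
    pN = [-(B[1][1] * g[0] - B[0][1] * g[1]) / det,
          -(-B[1][0] * g[0] + B[0][0] * g[1]) / det]
    norm = lambda v: math.sqrt(sum(c * c for c in v))
    if norm(pN) <= delta:
        return pN                      # Newton step fits in the region
    Bg = [B[0][0] * g[0] + B[0][1] * g[1], B[1][0] * g[0] + B[1][1] * g[1]]
    gBg = g[0] * Bg[0] + g[1] * Bg[1]
    gg = g[0] * g[0] + g[1] * g[1]
    pU = [-(gg / gBg) * g[0], -(gg / gBg) * g[1]]   # unconstrained Cauchy point
    if norm(pU) >= delta:
        s = delta / norm(pU)
        return [s * c for c in pU]     # scaled steepest-descent step
    # otherwise walk the segment pU -> pN to the trust-region boundary
    d = [pN[0] - pU[0], pN[1] - pU[1]]
    a = d[0] * d[0] + d[1] * d[1]
    bq = 2.0 * (pU[0] * d[0] + pU[1] * d[1])
    c = pU[0] * pU[0] + pU[1] * pU[1] - delta * delta
    tau = (-bq + math.sqrt(bq * bq - 4.0 * a * c)) / (2.0 * a)
    return [pU[0] + tau * d[0], pU[1] + tau * d[1]]
```

The paper's inexact dogleg replaces the exact Newton step with an inexactly solved linear system and bends the path so that the bound constraints are honoured simultaneously with the trust region.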