Showing papers in "Optimization Methods & Software in 2000"


Journal ArticleDOI
TL;DR: This work discusses optimization techniques suitable for nonlinear programs of this type, with an emphasis on algorithms that guarantee global convergence.
Abstract: Many large optimization problems represent a family of models of varying size, corresponding to different discretizations. An example is optimal control problems where the solution is a function that is approximated by its values at finitely many points. We discuss optimization techniques suitable for nonlinear programs of this type, with an emphasis on algorithms that guarantee global convergence. The goal is to exploit the similar structure among the subproblems, using the solutions of smaller subproblems to accelerate the solution of larger, more refined subproblems.

181 citations
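
A minimal sketch of the coarse-to-fine idea described above (an illustrative Python example; the objective functional, the grids and the use of scipy.optimize.minimize are our assumptions, not the paper's setup):

```python
# Hypothetical illustration: solve a discretized problem on a coarse grid,
# then interpolate the solution as a warm start on each finer grid.
import numpy as np
from scipy.optimize import minimize

def objective(u, t):
    """Made-up discretized functional: track a target with a smoothness penalty."""
    h = t[1] - t[0]
    target = np.sin(2 * np.pi * t)
    tracking = np.sum((u - target) ** 2) * h
    smoothness = np.sum(np.diff(u) ** 2) / h
    return tracking + 0.01 * smoothness

u, t_prev = None, None
for n in (11, 41, 161):                      # successively finer discretizations
    t = np.linspace(0.0, 1.0, n)
    if u is None:
        u0 = np.zeros(n)                     # cold start on the coarsest grid
    else:
        u0 = np.interp(t, t_prev, u)         # warm start from the coarser solution
    res = minimize(objective, u0, args=(t,), method='L-BFGS-B')
    u, t_prev = res.x, t
    print(f"n={n:4d}  iterations={res.nit:3d}  J={res.fun:.6f}")
```

The iteration counts on the finer grids are typically much smaller than a cold start would need, which is the acceleration effect the abstract refers to.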


Journal ArticleDOI
TL;DR: Numerical results on fully dense publicly available datasets, numbering 20,000 to 1 million points in 32-dimensional space, confirm the theoretical results and demonstrate the ability to handle very large problems.
Abstract: A linear support vector machine formulation is used to generate a fast, finitely-terminating linear-programming algorithm for discriminating between two massive sets in n-dimensional space, where the number of points can be orders of magnitude larger than n. The algorithm creates a succession of sufficiently small linear programs that separate chunks of the data at a time. The key idea is that a small number of support vectors, corresponding to linear programming constraints with positive dual variables, are carried over between the successive small linear programs, each of which contains a chunk of the data. We prove that this procedure is monotonic and terminates in a finite number of steps at an exact solution that leads to an optimal separating plane for the entire dataset. Numerical results on fully dense publicly available datasets, numbering 20,000 to 1 million points in 32-dimensional space, confirm the theoretical results and demonstrate the ability to handle very large problems.

177 citations
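
A rough sketch of the chunking idea (illustrative Python, not the authors' code; the 1-norm LP-SVM formulation, tolerances and synthetic data are our assumptions):

```python
# Chunking for a 1-norm linear SVM: solve a sequence of small LPs, carrying
# the active ("support vector") points forward between chunks.
import numpy as np
from scipy.optimize import linprog

def fit_chunk(A, d, nu=0.1):
    """Solve min nu*||w||_1 + sum(y)  s.t.  d_i*(A_i.w - g) + y_i >= 1, y >= 0,
    with w split as u - v, u, v >= 0. Returns (w, g, active_mask)."""
    m, n = A.shape
    # variables: [u (n), v (n), g (1), y (m)]
    c = np.concatenate([nu * np.ones(2 * n), [0.0], np.ones(m)])
    DA = d[:, None] * A
    A_ub = np.hstack([-DA, DA, d[:, None], -np.eye(m)])  # -d(Aw - g) - y <= -1
    b_ub = -np.ones(m)
    bounds = [(0, None)] * (2 * n) + [(None, None)] + [(0, None)] * m
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method='highs')
    assert res.status == 0
    x = res.x
    w, g, y = x[:n] - x[n:2 * n], x[2 * n], x[2 * n + 1:]
    margins = d * (A @ w - g) + y
    active = margins <= 1.0 + 1e-8       # tight constraints ~ support vectors
    return w, g, active

rng = np.random.default_rng(0)
A = rng.normal(size=(2000, 5))
d = np.where(A[:, 0] + 0.3 * rng.normal(size=2000) > 0, 1.0, -1.0)
carried_A, carried_d = np.empty((0, 5)), np.empty(0)
for chunk in np.array_split(np.arange(len(d)), 10):
    Ac = np.vstack([carried_A, A[chunk]])
    dc = np.concatenate([carried_d, d[chunk]])
    w, g, active = fit_chunk(Ac, dc)
    carried_A, carried_d = Ac[active], dc[active]  # carry support vectors forward
print("plane:", w, g, "  points carried:", len(carried_d))
```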


Journal ArticleDOI
TL;DR: In this article, a quasi-Newton method with a derivative-free line search is proposed for smooth nonlinear equations and shown to converge globally and superlinearly; for equations involving a mapping with positive definite Jacobian matrices, a norm descent variant is also proposed.
Abstract: In this paper, by using a derivative-free line search, we propose a quasi-Newton method for smooth nonlinear equations. Under appropriate conditions, we show that the proposed quasi-Newton method converges globally and superlinearly. For nonlinear equations involving a mapping with positive definite Jacobian matrices, we also propose a norm descent quasi-Newton method and establish its global and superlinear convergence. Finally, we report results of preliminary numerical experiments.

157 citations
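
A sketch in the spirit of the abstract (our illustration, not the paper's exact method): a Broyden-type update combined with a line search that uses only residual norms, no derivatives, with a summable nonmonotonicity budget eta_k; the constants and the toy system are assumptions.

```python
import numpy as np

def broyden_df(F, x, tol=1e-10, max_iter=200, s=1e-4):
    n = len(x)
    B = np.eye(n)                              # initial Jacobian approximation
    Fx = F(x)
    for k in range(max_iter):
        if np.linalg.norm(Fx) < tol:
            break
        d = np.linalg.solve(B, -Fx)            # quasi-Newton direction
        eta = 1.0 / (k + 1) ** 2               # summable "nonmonotonicity" budget
        alpha = 1.0
        while True:                            # derivative-free line search
            Fn = F(x + alpha * d)
            if (np.linalg.norm(Fn) <= (1 + eta) * np.linalg.norm(Fx)
                    - s * alpha ** 2 * np.dot(d, d)):
                break
            alpha *= 0.5
            if alpha < 1e-12:                  # safeguard for this sketch
                break
        step = alpha * d
        y = Fn - Fx
        B += np.outer(y - B @ step, step) / np.dot(step, step)  # Broyden update
        x, Fx = x + step, Fn
    return x

# toy system with roots at (1, 1) and (-1, -1), purely illustrative
F = lambda x: np.array([x[0] ** 2 - 1.0, x[0] * x[1] - 1.0])
print(broyden_df(F, np.array([2.0, 0.5])))
```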


Journal ArticleDOI
TL;DR: In this article, the authors propose a new method for multigroup discrimination based on a hierarchical procedure (Multi-Group Hierarchical Discrimination, M.H.DIS), which is evaluated in eight real-world case studies from the fields of finance and marketing.
Abstract: The discrimination problem is of major interest in fields such as environmental management, human resources management, production management, finance, marketing, medicine, etc. For decades this problem has been studied from a multivariate statistical point of view. Recently the possibilities of new approaches have been explored, based mainly on mathematical programming. This paper follows the methodological framework of multicriteria decision aid (MCDA) to propose a new method for multigroup discrimination based on a hierarchical procedure (Multi-Group Hierarchical Discrimination, M.H.DIS). The performance of the M.H.DIS method is evaluated in eight real-world case studies from the fields of finance and marketing. A comparison is also performed with other MCDA methods.

108 citations


Journal ArticleDOI
TL;DR: In this article, the authors extend the self-dual embedding technique to solve general conic convex programs, including semidefinite programming, without the need to consider the initialization problem.
Abstract: How to initialize an algorithm to solve an optimization problem is of great theoretical and practical importance. In the simplex method for linear programming this issue is resolved by either the two-phase approach or using the so-called big-M technique. In the interior point method, there is a more elegant way to deal with the initialization problem, viz. the self-dual embedding technique proposed by Ye, Todd and Mizuno [30]. For linear programming this technique makes it possible to identify an optimal solution or conclude the problem to be infeasible/unbounded by solving its embedded self-dual problem. The embedded self-dual problem has a trivial initial solution and has the same structure as the original problem. Hence, it eliminates the need to consider the initialization problem at all. In this paper, we extend this approach to solve general conic convex programming, including semidefinite programming. Since a nonlinear conic convex programming problem may lack the so-called strict complementarity prop...

93 citations
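
For intuition, the linear programming case of the embedding can be sketched as follows (one standard form of the homogeneous self-dual model; details differ across papers):

```latex
% Homogeneous self-dual embedding of the LP pair
%   (P) min c^T x  s.t. Ax = b, x >= 0,   (D) max b^T y  s.t. A^T y + s = c, s >= 0:
\begin{aligned}
  Ax - b\tau &= 0,\\
  -A^{\mathsf T} y + c\tau - s &= 0,\\
  b^{\mathsf T} y - c^{\mathsf T} x - \kappa &= 0,\\
  x \ge 0,\quad s \ge 0,\quad \tau \ge 0,&\quad \kappa \ge 0.
\end{aligned}
```

A strictly complementary solution has either tau > 0, yielding the optimal pair (x/tau, y/tau, s/tau), or kappa > 0, certifying primal or dual infeasibility; the Ye-Todd-Mizuno construction adds a normalizing constraint so that a trivial strictly interior starting point is available. The paper extends this device from the LP cone to general convex cones.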


Journal ArticleDOI
TL;DR: The software is evaluated from the viewpoint of engineers addressing black box global optimization problems, i.e. problems with an objective function whose explicit form is unknown and whose evaluation is costly, with comparative results obtained on a set of eleven test problems.
Abstract: We relate our experience with six public-domain global optimization software products and report comparative computational results obtained on a set of eleven test problems. The techniques used by the software under study include integral global optimization, genetic algorithms, simulated annealing, clustering, random search, continuation, Bayesian, tunneling, and multi-level methods. The test set contains practical problems: least median of squares regression, protein folding, and multidimensional scaling. These include non-differentiable, and also discontinuous, objective functions, some with an exponential number of local minima. The dimension of the search space ranges from 1 to 20. We evaluate the software in view of engineers addressing black box global optimization problems, i.e. problems with an objective function whose explicit form is unknown and whose evaluation is costly. Such an objective function is common in industry. It is for instance given in the form of computer programmes involving...

52 citations


Journal ArticleDOI
TL;DR: This paper proposes efficient new linesearch algorithms for solving large scale unconstrained optimization problems which exploit any local nonconvexity of the objective function, and proves global convergence to second-order critical points for the new algorithm.
Abstract: In this paper we propose efficient new linesearch algorithms for solving large scale unconstrained optimization problems which exploit any local nonconvexity of the objective function. Current algorithms in this class typically compute a pair of search directions at every iteration: a Newton-type direction, which ensures both global and fast asymptotic convergence, and a negative curvature direction, which enables the iterates to escape from the region of local non-convexity. A new point is generated by performing a search along a line or a curve obtained by combining these two directions. However, in almost all of these algorithms, the relative scaling of the directions is not taken into account. We propose a new algorithm which accounts for the relative scaling of the two directions. To do this, only the more promising of the two directions is selected at any given iteration, and a linesearch is performed along the chosen direction. The appropriate direction is selected by estimating the rate of decreas...

47 citations
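
An illustrative sketch of the selection idea (our own reading; the selection rule, scaling and constants are assumptions, not the paper's exact test):

```python
# Choose between a (modified) Newton direction and a negative curvature
# direction by comparing their predicted rates of decrease per unit step.
import numpy as np

def choose_direction(g, H):
    lam, V = np.linalg.eigh(H)
    d_nc = None
    if lam[0] < -1e-10:                      # local nonconvexity detected
        d_nc = V[:, 0]                       # unit eigenvector, smallest eigenvalue
        if g @ d_nc > 0:                     # orient it downhill
            d_nc = -d_nc
    shift = max(0.0, 1e-8 - lam[0])          # make the Newton system positive definite
    d_n = np.linalg.solve(H + shift * np.eye(len(g)), -g)
    if d_nc is None:
        return d_n
    rate_n = -(g @ d_n) / np.linalg.norm(d_n)        # first-order rate, normalized
    rate_nc = -(g @ d_nc) + 0.5 * abs(lam[0])        # first- plus second-order terms
    return d_nc if rate_nc > rate_n else d_n

def linesearch_step(f, grad, hess, x, beta=0.5, c1=1e-4):
    g, H = grad(x), hess(x)
    d = choose_direction(g, H)
    t, fx = 1.0, f(x)
    while f(x + t * d) > fx + c1 * t * (g @ d) and t > 1e-12:  # Armijo backtracking
        t *= beta
    return x + t * d
```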


Journal ArticleDOI
TL;DR: A generalization of the Penalty/Barrier Multiplier algorithm to semidefinite programming, based on a matrix form of Lagrange multipliers, is presented; it gives computationally tractable dual bounds, produced by the Legendre transformation of the penalty function.
Abstract: We present a generalization of the Penalty/Barrier Multiplier algorithm for semidefinite programming, based on a matrix form of Lagrange multipliers. Our approach allows the use of, among others, logarithmic, shifted logarithmic, exponential and a very effective quadratic-logarithmic penalty/barrier functions. We present a dual analysis of the method, based on its correspondence to a proximal point algorithm with a nonquadratic distance-like function. We give computationally tractable dual bounds, which are produced by the Legendre transformation of the penalty function. Numerical results for large-scale problems from robust control, robust truss topology design and free material design demonstrate the high efficiency of the algorithm.

40 citations
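
For orientation, the scalar penalty/barrier multiplier scheme that the matrix method generalizes can be sketched as follows (the matrix method applies an analogous transformation to the eigenvalues of a semidefinite constraint; this scalar form is for intuition only):

```latex
% Scalar PBM scheme for min f(x) s.t. g_i(x) <= 0:
\begin{aligned}
  x^{k+1} &\approx \operatorname*{arg\,min}_x \; f(x)
    + \sum_i u_i^k \, p \,\varphi\!\big(g_i(x)/p\big),\\
  u_i^{k+1} &= u_i^k \,\varphi'\!\big(g_i(x^{k+1})/p\big),
\end{aligned}
```

where phi is a smooth convex penalty/barrier function with phi(0) = 0 and phi'(0) = 1 (the logarithmic, shifted logarithmic, exponential and quadratic-logarithmic choices mentioned above all fit this mold), and p > 0 is a penalty parameter.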


Journal ArticleDOI
TL;DR: The paper's main attention is paid to optimality conditions and the respective constraint qualification for mathematical programs with a complementarity problem among the constraints; in addition, a simple numerical approach based on the exact penalization of the complementarity constraint is proposed.
Abstract: The paper deals with mathematical programs, where a complementarity problem arises among the constraints. The main attention is paid to optimality conditions and the respective constraint qualification. In addition, a simple numerical approach is proposed based on the exact penalization of the complementarity constraint.

34 citations
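
Schematically, the penalization can be written as follows (our notation; the paper's precise formulation may differ):

```latex
% A program with a complementarity constraint and its exact penalization:
\begin{aligned}
  &\min_{x}\; f(x) \quad \text{s.t.}\quad 0 \le G(x) \perp H(x) \ge 0,\; x \in X\\
  \Longrightarrow\quad
  &\min_{x}\; f(x) + \rho\, G(x)^{\mathsf T} H(x)
  \quad \text{s.t.}\quad G(x) \ge 0,\; H(x) \ge 0,\; x \in X,
\end{aligned}
```

where, under suitable conditions, the penalty is exact: for some finite rho, solutions of the penalized problem solve the original program.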


Journal ArticleDOI
TL;DR: In this article, a nonmonotone hybrid method for solving nonlinear systems is proposed, which consists in matching a Newton-like method stabilized by nonmonotone line-search with a cheap direct search method.
Abstract: A nonmonotone hybrid method for solving nonlinear systems is proposed. The idea consists in matching a Newton-like method stabilized by nonmonotone line-search with a cheap direct search method that...

27 citations
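
A sketch of what such a hybrid can look like (our simplified reading of the idea above; the constants, the merit function ||F|| and the coordinate-search fallback are assumptions):

```python
import numpy as np

def hybrid_solve(F, J, x, M=5, gamma=1e-4, tol=1e-10, max_iter=100):
    hist = [np.linalg.norm(F(x))]                # recent merit values
    for _ in range(max_iter):
        if hist[-1] < tol:
            break
        d = np.linalg.solve(J(x), -F(x))         # Newton-like direction
        alpha, accepted = 1.0, False
        ref = max(hist[-M:])                     # nonmonotone reference value
        while alpha > 1e-8:                      # nonmonotone line search
            if np.linalg.norm(F(x + alpha * d)) <= ref - gamma * alpha ** 2 * (d @ d):
                x, accepted = x + alpha * d, True
                break
            alpha *= 0.5
        if not accepted:                         # cheap direct-search fallback
            h = 0.1 * max(1.0, np.linalg.norm(x))
            while h > 1e-8:
                trials = [x + h * e for s in (1, -1) for e in s * np.eye(len(x))]
                best = min(trials, key=lambda z: np.linalg.norm(F(z)))
                if np.linalg.norm(F(best)) < hist[-1]:
                    x = best
                    break
                h *= 0.5
        hist.append(np.linalg.norm(F(x)))
    return x
```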


Journal ArticleDOI
TL;DR: This paper proposes an algorithm that removes a drawback of Odyssée (the generation of huge executable adjoint codes) by means of modifications of both the tangent linear code and the adjoint code; the method is used for the differentiation of Meso-NH with respect to the state variables.
Abstract: This paper describes a largely automatic method for optimizing the adjoint codes produced by automatic differentiation tools. The automatic differentiator Odyssée, which generates both tangent linear (Forward Automatic Differentiation) and adjoint (Reverse Automatic Differentiation) codes, is chosen to illustrate the discussion. As with many other tools, Odyssée allows hand coding of the linearized codes to be avoided, but it sometimes generates huge executable codes that cannot be run. We propose an algorithm that removes this drawback by means of modifications of both the tangent linear code and the adjoint code. In particular, a study of the nonlinear parts of the code is proposed to determine the parts of the trajectory that must be stored; the other parts are not stored. In the second part of the paper, this method is used for the differentiation of Meso-NH with respect to the state variables. When generated in such a way, the resulting codes are efficient.

Journal ArticleDOI
TL;DR: A parallel algorithm for simulated annealing in the continuous case, the Multiple Trials and Adaptive Supplementary Search (MTASS) algorithm, is presented, based on a combination of multiple trials, local improved searches and an adaptive cooling schedule.
Abstract: In this paper a parallel algorithm for simulated annealing (SA) in the continuous case, the Multiple Trials and Adaptive Supplementary Search (MTASS) algorithm, is presented. It is based on a combination of multiple trials, local improved searches and an adaptive cooling schedule. The results in optimizing some standard test problems are compared with those of some sequential SA algorithms and another parallel probabilistic algorithm.
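
A toy sketch of the multiple-trials component (our illustration; the cooling rule, constants and test function are assumptions, and the trial evaluations are the part that could run in parallel):

```python
import numpy as np

def sa_multiple_trials(f, x, T=1.0, trials=8, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    fx = f(x)
    for _ in range(iters):
        cand = x + T * rng.normal(size=(trials, len(x)))   # multiple trials
        vals = np.array([f(c) for c in cand])              # parallelizable
        j = vals.argmin()
        accept = vals[j] < fx or rng.random() < np.exp(-(vals[j] - fx) / T)
        if accept:
            x, fx = cand[j], vals[j]
        T *= 0.95 if accept else 0.999       # crude adaptive cooling schedule
    return x, fx

rosenbrock = lambda z: (1 - z[0]) ** 2 + 100 * (z[1] - z[0] ** 2) ** 2
print(sa_multiple_trials(rosenbrock, np.array([-1.5, 2.0])))
```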

Journal ArticleDOI
TL;DR: This paper considers employing extra updates for the BFGS method for unconstrained optimization and concludes that some new algorithms are competitive with the standard BFGS method.
Abstract: This paper considers employing extra updates for the BFGS method for unconstrained optimization. The usual BFGS Hessian is updated a number of times, depending on the information of the first order derivatives, to obtain a new Hessian approximation at each iteration. Two approaches are proposed. One of them has the same properties of global and superlinear convergence on convex functions that the BFGS method has, and the other has the same property of quadratic termination without exact line searches that the symmetric rank-one method has. The new algorithms attempt to combine the best features of certain methods which are intended for either parallel computation or large scale optimization. It is concluded that some new algorithms are competitive with the standard BFGS method.
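
A sketch of the "extra updates" idea (our illustration, not the paper's exact rule): the inverse BFGS approximation is updated with the newest pair and then re-updated with a few recently stored pairs.

```python
import numpy as np

def bfgs_update(H, s, y):
    """Standard inverse-Hessian BFGS update."""
    rho = 1.0 / (y @ s)
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

def step(H, x, f, grad, pairs, extra=2, c1=1e-4):
    """One iteration: Armijo line search, then the usual update plus extras."""
    g = grad(x)
    d = -H @ g
    t, fx = 1.0, f(x)
    while f(x + t * d) > fx + c1 * t * (g @ d) and t > 1e-12:
        t *= 0.5
    x_new = x + t * d
    s, y = x_new - x, grad(x_new) - g
    if y @ s > 1e-12:                      # curvature condition keeps H positive definite
        pairs.append((s, y))
        H = bfgs_update(H, s, y)
        for s_old, y_old in pairs[-1 - extra:-1]:   # extra updates, recent pairs
            H = bfgs_update(H, s_old, y_old)
    return H, x_new
```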

Journal ArticleDOI
TL;DR: A data parallel procedure for randomly generating test problems for two-stage quadratic stochastic programming that allows the user to specify the size of the problem, the condition numbers of the Hessian matrices of the objective functions and the structure of the feasible regions in the first and the second stages.
Abstract: This paper proposes a data parallel procedure for randomly generating test problems for two-stage quadratic stochastic programming. Multiple quadratic programs in the second stage are randomly gene...

Journal ArticleDOI
TL;DR: In this article, a local convergence theory for the Block Inexact-Newton method is presented, and a globally convergent modification of the basic Block Inexact-Newton algorithm is introduced so that, under suitable assumptions, convergence can be ensured independently of the initial point considered.
Abstract: A nonlinear system of equations is said to be reducible if it can be divided into m blocks (m > 1) in such a way that the i-th block depends only on the first i blocks of unknowns. Different ways of handling the different blocks with the aim of solving the system have been proposed in the literature. When the dimension of the blocks is very large, it can be difficult to solve the linear Newtonian equations associated to them using direct solvers based on factorizations. In this case, the idea of using iterative linear solvers to deal with the blocks of the system separately is appealing. In this paper, a local convergence theory that justifies this procedure is presented. The theory also explains the behavior of a Block-Newton method under different types of perturbations. Moreover, a globally convergent modification of the basic Block Inexact-Newton algorithm is introduced so that, under suitable assumptions, convergence can be ensured independently of the initial point considered.
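
A sketch of one block sweep (our illustration; the few-step CG routine stands in for whatever iterative linear solver makes the Newton step inexact, and the interfaces are assumptions):

```python
import numpy as np

def cg_few_steps(A, b, steps=5):
    """Approximately solve A^T A z = A^T b with a handful of CG iterations."""
    M, rhs = A.T @ A, A.T @ b
    z, r = np.zeros_like(b), rhs.copy()
    p = r.copy()
    for _ in range(steps):
        if r @ r < 1e-30:
            break
        Mp = M @ p
        alpha = (r @ r) / (p @ Mp)
        z += alpha * p
        r_new = r - alpha * Mp
        beta = (r_new @ r_new) / (r @ r)
        p, r = r_new + beta * p, r_new
    return z

def block_inexact_newton(blocks, x_blocks, newton_iters=10):
    """blocks[i] = (F_i, J_ii): residual and diagonal Jacobian block, each a
    function of x_1..x_i; blocks are processed in order, exploiting reducibility."""
    for i, (F_i, J_ii) in enumerate(blocks):
        for _ in range(newton_iters):
            prefix = x_blocks[:i + 1]
            r = F_i(*prefix)
            if np.linalg.norm(r) < 1e-10:
                break
            x_blocks[i] = x_blocks[i] + cg_few_steps(J_ii(*prefix), -r)
    return x_blocks
```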

Journal ArticleDOI
TL;DR: A Newton-type algorithm model for solving smooth nonlinear optimization problems with general constraints and bound constraints on the variables and a way of forcing the global convergence is suggested.
Abstract: In this paper we introduce a Newton-type algorithm model for solving smooth nonlinear optimization problems with general constraints and bound constraints on the variables. Under very mild assumptions and without requiring the strict complementarity assumption, the algorithm produces a sequence of pairs {(x^k, λ^k)} converging quadratically to a pair (x*, λ*), where x* is the solution of the problem and λ* is the KKT multiplier associated with the general constraints. As regards the behaviour of the sequence {x^k} alone, it converges at least superlinearly. A distinguishing feature of the proposed algorithm is that it exploits the particular structure of the constraints of the optimization problem so as to limit the computational burden as much as possible. In fact, at each iteration, it requires only the solution of a linear system whose dimension is equal at most to the number of variables plus the number of the general constraints. Hence, the proposed algorithm model may be well suited to tackle large scale problems. Even...
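
Schematically, the linear system of dimension n + m referred to above is the Newton step on the KKT conditions (this sketch handles only equality-type general constraints; the paper's actual system also accounts for the bounds):

```latex
% Newton step on the KKT conditions of  min f(x)  s.t.  g(x) = 0,
% with L(x, \lambda) = f(x) + \lambda^T g(x):
\begin{pmatrix}
  \nabla^2_{xx} L(x^k, \lambda^k) & \nabla g(x^k)\\
  \nabla g(x^k)^{\mathsf T} & 0
\end{pmatrix}
\begin{pmatrix} \Delta x \\ \Delta\lambda \end{pmatrix}
= -
\begin{pmatrix} \nabla_x L(x^k, \lambda^k) \\ g(x^k) \end{pmatrix}.
```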

Journal ArticleDOI
TL;DR: A primal-dual interior point algorithm for solving general nonlinear programming problems is presented, which solves the perturbed optimality conditions by applying a quasi-Newton method, where the Hessian of the Lagrangian is replaced by a positive definite approximation.
Abstract: A primal-dual interior point algorithm for solving general nonlinear programming problems is presented. The algorithm solves the perturbed optimality conditions by applying a quasi-Newton method, where the Hessian of the Lagrangian is replaced by a positive definite approximation. An approximation of Fletcher's exact and differentiable merit function together with line-search procedures are incorporated into the algorithm. The line-search procedures are used to modify the length of the step so that the value of the merit function is always reduced. Different step-sizes are used for the primal and dual variables. The search directions are ensured to be descent for the merit function, which is thus used to guide the algorithm to an optimum solution of the constrained optimization problem. The monotonic decrease of the merit function at each iteration ensures the global convergence of the algorithm. Finally, preliminary numerical results demonstrate the efficient performance of the algorithm for a variety o...
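
The perturbed optimality conditions mentioned above take, in one standard form (sketched for min f(x) s.t. c(x) >= 0 with slacks s and duals z; the paper's precise formulation may differ):

```latex
\begin{aligned}
  \nabla f(x) - \nabla c(x)\, z &= 0,\\
  c(x) - s &= 0,\\
  S Z e &= \mu e, \qquad s > 0,\; z > 0,
\end{aligned}
```

where S = diag(s) and Z = diag(z); quasi-Newton steps on this system, with the barrier parameter mu driven to zero and damped by the merit-function line searches described above, steer the iterates to a KKT point.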

Journal ArticleDOI
TL;DR: In this paper, a well-defined trust region method for solving the KKT system of a variational inequality problem, based on its semismooth reformulation, is presented; the proposed method solves subproblems inexactly.
Abstract: We present a well-defined trust region method for solving the KKT system of a variational inequality problem based on its semismooth reformulation. The proposed method solves subproblems inexactly. We show that the proposed method converges.

Journal ArticleDOI
TL;DR: The minimax linear assignment problem with additional linear constraints is studied; an exact polynomial-time algorithm is developed, verified and validated in the case of a single added constraint, and a tree search technique is proposed for the case of multiple constraints.
Abstract: The minimax linear assignment problem with additional linear constraints is studied. An exact polynomial time algorithm is developed, verified and validated in case of a single added constraint. In case of multiple constraints, the problem is more complex and becomes NP-hard. A tree search technique is proposed. Some computational studies are reported.
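
A sketch of the thresholding idea behind the minimax (bottleneck) objective (our illustration; the paper's additional linear side constraint is omitted here, so this is only the core subroutine):

```python
# Binary-search the distinct cost values; a threshold is feasible when a
# perfect matching exists using only edges with cost at or below it.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_bipartite_matching

def bottleneck_assignment_value(C):
    values = np.unique(C)                   # sorted distinct costs
    lo, hi = 0, len(values) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        mask = csr_matrix((C <= values[mid]).astype(int))
        match = maximum_bipartite_matching(mask, perm_type='column')
        if (match != -1).all():             # perfect matching exists
            hi = mid
        else:
            lo = mid + 1
    return values[lo]

C = np.array([[4, 1, 3], [2, 0, 5], [3, 2, 2]])
print(bottleneck_assignment_value(C))       # minimal achievable maximum cost: 2
```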

Journal ArticleDOI
TL;DR: An extension that incorporates uncertainty of information in production and objective coefficients is developed that unites techniques of stochastic dynamic programming with techniques of linear programming, and it can be implemented by building very directly on the DLP decision support system.
Abstract: A brief overview is given of a new decision support system for solving deterministic resource decision problems. It is based on a synthesis of techniques of dynamic programming and linear programmi...

Journal ArticleDOI
TL;DR: In this paper, a truncated form of the difference ABS-type algorithm is proposed and its local convergence property is investigated, where the authors show that the truncated ABS algorithm has local convergence properties similar to those of the original ABS algorithm.
Abstract: A truncated form of the difference ABS-type algorithm is proposed and its local convergence property is investigated.

Journal ArticleDOI
TL;DR: This article considers two software tools implementing the forward and the reverse mode of CD, namely ADOL-C and FADBAD, and the model of a cooling system within steel manufacturing.
Abstract: Mathematical functions are often evaluated by computer programs. Using the technique of computational differentiation (CD), one can in addition determine exact gradients of these functions. This article considers two software tools implementing the forward and the reverse mode of CD, namely ADOL-C and FADBAD. Then the model of a cooling system within steel manufacturing is described. The gradient of the corresponding function is calculated by CD, applying both software tools separately, and, alternatively, by divided differences. After a short account of our experience installing and running the software tools, the results obtained are presented. A comparison of the run-time needed by CD to the run-time needed by the divided difference method is given.
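
A didactic toy showing the kind of exact derivative such tools compute, next to a divided difference (this is a minimal forward-mode example of our own, not the API of ADOL-C or FADBAD):

```python
import math

class Dual:
    """Value together with its derivative; operators propagate both."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def sin(x):
    # chain rule for the Dual case; plain math.sin otherwise
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot) if isinstance(x, Dual) \
        else math.sin(x)

def f(x):
    return 3 * x * x + sin(x)

x0 = 1.3
exact = f(Dual(x0, 1.0)).dot                 # forward-mode CD: exact derivative
h = 1e-6
divided = (f(x0 + h) - f(x0 - h)) / (2 * h)  # central divided difference
print(exact, divided)                        # the two values agree closely
```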

Journal ArticleDOI
TL;DR: By means of optimal control theory and Lie algebra, the basic formulation for solving the optimization problems of smooth functions on the reachable set of a nonlinear control system is derived, and the solution is approximated by Lie series.
Abstract: We study the optimization problems of smooth functions on the reachable set of a nonlinear control system. By means of the optimal control theory and Lie algebra, we not only derive the basic formulation solving the problem, but also approximate the solution by Lie series and give a computational way to deal with the optimization.
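
For reference, the Lie series in question expresses the flow of a vector field (a standard formula, sketched here in our notation):

```latex
% Flow of \dot{x} = f(x) through x_0, with Lie derivative (L_f h)(x) = h'(x) f(x):
x(t) \;=\; \sum_{k \ge 0} \frac{t^k}{k!} \,\big(L_f^k\,\mathrm{Id}\big)(x_0),
```

so truncating the series gives a computable approximation of points on the reachable set, over which the smooth objective can then be optimized.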