# Showing papers in "Journal of Global Optimization in 2001"

••


TL;DR: This paper presents a taxonomy of existing approaches for using response surfaces for global optimization, illustrating each method with a simple numerical example that brings out its advantages and disadvantages.

Abstract: This paper presents a taxonomy of existing approaches for using response surfaces for global optimization. Each method is illustrated with a simple numerical example that brings out its advantages and disadvantages. The central theme is that methods that seem quite reasonable often have non-obvious failure modes. Understanding these failure modes is essential for the development of practical algorithms that fulfill the intuitive promise of the response surface approach.

2,122 citations
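None of the surveyed response-surface methods reduce to a few lines, but the core loop they share can. The sketch below is illustrative only, not any method from the taxonomy: it fits a one-dimensional quadratic "surface" to the three best samples and evaluates the expensive function at the surface's minimizer. All names are hypothetical.

```python
# Minimal sketch of the response-surface idea: fit a cheap model to a few
# expensive evaluations, then sample where the model predicts the optimum.
# Here the "surface" is the parabola interpolating the three best points.

def surrogate_search(f, starts, iters=5):
    pts = sorted(((x, f(x)) for x in starts), key=lambda p: p[1])
    for _ in range(iters):
        (a, fa), (b, fb), (c, fc) = pts
        den = (b - a) * (fb - fc) - (b - c) * (fb - fa)
        if den == 0:                      # degenerate surrogate: stop
            break
        num = (b - a) ** 2 * (fb - fc) - (b - c) ** 2 * (fb - fa)
        x = b - 0.5 * num / den           # minimizer of the fitted parabola
        pts = sorted(pts[:2] + [(x, f(x))], key=lambda p: p[1])  # keep best 3
    return pts[0][0]

x_best = surrogate_search(lambda x: (x - 2.0) ** 2 + 1.0, [0.0, 1.0, 3.0])
```

For a quadratic objective the surrogate is exact, so the very first model minimizer already lands on the true minimizer; the paper's point is precisely that on harder functions such seemingly reasonable loops have non-obvious failure modes.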

••

TL;DR: It is shown that, for most types of radial basis functions that are considered in this paper, convergence can be achieved without further assumptions on the objective function.

Abstract: We introduce a method that aims to find the global minimum of a continuous nonconvex function on a compact subset of $\mathbb{R}^d$. It is assumed that function evaluations are expensive and that no additional information is available. Radial basis function interpolation is used to define a utility function. The maximizer of this function is the next point where the objective function is evaluated. We show that, for most types of radial basis functions that are considered in this paper, convergence can be achieved without further assumptions on the objective function. Besides, it turns out that our method is closely related to a statistical global optimization method, the P-algorithm. A general framework for both methods is presented. Finally, a few numerical examples show that on the set of Dixon-Szegő test functions our method yields favourable results in comparison to other global optimization methods.

793 citations
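A minimal sketch of the interpolation step, assuming the linear radial basis φ(r) = r (the paper treats a broader family, and its utility function is not reproduced here). The distance matrix of distinct one-dimensional nodes is nonsingular, so the interpolation system always has a solution.

```python
# Radial basis function interpolation with phi(r) = r: solve A w = f where
# A[i][j] = |x_i - x_j|, then s(x) = sum_j w_j * |x - x_j| reproduces the data.

def solve(A, b):
    """Gaussian elimination with partial pivoting (tiny dense solver)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            t = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= t * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def rbf_fit(nodes, values):
    A = [[abs(xi - xj) for xj in nodes] for xi in nodes]
    w = solve(A, values)
    return lambda x: sum(wj * abs(x - xj) for wj, xj in zip(w, nodes))

nodes = [0.0, 1.0, 2.0, 3.0]
vals = [1.0, 0.0, 0.5, 2.0]
s = rbf_fit(nodes, vals)     # interpolant: s(nodes[i]) == vals[i]
```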

••

TL;DR: A form of the DIRECT algorithm that is strongly biased toward local search is proposed that should do well for small problems with a single global minimizer and only a few local minimizers.

Abstract: In this paper we propose a form of the DIRECT algorithm that is strongly biased toward local search. This form should do well for small problems with a single global minimizer and only a few local minimizers. We motivate our formulation with some results on how the original formulation of the DIRECT algorithm clusters its search near a global minimizer. We report on the performance of our algorithm on a suite of test problems and observe that the algorithm performs particularly well when termination is based on a budget of function evaluations.

301 citations
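A much-simplified one-dimensional caricature of the local bias (not Jones' actual potentially-optimal-rectangle rules): always trisect the interval with the lowest midpoint value, so the search clusters near the incumbent, exactly the behaviour the paper exploits for problems with few local minimizers.

```python
# Greedy trisection: keep a pool of intervals keyed by midpoint value and
# always refine the most promising one. This is only a sketch of the
# "strong local bias" idea, not the DIRECT algorithm itself.

def trisect_search(f, lo, hi, iters=25):
    mid = 0.5 * (lo + hi)
    boxes = [(f(mid), lo, hi)]                 # (midpoint value, bounds)
    for _ in range(iters):
        boxes.sort()                           # lowest midpoint value first
        _, a, b = boxes.pop(0)
        w = (b - a) / 3.0
        for k in range(3):                     # trisect, evaluate midpoints
            l, r = a + k * w, a + (k + 1) * w
            boxes.append((f(0.5 * (l + r)), l, r))
    fx, a, b = min(boxes)
    return 0.5 * (a + b), fx

x, fx = trisect_search(lambda t: (t - 0.7) ** 2, 0.0, 1.0)
```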

••

TL;DR: An algorithm for computing the global minimum of the problem by means of an interior-point method for convex programs is proposed.

Abstract: We consider the problem of minimizing the sum of a convex function and of p≥1 fractions subject to convex constraints. The numerators of the fractions are positive convex functions, and the denominators are positive concave functions. Thus, each fraction is quasi-convex. We give a brief discussion of the problem and prove that in spite of its special structure, the problem is NP-complete even when only p=1 fraction is involved. We then show how the problem can be reduced to the minimization of a function of p variables where the function values are given by the solution of certain convex subproblems. Based on this reduction, we propose an algorithm for computing the global minimum of the problem by means of an interior-point method for convex programs.

185 citations

••

TL;DR: A novel technique that addresses the solution of the general nonlinear bilevel programming problem to global optimality based on the relaxation of the feasible region by convex underestimation utilizing the basic principles of the deterministic global optimization algorithm, αBB.

Abstract: A novel technique that addresses the solution of the general nonlinear bilevel programming problem to global optimality is presented. Global optimality is guaranteed for problems that involve twice differentiable nonlinear functions as long as the linear independence constraint qualification condition holds for the inner problem constraints. The approach is based on the relaxation of the feasible region by convex underestimation, embedded in a branch and bound framework utilizing the basic principles of the deterministic global optimization algorithm, αBB [2, 4, 5, 11]. Epsilon global optimality in a finite number of iterations is theoretically guaranteed. Computational studies on several literature problems are reported.

146 citations

••

TL;DR: A scatter search implementation designed to find high quality solutions for the NP-hard linear ordering problem, which has a significant number of applications in practice and incorporates innovative mechanisms to combine solutions and to create a balance between quality and diversification in the reference set.

Abstract: Scatter search is a population-based method that has recently been shown to yield promising outcomes for solving combinatorial and nonlinear global optimization problems. Based on formulations originally proposed in the 1960s for combining decision rules and problem constraints, such as in generating surrogate constraints, scatter search uses strategies for combining solution vectors that have proved effective in a variety of problem settings. In this paper, we present a scatter search implementation designed to find high quality solutions for the NP-hard linear ordering problem (LOP), which has a significant number of applications in practice. The LOP, for example, is equivalent to the so-called triangulation problem for input-output tables in economics. Our implementation incorporates innovative mechanisms to combine solutions and to create a balance between quality and diversification in the reference set. We also use a tracking process that generates solution statistics disclosing the nature of combinations and the ranks of antecedent solutions that produced the best final solutions. Extensive computational experiments with more than 300 instances establish the effectiveness of our procedure in relation to approaches previously identified to be best.

146 citations

••

TL;DR: This paper is concerned with filled function techniques for unconstrained global minimization of a continuous function of several variables that have either one or two adjustable parameters.

Abstract: This paper is concerned with filled function techniques for unconstrained global minimization of a continuous function of several variables. More general forms of filled functions are presented for smooth and non-smooth optimization problems. These functions have either one or two adjustable parameters. Conditions on functions and on the values of parameters are given so that the constructed functions have the desired properties of filled functions.

120 citations

••

TL;DR: This work develops the convex envelope and concave envelope of z=x/y over a hypercube and proposes a new relaxation technique for fractional programs which includes the derived envelopes.

Abstract: In a recent work, we introduced the concept of convex extensions for lower semi-continuous functions and studied their properties. In this work, we present new techniques for constructing convex and concave envelopes of nonlinear functions using the theory of convex extensions. In particular, we develop the convex envelope and concave envelope of z=x/y over a hypercube. We show that the convex envelope is strictly tighter than previously known convex underestimators of x/y. We then propose a new relaxation technique for fractional programs which includes the derived envelopes. The resulting relaxation is shown to be a semidefinite program. Finally, we derive the convex envelope for a class of functions of the type f(x,y) over a hypercube under the assumption that f is concave in x and convex in y.

115 citations

••

TL;DR: It is proved that one of the four bounding schemes provides the convex envelope and that two schemes provide the concave envelope for the product of p variables over $\mathbb{R}_+^p$.

Abstract: We analyze four bounding schemes for multilinear functions and theoretically compare their tightness. We prove that one of the four schemes provides the convex envelope and that two schemes provide the concave envelope for the product of p variables over $\mathbb{R}_+^p$.

102 citations
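For reference, the classical p = 2 case: the convex envelope of x·y over a box is the pointwise maximum of the two McCormick planes (due to Al-Khayyal and Falk); the paper's schemes for general p are not reproduced here.

```python
# McCormick convex envelope of x*y over the box [xl,xu] x [yl,yu]:
# the max of two supporting planes, exact at the box corners and a valid
# underestimator everywhere inside the box.

def mccormick_under(x, y, xl, xu, yl, yu):
    return max(xl * y + yl * x - xl * yl,
               xu * y + yu * x - xu * yu)

val = mccormick_under(0.5, 0.5, 0.0, 1.0, 0.0, 1.0)   # 0.0, below the true 0.25
```

The underestimation property follows from (x - xl)(y - yl) ≥ 0 and (xu - x)(yu - y) ≥ 0 on the box.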

••

TL;DR: This paper proposes a new filled function that needs only one parameter and does not include exponential terms, and has better computability than the traditional ones.

Abstract: The Filled Function Method is an approach to finding global minima of multidimensional nonconvex functions. The traditional filled functions have features that may affect the computability when applied to numerical optimization. This paper proposes a new filled function. This function needs only one parameter and does not include exponential terms. Also, the lower bound of weight factor a is usually smaller than that of one previous formulation. Therefore, the proposed new function has better computability than the traditional ones.

98 citations

••

TL;DR: Five different parallel Simulated Annealing (SA) algorithms are developed and compared on an extensive test bed used previously for the assessment of various solution approaches in global optimization.

Abstract: Global optimization involves the difficult task of the identification of global extremities of mathematical functions. Such problems are often encountered in practice in various fields, e.g., molecular biology, physics, industrial chemistry. In this work, we develop five different parallel Simulated Annealing (SA) algorithms and compare them on an extensive test bed used previously for the assessment of various solution approaches in global optimization. The parallel SA algorithms consist of various categories: the asynchronous approach, where no information is exchanged among parallel runs, and the synchronous approaches, where solutions are exchanged using genetic operators, or where solutions are transmitted only occasionally, or where highly coupled synchronization is achieved at every iteration. One of these approaches, which occasionally applies partial information exchanges (controlled in terms of solution quality), provides particularly notable results for functions with vast search spaces of up to 400 dimensions. Previous attempts with other approaches, such as sequential SA, adaptive partitioning algorithms and clustering algorithms, to identify the global optima of these functions have failed without exception.
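A bare-bones sequential SA loop, as a baseline for what the parallel variants coordinate (the cooling schedule, step size, and double-well test function below are arbitrary choices, not taken from the paper):

```python
# Sequential simulated annealing: accept every downhill move and uphill
# moves with probability exp(-delta/T) under a geometric cooling schedule.
import math, random

def anneal(f, x0, step=0.5, t0=1.0, t_end=1e-3, iters=4000, seed=1):
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for k in range(iters):
        t = t0 * (t_end / t0) ** (k / iters)        # geometric cooling
        y = x + rng.uniform(-step, step)
        fy = f(y)
        if fy < fx or rng.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
    return best, fbest

# double-well test function with global minima at x = -1 and x = +1
x, fx = anneal(lambda v: (v * v - 1.0) ** 2, x0=3.0)
```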

••

TL;DR: Three global optimization techniques on the HSCT problem and two test problems containing thousands of local optima and noise are compared: multistart local optimizations using either sequential quadratic programming (SQP) as implemented in the design optimization tools (DOT) program or Snyman's dynamic search method, and a modified form of Jones' DIRECT global optimization algorithm.

Abstract: The conceptual design of aircraft often entails a large number of nonlinear constraints that result in a nonconvex feasible design space and multiple local optima. The design of the high-speed civil transport (HSCT) is used as an example of a highly complex conceptual design with 26 design variables and 68 constraints. This paper compares three global optimization techniques on the HSCT problem and two test problems containing thousands of local optima and noise: multistart local optimizations using either sequential quadratic programming (SQP) as implemented in the design optimization tools (DOT) program or Snyman's dynamic search method, and a modified form of Jones' DIRECT global optimization algorithm. SQP is a local optimizer, while Snyman's algorithm is capable of moving through shallow local minima. The modified DIRECT algorithm is a global search method based on Lipschitzian optimization that locates small promising regions of design space and then uses a local optimizer to converge to the optimum. DOT and the dynamic search algorithms proved to be superior for finding a single optimum masked by noise of trigonometric form. The modified DIRECT algorithm was found to be better for locating the global optimum of functions with many widely separated true local optima.
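The multistart strategy itself is easy to sketch; here SQP/DOT are replaced by plain gradient descent with a numerical derivative, and the two-well test function is an arbitrary stand-in for the HSCT problem:

```python
# Multistart local search in miniature: run a local descent from several
# starting points and keep the best local optimum found.

def descend(f, x, step=0.01, iters=1000, h=1e-6):
    for _ in range(iters):
        g = (f(x + h) - f(x - h)) / (2 * h)     # central-difference gradient
        x -= step * g
    return x

def multistart(f, starts):
    return min((descend(f, s) for s in starts), key=f)

f = lambda x: (x * x - 4.0) ** 2 + x            # global minimum near x = -2
x = multistart(f, [-3.0, -1.0, 0.0, 1.0, 3.0])
```

Starts in the right-hand basin converge to the inferior local minimum near x = +2; multistart recovers the global one on the left.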

••

TL;DR: The main result proves that the condition of proper quasimonotonicity is sharp in order to solve the dual equilibrium problem on every convex set.

Abstract: In this paper, we consider some well-known equilibrium problems and their duals in a topological Hausdorff vector space X for a bifunction F defined on K×K, where K is a convex subset of X. Some necessary conditions are investigated, proving different results depending on the behaviour of F on the diagonal set. The concept of proper quasimonotonicity for bifunctions is defined, and the relationship with generalized monotonicity is investigated. The main result proves that the condition of proper quasimonotonicity is sharp in order to solve the dual equilibrium problem on every convex set.

••

TL;DR: The proposed approach significantly improves upon a previous method of Sherali et al. (1998) by way of adopting tighter polyhedral relaxations, and more effective partitioning strategies in concert with a maximal spanning tree-based branching variable selection procedure.

Abstract: In this paper, we address the development of a global optimization procedure for the problem of designing a water distribution network, including the case of expanding an already existing system, that satisfies specified flow demands at stated pressure head requirements. The proposed approach significantly improves upon a previous method of Sherali et al. (1998) by adopting tighter polyhedral relaxations and more effective partitioning strategies, in concert with a maximal spanning tree-based branching variable selection procedure. Computational experience on three standard test problems from the literature is provided to evaluate the proposed procedure. For all these problems, proven global optimal solutions within a tolerance of 10^{-4}% and/or within $1 of optimality are obtained. In particular, the two larger instances of the Hanoi and the New York test networks are solved to global optimality for the very first time in the literature. A new real network design test problem based on the Town of Blacksburg Water Distribution System is also offered for inclusion in the available library of test cases, and related computational results are presented.

••

TL;DR: The method consists in exploiting the global optimality conditions, expressed in terms of ε-subdifferentials of convex functions and ε-normal directions to convex sets, to find explicit conditions for optimality.

Abstract: For the problem of maximizing a convex quadratic function under convex quadratic constraints, we derive conditions characterizing a globally optimal solution. The method consists in exploiting the global optimality conditions, expressed in terms of ε-subdifferentials of convex functions and ε-normal directions to convex sets. By specializing the problem of maximizing a convex function over a convex set, we find explicit conditions for optimality.

••

TL;DR: This work proposes two polynomial-time algorithms, based on two continuous formulations of the maximum independent set problem on a graph G=(V,E), for finding maximal independent sets with cardinality greater than or equal to F(x0) and H(x0), respectively.

Abstract: Two continuous formulations of the maximum independent set problem on a graph G=(V,E) are considered. Both cases involve the maximization of an n-variable polynomial over the n-dimensional hypercube, where n is the number of nodes in G. Two (polynomial) objective functions F(x) and H(x) are considered. Given any point x0 in the hypercube, we propose two polynomial-time algorithms, based on these formulations, for finding maximal independent sets with cardinality greater than or equal to F(x0) and H(x0), respectively. A relation between the two approaches is studied and a more general statement for dominating sets is proved. Results of preliminary computational experiments for some of the DIMACS clique benchmark graphs are presented.
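One way to picture the rounding step is a greedy pass ordered by the coordinates of x0. The simple greedy below is not the paper's algorithm (it carries no F(x0) or H(x0) cardinality guarantee); it only conveys the flavour of turning a point in the hypercube into a maximal independent set.

```python
# Greedily admit vertices in order of decreasing x0-coordinate; the result
# is maximal because every excluded vertex has a chosen neighbour.

def maximal_independent_set(n, edges, x0):
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    chosen = set()
    for v in sorted(range(n), key=lambda v: -x0[v]):
        if adj[v].isdisjoint(chosen):
            chosen.add(v)
    return chosen

# path graph 0-1-2-3 with fractional weights favouring the endpoints
s = maximal_independent_set(4, [(0, 1), (1, 2), (2, 3)], [0.9, 0.1, 0.1, 0.8])
```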

••

TL;DR: A refined gradient-based neural network, whose trajectory with any arbitrary initial point will converge to an equilibrium point, which satisfies the second order necessary optimality conditions for optimization problems is proposed.

Abstract: The paper introduces a new approach to analyze the stability of neural network models without using any Lyapunov function. With the new approach, we investigate the stability properties of the general gradient-based neural network model for optimization problems. Our discussion includes both isolated equilibrium points and connected equilibrium sets which could be unbounded. For a general optimization problem, if the objective function is bounded below and its gradient is Lipschitz continuous, we prove that (a) any trajectory of the gradient-based neural network converges to an equilibrium point, and (b) the Lyapunov stability is equivalent to the asymptotical stability in the gradient-based neural networks. For a convex optimization problem, under the same assumptions, we show that any trajectory of gradient-based neural networks will converge to an asymptotically stable equilibrium point of the neural networks. For a general nonlinear objective function, we propose a refined gradient-based neural network, whose trajectory with any arbitrary initial point will converge to an equilibrium point, which satisfies the second order necessary optimality conditions for optimization problems. Promising simulation results of a refined gradient-based neural network on some problems are also reported.
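The model analyzed is essentially the gradient flow dx/dt = -∇f(x); a forward-Euler discretization makes the convergence statement concrete (the quadratic f below is an arbitrary smooth example, not from the paper):

```python
# Forward-Euler discretization of the gradient-based network dynamics
# dx/dt = -grad f(x): trajectories converge to an equilibrium point.

def gradient_flow(grad, x, h=0.1, steps=300):
    for _ in range(steps):
        x = x - h * grad(x)        # Euler step along the negative gradient
    return x

# f(x) = (x - 3)^2, grad f(x) = 2(x - 3); the unique equilibrium is x = 3
x_eq = gradient_flow(lambda v: 2.0 * (v - 3.0), x=0.0)
```

With step size h below 2/L (L the gradient's Lipschitz constant, here 2) the discrete trajectory contracts toward the equilibrium at a geometric rate.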

••

TL;DR: Extensive numerical tests confirm that substantial improvements can be achieved both on simple and sophisticated algorithms by the new method (utilizing the known minimum value), and that these improvements are larger when hard problems are to be solved.

Abstract: The theoretical convergence properties of interval global optimization algorithms that select the next subinterval to be subdivided according to a new class of interval selection criteria are investigated. The latter are based on variants of the RejectIndex $pf^*(X) = \frac{f^* - \underline{F}(X)}{\overline{F}(X) - \underline{F}(X)}$, a recently thoroughly studied indicator that can quite reliably show which subinterval is close to a global minimizer point. Extensive numerical tests on 40 problems confirm that substantial improvements can be achieved both on simple and sophisticated algorithms by the new method (utilizing the known minimum value), and that these improvements are larger when hard problems are to be solved.
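A toy illustration of selecting by pf*(X), assuming exact inclusion bounds for f(x) = (x - 0.5)² and a known minimum value f* = 0, as in the paper's known-minimum setting:

```python
# RejectIndex selection: among subintervals whose lower bound does not
# exceed f*, pick the one with the smallest pf*(X) = (f* - lb)/(ub - lb).

def bounds(lo, hi):
    """Exact inclusion interval of (x - 0.5)^2 over [lo, hi]."""
    a, b = lo - 0.5, hi - 0.5
    if a <= 0.0 <= b:
        return 0.0, max(a * a, b * b)
    return min(a * a, b * b), max(a * a, b * b)

def select(intervals, f_star=0.0):
    best, best_pf = None, float("inf")
    for lo, hi in intervals:
        lb, ub = bounds(lo, hi)
        if lb > f_star:                    # cannot contain the global minimum
            continue
        pf = (f_star - lb) / (ub - lb)     # RejectIndex pf*(X)
        if pf < best_pf:
            best, best_pf = (lo, hi), pf
    return best

box = select([(0.0, 0.4), (0.4, 0.6), (0.6, 1.0)])   # picks the middle box
```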

••

TL;DR: In this paper, Lipschitz univariate constrained global optimization problems where both the objective function and constraints can be multiextremal are considered and a Branch-and-Bound method that does not use derivatives for solving the reduced problem is proposed.

Abstract: In this paper, Lipschitz univariate constrained global optimization problems where both the objective function and constraints can be multiextremal are considered. The constrained problem is reduced to a discontinuous unconstrained problem by the index scheme without introducing additional parameters or variables. A Branch-and-Bound method that does not use derivatives for solving the reduced problem is proposed. The method either determines the infeasibility of the original problem or finds lower and upper bounds for the global solution. Not all the constraints are evaluated during every iteration of the algorithm, providing a significant acceleration of the search. Convergence conditions of the new method are established. Extensive numerical experiments are presented.
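The Lipschitz lower-bounding engine such methods build on can be sketched without the paper's index scheme for constraints: on an interval of width w with midpoint m, f(m) - Lw/2 underestimates f, so best-first bisection can prune or refine (the unconstrained test function and the constant L below are arbitrary choices):

```python
# Best-first branch-and-bound with Lipschitz lower bounds, no derivatives.
import heapq, math

def lipschitz_bb(f, lo, hi, L, iters=60):
    m = 0.5 * (lo + hi)
    best_x, best_f = m, f(m)
    heap = [(best_f - L * (hi - lo) / 2, lo, hi)]
    while heap and iters > 0:
        iters -= 1
        lb, a, b = heapq.heappop(heap)
        if lb >= best_f:                 # cannot improve the incumbent: prune
            continue
        c = 0.5 * (a + b)
        for l, r in ((a, c), (c, b)):    # bisect, evaluate child midpoints
            m = 0.5 * (l + r)
            fm = f(m)
            if fm < best_f:
                best_x, best_f = m, fm
            heapq.heappush(heap, (fm - L * (r - l) / 2, l, r))
    return best_x, best_f

x, fx = lipschitz_bb(math.sin, 0.0, 2.0 * math.pi, L=1.0)
```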

••

TL;DR: It is shown that this class of problems can be transformed into equivalent concave minimization problems using the proposed convexification schemes and an outer approximation method can be used to find the global solution of the transformed problem.

Abstract: A convexification method is proposed for solving a class of global optimization problems with certain monotone properties. It is shown that this class of problems can be transformed into equivalent concave minimization problems using the proposed convexification schemes. An outer approximation method can then be used to find the global solution of the transformed problem. Applications to mixed-integer nonlinear programming problems arising in reliability optimization of complex systems are discussed and satisfactory numerical results are presented.

••

TL;DR: The aim of this paper is to analyze the behavior of the UEGO algorithm as a function of different parameter settings and types of functions and to examine its reliability with the help of Csendes' method.

Abstract: UEGO is a general clustering technique capable of accelerating and/or parallelizing existing search methods. UEGO is an abstraction of GAS, a genetic algorithm (GA) with subpopulation support, so the niching (i.e. clustering) technique of GAS can be applied along with any kind of optimizer, not only genetic algorithms. The aim of this paper is to analyze the behavior of the algorithm as a function of different parameter settings and types of functions, and to examine its reliability with the help of Csendes' method. Comparisons to other methods are also presented.

••

TL;DR: In this article, the authors analyze cooperative games with side payments, where each player faces a possibly non-convex optimization problem, interpreted as production planning, constrained by his resources or technology.

Abstract: The paper analyzes cooperative games with side payments. Each player faces a possibly non-convex optimization problem, interpreted as production planning, constrained by his resources or technology. Coalitions can aggregate (or pool) members' contributions. We discuss instances where such aggregation eliminates or reduces the lack of convexity. Core solutions are computed or approximated via dual programs associated to the grand coalition.

••

TL;DR: A framework to study the behaviour of algorithms in general is presented and embedded into the context of the authors' view on questions in Global Optimization by using as a reference a theoretical ideal algorithm called N-points Pure Adaptive Search (NPAS).

Abstract: Controlled Random Search (CRS) is a simple population-based algorithm which, despite its attractiveness for practical use, has never been very popular among researchers on Global Optimization due to the difficulties in analysing the algorithm. In this paper, a framework to study the behaviour of algorithms in general is presented and embedded into the context of our view on questions in Global Optimization. By using as a reference a theoretical ideal algorithm called N-points Pure Adaptive Search (NPAS), some new analytical results provide bounds on the speed of convergence and the success rate of CRS in the limit, once it has settled down into simple behaviour. To relate the performance of the algorithm to characteristics of the functions to be optimized, simple constructed test functions, called extreme cases, are used.
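For concreteness, Price's original CRS iteration in two dimensions (population size, iteration count and test function below are arbitrary choices, not taken from the analysis):

```python
# Controlled Random Search: reflect a random point through the centroid of
# n other random population members; the trial replaces the current worst
# point whenever it is feasible and better.
import random

def crs(f, bounds, pop_size=20, iters=3000, seed=7):
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(iters):
        worst = max(range(pop_size), key=lambda i: f(pop[i]))
        picks = rng.sample(range(pop_size), dim + 1)
        g = [sum(pop[i][d] for i in picks[:dim]) / dim for d in range(dim)]
        trial = [2 * g[d] - pop[picks[dim]][d] for d in range(dim)]  # reflect
        inside = all(lo <= t <= hi for t, (lo, hi) in zip(trial, bounds))
        if inside and f(trial) < f(pop[worst]):
            pop[worst] = trial                   # replace the current worst
    return min(pop, key=f)

best = crs(lambda p: p[0] ** 2 + p[1] ** 2, [(-5.0, 5.0), (-5.0, 5.0)])
```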

••

TL;DR: This work introduces several dual problems related to each of these problems and shows how solutions to the dual problems can serve to locate solutions of the primal problem.

Abstract: Calling anticonvex a program which is either a maximization of a convex function on a convex set or a minimization of a convex function on the set of points outside a convex subset, we introduce several dual problems related to each of these problems. We give conditions ensuring there is no duality gap. We show how solutions to the dual problems can serve to locate solutions of the primal problem.

••

TL;DR: A new global branch-and-prune algorithm dedicated to the solving of nonlinear systems that combines a multidimensional interval Newton method with HC4, a state of the art constraint satisfaction algorithm recently proposed by the authors.

Abstract: This paper describes a new global branch-and-prune algorithm dedicated to the solving of nonlinear systems. The pruning technique combines a multidimensional interval Newton method with HC4, a state of the art constraint satisfaction algorithm recently proposed by the authors. From an algorithmic point of view, the main contributions of this paper are the design of a fine-grained interaction between both algorithms which avoids some unnecessary computation and the description of HC4 in terms of a chain rule for constraint projections. Our algorithm is experimentally compared, on a particular circuit design problem proposed by Ebers and Moll in 1954, with two global methods proposed in the last ten years by Ratschek and Rokne and by Puget and Van Hentenryck. This comparison shows an improvement factor of five with respect to the fastest of these previous implementations on the same machine.

••

TL;DR: A mixed-integer non-linear model is proposed to optimize jointly the assignment of capacities and flows (the CFA problem) in a communication network and numerical tests show the ability of the decomposition approach to obtain global optimal solutions of the C FA problem.

Abstract: A mixed-integer non-linear model is proposed to optimize jointly the assignment of capacities and flows (the CFA problem) in a communication network. Discrete capacities are considered and the cost function combines the installation cost with a measure of the Quality of Service (QoS) of the resulting network for a given traffic. Generalized Benders decomposition induces convex subproblems which are multicommodity flow problems on different topologies with fixed capacities. These are solved by an efficient proximal decomposition method. Numerical tests on small to medium-size networks show the ability of the decomposition approach to obtain global optimal solutions of the CFA problem.

••

TL;DR: It is shown that the concave cost functions on the arcs can be approximated by linear functions, and the considered problem can be solved by a series of linear programs.

Abstract: We consider minimum concave cost flow problems in acyclic, uncapacitated networks with a single source. For these problems a dynamic programming scheme is developed. It is shown that the concave cost functions on the arcs can be approximated by linear functions. Thus the considered problem can be solved by a series of linear programs. This approximation method, whose convergence is shown, works particularly well, if the nodes of the network have small degrees. Computational results on several classes of networks are reported.

••

TL;DR: This paper investigates a new local search method based on interval analysis information and on a new selection criterion to direct the search; the method can be incorporated into a standard interval GO algorithm, not only to find a good upper bound of the solution, but also to simultaneously carry out part of the work of the interval B&B algorithm.

Abstract: Usually, interval global optimization algorithms use local search methods to obtain a good upper (lower) bound of the solution. These local methods are based on point evaluations. This paper investigates a new local search method based on interval analysis information and on a new selection criterion to direct the search. When this new method is used alone, the guarantee to obtain a global solution is lost. To maintain this guarantee, the new local search method can be incorporated into a standard interval GO algorithm, not only to find a good upper bound of the solution, but also to simultaneously carry out part of the work of the interval B&B algorithm. Moreover, the new method permits improvement of the guaranteed upper bound of the solution with the memory requirements established by the user. Thus, the user can avoid the possible memory problems arising in interval GO algorithms, mainly when derivative information is not used. The chance of reaching the global solution with this algorithm may depend on the established memory limitations. The algorithm has been evaluated numerically using a wide set of test functions which includes easy and hard problems. The numerical results show that it is possible to obtain accurate solutions for all the easy functions and also for the investigated hard problems.

••

TL;DR: This work considers the piecewise-convex case of F:Rn→ R, which generalizes the well-known convex maximization problem and proposes a preliminary algorithm to check it.

Abstract: A function F:Rn→R is called a piecewise convex function if it can be decomposed into F(x) = min{fj(x) : j∈M}, where fj:Rn→R is convex for all j∈M={1,2,...,m}. We consider max F(x) subject to x∈D. It generalizes the well-known convex maximization problem. We briefly review global optimality conditions for convex maximization problems and carry one of them over to the piecewise-convex case. Our conditions are all written in primal space, so that we are able to propose a preliminary algorithm to check them.

••

TL;DR: A characterisation of a family of graphs whose stability number is determined by convex quadratic programming is introduced; an algorithmic strategy based on it is extended to the determination of a maximum matching of an arbitrary graph, and some related results are presented.

Abstract: The problem of determining a maximum matching, or whether there exists a perfect matching, is very common in a large variety of applications and has been extensively studied in graph theory. In this paper we introduce a characterisation of a family of graphs whose stability number is determined by convex quadratic programming. The main results connected with the recognition of this family of graphs are also introduced. There follows a necessary and sufficient condition characterising a graph with a perfect matching, and an algorithmic strategy, based on the determination by convex quadratic programming of the stability number of line graphs, applied to the determination of a perfect matching. A numerical example for the recognition of graphs with a perfect matching is described. Finally, the above algorithmic strategy is extended to the determination of a maximum matching of an arbitrary graph, and some related results are presented.