
Showing papers in "Journal of Global Optimization in 1991"


Journal ArticleDOI
TL;DR: It is shown that the problem of minimizing a concave quadratic function with one concave direction is NP-hard; this result can be interpreted as an attempt to understand exactly what makes nonconvex quadratic programming problems hard.
Abstract: We show that the problem of minimizing a concave quadratic function with one concave direction is NP-hard. This result can be interpreted as an attempt to understand exactly what makes nonconvex quadratic programming problems hard. Sahni in 1974 [8] showed that quadratic programming with a negative definite quadratic term (n negative eigenvalues) is NP-hard, whereas Kozlov, Tarasov and Hacijan [2] showed in 1979 that the ellipsoid algorithm solves the convex quadratic problem (no negative eigenvalues) in polynomial time. This report shows that even one negative eigenvalue makes the problem NP-hard.

523 citations


Journal ArticleDOI
Fabio Schoen1
TL;DR: Stochastic algorithms for global optimization are reviewed with the aim of presenting recent papers on the subject, which have received only scarce attention in the most recent published surveys.
Abstract: In this paper stochastic algorithms for global optimization are reviewed. After a brief introduction on random-search techniques, a more detailed analysis is carried out on the application of simulated annealing to continuous global optimization. The aim of such an analysis is mainly that of presenting recent papers on the subject, which have received only scarce attention in the most recent published surveys. Finally a very brief presentation of clustering techniques is given.

141 citations
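As a rough, hypothetical sketch of the simulated-annealing idea surveyed above (not Schoen's formulation; the objective, schedule, and parameters are illustrative only), a minimal one-dimensional version might look like:

```python
import math
import random

def anneal(f, x0, step=0.5, t0=15.0, cooling=0.999, iters=5000, seed=0):
    """Minimal simulated annealing for a one-dimensional objective.

    Uphill moves are accepted with probability exp(-delta/T), so the
    search can escape local minima while the temperature T is high.
    """
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    t = t0
    for _ in range(iters):
        y = x + rng.uniform(-step, step)          # random neighbour
        fy = f(y)
        delta = fy - fx
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x, fx = y, fy                         # accept the move
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling                              # geometric cooling schedule
    return best_x, best_f

# A multimodal test function with global minimum 0 at x = 0.
rastrigin = lambda x: x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))
x_best, f_best = anneal(rastrigin, x0=4.0)
```

The incumbent (best_x, best_f) only ever improves, so the returned value is at least as good as the starting point even on a bad run.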


Journal ArticleDOI
TL;DR: It is shown that parametric linear programming algorithms work efficiently for a class of nonconvex quadratic programming problems called generalized linear multiplicative programming problems, whose objective function is the sum of a linear function and a product of two linear functions.
Abstract: It is shown that parametric linear programming algorithms work efficiently for a class of nonconvex quadratic programming problems called generalized linear multiplicative programming problems, whose objective function is the sum of a linear function and a product of two linear functions. Also, it is shown that the global minimum of the sum of two linear fractional functions over a polytope can be obtained by a similar algorithm. Our numerical experiments reveal that these problems can be solved in much the same computational time as that of solving associated linear programs. Furthermore, we will show that the same approach can be extended to a more general class of nonconvex quadratic programming problems.

135 citations
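To illustrate the parametric idea on a toy instance (an illustrative sketch, not the authors' algorithm; the instance, coefficients, and feasible set below are hypothetical): for an objective c·x + (d·x)(e·x), fixing the value t = e·x makes the objective linear in x, so each slice of the feasible set is minimized at an endpoint (a vertex), and one can sweep t.

```python
# Hypothetical toy instance: minimize c.x + (d.x)(e.x) over the unit
# square X = [0,1]^2, with e = (1, 1), so each slice is a segment x1 + x2 = t.
c = (-1.0, 0.0)
d = (2.0, -1.0)

def obj(x):
    return c[0] * x[0] + c[1] * x[1] + (d[0] * x[0] + d[1] * x[1]) * (x[0] + x[1])

def slice_endpoints(t):
    """Endpoints of the segment {x in [0,1]^2 : x1 + x2 = t}, for 0 <= t <= 2."""
    lo, hi = max(0.0, t - 1.0), min(1.0, t)
    return (lo, t - lo), (hi, t - hi)

# For fixed t the objective is linear in x, so each slice attains its
# minimum at an endpoint; sweeping t reduces the problem to a 1-D search.
best = min(obj(p) for k in range(2001) for p in slice_endpoints(2.0 * k / 2000.0))
```

On this particular instance the sweep recovers the global minimum -1 at the vertex (0, 1), even though the objective is nonconvex (its Hessian is indefinite).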


Journal ArticleDOI
TL;DR: A relaxation algorithm is presented for finding a globally optimal solution to problem (P); it finds an exact optimal solution after a finite number of iterations.
Abstract: The problem (P) of optimizing a linear function over the efficient set of a multiple objective linear program has many important applications in multiple criteria decision making. Since the efficient set is in general a nonconvex set, problem (P) can be classified as a global optimization problem. Perhaps due to its inherent difficulty, it appears that no precisely-delineated implementable algorithm exists for solving problem (P) globally. In this paper a relaxation algorithm is presented for finding a globally optimal solution for problem (P). The algorithm finds an exact optimal solution to the problem after a finite number of iterations. A detailed discussion is included of how to implement the algorithm using only linear programming methods. Convergence of the algorithm is proven, and a sample problem is solved.

80 citations


Book ChapterDOI
TL;DR: This paper reviews methods which have been proposed for solving global optimization problems in the framework of the Bayesian paradigm.
Abstract: This paper reviews methods which have been proposed for solving global optimization problems in the framework of the Bayesian paradigm.

74 citations


Journal ArticleDOI
TL;DR: This algorithm combines a new prismatic branch and bound technique with polyhedral outer approximation in such a way that only linear programming problems have to be solved.
Abstract: We are dealing with a numerical method for solving the problem of minimizing a difference of two convex functions (a d.c. function) over a closed convex set in ℝⁿ. This algorithm combines a new prismatic branch and bound technique with polyhedral outer approximation in such a way that only linear programming problems have to be solved.

69 citations


Journal ArticleDOI
TL;DR: A convex multiplicative programming problem of the form min{f₁(x)·f₂(x) : x ∈ X} is reduced, via two additional variables and a decomposition concept, to a master problem of minimizing a quasi-concave function over a convex set in ℝ², which is solved by outer approximation.
Abstract: We consider a convex multiplicative programming problem of the form min{f₁(x)·f₂(x) : x ∈ X}, where X is a compact convex set of ℝⁿ and f₁, f₂ are convex functions which have nonnegative values over X. Using two additional variables we transform this problem into a problem with a special structure in which the objective function depends only on two of the (n+2) variables. Following a decomposition concept in global optimization we then reduce this problem to a master problem of minimizing a quasi-concave function over a convex set in ℝ². This master problem can be solved by an outer approximation method which requires performing a sequence of simplex tableau pivoting operations. The proposed algorithm is finite when the functions fᵢ (i=1, 2) are affine-linear and X is a polytope, and it is convergent for the general convex case.

68 citations


Journal ArticleDOI
TL;DR: Algorithms, applications, and complexity issues for the single-source uncapacitated (SSU) version of the minimum concave-cost network flow problem (MCNFP) are investigated; the local search algorithm of Gallo and Sodini is formally stated, and alternative local search algorithms are presented.
Abstract: We investigate algorithms, applications, and complexity issues for the single-source uncapacitated (SSU) version of the minimum concave-cost network flow problem (MCNFP). We present applications arising from production planning, and prove complexity results for both global and local search. We formally state the local search algorithm of Gallo and Sodini [5], and present alternative local search algorithms. Computational results are provided to compare the various local search algorithms proposed and the effects of initial solution techniques.

60 citations


Journal ArticleDOI
TL;DR: Two problems are called equivalent if each lower level set of one problem is mapped homeomorphically onto a corresponding lower level set of the other one.
Abstract: We study global stability properties for differentiable optimization problems of the type P(f, H, G): Min f(x) on M[H, G] = {x ∈ ℝⁿ | H(x) = 0, G(x) ≥ 0}. Two problems are called equivalent if each lower level set of one problem is mapped homeomorphically onto a corresponding lower level set of the other one. In case that P(f̃, H̃, G̃) is equivalent with P(f, H, G) for all (f̃, H̃, G̃) in some neighbourhood of (f, H, G), we call P(f, H, G) structurally stable; the topology used takes derivatives up to order two into account. Under the assumption that M[H, G] is compact we prove that structural stability of P(f, H, G) is equivalent with the validity of three conditions.

54 citations


Journal ArticleDOI
TL;DR: A general class of so-called weakly exhaustive simplicial subdivision processes is introduced that subsumes all previously known radial exhaustive processes and provides the basis for constructing flexible subdivision strategies that can be adapted to take advantage of various problem conditions.
Abstract: We investigate subdivision strategies that can improve the convergence and efficiency of some branch and bound algorithms of global optimization. In particular, a general class of so-called weakly exhaustive simplicial subdivision processes is introduced that subsumes all previously known radial exhaustive processes. This result provides the basis for constructing flexible subdivision strategies that can be adapted to take advantage of various problem conditions.

51 citations


Journal ArticleDOI
TL;DR: Dinkelbach's global optimization approach for finding the global maximum of the fractional programming problem is discussed and a modified algorithm is presented which provides both upper and lower bounds at each iteration.
Abstract: Dinkelbach's global optimization approach for finding the global maximum of the fractional programming problem is discussed. Based on this idea, a modified algorithm is presented which provides both upper and lower bounds at each iteration. The convergence of the lower and upper bounds to the global maximum function value is shown to be superlinear. In addition, the special case of fractional programming when the ratio involves only linear or quadratic terms is considered. In this case, the algorithm is guaranteed to find the global maximum to within any specified tolerance, regardless of the definiteness of the quadratic form.
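A minimal sketch of Dinkelbach's iteration (assuming, for illustration, that the parametric subproblem is solved by exhaustive search over a finite grid rather than by an exact solver; the test fraction is hypothetical):

```python
def dinkelbach(f, g, xs, tol=1e-9, max_iter=50):
    """Dinkelbach's scheme for maximizing f(x)/g(x) (with g > 0) over a
    finite candidate set xs: solve the parametric subproblem
        F(lam) = max_x f(x) - lam * g(x),
    then update lam to the ratio at the maximizer.  F(lam) = 0 exactly
    at the optimal ratio.
    """
    lam = 0.0
    x = xs[0]
    for _ in range(max_iter):
        x = max(xs, key=lambda z: f(z) - lam * g(z))   # parametric subproblem
        if f(x) - lam * g(x) <= tol:                   # F(lam) ~ 0: optimal
            break
        lam = f(x) / g(x)                              # ratio update
    return x, lam

# Maximize (x + 2) / (x^2 + 1) over a grid on [0, 3]; the true maximizer
# is x = sqrt(5) - 2, with optimal ratio (2 + sqrt(5)) / 2 (about 2.118).
xs = [3.0 * k / 1000.0 for k in range(1001)]
x_star, ratio = dinkelbach(lambda x: x + 2.0, lambda x: x * x + 1.0, xs)
```

On this smooth example the ratio updates converge in a handful of iterations, consistent with the superlinear rate discussed above.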

Journal ArticleDOI
TL;DR: It is shown that the global minimum of this nonconvex problem can be obtained by solving a sequence of convex programming problems; the basic idea of the algorithm is to embed the original problem into a problem in a higher dimensional space and to apply a branch-and-bound algorithm using an underestimating function.
Abstract: This paper addresses itself to the algorithm for minimizing the product of two nonnegative convex functions over a convex set. It is shown that the global minimum of this nonconvex problem can be obtained by solving a sequence of convex programming problems. The basic idea of this algorithm is to embed the original problem into a problem in a higher dimensional space and to apply a branch-and-bound algorithm using an underestimating function. Computational results indicate that our algorithm is efficient when the objective function is the product of a linear and a quadratic function and the constraints are linear. An extension of our algorithm for minimizing the sum of a convex function and a product of two convex functions is also discussed.

Journal ArticleDOI
TL;DR: An overview of interval arithmetical tools and basic techniques that can be used to construct deterministic global optimization algorithms and are applicable to unconstrained and constrained optimization as well as to nonsmooth optimization and to problems over unbounded domains is presented.
Abstract: An overview of interval arithmetical tools and basic techniques is presented that can be used to construct deterministic global optimization algorithms. These tools are applicable to unconstrained and constrained optimization as well as to nonsmooth optimization and to problems over unbounded domains. Since almost all interval based global optimization algorithms use branch-and-bound methods with iterated bisection of the problem domain we also embed our overview in such a setting.
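A minimal, self-contained sketch of the branch-and-bound-with-bisection setting described above, for the single function f(x) = x² − 2x (the inclusion function below is hand-built for this one f; a real implementation would use a general interval arithmetic library):

```python
def interval_range(a, b):
    """Hand-built inclusion function for f(x) = x*x - 2*x on [a, b]:
    encloses the range using interval rules for squaring and subtraction.
    """
    sq_lo = min(a * a, b * b) if a * b > 0.0 else 0.0   # range of x^2
    sq_hi = max(a * a, b * b)
    return sq_lo - 2.0 * b, sq_hi - 2.0 * a             # minus range of 2x

def interval_minimize(lo, hi, tol=1e-6):
    """Branch-and-bound with iterated bisection of the domain: discard
    boxes whose interval lower bound exceeds the best verified upper
    bound, and bisect the rest until they are narrower than tol.
    """
    boxes = [(lo, hi)]
    best_ub = float("inf")
    while boxes:
        a, b = boxes.pop()
        f_lo, _ = interval_range(a, b)
        mid = 0.5 * (a + b)
        best_ub = min(best_ub, mid * mid - 2.0 * mid)   # point evaluation
        if f_lo > best_ub or b - a < tol:               # prune / stop splitting
            continue
        boxes += [(a, mid), (mid, b)]
    return best_ub

# f(x) = x^2 - 2x on [-2, 3] has its global minimum -1 at x = 1.
m = interval_minimize(-2.0, 3.0)
```

Because the interval lower bound of any box containing the minimizer never exceeds the incumbent, the true minimum can never be pruned away; only provably suboptimal boxes are discarded.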

Journal ArticleDOI
TL;DR: It is demonstrated how the size of certain global optimization problems can substantially be reduced by using dualization and polyhedral annexation techniques.
Abstract: We demonstrate how the size of certain global optimization problems can substantially be reduced by using dualization and polyhedral annexation techniques. The results are applied to develop efficient algorithms for solving concave minimization problems with a low degree of nonlinearity. This class includes in particular nonconvex optimization problems involving products or quotients of affine functions in the objective function.

Journal ArticleDOI
TL;DR: A global search heuristic based on random extreme feasible initial solutions and local search is developed and used to evaluate the complexity of randomly generated test problems; an exact global search algorithm based on enumeration of rooted subtrees is extended to bound the search using cost properties and linear underestimation.
Abstract: We present algorithms for the single-source uncapacitated version of the minimum concave cost network flow problem. Each algorithm exploits the fact that an extreme feasible solution corresponds to a sub-tree of the original network. A global search heuristic based on random extreme feasible initial solutions and local search is developed. The algorithm is used to evaluate the complexity of the randomly generated test problems. An exact global search algorithm is developed, based on enumerative search of rooted subtrees. This exact technique is extended to bound the search based on cost properties and linear underestimation. The technique is accelerated by exploiting the network structure.

Journal ArticleDOI
TL;DR: The problem min{f(x) : x ∈ G, T(x) ∉ int D} can be reduced to a quasiconcave minimization problem over a compact convex set in ℝ² and hence can be solved effectively provided f, T are convex and G is convex or discrete.
Abstract: We consider the problem min{f(x) : x ∈ G, T(x) ∉ int D}, where f is a lower semicontinuous function, G a compact, nonempty set in ℝⁿ, D a closed convex set in ℝ² with nonempty interior, and T a continuous mapping from ℝⁿ to ℝ². The constraint T(x) ∉ int D is a reverse convex constraint, so the feasible domain may be disconnected even when f, T are affine and G is a polytope. We show that this problem can be reduced to a quasiconcave minimization problem over a compact convex set in ℝ², and hence can be solved effectively provided f, T are convex and G is convex or discrete. In particular, we discuss a reverse convex constraint of the form ⟨c, x⟩·⟨d, x⟩ ≤ 1. We also compare the approach in this paper with the parametric approach.

Journal ArticleDOI
TL;DR: The issue of finding feasible mixture designs is formulated and solved as a Lipschitzian global optimization problem, based on a simplicial partition strategy.
Abstract: The issue of finding feasible mixture designs is formulated and solved as a Lipschitzian global optimization problem. The solution algorithm is based on a simplicial partition strategy. Implementation aspects and extension possibilities are treated in some detail, and numerical examples are provided.

Journal ArticleDOI
TL;DR: This work derives stopping rules that minimize loss functions which assign a loss to the sample size and to the deviation between the maximum in the sample and the true (unknown) maximum.
Abstract: Suppose a sequential sample is taken from an unknown discrete probability distribution on an unknown range of integers, in an effort to sample its maximum. A crucial issue is an appropriate stopping rule determining when to terminate the sampling process. We approach this problem from a Bayesian perspective, and derive stopping rules that minimize loss functions which assign a loss to the sample size and to the deviation between the maximum in the sample and the true (unknown) maximum. We will show that our rules offer an extremely simple approximate solution to the well-known problem of terminating the Multistart method for continuous global optimization.
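For context, a sketch of the Multistart loop whose termination is at issue. The stopping rule below (quit after a fixed number of consecutive non-improving starts) is a naive stand-in for the paper's Bayesian rules, and the objective, local minimizer, and parameters are all illustrative assumptions:

```python
import math
import random

def descend(f, x, step=0.01, iters=2000):
    """Crude local minimizer: fixed-step descent in one dimension."""
    for _ in range(iters):
        if f(x - step) < f(x):
            x -= step
        elif f(x + step) < f(x):
            x += step
        else:
            break
    return x

def multistart(f, sample, patience=50, seed=0):
    """Multistart with a naive stopping rule (a crude stand-in for the
    Bayesian stopping rules derived in the paper): terminate once
    `patience` consecutive random starts fail to improve the incumbent.
    """
    rng = random.Random(seed)
    best, stale = float("inf"), 0
    while stale < patience:
        fx = f(descend(f, sample(rng)))        # local search from a random start
        if fx < best - 1e-12:
            best, stale = fx, 0
        else:
            stale += 1
    return best

# Multimodal objective with local minima near every integer; the global
# minimum value is 0 at x = 0.
rastrigin = lambda x: x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))
best = multistart(rastrigin, lambda rng: rng.uniform(-3.0, 3.0))
```

The tension the paper addresses is visible here: a small `patience` risks stopping before the basin of the global minimum has been sampled, while a large one wastes local searches.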

Journal ArticleDOI
TL;DR: Finitely convergent algorithms for solving rank two and three bilinear programming problems are proposed, and a variant of the parametric simplex algorithm is shown to solve large scale rank two bilinear programming problems efficiently.
Abstract: Finitely convergent algorithms for solving rank two and three bilinear programming problems are proposed. A rank k bilinear programming problem is a nonconvex quadratic programming problem with the following structure: minimize {c_0^t x + d_0^t y + Σ_{j=1}^{k} (c_j^t x)(d_j^t y) : x ∈ X, y ∈ Y}, where X ⊂ ℝ^{n1} and Y ⊂ ℝ^{n2} are non-empty and bounded polytopes. We show that a variant of the parametric simplex algorithm can solve large scale rank two bilinear programming problems efficiently. Also, we show that a cutting-cake algorithm, a more elaborate variant of the parametric simplex algorithm, can solve medium scale rank three problems.

Journal ArticleDOI
TL;DR: A finite subgradient algorithm for the search of an ε-optimal solution is proposed and results of numerical experiments are presented.
Abstract: The global optimization problem is considered under the assumption that the objective function is convex with respect to some variables. A finite subgradient algorithm for the search of an ε-optimal solution is proposed. Results of numerical experiments are presented.

Journal ArticleDOI
Jørgen Tind1
TL;DR: A simple framework for the various decomposition schemes in mathematical programming is proposed, based on general duality theory and thus focussing on approaches leading to global optimality for the general mathematical programming problem with two sets of variables.
Abstract: The purpose of this article is to propose a simple framework for the various decomposition schemes in mathematical programming.

Journal ArticleDOI
TL;DR: It is shown that Timonov's algorithm, which chooses evaluation points to ensure a maximal expected reduction of the "region of indeterminacy" containing all globally optimal points, does not necessarily converge to a global optimum.
Abstract: Timonov proposes an algorithm for global maximization of univariate Lipschitz functions in which successive evaluation points are chosen in order to ensure at each iteration a maximal expected reduction of the “region of indeterminacy”, which contains all globally optimal points. It is shown that such an algorithm does not necessarily converge to a global optimum.
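For context, a minimal sketch of the family of methods at issue, a Piyavskii–Shubert-type scheme for Lipschitz maximization (not Timonov's specific expected-reduction rule; function, constant, and interval are illustrative):

```python
import math

def shubert_maximize(f, L, a, b, iters=40):
    """Maximize an L-Lipschitz function f on [a, b] by repeatedly sampling
    where the piecewise-linear upper envelope
        F(x) = min_i (f(x_i) + L * |x - x_i|)
    peaks, i.e. where the remaining uncertainty is largest.
    """
    pts = [(a, f(a)), (b, f(b))]
    for _ in range(iters):
        pts.sort()
        best_x, best_val = None, -float("inf")
        # apex of the envelope between each pair of neighbouring samples
        for (x1, y1), (x2, y2) in zip(pts, pts[1:]):
            x = 0.5 * (x1 + x2) + (y2 - y1) / (2.0 * L)
            val = 0.5 * (y1 + y2) + 0.5 * L * (x2 - x1)
            if val > best_val:
                best_x, best_val = x, val
        pts.append((best_x, f(best_x)))
    return max(y for _, y in pts)

# sin is 1-Lipschitz; its maximum on [0, 2.5] is 1, attained at x = pi/2.
m = shubert_maximize(math.sin, 1.0, 0.0, 2.5)
```

The envelope apexes bound how far the true maximum can exceed the best sample, which is exactly the kind of "region of indeterminacy" bookkeeping whose convergence the paper scrutinizes.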

Journal ArticleDOI
TL;DR: The experts in global optimization are asked if there is an efficient solution to an optimization problem in acceptance sampling: here, one often has incomplete prior information about the quality of incoming lots.
Abstract: We ask the experts in global optimization if there is an efficient solution to an optimization problem in acceptance sampling: Here, one often has incomplete prior information about the quality of incoming lots. Given a cost model, a decision rule for the inspection of a lot may then be designed that minimizes the maximum loss compatible with the available information. The resulting minimax problem is sometimes hard to solve, as the loss functions may have several local maxima which vary in an “unpredictable” way with the parameters of the decision rule.

Journal ArticleDOI
TL;DR: Procedures using interval analysis for computing guaranteed bounds on the solution set provide a means for doing a sensitivity analysis or simply bounding the effect of errors in data.
Abstract: Consider a global optimization problem in which the objective function and/or the constraints are expressed in terms of parameters. Suppose we wish to know the set of global solutions as the parameters vary over given intervals. In this paper we discuss procedures using interval analysis for computing guaranteed bounds on the solution set. This provides a means for doing a sensitivity analysis or simply bounding the effect of errors in data.

Journal ArticleDOI
TL;DR: The van der Waerden permanent problem was solved using mainly algebraic methods, but a much simpler analytic proof is given using a new concept in optimization theory which may be of importance in the general theory of mathematical programming.
Abstract: The van der Waerden permanent problem was solved using mainly algebraic methods. A much simpler analytic proof is given using a new concept in optimization theory which may be of importance in the general theory of mathematical programming.