
Showing papers in "Journal of Global Optimization in 1992"


Journal ArticleDOI
TL;DR: This paper is concerned with the development of an algorithm for general bilinear programming problems, and develops a new Reformulation-Linearization Technique (RLT) for this problem, and imbeds it within a provably convergent branch-and-bound algorithm.
Abstract: This paper is concerned with the development of an algorithm for general bilinear programming problems. Such problems find numerous applications in economics and game theory, location theory, nonlinear multi-commodity network flows, dynamic assignment and production, and various risk management problems. The proposed approach develops a new Reformulation-Linearization Technique (RLT) for this problem, and imbeds it within a provably convergent branch-and-bound algorithm. The method first reformulates the problem by constructing a set of nonnegative variable factors using the problem constraints, and suitably multiplies combinations of these factors with the original problem constraints to generate additional valid nonlinear constraints. The resulting nonlinear program is subsequently linearized by defining a new set of variables, one for each nonlinear term. This “RLT” process yields a linear programming problem whose optimal value provides a tight lower bound on the optimal value of the bilinear programming problem. Various implementation schemes and constraint generation procedures are investigated for the purpose of further tightening the resulting linearization. The lower bound thus produced theoretically dominates, and in practice is far tighter than, that obtained by using convex envelopes over hyper-rectangles. In fact, for some special cases, this process is shown to yield an exact linear programming representation. For the associated branch-and-bound algorithm, various admissible branching schemes are discussed, including one in which branching is performed by partitioning the intervals for only one set of variables, x or y, whichever are fewer in number. Computational experience is provided to demonstrate the viability of the algorithm. For a large number of test problems from the literature, the initial bounding linear program itself solves the underlying bilinear programming problem.
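As a hedged illustration of the RLT step described above (a toy box-constrained example in generic notation, not the paper's general constraint factors, which are what make the paper's bound dominate the convex-envelope bound):

```latex
% Toy bilinear program:  minimize xy  subject to  1 <= x <= 2,  1 <= y <= 3.
% Nonnegative bound factors:  (x-1), (2-x), (y-1), (3-y).
% Multiply x-factors by y-factors and linearize with  w := xy :
\begin{align*}
(x-1)(y-1) \ge 0 &\;\Longrightarrow\; w - x - y + 1 \ge 0,\\
(x-1)(3-y) \ge 0 &\;\Longrightarrow\; 3x + y - w - 3 \ge 0,\\
(2-x)(y-1) \ge 0 &\;\Longrightarrow\; x + 2y - w - 2 \ge 0,\\
(2-x)(3-y) \ge 0 &\;\Longrightarrow\; w - 3x - 2y + 6 \ge 0.
\end{align*}
% Minimizing w subject to these linear inequalities (and the bounds) gives a
% lower bound on min xy; when general problem constraints are also used as
% factors, the RLT bound dominates the convex-envelope bound over the box.
```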

400 citations


Journal ArticleDOI
TL;DR: This paper is concerned with the development of an algorithm to solve continuous polynomial programming problems for which the objective function and the constraints are specified polynomials, and a linear programming relaxation is derived based on a Reformulation Linearization Technique.
Abstract: This paper is concerned with the development of an algorithm to solve continuous polynomial programming problems for which the objective function and the constraints are specified polynomials. A linear programming relaxation is derived for the problem based on a Reformulation Linearization Technique (RLT), which generates nonlinear (polynomial) implied constraints to be included in the original problem, and subsequently linearizes the resulting problem by defining new variables, one for each distinct polynomial term. This construct is then used to obtain lower bounds in the context of a proposed branch and bound scheme, which is proven to converge to a global optimal solution. A numerical example is presented to illustrate the proposed algorithm.
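For concreteness, a hedged sketch of the linearization step on a made-up constraint (not an example from the paper): each distinct polynomial term receives its own new variable, and RLT products of bound factors tie the new variables to the original ones.

```latex
% Made-up constraint with 0 <= x_1, x_2 <= 1:
x_1^2 x_2 + x_1 x_2 \le 1 .
% Define  X_{112} := x_1^2 x_2  and  X_{12} := x_1 x_2 ; the constraint becomes linear:
X_{112} + X_{12} \le 1 ,
% while RLT products of bound factors, e.g.
(1 - x_1)(1 - x_2) \ge 0 \;\Longrightarrow\; 1 - x_1 - x_2 + X_{12} \ge 0 ,
% supply linear relations linking the new variables to x_1, x_2 in the relaxation.
```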

341 citations


Journal ArticleDOI
TL;DR: Rationality of the search for a global minimum is formulated axiomatically and the features of the corresponding algorithm are derived from the axioms.
Abstract: A review of statistical models for global optimization is presented. Rationality of the search for a global minimum is formulated axiomatically and the features of the corresponding algorithm are derived from the axioms. Furthermore the results of some applications of the proposed algorithm are presented and the perspectives of the approach are discussed.

80 citations


Journal ArticleDOI
TL;DR: A simplicial branch and bound-outer approximation technique for solving nonseparable, nonlinearly constrained concave minimization problems is proposed which uses a new simplicial cover rather than classical simplicial partitions.
Abstract: A simplicial branch and bound-outer approximation technique for solving nonseparable, nonlinearly constrained concave minimization problems is proposed which uses a new simplicial cover rather than classical simplicial partitions. Some geometric properties and convergence results are demonstrated. A report on numerical aspects and experiments is given which shows that the most promising variant of the cover technique can be expected to be more efficient than comparable previous simplicial procedures.

59 citations


Journal ArticleDOI
TL;DR: Modifications to a prototypical branch and bound algorithm for nonlinear optimization are proposed so that the algorithm efficiently handles constrained problems with constant bound constraints.
Abstract: In this paper, we propose modifications to a prototypical branch and bound algorithm for nonlinear optimization so that the algorithm efficiently handles constrained problems with constant bound constraints. The modifications involve treating subregions of the boundary identically to interior regions during the branch and bound process, but using reduced gradients for the interval Newton method. The modifications also involve preconditioners for the interval Gauss-Seidel method which are optimal in the sense that their application selectively gives a coordinate bound of minimum width, a coordinate bound whose left endpoint is as large as possible, or a coordinate bound whose right endpoint is as small as possible. We give experimental results on a selection of problems with different properties.
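To make the interval-bounding machinery concrete, here is a minimal, hand-rolled sketch of a one-variable interval Newton contraction. It shows only the plain step, not the paper's reduced-gradient, preconditioned Gauss-Seidel variants, and the toy interval arithmetic below ignores outward rounding.

```python
# Minimal interval Newton contraction for f(x) = x^2 - 2 on X = [1, 2].
# A toy illustration of the bounding step; the paper's algorithm applies
# preconditioned interval Gauss-Seidel / Newton steps in many variables.

def interval_sub(a, b):          # [a1,a2] - [b1,b2]
    return (a[0] - b[1], a[1] - b[0])

def interval_div(a, b):          # assumes 0 is not contained in b
    cands = [a[i] / b[j] for i in (0, 1) for j in (0, 1)]
    return (min(cands), max(cands))

def intersect(a, b):
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None   # empty: no root in the box

def interval_newton_step(X, f, fprime_interval):
    m = 0.5 * (X[0] + X[1])                 # midpoint of the current box
    fm = (f(m), f(m))                       # degenerate interval
    N = interval_sub((m, m), interval_div(fm, fprime_interval(X)))
    return intersect(X, N)                  # contracted enclosure of the root

f = lambda x: x * x - 2.0
fprime = lambda X: (2.0 * X[0], 2.0 * X[1])  # enclosure of f'(x) = 2x on X (X >= 0)

X = (1.0, 2.0)
for _ in range(5):
    X = interval_newton_step(X, f, fprime)
print(X)   # tight enclosure of sqrt(2)
```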

53 citations


Journal ArticleDOI
TL;DR: A new global minimization method in which the Gibbs distribution of the objective function is deterministically annealed by tracing the evolution of a multiple-Gaussian-packet approximation is outlined.
Abstract: We outline a new global minimization method in which the Gibbs distribution of the objective function is deterministically annealed by tracing the evolution of a multiple-Gaussian-packet approximation. Solutions are reached by iterative approximations with decreasing coarse-graining of both objective-function and spatial scales. Results from application of a partial implementation to the atomic-microcluster conformation problem are presented.
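A hedged sketch of the coarse-graining idea only (smoothing the objective at a decreasing sequence of scales and tracking the minimizer), not the paper's multiple-Gaussian-packet evolution; the test function and scale schedule are invented for illustration.

```python
import numpy as np

# Smooth a 1-D objective with Gaussians of decreasing width and follow the
# minimizer of the smoothed landscape as the coarse-graining scale shrinks.

def annealed_argmin(f, lo, hi, sigmas, n=4001):
    x = np.linspace(lo, hi, n)
    fx = f(x)
    xmin = None
    for sigma in sigmas:                          # decreasing smoothing scale
        kernel = np.exp(-0.5 * ((x - x.mean()) / sigma) ** 2)
        kernel /= kernel.sum()
        fs = np.convolve(fx, kernel, mode="same")
        if xmin is None:
            idx = np.argmin(fs)                   # coarsest scale: global argmin
        else:
            window = np.abs(x - xmin) <= 5 * sigma   # track the current basin
            idx = np.flatnonzero(window)[np.argmin(fs[window])]
        xmin = x[idx]
    return xmin

# Rugged test function with many local minima; global minimum near x = -0.105.
f = lambda x: x ** 2 + 2.0 * np.sin(15.0 * x)
print(annealed_argmin(f, -3.0, 3.0, sigmas=[1.0, 0.3, 0.1, 0.03]))
```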

41 citations


Journal ArticleDOI
TL;DR: A new scheme simultaneously employing several joint Peano-type scannings is presented which carries over the property of nearness of points in many dimensions to nearness of the pre-images of these points in one dimension significantly better than a scheme with a single space-filling curve.
Abstract: Some powerful algorithms for multi-extremal non-convex-constrained optimization problems are based on reducing these multi-dimensional problems to one-dimensional ones by applying Peano-type space-filling curves that map a unit interval on the real axis onto a multi-dimensional hypercube. A new scheme simultaneously employing several joint Peano-type scannings is presented and substantiated; it carries over the property of nearness of points in many dimensions to nearness of the pre-images of these points in one dimension significantly better than a scheme with a single space-filling curve. Sufficient conditions for global convergence of the new scheme are investigated.
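As a rough illustration of how a one-dimensional argument can drive a multi-dimensional search, here is a bit-interleaving (Z-order) scanning used as a stand-in for a Peano-type curve; the paper's curves preserve nearness more strongly, and its multiple-scanning scheme is not reproduced here.

```python
# Map t in [0, 1) to a point in the d-dimensional unit cube by de-interleaving
# binary digits (a Z-order / Morton-type scanning).  Only a stand-in for the
# Peano-type curves of the paper, used to show how a multi-dimensional search
# can be driven by a one-dimensional argument t.

def scan(t, dim, bits=20):
    digits = []
    for _ in range(dim * bits):          # binary expansion of t
        t *= 2.0
        d = int(t)
        digits.append(d)
        t -= d
    coords = [0.0] * dim
    for k, d in enumerate(digits):       # digit k feeds coordinate k mod dim
        coords[k % dim] += d * 2.0 ** (-(k // dim) - 1)
    return coords

# A one-dimensional grid search over t drives a search over the 2-D cube:
f = lambda p: (p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2
best = min((f(scan(i / 4096.0, 2)), i / 4096.0) for i in range(4096))
print(best, scan(best[1], 2))            # point close to (0.3, 0.7)
```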

39 citations


Journal ArticleDOI
TL;DR: The application of this algorithm to the special case of polynomial functions of one variable is discussed, and the primal problem is shown to reduce to a simple function evaluation, while the relaxed dual problem is equivalent to the simultaneous solution of two linear equations in two variables.
Abstract: In Floudas and Visweswaran (1990), a new global optimization algorithm (GOP) was proposed for solving constrained nonconvex problems involving quadratic and polynomial functions in the objective function and/or constraints. In this paper, the application of this algorithm to the special case of polynomial functions of one variable is discussed. The special nature of polynomial functions enables considerable simplification of the GOP algorithm. The primal problem is shown to reduce to a simple function evaluation, while the relaxed dual problem is equivalent to the simultaneous solution of two linear equations in two variables. In addition, the one-to-one correspondence between the x and y variables in the problem enables the iterative improvement of the bounds used in the relaxed dual problem. The simplified approach is illustrated through a simple example that shows the significant improvement in the underestimating function obtained from the application of the modified algorithm. The application of the algorithm to several unconstrained and constrained polynomial function problems is demonstrated.

34 citations


Journal ArticleDOI
TL;DR: In this paper, it is shown that minimizing the number of additional variables and minimizing the number of complicating variables are equivalent to a maximum bipartite subgraph problem and a maximum stable set problem, respectively, in a graph associated with the quadratic program.
Abstract: Indefinite quadratic programs with quadratic constraints can be reduced to bilinear programs with bilinear constraints by duplication of variables. Such reductions are studied in which: (i) the number of additional variables is minimum, or (ii) the number of complicating variables, i.e., variables to be fixed in order to obtain a linear program, in the resulting bilinear program is minimum. These two problems are shown to be equivalent to a maximum bipartite subgraph problem and a maximum stable set problem, respectively, in a graph associated with the quadratic program. Non-polynomial but practically efficient algorithms for both reductions are thus obtained. Reduction of more general global optimization problems than quadratic programs to bilinear programs is also briefly discussed.
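A hedged one-line illustration of the duplication step in generic notation (duplicating all variables; the paper studies reductions that duplicate as few as possible):

```latex
\min_{x}\; x^{\mathsf T} Q_0 x \quad \text{s.t.}\;\; x^{\mathsf T} Q_1 x + c^{\mathsf T} x \le b
\qquad\Longrightarrow\qquad
\min_{x,y}\; x^{\mathsf T} Q_0 y \quad \text{s.t.}\;\; x^{\mathsf T} Q_1 y + c^{\mathsf T} x \le b,\;\; x = y .
% Fixing the "complicating" variables x leaves a linear program in y.
```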

33 citations


Journal ArticleDOI
TL;DR: The importance of exploiting the complementary convex structure is shown for efficiently solving a wide class of specially structured nonconvex global optimization problems, including linear and convex multiplicative programming problems, concave minimization problems with few nonlinear variables, bilevel linear optimization problems, etc.
Abstract: We show the importance of exploiting the complementary convex structure for efficiently solving a wide class of specially structured nonconvex global optimization problems. Roughly speaking, a specific feature of these problems is that their nonconvex nucleus can be transformed into a complementary convex structure which can then be shifted to a subspace of much lower dimension than the original underlying space. This approach leads to quite efficient algorithms for many problems of practical interest, including linear and convex multiplicative programming problems, concave minimization problems with few nonlinear variables, bilevel linear optimization problems, etc.

32 citations


Journal ArticleDOI
TL;DR: A technique of improving the dual estimates in nonconvex multiextremal problems of mathematical programming, by adding some additional constraints which are the consequences of the original constraints, is proposed.
Abstract: We propose a technique for improving the dual estimates in nonconvex multiextremal problems of mathematical programming by adding constraints that are consequences of the original constraints. This technique is used for problems of finding the global minimum of polynomial functions, and for extremal quadratic and Boolean quadratic problems. The article also considers an ecological multiextremal problem and an algorithm for finding its dual estimate; this algorithm is based on a decomposition scheme and nonsmooth optimization methods.

Journal ArticleDOI
TL;DR: The exterior penalty method as well as the variational approximation method appear to be particular cases of this framework for multiobjective optimization problems with a finite number of objective functions.
Abstract: Some results of approximation type for multiobjective optimization problems with a finite number of objective functions are presented. Namely, for a sequence of multiobjective optimization problems P_n which converges in a suitable sense to a limit problem P, properties of the sequence of approximate Pareto efficient sets of the P_n are studied with respect to the Pareto efficient set of P. The exterior penalty method as well as the variational approximation method appear to be particular cases of this framework.

Journal ArticleDOI
TL;DR: A method is proposed for solving a class of such problems which includes Lipschitz optimization, reverse convex programming problems and also more general nonconvex optimization problems.
Abstract: A mathematical programming problem is said to have separated nonconvex variables when the variables can be divided into two groups: x = (x_1, ..., x_n) and y = (y_1, ..., y_n), such that the objective function and any constraint function is a sum of a convex function of (x, y) jointly and a nonconvex function of x alone. A method is proposed for solving a class of such problems which includes Lipschitz optimization, reverse convex programming problems and also more general nonconvex optimization problems.

Journal ArticleDOI
Anil P. Kamath, Narendra Karmarkar
TL;DR: This paper considers the problem of maximizing a quadratic function x^T Q x where Q is an n × n real symmetric matrix and x is an n-dimensional vector constrained to lie in {−1, 1}^n.
Abstract: In the graph partitioning problem, as in other NP-hard problems, proving the existence of a cut of a given size is easy and can be accomplished by exhibiting a solution with the correct value. On the other hand, proving the non-existence of a cut better than a given value is very difficult. We consider the problem of maximizing a quadratic function x^T Q x where Q is an n × n real symmetric matrix and x is an n-dimensional vector constrained to lie in {−1, 1}^n. In [4] we proposed a technique for obtaining upper bounds on solutions to this problem using a continuous approach. In this paper, we extend this method by using techniques of differential geometry.
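For completeness, the standard identity that ties graph partitioning to this quadratic form (the paper's bounding technique itself is continuous and differential-geometric, not this reformulation):

```latex
% Edge weights w_{ij}, labels x_i \in \{-1,1\}, symmetric weight matrix W:
\mathrm{cut}(x) \;=\; \tfrac{1}{4}\sum_{i,j} w_{ij}\,(1 - x_i x_j)
\;=\; \tfrac{1}{4}\sum_{i,j} w_{ij} \;-\; \tfrac{1}{4}\, x^{\mathsf T} W x ,
% so maximizing the cut amounts to maximizing  x^T Q x  with  Q = -\tfrac{1}{4} W,
% up to an additive constant.
```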

Journal ArticleDOI
TL;DR: This paper shows that this problem is equivalent to a convex maximization problem over a compact convex set and develops a specialized polyhedral annexation procedure to find a global solution for the case when the inside function is a polyhedral norm.
Abstract: The problem of maximizing the sum of certain composite functions, where each term is the composition of a convex decreasing function, bounded from below, with a convex function having compact level sets, arises in certain single facility location problems with gauge distance functions. We show that this problem is equivalent to a convex maximization problem over a compact convex set and develop a specialized polyhedral annexation procedure to find a global solution for the case when the inside function is a polyhedral norm. As the problem has so far been solved only for local solutions, this paper offers an algorithm for finding a global solution. Implementation and testing are not treated in this short communication.

Journal ArticleDOI
TL;DR: It is proved a sufficient condition that a strong local minimizer of a bounded quadratic program is the unique global minimizer.
Abstract: In this paper we prove a sufficient condition that a strong local minimizer of a bounded quadratic program is the unique global minimizer. This sufficient condition can be verified computationally by solving a linear and a convex quadratic program and can be used as a quality test for local minimizers found by standard indefinite quadratic programming routines.

Journal ArticleDOI
TL;DR: A parallel stochastic algorithm is presented for solving the linearly constrained concave global minimization problem and makes use of a Bayesian stopping rule to identify the global minimum with high probability.
Abstract: A parallel stochastic algorithm is presented for solving the linearly constrained concave global minimization problem. The algorithm is a multistart method and makes use of a Bayesian stopping rule to identify the global minimum with high probability. Computational results are presented for more than 200 problems on a Cray X-MP EA/464 supercomputer.
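A serial skeleton of a multistart method with a Bayesian stopping rule of the Boender-Rinnooy Kan type, offered as a hedged sketch of the idea only; the paper's algorithm is parallel, handles linear constraints, and its exact stopping rule may differ. SciPy is assumed for the local solver, and the test function and thresholds are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Multistart: repeated local searches from random points, stopping when the
# Bayesian estimate of the total number of local minima is close to the
# number of distinct minima already found.

def multistart(f, bounds, max_starts=200, tol=1e-3, rng=np.random.default_rng(0)):
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    minima = []                                   # distinct local minima found
    for s in range(1, max_starts + 1):
        x0 = rng.uniform(lo, hi)
        res = minimize(f, x0, bounds=bounds)      # local search (L-BFGS-B)
        if not any(np.linalg.norm(res.x - m[0]) < tol for m in minima):
            minima.append((res.x, res.fun))
        w = len(minima)
        if s > w + 2:                             # Bayesian estimate is defined
            est = w * (s - 1) / (s - w - 2)       # expected total no. of minima
            if est < w + 0.5:
                break
    return min(minima, key=lambda m: m[1])        # best minimum found

# Himmelblau's function: four global minima with value 0.
f = lambda x: (x[0] ** 2 + x[1] - 11) ** 2 + (x[0] + x[1] ** 2 - 7) ** 2
print(multistart(f, bounds=[(-5.0, 5.0), (-5.0, 5.0)]))
```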

Journal ArticleDOI
TL;DR: A new stochastic method for locating the global optimum of a function is proposed that is based on the entropy of a move selecting distribution and is loosely connected to some notions in statistical thermodynamics.
Abstract: Recently, simulated annealing methods have proven to be a valuable tool for global optimization. We propose a new stochastic method for locating the global optimum of a function. The proposed method begins with the subjective specification of a probing distribution. The objective function is evaluated at a few points sampled from this distribution, which is then updated using the collected information. The updating mechanism is based on the entropy of a move selecting distribution and is loosely connected to some notions in statistical thermodynamics. Examples of the use of the proposed method are presented. These indicate its superior performance as compared with simulated annealing. Preliminary considerations in applying the method to discrete problems are discussed.

Journal ArticleDOI
TL;DR: This paper gives a brief survey and assessment of computational methods for finding solutions to systems of nonlinear equations and systems of polynomial equations, focusing on simplicial algorithms and homotopy methods.
Abstract: This paper gives a brief survey and assessment of computational methods for finding solutions to systems of nonlinear equations and systems of polynomial equations. Starting from methods which converge locally and which find one solution, we progress to methods which are globally convergent and find an a priori determinable number of solutions. We will concentrate on simplicial algorithms and homotopy methods. Enhancements of published methods are included and further developments are discussed.

Journal ArticleDOI
TL;DR: The normal score transformation is applied to the multi-univariate method of global optimization, yielding a method that gives equivalent searches for any monotonic transformation of the objective function.
Abstract: Nonparametric global optimization methods have been developed that determine the location of their next guess based on the rank-transformed objective function evaluations rather than the actual function values themselves. Another commonly-used transformation in nonparametric statistics is the normal score transformation. This paper applies the normal score transformation to the multi-univariate method of global optimization. The benefits of the new method are shown by its performance on a standard set of global optimization test problems. The normal score transformation yields a method that gives equivalent searches for any monotonic transformation of the objective function.
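The normal score transformation itself, as it is usually computed from ranks, is sketched below; how it is wired into the multi-univariate method is not reproduced here, and SciPy is assumed.

```python
import numpy as np
from scipy.stats import norm

# Normal score transformation: replace each observed objective value by the
# standard-normal quantile of its rank.  Any monotonic transformation of f
# produces the same ranks, hence the same scores -- which is what makes a
# search driven by these scores invariant under such transformations.

def normal_scores(values):
    values = np.asarray(values, dtype=float)
    ranks = np.argsort(np.argsort(values)) + 1    # ranks 1..n (no ties assumed)
    return norm.ppf((ranks - 0.5) / len(values))  # standard plotting position

f_vals = np.array([3.2, -1.0, 0.4, 10.0, 2.7])
print(normal_scores(f_vals))
print(normal_scores(np.exp(f_vals)))              # identical scores
```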

Journal ArticleDOI
TL;DR: It is shown that the concave problems obtained have fewer integer local minima than the classical concave formulation of the unconstrained 0–1 nonlinear problem.
Abstract: The purpose of this paper is to give new formulations for the unconstrained 0–1 nonlinear problem. The unconstrained 0–1 nonlinear problem is reduced to nonlinear continuous problems where the objective functions are piecewise linear. In the first formulation, the objective function is a difference of two convex functions, while the other formulations lead to concave problems. It is shown that the concave problems we obtain have fewer integer local minima than the classical concave formulation of the unconstrained 0–1 nonlinear problem.
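For context, the "classical concave formulation" mentioned above is usually obtained with a concave penalty that drives the variables to 0 or 1; a hedged sketch in generic notation follows (the paper's new piecewise-linear formulations are different).

```latex
% Classical concave reformulation of an unconstrained 0-1 problem:
\min_{x \in \{0,1\}^n} f(x)
\;=\;
\min_{x \in [0,1]^n} \; f(x) + \mu \sum_{i=1}^{n} x_i (1 - x_i)
\qquad \text{for all sufficiently large } \mu > 0,
% since x_i(1-x_i) \ge 0 on [0,1] and vanishes exactly at the 0-1 points;
% for quadratic f, the penalized objective becomes concave once \mu is large
% enough, which is the classical concave formulation.
```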

Journal ArticleDOI
TL;DR: The goals of this model are to offer an abstract representation of asynchronous and heterogeneous distributed systems, to present a mechanism for specifying externally observable behaviours of distributed processes and to provide rules for combining these processes into networks with desired properties.
Abstract: In this paper, we present a model which characterizes distributed computing algorithms. The goals of this model are to offer an abstract representation of asynchronous and heterogeneous distributed systems, to present a mechanism for specifying externally observable behaviours of distributed processes and to provide rules for combining these processes into networks with desired properties (good functioning, fairness...). Once these good properties are found, the determination of the optimal rules are studied.