
Showing papers in "Mathematical Programming in 1991"


Journal ArticleDOI
TL;DR: This paper presents a methodology for the solution of multistage stochastic optimization problems, based on the approximation of the expected-cost-to-go functions of stochastic dynamic programming by piecewise linear functions.
Abstract: This paper presents a methodology for the solution of multistage stochastic optimization problems, based on the approximation of the expected-cost-to-go functions of stochastic dynamic programming by piecewise linear functions. No state discretization is necessary, and the combinatorial "explosion" with the number of states (the well known "curse of dimensionality" of dynamic programming) is avoided. The piecewise functions are obtained from the dual solutions of the optimization problem at each stage and correspond to Benders cuts in a stochastic, multistage decomposition framework. A case study of optimal stochastic scheduling for a 39-reservoir system is presented and discussed.
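The cut mechanism described above can be sketched in a hedged, hypothetical one-dimensional form (made-up cut coefficients, not the paper's 39-reservoir case): each Benders cut supplies an affine lower bound on the expected-cost-to-go function, and the approximation is the pointwise maximum of the cuts.

```python
def make_cost_to_go(cuts):
    # Each cut (alpha, beta) is an affine lower bound alpha + beta*x on the
    # true expected-cost-to-go; the approximation is their pointwise maximum.
    def approx(x):
        return max(alpha + beta * x for alpha, beta in cuts)
    return approx

# Two made-up cuts for a single state variable (e.g. stored water volume).
ctg = make_cost_to_go([(4.0, -1.0), (1.0, 0.5)])
```

Because the approximation is a maximum of affine functions, it is piecewise linear and convex, which is what lets it stand in for the cost-to-go in each stage's linear program.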

1,230 citations


Journal ArticleDOI
TL;DR: In this article, a stochastic approach based on the simulated annealing algorithm is proposed for global optimization, which can be defined as the problem of finding points in a bounded subset of ℝ^n at which some real-valued function f assumes its optimal (maximal or minimal) value.
Abstract: In this paper we are concerned with global optimization, which can be defined as the problem of finding points in a bounded subset of ℝ^n at which some real-valued function f assumes its optimal (maximal or minimal) value. We present a stochastic approach which is based on the simulated annealing algorithm. The approach closely follows the formulation of the simulated annealing algorithm as originally given for discrete optimization problems. The mathematical formulation is extended to continuous optimization problems, and we prove asymptotic convergence to the set of global optima. Furthermore, we discuss an implementation of the algorithm and compare its performance with other well-known algorithms. The performance evaluation is carried out for a standard set of test functions from the literature.
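A minimal sketch of continuous simulated annealing in the spirit of the abstract, assuming a Gaussian neighbourhood and a 1/k cooling schedule (both illustrative choices, not necessarily the paper's):

```python
import math
import random

def anneal(f, x0, steps=20000, temp0=1.0, seed=0):
    # Metropolis acceptance rule over real-valued candidates (minimisation).
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for k in range(1, steps + 1):
        t = temp0 / k                     # cooling schedule (assumed form)
        y = x + rng.gauss(0.0, 0.5)       # random neighbour of the iterate
        fy = f(y)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy                 # accept the move
        if fx < fbest:
            best, fbest = x, fx           # track the best point seen
    return best, fbest

# Minimise a simple multimodal test function with global minima at x = ±1.
x, v = anneal(lambda x: (x * x - 1.0) ** 2, x0=3.0)
```

Worse points are sometimes accepted early on (high temperature), which is what lets the chain escape local minima before the schedule freezes it near a global one.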

391 citations


Journal ArticleDOI
TL;DR: The implementation is based on a fast LP-solver (IBM's MPSX) and makes effective use of polyhedral results on the symmetric travelling salesman polytope and describes the important ingredients of the code.
Abstract: In this paper we report on a cutting plane procedure with which we solved symmetric travelling salesman problems of up to 1000 cities to optimality. Our implementation is based on a fast LP-solver (IBM's MPSX) and makes effective use of polyhedral results on the symmetric travelling salesman polytope. We describe the important ingredients of our code and give an extensive documentation of its computational performance.

322 citations


Journal ArticleDOI
TL;DR: The purpose of this paper is to outline basic approaches and basic types of algorithms available to deal with this problem and to review their convergence analysis.
Abstract: A generalized fractional programming problem is specified as a nonlinear program where a nonlinear function defined as the maximum over several ratios of functions is to be minimized on a feasible domain of ℝ^n. The purpose of this paper is to outline basic approaches and basic types of algorithms available to deal with this problem and to review their convergence analysis. The conclusion includes results and comments on the numerical efficiency of these algorithms.

267 citations


Journal ArticleDOI
Yinyu Ye
TL;DR: In this paper, a primal-dual potential function for linear programming is described, and an interior point algorithm is developed that converges to the optimal solution set in O(√n L) iterations using O(n³L) total arithmetic operations.
Abstract: We describe a primal-dual potential function for linear programming: $$\phi (x,s) = \rho \ln (x^T s) - \sum\limits_{j = 1}^n {\ln (x_j s_j )} $$ where ρ ⩾ n, x is the primal variable, and s is the dual-slack variable. As a result, we develop an interior point algorithm seeking reductions in the potential function with $$\rho = n + \sqrt n $$. Neither tracing the central path nor using the projective transformation, the algorithm converges to the optimal solution set in $$O(\sqrt n L)$$ iterations and uses O(n³L) total arithmetic operations. We also suggest a practical approach to implementing the algorithm.
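The potential function above is easy to evaluate directly; a small illustrative sketch (plain Python, made-up iterates) shows that the potential decreases as the complementarity gap x^T s shrinks:

```python
import math

def potential(x, s, rho=None):
    # phi(x, s) = rho*ln(x^T s) - sum_j ln(x_j * s_j), with rho = n + sqrt(n)
    # as in the algorithm described above.
    n = len(x)
    if rho is None:
        rho = n + math.sqrt(n)
    xts = sum(xj * sj for xj, sj in zip(x, s))
    return rho * math.log(xts) - sum(math.log(xj * sj) for xj, sj in zip(x, s))

# Made-up iterates: as x^T s shrinks, the potential decreases toward -infinity.
p1 = potential([1.0, 1.0], [1.0, 1.0])
p2 = potential([0.1, 0.1], [0.1, 0.1])
```

The ρ > n excess in the first term is what makes the potential unbounded below along the central path, so driving it down drives x^T s to zero.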

265 citations


Journal ArticleDOI
TL;DR: Conditions under which these approximations can be proved to converge globally to the true Hessian matrix are given, in the case where the Symmetric Rank One update formula is used.
Abstract: Quasi-Newton algorithms for unconstrained nonlinear minimization generate a sequence of matrices that can be considered as approximations of the objective function second derivatives. This paper gives conditions under which these approximations can be proved to converge globally to the true Hessian matrix, in the case where the Symmetric Rank One update formula is used. The rate of convergence is also examined and proven to improve with the rate of convergence of the underlying iterates. The theory is confirmed by some numerical experiments that also show the convergence of the Hessian approximations to be substantially slower for other known quasi-Newton formulae.
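The Symmetric Rank One update itself is compact; a hedged sketch (pure Python, illustrative data only) shows that a single update already satisfies the secant equation for a quadratic objective:

```python
def sr1_update(B, d, y, tol=1e-8):
    # SR1: B+ = B + r r^T / (r^T d), with r = y - B d.
    n = len(d)
    Bd = [sum(B[i][j] * d[j] for j in range(n)) for i in range(n)]
    r = [y[i] - Bd[i] for i in range(n)]
    denom = sum(r[i] * d[i] for i in range(n))
    if abs(denom) < tol:
        return B   # standard safeguard: skip the update when r^T d is tiny
    return [[B[i][j] + r[i] * r[j] / denom for j in range(n)] for i in range(n)]

H = [[2.0, 0.0], [0.0, 4.0]]            # true Hessian of a quadratic
B = [[1.0, 0.0], [0.0, 1.0]]            # initial approximation
d = [1.0, 1.0]                          # step taken
y = [sum(H[i][j] * d[j] for j in range(2)) for i in range(2)]  # gradient difference
B1 = sr1_update(B, d, y)                # now B1 d = y (secant equation)
```

Unlike BFGS, the SR1 update need not stay positive definite, which is part of why its approximations can converge to the true (possibly indefinite) Hessian.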

219 citations


Journal ArticleDOI
TL;DR: It is shown that deciding membership in a correlation polytope is an NP-complete problem, and deciding facets is probably not even in NP.
Abstract: A family of polytopes, correlation polytopes, which arise naturally in the theory of probability and propositional logic, is defined. These polytopes are tightly connected to combinatorial problems in the foundations of quantum mechanics, and to the Ising spin model. Correlation polytopes exhibit a great deal of symmetry. Exponential size symmetry groups, which leave the polytope invariant and act transitively on its vertices, are defined. Using the symmetries, a large family of facets is determined. A conjecture concerning the full facet structure of correlation polytopes is formulated (the conjecture, however, implies that NP=co-NP). Various complexity results are proved. It is shown that deciding membership in a correlation polytope is an NP-complete problem, and deciding facets is probably not even in NP. The relations between the polytope symmetries and its complexity are indicated.

190 citations


Journal ArticleDOI
TL;DR: An interior point algorithm for a positive semi-definite linear complementarity problem is proposed that reduces the potential function f(x,y) by at least 0.2 in each iteration, requiring O(n³) arithmetic operations; it is closely related to the central path following algorithm recently given by the authors.
Abstract: This paper proposes an interior point algorithm for a positive semi-definite linear complementarity problem: find an (x, y) ∈ ℝ^{2n} such that y = Mx + q, (x, y) ⩾ 0 and x^T y = 0. The algorithm reduces the potential function $$f(x,y) = (n + \sqrt n )\log x^T y - \sum\limits_{i = 1}^n {\log x_i y_i } $$ by at least 0.2 in each iteration, requiring O(n³) arithmetic operations. If it starts from an interior feasible solution with the potential function value bounded by $$O(\sqrt n L)$$, it generates, in at most $$O(\sqrt n L)$$ iterations, an approximate solution with the potential function value $$ - O(\sqrt n L)$$, from which we can compute an exact solution in O(n³) arithmetic operations. The algorithm is closely related to the central path following algorithm recently given by the authors. We also suggest a unified model for both potential reduction and path following algorithms for positive semi-definite linear complementarity problems.
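A small checker for the problem being solved makes the statement concrete; this sketch uses a hypothetical instance (not from the paper) and verifies feasibility and complementarity:

```python
def is_lcp_solution(M, q, x, tol=1e-9):
    # Check the LCP conditions: y = Mx + q, (x, y) >= 0 and x^T y = 0
    # (up to a numerical tolerance).
    n = len(q)
    y = [sum(M[i][j] * x[j] for j in range(n)) + q[i] for i in range(n)]
    feasible = all(v >= -tol for v in x) and all(v >= -tol for v in y)
    complementary = abs(sum(x[i] * y[i] for i in range(n))) <= tol
    return feasible and complementary

M = [[2.0, 1.0], [1.0, 2.0]]   # symmetric positive definite, hence PSD
q = [-3.0, -3.0]
ok = is_lcp_solution(M, q, [1.0, 1.0])   # here y = Mx + q = (0, 0)
```

Positive semi-definiteness of M is the structural assumption that makes the potential reduction analysis above go through.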

150 citations


Journal ArticleDOI
TL;DR: The algorithm is based on a unified formulation of these three mathematical programming problems as a certain system of B-differentiable equations, and is a modification of the damped Newton method described in Pang (1990) for solving such systems of nonsmooth equations.
Abstract: This paper presents a globally convergent, locally quadratically convergent algorithm for solving general nonlinear programs, nonlinear complementarity and variational inequality problems. The algorithm is based on a unified formulation of these three mathematical programming problems as a certain system of B-differentiable equations, and is a modification of the damped Newton method described in Pang (1990) for solving such systems of nonsmooth equations. The algorithm resembles several existing methods for solving these classes of mathematical programs, but has some special features of its own; in particular, it possesses the combined advantage of fast quadratic rate of convergence of a basic Newton method and the desirable global convergence induced by one-dimensional Armijo line searches. In the context of a nonlinear program, the algorithm is of the sequential quadratic programming type with two distinct characteristics: (i) it makes no use of a penalty function; and (ii) it circumvents the Maratos effect. In the context of the variational inequality/complementarity problem, the algorithm provides a Newton-type descent method that is guaranteed globally convergent without requiring the F-differentiability assumption of the defining B-differentiable equations.

145 citations


Journal ArticleDOI
TL;DR: Several equivalent definitions of the property of a sharp minimum on a set are given and the notion is used to prove finite termination of the proximal point algorithm.
Abstract: This paper concerns the notion of a sharp minimum on a set and its relationship to the proximal point algorithm. We give several equivalent definitions of the property and use the notion to prove finite termination of the proximal point algorithm.
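A one-dimensional illustration of the finite-termination phenomenon (a sketch, not the paper's general setting): for f(x) = |x|, which has a sharp minimum at 0, the proximal step is soft-thresholding, and the iterates reach the minimiser exactly after finitely many steps.

```python
def prox_abs(x, c):
    # Proximal step for f = |.|: argmin_z |z| + (1/(2c)) * (z - x)**2,
    # i.e. soft-thresholding with threshold c.
    if x > c:
        return x - c
    if x < -c:
        return x + c
    return 0.0

x, steps, c = 10.0, 0, 1.0
while x != 0.0:              # the iterates hit 0 exactly: finite termination
    x = prox_abs(x, c)
    steps += 1
```

With a smooth minimum (say f(x) = x²) the proximal iterates would only converge asymptotically; the kink at the sharp minimum is what forces exact termination.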

144 citations


Journal ArticleDOI
TL;DR: A primal interior point method for convex quadratic programming which is based upon a logarithmic barrier function approach, which generates a sequence of problems, each of which is approximately solved by taking a single Newton step.
Abstract: We present a primal interior point method for convex quadratic programming which is based upon a logarithmic barrier function approach. This approach generates a sequence of problems, each of which is approximately solved by taking a single Newton step. It is shown that the method requires $$O(\sqrt n L)$$ iterations and O(n^3.5L) arithmetic operations. By using modified Newton steps the number of arithmetic operations required by the algorithm can be reduced to O(n^3L).
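The single-Newton-step barrier idea can be sketched on a toy one-variable problem (minimise x² subject to x ⩾ 1; illustrative parameters and schedule, not the paper's algorithm): each barrier subproblem gets exactly one Newton step before the barrier parameter is reduced.

```python
def newton_step(x, mu):
    # One Newton step on the barrier function B_mu(x) = x**2 - mu*ln(x - 1).
    g = 2.0 * x - mu / (x - 1.0)            # B'(x)
    h = 2.0 + mu / (x - 1.0) ** 2           # B''(x)
    return x - g / h

x, mu = 1.5, 1.0
for _ in range(60):
    x = newton_step(x, mu)   # a single Newton step per subproblem
    mu *= 0.9                # then shrink the barrier parameter
```

Because the barrier minimisers form a smooth path converging to the constrained optimum x = 1 as mu → 0, one Newton step per subproblem is enough to stay close to that path.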


Journal ArticleDOI
TL;DR: This work considers the continuous trajectories of the vector field induced by the primal affine scaling algorithm as applied to linear programming problems in standard form and shows that these trajectories tend to an optimal solution which in general depends on the starting point.
Abstract: We consider the continuous trajectories of the vector field induced by the primal affine scaling algorithm as applied to linear programming problems in standard form. By characterizing these trajectories as solutions of certain parametrized logarithmic barrier families of problems, we show that these trajectories tend to an optimal solution which in general depends on the starting point. By considering the trajectories that arise from the Lagrangian multipliers of the above mentioned logarithmic barrier families of problems, we show that the trajectories of the dual estimates associated with the affine scaling trajectories converge to the so-called "centered" optimal solution of the dual problem. We also present results related to the asymptotic direction of the affine scaling trajectories. We briefly discuss how to apply our results to linear programs formulated in formats different from the standard form. Finally, we extend the results to the primal-dual affine scaling algorithm.

Journal ArticleDOI
TL;DR: This result is used to analytically compare various formulations of the asymmetric travelling salesman problem to the standard formulation due to Dantzig, Fulkerson and Johnson which are shown to be “weaker formulations” in a precise setting.
Abstract: A transformation technique is proposed that permits one to derive the linear description of the image X of a polyhedron Z under an affine linear transformation from the (given) linear description of Z. This result is used to analytically compare various formulations of the asymmetric travelling salesman problem to the standard formulation due to Dantzig, Fulkerson and Johnson, which are all shown to be "weaker formulations" in a precise setting. We also apply this transformation technique to "symmetrize" formulations and show, in particular, that the symmetrization of the standard asymmetric formulation results in the standard one for the symmetric version of the travelling salesman problem.

Journal ArticleDOI
TL;DR: Computational results for an efficient implementation of a variant of the dual projective algorithm for linear programming, using the preconditioned conjugate gradient method for computing projections, indicate that this algorithm has potential as an alternative for solving very large LPs in which direct methods fail due to memory and CPU time requirements.
Abstract: This paper gives computational results for an efficient implementation of a variant of the dual projective algorithm for linear programming. The implementation uses the preconditioned conjugate gradient method for computing projections. Our computational experience reported in this paper indicates that this algorithm has potential as an alternative for solving very large LPs in which direct methods fail due to memory and CPU time requirements. The conjugate gradient algorithm was able to find very accurate directions even when the system was ill-conditioned. The paper also discusses a new mathematical technique called the reciprocal estimates for estimating the primal variables. We have conducted extensive computational experiments on problems representative of large classes of applications of current interest. We have also chosen instances of problems of future potential interest, which could not be solved in the past due to the weakness of the prior solution methods, but which represent a large class of new applications. The hypergraph model is such an example. Comparison of our implementation with MINOS 5.1 shows that our implementation is orders of magnitude faster than MINOS 5.1 for these problems.

Journal ArticleDOI
Arie Tamir
TL;DR: This work uses polynomial formulations to show that several rational and discrete network synthesis games, including the minimum cost spanning tree game, satisfy the assumptions of Owen's linear production game model.
Abstract: We use polynomial formulations to show that several rational and discrete network synthesis games, including the minimum cost spanning tree game, satisfy the assumptions of Owen's linear production game model. We also discuss computational issues related to finding and recognizing core points for these classes of games.

Journal ArticleDOI
TL;DR: It is shown that for large classes of problems the complexity, measured by an integral of the local "weighted curvature" of the (primal-dual) path, is not greater than const·m^α log(R/δ), where α < 1/2, e.g. α = 1/4 or α = 3/8 (note that α = 1/2 gives the complexity of zero order methods).
Abstract: A class of algorithms is proposed for solving linear programming problems (with m inequality constraints) by following the central path using linear extrapolation with a special adaptive choice of steplengths. The latter is based on explicit results concerning the convergence behaviour of Newton's method to compute points on the central path x(r), r > 0, and this allows one to estimate the complexity, i.e. the total number N = N(R, δ) of steps needed to go from an initial point x(R) to a final point x(δ), R > δ > 0, by an integral of the local "weighted curvature" of the (primal-dual) path. Here, the central curve is parametrized by the logarithmic penalty parameter r ↓ 0. It is shown that for large classes of problems the complexity integral, i.e. the number of steps N, is not greater than const·m^α log(R/δ), where α < 1/2, e.g. α = 1/4 or α = 3/8 (note that α = 1/2 gives the complexity of zero order methods). We also provide a lower bound for the complexity showing that for some problems the above estimate can hold only for α ⩾ 1/3. As a byproduct, many analytical and structural properties of the primal-dual central path are obtained: there are, for instance, close relations between the weighted curvature and the logarithmic derivatives of the slack variables; the dependence of these quantities on the parameter r is described. Also, related results hold for a family of weighted trajectories into which the central path can be embedded.

Journal ArticleDOI
TL;DR: A new algorithm using successive linear programming is presented and it is concluded that the new algorithm seems to be very efficient and stable, and that it always finds a solution with a cost near the best possible.
Abstract: The paper treats a piping system, where the layout of the network is given but the diameters of the pipes should be chosen among a small number of different values. The cost of realizing the system should be minimized while keeping the energy heads at the nodes above some lower limits. A new algorithm using successive linear programming is presented. The performance of the algorithm is illustrated by optimizing a network with 201 pipes and 172 nodes. It is concluded that the new algorithm seems to be very efficient and stable, and that it always finds a solution with a cost near the best possible.

Journal ArticleDOI
TL;DR: An iterative method for minimizing strictly convex quadratic functions over the intersection of a finite number of convex sets is presented and convergence proofs are given even for the inconsistent case, i.e. when the intersections of the sets is empty.
Abstract: We present an iterative method for minimizing strictly convex quadratic functions over the intersection of a finite number of convex sets. The method consists in computing projections onto the individual sets simultaneously and the new iterate is a convex combination of those projections. We give convergence proofs even for the inconsistent case, i.e. when the intersection of the sets is empty.
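A hedged one-dimensional sketch of the simultaneous-projection step (intervals standing in for general convex sets, equal weights assumed): the new iterate is a convex combination of the projections onto the individual sets, and the scheme settles even when the intersection is empty.

```python
def project_interval(x, lo, hi):
    # Euclidean projection of a point onto the interval [lo, hi].
    return min(max(x, lo), hi)

def step(x, intervals, weights):
    # New iterate: convex combination of the projections onto each set,
    # computed simultaneously as in the method described above.
    return sum(w * project_interval(x, lo, hi)
               for (lo, hi), w in zip(intervals, weights))

# Inconsistent case: [0, 1] and [2, 3] do not intersect, yet the iterates
# converge (here to 1.5, which balances the distances to the two sets).
x = 10.0
for _ in range(100):
    x = step(x, [(0.0, 1.0), (2.0, 3.0)], [0.5, 0.5])
```

This is why the convergence proofs for the inconsistent case matter: the limit is not a feasible point but a well-defined compromise between the sets.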

Journal ArticleDOI
TL;DR: Extensions and further analytical properties of algorithms for linear programming based only on primal scaling and projected gradients of a potential function are presented, and Ye's O(√n L) iteration bound is shown to be optimal with respect to the choice of the parameter q.
Abstract: This paper presents extensions and further analytical properties of algorithms for linear programming based only on primal scaling and projected gradients of a potential function. The paper contains extensions and analysis of two polynomial-time algorithms for linear programming. We first present an extension of Gonzaga's O(nL) iteration algorithm that computes dual variables and does not assume a known optimal objective function value. This algorithm uses only affine scaling, and is based on computing the projected gradient of the potential function $$q\ln (x^T s) - \sum\limits_{j = 1}^n {\ln (x_j )} $$ where x is the vector of primal variables, s is the vector of dual slack variables, and q = n + √n. The algorithm takes either a primal step or recomputes dual variables at each iteration. We next present an alternate form of Ye's O(√n L) iteration algorithm, which is an extension of the first algorithm of the paper but uses the potential function $$q\ln (x^T s) - \sum\limits_{j = 1}^n {\ln (x_j )} - \sum\limits_{j = 1}^n {\ln (s_j )} $$ where q = n + √n. We use this alternate form of Ye's algorithm to show that Ye's algorithm is optimal with respect to the choice of the parameter q in the following sense. Suppose that q = n + n^t where t ⩾ 0. Then the algorithm will solve the linear program in O(n^r L) iterations, where r = max{t, 1 − t}. Thus the value of t that minimizes the complexity bound is t = 1/2, yielding Ye's O(√n L) iteration bound.

Journal ArticleDOI
TL;DR: In this paper, stochastic programming problems are viewed as parametric programs with respect to the probability distributions of the random coefficients and quantitative continuity results for optimal values and optimal solution sets are proved.
Abstract: In this paper, stochastic programming problems are viewed as parametric programs with respect to the probability distributions of the random coefficients. General results on quantitative stability in parametric optimization are used to study distribution sensitivity of stochastic programs. For recourse and chance constrained models quantitative continuity results for optimal values and optimal solution sets are proved (with respect to suitable metrics on the space of probability distributions). The results are useful to study the effect of approximations and of incomplete information in stochastic programming.

Journal ArticleDOI
TL;DR: The convergence properties of reduced Hessian successive quadratic programming for equality constrained optimization are studied; conditions are given under which local and superlinear convergence is obtained, and a global convergence result is proved.
Abstract: We study the convergence properties of reduced Hessian successive quadratic programming for equality constrained optimization. The method uses a backtracking line search, and updates an approximation to the reduced Hessian of the Lagrangian by means of the BFGS formula. Two merit functions are considered for the line search: the ℓ1 function and the Fletcher exact penalty function. We give conditions under which local and superlinear convergence is obtained, and also prove a global convergence result. The analysis allows the initial reduced Hessian approximation to be any positive definite matrix, and does not assume that the iterates converge, or that the matrices are bounded. The effects of a second order correction step, a watchdog procedure and of the choice of null space basis are considered. This work can be seen as an extension to reduced Hessian methods of the well known results of Powell (1976) for unconstrained optimization.
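The BFGS formula used for the reduced Hessian approximation can be sketched in isolation (pure Python, illustrative data, not the paper's constrained setting); a single update already satisfies the secant equation B⁺s = y:

```python
def bfgs_update(B, s, y):
    # BFGS: B+ = B - (B s)(B s)^T / (s^T B s) + y y^T / (y^T s), plain lists.
    n = len(s)
    Bs = [sum(B[i][j] * s[j] for j in range(n)) for i in range(n)]
    sBs = sum(s[i] * Bs[i] for i in range(n))
    ys = sum(y[i] * s[i] for i in range(n))
    return [[B[i][j] - Bs[i] * Bs[j] / sBs + y[i] * y[j] / ys
             for j in range(n)] for i in range(n)]

B = [[1.0, 0.0], [0.0, 1.0]]   # initial positive definite approximation
s = [1.0, 0.0]                 # step
y = [3.0, 1.0]                 # gradient difference; curvature y^T s = 3 > 0
B1 = bfgs_update(B, s, y)      # satisfies the secant equation B1 s = y
```

The curvature condition y^T s > 0 keeps the update positive definite, which matches the paper's assumption that the initial approximation may be any positive definite matrix.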

Journal ArticleDOI
TL;DR: An interior point approach to the zero–one integer programming feasibility problem based on the minimization of a nonconvex potential function is presented, considering a class of difficult set covering problems that arise from computing the 1-width of the incidence matrix of Steiner triple systems.
Abstract: We present an interior point approach to the zero–one integer programming feasibility problem based on the minimization of a nonconvex potential function. Given a polytope defined by a set of linear inequalities, this procedure generates a sequence of strict interior points of this polytope, such that each consecutive point reduces the value of the potential function. An integer solution (not necessarily feasible) is generated at each iteration by a rounding scheme. The direction used to determine the new iterate is computed by solving a nonconvex quadratic program on an ellipsoid. We illustrate the approach by considering a class of difficult set covering problems that arise from computing the 1-width of the incidence matrix of Steiner triple systems.

Journal ArticleDOI
TL;DR: It is shown how one can compose facet-inducing inequalities for the graphical traveling salesman polyhedron and obtain other facet-inducing inequalities; this leads to new valid inequalities for the Symmetric Traveling Salesman polytope.
Abstract: The graphical relaxation of the Traveling Salesman Problem is the relaxation obtained by requiring that the salesman visit each city at least once instead of exactly once. This relaxation has already led to a better understanding of the Traveling Salesman polytope in Cornuejols, Fonlupt and Naddef (1985). We show here how one can compose facet-inducing inequalities for the graphical traveling salesman polyhedron, and obtain other facet-inducing inequalities. This leads to new valid inequalities for the Symmetric Traveling Salesman polytope. This paper is the first of a series of three papers on the Symmetric Traveling Salesman polytope; the next one studies the strong relationship between that polytope and its graphical relaxation, and the last one applies all the theoretical developments of the first two papers to prove some new facet-inducing results.

Journal ArticleDOI
Sehun Kim, Hyunsil Ahn, Seong-cheol Cho
TL;DR: This paper extends the convergence properties of Polyak's subgradient algorithm with a fixed target value to a more general case with variable target values, and provides a target value updating scheme which finds an optimal solution without prior knowledge of the optimal objective value.
Abstract: Polyak's subgradient algorithm for nondifferentiable optimization problems requires prior knowledge of the optimal value of the objective function to find an optimal solution. In this paper we extend the convergence properties of Polyak's subgradient algorithm with a fixed target value to a more general case with variable target values. Then a target value updating scheme is provided which finds an optimal solution without prior knowledge of the optimal objective value. The convergence proof of the scheme is provided and computational results are reported.
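The Polyak step with a known target value is a one-liner; a hedged sketch on an illustrative 1-D instance (not the paper's variable-target scheme): with the exact optimal value as target, f(x) = |x| is solved in a single step.

```python
def polyak_step(x, fx, target, g):
    # Polyak step: x+ = x - ((f(x) - target) / ||g||**2) * g,
    # where g is a subgradient of f at x.
    gnorm2 = sum(gi * gi for gi in g)
    t = (fx - target) / gnorm2
    return [xi - t * gi for xi, gi in zip(x, g)]

# f(x) = |x|, subgradient 1 at x = 4 > 0; with target f* = 0 the step is exact.
x = [4.0]
x = polyak_step(x, abs(x[0]), 0.0, [1.0])
```

When the true optimal value is unknown, the target must be guessed and revised, which is exactly the gap the paper's updating scheme addresses.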

Journal ArticleDOI
TL;DR: For zero sum limiting average games, this formulation reduces to a linear objective, nonlinear constraints program, which finds the “best” stationary strategies, even whenε-optimal stationary strategies do not exist, for arbitrarily smallε.
Abstract: Stationary equilibria in discounted and limiting average finite state/action space stochastic games are shown to be equivalent to global optima of certain nonlinear programs. For zero sum limiting average games, this formulation reduces to a linear objective, nonlinear constraints program, which finds the "best" stationary strategies, even when ε-optimal stationary strategies do not exist, for arbitrarily small ε.

Journal ArticleDOI
TL;DR: An algorithm is proposed for globally minimizing a concave function over a compact convex set by combining typical branch-and-bound elements like partitioning, bounding and deletion with suitably introduced cuts in such a way that the computationally most expensive subroutines of previous methods are avoided.
Abstract: An algorithm is proposed for globally minimizing a concave function over a compact convex set. This algorithm combines typical branch-and-bound elements like partitioning, bounding and deletion with suitably introduced cuts in such a way that the computationally most expensive subroutines of previous methods are avoided. In each step, essentially only few linear programming problems have to be solved. Some preliminary computational results are reported.

Journal ArticleDOI
TL;DR: In this article, a branch-and-bound framework is proposed for solving global optimization problems with a few variables and constraints, and the first complete solution of two difficult test problems is presented.
Abstract: Global optimization problems with a few variables and constraints arise in numerous applications but are seldom solved exactly. Most often only a local optimum is found, or if a global optimum is detected no proof is provided that it is one. We study here the extent to which such global optimization problems can be solved exactly using analytical methods. To this effect, we propose a series of tests, similar to those of combinatorial optimization, organized in a branch-and-bound framework. The first complete solution of two difficult test problems illustrates the efficiency of the resulting algorithm. Computational experience with the program bagop, which uses the computer algebra system macsyma, is reported on. Many test problems from the compendiums of Hock and Schittkowski and other sources have been solved.

Journal ArticleDOI
TL;DR: In this article, the optimal logical form of a query in information retrieval, given the attributes to be used, can be expressed as a parametric hyperbolic 0-1 program and solved in O(n logn) time, wheren is the number of elementary logical conjunctions of the attributes.
Abstract: Unconstrained hyperbolic 0–1 programming can be solved in linear time when the numerator and the denominator are linear and the latter is always positive. It is NP-hard, and finding an approximate solution with a value equal to a positive multiple of the optimal one is also NP-hard, if this last hypothesis does not hold. Determining the optimal logical form of a query in information retrieval, given the attributes to be used, can be expressed as a parametric hyperbolic 0–1 program and solved in O(n log n) time, where n is the number of elementary logical conjunctions of the attributes. This allows one to characterize the optimal queries for the Van Rijsbergen synthetic criterion.

Journal ArticleDOI
TL;DR: This report derives explicit expressions for the search directions used in many well-known algorithms and gives a survey of projected gradient and Newton directions for all potential and barrier functions.
Abstract: A basic characteristic of an interior point algorithm for linear programming is the search direction. Many papers on interior point algorithms only give an implicit description of the search direction. In this report we derive explicit expressions for the search directions used in many well-known algorithms. Comparing these explicit expressions gives a good insight into the similarities and differences between the various algorithms. Moreover, we give a survey of projected gradient and Newton directions for all potential and barrier functions. This is done both for the affine and projective variants.