
Showing papers in "Mathematical Programming in 1983"


Journal ArticleDOI
TL;DR: An efficient and numerically stable dual algorithm for positive definite quadratic programming is described which takes advantage of the fact that the unconstrained minimum of the objective function can be used as a starting point.
Abstract: An efficient and numerically stable dual algorithm for positive definite quadratic programming is described which takes advantage of the fact that the unconstrained minimum of the objective function can be used as a starting point. Its implementation utilizes the Cholesky and QR factorizations and procedures for updating them. The performance of the dual algorithm is compared against that of primal algorithms when used to solve randomly generated test problems and quadratic programs generated in the course of solving nonlinear programming problems by a successive quadratic programming code (the principal motivation for the development of the algorithm). These computational results indicate that the dual algorithm is superior to primal algorithms when a primal feasible point is not readily available. The algorithm is also compared theoretically to the modified-simplex type dual methods of Lemke and Van de Panne and Whinston and it is illustrated by a numerical example.

1,007 citations
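
A sketch of the starting point the abstract refers to (notation mine, not necessarily the paper's): for the strictly convex quadratic program

$$\min_x \; \tfrac{1}{2}x^{T}Gx + a^{T}x \quad \text{subject to} \quad C^{T}x \ge b, \qquad G \text{ positive definite},$$

the unconstrained minimum $$x^0 = -G^{-1}a$$ is dual feasible with an empty active set, so the dual algorithm can begin there and add violated constraints one at a time, updating the Cholesky and QR factorizations rather than refactorizing from scratch.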


Journal ArticleDOI
TL;DR: The procedure samples the efficient set by computing the nondominated criterion vector that is closest to an ideal criterion vector according to a randomly weighted Tchebycheff metric.
Abstract: The procedure samples the efficient set by computing the nondominated criterion vector that is closest to an ideal criterion vector according to a randomly weighted Tchebycheff metric. Using ‘filtering’ techniques, maximally dispersed representatives of smaller and smaller subsets of the set of nondominated criterion vectors are presented at each iteration. The procedure has the advantage that it can converge to non-extreme final solutions. Especially suitable for multiple objective linear programming, the procedure is also applicable to integer and nonlinear multiple objective programs.

668 citations
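
In symbols, the sampling subproblem described above can be stated as follows (a sketch; here z* is the ideal criterion vector, N the set of nondominated criterion vectors, and the weights λ are drawn at random):

$$\min_{z \in N} \; \max_{1 \le i \le k} \; \lambda_i \left| z_i^{*} - z_i \right|, \qquad \lambda_i \ge 0, \quad \sum_{i=1}^{k} \lambda_i = 1.$$

Re-solving with freshly drawn weights yields the dispersed sample of nondominated vectors that the filtering step then thins out.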


Journal ArticleDOI
TL;DR: An algorithm for large-scale unconstrained optimization based on Newton's method is presented, shown to have strong convergence properties and has the unusual feature that the asymptotic convergence rate is a user-specified parameter which can be set to anything between linear and quadratic convergence.
Abstract: We present an algorithm for large-scale unconstrained optimization based on Newton's method. In large-scale optimization, solving the Newton equations at each iteration can be expensive and may not be justified when far from a solution. Instead, an inaccurate solution to the Newton equations is computed using a conjugate gradient method. The resulting algorithm is shown to have strong convergence properties and has the unusual feature that the asymptotic convergence rate is a user-specified parameter which can be set to anything between linear and quadratic convergence. Some numerical results on a 916-variable test problem are given. Finally, we contrast the computational behavior of our algorithm with Newton's method and that of a nonlinear conjugate gradient algorithm.

409 citations
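
A minimal sketch of the truncated-Newton idea in generic Python (not the authors' code; the forcing rule eta = min(0.5, ||g||**t) is one standard way to expose a user-specified rate parameter t, and hess_vec is an assumed Hessian-vector product oracle):

import numpy as np

def truncated_newton(f, grad, hess_vec, x, t=1.0, tol=1e-8, max_outer=200):
    for _ in range(max_outer):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        eta = min(0.5, gnorm ** t)      # forcing term: t tunes the asymptotic rate
        d = np.zeros_like(x)
        r, p = -g.copy(), -g.copy()     # CG on the Newton system H d = -g
        while np.linalg.norm(r) > eta * gnorm:
            Hp = hess_vec(x, p)
            curv = p @ Hp
            if curv <= 0:               # negative curvature: stop the inner solve
                break
            alpha = (r @ r) / curv
            d = d + alpha * p
            r_new = r - alpha * Hp
            p = r_new + ((r_new @ r_new) / (r @ r)) * p
            r = r_new
        if not d.any():
            d = -g                      # fall back to steepest descent
        step, fx = 1.0, f(x)
        while f(x + step * d) > fx + 1e-4 * step * (g @ d):
            step *= 0.5                 # Armijo backtracking
        x = x + step * d
    return x

The inexact inner solve is the point: far from a solution the loose tolerance keeps iterations cheap, while near a solution eta shrinks with ||g|| and the fast local rate emerges.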


Journal ArticleDOI
Stella Dafermos
TL;DR: A general iterative scheme for the numerical solution of finite dimensional variational inequalities that contains the projection, linear approximation and relaxation methods but also induces new algorithms and allows the possibility of adjusting the norm at each step of the algorithm.
Abstract: In this paper we introduce and study a general iterative scheme for the numerical solution of finite-dimensional variational inequalities. This iterative scheme not only contains, as special cases, the projection, linear approximation and relaxation methods, but also induces new algorithms. Then, we show that under appropriate assumptions the proposed iterative scheme converges by establishing contraction estimates involving a sequence of norms in E^n induced by symmetric positive definite matrices G_m. Thus, in contrast to the above-mentioned methods, this technique allows the possibility of adjusting the norm at each step of the algorithm. This flexibility will generally yield convergence under weaker assumptions.

271 citations
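
One member of the scheme, written as a projection method with a step-dependent norm (a sketch in my notation, with K the feasible set and f the variational inequality map):

$$x_{m+1} = P_{K}^{G_m}\!\left( x_m - \rho\, G_m^{-1} f(x_m) \right),$$

where $$P_{K}^{G_m}$$ denotes projection onto K in the norm induced by G_m. Freezing G_m ≡ G recovers the usual projection method; letting G_m vary is the norm-adjustment flexibility the abstract mentions.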


Journal ArticleDOI
TL;DR: A general convergence theorem is provided for algorithms of this type including the calculation of fixed points of contraction and monotone mappings arising in linear and nonlinear systems of equations, optimization problems, shortest path problems, and dynamic programming.
Abstract: We present an algorithmic model for distributed computation of fixed points whereby several processors participate simultaneously in the calculations while exchanging information via communication links. We place essentially no assumptions on the ordering of computation and communication between processors thereby allowing for completely uncoordinated execution. We provide a general convergence theorem for algorithms of this type, and demonstrate its applicability to several classes of problems including the calculation of fixed points of contraction and monotone mappings arising in linear and nonlinear systems of equations, optimization problems, shortest path problems, and dynamic programming.

257 citations
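
A toy simulation of the uncoordinated setting, assuming a linear contraction T(x) = Ax + b in the maximum norm (one processor per component updating at arbitrary times; an illustration of the flavor of result covered by the convergence theorem, not the paper's model in full):

import numpy as np

rng = np.random.default_rng(0)
n = 8
A = rng.uniform(-1, 1, (n, n))
A *= 0.9 / np.abs(A).sum(axis=1, keepdims=True)  # ||A||_inf <= 0.9: a contraction
b = rng.uniform(-1, 1, n)

x = np.zeros(n)
for _ in range(5000):
    i = rng.integers(n)        # an arbitrary processor wakes up...
    x[i] = A[i] @ x + b[i]     # ...and overwrites its own component

print(np.max(np.abs(x - (A @ x + b))))  # residual near 0: the fixed point

Convergence here despite the random, unsynchronized update order is the kind of behavior the paper's general theorem guarantees for contraction mappings.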


Journal ArticleDOI
TL;DR: The goal is to give some theoretical explanation for the efficiency of the simplex method of George Dantzig, and it is shown that the number of pivots required to solve a linear programming problem grows in proportion to the number of variables on the average.
Abstract: The goal is to give some theoretical explanation for the efficiency of the simplex method of George Dantzig. Fixing the number of constraints and using Dantzig's self-dual parametric algorithm, we show that the number of pivots required to solve a linear programming problem grows in proportion to the number of variables on the average.

245 citations


Journal ArticleDOI
TL;DR: The development of the cross decomposition method captures profound relationships between primal and dual decomposition, and shows that the more constraints can be included in the Lagrangean relaxation, the fewer the Benders cuts one may expect to need.
Abstract: Many methods for solving mixed integer programming problems are based either on primal or on dual decomposition, which yield, respectively, a Benders decomposition algorithm and an implicit enumeration algorithm with bounds computed via Lagrangean relaxation. These methods exploit either the primal or the dual structure of the problem. We propose a new approach, cross decomposition, which allows exploiting both structures simultaneously. The development of the cross decomposition method captures profound relationships between primal and dual decomposition. It is shown that the more constraints can be included in the Lagrangean relaxation (provided the duality gap remains zero), the fewer the Benders cuts one may expect to need. If the linear programming relaxation has no duality gap, only one Benders cut is needed to verify optimality.

179 citations


Journal ArticleDOI
TL;DR: It is shown that the largest λ (the maximal flow) is determined by the minimal cut; for the continuous problem the argument depends on the coarea formula for functions of bounded variation.
Abstract: In place of flows on a discrete network we study flows described by a vector field σ(x,y) in a plane domain Ω. The analogue of the capacity constraint is |σ| ≤ c(x,y), and the strength of sources and sinks is σ·n = λf on the boundary and −div σ = λF in the interior. We show that the largest λ (the maximal flow) is determined by the minimal cut. As in the discrete case the dual problem has a 0-1 solution, given by the characteristic function of the minimal cut; for the continuous problem the argument depends on the coarea formula for functions of bounded variation.

177 citations
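
In the notation of the abstract, the continuous problem can be written as the sketch

$$\max \; \lambda \quad \text{s.t.} \quad -\operatorname{div}\sigma = \lambda F \text{ in } \Omega, \quad \sigma \cdot n = \lambda f \text{ on } \partial\Omega, \quad |\sigma| \le c(x,y),$$

and the theorem identifies the optimal λ with the capacity of a minimal cut, giving a continuous analogue of the max-flow/min-cut theorem.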


Journal ArticleDOI
TL;DR: A class of implementable algorithms is described for minimizing any convex, not necessarily differentiable, function f of several variables; the methods have flexible storage requirements and computational effort per iteration that can be controlled by a user.
Abstract: A class of implementable algorithms is described for minimizing any convex, not necessarily differentiable, function f of several variables. The methods require only the calculation of f and one subgradient of f at designated points. They generalize Lemarechal's bundle method. More specifically, instead of using all previously computed subgradients in the search direction finding subproblems, which are quadratic programming problems, the methods use an aggregate subgradient which is recursively updated as the algorithms proceed. Each algorithm yields a minimizing sequence of points, and if f has any minimizers, then this sequence converges to a solution of the problem. Particular members of this algorithm class terminate when f is piecewise linear. The methods are easy to implement and have flexible storage requirements and computational effort per iteration that can be controlled by a user.

172 citations
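
A hedged sketch of the direction-finding subproblem being simplified: a full bundle method computes

$$d_k = \arg\min_d \; \tfrac{1}{2}\|d\|^2 + \max_{j \in J_k} \left\{ f(y^j) + \langle g^j,\, x^k + d - y^j \rangle \right\},$$

a quadratic program whose size grows with the bundle J_k. The aggregate methods instead carry a single recursively updated convex combination of past subgradients, which is what keeps storage and per-iteration effort under user control.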


Journal ArticleDOI
TL;DR: A theorem is proved showing that the general distance-k graph coloring problem is NP-Complete for all fixed k ≥ 2, and hence that the optimal non-overlapping direct cover problem is also NP-Complete.
Abstract: We consider the problem of approximating the Hessian matrix of a smooth non-linear function using a minimum number of gradient evaluations, particularly in the case that the Hessian has a known, fixed sparsity pattern. We study the class of Direct Methods for this problem, and propose two new ways of classifying Direct Methods. Examples are given that show the relationships among optimal methods from each class. The problem of finding a non-overlapping direct cover is shown to be equivalent to a generalized graph coloring problem: the distance-2 graph coloring problem. A theorem is proved showing that the general distance-k graph coloring problem is NP-Complete for all fixed k ≥ 2, and hence that the optimal non-overlapping direct cover problem is also NP-Complete. Some worst-case bounds on the performance of a simple coloring heuristic are given. An appendix proves a well-known folklore result, which gives lower bounds on the number of gradient evaluations needed in any possible approximation method.

154 citations
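
A sketch of the greedy idea for distance-2 coloring (generic code, not necessarily the heuristic whose worst case the paper bounds): columns of the sparse Hessian that receive the same color are pairwise more than distance 2 apart in the adjacency graph, so one gradient difference per color group suffices.

import numpy as np

def greedy_distance2_coloring(S):
    # S: boolean sparsity pattern of a symmetric Hessian (n x n, with diagonal)
    n = S.shape[0]
    color = -np.ones(n, dtype=int)
    for v in range(n):
        forbidden = set()
        for u in np.flatnonzero(S[v]):         # distance-1 neighbors
            if color[u] >= 0:
                forbidden.add(color[u])
            for w in np.flatnonzero(S[u]):     # distance-2 neighbors
                if color[w] >= 0:
                    forbidden.add(color[w])
        c = 0
        while c in forbidden:
            c += 1                             # smallest color not forbidden
        color[v] = c
    return color

n = 7                                          # tridiagonal pattern: 3 colors
S = np.eye(n, dtype=bool) | np.eye(n, k=1, dtype=bool) | np.eye(n, k=-1, dtype=bool)
print(greedy_distance2_coloring(S))            # [0 1 2 0 1 2 0]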


Journal ArticleDOI
TL;DR: It is shown that for arbitrary real edge costs the travelling salesman problem can be polynomially solved for such a graph, and an explicit linear description of the travelling salesman polytope is given.
Abstract: A Halin graph H = T ∪ C is obtained by embedding a tree T having no nodes of degree 2 in the plane, and then adding a cycle C to join the leaves of T in such a way that the resulting graph is planar. These graphs are edge-minimal 3-connected, hamiltonian, and in general have large numbers of hamilton cycles. We show that for arbitrary real edge costs the travelling salesman problem can be polynomially solved for such a graph, and we give an explicit linear description of the travelling salesman polytope (the convex hull of the incidence vectors of the hamilton cycles) for such a graph.

Journal ArticleDOI
TL;DR: Weak, strong and strict converse duality theorems are proved, and the formulation of the dual is refined such that well-known dual problems of Gale, Kuhn and Tucker and Isermann are generalized.
Abstract: In this paper the problem dual to a convex vector optimization problem is defined. Under suitable assumptions, weak, strong and strict converse duality theorems are proved. In the case of linear mappings the formulation of the dual is refined such that well-known dual problems of Gale, Kuhn and Tucker [8] and Isermann [12] are generalized by this approach.

Journal ArticleDOI
TL;DR: A new combined CG-QN algorithm which can use whatever storage is available and is presented to demonstrate that the new algorithm is never worse than CONMIN and that it is almost always better if even a small amount of extra storage is provided.
Abstract: Both conjugate gradient and quasi-Newton methods are quite successful at minimizing smooth nonlinear functions of several variables, and each has its advantages. In particular, conjugate gradient methods require much less storage to implement than a quasi-Newton code and therefore find application when storage limitations occur. They are, however, slower, so there have recently been attempts to combine CG and QN algorithms so as to obtain an algorithm with good convergence properties and low storage requirements. One such method is the code CONMIN due to Shanno and Phua; it has proven quite successful, but it has one limitation. It has no middle ground, in that it either operates as a quasi-Newton code using O(n^2) storage locations, or as a conjugate gradient code using 7n locations, but it cannot take advantage of the not unusual situation where more than 7n locations are available, but a quasi-Newton code requires an excessive amount of storage.
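
The middle ground asked for here is what variable-storage updates provide: keep the m most recent correction pairs (about 2mn locations) and recover a quasi-Newton-like direction from them. The sketch below is the standard limited-memory two-loop recursion, given as a generic illustration of the idea rather than the paper's exact combined CG-QN algorithm:

import numpy as np

def two_loop_direction(g, s_list, y_list):
    # s_k = x_{k+1} - x_k, y_k = g_{k+1} - g_k; m = len(s_list) pairs stored.
    # m = 0 gives steepest descent; larger m approaches a full QN step.
    q = g.copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):
        a = (s @ q) / (y @ s)
        alphas.append(a)
        q -= a * y
    if s_list:                              # scale by the latest pair
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):
        b = (y @ q) / (y @ s)
        q += (a - b) * s
    return -q                               # search direction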

Journal ArticleDOI
TL;DR: A new method is presented which, at each iteration, computes a direction of search by solving the Newton system of equations, projected, if necessary, into a linear manifold along which F is locally differentiable, and has quadratic convergence to a solution x* under given conditions.
Abstract: We consider the problem of minimizing a sum of Euclidean norms, $$F(x) = \sum_{i=1}^{m} \|r_i(x)\|,$$ where the residuals {r_i(x)} are affine functions from R^n to R^l (l ≥ 1, n ≥ 2, m ≥ 2). This arises in a number of applications, including single- and multi-facility location problems. The function F is, in general, not differentiable at x if at least one r_i(x) is zero. Computational methods described in the literature converge quite slowly if the solution is at such a point. We present a new method which, at each iteration, computes a direction of search by solving the Newton system of equations, projected, if necessary, into a linear manifold along which F is locally differentiable. A special line search is used to obtain the next iterate. The algorithm is closely related to a method recently described by Calamai and Conn. The new method has quadratic convergence to a solution x* under given conditions. The reason for this property depends on the nature of the solution. If none of the residuals is zero at x*, then F is differentiable at x* and the quadratic convergence follows from standard properties of Newton's method. If one of the residuals, say r_i(x*), is zero, then, as the iteration proceeds, the Hessian of F becomes extremely ill-conditioned. It is proved that this ill-conditioning, instead of creating difficulties, actually causes quadratic convergence to the manifold {x : r_i(x) = 0}. If this is a single point, the solution is thus identified. Otherwise it is necessary to continue the iteration restricted to this manifold, where the usual quadratic convergence for Newton's method applies. If several residuals are zero at x*, several stages of quadratic convergence take place as the correct index set is constructed. Thus the ill-conditioning property accelerates the identification of the residuals which are zero at the solution. Numerical experiments are presented, illustrating these results.

Journal ArticleDOI
TL;DR: The modified binary search method is theoretically interesting because of its superlinear convergence and the capability to provide an explicit interval containing the optimum parameter value $$\bar\lambda$$.
Abstract: The fractional program P is defined by max f(x)/g(x) subject to x ∈ X. A class of methods for solving P is based on the auxiliary problem Q(λ) with a parameter λ: max f(x) − λg(x) subject to x ∈ X. Starting with two classical methods in this class, the Newton method and the binary search method, a number of variations are introduced and compared. Among the proposed methods, the modified binary search method is theoretically interesting because of its superlinear convergence and the capability to provide an explicit interval containing the optimum parameter value $$\bar\lambda$$. Computational behavior is tested by solving fractional knapsack problems and quadratic fractional programs. The interpolated binary search method seems to be most efficient, while other methods also behave surprisingly well.
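
The Newton method in this class (often called Dinkelbach's method) is short enough to sketch: given any routine that solves Q(λ) over the feasible set, iterate λ ← f(x)/g(x). A toy version over a finite set, for illustration only:

def newton_fractional(f, g, X, tol=1e-12):
    # maximize f(x)/g(x) over the finite set X, assuming g > 0 on X
    lam = 0.0
    while True:
        x = max(X, key=lambda v: f(v) - lam * g(v))  # solve Q(lambda)
        if f(x) - lam * g(x) <= tol:                 # Q(lambda) ~ 0: done
            return x, lam
        lam = f(x) / g(x)                            # Newton update

x, lam = newton_fractional(lambda v: 3*v + 1, lambda v: v + 2, range(11))
print(x, lam)   # x = 10, optimal ratio 31/12

The optimal value of Q(λ) is zero exactly at the optimal ratio, and that root is what the Newton iteration converges to.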

Journal ArticleDOI
TL;DR: It is shown how the ε-optimality conditions given in this paper can be mechanized into a bundle algorithm for solving nondifferentiable convex programming problems with linear inequality constraints.
Abstract: In this paper we present ε-optimality conditions of the Kuhn-Tucker type for points which are within ε of being optimal to the problem of minimizing a nondifferentiable convex objective function subject to nondifferentiable convex inequality constraints, linear equality constraints and abstract constraints. Such ε-optimality conditions are of interest for theoretical consideration as well as from the computational point of view. Some illustrative applications are made. Thus we derive an expression for the ε-subdifferential of a general convex ‘max function’. We also show how the ε-optimality conditions given in this paper can be mechanized into a bundle algorithm for solving nondifferentiable convex programming problems with linear inequality constraints.
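
For orientation, the ε-subdifferential underlying these conditions is the standard object

$$\partial_\varepsilon f(x) = \left\{ g : f(y) \ge f(x) + \langle g, y - x \rangle - \varepsilon \ \text{ for all } y \right\},$$

so that, in the unconstrained case, x is within ε of optimal exactly when $$0 \in \partial_\varepsilon f(x)$$; the Kuhn-Tucker-type conditions in the paper extend this to the constrained setting.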

Journal ArticleDOI
Ahn Byong-hun
TL;DR: A robust iterative method with guaranteed convergence is extended to linear complementarity problems whose matrix M is neither symmetric, an H-matrix with positive diagonals, nor a Z-matrix, provided the underlying matrix D has one of these properties; the case of nonsymmetric D is explicitly discussed.
Abstract: This paper is concerned with iterative methods for the linear complementarity problem (LCP) of finding x and y in R^n such that c + Dx + y ≥ 0, b − x ≥ 0, and x^T(c + Dx + y) = y^T(b − x) = 0, when b > 0. This is the LCP (M, q) with $$M = \begin{pmatrix} D & I \\ -I & 0 \end{pmatrix},$$ which is in turn equivalent to a linear variational inequality over a rectangle. This type of problem arises, for example, from quadratic programming (QP) problems with upper and lower bounds if D is symmetric, and from multicommodity market equilibrium problems with institutional price controls imposed if D is not necessarily symmetric. Iterative methods for the LCP (M, q) with symmetric M are well developed and studied using QP applications. For nonsymmetric cases with M being either an H-matrix with positive diagonals, or a Z-matrix, there exists a robust iterative method with guaranteed convergence. This paper extends this algorithm so that the LCP (M, q) with the above M, which is neither symmetric, nor an H-matrix with positive diagonals, nor a Z-matrix, can be processed when only D, not M, satisfies such properties. The case where D is nonsymmetric is explicitly discussed.
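
For context, the classical projected SOR sweep for an LCP (M, q) with positive diagonal, the kind of robust iteration this line of work builds on, is only a few lines (a generic sketch, not the paper's extension):

import numpy as np

def projected_sor(M, q, omega=1.0, sweeps=2000):
    # seek z >= 0 with w = q + M z >= 0 and z^T w = 0
    z = np.zeros(len(q))
    for _ in range(sweeps):
        for i in range(len(q)):
            r = q[i] + M[i] @ z                        # i-th residual
            z[i] = max(0.0, z[i] - omega * r / M[i, i])
    return z

M = np.array([[4.0, 1.0], [1.0, 3.0]])                 # symmetric positive definite
q = np.array([-1.0, -2.0])
z = projected_sor(M, q)
print(z, q + M @ z)                                    # z = (1/11, 7/11), w = 0

Convergence of such sweeps is classical for symmetric positive definite M with 0 < omega < 2; the paper's contribution is making a scheme of this family work when only D, not the full matrix M, has the favorable structure.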

Journal ArticleDOI
TL;DR: Two examples of parametric cost programming problems—one in network programming and one in NP-hard 0-1 programming—are given; in each case, the number of breakpoints in the optimal cost curve is exponential in the square root of the number of variables in the problem.
Abstract: Two examples of parametric cost programming problems—one in network programming and one in NP-hard 0-1 programming—are given; in each case, the number of breakpoints in the optimal cost curve is exponential in the square root of the number of variables in the problem.

Journal ArticleDOI
TL;DR: Computational experience is reported with the codes Decompsx and Lift, which are built on IBM's MPSX/370 LP software for large-scale structured programs, on a diverse collection of test problems including multinational energy models and global economic models.
Abstract: This paper reports computational experience with the codes Decompsx and Lift, which are built on IBM's MPSX/370 LP software for large-scale structured programs. Decompsx is an implementation of the Dantzig-Wolfe decomposition algorithm for block-angular LP's. Lift is an implementation of a nested decomposition algorithm for staircase and block-triangular LP's. A diverse collection of test problems drawn from real applications is used to test these codes, including multinational energy models and global economic models.

Journal ArticleDOI
TL;DR: An ellipsoid algorithm for nonlinear programming is investigated and its computer implementation is discussed and a method for measuring computational efficiency is presented.
Abstract: We investigate an ellipsoid algorithm for nonlinear programming. After describing the basic steps of the algorithm, we discuss its computer implementation and present a method for measuring computational efficiency. The computational results obtained from experimenting with the algorithm are discussed and the algorithm's performance is compared with that of a widely used commercial code.
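
A minimal sketch of the basic ellipsoid step for minimization with subgradient cuts (the generic textbook update, not the implementation studied in the paper):

import numpy as np

def ellipsoid_min(f, subgrad, x, R=10.0, iters=500):
    # minimize convex f; the ball of radius R about x is assumed to
    # contain a minimizer, and H represents the current ellipsoid
    n = len(x)
    H = (R ** 2) * np.eye(n)
    best, fbest = x.copy(), f(x)
    for _ in range(iters):
        g = subgrad(x)
        denom = np.sqrt(g @ H @ g)
        if denom == 0:
            break                          # zero subgradient: optimal
        gn = g / denom
        Hg = H @ gn
        x = x - Hg / (n + 1)               # step to the new center
        H = (n * n / (n * n - 1.0)) * (H - (2.0 / (n + 1)) * np.outer(Hg, Hg))
        if f(x) < fbest:
            best, fbest = x.copy(), f(x)
    return best

f = lambda x: (x[0] - 1) ** 2 + 4 * (x[1] + 2) ** 2
sg = lambda x: np.array([2 * (x[0] - 1), 8 * (x[1] + 2)])
print(ellipsoid_min(f, sg, np.zeros(2)))   # approx (1, -2)

Each step shrinks the ellipsoid volume by a fixed dimension-dependent factor, which is what makes the method a natural candidate to benchmark against nonlinear programming codes.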

Journal ArticleDOI
Kaoru Tone
TL;DR: Two revisions of the linear approximation to the constraints are proposed and it is shown that the directions generated by the revisions are also descent directions of exact penalty functions of nonlinear programming problems.
Abstract: In the last few years the successive quadratic programming methods proposed by Han and Powell have been widely recognized as excellent means for solving nonlinear programming problems. However, there remain some questions about their linear approximations to the constraints, from both theoretical and empirical points of view. In this paper, we propose two revisions of the linear approximation to the constraints and show that the directions generated by the revisions are also descent directions of exact penalty functions of nonlinear programming problems. The new technique can cope better with bad starting points than the usual one.
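
The linearization in question is the constraint model of the standard Han-Powell SQP subproblem,

$$\min_d \; \nabla f(x_k)^T d + \tfrac{1}{2} d^T B_k d \quad \text{s.t.} \quad c_i(x_k) + \nabla c_i(x_k)^T d \le 0, \quad i = 1, \dots, m$$

(a sketch for inequality constraints only). Far from a solution these linearized constraints can be inconsistent or force poor steps, which is the difficulty with bad starting points that the proposed revisions target.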

Journal ArticleDOI
TL;DR: Local and global constraint qualifications, based on the Farkas-Minkowski property, are established to give necessary conditions for optimality, and it is proved that Slater's qualification implies them.
Abstract: This paper gives characterizations of optimal solutions to the nondifferentiable convex semi-infinite programming problem, which involve the notion of a Lagrangian saddlepoint. With the aim of giving the necessary conditions for optimality, local and global constraint qualifications are established. These constraint qualifications are based on the property of Farkas-Minkowski, which plays an important role in relation to certain systems obtained by linearizing the feasible set. It is proved that Slater's qualification implies those qualifications.

Journal ArticleDOI
TL;DR: A new variant is developed which is, in fact, a new form of the ‘primal-dual algorithm’, has several interesting properties, and uses explicitly only dual variables.
Abstract: This paper is concerned with the minimum cost flow problem. It is shown that the class of dual algorithms which solve this problem consists of different variants of a common general algorithm. We develop a new variant which is, in fact, a new form of the ‘primal-dual algorithm’ and which has several interesting properties. It uses explicitly only dual variables. The slope of the change in the (dual) objective is monotone. The bound on the maximum number of iterations to solve a problem with integral bounds on the flow is better than bounds for other algorithms.

Journal ArticleDOI
TL;DR: ‘Pricing’ routines are examined that compute reduced costs for nonbasic variables and select a variable to enter the basis at each iteration; staircase strategies may offer substantial savings in number of iterations, time per iteration, or both.
Abstract: This and a companion paper consider how current implementations of the simplex method may be adapted to better solve linear programs that have a staged, or ‘staircase’, structure. The preceding paper considered ‘inversion’ routines that factorize the basis and solve linear systems. The present paper examines ‘pricing’ routines that compute reduced costs for nonbasic variables and that select a variable to enter the basis at each iteration. Both papers describe extensive (although preliminary) computer experiments, and can point to some quite promising results. For pricing in particular, staircase computation strategies appear to offer modest but consistent savings; staircase selection strategies, properly chosen, may offer substantial savings in number of iterations, time per iteration, or both.

Journal ArticleDOI
TL;DR: This paper presents methods for solving allocation problems that can be stated as convex knapsack problems with generalized upper bounds and introduces an approximation method to solve certain equations, which arise during the procedures.
Abstract: This paper presents methods for solving allocation problems that can be stated as convex knapsack problems with generalized upper bounds. Such bounds may express upper limits on the total amount allocated to each of several subsets of activities. In addition our model arises as a subproblem in more complex mathematical programs. We therefore emphasize efficient procedures to recover optimality when minor changes in the parameters occur from one problem instance to the next. These considerations lead us to propose novel data structures for such problems. Also, we introduce an approximation method to solve certain equations, which arise during the procedures.

Journal ArticleDOI
TL;DR: It is proved that, if the DFP or BFGS algorithm with step-lengths of one is applied to a function F(x) that has a Lipschitz continuous second derivative, and if the calculated vectors of variables converge to a point at which ∇F is zero and ∇²F is positive definite, then the sequence of variable metric matrices also converges.
Abstract: It is proved that, if the DFP or BFGS algorithm with step-lengths of one is applied to a function F(x) that has a Lipschitz continuous second derivative, and if the calculated vectors of variables converge to a point at which ∇F is zero and ∇²F is positive definite, then the sequence of variable metric matrices also converges. The limit of this sequence is identified in the case when F(x) is a strictly convex quadratic function.
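
For reference, with s_k = x_{k+1} − x_k and y_k = ∇F(x_{k+1}) − ∇F(x_k), the BFGS update of the variable metric matrix is

$$B_{k+1} = B_k - \frac{B_k s_k s_k^T B_k}{s_k^T B_k s_k} + \frac{y_k y_k^T}{y_k^T s_k},$$

with DFP given by the dual formula (the roles of s_k and y_k interchanged, applied to the inverse). The theorem asserts that this sequence of matrices converges, not merely the iterates x_k.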

Journal ArticleDOI
TL;DR: This paper characterizes the class U of all real n×n matrices M for which the linear complementarity problem (q, M) has a unique solution for all real n-vectors q interior to the cone K(M) of vectors for which (q, M) has any solution at all.
Abstract: This paper characterizes the class U of all real n×n matrices M for which the linear complementarity problem (q, M) has a unique solution for all real n-vectors q interior to the cone K(M) of vectors for which (q, M) has any solution at all. It is shown that restricting the uniqueness property to the interior of K(M) is necessary because when M ∈ U, the problem (q, M) has infinitely many solutions if q belongs to the boundary of int K(M). It is shown that M must have nonnegative principal minors when M ∈ U and K(M) is convex. Finally, it is shown that when M has nonnegative principal minors, only one of which is 0, and K(M) ≠ R^n, then M ∈ U and K(M) is a closed half-space.

Journal ArticleDOI
TL;DR: It is shown that certain theorems concerning differentiable pseudoconvex functions can be extended to a class of nondifferentiable pseudoconvex functions that was recently defined by Diewert with the help of the Dini derivative.
Abstract: It is shown that certain theorems concerning differentiable pseudoconvex functions can be extended to a class of nondifferentiable pseudoconvex functions that was recently defined by Diewert with the help of the Dini derivative.

Journal ArticleDOI
TL;DR: It is shown that the simplex method can be implemented using a working basis whose size is the number of explicit constraints as long as the local structure of X around the current point is known.
Abstract: This paper is concerned with linear programming problems in which many of the constraints are handled implicitly by requiring that the vector of decision variables lie in a polyhedron X. It is shown that the simplex method can be implemented using a working basis whose size is the number of explicit constraints as long as the local structure of X around the current point is known. Various ways of describing this local structure lead to known implementations when X is defined by generalized or variable upper bounds or flow conservation constraints. In the general case a decomposition principle can be used to generate this local structure. We also show how to update factorizations of the working basis.

Journal ArticleDOI
TL;DR: This approach reduces the original problem to a simple problem of maximizing a globally differentiable function on the product space of a Euclidean space and the nonnegative orthant of another Euclidean space.
Abstract: A new penalty function is associated with an inequality constrained nonlinear programming problem via its dual. This penalty function is globally differentiable if the functions defining the original problem are twice globally differentiable. In addition, the penalty parameter remains finite. This approach reduces the original problem to a simple problem of maximizing a globally differentiable function on the product space of a Euclidean space and the nonnegative orthant of another Euclidean space. Many efficient algorithms exist for solving this problem. For the case of quadratic programming, the penalty function problem can be solved effectively by successive overrelaxation (SOR) methods which can handle huge problems while preserving sparsity features.