
Showing papers in "Mathematical Programming in 1977"


Journal ArticleDOI
TL;DR: The main purpose of this paper is to provide an algorithm with a restart procedure that takes account of the objective function automatically and to study a multiplying factor that occurs in the definition of the search direction of each iteration.
Abstract: The conjugate gradient method is particularly useful for minimizing functions of very many variables because it does not require the storage of any matrices. However the rate of convergence of the algorithm is only linear unless the iterative procedure is "restarted" occasionally. At present it is usual to restart every n or (n + 1) iterations, where n is the number of variables, but it is known that the frequency of restarts should depend on the objective function. Therefore the main purpose of this paper is to provide an algorithm with a restart procedure that takes account of the objective function automatically. Another purpose is to study a multiplying factor that occurs in the definition of the search direction of each iteration. Various expressions for this factor have been proposed and often it does not matter which one is used. However now some reasons are given in favour of one of these expressions. Several numerical examples are reported in support of the conclusions of this paper.
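As a concrete illustration, here is a minimal nonlinear conjugate gradient sketch with an automatic restart test of the kind the paper proposes. The multiplying factor used is the Polak-Ribière choice; the 0.2 threshold and the crude backtracking search are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def cg_with_restarts(f, grad, x, iters=200, tol=1e-8):
    """Nonlinear CG with an automatic restart test (illustrative sketch).

    Restarts with steepest descent whenever successive gradients are far
    from orthogonal; the 0.2 threshold and the simple backtracking line
    search are assumptions, not the paper's exact procedure.
    """
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        if g.dot(d) >= 0:                 # safeguard: ensure a descent direction
            d = -g
        t = 1.0
        for _ in range(40):               # crude backtracking (Armijo) search
            if f(x + t * d) <= f(x) + 1e-4 * t * g.dot(d):
                break
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        if abs(g_new.dot(g)) >= 0.2 * g_new.dot(g_new):
            d = -g_new                    # restart: gradients no longer near-orthogonal
        else:
            beta = g_new.dot(g_new - g) / g.dot(g)   # Polak-Ribiere factor
            d = -g_new + beta * d
        x, g = x_new, g_new
    return x
```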

1,588 citations


Journal ArticleDOI
TL;DR: A one parameter family of variable metric updates is developed by considering a fundamental decomposition of the Hessian that underlies Variable Metric Algorithms and considers particular choices of the parameter.
Abstract: We develop a one parameter family of variable metric updates by considering a fundamental decomposition of the Hessian that underlies Variable Metric Algorithms. The relationship with other Variable Metric Updates is discussed. Considerations based on the condition of the Hessian inverse approximation indicate particular choices of the parameter and these are discussed in the second half of this paper.
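For reference, the standard one-parameter (Broyden) family of inverse-Hessian updates has the form below, with s_k = x_{k+1} - x_k and y_k = ∇f(x_{k+1}) - ∇f(x_k); whether this parametrization coincides exactly with the paper's decomposition is not stated in the abstract.

```latex
H_{k+1} = H_k - \frac{H_k y_k y_k^{\top} H_k}{y_k^{\top} H_k y_k}
        + \frac{s_k s_k^{\top}}{s_k^{\top} y_k}
        + \phi \,\bigl(y_k^{\top} H_k y_k\bigr)\, v_k v_k^{\top},
\qquad
v_k = \frac{s_k}{s_k^{\top} y_k} - \frac{H_k y_k}{y_k^{\top} H_k y_k} .
```

Here φ = 0 gives the DFP update and φ = 1 the BFGS update; condition-number considerations of the kind the abstract mentions are what typically single out particular values of the parameter.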

333 citations


Journal ArticleDOI
TL;DR: Rates of convergence of subgradient optimization are studied and it is shown that if the step size is chosen to be a geometric progression with ratio ρ the convergence, if it occurs, is geometric with rate ρ.

Abstract: Rates of convergence of subgradient optimization are studied. If the step size is chosen to be a geometric progression with ratio ρ the convergence, if it occurs, is geometric with rate ρ. For convergence to occur, it is necessary that the initial step size be large enough, and that the ratio ρ be greater than a sustainable rate z(μ), which depends upon a condition number μ, defined for both differentiable and nondifferentiable functions. The sustainable rate z(μ) is closely related to the rate of convergence of the steepest ascent method for differentiable functions: in fact it is identical if the function is not too well conditioned.
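A minimal sketch of the scheme being analyzed: subgradient steps of length s0·ρ^k along normalized subgradients (the normalization and all constants below are illustrative assumptions).

```python
import numpy as np

def subgradient_method(subgrad, x0, s0=1.0, rho=0.9, iters=100):
    """Subgradient minimization with geometric step sizes s0 * rho**k.

    Per the abstract, convergence (when it occurs) is geometric with rate
    rho, provided s0 is large enough and rho exceeds a sustainable rate
    z(mu) determined by the condition number mu.
    """
    x = np.asarray(x0, dtype=float)
    for k in range(iters):
        g = subgrad(x)
        norm = np.linalg.norm(g)
        if norm == 0.0:                 # a zero subgradient certifies optimality
            break
        x = x - (s0 * rho**k) * g / norm
    return x

# Example: minimize the nondifferentiable function f(x) = ||x||_1.
x = subgradient_method(np.sign, [3.0, -2.0])
```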

238 citations



Journal ArticleDOI
TL;DR: In this article, a technique is presented for maximizing, within a budget, the shortest source-sink path length in a graph; given a linear cost function for lengthening arcs, the computation is equivalent to the parametric solution of a minimum cost flow problem.
Abstract: Given a linear cost function for lengthening arcs, a technique is shown for maximizing, within a budget, the shortest source—sink path length in a graph. The computation is equivalent to the parametric solution of a minimum cost flow problem.
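In modern notation (assumed here, since the abstract gives none), the problem can be written as a single LP over arc lengthenings x and node potentials π; its LP dual is the parametric minimum cost flow problem the abstract refers to. Given arc lengths d, unit lengthening costs c, budget B, source s and sink t:

```latex
\max_{x,\;\pi}\ \pi_t - \pi_s
\quad \text{s.t.} \quad
\pi_j - \pi_i \le d_{ij} + x_{ij} \quad \forall (i,j) \in A,
\qquad
\sum_{(i,j) \in A} c_{ij}\, x_{ij} \le B,
\qquad x \ge 0 .
```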

224 citations


Journal ArticleDOI
TL;DR: Results of computer comparisons on medium-scale problems indicate that the resulting algorithm requires fewer iterations but about the same overall time as the algorithm of Harris [8], which may be regarded as approximating the steepest-edge algorithm.

Abstract: It is shown that suitable recurrences may be used in order to implement in practice the steepest-edge simplex linear programming algorithm. In this algorithm each iteration is along an edge of the polytope of feasible solutions on which the objective function decreases most rapidly with respect to distance in the space of all the variables. Results of computer comparisons on medium-scale problems indicate that the resulting algorithm requires fewer iterations but about the same overall time as the algorithm of Harris [8], which may be regarded as approximating the steepest-edge algorithm. Both show a worthwhile advantage over the standard algorithm.
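The steepest-edge rule itself can be stated compactly; the following is the standard formulation (assumed, as the abstract gives no formulas): among columns a_j with negative reduced cost c̄_j, enter the one maximizing the objective decrease per unit of distance in the space of all variables,

```latex
j^{*} \in \arg\max_{j \,:\, \bar c_j < 0}\ \frac{\bar c_j^{\,2}}{\gamma_j},
\qquad
\gamma_j = 1 + \bigl\| B^{-1} a_j \bigr\|_2^{2} .
```

The recurrences referred to in the abstract update the weights γ_j from one basis to the next, avoiding the cost of recomputing B^{-1}a_j for every nonbasic column.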

199 citations


Journal ArticleDOI
TL;DR: A class of functions called uniformly-locally-convex is introduced that is also tractable, and algorithms for it are sketched.
Abstract: This paper contains basic results that are useful for building algorithms for the optimization of Lipschitz continuous functions f on compact subsets of E^n. In this setting f is differentiable a.e. The theory involves a set-valued mapping x ↦ ∂_ε f(x) whose range is the convex hull of existing values of ∇f and limits of ∇f on a closed ε-ball, B(x, ε). As an application, simple descent algorithms are formulated that generate sequences {x_k} whose distance from some stationary set (see Section 2) is 0, and where {f(x_k)} decreases monotonically. This is done with the aid of any one of the following three hypotheses: for ε arbitrarily small, a point is available that is arbitrarily close to: (1) the minimizer of f on B(x, ε); (2) the closest point in ∂_ε f(x) to the origin; (3) g(h) ∈ ∂_ε f(x), where [g(h), h] = max {[g, h] : g ∈ ∂_ε f(x)}. Observe that these three problems are simplified if f has a tractable local approximation. The minimax problem is taken as an example, and algorithms for it are sketched. For this example, all three hypotheses may be satisfied. A class of functions called uniformly-locally-convex is introduced that is also tractable.
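The set-valued map in the abstract is what is now commonly called the Goldstein ε-subdifferential; in standard notation (assumed here),

```latex
\partial_{\varepsilon} f(x) = \operatorname{conv}
\Bigl\{ \lim_{k \to \infty} \nabla f(y_k) : y_k \to y \in B(x,\varepsilon),
\ \nabla f(y_k) \text{ exists} \Bigr\},
```

and hypothesis (2) corresponds to stepping along the negative of the minimum-norm element of this set.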

151 citations


Journal ArticleDOI
TL;DR: A new primal extreme point algorithm for solving assignment problems which both circumvents and exploits degeneracy is presented, and is substantially more efficient than previously developed primal and primal-dual extreme point methods for assignment problems.
Abstract: The purpose of this paper is to present a new primal extreme point algorithm for solving assignment problems which both circumvents and exploits degeneracy. The algorithm is based on the observation that the degeneracy difficulties of the simplex method result from the unnecessary inspection of alternative basis representations of the extreme points. This paper characterizes a subset Q of all bases that are capable of leading to an optimal solution to the problem if one exists. Using this characterization, an extreme point algorithm is developed which considers only those bases in Q. Computational results disclose that the new algorithm is substantially more efficient than previously developed primal and primal-dual extreme point (“simplex”) methods for assignment problems.

147 citations


Journal ArticleDOI
TL;DR: An attempt is described to provide a powerful mathematical programming language, allowing easy programming of specific studies on medium-size models, such as the recursive use of LP or the build-up of algorithms based on the simplex method.
Abstract: First, this paper presents the results of experiments with algorithmic techniques for efficiently solving medium and large scale linear and mixed integer programming problems; the techniques presented here are either original or recent. Second, it describes an attempt to provide a powerful mathematical programming language, allowing easy programming of specific studies on medium-size models, such as the recursive use of LP or the build-up of algorithms based on the simplex method.

142 citations


Journal ArticleDOI
TL;DR: Simplicial decomposition is a special version of the Dantzig—Wolfe decomposition principle, based on Carathéodory's theorem, which allows the direct application of any unrestricted optimization method in the master program to find constrained maximizers for it.
Abstract: Simplicial decomposition is a special version of the Dantzig—Wolfe decomposition principle, based on Carathéodory's theorem. The associated class of algorithms has the following features and advantages: The master and the subprogram are constructed without dual variables; the methods remain therefore well-defined for non-concave objective functions, and pseudo-concavity suffices for convergence to global maxima. The subprogram produces affinely independent sets of feasible generator points defining simplices, which the master program keeps minimal by dropping redundant generator points and finding maximizers in the relative interiors of the resulting subsimplices. The use of parallel subspaces allows the direct application of any unrestricted optimization method in the master program; thus the best unconstrained procedure for any type of objective function can be used to find constrained maximizers for it.
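A bare-bones sketch of the simplicial decomposition loop for max f(x) subject to Ax ≤ b. The parallel-subspace device and the dropping of redundant generators described in the abstract are omitted, and the SciPy-based master solve is an illustrative stand-in.

```python
import numpy as np
from scipy.optimize import linprog, minimize

def simplicial_decomposition(f, grad, A, b, x0, iters=20, tol=1e-8):
    """Sketch of simplicial decomposition for max f(x) s.t. Ax <= b.

    Subproblem: an LP over the polytope picks the extreme point best
    aligned with the current gradient.  Master: maximize f over the convex
    hull of the generators found so far.  The feasible set is
    {x >= 0 : Ax <= b} (linprog's default bounds), assumed bounded.
    """
    gens = [np.asarray(x0, dtype=float)]
    x = gens[0]
    for _ in range(iters):
        g = grad(x)
        y = linprog(-g, A_ub=A, b_ub=b).x      # subproblem (linprog minimizes)
        if g.dot(y - x) <= tol:                # no improving extreme point left
            break
        gens.append(y)
        V = np.array(gens).T                   # columns are generator points
        m = V.shape[1]
        res = minimize(lambda w: -f(V @ w),    # master over the unit simplex
                       np.ones(m) / m,
                       bounds=[(0.0, 1.0)] * m,
                       constraints={"type": "eq",
                                    "fun": lambda w: w.sum() - 1.0})
        x = V @ res.x
    return x
```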

137 citations


Journal ArticleDOI
TL;DR: An algorithm for determining all the extreme points of a convex polytope associated with a set of linear constraints, via the computation of basic feasible solutions to the constraints, is presented.
Abstract: An algorithm for determining all the extreme points of a convex polytope associated with a set of linear constraints, via the computation of basic feasible solutions to the constraints, is presented. The algorithm is based on the product-form revised simplex method and as such can be readily linked onto standard linear programming codes. Applications of such an algorithm are reviewed and limited computational experience given.
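The paper's scheme enumerates basic feasible solutions with simplex machinery; the brute-force sketch below conveys the underlying correspondence between extreme points and bases for a system in standard form (small problems only, and not the paper's algorithm).

```python
import numpy as np
from itertools import combinations

def extreme_points(A, b, tol=1e-9):
    """Naive enumeration of extreme points of {x : Ax = b, x >= 0}.

    Every extreme point is a basic feasible solution, so it suffices to
    try each choice of m basis columns.  This brute force stands in for
    the paper's product-form revised simplex scheme.
    """
    m, n = A.shape
    points = []
    for basis in combinations(range(n), m):
        B = A[:, basis]
        if abs(np.linalg.det(B)) < tol:        # singular: not a basis
            continue
        xB = np.linalg.solve(B, b)
        if (xB >= -tol).all():                 # feasible basic solution
            x = np.zeros(n)
            x[list(basis)] = xB
            if not any(np.allclose(x, p) for p in points):
                points.append(x)               # skip degenerate repeats
    return points
```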

Journal ArticleDOI
TL;DR: Both convex inequality and linear equality constraints are seen to satisfy the same generalized constraint qualification for quasi-convex programmes.
Abstract: Multivalued functions satisfying a general convexity condition are examined in the first section. The second section establishes a general transposition theorem for such functions and develops an abstract multiplier principle for them. In particular both convex inequality and linear equality constraints are seen to satisfy the same generalized constraint qualification. The final section examines quasi-convex programmes.

Journal ArticleDOI
TL;DR: A constrained minimax problem is converted to minimization of a sequence of unconstrained and continuously differentiable functions, in a manner similar to Morrison's method for constrained optimization; it is found that the second of the two proposed algorithms converges faster than the other methods.

Abstract: A constrained minimax problem is converted to minimization of a sequence of unconstrained and continuously differentiable functions in a manner similar to Morrison's method for constrained optimization. One can thus apply any efficient gradient minimization technique to do the unconstrained minimization at each step of the sequence. Based on this approach, two algorithms are proposed, where the first one is simpler to program, and the second one is faster in general. To show the efficiency of the algorithms even for unconstrained problems, examples are taken to compare the two algorithms with recent methods in the literature. It is found that the second algorithm converges faster than the other methods. Several constrained examples are also tried and the results are presented.
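The abstract gives no formulas, so the sketch below illustrates the general idea, trading one minimax problem for a sequence of smooth minimizations, using log-sum-exp smoothing with a decreasing temperature; this is a standard device, not necessarily the authors' Morrison-style construction.

```python
import numpy as np
from scipy.optimize import minimize

def smoothed_minimax(fs, x0, temps=(1.0, 0.1, 0.01)):
    """Minimize max_i f_i(x) through a sequence of smooth surrogates.

    The surrogate m + t*log(sum_i exp((f_i(x)-m)/t)) is continuously
    differentiable (for smooth f_i) and overestimates the max by at most
    t*log(len(fs)), so shrinking t drives the iterates toward a minimax
    solution.
    """
    x = np.asarray(x0, dtype=float)
    for t in temps:
        def surrogate(z, t=t):
            vals = np.array([f(z) for f in fs])
            m = vals.max()                     # shift for numerical stability
            return m + t * np.log(np.exp((vals - m) / t).sum())
        x = minimize(surrogate, x).x           # any smooth gradient method works
    return x

# Example: min over x of max(x, -x) = |x|, whose solution is x = 0.
print(smoothed_minimax([lambda z: z[0], lambda z: -z[0]], x0=[3.0]))
```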

Journal ArticleDOI
TL;DR: Theoretical results are developed for zero–one linear multiple objective programs for the main problem, having as a feasible set the vertices of the unit hypercube.
Abstract: Theoretical results are developed for zero–one linear multiple objective programs. Initially a simpler program, having as a feasible set the vertices of the unit hypercube, is studied. For the main problem an algorithm, computational experience, parametric analysis and indifference sets are presented. The mixed integer version of the main problem is briefly discussed.

Journal ArticleDOI
TL;DR: A class of methods for minimizing a nondifferentiable function which is the maximum of a finite number of smooth functions is developed, which possesses many attractive features of variable metric methods and can be viewed as their natural extension to the nondifferentiability case.
Abstract: We develop a class of methods for minimizing a nondifferentiable function which is the maximum of a finite number of smooth functions. The methods proceed by iteratively solving quadratic programming problems to generate search directions. For efficiency, it is suggested that the matrices in the quadratic programming problems be updated in a variable metric way. By doing so, the methods possess many attractive features of variable metric methods and can be viewed as their natural extension to the nondifferentiable case. To avoid the difficulties of an exact line search, a practical stepsize procedure is also introduced. Under mild assumptions the resulting methods converge globally.
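The direction-finding subproblem for F(x) = max_i f_i(x) typically takes the following QP form, with G_k the variable-metric matrix (a standard statement, assumed to be close to the paper's):

```latex
\min_{d,\,v}\ v + \tfrac{1}{2}\, d^{\top} G_k\, d
\quad \text{s.t.} \quad
f_i(x_k) + \nabla f_i(x_k)^{\top} d \le v, \qquad i = 1, \dots, m .
```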

Journal ArticleDOI
TL;DR: This work gives a necessary and sufficient condition for optimality, and an algorithm to find an optimal solution to the Bilinear Programming Problem.
Abstract: The Bilinear Programming Problem is a structured quadratic programming problem whose objective function is, in general, neither convex nor concave. Making use of the formal linearity of a dual formulation of the problem, we give a necessary and sufficient condition for optimality, and an algorithm to find an optimal solution.
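For concreteness, the bilinear programming problem is usually written as follows, with X and Y polyhedra (standard form, assumed since the abstract states none); the coupling term xᵀQy is what destroys both convexity and concavity:

```latex
\min_{x \in X,\; y \in Y}\ c^{\top} x + x^{\top} Q\, y + d^{\top} y .
```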

Journal ArticleDOI
TL;DR: This work generalizes many of the results on efficient points for linear multiple objective optimization problems to the nonlinear case by focusing on an auxiliary problem by relying on duality theory.
Abstract: We generalize many of the results on efficient points for linear multiple objective optimization problems to the nonlinear case by focusing on an auxiliary problem. The approach, which relies on duality theory, is a straightforward development that even in the linear case yields simpler proofs.

Journal ArticleDOI
TL;DR: This paper discusses heuristic “branch and bound” methods for solving mixed integer linear programming problems and introduces new heuristic rules for generating a tree which make use of pseudo-costs and estimations.
Abstract: This paper discusses heuristic "branch and bound" methods for solving mixed integer linear programming problems. The research presented here is a follow-on to that recorded in [3]. After a résumé of the concepts of pseudo-costs and estimations, new heuristic rules for generating a tree which make use of pseudo-costs and estimations are presented. Experiments have shown that models having a low percentage of integer variables behave in a radically different way from models with a high percentage of integer variables. The new heuristic rules seem to apply generally to the first type of model. Later, other heuristic rules are presented that are used with models having a high percentage of integer variables and with models having a special structure (models including special ordered sets). The rules introduced here have been implemented in the IBM Mathematical Programming System Extended/370. They are used to solve large mixed integer linear programming models. Numerical results that permit comparisons to be made among the different rules are provided and discussed.
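A sketch of the bookkeeping such rules build on, using the classic pseudo-cost branching score from the branch-and-bound literature (illustrative; not claimed to be the exact rule implemented in MPSX/370):

```python
def pick_branching_variable(x_lp, pseudo_down, pseudo_up, int_vars):
    """Classic pseudo-cost branching score (illustrative only).

    pseudo_down[j] / pseudo_up[j]: average objective degradation per unit
    of rounding observed on earlier down- and up-branches of variable j.
    """
    best_j, best_score = None, -1.0
    for j in int_vars:
        frac = x_lp[j] - int(x_lp[j])          # fractional part of the LP value
        if frac < 1e-6 or frac > 1 - 1e-6:     # already (nearly) integral
            continue
        down = pseudo_down[j] * frac           # estimated cost of the floor branch
        up = pseudo_up[j] * (1 - frac)         # estimated cost of the ceil branch
        score = min(down, up) + 1e-6 * max(down, up)
        if score > best_score:
            best_j, best_score = j, score
    return best_j
```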

Journal ArticleDOI
TL;DR: This session discusses the design and implementation of software for unconstrained optimization, and a proposal for the classification and documentation of test problems in the field of nonlinear programming.
Abstract: History of mathematical programming systems; Scope of mathematical programming software; Anatomy of a mathematical programming system; Elements of numerical linear algebra; A tutorial on matricial packing; Pivot selection tactics; An interactive query system for MPS solution information; Modeling and solving network problems; Integer programming codes; Some considerations in using branch-and-bound codes; Quadratic programming; Nonlinear programming using a general mathematical programming system; The design and implementation of software for unconstrained optimization; The GRG method for nonlinear programming; Generalized reduced gradient software for linearly and nonlinearly constrained problems; The ALGOL 60 procedure minifun for solving non-linear optimization problems; An accelerated conjugate gradient algorithm; Global optima without convexity; Computational aspects of geometric programming; A proposal for the classification and documentation of test problems in the field of nonlinear programming; Guidelines for reporting computational experiments in mathematical programming; COAL session summary; List of participants.

Journal ArticleDOI
TL;DR: Armijo's step-size procedure for function minimization is modified to include second derivative information and accumulation points are shown to be stationary points with positive semi-definite Hessian matrices.
Abstract: Armijo's step-size procedure for function minimization is modified to include second derivative information. Accumulation points using this procedure are shown to be stationary points with positive semi-definite Hessian matrices.
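For contrast, here is the classical first-order Armijo backtracking rule together with one plausible way to fold in second derivative information, accepting a step when the decrease matches a quadratic rather than a linear model; the quadratic variant is an assumption for illustration, as the abstract does not spell out the paper's modification.

```python
import numpy as np

def armijo_step(f, x, g, d, H=None, alpha=0.5, beta=0.5, t0=1.0):
    """Backtracking step-size selection along a descent direction d.

    With H=None this is the classical Armijo rule: accept t once
    f(x + t*d) <= f(x) + alpha * t * g.d.  If a Hessian H is supplied,
    the test uses the quadratic model t*g.d + 0.5*t**2*d.H.d instead,
    one plausible (assumed) way to include second derivative information.
    """
    t = t0
    for _ in range(50):                        # cap the backtracking
        model = t * g.dot(d)
        if H is not None:
            model += 0.5 * t**2 * d.dot(H @ d)
        if f(x + t * d) <= f(x) + alpha * model:
            break
        t *= beta
    return t

# Example: one damped-Newton step for f(z) = z0**2 + 5*z1**2 from (1, 1).
f = lambda z: z[0]**2 + 5 * z[1]**2
x = np.array([1.0, 1.0])
g = np.array([2.0, 10.0])                      # gradient at x
H = np.diag([2.0, 10.0])                       # Hessian at x
t = armijo_step(f, x, g, -g, H=H)
```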

Journal ArticleDOI
TL;DR: It is shown that this criterion for selecting the “best” approximation from any given class is equivalent for all practical purposes to the familiar Chebyshev approximation criterion.
Abstract: Mathematical programming applications often require an objective function to be approximated by one of simpler form so that an available computational approach can be used. An a priori bound is derived on the amount of error (suitably defined) which such an approximation can induce. This leads to a natural criterion for selecting the “best” approximation from any given class. We show that this criterion is equivalent for all practical purposes to the familiar Chebyshev approximation criterion. This gains access to the rich legacy on Chebyshev approximation techniques, to which we add some new methods for cases of particular interest in mathematical programming. Some results relating to post-computational bounds are also obtained.
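The criterion in question is the familiar uniform (sup-norm) one: choose the approximation g from the admissible class 𝒢 that minimizes the worst-case deviation over the region X of interest,

```latex
\min_{g \in \mathcal{G}} \; \max_{x \in X}\; \bigl| f(x) - g(x) \bigr| .
```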

Journal ArticleDOI
TL;DR: A new method of solving the nonlinear programming problem, which has similar characteristics to the Augmented Lagrangian method, is presented, and convergence and the rate of convergence of the new method are proved.

Abstract: Over the past few years a number of researchers in mathematical programming became very interested in the method of the Augmented Lagrangian for solving the nonlinear programming problem; the main reason is that the Augmented Lagrangian approach overcomes the ill-conditioning problem and the slow convergence of the penalty methods. The purpose of this paper is to present a new method of solving the nonlinear programming problem which has similar characteristics to the Augmented Lagrangian method. The original nonlinear programming problem is transformed into the minimization of a least-pth objective function which, under certain conditions, has the same optimum as the original problem. Convergence and the rate of convergence of the new method are also proved. Furthermore, numerical results are presented which illustrate the usefulness of the new approach to nonlinear programming.
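The abstract does not reproduce the transformed objective, so the sketch below uses a generic least-pth exterior penalty of the same flavor; the specific penalty function, schedule, and SciPy solver are assumptions, not the authors' exact construction.

```python
import numpy as np
from scipy.optimize import minimize

def least_pth_solve(f, gs, x0, p=4, mu=10.0, rounds=4):
    """Approximately solve min f(x) s.t. g_i(x) <= 0 via least-pth penalties.

    Each round minimizes f(x) + mu * (sum_i max(0, g_i(x))**p)**(1/p),
    then increases mu and p.  This generic construction illustrates the
    least-pth idea only; it is not claimed to be the paper's objective.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(rounds):
        def penalized(z, p=p, mu=mu):
            viol = sum(max(0.0, g(z)) ** p for g in gs)
            return f(z) + mu * viol ** (1.0 / p)
        x = minimize(penalized, x).x
        mu *= 10.0
        p = min(2 * p, 64)            # sharpen the max-approximation gradually
    return x
```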

Journal ArticleDOI
TL;DR: Two well-known theorems of König for bipartite graphs are shown to hold also for line perfect graphs; this extension provides a reinterpretation of the content of these theorems.

Abstract: The concept of line perfection of a graph is defined so that a simple graph is line perfect if and only if its line graph is perfect in the usual sense. Line perfect graphs are characterized as those which contain no odd cycles of size larger than 3. Two well-known theorems of König for bipartite graphs are shown to hold also for line perfect graphs; this extension provides a reinterpretation of the content of these theorems.

Journal ArticleDOI
TL;DR: A characterization of the GUS property is given which generalizes a basic theorem in linear complementarity theory, as well as known sufficient conditions given by Cottle, Karamardian, and Moré for the nonlinear case.
Abstract: A complementarity problem is said to be globally uniquely solvable (GUS) if it has a unique solution, and this property will not change, even if any constant term is added to the mapping generating the problem.
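In standard notation (assumed), the complementarity problem generated by a mapping F and constant term q is

```latex
\text{find } x \ge 0 \ \text{ such that } \ F(x) + q \ge 0
\quad \text{and} \quad x^{\top} \bigl( F(x) + q \bigr) = 0,
```

and F is GUS precisely when this problem has a unique solution for every choice of q.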

Journal ArticleDOI
TL;DR: A sequence of finite horizon (T period) problems is shown to approximate the infinite horizon problems in the following sense: the optimal values of the T period problems converge monotonically to the optimal value of the infinite problem.

Abstract: This paper describes the class of infinite horizon linear programs that have finite optimal values. A sequence of finite horizon (T period) problems is shown to approximate the infinite horizon problems in the following sense: the optimal values of the T period problems converge monotonically to the optimal value of the infinite problem, and the limit of any convergent subsequence of initial T period optimal decisions is an optimal decision for the infinite horizon problem.

Journal ArticleDOI
TL;DR: A modification of Dinkelbach's algorithm is proposed to exploit the fact that good feasible solutions are easily obtained for both the fractional knapsack problem and the ordinary knapsack problem, and an upper bound on the number of iterations is derived.

Abstract: The fractional knapsack problem, to obtain an integer solution that maximizes a linear fractional objective function under the constraint of one linear inequality, is considered. A modification of Dinkelbach's algorithm [3] is proposed to exploit the fact that good feasible solutions are easily obtained for both the fractional knapsack problem and the ordinary knapsack problem. An upper bound on the number of iterations is derived. In particular it is clarified how optimal solutions depend on the right hand side of the constraint; a fractional knapsack problem reduces to an ordinary knapsack problem if the right hand side exceeds a certain bound.
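A compact sketch of the plain Dinkelbach iteration for maximizing (Σ p_i x_i)/(q0 + Σ q_i x_i) over 0-1 knapsack solutions, with a dynamic-programming knapsack as the inner solver. The paper's modification (warm starts from good feasible solutions) and its iteration bound are not reproduced here; integer weights and capacity, q_i ≥ 0, and q0 > 0 are assumed.

```python
def knapsack(profits, weights, cap):
    """0-1 knapsack by dynamic programming; negative profits are never taken."""
    n = len(profits)
    dp = [[0.0] * (cap + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        pr, w = profits[i - 1], weights[i - 1]
        for c in range(cap + 1):
            dp[i][c] = dp[i - 1][c]
            if w <= c and pr > 0 and dp[i - 1][c - w] + pr > dp[i][c]:
                dp[i][c] = dp[i - 1][c - w] + pr
    x, c = [0] * n, cap
    for i in range(n, 0, -1):          # recover an optimal item set
        if dp[i][c] != dp[i - 1][c]:
            x[i - 1] = 1
            c -= weights[i - 1]
    return x

def dinkelbach_fractional_knapsack(p, q, q0, w, cap, tol=1e-9):
    """Maximize (sum p_i x_i) / (q0 + sum q_i x_i) over 0-1 knapsack solutions.

    Plain Dinkelbach iteration: repeatedly solve the parametric knapsack
    max sum (p_i - lam*q_i) x_i and update lam with the ratio at the
    optimizer, stopping once the parametric optimum drops to zero.
    """
    lam = 0.0
    while True:
        x = knapsack([pi - lam * qi for pi, qi in zip(p, q)], w, cap)
        num = sum(pi * xi for pi, xi in zip(p, x))
        den = q0 + sum(qi * xi for qi, xi in zip(q, x))
        if num - lam * den <= tol:     # F(lam) = 0: lam is the optimal ratio
            return x, lam
        lam = num / den
```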

Journal ArticleDOI
TL;DR: A compact and flexible updating procedure using matrix augmentation is developed, and it is shown that the representation of the updated inverse does not grow monotonically in size, and may actually decrease during certain simplex iterations.
Abstract: A compact and flexible updating procedure using matrix augmentation is developed. It is shown that the representation of the updated inverse does not grow monotonically in size, and may actually decrease during certain simplex iterations. Angular structures, such as GUB, are handled naturally within the partitioning framework, and require no modifications of the simplex method.

Journal ArticleDOI
TL;DR: A polynomially bounded algorithm is stated for a generalized assignment problem whose objective function coefficients are chosen from a totally ordered commutative semigroup obeying a divisibility axiom.

Abstract: For assignment problems a class of objective functions is studied by algebraic methods and characterized in terms of an axiomatic system. It says essentially that the coefficients of the objective function can be chosen from a totally ordered commutative semigroup which obeys a divisibility axiom. Special cases of the general model are the linear assignment problem, the linear bottleneck problem, lexicographic multicriteria problems, p-norm assignment problems and others. Further, a polynomially bounded algorithm for solving this generalized assignment problem is stated. The algebraic approach can be extended to a broader class of combinatorial optimization problems.

Journal ArticleDOI
TL;DR: It is shown that if the Bartels–Golub algorithm or one of its variants is used to update the LU factorization of B, then less computing is needed if one works with the factors of the updated matrix than with those of B.

Abstract: Many implementations of the simplex method require the row of the inverse of the basis matrix B corresponding to the pivot row at each iteration, for updating either a pricing vector or the nonbasic reduced costs. In this note we show that if the Bartels–Golub algorithm [1, 2] or one of its variants is used to update the LU factorization of B, then less computing is needed if one works with the factors of the updated matrix than with those of B. These results are discussed as they apply to the column selection algorithms recently proposed by Goldfarb and Reid [4, 5] and Harris [6].

Journal ArticleDOI
TL;DR: As an answer to an open problem from Nemhauser and Trotter, it is shown that there is a unique maximal set of variables which are integral in optimal (VLP) solutions.
Abstract: Given a graph with weights on vertices, the vertex packing problem consists of finding a vertex packing (i.e. a set of vertices, no two of them being adjacent) of maximum weight. A linear relaxation of one binary programming formulation of this problem has these two well-known properties: (i) every basic solution is (0, 1/2, 1)-valued, (ii) in an optimum linear solution, an integer-valued variable keeps the same value in an optimum binary solution.
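The relaxation in question, here denoted (VLP) following the TL;DR, is the standard LP relaxation of the edge-constraint formulation of vertex packing (assumed notation):

```latex
\text{(VLP)} \qquad \max \; \sum_{i \in V} w_i x_i
\quad \text{s.t.} \quad
x_i + x_j \le 1 \;\; \forall\, (i,j) \in E, \qquad 0 \le x_i \le 1 .
```

Properties (i) and (ii) above are the half-integrality and persistency results of Nemhauser and Trotter for this relaxation.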