
Showing papers in "Computational Optimization and Applications in 1997"


Journal ArticleDOI
TL;DR: This work presents a new class of constructive Lagrangian relaxation algorithms that circumvent some of the deficiencies of previous methods and demonstrates the efficiency and effectiveness of the new algorithm class.
Abstract: Large classes of data association problems in multiple target tracking applications involving both multiple and single sensor systems can be formulated as multidimensional assignment problems. These NP-hard problems are large scale and sparse with noisy objective function values, but must be solved in “real-time”. Lagrangian relaxation methods have proven to be particularly effective in solving these problems to the noise level in real-time, especially for dense scenarios and for multiple scans of data from multiple sensors. This work presents a new class of constructive Lagrangian relaxation algorithms that circumvent some of the deficiencies of previous methods. The results of several numerical studies demonstrate the efficiency and effectiveness of the new algorithm class.

152 citations
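The Lagrangian relaxation idea can be illustrated on a toy problem. The sketch below is only an illustrative assumption, not the paper's multidimensional assignment algorithm: it relaxes the single coupling constraint of a tiny binary problem and maximizes the dual with a subgradient method. By weak duality the dual value never exceeds the integer optimum.

```python
def lagrangian_dual(c, rhs, iters=50):
    # Relax the single coupling constraint sum(x) = rhs of
    #   min c.x  subject to  sum(x) = rhs, x in {0,1}^n.
    # L(u) = min_x sum_i (c_i - u) x_i + u * rhs decomposes componentwise.
    u, best = 0.0, float("-inf")
    for k in range(1, iters + 1):
        x = [1 if ci - u < 0 else 0 for ci in c]          # relaxed minimizer
        val = sum((ci - u) * xi for ci, xi in zip(c, x)) + u * rhs
        best = max(best, val)                             # best dual bound so far
        g = rhs - sum(x)                                  # subgradient of L at u
        u += g / k                                        # diminishing step size
    return best

# toy instance: pick 2 of 4 items; the integer optimum is 1 + 2 = 3
best = lagrangian_dual([1.0, 2.0, 3.0, 4.0], 2)
```

On this instance the dual bound reaches the integer optimum; in general a duality gap may remain, which is where the noise-level arguments of the paper come in.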


Journal ArticleDOI
TL;DR: A means for comparing various computer codes for solving large scale mixed complementarity problems is provided; inadequacies in how solvers are currently compared are discussed, and a testing environment is presented that addresses these inadequacies.
Abstract: This paper provides a means for comparing various computer codes for solving large scale mixed complementarity problems. We discuss inadequacies in how solvers are currently compared, and present a testing environment that addresses these inadequacies. This testing environment consists of a library of test problems, along with GAMS and MATLAB interfaces that allow these problems to be easily accessed. The environment is intended for use as a tool by other researchers to better understand both their algorithms and their implementations, and to direct research toward problem classes that are currently the most challenging. As an initial benchmark, eight different algorithm implementations for large scale mixed complementarity problems are briefly described and tested with default parameter settings using the new testing environment.

119 citations


Journal ArticleDOI
TL;DR: The article presents a methodology for optimal model-based decomposition (OMBD) of design problems, whether or not initially cast as optimization problems, that is robust enough to account for computational demands and resources and strength of interdependencies between the computational modules contained in the model.
Abstract: Decomposition of large engineering system models is desirable since increased model size reduces the reliability and speed of numerical solution algorithms. The article presents a methodology for optimal model-based decomposition (OMBD) of design problems, whether or not initially cast as optimization problems. The overall model is represented by a hypergraph and is optimally partitioned into weakly connected subgraphs that satisfy decomposition constraints. Spectral graph-partitioning methods together with iterative improvement techniques are proposed for hypergraph partitioning. A known spectral K-partitioning formulation, which accounts for partition sizes and edge weights, is extended to graphs that also have vertex weights. The OMBD formulation is robust enough to account for computational demands and resources and for the strength of interdependencies between the computational modules contained in the model.

114 citations
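The spectral partitioning step can be sketched for plain, unweighted graphs; the paper's method handles hypergraphs with vertex and edge weights, so the minimal version below is an assumption-laden illustration only. The eigenvector of the graph Laplacian for the second-smallest eigenvalue (the Fiedler vector) induces a 2-partition by sign.

```python
import numpy as np

def fiedler_bisect(n, edges):
    # Build the graph Laplacian L = D - A; the eigenvector for the
    # second-smallest eigenvalue (the Fiedler vector) splits the graph
    # into two weakly coupled parts by the sign of its components.
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1.0; L[v, v] += 1.0
        L[u, v] -= 1.0; L[v, u] -= 1.0
    _, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
    fiedler = vecs[:, 1]
    return {i for i in range(n) if fiedler[i] < 0}

# two triangles joined by a single bridge edge (2, 3)
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
part = fiedler_bisect(6, edges)
```

On the dumbbell graph above the sign split recovers one of the two triangles, cutting only the bridge edge.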


Journal ArticleDOI
TL;DR: The most recent developments regarding simulated annealing and genetic algorithms for solving facility layout problems approximately are reviewed.
Abstract: The facility layout problem (FLP) has many practical applications and is known to be NP-hard. During recent decades exact and heuristic approaches have been proposed in the literature to solve FLPs. In this paper we review the most recent developments regarding simulated annealing and genetic algorithms for solving facility layout problems approximately.

111 citations
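A common way to attack the FLP with simulated annealing is through its quadratic assignment formulation. The sketch below is a generic pair-swap annealer under the usual Boltzmann acceptance rule, not any specific algorithm from the surveyed literature; the instance data are made up for illustration.

```python
import math, random

def qap_cost(perm, F, D):
    # facility layout as a QAP: facility i is placed at location perm[i]
    n = len(perm)
    return sum(F[i][k] * D[perm[i]][perm[k]] for i in range(n) for k in range(n))

def anneal_layout(F, D, T0=10.0, cooling=0.95, iters=2000, seed=0):
    rng = random.Random(seed)
    n = len(F)
    perm = list(range(n))
    cost = qap_cost(perm, F, D)
    best, best_cost = perm[:], cost
    T = T0
    for _ in range(iters):
        i, j = rng.sample(range(n), 2)
        perm[i], perm[j] = perm[j], perm[i]           # propose a pair swap
        new_cost = qap_cost(perm, F, D)
        if new_cost <= cost or rng.random() < math.exp(-(new_cost - cost) / T):
            cost = new_cost                           # accept (possibly uphill)
            if cost < best_cost:
                best, best_cost = perm[:], cost
        else:
            perm[i], perm[j] = perm[j], perm[i]       # reject: undo the swap
        T = max(T * cooling, 1e-9)                    # geometric cooling
    return best, best_cost

F = [[0, 3, 0, 2], [3, 0, 0, 1], [0, 0, 0, 4], [2, 1, 4, 0]]              # flows
D = [[0, 22, 53, 53], [22, 0, 40, 62], [53, 40, 0, 55], [53, 62, 55, 0]]  # distances
layout, cost = anneal_layout(F, D)
```

Tracking the best-so-far permutation guarantees the returned layout is never worse than the starting one, even when uphill swaps are accepted.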


Journal ArticleDOI
TL;DR: The main results show that this problem can be solved by an iterative method based on averaging at each step the Bregman projections with respect to f(x) = ∑_{i=1}^n x_i ln x_i of the current iterate onto the given sets.
Abstract: The problem considered in this paper is that of finding a point which is common to almost all the members of a measurable family of closed convex subsets of R^n_{++}, provided that such a point exists. The main results show that this problem can be solved by an iterative method essentially based on averaging at each step the Bregman projections with respect to f(x) = ∑_{i=1}^n x_i ln x_i of the current iterate onto the given sets.

89 citations
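For the entropy kernel f(x) = ∑_i x_i ln x_i the Bregman geometry is separable, so the projection onto a box is componentwise clipping, and "averaging" is naturally done in the mirror (logarithmic) domain, i.e., as a geometric mean. The toy sketch below illustrates this for two boxes on the positive line; the paper's setting of a measurable family of general closed convex sets is far more general.

```python
import math

def entropy_project_box(x, lo, hi):
    # The entropy kernel f(x) = sum_i x_i ln x_i is separable, so its
    # Bregman projection onto a box [lo, hi] is componentwise clipping.
    return [min(max(v, l), h) for v, l, h in zip(x, lo, hi)]

def averaged_projections(x, boxes, iters=60):
    for _ in range(iters):
        projs = [entropy_project_box(x, lo, hi) for lo, hi in boxes]
        # averaging in the entropy geometry: arithmetic mean in the mirror
        # (logarithmic) domain, i.e., a componentwise geometric mean
        x = [math.exp(sum(math.log(p[i]) for p in projs) / len(projs))
             for i in range(len(x))]
    return x

boxes = [([1.0], [3.0]), ([2.0], [5.0])]   # intersection is [2, 3]
x = averaged_projections([10.0], boxes)
```

Starting from x = 10, the iterates contract geometrically onto the point 3, which lies in the intersection of both boxes.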


Journal ArticleDOI
TL;DR: The combination of one of the best bound functions for a Branch-and-Bound algorithm (the Gilmore-Lawler bound) with various testing, variable binding and recalculation of bounds between branchings is investigated when used in a parallel Branch-and-Bound algorithm.
Abstract: Quadratic Assignment problems are in practice among the most difficult to solve in the class of NP-complete problems. The only successful approach hitherto has been Branch-and-Bound-based algorithms, but such algorithms are crucially dependent on good bound functions to limit the size of the space searched. Much work has been done to identify such functions for the QAP, but with limited success. Parallel processing has also been used in order to increase the size of problems solvable to optimality. The systems used have, however, often been systems with relatively few, but very powerful vector processors, and have hence not been ideally suited for computations essentially involving non-vectorizable computations on integers. In this paper we investigate the combination of one of the best bound functions for a Branch-and-Bound algorithm (the Gilmore-Lawler bound) with various testing, variable binding and recalculation of bounds between branchings when used in a parallel Branch-and-Bound algorithm. The algorithm has been implemented on a 16-processor MEIKO Computing Surface with Intel i860 processors. Computational results from the solution of a number of large QAPs, including the classical Nugent 20, are reported.

80 citations
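The Gilmore-Lawler bound pairs, for each facility/location pair (i, j), the off-diagonal flows of facility i with the off-diagonal distances of location j in the order that minimizes their inner product, and then solves a linear assignment problem over the resulting cost matrix. A minimal brute-force sketch, illustrative only — real codes solve the final assignment with a Hungarian-method LAP solver rather than by enumeration:

```python
from itertools import permutations

def glb(F, D):
    # Gilmore-Lawler bound: for each pair (i, j), match the off-diagonal
    # flows of facility i against the off-diagonal distances of location j
    # in the order minimizing their inner product (ascending vs. descending,
    # by the rearrangement inequality), then solve a linear assignment
    # problem over the resulting cost matrix.
    n = len(F)
    L = [[0] * n for _ in range(n)]
    for i in range(n):
        fi = sorted(F[i][k] for k in range(n) if k != i)
        for j in range(n):
            dj = sorted((D[j][k] for k in range(n) if k != j), reverse=True)
            L[i][j] = F[i][i] * D[j][j] + sum(a * b for a, b in zip(fi, dj))
    # brute-force linear assignment (tiny n only)
    return min(sum(L[i][p[i]] for i in range(n)) for p in permutations(range(n)))

def qap_opt(F, D):
    # exact QAP optimum by enumeration, for checking the bound on tiny instances
    n = len(F)
    return min(sum(F[i][k] * D[p[i]][p[k]] for i in range(n) for k in range(n))
               for p in permutations(range(n)))

F = [[0, 2, 1, 3], [2, 0, 4, 1], [1, 4, 0, 2], [3, 1, 2, 0]]
D = [[0, 5, 2, 4], [5, 0, 3, 6], [2, 3, 0, 1], [4, 6, 1, 0]]
bound, opt = glb(F, D), qap_opt(F, D)
```

Since each L[i][j] underestimates the cost of placing facility i at location j under any completion, the assignment value is a valid lower bound on the QAP optimum.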


Journal ArticleDOI
TL;DR: A matrix factorization procedure that exploits the structure of the constraint matrix is employed and implemented on parallel computers; the results show that the codes are efficient and stable for problems with thousands of scenarios.
Abstract: We present a computationally efficient implementation of an interior point algorithm for solving large-scale problems arising in stochastic linear programming and robust optimization. A matrix factorization procedure is employed that exploits the structure of the constraint matrix, and it is implemented on parallel computers. The implementation is perfectly scalable. Extensive computational results are reported for a library of standard test problems from stochastic linear programming, and also for robust optimization formulations. The results show that the codes are efficient and stable for problems with thousands of scenarios. Test problems with 130 thousand scenarios, and a deterministic equivalent linear programming formulation with 2.6 million constraints and 18.2 million variables, are solved successfully.

42 citations


Journal ArticleDOI
TL;DR: A new extension to the Symmetric Travelling Salesman Problem (STSP) is described in which some nodes are visited in both of two periods and the remaining nodes are visited in either one of the periods.
Abstract: We describe a new extension to the Symmetric Travelling Salesman Problem (STSP) in which some nodes are visited in both of two periods and the remaining nodes are visited in either one of the periods. A number of possible Integer Programming formulations are given. Valid cutting plane inequalities are defined for this problem which result in an otherwise prohibitively difficult model of 42 nodes becoming easily solvable by a combination of cuts and Branch-and-Bound. Some of the cuts are entered in a “pool” and only used when it is automatically verified that they are violated. Other constraints, which are generalisations of the subtour and comb inequalities for the single period STSP, are identified manually when needed. Full computational details of the solution process are given.

38 citations


Journal ArticleDOI
TL;DR: The difficulties that need to be addressed when adapting SQP methods to large-scale problems are discussed, together with some general ideas that may be used to resolve these difficulties and the SQP codes based on them.
Abstract: Sequential quadratic programming (SQP) methods are the method of choice when solving small or medium-sized problems. Since they are complex methods, they are difficult (but not impossible) to adapt to solve large-scale problems. We start by discussing the difficulties that need to be addressed and then describe some general ideas that may be used to resolve these difficulties. A number of SQP codes have been written to solve specific applications, and there is a general purpose SQP code called SNOPT, which is intended for general applications of a particular type. These are described briefly, together with the ideas on which they are based. Finally, we discuss new work on developing SQP methods using explicit second derivatives.

33 citations


Journal ArticleDOI
TL;DR: It is shown that nonmonotone synchronization schemes are admissible, which further improves the flexibility of the PVD approach, and some new and improved linear convergence results are derived for problems with weak sharp minima of order 2 and for strongly convex problems.
Abstract: We consider the recently proposed parallel variable distribution (PVD) algorithm of Ferris and Mangasarian [4] for solving optimization problems in which the variables are distributed among p processors. Each processor has the primary responsibility for updating its block of variables while allowing the remaining “secondary” variables to change in a restricted fashion along some easily computable directions. We propose useful generalizations that consist, for the general unconstrained case, of replacing exact global solution of the subproblems by a certain natural sufficient descent condition, and, for the convex case, of inexact subproblem solution in the PVD algorithm. These modifications are the key features of the algorithm that have not been analyzed before. The proposed modified algorithms are more practical and make it easier to achieve good load balancing among the parallel processors. We present a general framework for the analysis of this class of algorithms and derive some new and improved linear convergence results for problems with weak sharp minima of order 2 and strongly convex problems. We also show that nonmonotone synchronization schemes are admissible, which further improves flexibility of the PVD approach.

31 citations


Journal ArticleDOI
TL;DR: The conclusion is that the hybrid option in ELSO provides performance comparable to the hand-coded option, while having the significant advantage of not requiring a hand-coded gradient or the sparsity pattern of the partially separable function.
Abstract: ELSO is an environment for the solution of large-scale optimization problems. With ELSO the user is required to provide only code for the evaluation of a partially separable function. ELSO exploits the partial separability structure of the function to compute the gradient efficiently using automatic differentiation. We demonstrate ELSO's efficiency by comparing the various options available in ELSO. Our conclusion is that the hybrid option in ELSO provides performance comparable to the hand-coded option, while having the significant advantage of not requiring a hand-coded gradient or the sparsity pattern of the partially separable function. In our test problems, which have carefully coded gradients, the computing time for the hybrid AD option is within a factor of two of the hand-coded option.

Journal ArticleDOI
TL;DR: A new surrogate constraint analysis is presented that gives rise to a family of strong valid inequalities called surrogate-knapsack (S-K) cuts that are capable of reducing the duality gap between optimal continuous and integer feasible solutions more effectively than standard lifted cover inequalities.
Abstract: This paper presents a new surrogate constraint analysis that gives rise to a family of strong valid inequalities called surrogate-knapsack (S-K) cuts. The analytical procedure presented provides a strong S-K cut subject to constraining the values of selected cut coefficients, including the right-hand side. Our approach is applicable to both zero-one integer problems and problems having multiple choice (generalized upper bound) constraints. We also develop a strengthening process that further tightens the S-K cut obtained via the surrogate analysis. Building on this, we develop a polynomial-time separation procedure that successfully generates an S-K cut that renders a given non-integer extreme point infeasible. We show how sequential lifting processes can be viewed in our framework, and demonstrate that our approach can obtain facets that are not available to standard lifting methods. We also provide a related analysis for generating “fast cuts”. Finally, we present computational results of the new S-K cuts for solving 0-1 integer programming problems. Our outcomes disclose that the new cuts are capable of reducing the duality gap between optimal continuous and integer feasible solutions more effectively than standard lifted cover inequalities, as used in modern codes such as the CPLEX mixed 0-1 integer programming solver.
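For contrast with the lifted cover inequalities used as the benchmark, here is a minimal cover-inequality separation heuristic for a knapsack row a·x ≤ b — a textbook sketch, not the paper's S-K cut procedure: greedily build a cover from the items with the largest fractional values and report it if Σ_{i∈C} x_i ≤ |C| − 1 is violated.

```python
def cover_cut(a, b, xfrac, eps=1e-9):
    # Heuristic separation of a cover inequality for the knapsack row a.x <= b:
    # greedily collect the items with the largest fractional values until their
    # total weight exceeds b; if sum_{i in C} xfrac[i] > |C| - 1, the cover
    # inequality sum_{i in C} x_i <= |C| - 1 cuts off the fractional point.
    order = sorted(range(len(a)), key=lambda i: -xfrac[i])
    C, w = [], 0
    for i in order:
        C.append(i)
        w += a[i]
        if w > b:
            break
    if w <= b:
        return None                       # the chosen items form no cover
    if sum(xfrac[i] for i in C) > len(C) - 1 + eps:
        return C                          # violated cover inequality found
    return None

# three items of weight 3 into capacity 5: any two items form a cover
cut = cover_cut([3, 3, 3], 5, [0.9, 0.9, 0.1])
```

For the fractional point (0.9, 0.9, 0.1) the heuristic finds the violated cover x_0 + x_1 ≤ 1; for a point deep inside the LP relaxation it correctly reports no cut.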

Journal ArticleDOI
TL;DR: The results of an extensive numerical experience obtained by different algorithms belonging to a recently proposed class of truncated Newton methods for solving large scale unconstrained optimization problems are presented.
Abstract: Recently, in [12] a very general class of truncated Newton methods has been proposed for solving large scale unconstrained optimization problems. In this work we present the results of an extensive numerical experience obtained by different algorithms which belong to the preceding class. This numerical study, besides investigating the best algorithmic choices of the proposed approach, clarifies some significant points which underlie every truncated Newton based algorithm.

Journal ArticleDOI
TL;DR: Under mild conditions, the global convergence of this new algorithm on convex functions is proved, and some numerical experiments show that the new nonmonotone BFGS algorithm is competitive with the BFGS algorithm.
Abstract: In this paper, a new nonmonotone BFGS algorithm for unconstrained optimization is introduced. Under mild conditions, the global convergence of this new algorithm on convex functions is proved. Some numerical experiments show that this new nonmonotone BFGS algorithm is competitive with the BFGS algorithm.
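The key ingredient of such methods is replacing the monotone Armijo test with one against the maximum of the last M function values. The numpy sketch below is a generic nonmonotone BFGS under standard assumptions, an illustration rather than the exact algorithm of the paper:

```python
import numpy as np

def nonmonotone_bfgs(f, grad, x0, M=5, c=1e-4, tol=1e-8, max_iter=200):
    x = np.asarray(x0, dtype=float)
    n = x.size
    H = np.eye(n)                          # inverse-Hessian approximation
    hist = [f(x)]                          # recent values for the nonmonotone test
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -H @ g
        fref = max(hist[-M:])              # reference: max of the last M values
        a = 1.0
        while f(x + a * d) > fref + c * a * (g @ d):
            a *= 0.5                       # backtrack against the relaxed test
        s = a * d
        x_new = x + s
        y = grad(x_new) - g
        sy = s @ y
        if sy > 1e-12:                     # curvature condition; else skip update
            rho = 1.0 / sy
            I = np.eye(n)
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        x = x_new
        hist.append(f(x))
    return x

# convex quadratic test: f(x) = 0.5 x'Ax - b'x
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
sol = nonmonotone_bfgs(f, grad, np.zeros(2))   # exact minimizer is (0.2, 0.4)
```

Because the acceptance test compares against the worst of the last M iterates, occasional increases in f are tolerated, which is exactly what distinguishes the nonmonotone variant from classical BFGS line searches.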

Journal ArticleDOI
TL;DR: It is shown that this abstract bicriterial optimization problem has at least one solution, and numerical results for special parameters using a multiobjective optimization method are presented.
Abstract: In this paper we consider a special optimization problem with two objectives which arises in antenna theory. It is shown that this abstract bicriterial optimization problem has at least one solution. Discretized versions of this problem are also discussed, and the relationships between these finite dimensional problems and the infinite dimensional problem are investigated. Moreover, we present numerical results for special parameters using a multiobjective optimization method.

Journal ArticleDOI
TL;DR: It is shown that asymptotically, under suitable reasonable assumptions, a single inner iteration suffices for the algorithm proposed by Conn et al.
Abstract: This paper considers the number of inner iterations required per outer iteration for the algorithm proposed by Conn et al. [9]. We show that asymptotically, under suitable reasonable assumptions, a single inner iteration suffices.

Journal ArticleDOI
TL;DR: In this article, the authors describe a parallel, non-shared memory implementation of the classical general mixed integer branch and bound algorithm, with experiments on the CM-5 family of parallel processors.
Abstract: This paper describes a parallel, non-shared-memory implementation of the classical general mixed integer branch and bound algorithm, with experiments on the CM-5 family of parallel processors. The main issue in such an implementation is whether task scheduling and certain data-storage functions should be handled by a single processor, or spread among multiple processors. The centralized approach risks creating processing bottlenecks, while the more decentralized implementations differ more from the fundamental serial algorithm. Extensive computational tests on standard MIPLIB problems compare centralized, clustered, and fully decentralized task scheduling methods, using a novel combination of random work scattering and rendezvous-based global load balancing, along with a distributed “control by token” technique. Further experiments compare centralized and distributed schemes for storing heuristic “pseudo-cost” branching data. The distributed storage method is based on continual asynchronous reduction along a tree of redundant storage sites. On average, decentralized task scheduling appears at least as effective as central control, but pseudo-cost storage should be kept as centralized as possible.

Journal ArticleDOI
TL;DR: It is shown that the primal-dual entropy function may provide a satisfactory alternative to the classical primal and dual scalings for linear programming problems, and the possible effects of more general reparametrizations on infeasible-interior-point algorithms are considered.
Abstract: We are motivated by the problem of constructing a primal-dual barrier function whose Hessian induces the (theoretically and practically) popular symmetric primal and dual scalings for linear programming problems. Although this goal is impossible to attain, we show that the primal-dual entropy function may provide a satisfactory alternative. We study primal-dual interior-point algorithms whose search directions are obtained from a potential function based on this primal-dual entropy barrier. We provide polynomial iteration bounds for these interior-point algorithms. Then we illustrate the connections between the barrier function and a reparametrization of the central path equations. Finally, we consider the possible effects of more general reparametrizations on infeasible-interior-point algorithms.

Journal ArticleDOI
TL;DR: Bounds on the length of the primal-dual affine scaling directions associated with a linearly constrained convex program are derived under the following conditions: 1) the problem has a solution satisfying strict complementarity, 2) the Hessian of the objective function satisfies a certain invariance property.
Abstract: This note derives bounds on the length of the primal-dual affine scaling directions associated with a linearly constrained convex program satisfying the following conditions: 1) the problem has a solution satisfying strict complementarity, 2) the Hessian of the objective function satisfies a certain invariance property. We illustrate the usefulness of these bounds by establishing the superlinear convergence of the algorithm presented in Wright and Ralph [22] for solving the optimality conditions associated with a linearly constrained convex program satisfying the above conditions.

Journal ArticleDOI
TL;DR: Implementation results on Intel iPSC/2 and Paragon multiprocessors show that the algorithm significantly speeds up the process of solving MOLPs; the algorithm is shown to be scalable and to give better results for large problems.
Abstract: This paper presents an ADBASE-based parallel algorithm for solving multiple objective linear programs (MOLPs). Job balance, speedup and scalability are of primary interest in evaluating efficiency of the new algorithm. The scalability of a parallel algorithm is a measure of its capacity to increase performance with respect to the number of processors used. Implementation results on Intel iPSC/2 and Paragon multiprocessors show that the algorithm significantly speeds up the process of solving MOLPs, which is understood as generating all or some efficient extreme points and unbounded efficient edges. The algorithm is shown to be scalable and gives better results for large problems. Motivation and justification for solving large MOLPs are also included.

Journal ArticleDOI
TL;DR: The Lennard-Jones potential problem is reformulated as an equality constrained nonlinear programming problem with only linear constraints, which allows the solution to be approached through infeasible configurations, increasing the basin of attraction of the global solution.
Abstract: Minimizing the Lennard-Jones potential, the most-studied model problem for molecular conformation, is an unconstrained global optimization problem with a large number of local minima. In this paper, the problem is reformulated as an equality constrained nonlinear programming problem with only linear constraints. This formulation allows the solution to be approached through infeasible configurations, increasing the basin of attraction of the global solution. In this way the likelihood of finding a global minimizer is increased. An algorithm for solving this nonlinear program is discussed, and results of numerical tests are presented.
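In reduced units the pair potential can be written v(r) = r^-12 − 2 r^-6, which attains its minimum value −1 at r = 1, so the two-atom global optimum is known exactly. Below is a minimal energy evaluator for the unconstrained formulation; the paper's constrained reformulation adds linear-constraint machinery that is not sketched here.

```python
def lj_energy(coords):
    # reduced-unit Lennard-Jones pair potential v(r) = r**-12 - 2 * r**-6,
    # whose minimum value is -1 at r = 1; the total energy sums over all pairs
    e = 0.0
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            r2 = sum((a - b) ** 2 for a, b in zip(coords[i], coords[j]))
            e += r2 ** -6 - 2.0 * r2 ** -3    # powers of r2 avoid a sqrt
    return e

pair_min = lj_energy([(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)])   # two atoms at r = 1
```

For more than a handful of atoms this landscape has exponentially many local minima, which is why global strategies such as the reformulation above are needed.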

Journal ArticleDOI
TL;DR: Numerical and computational aspects of direct methods for large and sparse least squares problems are considered, and a Householder multifrontal scheme and its implementation on sequential and parallel computers are described.
Abstract: Numerical and computational aspects of direct methods for large and sparse least squares problems are considered. After a brief survey of the most often used methods, we summarize the important conclusions made from a numerical comparison in MATLAB. Significantly improved algorithms have during the last 10-15 years made sparse QR factorization attractive, and competitive with previously recommended alternatives. Of particular importance is the multifrontal approach, characterized by low fill-in, dense subproblems and naturally implemented parallelism. We describe a Householder multifrontal scheme and its implementation on sequential and parallel computers. Available software has in practice a great influence on the choice of numerical algorithms. Less appropriate algorithms are thus often used solely because of existing software packages. We briefly survey software packages for the solution of sparse linear least squares problems. Finally, we focus on various applications from optimization, leading to the solution of large and sparse linear least squares problems. In particular, we concentrate on the important case where the coefficient matrix is a fixed general sparse matrix with a variable diagonal matrix below. Interior point methods for constrained linear least squares problems, for example, give rise to such subproblems. Important gains can be made by taking advantage of structure. Closely related is also the choice of numerical method for these subproblems. We discuss why the less accurate normal equations tend to be sufficient in many applications.
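The trade-off discussed at the end, normal equations versus QR, can be seen in a dense toy example (numpy, illustrative only): both solve min ‖Ax − b‖₂, but the normal equations square the condition number of A, while QR factors A directly.

```python
import numpy as np

# min ||Ax - b||_2 for a small dense example (a line fit)
A = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0]])
b = np.array([2.1, 2.9, 4.2, 4.8])

# QR: factor A itself, so the effective conditioning is cond(A)
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

# normal equations: cheaper, but cond(A'A) = cond(A)**2
x_ne = np.linalg.solve(A.T @ A, A.T @ b)

diff = float(np.abs(x_qr - x_ne).max())
# least squares optimality: the residual is orthogonal to the columns of A
resid_orth = float(np.abs(A.T @ (b - A @ x_qr)).max())
```

On a well-conditioned problem like this one the two solutions agree to machine precision; the difference only shows when cond(A) is large, which is the situation the abstract's closing remark addresses.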

Journal ArticleDOI
TL;DR: Examples of applications of the jump number problem are given and a new heuristic algorithm and an exact algorithm are proposed for the general case.
Abstract: The jump number of a partially ordered set (poset) P is the minimum number of incomparable adjacent pairs (jumps) in some linear extension of P. The problem of finding a linear extension of P with the minimum number of jumps (the jump number problem) is known to be NP-hard in general and, to the best of our knowledge, no exact algorithm for general posets has been developed. In this paper, we give examples of applications of this problem and propose for the general case a new heuristic algorithm and an exact algorithm. The performance of both algorithms is experimentally evaluated on a set of randomly generated test problems.
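The definitions can be made concrete with a brute-force reference implementation — exponential enumeration usable only for tiny posets, which is precisely why heuristic and exact algorithms like those in the paper are needed:

```python
from itertools import permutations

def jumps(order, leq):
    # a jump is a consecutive pair (a, b) with a not below b in the poset
    return sum(1 for a, b in zip(order, order[1:]) if not leq(a, b))

def jump_number(elems, leq):
    # brute force over all linear extensions; exponential, tiny posets only
    best = None
    for p in permutations(elems):
        is_ext = all(not leq(p[j], p[i])
                     for i in range(len(p)) for j in range(i + 1, len(p)))
        if is_ext:
            c = jumps(p, leq)
            best = c if best is None else min(best, c)
    return best

# poset made of two disjoint 2-chains: a < b and c < d
rel = {('a', 'a'), ('b', 'b'), ('c', 'c'), ('d', 'd'), ('a', 'b'), ('c', 'd')}
leq = lambda x, y: (x, y) in rel
jn = jump_number('abcd', leq)
```

For the two disjoint chains the extension a, b, c, d has a single jump (between b and c), and no extension can do better since the poset is not a chain.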

Journal ArticleDOI
TL;DR: It is shown that the proposed algorithm provides strongly convergent solutions of the ARE, and the convergence of optimal solutions as well as of the associated performance index is established.
Abstract: An optimal control problem governed by a coupled hyperbolic-parabolic “like” dynamics arising in structural acoustic problems is considered. The control operator is assumed to be unbounded on the space of finite energy (for the so-called boundary or point control problems). A numerical algorithm (based on FEM methods) for computation of discrete solutions to Algebraic Riccati Equations (ARE) is formulated. It is shown that the proposed algorithm provides strongly convergent solutions of the ARE. As a result, the convergence of optimal solutions as well as the associated performance index is established.

Journal ArticleDOI
TL;DR: This paper shows that constant cost nondegeneracy of an LP problem is equivalent to the condition that the union of all minimal faces of the feasible polyhedron be equal to the set of feasible points satisfying a certain generalized strict complementarity condition.
Abstract: This paper deals with nondegeneracy of polyhedra and linear programming (LP) problems. We allow for the possibility that the polyhedra and the feasible polyhedra of the LP problems under consideration be non-pointed. (A polyhedron is pointed if it has a vertex.) With respect to a given polyhedron, we consider two notions of nondegeneracy and then provide several equivalent characterizations for each of them. With respect to LP problems, we study the notion of constant cost nondegeneracy first introduced by Tsuchiya [25] under a different name, namely dual nondegeneracy. (We do not follow this terminology since the term dual nondegeneracy is already used to refer to a related but different type of nondegeneracy.) We show two main results about constant cost nondegeneracy of an LP problem. The first one shows that constant cost nondegeneracy of an LP problem is equivalent to the condition that the union of all minimal faces of the feasible polyhedron be equal to the set of feasible points satisfying a certain generalized strict complementarity condition. When the feasible polyhedron of an LP is nondegenerate, the second result shows that constant cost nondegeneracy is equivalent to the condition that the set of feasible points satisfying the generalized condition be equal to the set of feasible points satisfying the same complementarity condition strictly. For the purpose of giving a preview of the paper, the above results specialized to the context of polyhedra and LP problems in standard form are described in the introduction.

Journal ArticleDOI
TL;DR: A necessary and sufficient condition for identification of dominated columns, which correspond to one type of redundant integer variables, in the matrix of a general Integer Programming problem, is derived.
Abstract: A necessary and sufficient condition for identification of dominated columns, which correspond to one type of redundant integer variables, in the matrix of a general Integer Programming problem, is derived. The given condition extends our recent work on eliminating dominated integer variables in Knapsack problems, and revises a recently published procedure for reducing the number of variables in general Integer Programming problems given in the literature. A report on computational experiments for one class of large scale Knapsack problems, illustrating the function of this approach, is included.

Journal ArticleDOI
TL;DR: An algorithm without line search for solving continuous type facility location problems is proposed, and its global convergence and linear convergence rate are proved.
Abstract: In this paper, we extend the ordinary discrete type facility location problems to continuous type ones. Unlike the discrete type facility location problem, in which the objective function isn't everywhere differentiable, the objective function in the continuous type facility location problem is strictly convex and continuously differentiable. An algorithm without line search for solving the continuous type facility location problems is proposed, and its global convergence and linear convergence rate are proved. Numerical experiments illustrate that the algorithm suggested in this paper has a smaller amount of computation and a quicker convergence rate than the gradient method and the conjugate direction method in some sense.

Journal ArticleDOI
TL;DR: It is proved that the existence of a polynomial time ρ-approximation algorithm, for a class of independent set problems, leads to a polynomial time approximation algorithm with approximation ratio strictly smaller than 2 for vertex covering, while the non-existence of such an algorithm induces a lower bound on the ratio of every polynomial time approximation algorithm for vertex covering.
Abstract: We prove that the existence of a polynomial time ρ-approximation algorithm (where ρ < 1 is a fixed constant) for a class of independent set problems leads to a polynomial time approximation algorithm with approximation ratio strictly smaller than 2 for vertex covering, while the non-existence of such an algorithm induces a lower bound on the ratio of every polynomial time approximation algorithm for vertex covering. We also prove a similar result for a (maximization) convex programming problem including quadratic programming as a subproblem.
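The classical constructive side of this connection is the matching-based 2-approximation for vertex cover — a textbook sketch, not the reduction studied in the paper: repeatedly take both endpoints of any uncovered edge; the chosen edges form a matching, so the cover is at most twice any optimal cover.

```python
def vertex_cover_2approx(edges):
    # take both endpoints of any yet-uncovered edge; the picked edges form a
    # matching, so |cover| <= 2 * |matching| <= 2 * |optimal cover|
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

edges = [(0, 1), (0, 2), (0, 3), (0, 4)]   # star graph: the optimal cover is {0}
cover = vertex_cover_2approx(edges)
```

Improving the factor 2 for general graphs is precisely the open question that makes hardness transfers like the one in this paper interesting.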

Journal ArticleDOI
TL;DR: A modification of the Hellerman-Rarick P3 algorithm is presented which includes a procedure to recover from this type of numerical instability; the recovery procedure is integrated into P3 in such a way that all previous work can be maintained, reducing the likelihood of additional recovery.
Abstract: Most of the preassigned pivot agenda algorithms that extend the Hellerman-Rarick P3 algorithm assume that the input matrix is nonsingular. Due to numerical instability, this assumption may be violated and these algorithms fail. We present a modification of the P3 algorithm which includes a procedure to recover from this type of numerical instability. The recovery procedure is integrated into P3 in such a way that all previous work can be maintained and it reduces the likelihood that additional recovery will be required.

Journal ArticleDOI
TL;DR: A methodology is presented for applying annealing techniques to multisource absolute location problems on graphs, and a class of new algorithms is described, which starts from the iterative “cluster-and-locate” algorithm and relies upon the relaxation of the integrality constraints on allocation variables.
Abstract: A methodology is presented for applying annealing techniques to multisource absolute location problems on graphs. Two kinds of objective functions are considered: barycenters and centers. A class of new algorithms is described: its development starts from the iterative “cluster-and-locate” algorithm and relies upon the relaxation of the integrality constraints on allocation variables. Experimental results are reported.