
Showing papers on "Discrete optimization published in 2003"


Journal ArticleDOI
TL;DR: This work proposes a robust integer programming problem of moderately larger size that allows controlling the degree of conservatism of the solution in terms of probabilistic bounds on constraint violation, and proposes an algorithm for robust network flows that solves the robust counterpart by solving a polynomial number of nominal minimum cost flow problems in a modified network.
Abstract: We propose an approach to address data uncertainty for discrete optimization and network flow problems that allows controlling the degree of conservatism of the solution, and is computationally tractable both practically and theoretically. In particular, when both the cost coefficients and the data in the constraints of an integer programming problem are subject to uncertainty, we propose a robust integer programming problem of moderately larger size that allows controlling the degree of conservatism of the solution in terms of probabilistic bounds on constraint violation. When only the cost coefficients are subject to uncertainty and the problem is a 0−1 discrete optimization problem on n variables, then we solve the robust counterpart by solving at most n+1 instances of the original problem. Thus, the robust counterpart of a polynomially solvable 0−1 discrete optimization problem remains polynomially solvable. In particular, robust matching, spanning tree, shortest path, matroid intersection, etc. are polynomially solvable. We also show that the robust counterpart of an NP-hard α-approximable 0−1 discrete optimization problem remains α-approximable. Finally, we propose an algorithm for robust network flows that solves the robust counterpart by solving a polynomial number of nominal minimum cost flow problems in a modified network.
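The reduction described for cost uncertainty is simple enough to sketch. The code below is an illustrative outline of the n+1-subproblem idea for a minimization problem with an integer budget of uncertainty, not the authors' own implementation; nominal_solver(costs) is a hypothetical user-supplied callback that returns an optimal 0-1 solution and its objective value for the nominal problem under the given cost vector, c holds the nominal costs, d the cost deviations, and gamma the budget.

```python
def solve_robust(nominal_solver, c, d, gamma):
    """Sketch of the n+1-subproblem reduction for robust 0-1 minimization
    with uncertain costs c_j in [c_j, c_j + d_j] and budget gamma.
    Assumes `nominal_solver(costs)` returns (x, value) for the nominal
    problem with the supplied cost vector."""
    n = len(c)
    # Sort deviations in non-increasing order and append the value 0.
    order = sorted(range(n), key=lambda j: -d[j])
    thetas = [d[j] for j in order] + [0.0]

    best_x, best_val = None, float("inf")
    for theta in thetas:                      # at most n + 1 nominal solves
        # Modified nominal costs: c_j + max(d_j - theta, 0).
        mod_costs = [c[j] + max(d[j] - theta, 0.0) for j in range(n)]
        x, val = nominal_solver(mod_costs)
        total = gamma * theta + val
        if total < best_val:
            best_x, best_val = x, total
    return best_x, best_val
```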

1,747 citations


Book
30 Jun 2003
TL;DR: This book examines the mathematical governing principles of simulation-based optimization, thereby providing the reader with the ability to model relevant real-life problems using these techniques, and outlines the computational technology underlying these methods.
Abstract: From the Publisher: "Simulation-Based Optimization: Parametric Optimization Techniques and Reinforcement Learning introduces the evolving area of simulation-based optimization. Since it became possible to analyze random systems using computers, scientists and engineers have sought the means to optimize systems using simulation models. Only recently, however, has this objective had success in practice. Cutting-edge work in computational operations research, including non-linear programming (simultaneous perturbation), dynamic programming (reinforcement learning), and game theory (learning automata) has made it possible to use simulation in conjunction with optimization techniques. As a result, this research has given simulation added dimensions and power that it did not have in the recent past." "The book's objective is two-fold: (1) It examines the mathematical governing principles of simulation-based optimization, thereby providing the reader with the ability to model relevant real-life problems using these techniques. (2) It outlines the computational technology underlying these methods. Taken together, these two aspects demonstrate that the mathematical and computational methods discussed in this book do work." "Broadly speaking, the book has two parts: (1) parametric (static) optimization and (2) control (dynamic) optimization. Some of the book's special features are: an accessible introduction to reinforcement learning and parametric-optimization techniques; a step-by-step description of several algorithms of simulation-based optimization; a clear and simple introduction to the methodology of neural networks; a gentle introduction to convergence analysis of some of the methods enumerated above; and Computer programs for many algorithms of simulation-based optimization." This book is written for students and researchers in the fields of engineering (electrical, industrial and computer), computer science, operations research, management science, and applied mathematics.
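One of the parametric-optimization techniques named above, simultaneous perturbation, can be summarized in a few lines. The following is a hedged, generic SPSA sketch and not code from the book; loss(theta) stands for a user-supplied, possibly noisy simulation estimate of the objective, and the gain-sequence constants are conventional defaults.

```python
import numpy as np

def spsa_minimize(loss, theta0, iterations=1000, a=0.1, c=0.1,
                  alpha=0.602, gamma=0.101, seed=0):
    """Minimal simultaneous perturbation stochastic approximation (SPSA):
    each iteration estimates the whole gradient from just two (noisy)
    evaluations of the simulation-based loss."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(1, iterations + 1):
        ak = a / k ** alpha                                # step-size sequence
        ck = c / k ** gamma                                # perturbation size
        delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Bernoulli +/-1 directions
        # Two loss evaluations give an estimate of the full gradient.
        g_hat = (loss(theta + ck * delta) - loss(theta - ck * delta)) / (2 * ck * delta)
        theta = theta - ak * g_hat
    return theta
```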

442 citations


Book
01 Jan 2003
TL;DR: This book introduces applied optimization through the hazardous waste blending problem and explores linear programming, nonlinear programming, discrete optimization, global optimization, optimization under uncertainty, multi-objective optimization, optimal control, and stochastic optimal control.
Abstract: Provides well-written self-contained chapters, including problem sets and exercises, making it ideal for the classroom setting; Introduces applied optimization to the hazardous waste blending problem; Explores linear programming, nonlinear programming, discrete optimization, global optimization, optimization under uncertainty, multi-objective optimization, optimal control and stochastic optimal control; Includes an extensive bibliography at the end of each chapter and an index; GAMS files of case studies for Chapters 2, 3, 4, 5, and 7 are linked to http://www.springer.com/math/book/978-0-387-76634-8; Solutions manual available upon adoption.

287 citations


Journal ArticleDOI
TL;DR: This work describes the algorithmic and computational developments that led to the first exact solutions of a number of long-open QAPs, including those posed by Steinberg (1961), Nugent et al. (1968), and Krarup (1972), as well as recent work that is likely to result in the solution of even more difficult instances.
Abstract: The quadratic assignment problem (QAP) is notoriously difficult for exact solution methods. In the past few years a number of long-open QAPs, including those posed by Steinberg (1961), Nugent et al. (1968) and Krarup (1972) were solved to optimality for the first time. The solution of these problems has utilized both new algorithms and novel computing structures. We describe these developments, as well as recent work which is likely to result in the solution of even more difficult instances.
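To fix notation, the QAP seeks a permutation assigning facilities to locations that minimizes the total flow-weighted distance. The paper is about exact branch-and-bound solution of such instances; the sketch below only states the objective and a pairwise-swap local search as a simple baseline, with flow and dist as illustrative input matrices.

```python
import random

def qap_cost(perm, flow, dist):
    """QAP objective: sum_{i,j} flow[i][j] * dist[perm[i]][perm[j]]."""
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def qap_local_search(flow, dist, seed=0):
    """First-improvement pairwise-swap local search for the QAP.
    Illustrative only: the long-open instances discussed in the paper
    were solved with branch and bound and strong lower bounds."""
    rng = random.Random(seed)
    n = len(flow)
    perm = list(range(n))
    rng.shuffle(perm)
    best = qap_cost(perm, flow, dist)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 1, n):
                perm[i], perm[j] = perm[j], perm[i]      # try a swap
                cost = qap_cost(perm, flow, dist)
                if cost < best:
                    best, improved = cost, True
                else:
                    perm[i], perm[j] = perm[j], perm[i]  # undo the swap
    return perm, best
```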

148 citations


Journal ArticleDOI
TL;DR: New formulations and algorithms for solving MIDO problems are presented, based on decomposition into primal (dynamic optimization) and master (mixed-integer linear programming) sub-problems.

128 citations


Journal ArticleDOI
David Mester1, Yefim Ronin1, Dina Minkov1, E. Nevo1, Abraham B. Korol1 
01 Dec 2003-Genetics
TL;DR: In this article, a fast and reliable algorithm developed for the TSP and based on evolution-strategy discrete optimization was applied for multilocus ordering on the basis of pairwise recombination frequencies.
Abstract: This article is devoted to the problem of ordering in linkage groups with many dozens or even hundreds of markers. The ordering problem belongs to the field of discrete optimization on a set of all possible orders, amounting to n!/2 for n loci; hence it is considered an NP-hard problem. Several authors attempted to employ the methods developed in the well-known traveling salesman problem (TSP) for multilocus ordering, using the assumption that for a set of linked loci the true order will be the one that minimizes the total length of the linkage group. A novel, fast, and reliable algorithm developed for the TSP and based on evolution-strategy discrete optimization was applied in this study for multilocus ordering on the basis of pairwise recombination frequencies. The quality of derived maps under various complications (dominant vs. codominant markers, marker misclassification, negative and positive interference, and missing data) was analyzed using simulated data with approximately 50-400 markers. High performance of the employed algorithm allows systematic treatment of the problem of verification of the obtained multilocus orders on the basis of computing-intensive bootstrap and/or jackknife approaches for detecting and removing questionable marker scores, thereby stabilizing the resulting maps. Parallel calculation technology can easily be adopted for further acceleration of the proposed algorithm. Real data analysis (on maize chromosome 1 with 230 markers) is provided to illustrate the proposed methodology.
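To make the TSP analogy concrete, the sketch below shows a bare-bones (1+1) evolution-strategy style search that orders markers so as to minimize the total length of the linkage group, taken here as the sum of adjacent pairwise recombination frequencies. It assumes only a symmetric matrix rf of pairwise recombination frequencies and is a simplified illustration, not the authors' algorithm, which additionally handles verification, interference, and missing data.

```python
import random

def map_length(order, rf):
    """Total linkage-group length: sum of pairwise recombination
    frequencies rf[a][b] between adjacent markers in the order."""
    return sum(rf[order[k]][order[k + 1]] for k in range(len(order) - 1))

def es_order_markers(rf, iters=20000, seed=0):
    """(1+1) evolution-strategy sketch for multilocus ordering.
    Mutation reverses a random segment (a 2-opt-like move) and the
    offspring replaces the parent if the map does not get longer."""
    rng = random.Random(seed)
    n = len(rf)
    order = list(range(n))
    rng.shuffle(order)
    best = map_length(order, rf)
    for _ in range(iters):
        i, j = sorted(rng.sample(range(n), 2))
        cand = order[:i] + order[i:j + 1][::-1] + order[j + 1:]
        length = map_length(cand, rf)
        if length <= best:
            order, best = cand, length
    return order, best
```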

122 citations


01 Jan 2003
TL;DR: A metaheuristic based on annealing-like restarts to diversify and intensify local searches is proposed for solving the vehicle routing problem with time windows; extensive comparisons indicate it is comparable to the best in the published literature.
Abstract: In this paper, we propose a metaheuristic based on annealing-like restarts to diversify and intensify local searches for solving the vehicle routing problem with time windows (VRPTW). Using Solomon's benchmark instances for the problem, our method obtained seven new best results and equaled 19 other best results. Extensive comparisons indicate that our method is comparable to the best in the published literature. This approach is flexible and can be extended to handle other variants of vehicle routing problems and other combinatorial optimization problems.
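The control structure of such a method can be outlined as follows. This is a generic, hedged skeleton of local search with annealing-like restarts; initial_solution, cost, and neighbor are placeholder callbacks for the VRPTW-specific components (route construction, penalized cost with time-window checks, and move operators), not names from the paper.

```python
import math
import random

def restart_metaheuristic(initial_solution, cost, neighbor,
                          restarts=10, iters=2000, t0=1.0, alpha=0.995, seed=0):
    """Local search with annealing-like restarts: each restart resumes from
    the incumbent (intensification) but resets the temperature so that
    worse moves are again accepted for a while (diversification)."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(restarts):
        s = best if best is not None else initial_solution()
        s_cost, t = cost(s), t0                  # reset the temperature
        for _ in range(iters):
            cand = neighbor(s, rng)
            delta = cost(cand) - s_cost
            # Accept improvements, and worse moves with Boltzmann probability.
            if delta <= 0 or rng.random() < math.exp(-delta / max(t, 1e-12)):
                s, s_cost = cand, s_cost + delta
                if s_cost < best_cost:
                    best, best_cost = s, s_cost
            t *= alpha                           # cool down within the run
    return best, best_cost
```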

112 citations


Book
05 Sep 2003
TL;DR: This book covers large-scale PDE-constrained optimization, with chapters including the SIERRA Framework for Developing Advanced Parallel Mechanics Applications and rSQP++, an Object-Oriented Framework for Successive Quadratic Programming.
Abstract: I Introduction -- Large-Scale PDE-Constrained Optimization: An Introduction -- II Large-Scale CFD Applications -- Nonlinear Elimination in Aerodynamic Analysis and Design Optimization -- Optimization of Large-Scale Reacting Flows using MPSalsa and Sequential Quadratic Programming -- III Multifidelity Models and Inexactness -- First-Order Approximation and Model Management in Optimization -- Multifidelity Global Optimization Using DIRECT -- Inexactness Issues in the Lagrange-Newton-Krylov-Schur Method for PDE-constrained Optimization -- IV Sensitivities for PDE-based Optimization -- Solution Adapted Mesh Refinement and Sensitivity Analysis for Parabolic Partial Differential Equation Systems -- Challenges and Opportunities in Using Automatic Differentiation with Object-Oriented Toolkits for Scientific Computing -- Piggyback Differentiation and Optimization -- V NLP Algorithms and Inequality Constraints -- Assessing the Potential of Interior Methods for Nonlinear Optimization -- An Interior-Point Algorithm for Large Scale Optimization -- SQP SAND Strategies that Link to Existing Modeling Systems -- Interior Methods For a Class of Elliptic Variational Inequalities -- Hierarchical Control of a Linear Diffusion Equation -- VI Time-Dependent Problems -- A Sequential Quadratic Programming Method for Nonlinear Model Predictive Control -- Reduced Order Modelling Approaches to PDE-Constrained Optimization Based on Proper Orthogonal Decomposition -- Adaptive Simulation, the Adjoint State Method, and Optimization -- VII Frameworks for PDE-Constrained Optimization -- 18 The SIERRA Framework for Developing Advanced Parallel Mechanics Applications -- rSQP++: An Object-Oriented Framework for Successive Quadratic Programming -- Sundance Rapid Prototyping Tool for Parallel PDE Optimization -- Color Plates

106 citations


Journal ArticleDOI
TL;DR: Two state-of-the-art NLP solvers, SNOPT and IPOPT, are compared for dynamic optimization on a number of challenging control scenarios, and some of the advantages of IPOPT are illustrated.

87 citations


Proceedings ArticleDOI
Li-Biao Zhang1, Chunguang Zhou1, Xiangdong Liu1, Z.Q. Ma1, Ming Ma1, Ying-Chang Liang1 
08 Dec 2003
TL;DR: An algorithm for solving multiobjective optimization problems based on PSO is presented, improving the way the global and individual best positions are selected; numerical simulations show the effectiveness of the proposed algorithm.
Abstract: An algorithm for solving multiobjective optimization problems based on PSO is presented, built on an improved selection of the global and individual best positions. The search for the Pareto-optimal set of the multiobjective optimization problem is performed. Numerical simulations show the effectiveness of the proposed algorithm.
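A bare-bones Pareto-archive PSO conveys the structure. The sketch below is an assumption-laden illustration rather than the authors' algorithm: the global guide is drawn at random from an external archive of non-dominated solutions, and the personal best is replaced when dominated (or by a coin flip when incomparable); objectives(x) returns a tuple of values to minimize, and bounds is a common (lo, hi) box for all dimensions.

```python
import random

def dominates(a, b):
    """Pareto dominance for minimization of objective vectors a and b."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def mopso(objectives, dim, bounds, swarm=30, iters=200,
          w=0.4, c1=1.5, c2=1.5, seed=0):
    """Minimal multiobjective PSO with an external non-dominated archive."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    pbest_f = [objectives(p) for p in pos]
    archive = []                                 # list of (position, objectives)

    def update_archive(x, fx):
        if any(dominates(f, fx) for _, f in archive):
            return
        archive[:] = [(p, f) for p, f in archive if not dominates(fx, f)]
        archive.append((x[:], fx))

    for x, fx in zip(pos, pbest_f):
        update_archive(x, fx)

    for _ in range(iters):
        for i in range(swarm):
            gbest = rng.choice(archive)[0]       # global guide from the archive
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            fx = objectives(pos[i])
            if dominates(fx, pbest_f[i]) or (not dominates(pbest_f[i], fx)
                                             and rng.random() < 0.5):
                pbest[i], pbest_f[i] = pos[i][:], fx
            update_archive(pos[i], fx)
    return archive
```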

72 citations


Proceedings ArticleDOI
01 Dec 2003
TL;DR: The paper demonstrates how discrete exterior calculus tools may be useful in computer vision and graphics and shows some example applications using variational problems from computer graphics and mechanics to demonstrate that formulating the problem discretely and using discrete methods for solution can lead to efficient algorithms.
Abstract: The paper demonstrates how discrete exterior calculus (DEC) tools may be useful in computer vision and graphics. A variational approach provides a link with mechanics. Our development of DEC includes discrete differential forms, discrete vector fields and the operators acting on these. This development of a discrete calculus, when combined with the methods of discrete mechanics and other recent work, is likely to have promising applications in a field like computer vision, which offers such a rich variety of challenging variational problems to be solved computationally. As a specific example, we consider the problem of template matching and show how numerical methods derived from a discrete exterior calculus are starting to play an important role in solving the equations of averaged template matching. We also show some example applications using variational problems from computer graphics and mechanics to demonstrate that formulating the problem discretely and using discrete methods for solution can lead to efficient algorithms.

Journal ArticleDOI
TL;DR: A new approach for reducing the number of the fitness function evaluations required by a genetic algorithm for optimization problems with mixed continuous and discrete design variables is described.

Journal ArticleDOI
TL;DR: Two new characterizations of M- and M'-convex functions are given, generalizing Gul and Stacchetti's results on the equivalence among the single improvement condition, the gross substitutes condition, and the no complementarities condition for set functions.

Journal ArticleDOI
TL;DR: The next-generation framework is described, which improves scalability and further abstracts many of the notions inherent in parallel BCP, making it possible to implement and parallelize more general classes of algorithms.
Abstract: In discrete optimization, most exact solution approaches are based on branch and bound, which is conceptually easy to parallelize in its simplest forms. More sophisticated variants, such as the so-called branch, cut, and price algorithms, are more difficult to parallelize because of the need to share large amounts of knowledge discovered during the search process. In the first part of the paper, we survey the issues involved in parallelizing such algorithms. We then review the implementation of SYMPHONY and COIN/BCP, two existing frameworks for implementing parallel branch, cut, and price. These frameworks have limited scalability, but are effective on small numbers of processors. Finally, we briefly describe our next-generation framework, which improves scalability and further abstracts many of the notions inherent in parallel BCP, making it possible to implement and parallelize more general classes of algorithms.
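For orientation, a serial best-first branch-and-bound skeleton is sketched below. In a parallel branch, cut, and price framework such as SYMPHONY or COIN/BCP, the node pool is what gets distributed across processes, while the incumbent, together with the cuts and columns discovered during the search, must be shared among them; that sharing is exactly the scalability issue the paper analyzes. All callbacks here are hypothetical placeholders.

```python
import heapq

def branch_and_bound(root, lower_bound, branch, is_solution, value):
    """Generic best-first branch-and-bound skeleton (serial).
    lower_bound(node), branch(node), is_solution(node), and value(node)
    are user-supplied, problem-specific callbacks."""
    incumbent, incumbent_value = None, float("inf")
    pool = [(lower_bound(root), 0, root)]        # (bound, tie-breaker, node)
    counter = 1
    while pool:
        bound, _, node = heapq.heappop(pool)
        if bound >= incumbent_value:
            continue                             # prune by bound
        if is_solution(node):
            if value(node) < incumbent_value:
                incumbent, incumbent_value = node, value(node)
            continue
        for child in branch(node):               # work other processes could take
            child_bound = lower_bound(child)
            if child_bound < incumbent_value:
                heapq.heappush(pool, (child_bound, counter, child))
                counter += 1
    return incumbent, incumbent_value
```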


Journal ArticleDOI
TL;DR: A direct solution method for the HTC problem that is based on semidefinite programming (SDP) is presented; the solution shows only minor mismatches in the integer variables, which are easily corrected by a heuristic method.
Abstract: Hydrothermal coordination (HTC) is a problem that has been solved using direct and decomposition solution methods. The latter has shown shorter solution times than the former. A direct solution method for the HTC problem that is based on semidefinite programming (SDP) is presented in this paper. SDP is a convex programming method with polynomial solution time. The variables of the problem are arranged in a vector, which is used to construct a positive-definite matrix; the optimal solution is then found in the cone defined by the set of positive-definite matrices. An HTC problem can be formulated as a convex optimization problem without explicitly stating the integer value requirements for the thermal plants' discrete variables. Thus, it is possible to replace the nonconvex integer-value constraints by convex quadratic constraints, and then use SDP. Due to its polynomial complexity, it is not necessary to use decomposition or other tools for discrete optimization, such as enumeration schemes or other exponential-time procedures. No initial relaxation is necessary when applying an SDP algorithm; the solution shows only minor mismatches in the integer variables, which are easily corrected by a heuristic method. Different size test cases are presented. The solution quality is assessed by comparing with that produced by a Lagrangian relaxation method.
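The convexification step can be illustrated on a generic 0-1 program. The sketch below is not the paper's HTC model; it uses the cvxpy modeling library merely as a convenient way to write the semidefinite relaxation in which each binary requirement x_i^2 = x_i is lifted into a positive semidefinite matrix variable, followed by the kind of rounding heuristic the abstract mentions.

```python
import cvxpy as cp
import numpy as np

def sdp_relaxation(c, A, b):
    """SDP relaxation sketch for min c'x s.t. Ax <= b, x in {0,1}^n.
    X plays the role of [1 x'; x xx']: it is kept positive semidefinite
    and diag(xx') = x relaxes the binary constraints x_i^2 = x_i."""
    c, A, b = np.asarray(c, float), np.asarray(A, float), np.asarray(b, float)
    n = len(c)
    X = cp.Variable((n + 1, n + 1), symmetric=True)
    x = X[0, 1:]                      # the (relaxed) decision vector
    constraints = [
        X >> 0,                       # positive semidefinite cone
        X[0, 0] == 1,
        cp.diag(X)[1:] == x,          # relaxation of x_i^2 = x_i
        A @ x <= b,
    ]
    prob = cp.Problem(cp.Minimize(c @ x), constraints)
    prob.solve()
    # Round the near-integer entries, mirroring the heuristic correction step.
    return np.round(x.value), prob.value
```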

Journal ArticleDOI
TL;DR: In this paper, an iterative optimization algorithm using orthogonal arrays is proposed for design in discrete space; matrix experiments are conducted to show the validity of the proposed method, and the results are compared with those from a genetic algorithm.

Journal ArticleDOI
TL;DR: The purpose of this work is to construct optimal discrete gradient vector fields, where optimality means having the minimum number of critical elements; the problem is stated equivalently in terms of maximal hyperforests of hypergraphs.
Abstract: Morse theory is a fundamental tool for investigating the topology of smooth manifolds. This tool has been extended to discrete structures by Forman, which allows combinatorial analysis and direct computation. This theory relies on discrete gradient vector fields, whose critical elements describe the topology of the structure. The purpose of this work is to construct optimal discrete gradient vector fields, where optimality means having the minimum number of critical elements. The problem is equivalently stated in terms of maximal hyperforests of hypergraphs. Deduced from this theoretical result, an algorithm constructing almost optimal discrete gradient fields is provided. The optimal parts of the algorithm are proved, and the part of exponential complexity is replaced by heuristics. Although reaching optimality is MAX-SNP hard, the experiments on odd topological models are almost always optimal.

Journal ArticleDOI
Serpil Sayin1
TL;DR: This work presents a method that can find discrete representations of the efficient set according to a specified level of quality, based on mathematical programming tools and can be implemented relatively easily when the domain of interest is a polyhedron.
Abstract: An important issue in multiple objective mathematical programming is finding discrete representations of the efficient set. Because discrete points can be directly studied by a decision maker, a discrete representation can serve as the solution to the multiple objective problem at hand. However, the discrete representation must be of acceptable quality to ensure that a most-preferred solution identified by a decision maker is of acceptable quality. Recently, attributes for measuring the quality of discrete representations have been proposed. Although discrete representations can be obtained in many different ways, and their quality evaluated afterwards, the ultimate goal should be to find such representations so as to conform to specified quality standards. We present a method that can find discrete representations of the efficient set according to a specified level of quality. The procedure is based on mathematical programming tools and can be implemented relatively easily when the domain of interest is a polyhedron. The nonconvexity of the efficient set is dealt with through a coordinated decomposition approach. We conduct computational experiments and report results.
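As a concrete illustration of what a discrete representation looks like computationally, the sketch below samples the efficient frontier of a bi-objective linear program with a simple epsilon-constraint sweep using scipy.optimize.linprog (variables nonnegative, bounded feasible region assumed). The paper's method goes further by enforcing a specified level of quality (coverage and spacing) of the representation; none of that is reproduced here.

```python
import numpy as np
from scipy.optimize import linprog

def efficient_representation(c1, c2, A_ub, b_ub, points=11):
    """Epsilon-constraint sampling of the efficient set of
    min (c1'x, c2'x)  s.t.  A_ub x <= b_ub,  x >= 0."""
    c1, c2 = np.asarray(c1, float), np.asarray(c2, float)
    A_ub, b_ub = np.asarray(A_ub, float), np.asarray(b_ub, float)
    # Range of the second objective over the feasible region.
    lo = linprog(c2, A_ub=A_ub, b_ub=b_ub).fun
    hi = -linprog(-c2, A_ub=A_ub, b_ub=b_ub).fun
    representation = []
    for eps in np.linspace(lo, hi, points):
        # min c1'x subject to the original constraints plus c2'x <= eps.
        res = linprog(c1, A_ub=np.vstack([A_ub, c2]),
                      b_ub=np.append(b_ub, eps))
        if res.success:
            representation.append((res.x, (c1 @ res.x, c2 @ res.x)))
    return representation
```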

Journal ArticleDOI
TL;DR: The paper examines the efficiency of soft computing techniques in structural optimization, in particular algorithms based on evolution strategies combined with neural networks, for solving large-scale, continuous or discrete structural optimization problems.

Patent
20 Mar 2003
TL;DR: In this article, a method is provided for collaboratively solving an optimization problem using at least first optimization software and second optimization software each having at least partial information concerning the optimization problem.
Abstract: In one embodiment, a method is provided for collaboratively solving an optimization problem using at least first optimization software and second optimization software each having at least partial information concerning the optimization problem. The method includes: (1) determining a solution to a first sub-problem of the optimization problem using the first optimization software based on the at least partial information concerning the optimization problem known to the first optimization software; (2) communicating from the first optimization software to the second optimization software the solution to the first sub-problem and information concerning one or more penalties for deviating from the solution to the first sub-problem; and (3) determining a solution to a second sub-problem of the optimization problem using the second optimization software based on the at least partial information concerning the optimization problem known to the second optimization software, the communicated solution to the first sub-problem, and the communicated information concerning one or more penalties for deviating from the solution to the first sub-problem.
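The three steps amount to a simple coordination protocol, sketched below in deliberately speculative form: solve_sub1(target, weight) and solve_sub2(target, weight) are hypothetical wrappers around the two optimization packages, each solving its own sub-problem plus a penalty of weight times the deviation from the communicated target, and the iterated exchange beyond the first pass is an assumption rather than something stated in the patent.

```python
def collaborative_solve(solve_sub1, solve_sub2, penalty_weight=10.0, rounds=3):
    """Sketch of penalty-based collaboration between two optimizers that
    each hold only partial information about the overall problem.
    target=None means no coupling penalty is applied."""
    x1 = solve_sub1(None, 0.0)                 # step (1): first sub-problem alone
    x2 = solve_sub2(x1, penalty_weight)        # steps (2)-(3): penalized second solve
    for _ in range(rounds - 1):                # optional further exchanges (assumed)
        x1 = solve_sub1(x2, penalty_weight)
        x2 = solve_sub2(x1, penalty_weight)
    return x1, x2
```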

Proceedings ArticleDOI
Ren-Jye Yang1, Lei Gu1
07 Apr 2003
TL;DR: In this paper, several approximate RBDO methods, which convert the double loop to a single loop, are coded, discussed, and tested against a double loop algorithm on four design problems.
Abstract: Traditional reliability-based design optimization (RBDO) requires a double loop iteration process. The inner optimization loop is to find the most probable point (MPP) and the outer is the regular optimization loop to optimize the RBDO problem with reliability objectives or constraints. It is well known that the computation can be prohibitive when the associated function evaluation is expensive. As a result, many approximate RBDO methods, which convert the double loop to a single loop, have been developed. In this work, several approximate RBDO methods are coded, discussed, and tested against a double loop algorithm through four design problems.
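For orientation, the nested formulation that these approximate methods try to avoid can be written down in a few lines. The sketch below is a generic illustration under simplifying assumptions (independent standard-normal uncertain variables, a single limit-state function g(d, u), SciPy's trust-constr solver), not one of the specific methods tested in the paper.

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

def reliability_index(g, d, n_u):
    """Inner loop: most probable point (MPP) search in standard normal
    space, beta = min ||u|| subject to the limit state g(d, u) = 0."""
    limit_state = NonlinearConstraint(lambda u: g(d, u), 0.0, 0.0)
    res = minimize(lambda u: u @ u, np.full(n_u, 0.1),
                   method="trust-constr", constraints=[limit_state])
    return np.sqrt(res.fun)

def double_loop_rbdo(cost, g, d0, n_u, beta_target=3.0):
    """Outer loop: minimize the design cost subject to beta(d) >= beta_target.
    Every outer iterate triggers a full inner MPP optimization, which is
    what makes the double-loop formulation expensive."""
    reliability = NonlinearConstraint(
        lambda d: reliability_index(g, d, n_u), beta_target, np.inf)
    return minimize(cost, d0, method="trust-constr", constraints=[reliability])
```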

Journal ArticleDOI
TL;DR: This work concerns the development of a unified formal modeling framework designed to provide for model sharing, reusability, and integration of simulation and optimization methods, allowing decision-makers to master the dynamics of complex discrete systems.

Journal ArticleDOI
TL;DR: Two classes of discrete quasiconvex functions are introduced by generalizing the concepts of M- and L-convexity due to Murota, and it is shown that various greedy algorithms work for the minimization of quasi M-convex functions.

Journal ArticleDOI
TL;DR: In this method, a back-propagation neural network is trained to simulate a rough map of the feasible domain formed by the constraints using a few representative training data points; the procedure terminates when the approximate model locates the same optimal point in consecutive iterations.
Abstract: There are three characteristics in engineering design optimization problems: (1) the design variables are often discrete physical quantities; (2) the constraint functions often cannot be expressed analytically in terms of design variables; (3) in many engineering design applications, critical constraints are often ‘pass–fail’, ‘0–1’ type binary constraints. This paper presents a sequential approximation method specifically for engineering optimization problems with the three characteristics. In this method a back-propagation neural network is trained to simulate a rough map of the feasible domain formed by the constraints using a few representative training data. A training data point consists of a discrete design point and whether this design point is feasible or infeasible. Function values of the constraints are not required. A search algorithm then searches for the optimal point in the feasible domain simulated by the neural network. This new design point is checked against the true constraints to see ...
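A minimal version of that loop might look like the following. This is a hedged sketch built on scikit-learn's MLPClassifier rather than the paper's network: candidates is a 2-D array of discrete design points, objective(x) is the cheap objective, and check_feasible(x) is the expensive pass/fail constraint check. It assumes the initial sample contains both feasible and infeasible points, and the paper's stopping rule (same optimum in consecutive iterations) is replaced by a fixed number of rounds.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def nn_guided_discrete_search(candidates, objective, check_feasible,
                              n_initial=30, rounds=10, seed=0):
    """Sequential approximation sketch: learn a rough feasible/infeasible
    map from a few labelled design points, search the discrete candidates
    on the surrogate, verify the apparent optimum against the true
    constraints, and retrain with the new labelled point."""
    rng = np.random.default_rng(seed)
    candidates = np.asarray(candidates, dtype=float)
    idx = rng.choice(len(candidates), n_initial, replace=False)
    X = candidates[idx]
    y = np.array([check_feasible(x) for x in X], dtype=int)  # pass/fail labels
    best, best_val = None, np.inf
    for _ in range(rounds):
        clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000,
                            random_state=seed).fit(X, y)
        surrogate_feasible = clf.predict(candidates).astype(bool)
        pool = candidates[surrogate_feasible]
        if len(pool) == 0:
            break
        cand = min(pool, key=objective)          # best point on the rough map
        truly_feasible = check_feasible(cand)    # single expensive verification
        X = np.vstack([X, cand])                 # enrich the training data
        y = np.append(y, int(truly_feasible))
        if truly_feasible and objective(cand) < best_val:
            best, best_val = cand.copy(), objective(cand)
    return best, best_val
```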

Journal ArticleDOI
TL;DR: A quadratic control for discrete stochastic systems with random parameters and additive and multiplicative noises dependent on state and controls is studied and equations for the optimal linear static and dynamic output controllers are derived.
Abstract: A quadratic control for discrete stochastic systems with random parameters and additive and multiplicative noises dependent on state and controls is studied. Equations for the optimal linear static and dynamic output controllers are derived. The controllers are robust to the type of the distribution of the vector of random parameters. The results are applied to dynamic investment portfolio optimization.


Journal ArticleDOI
TL;DR: The study achieves an improved understanding and description of the sampling phenomena based on the concepts of fractal geometry, and incorporates the knowledge of the accuracy of the sampling (fractal model) in the stochastic optimization framework, thereby automating and improving the combinatorial optimization algorithm.
Abstract: The generalized approach to stochastic optimization involves two computationally intensive recursive loops: (1) the outer optimization loop, (2) the inner sampling loop. Furthermore, inclusion of discrete decision variables adds to the complexity. The focus of the current endeavor is to reduce the computational intensity of the two recursive loops. The study achieves the goals through an improved understanding and description of the sampling phenomena based on the concepts of fractal geometry and incorporating the knowledge of the accuracy of the sampling (fractal model) in the stochastic optimization framework, thereby automating and improving the combinatorial optimization algorithm. The efficiency of the algorithm is presented in the context of a large scale real world problem, related to the nuclear waste at Hanford, involving discrete and continuous decision variables, and uncertainties. These new developments reduced the computational intensity for solving this problem from an estimated 20 days of CPU time on a dedicated Alpha workstation to 18 hours of CPU time on the same machine.

Proceedings ArticleDOI
07 Apr 2003
TL;DR: In this investigation, a decomposition approach is employed for reliability-based design optimization of multidisciplinary systems, and the method of Simultaneous ANalysis and Design (SAND) is used as an optimization driver within a single level reliability-based design optimization strategy.
Abstract: In this investigation, a decomposition approach is employed for reliability-based design optimization of multidisciplinary systems. In the last few years, a variety of different approaches for reliability-based design optimization have been proposed in the literature. Traditionally, reliability-based design optimization is formulated as a nested optimization problem, where the inner loop generally involves the solution to an optimization problem for computing the probability of failure corresponding to the failure modes. Such a formulation is by nature computationally intensive, requiring a large number of function and constraint evaluations. To alleviate this problem, researchers have developed single level reliability-based optimization formulations. The single level approach is expensive for multidisciplinary systems because the complex, coupled nature of multidisciplinary systems makes the analyses themselves computationally intensive, requiring iterative solvers. To reduce the computational effort of performing reliability-based design optimization in application to multidisciplinary systems, a decomposition approach for optimization is employed. The method of Simultaneous ANalysis and Design (SAND) is employed as an optimization driver within a single level reliability-based design optimization strategy. The methodology is illustrated in application to multidisciplinary test problems.

Journal Article
TL;DR: FOM, as presented in this paper, is an object-oriented framework for metaheuristic optimization to be used as a general tool for the development and implementation of metaheuristic algorithms, allowing the reuse of metaheuristic components across different problems.
Abstract: Most metaheuristic approaches for discrete optimization are usually implemented from scratch. In this paper, we introduce and discuss FOM, an object-oriented framework for metaheuristic optimization to be used as a general tool for the development and implementation of metaheuristic algorithms. The basic idea behind the framework is to separate the problem side from the metaheuristic algorithms, allowing the reuse of metaheuristic components across different problems. In addition to describing the design and functionality of the framework, we apply it to illustrative examples. Finally, we present our conclusions and discuss future developments.
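The separation the abstract describes, keeping the problem side behind an interface so that metaheuristic components can be reused, can be illustrated with a toy interface. This is not FOM's actual API, only a sketch of the design idea.

```python
from abc import ABC, abstractmethod
import random

class Problem(ABC):
    """Problem-side interface, kept separate from the metaheuristics
    (illustrative only; the names are not FOM's)."""
    @abstractmethod
    def random_solution(self, rng): ...
    @abstractmethod
    def neighbor(self, solution, rng): ...
    @abstractmethod
    def cost(self, solution): ...

def hill_climb(problem: Problem, iters=10000, seed=0):
    """A metaheuristic written only against the Problem interface, so the
    same component can be reused for any problem implementing it."""
    rng = random.Random(seed)
    best = problem.random_solution(rng)
    best_cost = problem.cost(best)
    for _ in range(iters):
        cand = problem.neighbor(best, rng)
        cand_cost = problem.cost(cand)
        if cand_cost < best_cost:
            best, best_cost = cand, cand_cost
    return best, best_cost
```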