
Showing papers on "Discrete optimization published in 1992"


Journal ArticleDOI
TL;DR: A penalty-based transformation method, in which the penalty depends on the degree of constraint violation, is found to be well suited to a parallel search using genetic algorithms.
Abstract: Optimizing most structural systems used in practice requires considering design variables as discrete quantities. The paper presents a simple genetic algorithm for optimizing structural systems with discrete design variables. As genetic algorithms (GAs) are best suited for unconstrained optimization problems, it is necessary to transform the constrained problem into an unconstrained one. A penalty-based transformation method is used in the present work. The penalty parameter depends on the degree of constraint violation, an approach found to be well suited to a parallel search using genetic algorithms. The concept of optimization using the genetic algorithm is presented in detail using a three-bar truss problem. All the computations for three successive generations are presented in the form of tables for easy understanding of the algorithm. Two standard problems from the literature are solved and the results compared. The application of the genetic algorithm to design optimization of a larger problem is illustrated ...

784 citations
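A minimal sketch of the approach, assuming a toy discrete design problem (the section areas, constraint, and penalty weight below are invented for illustration, not the paper's three-bar truss data):

```python
import random

AREAS = [1.0, 1.5, 2.0, 2.5, 3.0]      # hypothetical discrete section areas

def weight(x):                          # objective: total area, a stand-in for weight
    return sum(AREAS[i] for i in x)

def violation(x):                       # hypothetical constraint: total area must reach 4.0
    return max(0.0, 4.0 - sum(AREAS[i] for i in x))

def fitness(x):
    # penalty grows with the degree of constraint violation, as in the paper
    return weight(x) + 100.0 * violation(x)

def ga(n_members=3, pop_size=20, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randrange(len(AREAS)) for _ in range(n_members)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # binary tournament selection on the penalized fitness
        parents = [min(rng.sample(pop, 2), key=fitness) for _ in range(pop_size)]
        pop = []
        for a, b in zip(parents[::2], parents[1::2]):
            cut = rng.randrange(1, n_members)           # one-point crossover
            for child in (a[:cut] + b[cut:], b[:cut] + a[cut:]):
                if rng.random() < 0.1:                  # point mutation
                    child[rng.randrange(n_members)] = rng.randrange(len(AREAS))
                pop.append(child)
    return min(pop, key=fitness)

best = ga()
```

Because the penalty scales with the violation, infeasible designs are never competitive once a feasible one appears in the population.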


Journal ArticleDOI
TL;DR: It is argued that ordinal rather than cardinal optimization, i.e., concentrating on finding good, better, or best designs rather than on estimating accurately the performance value of these designs, offers a new, efficient, and complementary approach to the performance optimization of systems.
Abstract: In this paper we argue that ordinal rather than cardinal optimization, i.e., concentrating on finding good, better, or best designs rather than on estimating accurately the performance value of these designs, offers a new, efficient, and complementary approach to the performance optimization of systems. Some experimental and analytical evidence is offered to substantiate this claim. The main purpose of the paper is to call attention to a novel and promising approach to system optimization.

412 citations
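The ordinal idea can be illustrated numerically: even with evaluation noise far larger than the gaps between designs, the observed top-k set aligns with the truly good designs far more reliably than any single value estimate would suggest (all numbers below are illustrative):

```python
import random

def ordinal_select(true_values, noise, k, rng):
    # one noisy evaluation per design; keep the observed top-k ("good enough" set)
    observed = [(v + rng.gauss(0, noise), i) for i, v in enumerate(true_values)]
    observed.sort()
    return {i for _, i in observed[:k]}

rng = random.Random(0)
true_values = [i / 100 for i in range(100)]   # design i has true cost i/100
trials, hits = 200, 0
for _ in range(trials):
    sel = ordinal_select(true_values, noise=0.5, k=10, rng=rng)
    if sel & set(range(10)):                  # overlap with the true top-10
        hits += 1
alignment = hits / trials
```

Here the noise standard deviation (0.5) is fifty times the gap between adjacent designs, yet the observed top-10 still usually contains at least one truly top-10 design.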


Journal ArticleDOI
TL;DR: In this paper, a stochastic search method is proposed for finding a global solution to the discrete optimization problem in which the objective function must be estimated by Monte Carlo simulation, and it is shown under mild conditions that the Markov chain is strongly ergodic.
Abstract: In this paper a stochastic search method is proposed for finding a global solution to the stochastic discrete optimization problem in which the objective function must be estimated by Monte Carlo simulation. Although there are many practical problems of this type in the fields of manufacturing engineering, operations research, and management science, there have not been any nonheuristic methods proposed for such discrete problems with stochastic infrastructure. The proposed method is very simple, yet it finds a global optimum solution. The method exploits the randomness of Monte Carlo simulation and generates a sequence of solution estimates. This generated sequence turns out to be a nonstationary Markov chain, and it is shown under mild conditions that the Markov chain is strongly ergodic and that the probability that the current solution estimate is a global optimum converges to one. The speed of convergence is also analyzed.

204 citations
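A minimal sketch of a search of this kind, assuming a toy objective (x − 3)^2 observed only through noisy Monte Carlo estimates (the growing sampling schedule is illustrative, not the paper's):

```python
import random

def noisy_cost(x, rng):
    # Monte Carlo estimate of the true cost (x - 3)^2, corrupted by simulation noise
    return (x - 3) ** 2 + rng.gauss(0, 1.0)

def stochastic_search(states, steps, seed=0):
    rng = random.Random(seed)
    x = rng.choice(states)
    visits = {s: 0 for s in states}
    for t in range(1, steps + 1):
        y = rng.choice(states)            # uniformly generated candidate
        n = 1 + t // 50                   # growing sample size sharpens the estimates
        fx = sum(noisy_cost(x, rng) for _ in range(n)) / n
        fy = sum(noisy_cost(y, rng) for _ in range(n)) / n
        if fy < fx:                       # comparison outcomes drive a nonstationary Markov chain
            x = y
        visits[x] += 1
    return max(visits, key=visits.get)

best = stochastic_search(list(range(10)), steps=2000)
```

The sequence of accepted states is exactly the kind of nonstationary Markov chain the paper analyses: as the sample size grows, wrong moves become rarer and the chain concentrates on the optimum.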


Proceedings ArticleDOI
01 Jul 1992
TL;DR: A multidimensional version of Megiddo's parametric search technique is used, in connection with an output-sensitive algorithm of Seidel, to show that the convex hull of an n-point set in E^d can be computed in time O(n^{2−γ+δ} + h log n), where h is the number of faces of the convex hull and γ > 0 is a constant depending on the dimension d.
Abstract: Jiří Matoušek, Department of Applied Mathematics, Charles University, Malostranské nám. 25, 118 00 Praha 1, Czechoslovakia; Otfried Schwarzkopf, Utrecht University, Department of Computer Science, P.O. Box 80.089, 3508 TB Utrecht, the Netherlands. Let Γ be a set of n halfspaces in E^d (where the dimension d ≥ 3 is fixed) and c a d-component vector. We denote by LP(Γ, c) the linear programming problem of minimizing the function c·x over the intersection of all halfspaces of Γ. We show that Γ can be preprocessed in time and space O(m^{1+δ}) (for any fixed δ > 0; m is an adjustable parameter, n ≤ m ≤ n^{⌊d/2⌋}) so that, given c ∈ E^d, LP(Γ, c) can be solved in time O((n/m^{1/⌊d/2⌋}) log^{2d+1} n). The data structure can be dynamically maintained under insertions and deletions of halfspaces in O(m^{1+δ}/n) amortized time per update operation. We use a multidimensional version of Megiddo's parametric search technique. In connection with an output-sensitive algorithm of Seidel, we obtain that the convex hull of an n-point set in E^d (d ≥ 4) can be computed in time O(n^{2−γ+δ} + h log n), where h is the number of faces of the convex hull and γ > 0 is a constant depending on d. We also show that, given an n-point set P in E^d, one can determine the extreme points of P in time O(n^{2−γ'+δ}) for some constant γ' > 0 (for any fixed δ > 0). This extended abstract combines a paper [Mat91d] of the first author with an improvement and simplification achieved by the second author in [Sch91]. The research by J. M. was performed while he was visiting the School of Mathematics, Georgia Institute of Technology, Atlanta. O. S. acknowledges support by the ESPRIT II Basic Research Action of the European Community under contract No. 3075 (project ALCOM); this research was done while he was employed at Freie Universität Berlin, and part of it was done while he visited INRIA Sophia-Antipolis.

105 citations


Journal ArticleDOI
TL;DR: The neural network approach is introduced from an OR perspective, and it is indicated just where and how such a tool might find application.

86 citations


Proceedings ArticleDOI
10 May 1992
TL;DR: A heuristic algorithm based on Lagrangian optimization and using an operational rate-distortion framework that, with much-reduced computing complexity, approaches the optimally achievable SNR is provided.
Abstract: The description of the buffer-constrained quantization problem is formalized. For a given set of admissible quantizers for coding a discrete nonstationary signal sequence in a buffer-constrained environment, and for any global distortion minimization criterion that is additive over the individual elements of the sequence, the optimal solution and slightly suboptimal but much faster approximations are formulated. The problem is first defined as one of constrained, discrete optimization, and its equivalence to some problems in integer programming is established. Dynamic programming using the Viterbi algorithm is shown to provide a way of computing the optimal solution. A heuristic algorithm based on Lagrangian optimization and using an operational rate-distortion framework that, with much-reduced computing complexity, approaches the optimally achievable SNR is provided.

45 citations
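The Lagrangian heuristic can be sketched as follows (the per-block (rate, distortion) points and the bisection on the multiplier are illustrative assumptions, not the paper's coder):

```python
def allocate(blocks, rate_budget):
    """blocks: one list of (rate, distortion) operating points per signal block.
    Pick one quantizer per block minimizing D + lam*R, bisecting lam to meet the budget."""
    def pick(lam):
        choice = [min(b, key=lambda rd: rd[1] + lam * rd[0]) for b in blocks]
        return choice, sum(r for r, _ in choice)

    lo, hi = 0.0, 1e6
    for _ in range(60):                   # bisection on the Lagrange multiplier
        mid = (lo + hi) / 2
        _, rate = pick(mid)
        if rate > rate_budget:
            lo = mid                      # too many bits: penalize rate harder
        else:
            hi = mid
    return pick(hi)

# two blocks, each with three hypothetical (rate, distortion) operating points
blocks = [[(1, 9.0), (2, 4.0), (3, 1.0)],
          [(1, 16.0), (2, 6.0), (3, 2.0)]]
choice, rate = allocate(blocks, rate_budget=4)
total_distortion = sum(d for _, d in choice)
```

Because the criterion is additive over blocks, each block can be solved independently for a fixed multiplier, which is what gives the heuristic its much-reduced complexity relative to the Viterbi search.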


Journal ArticleDOI
TL;DR: Analog neural nets for constrained optimization are proposed as an analogue of Newton's algorithm in numerical analysis and nonlinear neurons are introduced into the net, making it possible to solve optimization problems where the variables take discrete values, i.e., combinatorial optimization.
Abstract: Analog neural nets for constrained optimization are proposed as an analogue of Newton's algorithm in numerical analysis. The neural model is globally stable and can converge to the constrained stationary points. Nonlinear neurons are introduced into the net, making it possible to solve optimization problems where the variables take discrete values, i.e., combinatorial optimization.

39 citations


Proceedings ArticleDOI
16 Dec 1992
TL;DR: The advantages of the simulated annealing and stochastic ruler algorithms are combined to form an iterative random search algorithm called the stochastic comparison (SC) algorithm, which actually solves an alternative optimization problem; it is shown under a symmetry assumption that the alternative problem is equivalent to the original one.
Abstract: An iterative discrete optimization algorithm that works with Monte Carlo estimation of the objective function is developed. Two algorithms, the simulated annealing algorithm and the stochastic ruler algorithm, are considered. The authors examine some of the problems of their use and combine the advantages of both algorithms to form an iterative random search algorithm called the stochastic comparison (SC) algorithm. The SC algorithm actually solves an alternative optimization problem, and it is shown under a symmetry assumption that the alternative problem is equivalent to the original one. The convergence of the SC algorithm is proved based on time-inhomogeneous Markov chain theory. Results of numerical experiments on a testbed problem with a randomly generated objective function are presented.

38 citations
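The acceptance rule of such a comparison-based search might be sketched as follows (the toy objective and the growing comparison count are assumptions, not the authors' exact schedule):

```python
import random

def noisy(x, rng):
    # Monte Carlo estimate of a cost with its minimum at x = 7
    return (x - 7) ** 2 + rng.gauss(0, 2.0)

def stochastic_comparison(states, steps, seed=3):
    rng = random.Random(seed)
    x = rng.choice(states)
    for k in range(1, steps + 1):
        y = rng.choice(states)            # uniformly generated candidate
        tests = 1 + k // 100              # number of comparisons grows over time
        # accept y only if it wins every one of `tests` noisy pairwise comparisons
        if all(noisy(y, rng) < noisy(x, rng) for _ in range(tests)):
            x = y
    return x

best = stochastic_comparison(list(range(15)), steps=1500)
```

Early on, a single comparison suffices and the chain mixes freely; late in the run a worse candidate must win many independent noisy comparisons, which becomes vanishingly unlikely, so the chain settles near the optimum.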


Journal ArticleDOI
TL;DR: In this paper, a heuristic algorithm is developed to generate a solution for this problem, and computational performance of this algorithm is presented and discussed, and a nonlinear discrete optimization model that addresses an explicit districting problem is presented.
Abstract: The land-allocation problem is one of selecting, from a finite number of candidate parcels, that set of parcels that best meets the needs of a specific land-use objective. This research extends land-allocation modeling methodologies to address problems requiring allocation of parcels to multiple areas with different shapes. The approach relies on the use of shape constraints within the context of discrete multiobjective programming models set on a regular, or uniform grid structure. Several model formulations are presented including a nonlinear discrete optimization model that addresses an explicit districting problem. A heuristic algorithm is developed to generate a solution for this problem, and computational performance of this algorithm is presented and discussed.

38 citations


Journal ArticleDOI
TL;DR: A new strategy for discrete optimization problems, the filtered simulated annealing strategy, includes a filter size that can be adjusted by the user: a coarse filter is robust in obtaining the global optimum provided enough cycles are executed, while a fine filter produces good designs quickly.
Abstract: A new strategy is presented for discrete optimization problems. This strategy is called the “filtered simulated annealing strategy”. It includes a filter size which may be adjusted by the user. A coarse filter size results in an unfiltered simulated annealing strategy which is quite robust in obtaining the global optimum provided enough cycles are executed. A fine filter size blocks many candidate designs which are viewed as having little potential, and produces good designs quickly. The strategy is applied to a realistic 3D steel frame test problem. Extensive results are presented and the performance of the strategy is analysed for parameter sensitivity. The performance is also compared to that of the well-known branch and bound strategy.

36 citations
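The filtering idea can be sketched as follows (the design set, the cheap surrogate used by the filter, and the cooling schedule are hypothetical, not the paper's 3D steel frame problem):

```python
import math
import random

DESIGNS = list(range(40))

def cheap_estimate(x):        # inexpensive surrogate consulted by the filter
    return abs(x - 25)

def cost(x):                  # "expensive" exact objective, minimum at x = 25
    return (x - 25) ** 2

def filtered_sa(filter_size, steps=4000, seed=5):
    rng = random.Random(seed)
    x = rng.choice(DESIGNS)
    temp = 50.0
    for _ in range(steps):
        y = rng.choice(DESIGNS)
        # the filter blocks candidates that look much worse than the incumbent;
        # a finer (smaller) filter_size blocks more low-potential designs
        if cheap_estimate(y) > cheap_estimate(x) + filter_size:
            continue
        delta = cost(y) - cost(x)
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            x = y
        temp *= 0.999
    return x

best = filtered_sa(filter_size=5)
```

Setting filter_size larger than the surrogate's whole range recovers plain (unfiltered) simulated annealing, mirroring the coarse-filter limit described in the abstract.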



Journal ArticleDOI
TL;DR: The equivalent formulation technique proposed in this paper linearizes a binary quadratic integer problem of n variables by introducing only n new linear constraints, whereas the most economical method in the literature requires the addition of 2n such constraints.
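For background, the standard textbook linearization (not the paper's scheme) replaces each product of binary variables with an auxiliary variable and three constraints:

```latex
% Replace each product w_{ij} = x_i x_j of binaries by linear constraints:
\begin{align*}
  w_{ij} &\le x_i, \qquad w_{ij} \le x_j,\\
  w_{ij} &\ge x_i + x_j - 1, \qquad w_{ij} \ge 0.
\end{align*}
```

One such triple is needed per quadratic term, i.e. O(n^2) constraints overall; schemes such as the one proposed here instead aggregate the quadratic terms associated with each variable, which is what brings the count down to O(n).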

Book ChapterDOI
TL;DR: This chapter provides an overview of simultaneous optimization strategies for process engineering and shows how inefficient convergence algorithms that are incorporated within a calculation procedure can be replaced with a simultaneous Newton-type algorithm.
Abstract: This chapter considers the optimization of models described by differential-algebraic equations (DAEs). This approach allows the direct enforcement of profile constraints for state and control variables. Also, the successive quadratic programming (SQP) algorithms can be tailored to the DAE system to allow for moving finite elements and the accurate determination of state and optimal control profiles. Parameter optimization is frequently encountered in process design and analysis. This chapter provides an overview of simultaneous optimization strategies for process engineering. Over the past decade, the recognition of the effectiveness of sophisticated nonlinear programming algorithms, such as SQP, has led to the formulation of larger and more difficult optimization problems. The key to this advance lies in flexible formulations of the optimization problems. Inefficient convergence algorithms that are incorporated within a calculation procedure can now be replaced with a simultaneous Newton-type algorithm. Finally, simultaneous solution and optimization strategies have been extended and demonstrated on large optimization problems.

Proceedings ArticleDOI
01 Mar 1992
TL;DR: In this paper, the authors considered the problem of finding a space-optimal (minimum processor count) mapping of a systolic algorithm onto a regularly connected array architecture, and showed that the solution space for this discrete optimization problem can be nicely bounded and hence, the optimal solution can be efficiently determined with enumeration.
Abstract: The mapping of a systolic algorithm onto a regularly connected array architecture can be considered as a linear transformation problem. However, deriving the 'optimal' transformation is difficult because the necessary optimizations involve discrete decision variables and the cost functions do not usually have closed-form expressions. The paper considers the derivation of a space-optimal (minimum processor count) mapping of a given time performance. Utilizing some recent results from the geometry of numbers, it is shown that the solution space for this discrete optimization problem can be nicely bounded and hence the optimal solution can be efficiently determined by enumeration for practical cases. Examples are provided to demonstrate the effectiveness of this approach.

Proceedings ArticleDOI
13 Apr 1992
TL;DR: In each of the examples, the neural network was able to represent the desired design information, and the simulated annealing procedure was able to extract improved designs (improved over the "best" designs in the training data).
Abstract: R. A. Swift and S. M. Batill, Hessert Center for Aerospace Research, Department of Aerospace and Mechanical Engineering, University of Notre Dame, Notre Dame, Indiana 46556. A simulated annealing application to the optimal design of structures involving discrete design variables is presented. Neural networks were used as approximate representations of the design spaces for candidate structural concepts. The simulated annealing algorithm was used to search these discrete design spaces. Design information obtained from finite element analysis and math-programming optimization was used to train the neural network representations. Three examples are presented. The first is a material system design of a 10-bar truss in which four isotropic materials were considered for each of the 10 axial force rods, with minimum weight as the objective function. The second example is an ACOSS II space truss in which four materials were considered for each of the 113 rod elements, minimum weight again being the objective function. The final example is that of an Intermediate Complexity Wing (ICW), in which a discrete set of lamina orientations was considered for the composite skin, with natural frequency as the objective function. In each of the examples, the neural network was able to represent the desired information, and the simulated annealing procedure was able to extract improved designs (improved over the "best" designs in the training data).

02 Jan 1992
TL;DR: The purpose of this paper is to show where and how mathematical problems of a discrete nature arise in manufacturing and to demonstrate the savings and improvements that can be achieved by employing the techniques of combinatorial optimization.
Abstract: Manufacturing is a topic that provides rich opportunities for important mathematical contributions to real-world problems. The purpose of this paper is to show, by means of several examples, where and how mathematical problems of a discrete nature arise in manufacturing and to demonstrate the savings and improvements that can be achieved by employing the techniques of combinatorial optimization. The topics covered range from the design phase of a product (e.g., routing, placement and via minimization in VLSI design), the control of CNC machines (e.g., drilling and plotting), to the management of assembly lines, storage systems and whole factories. We also point out difficulties in the modelling of complex situations and outline the algorithmic methods that are used for the solution of the mathematical problems arising in manufacturing. Key words: discrete mathematics, combinatorial optimization, applications to manufacturing.

Journal ArticleDOI
TL;DR: A method based on solution set convergence is employed for finding optimal initial decisions by solving finite horizon problems and is applicable to general discrete decision models that satisfy a weak reachability condition.
Abstract: We study discrete infinite horizon optimization problems without the common assumption of a unique optimum. A method based on solution set convergence is employed for finding optimal initial decisions by solving finite horizon problems. This method is applicable to general discrete decision models that satisfy a weak reachability condition. The algorithm, together with a stopping rule, is applied to production planning and capacity expansion, and computational results are reported.

Journal ArticleDOI
TL;DR: Methods of creating lower bounds are formally presented, domination rules for the lower bounds are defined, and the considerations are illustrated by means of an example.


Proceedings ArticleDOI
01 Dec 1992
TL;DR: A method for discrete simulation optimization is discussed and it is shown how this method can be applied to optimize a special class of objective functions.
Abstract: Consider the problem of using simulation to optimize the performance of a stochastic system with respect to a number of decision variables. In the past, a considerable effort has been spent on the development of methods for solving such problems in the case when all the decision variables are continuous. However, the case when the decision variables are discrete has received very little attention to date. In this paper, we discuss a method for discrete simulation optimization and show how this method can be applied to optimize a special class of objective functions.

Journal ArticleDOI
TL;DR: The two most interesting neighbourhood search-based algorithms, simulated annealing and tabu search, are presented and evaluated by comparing them with an exact algorithm for a simple scheduling problem; the practitioner will find these algorithms a most useful tool.
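A minimal sketch of tabu search on a toy multimodal objective (the neighbourhood, tabu tenure, and objective are all illustrative):

```python
import random

def tabu_search(cost, states, tenure=5, steps=200, seed=6):
    # Move to the best non-tabu neighbour, even if it is worse than the
    # incumbent; a short tabu list of recent states prevents immediate cycling.
    rng = random.Random(seed)
    x = rng.choice(states)
    best = x
    tabu = []
    for _ in range(steps):
        neighbours = [s for s in (x - 1, x + 1) if s in states and s not in tabu]
        if not neighbours:
            break
        x = min(neighbours, key=cost)     # accepted even when uphill
        tabu.append(x)
        if len(tabu) > tenure:
            tabu.pop(0)                   # oldest entry leaves the tabu list
        if cost(x) < cost(best):
            best = x
    return best

# toy multimodal objective: local minimum at 2 (value 1), global minimum at 12 (value 0)
states = list(range(16))
best = tabu_search(lambda s: min((s - 2) ** 2 + 1, (s - 12) ** 2), states)
```

Unlike pure descent, the forced uphill moves let the search climb out of the basin around 2 and reach the global minimum at 12.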

Journal ArticleDOI
03 Aug 1992
TL;DR: The problem of global optimization of magnetic structures composed of solenoids is examined, using a modified simulated annealing algorithm able to deal with the functions of continuous and/or discrete variables.
Abstract: The problem of global optimization of magnetic structures composed of solenoids is examined, using a modified simulated annealing algorithm able to deal with the functions of continuous and/or discrete variables. The algorithm is tested using a discrete problem which allows determination of the cost function for every possible configuration of the system, providing information about the pattern of the cost function with respect to design variables. Despite the difficulty of minimizing the function, the proposed algorithm was able to locate the global minimum, or a point where the cost function has a value very close to it, seven times out of a total of ten runs. The algorithm is described, and results are discussed.
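A sketch of a simulated annealing move that mixes continuous and discrete variables (the solenoid-flavoured cost, discrete levels, and cooling schedule are invented for illustration):

```python
import math
import random

TURNS = [100, 200, 300, 400]        # hypothetical discrete turn counts per solenoid

def cost(radius, turns):            # hypothetical cost, optimum at radius = 1.0, turns = 300
    return (radius - 1.0) ** 2 + abs(turns - 300) / 100

def mixed_sa(steps=5000, seed=2):
    rng = random.Random(seed)
    r, t = 2.0, 100                 # poor starting design
    temp = 1.0
    for _ in range(steps):
        # continuous move: Gaussian perturbation; discrete move: jump to another level
        r2 = r + rng.gauss(0, 0.1)
        t2 = rng.choice(TURNS)
        d = cost(r2, t2) - cost(r, t)
        if d <= 0 or rng.random() < math.exp(-d / temp):
            r, t = r2, t2
        temp *= 0.999               # geometric cooling
    return r, t

r, t = mixed_sa()
```

The single Metropolis acceptance test handles both variable types at once; only the proposal mechanism distinguishes continuous from discrete coordinates.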


Journal ArticleDOI
TL;DR: This paper identifies special classes of bi-objective linear and concave integer programs that can be easily solved when their constraint matrices are totally unimodular.

Journal ArticleDOI
TL;DR: This work describes a procedure for identifying approximate solutions to constrained optimization problems, which performs well when training cases are selected according to a simple rule, identifying good heuristic solutions for representative test cases.
Abstract: Many optimization procedures presume the availability of an initial approximation in the neighborhood of a local or global optimum. Unfortunately, finding a set of good starting conditions is itself a nontrivial proposition. We describe a procedure for identifying approximate solutions to constrained optimization problems. Recurrent neural network structures are interpreted in the context of linear associative memory matrices. A recurrent associative memory (RAM) is trained to map the inputs of closely related transportation linear programs to optimal solution vectors. The procedure performs well when training cases are selected according to a simple rule, identifying good heuristic solutions for representative test cases. Modest infeasibilities exist in some of these estimated solutions, but the basic variables associated with true optima are usually apparent. In the great majority of cases, rounding identifies the true optimum.
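A linear associative memory of the kind interpreted here can be sketched with a simple correlation-matrix store (the orthonormal keys and tiny 0-1 "solution" vectors are hypothetical, far smaller than transportation LPs):

```python
def outer_add(W, y, x):
    # accumulate the outer product y x^T into the memory matrix W
    for i in range(len(y)):
        for j in range(len(x)):
            W[i][j] += y[i] * x[j]

def recall(W, x):
    # linear recall: W x
    return [sum(W[i][j] * x[j] for j in range(len(x))) for i in range(len(W))]

# store two pattern pairs; with orthonormal keys, recall of a stored key is exact
keys = [[1.0, 0.0], [0.0, 1.0]]
sols = [[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]]   # hypothetical 0-1 solution vectors
W = [[0.0] * 2 for _ in range(3)]
for x, y in zip(keys, sols):
    outer_add(W, y, x)

est = recall(W, [0.9, 0.1])                 # a key near the first training input
rounded = [round(v) for v in est]
```

The recalled vector is a blend of the stored solutions weighted by key similarity; rounding it, as in the paper, recovers the discrete solution of the nearest training case.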

Journal ArticleDOI
TL;DR: Analysis uncovers a complexity that is often present in simply stated discrete optimization problems, and illustrates pitfalls that can arise from naively applying brute force techniques on the computer.
Abstract: Introduction Interesting mathematics problems arise from a wide variety of sources in everyday life. This paper explores a scheduling problem that was presented to one of the authors by a local bridge club. Although the problem appears simple on the surface, analysis uncovers a complexity that is often present in simply stated discrete optimization problems, and illustrates pitfalls that can arise from naively applying brute force techniques on the computer. The paper first defines the problem and explores the meaning of an optimal solution. Next an analytical solution is sought based on the classification of the problem, and finally the paper considers four increasingly sophisticated techniques of discrete optimization. Historically, optimization problems have arisen in a variety of applications including electrical engineering, operations research, computer science, and communication. Although a variety of techniques for solving linear and non-linear optimization problems with continuous variables has been well known for 25 years [2, 4, 15], it is only recently that progress has been made in solving optimization problems involving discrete variables [9, 10]. This paper examines a variety of these discrete techniques in the context of solving one specific scheduling problem.
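The factorial blow-up that makes naive brute force perilous is easy to demonstrate on a toy assignment problem (the cost matrix is illustrative, not the bridge club's schedule):

```python
import math
from itertools import permutations

def brute_force_best(costs):
    # exhaustively search all n! orderings of n items;
    # costs[i][j] is the penalty when item i is placed in slot j
    n = len(costs)
    return min(permutations(range(n)),
               key=lambda p: sum(costs[i][p[i]] for i in range(n)))

# the search space explodes factorially with problem size
space_sizes = {n: math.factorial(n) for n in (4, 8, 12)}

costs = [[abs(i - j) for j in range(4)] for i in range(4)]
best = brute_force_best(costs)
```

Four items mean 24 candidate schedules, but twelve already mean 479,001,600, which is exactly the kind of growth that defeats a naive computer search and motivates the smarter discrete techniques the paper surveys.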

Journal ArticleDOI
TL;DR: In this article, a two-phase global random search procedure for solving some computationally intractable discrete optimization problems is proposed, where guarantees for quality of random search results are derived from analysis of nonasymptotic order statistics and distribution-free intervals that are obtainable in this way.
Abstract: A two-phase global random search procedure for solving some computationally intractable discrete optimization problems is proposed. Guarantees for the quality of random search results are derived from analysis of non-asymptotic order statistics and the distribution-free intervals that are obtainable in this way: the confidence interval for a quantile of given order, or the tolerance interval for the parent distribution of goal function values. It is shown that, for the multiconstrained 0–1 knapsack problem, results within a few percent of the true optimal solution can be obtained.
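The order-statistics guarantee behind such distribution-free bounds can be made concrete: the best of N independent samples lands in the top q-quantile of the value distribution with probability 1 − (1 − q)^N, which fixes the sample size needed for a given confidence (the uniform objective below is illustrative):

```python
import math
import random

def sample_size(quantile, confidence):
    # smallest N with P(best of N lands in the top `quantile` fraction) >= confidence
    return math.ceil(math.log(1 - confidence) / math.log(1 - quantile))

n = sample_size(0.01, 0.99)        # hit the top 1% with 99% confidence

# empirical check against a uniform goal-function distribution
rng = random.Random(4)
trials, hits = 500, 0
for _ in range(trials):
    best = min(rng.random() for _ in range(n))
    if best <= 0.01:               # best sample fell in the top 1%
        hits += 1
rate = hits / trials
```

Note the bound is distribution-free: it depends only on the quantile, not on the shape of the goal-function distribution, which is what makes such guarantees usable for intractable problems like the multiconstrained 0–1 knapsack.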