
Showing papers in "Informs Journal on Computing in 1997"


Journal ArticleDOI
TL;DR: This feature article will consider what genetic algorithms have achieved, discuss some of the factors that influence their success or failure, and offer a guide for operations researchers who want to get the best out of them.
Abstract: Genetic algorithms have become increasingly popular as a means of solving hard combinatorial optimization problems of the type familiar in operations research. This feature article will consider what genetic algorithms have achieved in this area, discuss some of the factors that influence their success or failure, and offer a guide for operations researchers who want to get the best out of them.

256 citations
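As a minimal illustration of the kind of algorithm surveyed above, the sketch below (Python) shows a generic genetic algorithm for a binary-encoded problem: binary tournament selection, one-point crossover, and bit-flip mutation. The operators, parameter values, and the toy max-ones fitness are illustrative choices, not recommendations taken from the article.

    import random

    def genetic_algorithm(fitness, n_bits, pop_size=50, generations=200,
                          crossover_rate=0.9, mutation_rate=0.01):
        # Random initial population of bit strings.
        pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
        best = max(pop, key=fitness)
        for _ in range(generations):
            def select():                              # binary tournament selection
                a, b = random.sample(pop, 2)
                return a if fitness(a) >= fitness(b) else b
            children = []
            while len(children) < pop_size:
                p1, p2 = select(), select()
                if random.random() < crossover_rate:   # one-point crossover
                    cut = random.randint(1, n_bits - 1)
                    c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
                else:
                    c1, c2 = p1[:], p2[:]
                for c in (c1, c2):                     # bit-flip mutation
                    children.append([1 - g if random.random() < mutation_rate else g
                                     for g in c])
            pop = children[:pop_size]
            best = max(pop + [best], key=fitness)
        return best

    # Toy usage: maximize the number of ones in a 30-bit string.
    print(genetic_algorithm(sum, 30))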


Journal ArticleDOI
TL;DR: The basic methodology of stochastic programming models, recent developments in computation, and several practical applications are described.
Abstract: Although decisions frequently have uncertain consequences, optimal-decision models often replace those uncertainties with averages or best estimates. Limited computational capability may have motivated this practice in the past. Recent computational advances have, however, greatly expanded the range of optimal-decision models with explicit consideration of uncertainties. This article describes the basic methodology for these stochastic programming models, recent developments in computation, and several practical applications.

231 citations
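A toy two-stage recourse model makes the point about replacing uncertainty with averages concrete. The sketch below (Python) enumerates order quantities for a newsvendor-style problem with three hypothetical demand scenarios and compares the mean-value decision with the decision that maximizes expected profit; all data and names are made up for illustration.

    # Two-stage newsvendor-style recourse model solved by enumeration.
    # Order q now at cost c per unit; uncertain demand d is then revealed;
    # sales earn p per unit, leftovers salvage at s.  Data are illustrative.
    c, p, s = 5.0, 30.0, 1.0
    scenarios = [(80, 0.3), (100, 0.5), (130, 0.2)]        # (demand, probability)

    def expected_profit(q):
        return sum(prob * (p * min(q, d) + s * max(q - d, 0) - c * q)
                   for d, prob in scenarios)

    mean_demand = sum(d * prob for d, prob in scenarios)   # the "average" model
    q_best = max(range(0, 201), key=expected_profit)       # the stochastic model
    print("order mean demand:", mean_demand, "->", expected_profit(mean_demand))
    print("stochastic order: ", q_best, "->", expected_profit(q_best))

Here the stochastic solution orders more than the mean-value solution because the recourse costs are asymmetric, which is exactly the kind of difference these models are built to capture.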


Journal ArticleDOI
TL;DR: A reactive tabu search metaheuristic for the vehicle routing and scheduling problem with time window constraints is developed and achieves solutions that compare favorably with previously reported results.
Abstract: This article develops a reactive tabu search metaheuristic for the vehicle routing and scheduling problem with time window constraints. Reactive tabu search dynamically varies the size of the list of forbidden moves to avoid cycles as well as an overly constrained search path. Intensification and diversification strategies are examined as ways to achieve higher quality solutions. The λ-interchange mechanism of Osman is used as the neighborhood structure for the search process. Computational results on test problems from the literature as well as large-scale real-world problems are reported. The metaheuristic achieves solutions that compare favorably with previously reported results.

215 citations
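The reactive idea can be shown in its simplest generic form, sketched below in Python: the tabu tenure grows when the search revisits a solution and decays otherwise. The objective, neighborhood, and parameter values are placeholders supplied by the caller; this is not the authors' implementation and omits Osman's λ-interchange neighborhood and the intensification/diversification strategies.

    def reactive_tabu_search(x0, objective, neighbors, iters=1000):
        """Generic reactive tabu search sketch (minimization).
        `x0` is a sequence; `neighbors(x)` yields (move, candidate) pairs,
        where `move` is the attribute that becomes tabu."""
        x, best = x0, x0
        tenure, tabu, last_seen = 5, {}, {}
        for it in range(iters):
            key = tuple(x)
            if key in last_seen:                 # revisited: react by growing the tenure
                tenure = min(int(tenure * 1.3) + 1, 50)
            else:                                # otherwise let the tenure decay slowly
                tenure = max(5, tenure - 1)
            last_seen[key] = it

            candidates = [(m, y) for m, y in neighbors(x)
                          if tabu.get(m, -1) < it or objective(y) < objective(best)]
            if not candidates:
                continue
            move, x = min(candidates, key=lambda c: objective(c[1]))
            tabu[move] = it + tenure             # forbid reversing this move for a while
            if objective(x) < objective(best):
                best = x
        return best

Plugging in a λ-interchange neighborhood and a routing objective would recover the spirit, though not the detail, of the procedure in the article.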


Journal ArticleDOI
TL;DR: In this article, a new branch-and-bound algorithm (Salome) is developed based on an analysis of the specific strengths of existing procedures; its main characteristics are a new branching strategy (the local lower-bound method) and a bidirectional branching rule.
Abstract: In this article, we report on new results for the well-known Simple Assembly Line Balancing Problem Type 1. For this NP-hard problem, a large number of exact and heuristic algorithms have been proposed in the last four decades. Recent research has led to efficient branch-and-bound procedures. Based on an analysis of their specific strengths, a new algorithm (Salome) is developed. Its main characteristic is a new branching strategy (local lower-bound method) and a bidirectional branching rule. Furthermore, new bounding and dominance rules are included. Computational experiments on the basis of former data sets, as well as a new, more challenging one, show that Salome outperforms the most effective existing procedures for solving this problem.

173 citations
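Salome itself is too involved to reproduce here, but the flavor of the bounding side can be shown with two classic SALBP-1 lower bounds that branch-and-bound procedures of this kind rely on. The sketch below (Python) computes the capacity bound LB1 and the "large task" bound LB2 on a made-up instance; these are standard textbook bounds, not the article's new ones.

    import math

    def salbp1_lower_bounds(times, cycle_time):
        """Two standard lower bounds on the number of stations for SALBP-1."""
        # LB1: total work divided by the cycle time, rounded up.
        lb1 = math.ceil(sum(times) / cycle_time)
        # LB2: tasks longer than c/2 cannot share a station; tasks equal to
        # c/2 can be paired, so each counts one half.
        long_tasks = sum(1 for t in times if t > cycle_time / 2)
        half_tasks = sum(1 for t in times if t == cycle_time / 2)
        lb2 = long_tasks + math.ceil(half_tasks / 2)
        return max(lb1, lb2)

    # Hypothetical instance: 8 tasks, cycle time 10.
    print(salbp1_lower_bounds([6, 7, 3, 4, 5, 5, 2, 8], 10))   # prints 4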


Journal ArticleDOI
TL;DR: The LBATCH and ABATCH rules for applying the batch means method to analyze output of Monte Carlo and, in particular, discrete-event simulation experiments are introduced.
Abstract: This article introduces the LBATCH and ABATCH rules for applying the batch means method to analyze output of Monte Carlo and, in particular, discrete-event simulation experiments. Sufficient conditions are given for these rules to produce strongly consistent estimators of the variance of the sample mean and asymptotically valid confidence intervals for the mean. The article studies the performance of these rules and two others suggested in the literature, comparing confidence interval coverage rates and mean half-lengths. The article also gives detailed algorithms for implementing the rules in O(t) time with O(log₂ t) space. FORTRAN, C, and SIMSCRIPT II.5 implementations of the procedures are available by anonymous file transfer protocol.

119 citations
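The LBATCH and ABATCH rules adaptively choose the batching; the sketch below (Python) shows only the underlying fixed batch means construction they refine: split the output series into contiguous batches and build a confidence interval from the batch averages. The batch count, the normal critical value, and the AR(1) test series are illustrative choices, not part of the article.

    import math, random, statistics

    def batch_means_ci(data, n_batches=20, z=1.96):
        """Plain (non-adaptive) batch means confidence interval for the mean.
        A t quantile with n_batches - 1 degrees of freedom would be slightly
        more accurate than the fixed normal critical value used here."""
        b = len(data) // n_batches                          # batch size
        means = [statistics.fmean(data[i * b:(i + 1) * b]) for i in range(n_batches)]
        grand = statistics.fmean(means)
        half = z * statistics.stdev(means) / math.sqrt(n_batches)
        return grand, grand - half, grand + half

    # Toy usage on an autocorrelated AR(1) series (true mean 0).
    xs, x = [], 0.0
    for _ in range(20000):
        x = 0.9 * x + random.gauss(0.0, 1.0)
        xs.append(x)
    print(batch_means_ci(xs))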


Journal ArticleDOI
TL;DR: A classification of parallel tabu search metaheuristics is presented, based on the control and communication strategies used in the design of the parallel tabu search procedures and on how the search space is partitioned.
Abstract: In this paper we present a classification of parallel tabu search metaheuristics based, on the one hand, on the control and communication strategies used in the design of the parallel tabu search procedures, and on the other hand, on how the search space is partitioned. These criteria are then used to review the parallel tabu search implementations described in the literature. The taxonomy is further illustrated by the results of several parallelization implementations of a tabu search procedure for multicommodity location-allocation problems with balancing requirements.

104 citations


Journal ArticleDOI
TL;DR: The implementation of theoretical tests to assess the structural properties of simple or combined linear congruential and multiple recursive random number generators are discussed and a package implementing the so-called spectral and lattice tests for such generators is described.
Abstract: We discuss the implementation of theoretical tests to assess the structural properties of simple or combined linear congruential and multiple recursive random number generators. In particular, we describe a package implementing the so-called spectral and lattice tests for such generators. Our programs analyze the lattices generated by vectors of successive or nonsuccessive values produced by the generator, analyze the behavior of generators in high dimensions, and deal with moduli of practically unlimited sizes. We give numerical illustrations. We also explain how to build lattice bases in several different cases, e.g., for vectors of far-apart nonsuccessive values, or for sublattices generated by the set of periodic states or by a subcycle of a generator, and, for all these cases, how to increase the dimension of a (perhaps partially reduced) basis.

80 citations
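In dimension 2 the spectral test can be brute-forced for tiny generators, which makes the quantity being computed easy to see. The sketch below (Python) finds the shortest nonzero vector of the dual lattice of overlapping pairs produced by x_{i+1} = a·x_i mod m and reports the distance between adjacent lines covering those pairs. The generator parameters are arbitrary small examples; the package described in the article instead builds and reduces lattice bases so that it can handle high dimensions and very large moduli.

    import math

    def spectral_test_dim2(a, m):
        """Maximal distance between adjacent parallel lines covering all pairs
        (u_i, u_{i+1}) of the LCG x_{i+1} = a*x_i mod m.  It equals 1 over the
        length of the shortest nonzero vector of the dual lattice
        {(h1, h2) integer : h1 + a*h2 = 0 (mod m)}; found here by brute force."""
        best = float(m)                    # (m, 0) is always a dual vector
        for h2 in range(0, m + 1):
            r = (-a * h2) % m              # admissible h1 values closest to zero
            for h1 in (r, r - m):
                if h1 == 0 and h2 == 0:
                    continue
                best = min(best, math.hypot(h1, h2))
        return 1.0 / best

    print(spectral_test_dim2(23, 97))      # a reasonable small multiplier
    print(spectral_test_dim2(1, 97))       # a terrible one: all pairs on the diagonal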


Journal ArticleDOI
TL;DR: A tabu-search heuristic (TSH) for DPLP is defined, which employs short-term and longer-term memory features such as the use of an aspiration criterion, dynamic tabu list strategies, and other strategies for search intensification and diversification.
Abstract: We consider the dynamic plant layout problem (DPLP) in which the layout of a facility must be determined in each period of a finite planning horizon. We begin by reviewing the literature on DPLP, discussing various formulations of the problem which have appeared in the literature, as well as a number of exact and heuristic solution procedures which have been proposed for DPLP. We then define a tabu-search heuristic (TSH) for DPLP. The TSH employs short-term and longer-term memory features such as the use of an aspiration criterion, dynamic tabu list strategies, and other strategies for search intensification and diversification. Computational experience with the heuristic on a set of test problems appearing in the literature is reported. The TSH is seen to be extremely effective in obtaining high-quality solutions to the test problems. The TSH procedure produces new best-known solutions for over one-third of the test problems.

70 citations


Journal ArticleDOI
TL;DR: This article considers the case where the t values taken are not successive, but separated by lags that are chosen a priori, and gives lower bounds on the distance between hyperplanes.
Abstract: Usually, the t-dimensional spectral test for linear congruential generators examines the lattice structure of all the points formed by taking t successive values in the sequence. In this article, we consider the case where the t values taken are not successive, but separated by lags that are chosen a priori. For certain classes of linear congruential and multiple recursive generators, and for certain choices of the lags, we give lower bounds on the distance between hyperplanes. In some cases, those lower bounds are quite large, even in dimensions as small as t = 3. We give illustrations with specific classes of generators that have been proposed in the literature, and discuss the possible implications.

69 citations


Journal ArticleDOI
TL;DR: The classification problem of constructing a plane to separate the members of two sets can be formulated as a parametric bilinear program, where the subproblems represent alternative error functions of the misclassified points.
Abstract: The classification problem of constructing a plane to separate the members of two sets can be formulated as a parametric bilinear program. This approach was originally created to minimize the number of points misclassified. However, a novel interpretation of the algorithm is that the subproblems represent alternative error functions of the misclassified points. Each subproblem identifies a specified number of outliers and minimizes the magnitude of the errors on the remaining points. A tuning set is used to select the best result among the subproblems. A parametric Frank-Wolfe method was used to solve the bilinear subproblems. Computational results on a number of datasets indicate that the results compare very favorably with linear programming and heuristic search approaches. The algorithm can be used as part of a decision tree algorithm to create nonlinear classifiers.

67 citations


Journal ArticleDOI
TL;DR: Several issues concerning an analysis of large and sparse linear programming problems prior to solving them with an interior point based optimizer are addressed in this paper.
Abstract: Several issues concerning an analysis of large and sparse linear programming problems prior to solving them with an interior point based optimizer are addressed in this paper. Three types of presolve procedures are distinguished. Routines from the first class repeatedly analyze an LP problem formulation: eliminate empty or singleton rows and columns, look for primal and dual forcing or dominated constraints, tighten bounds for variables and shadow prices or just the opposite, relax them to find implied free variables. The second type of analysis aims at reducing the fill-in of the Cholesky factor of the normal equations matrix used to compute orthogonal projections and includes a heuristic for increasing the sparsity of the LP constraint matrix and a technique of splitting dense columns in it. Finally, routines from the third class detect, and remove, different linear dependencies of rows and columns in a constraint matrix. Computational results on problems from the Netlib collection, including some recently...
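Two of the simplest reductions from the first class of routines can be sketched directly. The Python fragment below repeatedly drops empty rows and substitutes out variables fixed by singleton equality rows; the data layout (rows as coefficient dictionaries), function name, and toy system are illustrative, and this is far less extensive than the presolve described in the article.

    def presolve(rows, rhs):
        """rows: list of equality constraints, each a dict {var: coeff};
        rhs: list of right-hand sides.  Repeatedly removes empty rows and
        fixes variables appearing in singleton rows, substituting them out."""
        fixed = {}
        changed = True
        while changed:
            changed = False
            new_rows, new_rhs = [], []
            for row, b in zip(rows, rhs):
                live = {j: a for j, a in row.items() if j not in fixed}
                b -= sum(a * fixed[j] for j, a in row.items() if j in fixed)
                if not live:                       # empty row: drop or detect infeasibility
                    if abs(b) > 1e-9:
                        raise ValueError("problem is infeasible")
                    changed = True
                    continue
                if len(live) == 1:                 # singleton row: fix its variable
                    (j, a), = live.items()
                    fixed[j] = b / a
                    changed = True
                    continue
                new_rows.append(live)
                new_rhs.append(b)
            rows, rhs = new_rows, new_rhs
        return rows, rhs, fixed

    # Toy system: x0 = 2;  x0 + x1 = 5;  0 = 0.
    print(presolve([{0: 1.0}, {0: 1.0, 1: 1.0}, {}], [2.0, 5.0, 0.0]))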

Journal ArticleDOI
TL;DR: This article addresses the problem of finding IISs having few rows in infeasible linear programs using a modified version of MINOS 5.4 called MINOS(IIS).
Abstract: Infeasibility is often encountered during the process of initial model formulation or reformulation, and it can be extremely difficult to diagnose the cause, especially in large linear programs. While explanation of the error is the domain of humans or artificially intelligent assistants, algorithmic assistance is available to isolate the infeasibility to a subset of the constraints, which helps speed the diagnosis. The isolation should be infeasible, of course, and should not contain any constraints which do not contribute to the infeasibility. Algorithms for finding such irreducible inconsistent systems (IISs) of constraints have been proposed, implemented, and tested in recent years. Experience with IISs shows that a further property of the isolation is highly desirable for easing diagnosis: the isolation should contain as few model rows as possible. This article addresses the problem of finding IISs having few rows in infeasible linear programs. Theory is developed, then implemented and tested on a ra...
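For context, the sketch below (Python) shows the classic deletion filter, which isolates one IIS but makes no attempt to keep the number of rows small, the stronger property this article pursues. The feasibility oracle and the toy interval constraints are placeholders, not MINOS(IIS).

    def deletion_filter(constraints, is_feasible):
        """Classic deletion filter: given an infeasible set of constraints and a
        feasibility oracle `is_feasible(subset)`, return one irreducible
        infeasible subsystem (IIS)."""
        iis = list(constraints)
        i = 0
        while i < len(iis):
            trial = iis[:i] + iis[i + 1:]
            if is_feasible(trial):
                i += 1                 # constraint i is needed for infeasibility; keep it
            else:
                iis = trial            # still infeasible without it; drop it
        return iis

    # Toy oracle for interval constraints on a single variable x,
    # encoded as (lower, upper) pairs with None meaning "no bound".
    def feasible(cons):
        lo = max((l for l, h in cons if l is not None), default=float("-inf"))
        hi = min((h for l, h in cons if h is not None), default=float("inf"))
        return lo <= hi

    cons = [(0, None), (None, 10), (12, None), (None, 20)]   # x>=0, x<=10, x>=12, x<=20
    print(deletion_filter(cons, feasible))                   # -> [(None, 10), (12, None)]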

Journal ArticleDOI
TL;DR: It is proved that a TKP can be solved by the proposed pseudopolynomial-time algorithm in Θ(nH) time, where n is the number of nodes in T and H is the given capacity.
Abstract: The Tree Knapsack Problem (TKP) can be regarded as a 0–1 knapsack problem on a rooted tree T such that if a node is selected into a knapsack, then all nodes on the path from the selected node to the root node must also be selected into the knapsack. In this paper, we develop a pseudopolynomial-time algorithm for TKP, the depth-first dynamic programming algorithm. We prove that a TKP can be solved by our algorithm in θ(nH) time, where n is the number of nodes in T and H is the given capacity. We also report the computational results of the depth-first dynamic programming algorithm.
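A depth-first dynamic program along the lines described can be sketched as follows (Python): number the nodes in preorder, and at each position either pack the node and move to the next position, or reject it and jump past its entire subtree, giving O(nH) time and space. The data layout and toy instance are illustrative, and the details may differ from the authors' algorithm.

    def tree_knapsack(children, weight, profit, H, root=0):
        """Tree Knapsack: a node may be packed only if every node on its path to
        the root is packed.  DP over a depth-first (preorder) numbering."""
        order, size = [], {}
        def dfs(v):                              # preorder numbering and subtree sizes
            order.append(v)
            size[v] = 1
            for c in children.get(v, []):
                size[v] += dfs(c)
            return size[v]
        dfs(root)
        n = len(order)
        nxt = {pos: pos + size[v] for pos, v in enumerate(order)}   # skip v's subtree

        # f[pos][h]: best profit from preorder positions pos..n-1 with capacity h,
        # given that the parent of order[pos] is already packed.
        f = [[0] * (H + 1) for _ in range(n + 1)]
        for pos in range(n - 1, -1, -1):
            v = order[pos]
            w, p = weight[v], profit[v]
            for h in range(H + 1):
                skip = f[nxt[pos]][h]
                take = f[pos + 1][h - w] + p if w <= h else -1
                f[pos][h] = max(skip, take)
        return f[0][H]

    # Toy tree: 0 is the root with children 1 and 2; node 1 has child 3.
    children = {0: [1, 2], 1: [3]}
    weight = {0: 2, 1: 3, 2: 4, 3: 1}
    profit = {0: 1, 1: 6, 2: 9, 3: 5}
    print(tree_knapsack(children, weight, profit, 7))   # prints 12 (pack nodes 0, 1, 3)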

Journal ArticleDOI
TL;DR: This article proposes a heuristic method for the PFLP problem based on genetic algorithms (GA), an adaptive, robust, search and optimization technique based on the principles of natural genetics and survival of the fittest and emphasizes the notion of masking to preserve optimal subsequences in chromosomes and prevent their disruption during crossover and mutation.
Abstract: Cartographic label placement is one of the most time-consuming tasks in the production of high quality maps and other high quality graphical displays. It is essential that text labels used to identify various features and objects be placed in a clear and unobscured manner. In this article we are concerned with the placement of labels for point features. Specifically, the point feature label placement (PFLP) problem is the problem of placing text labels to point features on a map, graph, or diagram in such a manner so as to maximize legibility. The PFLP problem has been shown to be NP-hard. We propose a heuristic method for the PFLP problem based on genetic algorithms (GA), an adaptive, robust, search and optimization technique based on the principles of natural genetics and survival of the fittest. In particular we emphasize the notion of masking to preserve optimal subsequences in chromosomes and prevent their disruption during crossover and mutation. We ran our algorithms on randomly placed point featur...

Journal ArticleDOI
TL;DR: This work describes two approaches for computing gradients of partially separable functions via automatic differentiation, providing code for the efficient computation of the gradient without the need for tedious hand-coding.
Abstract: The accurate and efficient computation of gradients for partially separable functions is central to the solution of large-scale optimization problems, because these functions are ubiquitous in large-scale problems. We describe two approaches for computing gradients of partially separable functions via automatic differentiation. In our experiments we employ the ADIFOR (automatic differentiation of Fortran) tool and the SparsLinC (sparse linear combination) library. We use applications from the MINPACK-2 test problem collection to compare the numerical reliability and computational efficiency of these approaches with hand-coded derivatives and approximations based on differences of function values. Our conclusion is that automatic differentiation is the method of choice, providing code for the efficient computation of the gradient without the need for tedious hand-coding.
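The idea can be miniaturized: the Python sketch below implements forward-mode automatic differentiation with dual numbers and assembles the gradient of a partially separable function element by element, one seed per variable of each element function. This is only a toy stand-in for ADIFOR and SparsLinC, which operate on Fortran source; all names and the example function are illustrative.

    class Dual:
        """Toy forward-mode AD number: value and derivative w.r.t. one seed."""
        def __init__(self, val, dval=0.0):
            self.val, self.dval = val, dval
        def __add__(self, o):
            return Dual(self.val + o.val, self.dval + o.dval)
        def __sub__(self, o):
            return Dual(self.val - o.val, self.dval - o.dval)
        def __mul__(self, o):
            return Dual(self.val * o.val, self.dval * o.val + self.val * o.dval)

    def gradient_partially_separable(elements, x):
        """elements: list of (f_i, indices) where f_i depends only on the listed
        variables.  The full gradient is accumulated from the small element
        gradients, one forward pass per (element, variable) pair."""
        g = [0.0] * len(x)
        for f_i, idx in elements:
            for j in idx:
                args = [Dual(x[i], 1.0 if i == j else 0.0) for i in idx]
                g[j] += f_i(*args).dval
        return g

    # f(x) = (x0 - x1)^2 + (x1 - x2)^2, a typical partially separable sum.
    elements = [(lambda a, b: (a - b) * (a - b), (0, 1)),
                (lambda b, c: (b - c) * (b - c), (1, 2))]
    print(gradient_partially_separable(elements, [3.0, 1.0, 0.0]))   # [4.0, -2.0, -2.0]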

Journal ArticleDOI
TL;DR: The authors’ experience solving large multicommodity flow problems with an embedded network simplex algorithm augmented with a fast-start heuristic for choosing an initial basis is described, and the efficacy of the heuristic and of the embedded network simplex method is demonstrated on large publicly available multicommodity flow problems.
Abstract: This article describes the authors’ experience solving large multicommodity flow problems with an embedded network simplex algorithm augmented with a fast-start heuristic for choosing an initial basis. The heuristic makes successive capacity allocations in an attempt to find a feasible initial basis. Our implementation of the heuristic makes use of piece-wise linear convex costs. The efficacy of our heuristic, and of the embedded network simplex method, is demonstrated on large publicly available multicommodity flow problems. Comparisons with other published computational results are given.

Journal ArticleDOI
TL;DR: This commentary hopes to complement the survey by focusing on some aspects of GAs that Reeves did not have an opportunity to comment on in depth.
Abstract: The feature article by Reeves presents an excellent survey of genetic algorithms (GAs). It covers the history of GAs and the application of GAs to combinatorial problems, while providing useful background and a balanced operations research (OR) perspective. In this commentary, we hope to complement the survey by focusing on some aspects of GAs that Reeves did not have an opportunity to comment on in depth.

Journal ArticleDOI
TL;DR: This article proposes a simple new scaling procedure for nonprobability functions that is based on transforming the given function into a probability density function or a probability mass function and transforming the point of inversion to the mean.
Abstract: It is known that probability density functions and probability mass functions usually can be calculated quite easily by numerically inverting their transforms (Laplace transforms and generating functions, respectively) with the Fourier-series method. Other more general functions can be substantially more difficult to invert, because the aliasing and roundoff errors tend to be more difficult to control. In this article we propose a simple new scaling procedure for nonprobability functions that is based on transforming the given function into a probability density function or a probability mass function and transforming the point of inversion to the mean. This new scaling is even useful for probability functions, because it enables us to compute very small values at large arguments with controlled relative error.

Journal ArticleDOI
TL;DR: An approach for implementing a strong cutting plane method that employs inequalities known to be valid for the line-balancing polytope, including separation algorithms, is described.
Abstract: This article presents a strong cutting plane method implemented by branch and cut to solve the assembly line workload smoothing problem which minimizes the maximum idle time for a specified number of stations to balance workloads assigned to all stations. The approach exploits a problem formulation that embeds the assembly line-balancing polytope. Thus, inequalities that are known to be valid for the line-balancing polytope are also valid for workload smoothing. This article describes an approach for implementing a strong cutting plane method that employs such valid inequalities, including separation algorithms. Preprocessing methods are described to decompose and reduce a precedence graph as well as to estimate bounds on parameters that are involved in valid inequalities. Finally, computational experience that evaluates the efficacy of the approach is presented.

Journal ArticleDOI
TL;DR: In this commentary, the author's experience with GAs is as a practitioner and software developer, and it is from this perspective that he will comment on Reeves' article.
Abstract: The feature article by Colin Reeves provides a useful introduction to genetic algorithms (GAs) for the operations researcher. In this commentary, I will elaborate on several of Reeves' points and introduce some others. My experience with GAs is as a practitioner and software developer, and it is from this perspective that I will comment on Reeves' article.

Journal ArticleDOI
TL;DR: A generalized version of the univariate change-of-variable technique for transforming continuous random variables and an implementation of the theorem in a computer algebra system that automates the technique are presented.
Abstract: We present a generalized version of the univariate change-of-variable technique for transforming continuous random variables. Extending a theorem from Casella and Berger [1990. Statistical Inference, Wadsworth and Brooks/Cole, Inc., Pacific Grove, CA] for many-to-1 transformations, we consider more general univariate transformations. Specifically, the transformation can range from 1-to-1 to many-to-1 on various subsets of the support of the random variable of interest. We also present an implementation of the theorem in a computer algebra system that automates the technique. Some examples demonstrate the theorem's application.
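A worked numeric instance of the situation the theorem covers: for X ~ Uniform(-1, 2), the transformation Y = X² is 2-to-1 on 0 < y < 1 and 1-to-1 on 1 < y < 4. The Python sketch below applies the change-of-variable formula by summing f_X(x)·|dx/dy| over the preimages of y; it is a plain numeric illustration, not the computer algebra implementation described in the article.

    import math

    def f_X(x):
        return 1.0 / 3.0 if -1.0 < x < 2.0 else 0.0    # X ~ Uniform(-1, 2)

    def f_Y(y):
        """Density of Y = X**2: sum f_X(x) * |dx/dy| over every preimage x of y.
        The map is 2-to-1 on 0 < y < 1 (preimages +/-sqrt(y)) and 1-to-1 on 1 < y < 4."""
        if y <= 0.0:
            return 0.0
        r = math.sqrt(y)
        jac = 1.0 / (2.0 * r)                          # |dx/dy| for x = +/-sqrt(y)
        return (f_X(r) + f_X(-r)) * jac

    # Matches the piecewise answer 1/(3*sqrt(y)) on (0,1) and 1/(6*sqrt(y)) on (1,4).
    for y in (0.25, 2.25):
        print(y, f_Y(y))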

Journal ArticleDOI
TL;DR: Colin Reeves' feature article provides an insightful survey of many of the applications of genetic algorithms (GAs) to the solution of current problems in operations research.
Abstract: Colin Reeves' feature article provides an insightful survey of many of the applications of genetic algorithms (GAs) to the solution of current problems in operations research. In some cases, GAs were successful, and in others they were not. Reeves correctly does not offer GAs as a panacea, but instead offers considerable insight into when and why GAs work. The difference between a successful application of a GA and an unsuccessful one often lies in the encoding. Five properties of an ideal encoding are listed.

Journal ArticleDOI
TL;DR: It is proved that the problem of nonpreemptively scheduling periodic tasks on a minimum number of identical processors is NP-hard in the strong sense, and an approximation algorithm is proposed.
Abstract: We consider the problem of nonpreemptively scheduling periodic tasks on a minimum number of identical processors, assuming that some slack is allowed in the time between successive executions of a periodic task. We prove that the problem is NP-hard in the strong sense. Necessary and sufficient conditions are derived for scheduling two periodic tasks on a single processor, and for combining two periodic tasks into one larger task. Based on these results, we propose an approximation algorithm.

Journal ArticleDOI
TL;DR: This article describes a variation of the branch and bound method for solving a clustering problem stated as a partitioning problem on edge-weighted graphs that employs the transformation of a subproblem according to some heuristic solution.
Abstract: This article describes a variation of the branch and bound method for solving a clustering problem stated as a partitioning problem on edge-weighted graphs. The key features of the approach are two. First, it employs the transformation of a subproblem according to some heuristic solution. For this, two clustering heuristics, constructive and iterative improvement, are adopted. Second, the lower bound computation is based on the use of the inequalities with zero right-hand side coefficient, each defining a facet of the polytope related to the transformed subproblem. A procedure is described for covering negative edges of the subproblem graph by cycles and paths inducing such inequalities. The objective function is modified each time by adding the left-hand side of the selected inequality to it. The lower bound is obtained by summing the weights of uncovered negative edges. The algorithm can be used for solving clustering problems in the area of qualitative data analysis. Computational results on both real ...

Journal ArticleDOI
TL;DR: Empirical outcomes show the procedure is significantly superior to advanced branch-and-bound methods (previously established to be the most efficient knapsack solution procedures), obtaining solutions several orders of magnitude faster for hard problems.
Abstract: We present a new and highly efficient algorithm for the integer knapsack problem based on a special strategy for aggregating integer-valued equations. Employing a new theorem for creating a single equation with the same nonnegative integer solution set as a system of original equations, we transform the integer knapsack problem into an equivalent problem of determining the consistency of an aggregated equation for a parameterized right hand side. This last problem is solved by a newly developed algorithm with complexity O(min(n α1, n + α1²)), where n is the number of variables and α1 is the smallest coefficient in the aggregated equation. Empirical outcomes show our procedure is significantly superior to advanced branch-and-bound methods (previously established to be the most efficient knapsack solution procedures), obtaining solutions several orders of magnitude faster for hard problems.
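The consistency question itself ("for which right-hand sides b does the equation have a nonnegative integer solution?") can be answered for all b at once by the classic residue-class shortest-path construction sketched below in Python. This is a standard technique close in spirit to, but not the same as, the article's algorithm; the aggregation theorem is not reproduced, and the example coefficients are arbitrary.

    import heapq

    def reachability_table(coeffs):
        """For sum(a_i * x_i) = b with x_i >= 0 integer, compute, for every
        residue r mod a1 (a1 = smallest coefficient), the smallest attainable
        value congruent to r.  Then b is attainable iff b >= table[b % a1]."""
        a1 = min(coeffs)
        dist = [float("inf")] * a1
        dist[0] = 0
        heap = [(0, 0)]
        while heap:                                   # Dijkstra over residue classes
            d, r = heapq.heappop(heap)
            if d > dist[r]:
                continue
            for a in coeffs:
                nr, nd = (r + a) % a1, d + a
                if nd < dist[nr]:
                    dist[nr] = nd
                    heapq.heappush(heap, (nd, nr))
        return a1, dist

    def consistent(coeffs, b):
        a1, dist = reachability_table(coeffs)
        return b >= dist[b % a1]

    # 6x + 10y + 15z = b: prints the non-representable b up to 30,
    # i.e. [1, 2, 3, 4, 5, 7, 8, 9, 11, 13, 14, 17, 19, 23, 29].
    print([b for b in range(1, 31) if not consistent([6, 10, 15], b)])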

Journal ArticleDOI
TL;DR: This article attempts to address the question of how parallel can the MIP branch and bound algorithm be when communication is more rudimentary by comparing two different implementations of MIP Branch and bound on CM-5 systems with between 4 and 64 processors.
Abstract: Consider the classical branch and bound algorithm for mixed integer programming (MIP). Parallel implementations of this method are known to be viable and reasonably scalable on systems with sophisticated, high-performance interprocessor communication capabilities. But how parallel can the algorithm be when communication is more rudimentary? This article attempts to address this question by comparing two different implementations of MIP branch and bound on CM-5 systems with between 4 and 64 processors, using a selection of MIPLIB test problems. The first code, CMMIP, exploits many of the advanced features of the CM-5 network, whereas the second, called LCMIP, has a minimal, relatively infrequent communication pattern. Generally speaking, the performance of the low-communication code is competitive for relatively difficult problems and/or small processor configurations, but it appears that advanced communications features are essential to extract the maximum degree of parallelism from a given problem instance.

Journal ArticleDOI
TL;DR: It is shown that the idle period of the server and the stationary waiting time of an admitted job are of phase type and that the departure process can be modeled using a versatile Markovian point process.
Abstract: In this article we consider a finite capacity queuing model in which jobs (or customers) arrive according to a nonrenewal process. The jobs are processed by a single server in groups of varying size, between a predetermined threshold value and the buffer size. A dynamic probability rule is associated with the service mechanism. The service times are assumed to be exponential, with a parameter that may depend on the group size. The steady-state analysis of this queuing model is performed using Markov chain theory. It is shown that the idle period of the server and the stationary waiting time of an admitted job are of phase type and that the departure process can be modeled using a versatile Markovian point process. Efficient algorithms for computing various performance measures such as throughput, mean number served, job overflow probability, server idle probability, the stationary mean waiting time, and the stationary mean idle time of the server, useful in qualitative and quantitative interpretations, are developed.

Journal ArticleDOI
TL;DR: GAs should rather be viewed as metaheuristics whose distinctive feature is the use of crossover; cases in which GAs might be appropriate and some key issues are discussed.
Abstract: Colin Reeves has done an excellent job of surveying the basic concepts and applications of genetic algorithms (GAs) in the area of operations research. GAs do have a tremendous appeal, based as they are on the metaphor of Darwinian evolution and Herbert Spencer's famous phrase, “survival of the fittest”—surely one of the most widely expounded scientific notions ever. GAs have sometimes been portrayed as being solely a form of a knowledge-poor optimization technique. They are not general-purpose optimizers, but they are very diverse and problem-specific. GAs should rather be viewed as metaheuristics whose distinctive feature is the use of crossover. Cases in which GAs might be appropriate and some key issues are discussed.

Journal ArticleDOI
TL;DR: The power series algorithm can be used for each Markov process with a single recurrent class to solve finite state processes, which is illustrated with the analysis of a bounded stochastic Petri net model.
Abstract: The power series algorithm has been developed as numerical procedure for solving queueing models. This paper shows that it can be used for each Markov process with a single recurrent class. This ap...

Journal ArticleDOI
TL;DR: This article discusses another class of problems that are structurally less complicated than the general earliness-tardiness problem and to which the optimality principle of the dynamic programming algorithm applies.
Abstract: Discusses the existence of another class of problems that are structurally less complicated than the general earliness-tardiness problem. Details of common due date problems; Logic behind Emmons' matching algorithm; List of earliness-tardiness problems to which the optimality principle of the dynamic algorithm applies; Properties that apply to the variants of dynamic programming.