
Showing papers in "Annals of Operations Research in 1994"


Journal ArticleDOI
TL;DR: Techniques for optimizing stochastic discrete-event systems via simulation, including perturbation analysis, the likelihood ratio method, and frequency domain experimentation are reviewed.
Abstract: We review techniques for optimizing stochastic discrete-event systems via simulation. We discuss both the discrete parameter case and the continuous parameter case, but concentrate on the latter which has dominated most of the recent research in the area. For the discrete parameter case, we focus on the techniques for optimization from a finite set: multiple-comparison procedures and ranking-and-selection procedures. For the continuous parameter case, we focus on gradient-based methods, including perturbation analysis, the likelihood ratio method, and frequency domain experimentation. For illustrative purposes, we compare and contrast the implementation of the techniques for some simple discrete-event systems such as the (s, S) inventory system and the GI/G/1 queue. Finally, we speculate on future directions for the field, particularly in the context of the rapid advances being made in parallel computing.

444 citations
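
As a concrete illustration of the gradient-based methods surveyed above, the sketch below estimates the derivative of the mean waiting time in a GI/G/1 queue via infinitesimal perturbation analysis applied to the Lindley recursion. The exponential distributions, the scale parameter theta and all numerical values are illustrative assumptions, not the paper's experimental setup.

```python
# A minimal IPA sketch for the GI/G/1 queue, assuming Poisson arrivals
# and service times S = theta * X with X ~ Exp(1), so dS/dtheta = S/theta.
import random

def ipa_gig1(theta, n_customers=100_000, lam=0.8, seed=1):
    rng = random.Random(seed)
    w, dw = 0.0, 0.0              # waiting time and its derivative dW/dtheta
    total_w, total_dw = 0.0, 0.0
    for _ in range(n_customers):
        s = theta * rng.expovariate(1.0)   # service time, dS/dtheta = s/theta
        a = rng.expovariate(lam)           # next interarrival time
        x = w + s - a                      # Lindley recursion input
        if x > 0.0:
            dw = dw + s / theta            # derivative propagates through max(0, .)
            w = x
        else:
            dw = 0.0                       # the max at 0 kills the derivative
            w = 0.0
        total_w += w
        total_dw += dw
    return total_w / n_customers, total_dw / n_customers

mean_wait, d_mean_wait = ipa_gig1(theta=1.0)
print(mean_wait, d_mean_wait)   # estimates of E[W] and dE[W]/dtheta
```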


Journal ArticleDOI
Osman Balci
TL;DR: Current software VV&T techniques and current simulation model VV&T techniques are surveyed, and how they can all be applied throughout the life cycle of a simulation study is described.
Abstract: Life cycle validation, verification, and testing (VV&T) is extremely important for the success of a simulation study. This paper surveys current software VV&T techniques and current simulation model VV&T techniques and describes how they can all be applied throughout the life cycle of a simulation study. The processes and credibility assessment stages of the life cycle are described and the applicability of the VV&T techniques for each stage is stated. A glossary is provided to explicitly define important terms and VV&T techniques.

420 citations


Journal ArticleDOI
TL;DR: In this article, the authors examine practical ways of generating (deterministic approximations to) such uniform variates on a computer and compare them in terms of ease of implementation, efficiency, theoretical support, and statistical robustness.
Abstract: In typical stochastic simulations, randomness is produced by generating a sequence of independent uniform variates (usually real-valued between 0 and 1, or integer-valued in some interval) and transforming them in an appropriate way. In this paper, we examine practical ways of generating (deterministic approximations to) such uniform variates on a computer. We compare them in terms of ease of implementation, efficiency, theoretical support, and statistical robustness. We look in particular at several classes of generators, such as linear congruential, multiple recursive, digital multistep, Tausworthe, lagged-Fibonacci, generalized feedback shift register, matrix, linear congruential over fields of formal series, and combined generators, and show how all of them can be analyzed in terms of their lattice structure. We also mention other classes of generators, like non-linear generators, discuss other kinds of theoretical and empirical statistical tests, and give a bibliographic survey of recent papers on the subject.

235 citations
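
As a concrete instance of one generator class surveyed above, here is a minimal linear congruential generator. The modulus and multiplier are the classic "minimal standard" constants; any such choice should ultimately be judged by the lattice-structure and statistical criteria the paper discusses.

```python
# A minimal sketch of a linear congruential generator (LCG):
#   x_{n+1} = (a * x_n + c) mod m, output u_n = x_n / m.
def lcg(seed, m=2**31 - 1, a=16807, c=0):
    """Yield deterministic approximations to U(0,1) variates."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m

gen = lcg(seed=12345)
print([next(gen) for _ in range(5)])
```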


Journal ArticleDOI
TL;DR: In this paper, the authors studied the Choquet integral with respect to non-additive probabilities and Capacities of set functions in the case of a finite domain and showed that it is an isomorphism between nonadditive measures on the original space and additive ones on a larger space (of events).
Abstract: This paper studies some new properties of set functions (and, in particular, “non-additive probabilities” or “capacities”) and the Choquet integral with respect to such functions, in the case of a finite domain. We use an isomorphism between non-additive measures on the original space (of states of the world) and additive ones on a larger space (of events), and embed the space of real-valued functions on the former in the corresponding space on the latter. This embedding gives rise to a number of results. We also discuss the interpretation of these results and the new light they shed on the theory of expected utility maximization with respect to non-additive measures.

200 citations
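
The central object here, the Choquet integral on a finite domain, is easy to compute directly: sort the outcomes in decreasing order and weight each by the marginal capacity of its upper level set. The sketch below does exactly that; the two-state capacity is an arbitrary illustrative choice.

```python
# A minimal sketch of the Choquet integral of f with respect to a
# capacity v on a finite state space (v maps frozensets to [0, 1],
# with v(empty) = 0 and v(whole space) = 1).
def choquet(f, v, states):
    order = sorted(states, key=lambda s: f[s], reverse=True)
    total, prev = 0.0, 0.0
    upper = set()
    for s in order:
        upper.add(s)
        cur = v[frozenset(upper)]
        total += f[s] * (cur - prev)   # weight = marginal capacity of the upper set
        prev = cur
    return total

states = ["a", "b"]
f = {"a": 10.0, "b": 4.0}
v = {frozenset(): 0.0, frozenset({"a"}): 0.3,
     frozenset({"b"}): 0.5, frozenset({"a", "b"}): 1.0}
print(choquet(f, v, states))   # 10*0.3 + 4*(1.0 - 0.3) = 5.8
```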


Journal ArticleDOI
TL;DR: It follows that unless the uninsured position is Bickel and Lehmann more dispersed than the insured position, the existing contract can be improved so as to raise the expected utility of both parties, regardless of their (concave) utility functions.
Abstract: For every integrable allocation $(X_1, X_2, \ldots, X_n)$ of a random endowment $Y = \sum_{i=1}^{n} X_i$ among $n$ agents, there is another allocation $(X_1^*, X_2^*, \ldots, X_n^*)$ such that for every $1 \le i \le n$, $X_i^*$ is a nondecreasing function of $Y$ (or, $(X_1^*, X_2^*, \ldots, X_n^*)$ are co-monotone) and $X_i^*$ dominates $X_i$ by Second Degree Dominance. If $(X_1^*, X_2^*, \ldots, X_n^*)$ is a co-monotone allocation of $Y = \sum_{i=1}^{n} X_i^*$, then for every $1 \le i \le n$, $Y$ is more dispersed than $X_i^*$ in the sense of the Bickel and Lehmann stochastic order. To illustrate the potential use of this concept in economics, consider insurance markets. It follows that unless the uninsured position is Bickel and Lehmann more dispersed than the insured position, the existing contract can be improved so as to raise the expected utility of both parties, regardless of their (concave) utility functions.

160 citations


Journal ArticleDOI
TL;DR: This work investigates two versions of multiple objective minimum spanning tree problems defined on a network with vectorial weights and uses neighbourhood search to determine a sequence of solutions with the property that the distance between two consecutive solutions is less than a given accuracy.
Abstract: We investigate two versions of multiple objective minimum spanning tree problems defined on a network with vectorial weights. First, we want to minimize the maximum of Q linear objective functions taken over the set of all spanning trees (max-linear spanning tree problem, ML-ST). Secondly, we look for efficient spanning trees (multi-criteria spanning tree problem, MC-ST). Problem ML-ST is shown to be NP-complete. An exact algorithm which is based on ranking is presented. The procedure can also be used as an approximation scheme. For solving the bicriterion MC-ST, which in the worst case may have an exponential number of efficient trees, a two-phase procedure is presented. Based on the computation of extremal efficient spanning trees we use neighbourhood search to determine a sequence of solutions with the property that the distance between two consecutive solutions is less than a given accuracy.

156 citations
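
The first phase of a two-phase method of this kind typically finds the supported (extremal) efficient trees by scalarizing the two weight vectors and solving ordinary minimum spanning tree problems. The sketch below does this with Kruskal's algorithm; the toy graph and the grid of weights are illustrative assumptions, not the paper's test data.

```python
# A minimal sketch of phase 1 for the bicriterion MC-ST: run Kruskal's
# algorithm on the scalarized weight lam*w1 + (1-lam)*w2 for several lam.
def kruskal(n, edges, key):
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    tree = []
    for u, v, w1, w2 in sorted(edges, key=key):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v, w1, w2))
    return tree

edges = [(0, 1, 1, 5), (1, 2, 2, 3), (0, 2, 4, 1), (1, 3, 3, 2), (2, 3, 1, 4)]
for lam in (0.0, 0.25, 0.5, 0.75, 1.0):
    t = kruskal(4, edges, key=lambda e: lam * e[2] + (1 - lam) * e[3])
    print(lam, (sum(e[2] for e in t), sum(e[3] for e in t)))
```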


Journal ArticleDOI
TL;DR: This paper traces the progress achieved so far towards the creation of ME and MRE product-form approximations and related algorithms for the performance analysis of general Queueing Network Models (QNMs) and indicates potential research extensions in the area.
Abstract: Over recent years it has become increasingly evident that “classical” queueing theory cannot easily handle complex queueing systems and networks with many interacting elements. As a consequence, alternative ideas and tools, analogous to those applied in the field of Statistical Mechanics, have been proposed in the literature. In this context, the principles of Maximum Entropy (ME) and Minimum Relative Entropy (MRE), a generalisation, provide consistent methods of inference for characterising the form of an unknown but true probability distribution, based on information expressed in terms of true expected values that are known to exist or when, in addition, there exists a prior estimate of the unknown distribution. This paper traces the progress achieved so far towards the creation of ME and MRE product-form approximations and related algorithms for the performance analysis of general Queueing Network Models (QNMs) and indicates potential research extensions in the area.

147 citations
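
A standard textbook instance of ME inference in this setting: if the only information about a queue-length distribution on {0, 1, 2, ...} is its mean, the maximum-entropy solution is geometric. The sketch below uses that closed form; the mean value is an illustrative assumption.

```python
# A minimal sketch of maximum-entropy inference of a queue-length
# distribution given only the mean nbar: the ME solution is
# p(n) = (1 - x) * x**n with x = nbar / (1 + nbar).
def me_queue_length(nbar, n_max=20):
    x = nbar / (1.0 + nbar)
    return [(1.0 - x) * x ** n for n in range(n_max + 1)]

p = me_queue_length(nbar=4.0)
print(sum(p), sum(n * pn for n, pn in enumerate(p)))  # ~1 and ~4 (truncated)
```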


Journal ArticleDOI
TL;DR: This paper surveys topics that presently define the state of the art in parallel simulation and includes discussions on new protocols, mathematical performance analysis, time parallelism, hardware support for parallel simulation, load balancing algorithms, and dynamic memory management for optimistic synchronization.
Abstract: This paper surveys topics that presently define the state of the art in parallel simulation. Included in the tutorial are discussions on new protocols, mathematical performance analysis, time parallelism, hardware support for parallel simulation, load balancing algorithms, and dynamic memory management for optimistic synchronization.

142 citations


Journal ArticleDOI
Richard E. Nance
TL;DR: This evolutionary overview describes the principles underlying the Conical Methodology, the environment structured according to these principles, and the capabilities for large complex simulation modeling tasks not provided in textbook descriptions.
Abstract: Originating with ideas generated in the mid-1970s, the Conical Methodology (CM) is the oldest procedural approach to simulation model development. This evolutionary overview describes the principles underlying the CM, the environment structured according to these principles, and the capabilities for large complex simulation modeling tasks not provided in textbook descriptions. The CM is an object-oriented, hierarchical specification language that iteratively prescribes object attributes in a definitional phase that is top-down, followed by a specification phase that is bottom-up. The intent is to develop successive model representations at various levels of abstraction that can be diagnosed for correctness, completeness, consistency, and other characteristics prior to implementation as an executable program. Related or competitive approaches throughout the evolutionary period are categorized as emanating from: artificial intelligence, mathematical programming, software engineering, conceptual modeling, systems theory, logic-based theory, or graph theory. Work in each category is briefly described.

105 citations


Journal ArticleDOI
Ward Whitt
TL;DR: With all procedures proposed here, the approximate variability parameter of the departure process of each class is a linear function of the variability parameters of the arrival processes of all the classes served at that queue, thus ensuring that the final arrival variability parameters in a general open network can be calculated by solving a system of linear equations.
Abstract: Methods are developed for approximately characterizing the departure process of each customer class from a multi-class single-server queue with unlimited waiting space and the first-in-first-out service discipline. The model is $\Sigma(GI_i/GI_i)/1$ with a non-Poisson renewal arrival process and a non-exponential service-time distribution for each class. The methods provide a basis for improving parametric-decomposition approximations for analyzing non-Markov open queueing networks with multiple classes. For example, parametric-decomposition approximations are used in the Queueing Network Analyzer (QNA). The specific approximations here extend ones developed by Bitran and Tirupati [5]. For example, the effect of class-dependent service times is considered here. With all procedures proposed here, the approximate variability parameter of the departure process of each class is a linear function of the variability parameters of the arrival processes of all the classes served at that queue, thus ensuring that the final arrival variability parameters in a general open network can be calculated by solving a system of linear equations.

84 citations
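
To make the "linear function" point concrete, the sketch below wires up a two-queue tandem with a standard QNA-style departure-variability formula, c_d^2 = rho^2 c_s^2 + (1 - rho^2) c_a^2, used here purely for illustration (it is not the class-dependent approximation of this paper). The arrival variability parameters then solve a small linear system.

```python
# A minimal sketch of parametric decomposition on a two-queue tandem:
# unknowns are the squared coefficients of variation ca2[0], ca2[1]
# of the arrival processes at queues 1 and 2.
import numpy as np

rho = [0.7, 0.5]          # utilisations (illustrative)
cs2 = [1.5, 0.8]          # service-time SCVs (illustrative)
ca2_external = 1.2        # SCV of the external arrival process

# Equations:  ca2[0] = ca2_external
#             ca2[1] = rho1^2 * cs2[0] + (1 - rho1^2) * ca2[0]
A = np.array([[1.0, 0.0],
              [-(1.0 - rho[0] ** 2), 1.0]])
b = np.array([ca2_external, rho[0] ** 2 * cs2[0]])
print(np.linalg.solve(A, b))
```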


Journal ArticleDOI
TL;DR: This work introduces a formulation for the constrained minimum weight Hamiltonian path problem, and defines Lagrangian relaxation for obtaining strong lower bounds on the makespan, and valid cuts for further tightening of the lower bounds.
Abstract: The sequential ordering problem with precedence relationships was introduced in Escudero [7]. It has a broad range of applications, mainly in production planning for manufacturing systems. The problem consists of finding a minimum weight Hamiltonian path on a directed graph with weights on the arcs, subject to precedence relationships among nodes. Nodes represent jobs (to be processed on a single machine), arcs represent sequencing of the jobs, and the weights are sums of processing and setup times. We introduce a formulation for the constrained minimum weight Hamiltonian path problem. We also define Lagrangian relaxation for obtaining strong lower bounds on the makespan, and valid cuts for further tightening of the lower bounds. Computational experience is given for real-life cases already reported in the literature.

Journal ArticleDOI
TL;DR: Several update rules for non-additive probabilities, among them the Dempster-Shafer rule for belief functions and certain update rules in the spirit of Bayesian statistics with multiple prior probabilities are reviewed, investigated and compared with each other.
Abstract: Several update rules for non-additive probabilities, among them the Dempster-Shafer rule for belief functions and certain update rules in the spirit of Bayesian statistics with multiple prior probabilities, are reviewed, investigated and compared with each other. This is done within the unifying framework of general, non-additive measure and integration theory. The methods presented here are capable of generalizing the conditional expectation of random variables to the submodular or supermodular case, at least when the given algebra is finite.
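
One of the rules compared here, the Dempster-Shafer rule of conditioning for a belief function Bel on a finite frame, has the closed form Bel(A|B) = (Bel(A ∪ Bᶜ) − Bel(Bᶜ)) / (1 − Bel(Bᶜ)). The sketch below evaluates it; the example belief function is an arbitrary illustrative choice.

```python
# A minimal sketch of Dempster-Shafer conditioning on a finite frame.
def ds_update(bel, frame, A, B):
    Bc = frame - B
    denom = 1.0 - bel[frozenset(Bc)]
    return (bel[frozenset(A | Bc)] - bel[frozenset(Bc)]) / denom

frame = frozenset({1, 2, 3})
# Belief function induced by masses m({1}) = 0.4, m({2,3}) = 0.3, m(frame) = 0.3
masses = {frozenset({1}): 0.4, frozenset({2, 3}): 0.3, frame: 0.3}
bel = {}
for k in range(2 ** 3):
    S = frozenset(x for i, x in enumerate(sorted(frame)) if k >> i & 1)
    bel[S] = sum(m for F, m in masses.items() if F <= S)

print(ds_update(bel, set(frame), A={1}, B={1, 2}))   # 0.4 here
```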

Journal ArticleDOI
TL;DR: This paper presents in a unified framework a survey of some results related to Choquet Expected Utility models, a promising class of models introduced separately by Quiggin, Yaari and Schmeidler which allow one to separate attitudes towards uncertainty (or risk) from attitudes towards wealth, while respecting the first order stochastic dominance axiom.
Abstract: The aim of this paper is to present in a unified framework a survey of some results related to Choquet Expected Utility (CEU) models, a promising class of models introduced separately by Quiggin [35], Yaari [48] and Schmeidler [40, 41] which allow one to separate attitudes towards uncertainty (or risk) from attitudes towards wealth, while respecting the first order stochastic dominance axiom.

Journal ArticleDOI
TL;DR: Computational results are reported indicating that the new lower bounds have advantages over previous bounds and can be used in a branch-and-bound type algorithm for the quadratic assignment problem.
Abstract: We investigate the classical Gilmore-Lawler lower bound for the quadratic assignment problem. We provide evidence of the difficulty of improving the Gilmore-Lawler bound and develop new bounds by means of optimal reduction schemes. Computational results are reported indicating that the new lower bounds have advantages over previous bounds and can be used in a branch-and-bound type algorithm for the quadratic assignment problem.
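
The Gilmore-Lawler bound itself is compact enough to sketch: for each candidate assignment i -> j, pair row i of the flow matrix with row j of the distance matrix at minimal scalar product (one sorted ascending, the other descending), then solve a linear assignment problem over the resulting cost matrix. The tiny symmetric instance below is illustrative.

```python
# A minimal sketch of the Gilmore-Lawler bound (GLB) for the QAP
# min_p sum_{i,k} F[i,k] * D[p(i),p(k)].
import numpy as np
from scipy.optimize import linear_sum_assignment

def glb(F, D):
    n = F.shape[0]
    L = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            f = np.sort(np.delete(F[i], i))          # ascending
            d = np.sort(np.delete(D[j], j))[::-1]    # descending
            L[i, j] = F[i, i] * D[j, j] + f @ d      # minimal scalar product
    rows, cols = linear_sum_assignment(L)
    return L[rows, cols].sum()

F = np.array([[0, 3, 1], [3, 0, 2], [1, 2, 0]])
D = np.array([[0, 2, 4], [2, 0, 1], [4, 1, 0]])
print(glb(F, D))   # a lower bound on the QAP optimum
```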

Journal ArticleDOI
TL;DR: In this article, the authors compare the exponential symmetric shortest queue system with two related systems: the shortest queuing system with threshold jockeying and the short queuing scheme with threshold blocking.
Abstract: In this paper we compare the exponential symmetric shortest queue system with two related systems: the shortest queue system with Threshold Jockeying and the shortest queue system with Threshold Blocking. The latter two systems are easier to analyse and are shown to give tight lower and upper bounds respectively for the mean waiting time in the shortest queue system. The approach also gives bounds for the distribution of the total number of jobs in the system.

Journal ArticleDOI
TL;DR: By using cuts defined by multistars, partial multistars and generalized subtour elimination constraints, this work is able to consistently solve 60-city problems to proven optimality and is currently attempting to solve problems involving a hundred cities.
Abstract: We present a branch-and-cut algorithm for the identical customer Vehicle Routing Problem. Transforming the problem into an equivalent Path-Partitioning Problem allows us to exploit its polyhedral structure and to generate strong cuts corresponding to facet-inducing inequalities. By using cuts defined by multistars, partial multistars and generalized subtour elimination constraints, we are able to consistently solve 60-city problems to proven optimality and are currently attempting to solve problems involving a hundred cities. We also present details of the computer implementation and our computational results.

Book ChapterDOI
TL;DR: New algorithms for immediate selection in job-shop problems are presented, using an $O(\max\{n \log n, f\})$-algorithm for fixing all disjunctions induced by cliques, based on concepts which are different from those used by Carlier and Pinson.
Abstract: In a job-shop scheduling problem we have $n$ jobs $J_1, \ldots, J_n$ to be processed on $m$ different machines $M_1, \ldots, M_m$. Each job $J_i$ consists of a number $n_i$ of operations $O_{i1}, \ldots, O_{in_i}$ which have to be processed in this order. Each operation $O_{ij}$ can be processed only by one machine $\mu_{ij}$ ($i = 1, \ldots, n$; $j = 1, \ldots, n_i$). Denote by $p_{ij}$ the corresponding processing time. We assume that all processing times are integer numbers.

Journal ArticleDOI
TL;DR: A survey of product form solutions of queueing networks with blocking and equivalence properties among different blocking network models is presented and relationships between open and closed product form queueing network models with different blocking mechanisms are examined.
Abstract: Queueing network models have been extensively used to represent and analyze resource sharing systems, such as production, communication and information systems. Queueing networks with blocking are used to represent systems with finite capacity resources and with resource constraints. Different blocking mechanisms have been defined and analyzed in the literature to represent distinct behaviors of real systems with limited resources. Exact product form solutions of queueing networks with blocking have been derived, under special constraints, for different blocking mechanisms. In this paper we present a survey of product form solutions of queueing networks with blocking and equivalence properties among different blocking network models. By using such equivalences we can extend product form solutions to queueing network models with different blocking mechanisms. The equivalence properties include relationships between open and closed product form queueing networks with different blocking mechanisms.

Journal ArticleDOI
TL;DR: The optimal scheduling problem in two queueing models arising in multihop radio networks with scheduled link activation is investigated and it is shown that the optimal policy activates the servers so that the maximum number of packets are served at each slot.
Abstract: The optimal scheduling problem in two queueing models arising in multihop radio networks with scheduled link activation is investigated. A tandem radio network is considered. Each node receives exogenous arriving packets which are stored in its unlimited capacity buffer. Links adjacent to the same node cannot transmit simultaneously because of radio interference constraints. The problem of link activation scheduling for minimum delay is studied for two different traffic types. In the first type all packets have a common destination that is one end-node of the tandem. In this case the system is modeled by a tandem queueing network with dependent servers. The server scheduling policy that minimizes the delay is obtained. In the second type of traffic, the destination of each packet is an immediate neighbor of the node at which the packet enters the network. In this case the system corresponds to a set of parallel queues with dependent servers. It is shown that the optimal policy activates the servers so that the maximum number of packets are served at each slot.
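
For the second traffic type, "serve the maximum number of packets each slot" is a clean combinatorial subproblem: links lie on a line, adjacent links interfere, and each active link serves one packet, so the activation set is a maximum-weight independent set on a path (weight 1 for each nonempty queue). A simple dynamic program solves it; the queue contents below are illustrative.

```python
# A minimal sketch of the per-slot activation choice for the
# parallel-queue model: pick non-adjacent links with nonempty queues
# so that the number of packets served this slot is maximal.
def best_activation(queues):
    n = len(queues)
    w = [1 if q > 0 else 0 for q in queues]
    take, skip = [0] * n, [0] * n   # best value with/without link i active
    for i in range(n):
        take[i] = w[i] + (skip[i - 1] if i else 0)
        skip[i] = max(take[i - 1], skip[i - 1]) if i else 0
    i, active = n - 1, []           # backtrack to recover the activation set
    while i >= 0:
        if take[i] >= skip[i]:
            active.append(i)
            i -= 2
        else:
            i -= 1
    return sorted(active)

print(best_activation([3, 0, 2, 1, 4]))  # activates links 0, 2, 4
```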

Journal ArticleDOI
TL;DR: The most important power indices are presented and the effectiveness of these indicators is discussed with reference to the description of political and financial events.
Abstract: The most important power indices are presented. The effectiveness of these indicators is discussed with reference to the description of political and financial events. Some recent studies and applications are shown.
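
Two of the classical indices can be computed by brute force for small weighted majority games [q; w_1, ..., w_n]: the normalised Banzhaf index counts coalitions in which a player is critical, and the Shapley-Shubik index counts orderings in which a player is pivotal. The example game below is illustrative.

```python
# A minimal sketch of the Banzhaf and Shapley-Shubik power indices.
from itertools import permutations

def banzhaf(weights, quota):
    n = len(weights)
    swings = [0] * n
    for mask in range(2 ** n):
        total = sum(w for i, w in enumerate(weights) if mask >> i & 1)
        for i in range(n):
            if mask >> i & 1 and total >= quota and total - weights[i] < quota:
                swings[i] += 1          # player i is critical in this coalition
    s = sum(swings)
    return [x / s for x in swings]

def shapley_shubik(weights, quota):
    n = len(weights)
    pivots = [0] * n
    for order in permutations(range(n)):
        total = 0
        for i in order:
            total += weights[i]
            if total >= quota:
                pivots[i] += 1          # player i turns the coalition winning
                break
    s = sum(pivots)
    return [x / s for x in pivots]

print(banzhaf([4, 2, 1], quota=4))        # the weight-4 player is a dictator
print(shapley_shubik([4, 2, 1], quota=4))
```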

Journal ArticleDOI
TL;DR: Three solution algorithms have been developed and tested: a simple greedy heuristic, a method based on simulated annealing (SA), and an exact algorithm based on Column Generation with Branch and Bound (CG); an LP-based method for generating tight lower bounds (LB) was also developed.
Abstract: In this paper we consider a class of bin selection and packing problems (BPP) in which potential bins are of various types, have two resource constraints, and the resource requirement for each object differs for each bin type. The problem is to select bins and assign the objects to bins so as to minimize the sum of bin costs while meeting the two resource constraints. This problem represents an extension of the classical two-dimensional BPP in which bins are homogeneous. Typical applications of this research include computer storage device selection with file assignment, robot selection with work station assignment, and computer processor selection with task assignment. Three solution algorithms have been developed and tested: a simple greedy heuristic, a method based on simulated annealing (SA) and an exact algorithm based on Column Generation with Branch and Bound (CG). An LP-based method for generating tight lower bounds (LB) was also developed. Several hundred test problems based on computer storage device selection and file assignment were generated and solved. The heuristic solved problems of up to 100 objects in less than a second; the average solution value was within about 3% of the optimum. SA improved solutions to an average gap of less than 1%, but with a significant increase in computing time. LB produced average lower bounds within 3% of optimum within a few seconds. CG is practical for small to moderately sized problems, possibly with as many as 50 objects.

Journal ArticleDOI
TL;DR: An algorithm for learning decision trees for classification and prediction is described which converts real-valued attributes into intervals using statistical considerations, and some applications are described, especially the task of predicting the high water level in a mountain river.
Abstract: An algorithm for learning decision trees for classification and prediction is described which converts real-valued attributes into intervals using statistical considerations. The trees are automatically pruned with the help of a threshold for the estimated class probabilities in an interval. By means of this threshold the user can control the complexity of the tree, i.e. the degree of approximation of class regions in feature space. Costs can be included in the learning phase if a cost matrix is given. In this case class dependent thresholds are used. Some applications are described, especially the task of predicting the high water level in a mountain river.

Journal ArticleDOI
R. A. Bowman1
TL;DR: A heuristic using the gradient estimators is developed and shown to give close to locally optimal performance relatively quickly, and an attempt is made to characterize the amount of variability in networks that would warrant the use of this heuristic.
Abstract: Infinitesimal perturbation analysis and score function gradient estimators are developed for PERT (Program Evaluation and Review Technique) networks and analyzed for potential use for prescriptive project management. In particular, a stochastic version of the familiar deterministic “project crashing” problem is considered. A heuristic using the gradient estimators is developed and shown to give close to locally optimal performance relatively quickly. An attempt is made to characterize the amount of variability in networks that would warrant the use of this heuristic by comparing its performance experimentally to that of the standard linear programming approach using only the task means (ignoring variability).
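
The IPA idea for PERT networks has a simple form: when a task's duration shifts with a location parameter, the derivative of the makespan with respect to that parameter is 1 exactly when the task lies on the critical path, so averaging criticality indicators over replications estimates the gradient. The sketch below does this on a tiny four-task network; the topology and exponential durations are illustrative assumptions.

```python
# A minimal sketch of an IPA-style gradient estimator for a stochastic
# PERT network: gradient component for task t ~ P(t is critical).
import random

preds = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
means = {"a": 2.0, "b": 3.0, "c": 4.0, "d": 1.0}
order = ["a", "b", "c", "d"]          # a topological order

def one_replication(rng):
    dur = {t: rng.expovariate(1.0 / means[t]) for t in order}
    finish, crit_pred = {}, {}
    for t in order:
        start = max((finish[p] for p in preds[t]), default=0.0)
        crit_pred[t] = max(preds[t], key=lambda p: finish[p], default=None)
        finish[t] = start + dur[t]
    critical, t = set(), order[-1]    # walk back along the critical path
    while t is not None:
        critical.add(t)
        t = crit_pred[t]
    return critical

rng, n = random.Random(0), 10_000
grad = {t: 0 for t in order}
for _ in range(n):
    for t in one_replication(rng):
        grad[t] += 1
print({t: g / n for t, g in grad.items()})   # criticality = IPA gradient
```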

Journal ArticleDOI
TL;DR: A heuristic algorithm embedded in a methodology to evaluate the location of 85 public schools, among 389 possible vertices, in the metropolitan area of Rio de Janeiro is reported, confirming the conjecture of poor location and the procedure was able to identify several micro-regions simply void of schools.
Abstract: This paper initially proposes a heuristic algorithm for thep-median problem designed for large weighted graphs. The problem is approached through the construction ofp trees whose shapes are progressively modified according to successive tests over the stability of their roots and vertices. The algorithm seems promising because: (i) on a regular PC it can handle problems of the order of 500 vertices, while the mainframe version goes indefinitely further, (ii) contrary to what normally would be expected, execution times seem to be inversely proportional top, and even for large problems, they may be reasonable, especially ifp is large relative to the number of vertices, and (iii) it produces solutions of good quality and in most of the cases studied, it outperforms the traditional heuristic of Teitz and Bart. A real application of the algorithm embedded in a methodology to evaluate the location of 85 public schools, among 389 possible vertices, in the metropolitan area of Rio de Janeiro is reported. Results confirmed the conjecture of poor location and the procedure was able to identify several micro-regions simply void of schools. The methodology is being well received by the education authorities and its extension to the whole metropolitan area is being considered.

Journal ArticleDOI
TL;DR: This paper shows how the reference point method can be modeled within the Goal Programming methodology, and shows how Goal Programming with relaxation of some traditional assumptions can be extended to a multiobjective optimization technique meeting the efficiency principle.
Abstract: Real-life decision problems are usually so complex they cannot be modeled with a single objective function, thus creating a need for clear and efficient techniques of handling multiple criteria to support the decision process. The most commonly used technique is Goal Programming. It is clear and appealing, but in the case of multiobjective optimization problems strongly criticized due to its noncompliance with the efficiency (Pareto-optimality) principle. On the other hand, the reference point method, although using similar control parameters as Goal Programming, always generates efficient solutions. In this paper, we show how the reference point method can be modeled within the Goal Programming methodology. It allows us to simplify implementations of the reference point method as well as shows how Goal Programming with relaxation of some traditional assumptions can be extended to a multiobjective optimization technique meeting the efficiency principle.
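
In common textbook notation (not necessarily the authors' own formulation), the link works through an augmented Chebyshev achievement function built from the same deviations Goal Programming uses, plus a small regularising term that enforces efficiency:

```latex
% A sketch of the reference point method as an augmented
% goal-programming model, under assumed standard notation.
\[
  \min_{x \in X} \;
  \max_{1 \le i \le k} \lambda_i \bigl(r_i - f_i(x)\bigr)
  \;+\; \varepsilon \sum_{i=1}^{k} \lambda_i \bigl(r_i - f_i(x)\bigr)
\]
% Here the f_i are maximised objectives, the r_i are reference
% (aspiration) levels, the lambda_i > 0 are scaling weights, and
% epsilon > 0 is small.  Without the epsilon-term, the max alone is an
% ordinary weighted Chebyshev goal-programming model, which may return
% only weakly efficient points.
```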

Journal ArticleDOI
TL;DR: A criterion for the validity of a geometric product form equilibrium distribution is given for these extended networks, in which customer arrivals to the network, or the transfer of a single positive customer between queues, may trigger the creation of a batch of negative customers at the destination queue.
Abstract: Gelenbe et al. [1, 2] consider single server Jackson networks of queues which contain both positive and negative customers. A negative customer arriving to a nonempty queue causes the number of customers in that queue to decrease by one, and has no effect on an empty queue, whereas a positive customer arriving at a queue will always increase the queue length by one. Gelenbe et al. show that a geometric product form equilibrium distribution prevails for this network. Applications for these types of networks can be found in systems incorporating resource allocations and in the modelling of decision making algorithms, neural networks and communications protocols.
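
The geometric product form for such networks rests on traffic equations that are solved by fixed-point iteration: each queue's utilisation is q_i = λ⁺_i / (μ_i + λ⁻_i), where the positive and negative arrival rates depend on the other q_j through the routing. The two-queue instance and rates below are illustrative assumptions, not a model from the papers cited.

```python
# A minimal sketch of the G-network traffic equations solved by
# fixed-point iteration.
import numpy as np

mu = np.array([2.0, 3.0])        # service rates
Lam = np.array([1.0, 0.5])       # external positive arrival rates
P_plus = np.array([[0.0, 0.5],   # routing of departures as positive customers
                   [0.0, 0.0]])
P_minus = np.array([[0.0, 0.3],  # routing of departures as negative customers
                    [0.0, 0.0]])

q = np.zeros(2)
for _ in range(100):
    lam_plus = Lam + (q * mu) @ P_plus
    lam_minus = (q * mu) @ P_minus
    q = lam_plus / (mu + lam_minus)
print(q)    # equilibrium distribution is the product of (1 - q_i) * q_i**n_i
```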

Journal ArticleDOI
TL;DR: It is shown that Evolutionary Algorithms are also more robust optimisers when only a few simulations of each trial solution are performed, a characteristic which may be used to reduce the generally higher CPU-requirements of population-based search methods like EA as opposed to point-based traditional optimisation techniques.
Abstract: Evolutionary Algorithms are robust search methods that mimic basic principles of evolution. We discuss different combinations of Evolutionary Algorithms and the versatile simulation method, resulting in powerful tools not only for complex decision situations but also for explanatory models. Realised and suggested applications from the domains of management and economics demonstrate the relevance of this approach. In a practical example three EA-variants produce better results than two conventional methods when optimising the decision variables of a stochastic inventory simulation. We show that EA are also more robust optimisers when only a few simulations of each trial solution are performed. This characteristic may be used to reduce the generally higher CPU-requirements of population-based search methods like EA as opposed to point-based traditional optimisation techniques.
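
The kind of combination described above is easy to sketch: a small (mu + lambda) evolution strategy whose fitness values come from only a few noisy simulation replications per trial solution. The quadratic "cost simulation" and all parameters below are illustrative stand-ins, not the paper's inventory model.

```python
# A minimal sketch of a (mu + lambda) evolutionary algorithm optimising
# a noisy simulated objective with few replications per evaluation.
import random

rng = random.Random(42)

def simulate_cost(x, n_reps=3):
    """Noisy evaluation of a quadratic cost with optimum near x = 10."""
    return sum((x - 10.0) ** 2 + rng.gauss(0.0, 5.0) for _ in range(n_reps)) / n_reps

mu, lam, sigma = 5, 15, 1.0
pop = [rng.uniform(0.0, 20.0) for _ in range(mu)]
for gen in range(50):
    offspring = [p + rng.gauss(0.0, sigma) for p in rng.choices(pop, k=lam)]
    pop = sorted(pop + offspring, key=simulate_cost)[:mu]   # (mu + lambda) selection
print(pop[0])   # close to 10 despite the simulation noise
```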

Journal ArticleDOI
TL;DR: An overview of the five most commonly used statistical techniques for improving the efficiency of stochastic simulations: control variates, common random numbers, importance sampling, conditional Monte Carlo, and stratification is provided.
Abstract: This paper provides an overview of the five most commonly used statistical techniques for improving the efficiency of stochastic simulations: control variates, common random numbers, importance sampling, conditional Monte Carlo, and stratification. The paper also describes a mathematical framework for discussion of efficiency issues that quantifies the trade-off between lower variance and higher computational time per observation.
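
One of the five techniques, control variates, fits in a few lines: to estimate E[f(X)], subtract b(C − E[C]) for a control C with known mean, choosing b = Cov(f, C)/Var(C). Estimating E[e^U] for U ~ U(0,1), with U itself as the control, is a standard toy example used here for illustration.

```python
# A minimal sketch of the control-variates technique (Python 3.10+
# for statistics.covariance).
import random, statistics, math

rng = random.Random(7)
n = 100_000
u = [rng.random() for _ in range(n)]
y = [math.exp(x) for x in u]                 # target: E[e^U] = e - 1

b = statistics.covariance(y, u) / statistics.variance(u)
cv = [yi - b * (ui - 0.5) for yi, ui in zip(y, u)]   # controlled observations

print(statistics.mean(y), statistics.stdev(y) / n ** 0.5)
print(statistics.mean(cv), statistics.stdev(cv) / n ** 0.5)  # smaller std. error
```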

Journal ArticleDOI
TL;DR: It is shown that on the sets of large test problems, the quality of the solution found by VDSH exceeds that of the leading heuristic by an average of over twenty percent, while maintaining acceptable solution times.
Abstract: The Generalized Assignment Problem, in the class of NP-hard problems, occurs in a wide range of applications: vehicle packing, computers, and logistics, to name only a few. Previous research has concentrated on optimization methodologies for the GAP. Because the Generalized Assignment Problem is NP-hard, optimization methods tend to require large computation times for large-scale problems. This paper presents a new heuristic, the Variable-Depth-Search Heuristic (VDSH). We show that on the sets of large test problems, the quality of the solution found by VDSH exceeds that of the leading heuristic by an average of over twenty percent, while maintaining acceptable solution times. On difficult problem instances, VDSH provides solutions having costs 140% less than those found by the leading heuristic. A duality gap analysis of VDSH demonstrates the robustness of our heuristic.

Journal ArticleDOI
TL;DR: This paper presents a new approach for computing response time distributions using a variation of stochastic Petri nets called stochastic reward nets (SRNs), and examines the effects of changing the service discipline and the service time distribution at a queueing center on the response time distribution.
Abstract: We consider the numerical computation of response time distributions for closed product form queueing networks using the tagged customer approach. We map this problem on to the computation of the time to absorption distribution of a finite-state continuous time Markov chain. The construction and solution of these Markov chains is carried out using a variation of stochastic Petri nets called stochastic reward nets (SRNs). We examine the effects of changing the service discipline and the service time distribution at a queueing center on the response time distribution. A multiserver queueing network example is also presented. While the tagged customer approach for computing the response time distribution is not new, this paper presents a new approach for computing the response time distributions using SRNs.
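
The underlying computation is the phase-type identity: for a CTMC whose generator is partitioned so that T is the sub-generator over the transient states, the time to absorption from initial law alpha satisfies P(T_abs ≤ t) = 1 − alpha · exp(Tt) · 1. The sketch below evaluates this with a matrix exponential; the three-state chain is illustrative, not a model from the paper.

```python
# A minimal sketch of a time-to-absorption distribution for a CTMC.
import numpy as np
from scipy.linalg import expm

T = np.array([[-3.0, 2.0, 0.0],
              [1.0, -2.0, 1.0],
              [0.0, 1.0, -2.0]])   # transient sub-generator; row deficits
                                   # are the rates into the absorbing state
alpha = np.array([1.0, 0.0, 0.0])  # start in transient state 0

for t in (0.5, 1.0, 2.0, 4.0):
    p_absorbed = 1.0 - alpha @ expm(T * t) @ np.ones(3)
    print(t, p_absorbed)           # CDF of the absorption (response) time
```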