
Showing papers in "ACM Transactions on Modeling and Computer Simulation in 2006"


Journal ArticleDOI
TL;DR: The goal is to contribute to the larger simulation community the authors' accumulated experiences from developing several implementations of an agent-based simulation toolkit, in the hope that ongoing architecture standards efforts will benefit from this knowledge and use it to produce architecture standards with increased robustness.
Abstract: Many agent-based modeling and simulation researchers and practitioners have called for varying levels of simulation interoperability ranging from shared software architectures to common agent communications languages. These calls have been at least partially answered by several specifications and technologies. In fact, Tanenbaum [1988] has remarked that the “nice thing about standards is that there are so many to choose from.” Tanenbaum goes on to say that “if you do not like any of them, you can just wait for next year's model.” This article does not seek to introduce next year's model. Rather, the goal is to contribute to the larger simulation community the authors' accumulated experiences from developing several implementations of an agent-based simulation toolkit. As such, this article focuses on the implementation of simulation architectures rather than agent communications languages. It is hoped that ongoing architecture standards efforts will benefit from this new knowledge and use it to produce architecture standards with increased robustness.

696 citations


Journal ArticleDOI
TL;DR: In this article, the authors provide guidelines for achieving high efficiency in a simulation with RESTART, focusing on the choice of the importance function, that is, the function of the system state that determines when retrials are made.
Abstract: RESTART (Repetitive Simulation Trials After Reaching Thresholds) is a widely applicable accelerated simulation technique that allows the evaluation of extremely low probabilities. The focus of this article is on providing guidelines for achieving high efficiency in a simulation with RESTART. Emphasis is placed on the choice of the importance function, that is, the function of the system state that determines when retrials are made. A heuristic approach, shown to be effective for some systems, is proposed for this choice. A two-queue tandem network is used to illustrate the efficiency achieved following these guidelines. The importance function chosen in this example shows that an appropriate choice leads to an efficient simulation of a system with a multidimensional state space. Also presented are sufficient conditions for achieving asymptotic efficiency, and it is shown that they are not very restrictive in RESTART simulation.
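To make the mechanics concrete, here is a minimal multilevel-splitting sketch in the spirit of RESTART, assuming a toy M/M/1 overflow problem with the queue length as the importance function and a threshold at every level; the model, parameters, and fixed-effort staging are illustrative choices, not taken from the article.

```python
import random

def hits_next_level(start, target, p_up, rng):
    """Run the embedded M/M/1 jump chain from `start` until it reaches
    `target` (success) or empties to 0 (failure)."""
    x = start
    while 0 < x < target:
        x += 1 if rng.random() < p_up else -1
    return x == target

def splitting_estimate(L, lam, mu, n_per_stage, seed=0):
    """Multilevel splitting with importance function Phi(x) = queue length
    and thresholds 1, 2, ..., L: the overflow probability factors into
    per-threshold passage probabilities."""
    rng = random.Random(seed)
    p_up = lam / (lam + mu)        # up-step probability of the jump chain
    prob = 1.0
    for m in range(1, L):
        hits = sum(hits_next_level(m, m + 1, p_up, rng)
                   for _ in range(n_per_stage))
        if hits == 0:
            return 0.0
        prob *= hits / n_per_stage
    return prob

lam, mu, L = 1.0, 2.0, 20
est = splitting_estimate(L, lam, mu, n_per_stage=10_000)
exact = (1 - mu / lam) / (1 - (mu / lam) ** L)   # gambler's-ruin check
print(f"splitting estimate = {est:.3e}, exact = {exact:.3e}")
```

A naive estimator would need on the order of a million runs per observed overflow here; retrials launched from already-crossed thresholds concentrate the simulation effort where it matters.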

59 citations


Journal ArticleDOI
TL;DR: This work investigates the simulation of overflow of the total population of a Markovian two-node tandem queue during a busy cycle, using importance sampling with a state-independent change of measure, and classifies the model's parameter space into regions of asymptotic efficiency, exponential growth of the relative error, and infinite variance.
Abstract: We investigate the simulation of overflow of the total population of a Markovian two-node tandem queue model during a busy cycle, using importance sampling with a state-independent change of measure. We show that the only such change of measure that may possibly result in asymptotically efficient simulation for large overflow levels is exchanging the arrival rate with the smallest service rate. For this change of measure, we classify the model's parameter space into regions of asymptotic efficiency, exponential growth of the relative error, and infinite variance, using both analytical and numerical techniques.
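A minimal sketch of this change of measure on the embedded jump chain, assuming illustrative rates: each transition sampled under the twisted measure contributes a factor to the likelihood ratio, and an overflow during the busy cycle is counted with weight equal to the accumulated ratio.

```python
import random

def overflow_is(lam, mu1, mu2, N, n_runs, seed=0):
    """Importance-sampling estimate of P(total population reaches N before
    the system empties) for a Markovian two-node tandem queue, simulated
    on its embedded jump chain over one busy cycle."""
    # state-independent change of measure: exchange the arrival rate
    # with the smallest service rate
    if mu1 <= mu2:
        tlam, tmu1, tmu2 = mu1, lam, mu2
    else:
        tlam, tmu1, tmu2 = mu2, mu1, lam
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_runs):
        q1, q2, w = 1, 0, 1.0          # cycle starts with one arrival
        while 0 < q1 + q2 < N:
            orig = [(lam, "arr")]      # enabled transitions, original rates
            twst = [(tlam, "arr")]     # the same transitions, twisted rates
            if q1 > 0:
                orig.append((mu1, "s1")); twst.append((tmu1, "s1"))
            if q2 > 0:
                orig.append((mu2, "s2")); twst.append((tmu2, "s2"))
            tot_o = sum(r for r, _ in orig)
            tot_t = sum(r for r, _ in twst)
            u = rng.random() * tot_t   # sample a jump under the twist
            idx = 0
            while idx < len(twst) - 1 and u >= twst[idx][0]:
                u -= twst[idx][0]
                idx += 1
            rt, ev = twst[idx]
            w *= (orig[idx][0] / tot_o) / (rt / tot_t)  # likelihood ratio
            if ev == "arr":
                q1 += 1
            elif ev == "s1":
                q1 -= 1; q2 += 1
            else:
                q2 -= 1
        if q1 + q2 == N:
            acc += w
    return acc / n_runs

print(overflow_is(lam=1.0, mu1=2.0, mu2=3.0, N=25, n_runs=50_000))
```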

58 citations


Journal ArticleDOI
TL;DR: Asymptotic approximations are used to obtain closed-form results for sampling plans for a wide class of stochastic models, including situations where the mean performance is unknown but estimated with simulation.
Abstract: The design of many production and service systems is informed by stochastic model analysis. But the parameters of statistical distributions of stochastic models are rarely known with certainty, and are often estimated from field data. Even if the mean system performance is a known function of the model's parameters, there may still be uncertainty about the mean performance because the parameters are not known precisely. Several methods have been proposed to quantify this uncertainty, but data sampling plans have not yet been provided to reduce parameter uncertainty in a way that effectively reduces uncertainty about mean performance. The optimal solution is challenging, so we use asymptotic approximations to obtain closed-form results for sampling plans. The results apply to a wide class of stochastic models, including situations where the mean performance is unknown but estimated with simulation. Analytical and empirical results for the M/M/1 queue, a quadratic response-surface model, and a simulated critical care facility illustrate the ideas.
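The article derives asymptotic sampling plans; the sketch below only illustrates the phenomenon those plans target, assuming hypothetical M/M/1 field data: bootstrapping the input data shows how parameter uncertainty propagates into uncertainty about mean performance even when the performance formula is known exactly.

```python
import random
import statistics

def mm1_mean_number(lam, mu):
    """Steady-state mean number in an M/M/1 queue: rho / (1 - rho)."""
    rho = lam / mu
    return rho / (1 - rho)

rng = random.Random(1)
# hypothetical field data: interarrival and service times
inter = [rng.expovariate(0.8) for _ in range(200)]
serv = [rng.expovariate(1.0) for _ in range(200)]

# resample the data to see how estimation error in (lam, mu) spreads
# into the performance measure
reps = []
for _ in range(1000):
    lam_hat = 1 / statistics.mean(rng.choices(inter, k=len(inter)))
    mu_hat = 1 / statistics.mean(rng.choices(serv, k=len(serv)))
    if lam_hat < mu_hat:               # keep only stable resamples
        reps.append(mm1_mean_number(lam_hat, mu_hat))

point = mm1_mean_number(1 / statistics.mean(inter), 1 / statistics.mean(serv))
print(f"point estimate of mean number in system: {point:.2f}")
print(f"bootstrap sd from parameter uncertainty: {statistics.stdev(reps):.2f}")
```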

55 citations


Journal ArticleDOI
TL;DR: This work presents the SMARTS framework, which prescribes a statistically sound procedure for configuring a systematic sampling simulation run to achieve a desired quantifiable confidence in estimates; the implementation achieves an average error of only 0.64% on CPI.
Abstract: Current software-based microarchitecture simulators are many orders of magnitude slower than the hardware they simulate. Hence, most microarchitecture design studies draw their conclusions from drastically truncated benchmark simulations that are often inaccurate and misleading. This article presents the Sampling Microarchitecture Simulation (SMARTS) framework as an approach to enable fast and accurate performance measurements of full-length benchmarks. SMARTS accelerates simulation by selectively measuring in detail only an appropriate benchmark subset. SMARTS prescribes a statistically sound procedure for configuring a systematic sampling simulation run to achieve a desired quantifiable confidence in estimates. Analysis of the SPEC CPU2000 benchmark suite shows that CPI and energy per instruction (EPI) can be estimated to within ±3% with 99.7% confidence by measuring fewer than 50 million instructions per benchmark. In practice, inaccuracy in microarchitectural state initialization introduces an additional uncertainty which we empirically bound to ∼2% for the tested benchmarks. Our implementation of SMARTS achieves an actual average error of only 0.64% on CPI and 0.59% on EPI for the tested benchmarks, running with average speedups of 35 and 60 over detailed simulation of 8-way and 16-way out-of-order processors, respectively.
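A minimal sketch of the statistical machinery, assuming a synthetic per-instruction CPI trace in place of detailed simulation: systematic sampling of small measurement units yields a point estimate, a confidence interval, and the number of units needed for a target error, mirroring the ±3% at 99.7% confidence calculation above.

```python
import math
import random
import statistics

def systematic_sample(trace, n_units, unit_size):
    """Average CPI over n_units measurement units of unit_size
    instructions each, spaced evenly across the trace."""
    stride = len(trace) // n_units
    return [statistics.mean(trace[i * stride:i * stride + unit_size])
            for i in range(n_units)]

# synthetic per-instruction CPI values standing in for a real benchmark
rng = random.Random(0)
trace = [1.0 + 0.5 * math.sin(i / 5000) + rng.gauss(0, 0.3)
         for i in range(1_000_000)]

units = systematic_sample(trace, n_units=1000, unit_size=100)
mean = statistics.mean(units)
half = 3 * statistics.stdev(units) / math.sqrt(len(units))  # z = 3, ~99.7%
print(f"estimated CPI = {mean:.3f} +/- {half:.3f}")

# units needed for +/-3% relative error at 99.7% confidence
cv = statistics.stdev(units) / mean
print("required sampling units ~", math.ceil((3 * cv / 0.03) ** 2))
```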

44 citations


Journal ArticleDOI
TL;DR: The technique of truck platooning is employed in the ACTIPOT system in order to simplify the control of the overall system and to minimize the possibility of deadlocks, congestion, and failures.
Abstract: In this article we propose a new concept called automated container transportation system between inland port and terminals (ACTIPOT) which involves the use of automated trucks to transfer containers between an inland port and container terminals. The inland port is located a few miles away from the terminals and is used for storing and processing import/export containers before distribution to customers or transfer to the terminals. We design and analyze the ACTIPOT system with particular attention paid to the overall supervisory controller that synchronizes all the operations inside the ACTIPOT system. We employ the technique of truck platooning in order to simplify the control of the overall system and to minimize the possibility of deadlocks, congestion, and failures. A microscopic simulation model is developed and used to demonstrate the overall performance of the ACTIPOT system. The contribution of this article is the design, analysis, and evaluation of the new concept ACTIPOT.

37 citations


Journal ArticleDOI
TL;DR: A general result is presented that guarantees global convergence with probability one for a broad class of simulation optimization algorithms applied to problems with a countably infinite number of feasible points.
Abstract: This article is concerned with proving the almost sure and global convergence of a broad class of algorithms for solving simulation optimization problems with a countably infinite number of feasible points. We first describe the class of simulation optimization algorithms under consideration and discuss how the estimate of the optimal solution should be chosen when the feasible region of the underlying optimization problem is countably infinite. Then, we present a general result that guarantees the global convergence with probability one of the simulation optimization algorithms in this class. The assumptions of this result are sufficiently weak to allow the algorithms under consideration to be efficient, in that they are not required either to allocate the same amount of computer effort to all the feasible points they visit or to spend an increasing amount of computer effort per iteration as the number of iterations grows. This article concludes with a discussion of how our assumptions can be satisfied and also generalized.

34 citations


Journal ArticleDOI
TL;DR: A queue fed by a large number n of independent discrete-time Gaussian processes with stationary increments is studied, and the many-sources asymptotic regime is considered, that is, the buffer-exceedance threshold B and the service capacity C are scaled by the number of sources.
Abstract: In this article, we study a queue fed by a large number n of independent discrete-time Gaussian processes with stationary increments. We consider the many-sources asymptotic regime, that is, the buffer-exceedance threshold B and the service capacity C are scaled by the number of sources (B ≡ nb and C ≡ nc). We discuss four methods for simulating the steady-state probability that the buffer threshold is exceeded: the single-twist method (suggested by large deviation theory), the cut-and-twist method (simulating timeslot by timeslot), the random-twist method (the twist is sampled from a discrete distribution), and the sequential-twist method (simulating source by source). The asymptotic efficiency of these four methods is analytically investigated for n → ∞. A necessary and sufficient condition is derived for the efficiency of the single-twist method, indicating that it is nearly always asymptotically inefficient. The other three methods, however, are asymptotically efficient. We numerically evaluate the four methods by performing a detailed simulation study where it is our main objective to compare the three efficient methods in practical situations.
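For intuition about the single-twist method, here is a minimal sketch in the simplest possible setting, not the article's: iid N(0,1) sources and a single time slot, so the exceedance probability is known in closed form and the exponential twist simply shifts the sampling mean onto the rare set.

```python
import math
import random

def single_twist_prob(n, a, n_runs, seed=0):
    """Estimate P(X1 + ... + Xn >= n*a) for iid N(0,1) sources using one
    exponential twist theta = a, which shifts each source mean onto the
    rare set; each hit gets weight exp(-theta*S + n*theta**2/2)."""
    rng = random.Random(seed)
    theta = a
    acc = 0.0
    for _ in range(n_runs):
        s = sum(rng.gauss(theta, 1.0) for _ in range(n))
        if s >= n * a:
            acc += math.exp(-theta * s + n * theta * theta / 2)
    return acc / n_runs

print(single_twist_prob(n=100, a=0.3, n_runs=10_000))
# exact value for comparison: P(N(0,1) >= a*sqrt(n))
print(0.5 * math.erfc(0.3 * math.sqrt(100) / math.sqrt(2)))
```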

27 citations


Journal ArticleDOI
TL;DR: Control variates can be much more statistically efficient than sample means, leading to R&S procedures that are correspondingly more efficient; the related problem of estimating the expected value of the best (as opposed to the selected) system design is also considered.
Abstract: Ranking and selection procedures (R&S) were developed by statisticians to search for the best among a small collection of populations or treatments, where the “best” treatment is typically the one with the largest or smallest expected (long-run average) response. R&S procedures have been successfully extended to address situations that are encountered in stochastic simulation of alternative system designs, including unequal variances across alternatives, dependence both within the output of each system and across the outputs from alternative systems, and large numbers of alternatives to compare. In nearly all cases the estimator of the expected response is a (perhaps generalized) sample mean of the output of interest. In this article we derive R&S procedures that employ control-variate estimators instead of sample means. Control variates can be much more statistically efficient than sample means, leading to R&S procedures that are correspondingly more efficient. We also consider the related problem of estimating the expected value of the best (as opposed to the selected) system design.
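A minimal sketch of the control-variate point estimator itself, assuming a toy output Y correlated with a control X whose mean is known exactly (E[X] = 1); the article's procedures substitute this kind of estimator for the sample mean inside R&S.

```python
import random
import statistics

def cv_estimate(ys, xs, x_mean):
    """Control-variate estimator Y_bar - beta*(X_bar - E[X]), with the
    variance-minimizing beta = Cov(X, Y)/Var(X) estimated from the data."""
    n = len(ys)
    ybar, xbar = statistics.mean(ys), statistics.mean(xs)
    cov = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / (n - 1)
    beta = cov / statistics.variance(xs)
    return ybar - beta * (xbar - x_mean)

# toy system: output strongly correlated with service times X, E[X] = 1
rng = random.Random(2)
xs = [rng.expovariate(1.0) for _ in range(1000)]
ys = [2 * x + rng.gauss(0, 0.5) for x in xs]   # true mean of Y is 2

print("sample mean :", round(statistics.mean(ys), 3))
print("CV estimator:", round(cv_estimate(ys, xs, 1.0), 3))
```

Because the control soaks up most of Y's variability, the CV estimator's error is driven mainly by the residual noise rather than the exponential variation in X.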

24 citations


Journal ArticleDOI
TL;DR: A discrete event interpretation of the finite difference time domain (FDTD) and digital wave guide network (DWN) wave simulation schemes is described, formalized using the discrete event system specification (DEVS).
Abstract: This article describes a discrete event interpretation of the finite difference time domain (FDTD) and digital wave guide network (DWN) wave simulation schemes. The discrete event method is formalized using the discrete event system specification (DEVS). The scheme is shown to have errors that are proportional to the resolution of the spatial grid. A numerical example demonstrates the relative efficiency of the scheme with respect to FDTD and DWN schemes. The potential for the discrete event scheme to reduce numerical dispersion and attenuation errors is discussed.

21 citations


Journal ArticleDOI
TL;DR: In this article, the authors examine the effect of a heavy-tailed file-size distribution whose corresponding density follows an inverse power law with exponent α + 1, where the shape parameter α is strictly between 1 and 2.
Abstract: For statistical inference based on telecommunications network simulation, we examine the effect of a heavy-tailed file-size distribution whose corresponding density follows an inverse power law with exponent α + 1, where the shape parameter α is strictly between 1 and 2. Representing the session-initiation and file-transmission processes as an infinite-server queueing system with Poisson arrivals, we derive the transient conditional mean and covariance function that describes the number of active sessions as well as the steady-state counterparts of these moments. Assuming the file size (service time) for each session follows the Lomax distribution, we show that the variance of the sample mean for the time-averaged number of active sessions tends to zero as the power of 1 − α of the simulation run length. Therefore, impractically large sample-path lengths are required to achieve point estimators with acceptable levels of statistical accuracy. This study compares the accuracy of point estimators based on the Lomax distribution with those for lognormal and Weibull file-size distributions whose parameters are determined by matching their means and a selected extreme quantile with those of the Lomax. Both alternatives require shorter run lengths than the Lomax to achieve a given level of accuracy. Although the lognormal requires longer sample paths than the Weibull, it better approximates the Lomax and leads to practicable run lengths in almost all scenarios.
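A minimal sketch of the effect, assuming an inverse-transform Lomax generator with an illustrative scale of 1: for shape α strictly between 1 and 2 the variance is infinite, and the running sample mean settles far more slowly than the usual square-root-of-n rate.

```python
import random
import statistics

def lomax(rng, alpha, lam=1.0):
    """Inverse-transform draw from a Lomax distribution with
    F(x) = 1 - (1 + x/lam)**(-alpha), mean lam/(alpha - 1) for alpha > 1."""
    u = rng.random()
    return lam * ((1 - u) ** (-1 / alpha) - 1)

rng = random.Random(3)
alpha = 1.5                    # shape strictly between 1 and 2
true_mean = 1 / (alpha - 1)    # = 2 for scale 1

for n in (10**3, 10**5, 10**6):
    xs = [lomax(rng, alpha) for _ in range(n)]
    print(f"n = {n:>7}: sample mean = {statistics.mean(xs):.3f} "
          f"(true mean = {true_mean})")
```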

Journal ArticleDOI
TL;DR: This work uses Propp and Wilson's CFTP algorithm and Wilson's ROCFTP algorithm to construct perfect samplers for several queueing and network models, giving effective ways to generate random samples from the steady-state distributions of these queues.
Abstract: We review Propp and Wilson's [1996] CFTP algorithm and Wilson's [2000] ROCFTP algorithm. We then use these to construct perfect samplers for several queueing and network models: Poisson arrivals and exponential service times, several types of customers, and a trunk reservation protocol for accepting new customers; a similar protocol on a network switching model; a queue with a general arrival process; and a queue with both general arrivals and service times. Our samplers give effective ways to generate random samples from the steady-state distributions of these queues.
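A minimal sketch of Propp and Wilson's monotone CFTP on a toy finite birth-death chain (not the article's trunk-reservation or network models): coupled chains started from the minimal and maximal states are run from ever further in the past, reusing the same random numbers, until they coalesce by time 0, at which point the common value is an exact steady-state draw.

```python
import random

def update(x, u, p_up, K):
    """Monotone update rule for a birth-death chain on {0, ..., K},
    driven by a shared uniform random number u."""
    return min(x + 1, K) if u < p_up else max(x - 1, 0)

def cftp(p_up, K, seed=0):
    """Coupling from the past: double the look-back horizon T until the
    chains started at 0 and K coalesce at time 0."""
    rng = random.Random(seed)
    us = []                        # us[t] drives the step ending at time -t
    T = 1
    while True:
        while len(us) < T:
            us.append(rng.random())
        lo, hi = 0, K
        for t in range(T - 1, -1, -1):   # from time -T up to time 0
            lo = update(lo, us[t], p_up, K)
            hi = update(hi, us[t], p_up, K)
        if lo == hi:
            return lo                    # exact sample, no burn-in bias
        T *= 2

samples = [cftp(p_up=0.4, K=10, seed=s) for s in range(2000)]
print("mean of perfect samples:", sum(samples) / len(samples))
```

Reusing the same random numbers as T grows is essential; redrawing them on each restart would bias the sample.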

Journal ArticleDOI
TL;DR: Analytical results are given for the mean and variance of the RBM estimator for a class of processes having initial transients with an additive form, suggesting that in some cases the RBM estimator is a good compromise choice with respect to bias and variance.
Abstract: Independent replications (IR) and batch means (BM) are two of the most widely used variance-estimation methods for simulation output analysis. Alexopoulos and Goldsman conducted a thorough examination of IR and BM, and Andradottir and Argon proposed the method of replicated batch means (RBM), which combines good characteristics of IR and BM. This article gives analytical results for the mean and variance of the RBM estimator for a class of processes having initial transients with an additive form. Along the way, we provide succinct complementary extensions of some of the results in the aforementioned papers. Our expressions explicitly show how the transient function affects estimator performance and suggest that in some cases, the RBM estimator is a good compromise choice with respect to bias and variance. However, care must be taken to avoid an excessive number of replications when the transient function is pervasive. An example involving a simple moving average process illustrates our findings.
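A minimal sketch of the RBM variance estimator, assuming a toy AR(1) output process with an additive initial transient of the kind the article analyzes; the replication count, batch count, and transient term are illustrative.

```python
import random
import statistics

def rbm_variance(reps, n_batches):
    """Replicated batch means: pool the batch means from every replication
    and scale their sample variance by the batch size to estimate the
    variance parameter of the process."""
    size = len(reps[0]) // n_batches
    batch_means = [statistics.mean(path[j * size:(j + 1) * size])
                   for path in reps for j in range(n_batches)]
    return size * statistics.variance(batch_means)

rng = random.Random(4)

def make_path(n):
    """AR(1) output plus an additive transient that decays with time."""
    out, x = [], 5.0
    for t in range(n):
        x = 0.8 * x + rng.gauss(0, 1)
        out.append(x + 2.0 ** (-t / 50))
    return out

reps = [make_path(4000) for _ in range(5)]
print("RBM variance-parameter estimate:", round(rbm_variance(reps, 20), 2))
# for this AR(1), the true variance parameter is 1/(1 - 0.8)**2 = 25
```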

Journal ArticleDOI
TL;DR: This article formulates distance limit derivation and mobility update reduction that introduce bounded inaccuracy to the radio propagation simulation and proposes a novel technique, Lazy Event Scheduling with Corrective Retrospection, that reduces simulation events twenty-five-fold without introducing any inaccuracy at all.
Abstract: Discrete event network simulators have emerged as popular tools for verification and performance evaluation of wireless networks. Nevertheless, the desire to model such networks at high fidelity implies high computational costs, limiting most researchers' ability to simulate networks with thousands of nodes. Previous attempts to optimize simulation of large-scale wireless networks have not appropriately modeled accumulation of weak interference, thereby suffering inaccuracies that may be further magnified in the evaluation of upper-layer protocols. This article presents a comprehensive analysis of the effects of common optimization techniques for large-scale wireless network simulation on overall network performance. Based on the analysis, it formulates distance limit derivation and mobility update reduction that introduce bounded inaccuracy to the radio propagation simulation. It further proposes a novel technique, Lazy Event Scheduling with Corrective Retrospection, that reduces simulation events twenty-five-fold without introducing any inaccuracy at all. The experimental results show that these optimizations can substantially improve the runtime performance of an already efficient wireless network simulator, by a factor of up to 55 for wireless networks with 3200 nodes, without compromising the simulation's accuracy.

Journal ArticleDOI
TL;DR: A neuro-fuzzy inference algorithm is employed to automatically learn the required resemblance relation from real and simulated data, and defuzzification strategies are applied to obtain a coefficient on the unit interval that characterizes the degree of model validity.
Abstract: We develop a new approach to the validation of simulation models by exploiting elements from fuzzy set theory and machine learning. A fuzzy resemblance relation concept is used to set up a mathematical framework for measuring the degree of similarity between the input-output behavior of a simulation model and the corresponding behavior of the real system. A neuro-fuzzy inference algorithm is employed to automatically learn the required resemblance relation from real and simulated data. Ultimately, defuzzification strategies are applied to obtain a coefficient on the unit interval that characterizes the degree of model validity. An example in the airline industry illustrates the practical application of this methodology.

Journal ArticleDOI
TL;DR: The semi-regenerative method generalizes the regenerative method and can increase efficiency; the estimation of various performance measures is considered, including steady-state means, expected cumulative reward until hitting a set of states, derivatives of steady-state means, and time-average variance constants.
Abstract: We develop a class of techniques for analyzing the output of simulations of a semi-regenerative process. Called the semi-regenerative method, the approach is a generalization of the regenerative method, and it can increase efficiency. We consider the estimation of various performance measures, including steady-state means, expected cumulative reward until hitting a set of states, derivatives of steady-state means, and time-average variance constants. We also discuss importance sampling and a bias-reduction technique. In each case, we develop two estimators: one based on a simulation of a single sample path, and the other a type of stratified estimator in which trajectories are generated in an independent and identically distributed manner. We establish a central limit theorem for each estimator so confidence intervals can be constructed.
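For reference, here is a minimal sketch of the classical regenerative method that the article generalizes, assuming an M/M/1 queue with regenerations at arrivals to an empty system: the steady-state mean is estimated by the ratio of total cycle reward to total cycle length.

```python
import random

def regenerative_mean(lam, mu, n_cycles, seed=0):
    """Regenerative estimate of the steady-state mean number in an M/M/1
    queue; each cycle runs from an arrival to an empty system until the
    next such arrival."""
    rng = random.Random(seed)
    reward, length = 0.0, 0.0
    for _ in range(n_cycles):
        x, r, t = 1, 0.0, 0.0
        while x > 0:                      # busy period
            dt = rng.expovariate(lam + mu)
            r += x * dt                   # area under the queue-length path
            t += dt
            x += 1 if rng.random() < lam / (lam + mu) else -1
        t += rng.expovariate(lam)         # idle period until regeneration
        reward += r
        length += t
    return reward / length                # ratio estimator

# exact steady-state mean is rho/(1 - rho) = 1 for rho = 0.5
print(regenerative_mean(lam=0.5, mu=1.0, n_cycles=20_000))
```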

Journal ArticleDOI
TL;DR: Ghosh and Henderson [2003] reported the beta-distribution parameters α1 = (k − 1)/2 and α2 = (d − k)/2; the value of α2 is incorrect and should instead be (d − k)/2 + 1, an error repeated in the recap of the generation steps on the same page.
Abstract: As part of their analysis of the NORTA method, Ghosh and Henderson [2003] developed an algorithm for generating a random correlation matrix. The derivation of the algorithm is correct up until the discussion of the parameters of the beta distribution on page 288. The parameters α1 and α2 were reported to take the values (k − 1)/2 and (d − k)/2 respectively. The value of α2 is incorrect, and should instead equal (d − k)/2 + 1. The error is repeated in the recap of the generation steps reported on the same page. One should instead sample y from a beta distribution with parameters α1 = (k − 1)/2 and α2 = (d − k)/2 + 1, so that the correct density for y is proportional to y^((k−3)/2) (1 − y)^((d−k)/2).
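The corrected step is a one-liner with Python's standard beta sampler; the values of d and k below are illustrative.

```python
import random

rng = random.Random(0)

def sample_y(d, k, rng):
    """Corrected step of the random-correlation-matrix generator:
    y ~ Beta((k - 1)/2, (d - k)/2 + 1), whose density is proportional
    to y**((k - 3)/2) * (1 - y)**((d - k)/2)."""
    return rng.betavariate((k - 1) / 2, (d - k) / 2 + 1)

print(sample_y(d=5, k=3, rng=rng))   # e.g. dimension 5, step 3
```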