Author
Averill M. Law
Other affiliations: University of Michigan, University of Wisconsin-Madison
Bio: Averill M. Law is an academic researcher from the University of Arizona. The author has contributed to research on topics including simulation software and probability distributions. The author has an h-index of 32 and has co-authored 68 publications receiving 13,854 citations. Previous affiliations of Averill M. Law include the University of Michigan and the University of Wisconsin-Madison.
Papers
01 Jan 1982
TL;DR: The text is designed for a one-term or two-quarter course in simulation offered in departments of industrial engineering, business, computer science and operations research.
Abstract: From the Publisher:
This second edition of Simulation Modeling and Analysis includes a chapter on "Simulation in Manufacturing Systems" and examples. The text is designed for a one-term or two-quarter course in simulation offered in departments of industrial engineering, business, computer science, and operations research.
9,905 citations
TL;DR: In this tutorial, techniques for building valid and credible simulation models are presented and the importance of a definitive problem formulation, discussions with subject-matter experts, and interacting with the decision-maker on a regular basis are discussed.
Abstract: In this tutorial we present techniques for building valid and credible simulation models. Ideas to be discussed include the importance of a definitive problem formulation, discussions with subject-matter experts, interacting with the decision-maker on a regular basis, development of a written assumptions document, structured walk-through of the assumptions document, use of sensitivity analysis to determine important model factors, and comparison of model and system output data for an existing system (if any). Each idea will be illustrated by one or more real-world examples. We will also discuss the difficulty in using formal statistical techniques (e.g., confidence intervals) to validate simulation models.
362 citations
04 Dec 2005
TL;DR: In this paper, the authors present techniques for building valid and credible simulation models, covering the importance of a definitive problem formulation, discussions with subject-matter experts, regular interaction with the decision-maker, development of a written assumptions document, a structured walk-through of the assumptions document, and the use of sensitivity analysis to determine important model factors.
Abstract: Identical to the abstract of the preceding tutorial entry.
234 citations
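The last idea in the tutorial above, comparing model and system output data, is often carried out by placing a confidence interval on the difference between the two means. Below is a minimal Python sketch of this comparison; the replication data, confidence level, and normal approximation are all illustrative assumptions, not taken from the tutorial.

import math
from statistics import mean, stdev

def diff_ci(model_reps, system_obs, z=1.96):
    """Approximate 95% CI for E[model] - E[system] using a
    Welch-style normal approximation (illustrative only)."""
    n, m = len(model_reps), len(system_obs)
    diff = mean(model_reps) - mean(system_obs)
    se = math.sqrt(stdev(model_reps) ** 2 / n + stdev(system_obs) ** 2 / m)
    return diff - z * se, diff + z * se

# Hypothetical average delays from 10 model replications and
# 8 observation periods of the real system.
model_reps = [4.2, 3.9, 4.5, 4.1, 4.4, 3.8, 4.3, 4.0, 4.6, 4.2]
system_obs = [4.5, 4.1, 4.8, 4.3, 4.6, 4.2, 4.7, 4.4]

lo, hi = diff_ci(model_reps, system_obs)
print(f"CI for mean difference: ({lo:.3f}, {hi:.3f})")
# An interval containing 0 means the data do not contradict the model.

If the interval contains zero, the system data are consistent with the model at the chosen confidence level; this illustrates the comparison idea, not a proof of validity.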
TL;DR: This paper presents a new sequential procedure, based on the method of batch means, for constructing a confidence interval whose coverage is close to the desired level; the procedure does not explicitly require the stochastic process to have regeneration points.
Abstract: A common problem faced by simulators is that of constructing a confidence interval for the steady-state mean of a stochastic process. We have reviewed the existing procedures for this problem and found that all but one either produce confidence intervals with coverages which may be considerably lower than desired or have not been adequately tested. Thus, in many cases simulators will have more confidence in their results than is justified. In this paper we present a new sequential procedure based on the method of batch means for constructing a confidence interval with coverage close to the desired level. The procedure has the advantage that it does not explicitly require a stochastic process to have regeneration points. Empirical results for a large number of stochastic systems indicate that the new procedure performs quite well.
223 citations
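The batch-means construction at the core of the procedure described above can be sketched in a few lines of Python. This shows only the fixed-batch-count idea, not the paper's sequential rule for growing the run length; the series, batch count, and t quantile are illustrative assumptions.

import math
import random
from statistics import mean, stdev

def batch_means_ci(observations, n_batches=10, t_crit=2.262):
    """CI for the steady-state mean via the method of batch means.

    Splits one long run into contiguous batches, treats the batch
    means as approximately i.i.d. normal, and forms a t-based
    interval. t_crit = 2.262 is the 0.975 t quantile for
    n_batches - 1 = 9 degrees of freedom.
    """
    b = len(observations) // n_batches            # batch size
    batch_means = [mean(observations[i * b:(i + 1) * b])
                   for i in range(n_batches)]
    center = mean(batch_means)
    half_width = t_crit * stdev(batch_means) / math.sqrt(n_batches)
    return center - half_width, center + half_width

# Illustrative autocorrelated output: an AR(1) series standing in
# for simulation output after warm-up deletion. True mean is 0.
random.seed(1)
x, xs = 0.0, []
for _ in range(10_000):
    x = 0.8 * x + random.gauss(0, 1)
    xs.append(x)

print(batch_means_ci(xs))

Batching works because averaging within long batches weakens the autocorrelation between batch means, which is why no regeneration points are needed.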
Cited by
TL;DR: This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiment.
Abstract: THE DESIGN AND ANALYSIS OF EXPERIMENTS. By Oscar Kempthorne. New York, John Wiley and Sons, Inc., 1952. 631 pp. $8.50. This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiment. It is necessary to have some facility with algebraic notation and manipulation to be able to use the volume intelligently. The problems are presented from the theoretical point of view, without such practical examples as would be helpful for those not acquainted with mathematics. The mathematical justification for the techniques is given. As a somewhat advanced treatment of the design and analysis of experiments, this volume will be interesting and helpful for many who approach statistics theoretically as well as practically. With emphasis on the "why," and with description given broadly, the author relates the subject matter to the general theory of statistics and to the general problem of experimental inference. MARGARET J. ROBERTSON
13,333 citations
TL;DR: A review of P. Billingsley's Convergence of Probability Measures (Wiley, 1968), the standard monograph on weak convergence of probability measures on metric spaces.
Abstract: Convergence of Probability Measures. By P. Billingsley. Chichester, Sussex, Wiley, 1968. xii, 253 p. 9 1/4". 117s.
5,689 citations
TL;DR: Outlines Markov chain methods for sampling from a posterior distribution, including the Gibbs sampler and the Metropolis algorithm, together with strategies for constructing hybrid algorithms, and shows how convergence theory can be used to guide the construction of more efficient algorithms.
Abstract: Several Markov chain methods are available for sampling from a posterior distribution. Two important examples are the Gibbs sampler and the Metropolis algorithm. In addition, several strategies are available for constructing hybrid algorithms. This paper outlines some of the basic methods and strategies and discusses some related theoretical and practical issues. On the theoretical side, results from the theory of general state space Markov chains can be used to obtain convergence rates, laws of large numbers and central limit theorems for estimates obtained from Markov chain methods. These theoretical results can be used to guide the construction of more efficient algorithms. For the practical use of Markov chain methods, standard simulation methodology provides several variance reduction techniques and also gives guidance on the choice of sample size and allocation.
3,780 citations
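Of the methods named in the abstract above, the Metropolis algorithm is the simplest to sketch. Below is a minimal random-walk Metropolis sampler in Python; the one-dimensional standard-normal target, step size, and burn-in length are illustrative assumptions.

import math
import random

def metropolis(log_target, x0, n_samples, step=1.0):
    """Random-walk Metropolis: propose x' ~ Normal(x, step) and
    accept with probability min(1, target(x') / target(x))."""
    x, samples = x0, []
    for _ in range(n_samples):
        proposal = x + random.gauss(0, step)
        if math.log(random.random()) < log_target(proposal) - log_target(x):
            x = proposal                 # accept the move
        samples.append(x)                # on rejection, repeat x
    return samples

log_std_normal = lambda x: -0.5 * x * x  # illustrative target
draws = metropolis(log_std_normal, x0=0.0, n_samples=50_000)
kept = draws[5_000:]                     # discard burn-in
print(sum(kept) / len(kept))             # close to the true mean, 0

The Gibbs sampler replaces the accept/reject step with exact draws from each full conditional distribution; the hybrid strategies mentioned in the abstract mix the two.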
TL;DR: Presents a Bayesian calibration technique that improves on the traditional approach in two respects: the predictions allow for all sources of uncertainty, and they correct for any inadequacy of the model revealed by a discrepancy between the observed data and the model predictions from even the best-fitting parameter values.
Abstract: We consider prediction and uncertainty analysis for systems which are approximated using complex mathematical models. Such models, implemented as computer codes, are often generic in the sense that by a suitable choice of some of the model's input parameters the code can be used to predict the behaviour of the system in a variety of specific applications. However, in any specific application the values of necessary parameters may be unknown. In this case, physical observations of the system in the specific context are used to learn about the unknown parameters. The process of fitting the model to the observed data by adjusting the parameters is known as calibration. Calibration is typically effected by ad hoc fitting, and after calibration the model is used, with the fitted input values, to predict the future behaviour of the system. We present a Bayesian calibration technique which improves on this traditional approach in two respects. First, the predictions allow for all sources of uncertainty, including the remaining uncertainty over the fitted parameters. Second, they attempt to correct for any inadequacy of the model which is revealed by a discrepancy between the observed data and the model predictions from even the best-fitting parameter values. The method is illustrated by using data from a nuclear radiation release at Tomsk, and from a more complex simulated nuclear accident exercise.
3,745 citations
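The simulator-plus-discrepancy structure described in the abstract above is commonly written as follows; the notation is the usual convention for this kind of model, not a quotation from the paper:

y_i = \eta(x_i, \theta) + \delta(x_i) + \varepsilon_i, \qquad \varepsilon_i \sim N(0, \sigma^2)

Here \eta is the computer model evaluated at input x_i with calibration parameters \theta, \delta is the model-inadequacy (discrepancy) function, and \varepsilon_i is observation error; Gaussian-process priors are typically placed on \eta and \delta. The \delta term is what lets the method correct for model inadequacy even at the best-fitting \theta.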
01 Jul 1992
TL;DR: A general method for automatic reconstruction of accurate, concise, piecewise smooth surfaces from unorganized 3D points that is able to automatically infer the topological type of the surface, its geometry, and the presence and location of features such as boundaries, creases, and corners.
Abstract: This thesis describes a general method for automatic reconstruction of accurate, concise, piecewise smooth surfaces from unorganized 3D points. Instances of surface reconstruction arise in numerous scientific and engineering applications, including reverse-engineering--the automatic generation of CAD models from physical objects.
Previous surface reconstruction methods have typically required additional knowledge, such as structure in the data, known surface genus, or orientation information. In contrast, the method outlined in this thesis requires only the 3D coordinates of the data points. From the data, the method is able to automatically infer the topological type of the surface, its geometry, and the presence and location of features such as boundaries, creases, and corners.
The reconstruction method has three major phases: (1) initial surface estimation, (2) mesh optimization, and (3) piecewise smooth surface optimization. A key ingredient in phase 3, and another principal contribution of this thesis, is the introduction of a new class of piecewise smooth representations based on subdivision. The effectiveness of the three-phase reconstruction method is demonstrated on a number of examples using both simulated and real data.
Phases 2 and 3 of the surface reconstruction method can also be used to approximate existing surface models. By casting surface approximation as a global optimization problem with an energy function that directly measures deviation of the approximation from the original surface, models are obtained that exhibit excellent accuracy-to-conciseness trade-offs. Examples of piecewise linear and piecewise smooth approximations are generated for various surfaces, including meshes, NURBS surfaces, CSG models, and implicit surfaces.
3,119 citations
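The mesh optimization of phase 2 above minimizes an energy of roughly the following form; this is a standard formulation for this family of methods, stated here as an assumption rather than quoted from the thesis:

E(K, V) = E_{\mathrm{dist}}(K, V) + c_{\mathrm{rep}}\, m + E_{\mathrm{spring}}(K, V)

where E_dist sums squared distances from the data points to the mesh with connectivity K and vertex positions V, the middle term charges a fixed cost c_rep for each of the m vertices (driving conciseness), and E_spring regularizes edge lengths so the optimization stays well behaved.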