scispace - formally typeset
Author

Barry L. Nelson

Bio: Barry L. Nelson is an academic researcher at Northwestern University. His research focuses on stochastic simulation and estimators. He has an h-index of 53, has co-authored 272 publications, and has received 14,815 citations. Previous affiliations include Lancaster University and Ohio State University.


Papers
Book
21 Sep 1995
TL;DR: Front matter description only: a note on the authors (pp. XV-XVI), a bibliography with each chapter, and an index.
Abstract: Note on the authors: pp. XV-XVI. Bibliography with each chapter. Index. Summaries.

3,866 citations

Journal ArticleDOI
K. Abe, N. Abgrall, Yasuo Ajima, Hiroaki Aihara, +413 more (53 institutions)
TL;DR: The T2K experiment observes indications of ν_μ → ν_e appearance in data accumulated with 1.43×10²⁰ protons on target; under this hypothesis, the probability to observe six or more candidate events is 7×10⁻³, equivalent to 2.5σ significance.
Abstract: The T2K experiment observes indications of ν_μ → ν_e appearance in data accumulated with 1.43×10²⁰ protons on target. Six events pass all selection criteria at the far detector. In a three-flavor neutrino oscillation scenario with |Δm²₂₃| = 2.4×10⁻³ eV², sin²2θ₂₃ = 1 and sin²2θ₁₃ = 0, the expected number of such events is 1.5 ± 0.3 (syst). Under this hypothesis, the probability to observe six or more candidate events is 7×10⁻³, equivalent to 2.5σ significance. At 90% C.L., the data are consistent with 0.03 (0.04) < sin²2θ₁₃ < 0.28 (0.34) for δ_CP = 0 and a normal (inverted) hierarchy.
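The significance arithmetic in the abstract can be checked approximately with a Poisson tail calculation; a minimal sketch (plain Poisson only, so it understates the paper's quoted 7×10⁻³, which also folds in the ±0.3 systematic uncertainty on the expected count):

```python
from math import exp, factorial

def poisson_tail(k_min, mean):
    """P(X >= k_min) for X ~ Poisson(mean)."""
    return 1.0 - sum(exp(-mean) * mean**k / factorial(k) for k in range(k_min))

# Expected number of candidate events under sin^2(2*theta_13) = 0 is 1.5;
# probability of observing six or more by chance:
p = poisson_tail(6, 1.5)   # about 4.5e-3 before systematics
```

Allowing the mean to fluctuate by its systematic uncertainty pushes this tail probability up toward the quoted 7×10⁻³.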

1,361 citations

Journal ArticleDOI
K. Abe, N. Abgrall, Hiroaki Aihara, Yasuo Ajima, +533 more (53 institutions)
TL;DR: The T2K experiment is a long-baseline neutrino oscillation experiment whose main goal is to measure the last unknown lepton-sector mixing angle θ₁₃ by observing ν_e appearance in a ν_μ beam generated by the J-PARC accelerator.
Abstract: The T2K experiment is a long-baseline neutrino oscillation experiment. Its main goal is to measure the last unknown lepton-sector mixing angle θ₁₃ by observing ν_e appearance in a ν_μ beam. It also aims to make precision measurements of the known oscillation parameters Δm²₂₃ and sin²2θ₂₃ via ν_μ disappearance studies. Other goals of the experiment include various neutrino cross-section measurements and sterile neutrino searches. The experiment uses an intense proton beam generated by the J-PARC accelerator in Tokai, Japan, and is composed of a neutrino beamline, a near detector complex (ND280), and a far detector (Super-Kamiokande) located 295 km from J-PARC. This paper provides a comprehensive review of the instrumentation of the T2K experiment and a summary of the vital information for each subsystem.

714 citations

Journal ArticleDOI
TL;DR: The basic theory of kriging is extended, as applied to the design and analysis of deterministic computer experiments, to the stochastic simulation setting to provide flexible, interpolation-based metamodels of simulation output performance measures as functions of the controllable design or decision variables.
Abstract: We extend the basic theory of kriging, as applied to the design and analysis of deterministic computer experiments, to the stochastic simulation setting. Our goal is to provide flexible, interpolation-based metamodels of simulation output performance measures as functions of the controllable design or decision variables, or uncontrollable environmental variables. To accomplish this, we characterize both the intrinsic uncertainty inherent in a stochastic simulation and the extrinsic uncertainty about the unknown response surface. We use tractable examples to demonstrate why it is critical to characterize both types of uncertainty, derive general results for experiment design and analysis, and present a numerical example that illustrates the stochastic kriging method.
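The interplay of intrinsic (simulation-noise) and extrinsic (response-surface) uncertainty described above can be sketched as a toy predictor; a minimal illustration, not the paper's implementation, assuming a unit-variance Gaussian extrinsic covariance and a known zero mean, with all names invented for the example:

```python
import numpy as np

def gauss_cov(a, b, theta=1.0):
    """Extrinsic (spatial) covariance between 1-D design points a and b."""
    return np.exp(-theta * (a[:, None] - b[None, :]) ** 2)

def stochastic_kriging_predict(x0, x, ybar, noise_var):
    """Predict the response surface at x0 from noisy simulation means ybar.

    Intrinsic uncertainty enters through noise_var (the sampling variance
    of each ybar); extrinsic uncertainty through the spatial covariance."""
    K = gauss_cov(x, x) + np.diag(noise_var)     # extrinsic + intrinsic
    k0 = gauss_cov(np.atleast_1d(x0), x)[0]
    w = np.linalg.solve(K, k0)                   # kriging weights
    mean = w @ ybar
    var = 1.0 - w @ k0                           # prediction variance of the surface
    return mean, var

# Toy use: noisy simulation estimates of sin(x) at 5 design points,
# each averaged over 50 replications with noise sd 0.3.
x = np.linspace(0.0, np.pi, 5)
rng = np.random.default_rng(0)
n_reps, sigma = 50, 0.3
ybar = np.sin(x) + rng.normal(0.0, sigma / np.sqrt(n_reps), size=5)
noise_var = np.full(5, sigma**2 / n_reps)
m, v = stochastic_kriging_predict(np.pi / 2, x, ybar, noise_var)
```

Because the intrinsic variance appears on the diagonal of K, the predictor smooths rather than exactly interpolates the noisy sample means, which is precisely the distinction the abstract draws from deterministic kriging.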

576 citations

Journal ArticleDOI
TL;DR: The procedures presented are appropriate when it is possible to repeatedly obtain small, incremental samples from each simulated system and are based on the assumption of normally distributed data, so the impact of batching is analyzed.
Abstract: We present procedures for selecting the best or near-best of a finite number of simulated systems when best is defined by maximum or minimum expected performance. The procedures are appropriate when it is possible to repeatedly obtain small, incremental samples from each simulated system. The goal of such a sequential procedure is to eliminate, at an early stage of experimentation, those simulated systems that are apparently inferior, and thereby reduce the overall computational effort required to find the best. The procedures we present accommodate unequal variances across systems and the use of common random numbers. However, they are based on the assumption of normally distributed data, so we analyze the impact of batching (to achieve approximate normality or independence) on the performance of the procedures. Comparisons with some existing indifference-zone procedures are also provided.
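The sequential-elimination idea described above can be sketched in a few lines; a deliberately simplified version (a fixed elimination margin, no indifference-zone constants, unequal-variance handling, or common random numbers; all names are illustrative):

```python
import random

def select_best(systems, margin=0.5, batch=20, rounds=30, seed=1):
    """Sequentially sample competing simulated systems, eliminating those
    whose running sample mean trails the current leader by more than `margin`.
    `systems` maps a name to a sampler drawing one observation from rng."""
    rng = random.Random(seed)
    sums = {s: 0.0 for s in systems}
    counts = {s: 0 for s in systems}
    alive = set(systems)
    for _ in range(rounds):
        for s in sorted(alive):                 # deterministic sampling order
            for _ in range(batch):              # small incremental sample
                sums[s] += systems[s](rng)
                counts[s] += 1
        means = {s: sums[s] / counts[s] for s in alive}
        best = max(means.values())
        alive = {s for s in alive if means[s] >= best - margin}
        if len(alive) == 1:
            break
    return max(sorted(alive), key=lambda s: sums[s] / counts[s])

# Three systems with true means 0.0, 0.5, 1.0 and common noise sd 1.0
make = lambda mu: (lambda rng: rng.gauss(mu, 1.0))
winner = select_best({"A": make(0.0), "B": make(0.5), "C": make(1.0)})
```

Apparently inferior systems stop consuming samples as soon as they fall behind, which is the computational saving the abstract describes; the paper's procedures replace the ad hoc `margin` with statistically justified screening bounds.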

422 citations


Cited by
Journal Article
TL;DR: This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiment.
Abstract: THE DESIGN AND ANALYSIS OF EXPERIMENTS. By Oscar Kempthorne. New York, John Wiley and Sons, Inc., 1952. 631 pp. $8.50. This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiments. It is necessary to have some facility with algebraic notation and manipulation to be able to use the volume intelligently. The problems are presented from the theoretical point of view, without such practical examples as would be helpful for those not acquainted with mathematics. The mathematical justification for the techniques is given. As a somewhat advanced treatment of the design and analysis of experiments, this volume will be interesting and helpful for many who approach statistics theoretically as well as practically. With emphasis on the "why," and with description given broadly, the author relates the subject matter to the general theory of statistics and to the general problem of experimental inference. MARGARET J. ROBERTSON

13,333 citations

Journal ArticleDOI

6,278 citations

Journal ArticleDOI
TL;DR: A brief bibliographic review of P. Billingsley's monograph on the weak convergence of probability measures.
Abstract: Convergence of Probability Measures. By P. Billingsley. Chichester, Sussex, Wiley, 1968. xii, 253 p. 9 1/4". 117s.

5,689 citations

Book ChapterDOI
01 Jan 2011
TL;DR: Weak-convergence methods in metric spaces are developed, with applications sufficient to show their power and utility; the results of the first three chapters are used in Chapter 4 to derive a variety of limit theorems for dependent sequences of random variables.
Abstract: The author's preface gives an outline: "This book is about weak-convergence methods in metric spaces, with applications sufficient to show their power and utility. The Introduction motivates the definitions and indicates how the theory will yield solutions to problems arising outside it. Chapter 1 sets out the basic general theorems, which are then specialized in Chapter 2 to the space C[0, 1] of continuous functions on the unit interval and in Chapter 3 to the space D[0, 1] of functions with discontinuities of the first kind. The results of the first three chapters are used in Chapter 4 to derive a variety of limit theorems for dependent sequences of random variables." The book develops and expands on Donsker's 1951 and 1952 papers on the invariance principle and empirical distributions. The basic random variables remain real-valued although, of course, measures on C[0, 1] and D[0, 1] are vitally used. Within this framework, there are various possibilities for a different and apparently better treatment of the material. More of the general theory of weak convergence of probabilities on separable metric spaces would be useful. Metrizability of the convergence is not brought up until late in the Appendix. The close relation of the Prokhorov metric and a metric for convergence in probability is (hence) not mentioned (see V. Strassen, Ann. Math. Statist. 36 (1965), 423-439; the reviewer, ibid. 39 (1968), 1563-1572). This relation would illuminate and organize such results as Theorems 4.1, 4.2 and 4.4, which give isolated, ad hoc connections between weak convergence of measures and nearness in probability. In the middle of p. 16, it should be noted that C*(S) consists of signed measures which need only be finitely additive if S is not compact. On p. 239, where the author twice speaks of separable subsets having nonmeasurable cardinal, he means "discrete" rather than "separable."
Theorem 1.4 is Ulam's theorem that a Borel probability on a complete separable metric space is tight. Theorem 1 of Appendix 3 weakens completeness to topological completeness. After mentioning that probabilities on the rationals are tight, the author says it is an
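Donsker's invariance principle, which the book under review develops, can be illustrated numerically; a small sketch (illustrative only, not from the review) checking that the endpoint of the rescaled random walk is approximately standard normal, as the C[0, 1] limit theorem implies:

```python
import random
import statistics

def scaled_walk_endpoint(n, rng):
    """Endpoint W_n(1) = S_n / sqrt(n) of the rescaled +/-1 random walk,
    which Donsker's theorem sends to Brownian motion in C[0, 1]."""
    s = sum(rng.choice((-1, 1)) for _ in range(n))
    return s / n ** 0.5

rng = random.Random(42)
samples = [scaled_walk_endpoint(400, rng) for _ in range(2000)]
# The endpoint distribution should be approximately N(0, 1):
mean = statistics.fmean(samples)
sd = statistics.pstdev(samples)
```

The same simulation, tracking the whole rescaled path rather than only its endpoint, illustrates the functional limit theorems for C[0, 1] that the book treats.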

3,554 citations