Journal ArticleDOI

Asymptotic Theory of Certain "Goodness of Fit" Criteria Based on Stochastic Processes

01 Jan 1952-Annals of Mathematical Statistics (Institute of Mathematical Statistics)-Vol. 23, Iss: 2, pp 193-212
TL;DR: In this article, a general method for calculating the limiting distributions of these criteria is developed by reducing them to corresponding problems in stochastic processes, which in turn lead to more or less classical eigenvalue and boundary value problems for special classes of differential equations.
Abstract: The statistical problem treated is that of testing the hypothesis that $n$ independent, identically distributed random variables have a specified continuous distribution function $F(x)$. If $F_n(x)$ is the empirical cumulative distribution function and $\psi(t)$ is some nonnegative weight function $(0 \leqq t \leqq 1)$, we consider $n^{\frac{1}{2}} \sup_{-\infty < x < \infty} \{|F_n(x) - F(x)| \psi^{\frac{1}{2}}(F(x))\}$ and $n \int_{-\infty}^{\infty} \lbrack F_n(x) - F(x) \rbrack^2 \psi(F(x))\, dF(x)$. A general method for calculating the limiting distributions of these criteria is developed by reducing them to corresponding problems in stochastic processes, which in turn lead to more or less classical eigenvalue and boundary value problems for special classes of differential equations. Explicit limiting distributions are obtained for certain weight functions, including $\psi(t) = 1$ and $\psi(t) = 1/\lbrack t(1 - t) \rbrack$.
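With weight $\psi \equiv 1$, the quadratic criterion reduces to the Cramér-von Mises statistic $nW_n^2$, which has a simple computing formula in terms of the ordered probability-integral transforms $u_{(i)} = F(x_{(i)})$. A minimal sketch (function name and interface are my own, not from the paper):

```python
def cramer_von_mises(sample, cdf):
    """Cramér-von Mises statistic n*W^2 for H0: data ~ cdf.

    Uses the computing formula
        n*W^2 = 1/(12n) + sum_i (u_(i) - (2i-1)/(2n))^2,
    where u_(i) are the ordered probability-integral transforms.
    """
    u = sorted(cdf(x) for x in sample)
    n = len(u)
    return 1.0 / (12 * n) + sum(
        (u[i] - (2 * i + 1) / (2 * n)) ** 2 for i in range(n)
    )
```

For data truly drawn from $F$, the $u_{(i)}$ behave like ordered uniforms and the statistic stays near its minimum $1/(12n)$; large values indicate lack of fit.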


Citations
Journal ArticleDOI
TL;DR: In this article, the authors considered tests for parameter instability and structural change with unknown change point, and the results apply to a wide class of parametric models that are suitable for estimation by generalized method of moments procedures.
Abstract: This paper considers tests for parameter instability and structural change with unknown change point. The results apply to a wide class of parametric models that are suitable for estimation by generalized method of moments procedures. The asymptotic distributions of the test statistics considered here are nonstandard because the change point parameter only appears under the alternative hypothesis and not under the null. The tests considered here are shown to have nontrivial asymptotic local power against all alternatives for which the parameters are nonconstant. The tests are found to perform quite well in a Monte Carlo experiment reported elsewhere. Copyright 1993 by The Econometric Society.
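The flavour of a sup-type test with unknown change point can be illustrated in the simplest setting: a single break in the mean of a univariate series. This toy sketch (function name and the 15% trimming fraction are my choices, not from the paper) maximises an F-type statistic over a trimmed range of candidate break dates, mirroring why the limiting distribution is nonstandard — the break date appears only under the alternative:

```python
def sup_f_mean_shift(y, trim=0.15):
    """Sup-F statistic for a single unknown break in the mean of y.

    For each candidate break k in the trimmed range, compares the
    split-sample sum of squared errors to the full-sample one; the
    statistic is the maximum F-ratio over all candidate breaks.
    """
    n = len(y)

    def sse(seg):
        m = sum(seg) / len(seg)
        return sum((v - m) ** 2 for v in seg)

    sse_full = sse(y)
    lo, hi = int(n * trim), int(n * (1 - trim))
    best = 0.0
    for k in range(max(lo, 2), min(hi, n - 2)):
        sse_split = sse(y[:k]) + sse(y[k:])
        if sse_split <= 0:          # perfect split fit: unbounded evidence
            return float("inf")
        f = (sse_full - sse_split) / (sse_split / (n - 2))
        best = max(best, f)
    return best
```

A series with a large mean shift yields a huge sup-F value, while a stable series keeps it small; the paper's contribution is the asymptotic theory for statistics of exactly this sup form in general GMM-estimable models.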

4,348 citations

Journal ArticleDOI
TL;DR: In this article, a residual-based Lagrange multiplier (LM) test for a null that the individual observed series are stationary around a deterministic level or around deterministic trend against the alternative of a unit root in panel data is proposed.
Abstract: This paper proposes a residual-based Lagrange multiplier (LM) test for a null that the individual observed series are stationary around a deterministic level or around a deterministic trend, against the alternative of a unit root in panel data. The tests, which are asymptotically similar under the null, belong to the class of locally best invariant (LBI) test statistics. The asymptotic distributions of the statistics are derived under the null and are shown to be normal. Finite-sample sizes and powers are examined in a Monte Carlo experiment. The empirical sizes of the tests are close to the true size even in small samples. The testing procedure is easy to apply, including to panel data models with fixed effects, individual deterministic trends and heterogeneous errors across cross-sections. It is also shown how to apply the tests to the more general case of serially correlated disturbance terms.
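For a single series, the building block of such residual-based LM tests is a KPSS-type statistic: partial sums of demeaned residuals, normalised by the sample size and the residual variance. A bare sketch for the level-stationary case (function name is mine; no correction for serial correlation is applied, and the panel version averages this across cross-section units):

```python
def lm_stationarity(y):
    """KPSS-type LM statistic for level stationarity of one series.

    Demeans y, accumulates the partial sums S_t of the residuals, and
    returns sum(S_t^2) / (T^2 * sigma^2), where sigma^2 is the naive
    residual variance (no long-run variance correction).
    """
    T = len(y)
    m = sum(y) / T
    e = [v - m for v in y]
    s, partial = 0.0, []
    for v in e:
        s += v
        partial.append(s)
    var = sum(v * v for v in e) / T
    return sum(p * p for p in partial) / (T * T * var)
```

Under stationarity the partial sums mean-revert and the statistic stays small; under a unit root (or an unmodelled trend) they wander, inflating the statistic.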

2,242 citations

Journal ArticleDOI
TL;DR: Large-sample significance points are tabulated for a distribution-free goodness-of-fit test introduced earlier by the authors; unlike the Kolmogorov test, it is sensitive to discrepancies at the tails of the distribution rather than near the median.
Abstract: Some (large sample) significance points are tabulated for a distribution-free test of goodness of fit which was introduced earlier by the authors. The test, which uses the actual observations without grouping, is sensitive to discrepancies at the tails of the distribution rather than near the median. An illustration is given, using a numerical example used previously by Birnbaum in illustrating the Kolmogorov test.
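The tail-sensitive test tabulated here corresponds to the quadratic criterion with weight $\psi(u) = 1/\lbrack u(1-u) \rbrack$, now known as the Anderson-Darling statistic $A^2$. A sketch of its standard computing formula (function name is mine; the significance points themselves are what the paper tabulates and are not reproduced here):

```python
import math

def anderson_darling(sample, cdf):
    """Anderson-Darling statistic A^2: the quadratic EDF criterion with
    weight psi(u) = 1/(u(1-u)), which emphasises the tails.

    Computing formula:
        A^2 = -n - (1/n) * sum_i (2i-1) * [ln u_(i) + ln(1 - u_(n+1-i))]
    with u_(i) the ordered probability-integral transforms.
    """
    u = sorted(cdf(x) for x in sample)
    n = len(u)
    s = sum(
        (2 * i + 1) * (math.log(u[i]) + math.log(1.0 - u[n - 1 - i]))
        for i in range(n)
    )
    return -n - s / n
```

The logarithms diverge as $u_{(i)}$ approaches 0 or 1, which is precisely how the weight penalises tail discrepancies that the unweighted statistic barely notices.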

2,013 citations


Cites background or methods from "Asymptotic Theory of Certain "Goodn..."

  • ...The asymptotic distribution of this statistic is given in [1]....


  • ...COMPUTATION OF THE ASYMPTOTIC SIGNIFICANCE POINTS It was proved in [1] that the limiting characteristic function of Wn2 defined in either (2) or (4) is...


  • ...The present writers suggested [1] the use of the criterion...


  • ...The test procedure we have proposed earlier [1] is the following: Let $x_1 \leqq x_2 \leqq \cdots \leqq x_n$....


  • ...5) in [1] cannot be used directly here, for that formula requires that $\int \psi(u) du < \infty$, which is not true of (6)....


Book
05 Dec 2000
TL;DR: This book presents a practical guide to quantitative risk analysis built on Monte Carlo simulation, from planning and model design, through fitting first- and second-order parametric distributions to observed data, to applications in finance, insurance, project risk and food safety.
Abstract: Preface. Part 1: Introduction. 1. Why do a risk analysis? 1.1. Moving on from "What If" Scenarios. 1.2. The Risk Analysis Process. 1.3. Risk Management Options. 1.4. Evaluating Risk Management Options. 1.5. Inefficiencies in Transferring Risks to Others. 1.6. Risk Registers. 2. Planning a risk analysis. 2.1. Questions and Motives. 2.2. Determine the Assumptions that are Acceptable or Required. 2.3. Time and Timing. 2.4. You'll Need a Good Risk Analyst or Team. 3. The quality of a risk analysis. 3.1. The Reasons Why a Risk Analysis can be Terrible. 3.2. Communicating the Quality of Data Used in a Risk Analysis. 3.3. Level of Criticality. 3.4. The Biggest Uncertainty in a Risk Analysis. 3.5. Iterate. 4. Choice of model structure. 4.1. Software Tools and the Models they Build. 4.2. Calculation Methods. 4.3. Uncertainty and Variability. 4.4. How Monte Carlo Simulation Works. 4.5. Simulation Modelling. 5. Understanding and using the results of a risk analysis. 5.1. Writing a Risk Analysis Report. 5.2. Explaining a Model's Assumptions. 5.3. Graphical Presentation of a Model's Results. 5.4. Statistical Methods of Analysing Results. Part 2: Introduction. 6. Probability mathematics and simulation. 6.1. Probability Distribution Equations. 6.2. The Definition of "Probability". 6.3. Probability Rules. 6.4. Statistical Measures. 7. Building and running a model. 7.1. Model Design and Scope. 7.2. Building Models that are Easy to Check and Modify. 7.3. Building Models that are Efficient. 7.4. Most Common Modelling Errors. 8. Some basic random processes. 8.1. Introduction. 8.2. The Binomial Process. 8.3. The Poisson Process. 8.4. The Hypergeometric Process. 8.5. Central Limit Theorem. 8.6. Renewal Processes. 8.7. Mixture Distributions. 8.8. Martingales. 8.9. Miscellaneous Example. 9. Data and statistics. 9.1. Classical Statistics. 9.2. Bayesian Inference. 9.3. The Bootstrap. 9.4. Maximum Entropy Principle. 9.5. Which Technique Should You Use? 9.6. 
Adding uncertainty in Simple Linear Least-Squares Regression Analysis. 10. Fitting distributions to data. 10.1. Analysing the Properties of the Observed Data. 10.2. Fitting a Non-Parametric Distribution to the Observed Data. 10.3. Fitting a First-Order Parametric Distribution to Observed Data. 10.4. Fitting a Second-Order Parametric Distribution to Observed Data. 11. Sums of random variables. 11.1. The Basic Problem. 11.2. Aggregate Distributions. 12. Forecasting with uncertainty. 12.1. The Properties of a Time Series Forecast. 12.2. Common Financial Time Series Models. 12.3. Autoregressive Models. 12.4. Markov Chain Models. 12.5. Birth and Death Models. 12.6. Time Series Projection of Events Occurring Randomly in Time. 12.7. Time Series Models with Leading Indicators. 12.8. Comparing Forecasting Fits for Different Models. 12.9. Long-Term Forecasting. 13. Modelling correlation and dependencies. 13.1. Introduction. 13.2. Rank Order Correlation. 13.3. Copulas. 13.4. The Envelope Method. 13.5. Multiple Correlation Using a Look-Up Table. 14. Eliciting from expert opinion. 14.1. Introduction. 14.2. Sources of Error in Subjective Estimation. 14.3. Modelling Techniques. 14.4. Calibrating Subject Matter Experts. 14.5. Conducting a Brainstorming Session. 14.6. Conducting the Interview. 15. Testing and modelling causal relationships. 15.1. Campylobacter Example. 15.2. Types of Model to Analyse Data. 15.3. From Risk Factors to Causes. 15.4. Evaluating Evidence. 15.5. The Limits of Causal Arguments. 15.6. An Example of a Qualitative Causal Analysis. 15.7. Is Causal Analysis Essential? 16. Optimisation in risk analysis. 16.1. Introduction. 16.2. Optimisation Methods. 16.3. Risk Analysis Modelling and Optimisation. 16.4. Working Example: Optimal Allocation of Mineral Pots. 17. Checking and validating a model. 17.1. Spreadsheet Model Errors. 17.2. Checking Model Behaviour. 17.3. Comparing Predictions Against Reality. 18. Discounted cashflow modelling. 18.1. 
Useful Time Series Models of Sales and Market Size. 18.2. Summing Random Variables. 18.3. Summing Variable Margins on Variable Revenues. 18.4. Financial Measures in Risk Analysis. 19. Project risk analysis. 19.1. Cost Risk Analysis. 19.2. Schedule Risk Analysis. 19.3. Portfolios of risks. 19.4. Cascading Risks. 20. Insurance and finance risk analysis modelling. 20.1. Operational Risk Modelling. 20.2. Credit Risk. 20.3. Credit Ratings and Markov Chain Models. 20.4. Other Areas of Financial Risk. 20.5. Measures of Risk. 20.6. Term Life Insurance. 20.7. Accident Insurance. 20.8. Modelling a Correlated Insurance Portfolio. 20.9. Modelling Extremes. 20.10. Premium Calculations. 21. Microbial food safety risk assessment. 21.1. Growth and Attenuation Models. 21.2. Dose-Response Models. 21.3. Is Monte Carlo Simulation the Right Approach? 21.4. Some Model Simplifications. 22. Animal import risk assessment. 22.1. Testing for an Infected Animal. 22.2. Estimating True Prevalence in a Population. 22.3. Importing Problems. 22.4. Confidence of Detecting an Infected Group. 22.5. Miscellaneous Animal Health and Food Safety Problems. I. Guide for lecturers. II. About ModelRisk. III. A compendium of distributions. III.1. Discrete and Continuous Distributions. III.2. Bounded and Unbounded Distributions. III.3. Parametric and Non-Parametric Distributions. III.4. Univariate and Multivariate Distributions. III.5. Lists of Applications and the Most Useful Distributions. III.6. How to Read Probability Distribution Equations. III.7. The Distributions. III.8. Introduction to Creating Your Own Distributions. III.9. Approximation of One Distribution with Another. III.10. Recursive Formulae for Discrete Distributions. III.11. A Visual Observation On The Behaviour Of Distributions. IV. Further reading. V. Vose Consulting. References. Index.
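The core Monte Carlo loop described in Part 1 of the book can be sketched in a few lines: propagate uncertain inputs through a model many times and summarise the output distribution by percentiles. All distributions and parameters below are illustrative placeholders, not taken from the book:

```python
import random

def simulate_total_cost(n_iter=10_000, seed=1):
    """Toy Monte Carlo risk model.

    Total cost = fixed base + a triangular component (min, mode, max)
    + a normal component; returns the 5th, 50th and 95th percentiles
    of the simulated output distribution.
    """
    rng = random.Random(seed)
    totals = sorted(
        100.0 + rng.triangular(10, 30, 20) + rng.gauss(50, 5)
        for _ in range(n_iter)
    )

    def pct(p):
        return totals[int(p / 100 * (n_iter - 1))]

    return pct(5), pct(50), pct(95)
```

Reporting a percentile band rather than a single point estimate is the basic shift from "what if" scenarios to risk analysis that the opening chapters argue for.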

1,606 citations


Cites background from "Asymptotic Theory of Certain "Goodn..."

  • ...Tables for this statistic can be found in Anderson and Darling (1952)....


References
Book
01 Jan 1944
TL;DR: This treatise gives a comprehensive account of Bessel functions, from their history before 1826 through their analytic theory, asymptotic expansions and zeros, to the tabulation of Bessel functions.
Abstract: 1. Bessel functions before 1826 2. The Bessel coefficients 3. Bessel functions 4. Differential equations 5. Miscellaneous properties of Bessel functions 6. Integral representations of Bessel functions 7. Asymptotic expansions of Bessel functions 8. Bessel functions of large order 9. Polynomials associated with Bessel functions 10. Functions associated with Bessel functions 11. Addition theorems 12. Definite integrals 13. Infinite integrals 14. Multiple integrals 15. The zeros of Bessel functions 16. Neumann series and Lommel's functions of two variables 17. Kapteyn series 18. Series of Fourier-Bessel and Dini 19. Schlömilch series 20. The tabulation of Bessel functions Tables of Bessel functions Bibliography Indices.

9,584 citations

Book
01 Jan 1948

981 citations

Journal ArticleDOI
TL;DR: In this paper, the author defines the probability function V(t) of a quantity z that may assume certain real values with certain probabilities: V(t) is the probability that z has a value less than t, increased by half the probability that z has exactly the value t.
Abstract: 1. By a variable in the sense of the Theory of Probability we mean a quantity z, which may assume certain real values with certain probabilities. We shall call V(t) the probability function of z if, for every real t, V(t) is equal to the probability that z has a value < t, increased by half the probability that z has exactly the value t.
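For a discrete variable, this "half weight at the jump" convention can be written down directly. A small sketch (function name and the dict representation of the probability mass function are my own):

```python
def prob_function(pmf, t):
    """V(t) = P(z < t) + 0.5 * P(z = t) for a discrete variable.

    pmf maps each attainable value of z to its probability; this is the
    mid-distribution convention described in the abstract.
    """
    return sum(p for v, p in pmf.items() if v < t) + 0.5 * pmf.get(t, 0.0)
```

At a point carrying positive mass, V splits the jump evenly, so V is the average of the left and right limits of the ordinary cumulative distribution function there.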

476 citations