
Showing papers on "Mathematical statistics published in 2003"


BookDOI
TL;DR: This book develops probability theory as extended logic, covering plausible reasoning, the quantitative rules, elementary sampling theory, elementary hypothesis testing, parameter estimation, the Gaussian distribution, and sufficiency and ancillarity, followed by advanced applications including entropy-based priors, decision theory, orthodox statistics and its pathologies, and model comparison.
Abstract: Foreword Preface Part I. Principles and Elementary Applications: 1. Plausible reasoning 2. The quantitative rules 3. Elementary sampling theory 4. Elementary hypothesis testing 5. Queer uses for probability theory 6. Elementary parameter estimation 7. The central, Gaussian or normal distribution 8. Sufficiency, ancillarity, and all that 9. Repetitive experiments, probability and frequency 10. Physics of 'random experiments' Part II. Advanced Applications: 11. Discrete prior probabilities, the entropy principle 12. Ignorance priors and transformation groups 13. Decision theory: historical background 14. Simple applications of decision theory 15. Paradoxes of probability theory 16. Orthodox methods: historical background 17. Principles and pathology of orthodox statistics 18. The Ap distribution and rule of succession 19. Physical measurements 20. Model comparison 21. Outliers and robustness 22. Introduction to communication theory References Appendix A. Other approaches to probability theory Appendix B. Mathematical formalities and style Appendix C. Convolutions and cumulants.

4,641 citations


Journal ArticleDOI
TL;DR: In this article, two nonparametric approaches, based on kernel methods and orthogonal series, are proposed to estimate regression functions in the presence of instrumental variables; the authors derive optimal convergence rates and show that they are attained by particular estimators.
Abstract: We suggest two nonparametric approaches, based on kernel methods and orthogonal series, to estimating regression functions in the presence of instrumental variables. For the first time in this class of problems, we derive optimal convergence rates and show that they are attained by particular estimators. In the presence of instrumental variables, the relation that identifies the regression function also defines an ill-posed inverse problem, the "difficulty" of which depends on the eigenvalues of a certain integral operator determined by the joint density of the endogenous and instrumental variables. We delineate the role played by problem difficulty in determining both the optimal convergence rate and the appropriate choice of smoothing parameter.
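The series-based approach described above can be sketched with a toy simulation. The data-generating process, the quadratic bases, and the equal-dimension moment solve below are illustrative assumptions for exposition, not the authors' exact estimator; the structural function g(x) = x + 0.5x² is chosen to lie in the span of the basis so the mechanics are easy to verify.

```python
import numpy as np

# Illustrative setup: endogenous regressor x, instrument z independent of
# the unobserved confounder u, structural function g(x) = x + 0.5*x**2.
rng = np.random.default_rng(42)
n = 5000
z = rng.uniform(-1.0, 1.0, n)                          # instrument
u = rng.normal(0.0, 1.0, n)                            # unobserved confounder
x = 0.8 * z + 0.5 * u + 0.3 * rng.normal(0.0, 1.0, n)  # endogenous regressor
y = x + 0.5 * x**2 + u                                 # outcome: g(x) + error

# Series (sieve) IV: expand the regression function in a basis p(x) and use
# a basis q(z) of the instrument as moment functions, then solve
# (Q'P) beta = Q'y.  OLS of y on p(x) would be biased here because u
# enters both x and y.
P = np.column_stack([np.ones(n), x, x**2])
Q = np.column_stack([np.ones(n), z, z**2])
beta = np.linalg.solve(Q.T @ P, Q.T @ y)   # approximately [0, 1, 0.5]
```

The ill-posedness the abstract refers to shows up here as the conditioning of the Q'P matrix: the weaker the association between the instrument basis and the regressor basis, the noisier the recovered coefficients.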

423 citations


Book
01 Jan 2003

400 citations



Journal ArticleDOI
TL;DR: In this paper, the authors show that classical score test statistics, frequently advocated in practice, cannot be used in this context, but that well-chosen one-sided counterparts could be used instead.
Abstract: Whenever inference for variance components is required, the choice between one-sided and two-sided tests is crucial. This choice is usually driven by whether or not negative variance components are permitted. For two-sided tests, classical inferential procedures can be followed, based on likelihood ratios, score statistics, or Wald statistics. For one-sided tests, however, one-sided test statistics need to be developed, and their null distribution derived. While this has received considerable attention in the context of the likelihood ratio test, there appears to be much confusion about the related problem for the score test. The aim of this paper is to illustrate that classical (two-sided) score test statistics, frequently advocated in practice, cannot be used in this context, but that well-chosen one-sided counterparts could be used instead. The relation with likelihood ratio tests will be established, and all results are illustrated in an analysis of continuous longitudinal data using linear mixed models.
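The boundary problem behind one-sided testing has a well-known consequence for critical values. In the simplest case of a single variance component on the boundary of the parameter space, the one-sided test statistic is asymptotically a 50:50 mixture of a point mass at zero and a chi-square with one degree of freedom, rather than a plain chi-square. A minimal sketch of the resulting critical-value correction, assuming SciPy is available:

```python
from scipy.stats import chi2

# Boundary null for one variance component:
# P(T > c) = 0.5 * P(chi2_1 > c), so the alpha-level cutoff is the
# (1 - 2*alpha) quantile of chi2_1 rather than the (1 - alpha) quantile.
alpha = 0.05
c_mix = chi2.ppf(1 - 2 * alpha, df=1)    # mixture cutoff, ~2.71
c_naive = chi2.ppf(1 - alpha, df=1)      # naive cutoff, ~3.84
```

Using the naive cutoff makes the one-sided test needlessly conservative, which is one concrete way the classical two-sided statistics criticized above mislead in practice.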

223 citations


Journal ArticleDOI
TL;DR: The current usage by biologists of the bootstrap is presented as a tool both for making inferences and for evaluating robustness, and a framework for thinking about these problems in terms of mathematical statistics is proposed.
Abstract: This is a survey of the use of the bootstrap in the area of systematic and evolutionary biology. I present the current usage by biologists of the bootstrap as a tool both for making inferences and for evaluating robustness, and propose a framework for thinking about these problems in terms of mathematical statistics.
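A minimal illustration of the bootstrap as an inference tool of the kind surveyed above: a percentile confidence interval for a mean. The data are simulated stand-ins, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.exponential(scale=2.0, size=200)   # hypothetical skewed sample

# Percentile bootstrap: resample with replacement, recompute the statistic,
# and read the confidence interval off the empirical quantiles.
B = 2000
boot_means = np.array([rng.choice(data, size=data.size, replace=True).mean()
                       for _ in range(B)])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
```

The same resampling loop, with the statistic swapped for a tree-building procedure, is essentially how phylogenetic bootstrap support values are computed.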

149 citations


Book
01 Oct 2003
TL;DR: This book introduces descriptive statistics, probability, and statistical inference, then covers the chi-square test, linear and multiple regression and correlation, one- and two-way analysis of variance (fixed-, random-, and mixed-effects models), design of experiments, analysis of covariance, and non-parametric statistics.
Abstract: Descriptive Statistics Probability Random Variables Probability Distributions Statistical Inference -- Interval Estimation Hypothesis Testing -- Fundamental Concepts Testing Hypotheses Concerning Population Means and Population Proportions The Chi-Square Test Linear Regression Correlation Multiple Regression and Correlation One-Way Analysis of Variance Two-Way Analysis of Variance -- Fixed Effects Model Two-Way Analysis of Variance -- Random-Effects Model and Mixed Model Design of Experiments Analysis of Covariance Non-Parametric Statistics.

127 citations


Book
01 Jan 2003
TL;DR: This book introduces probability, probability distributions and densities, mathematical expectation, special distributions, functions of random variables, sampling distributions, point and interval estimation, hypothesis testing, regression and correlation, design of experiments, and nonparametric tests.
Abstract: 1. Introduction. 2. Probability. 3. Probability Distributions and Probability Densities. 4. Mathematical Expectation. 5. Special Probability Distributions. 6. Special Probability Densities. 7. Functions of Random Variables. 8. Sampling Distributions. 9. Decision Theory. 10. Point Estimation. 11. Interval Estimation. 12. Hypothesis Testing. 13. Tests of Hypotheses Involving Means, Variances, and Proportions. 14. Regression and Correlation. 15. Design and Analysis of Experiments. 16. Nonparametric Tests.

51 citations


Journal ArticleDOI
TL;DR: Six statistics for evaluating a structural equation model are extended from the conventional context to the multilevel context; these statistics are asymptotically distribution free, that is, their distributions do not depend on the sampling distribution when sample size at the highest level is large enough.

31 citations



Journal Article
TL;DR: In this article, a new method is presented to quantify and evaluate power quality over a selected day cycle; the indexes reflecting individual aspects of power quality are quantified and unified into a single global index using probability, mathematical statistics, and vector algebra.
Abstract: The power quality is represented by several indexes, each showing one aspect of electric energy. These indexes have to be unified into a single global index to meet the demands of analyzing the quality and price of electricity as a commodity, a need that grows with the rapid development of the electricity market. Accordingly, a new method is presented to quantify and evaluate power quality by selecting the day as the evaluation cycle. The indexes, each showing one aspect of power quality, are quantified and unified using probability, mathematical statistics, and vector algebra; the global unique power quality index is then obtained by combining the indexes with vector algebra. Finally, a class evaluation of the global index is presented, which makes it possible for each quality class to carry a corresponding price. The proposed method lays the foundation for further study of the electricity market.
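The unification step can be sketched as follows. The sub-index names, weights, and class boundaries below are hypothetical stand-ins, since the paper's actual indexes and thresholds are not reproduced here; the sketch only shows the vector-algebra idea of collapsing weighted sub-indexes into one norm and then banding it into classes.

```python
import numpy as np

# Hypothetical daily sub-indexes (per-unit deviations over one day cycle)
# and weights -- illustrative values, not from the paper.
sub_index = {"voltage_deviation": 0.02, "frequency_deviation": 0.01,
             "harmonic_distortion": 0.04, "flicker": 0.03}
weights = {"voltage_deviation": 0.30, "frequency_deviation": 0.30,
           "harmonic_distortion": 0.25, "flicker": 0.15}

# Treat the weighted sub-indexes as components of a vector and take its
# Euclidean norm as the single global index.
v = np.array([weights[k] * sub_index[k] for k in sub_index])
global_index = np.linalg.norm(v)

# Band the global index into quality classes (hypothetical boundaries),
# so each class can carry a corresponding price.
bands = [(0.005, "class A"), (0.02, "class B"), (np.inf, "class C")]
quality_class = next(label for bound, label in bands if global_index <= bound)
```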

Book
01 Jan 2003
TL;DR: This book presents an overview of business statistics, from descriptive methods and probability through regression, analysis of variance, and model building, with appendices covering statistical tables and the properties of the mean and variance of a random variable and the covariance.
Abstract: 1. An Introduction to Business Statistics 2. Descriptive Statistics: Tabular and Graphical Methods 3. Descriptive Statistics: Numerical Methods 4. Probability 5. Discrete Random Variables 6. Continuous Random Variables 7. Sampling and Sampling Distributions 8. Confidence Intervals 9. Hypothesis Testing 10. Statistical Inferences Based on Two Samples 11. Experimental Design and Analysis of Variance 12. Chi-Square Tests 13. Simple Linear Regression Analysis 14. Multiple Regression and Model Building Appendix A: Statistical Tables Answers to Most Odd-Numbered Exercises References Photo Credits Index On the Website: 15. Process Improvement Using Control Charts Appendix B: Properties of the Mean and the Variance of a Random Variable and the Covariance Appendix C: Derivations of the Mean and Variance of x(bar) and p(hat) Appendix D: Confidence Intervals for Parameters of Finite Populations Appendix E: Logistic Regression



BookDOI
01 Jan 2003
TL;DR: This book develops the theory of generalized functional models and assumption-based reasoning, integrating Gaussian hints into classical regression models and general Gaussian linear systems, and applying the resulting framework of local propagation to the Kalman filter.
Abstract: 1. The Theory of Generalized Functional Models.- 2. The Plausibility and Likelihood Functions.- 3. Hints on Continuous Frames and Gaussian Linear Systems.- 4. Assumption-Based Reasoning with Classical Regression Models.- 5. Assumption-Based Reasoning with General Gaussian Linear Systems.- 6. Gaussian Hints as a Valuation System.- 7. Local Propagation of Gaussian Hints.- 8. Application to the Kalman Filter.- References.

Journal ArticleDOI
TL;DR: In this paper, a new technique of invariant embedding of sample statistics in a loss function is proposed, which represents a simple and computationally attractive statistical method based on the constructive use of the invariance principle in mathematical statistics.
Abstract: In the present paper, for constructing the minimum risk estimators of state of stochastic systems, a new technique of invariant embedding of sample statistics in a loss function is proposed. This technique represents a simple and computationally attractive statistical method based on the constructive use of the invariance principle in mathematical statistics. Unlike the Bayesian approach, an invariant embedding technique is independent of the choice of priors. It allows one to eliminate unknown parameters from the problem and to find the best invariant estimator, which has smaller risk than any of the well‐known estimators. There exists a class of control systems where observations are not available at every time due to either physical impossibility and/or the costs involved in taking a measurement. In this paper, the problem of how to select the total number of the observations optimally when a constant cost is incurred for each observation taken is discussed. To illustrate the proposed technique, an example is given and comparison between the maximum likelihood estimator (MLE), minimum variance unbiased estimator (MVUE), minimum mean square error estimator (MMSEE), median unbiased estimator (MUE), and the best invariant estimator (BIE) is discussed.
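The claim that a best invariant estimator has smaller risk than familiar estimators can be illustrated with a classical example (not the paper's own control system): for a normal variance under squared-error loss, the scale-equivariant estimator dividing the centered sum of squares by n+1 beats both the unbiased (n-1) and maximum-likelihood (n) divisors.

```python
import numpy as np

# Monte Carlo risk (MSE) of estimators S/c of a normal variance, where S is
# the centered sum of squares: c = n-1 (unbiased, MVUE-style), c = n (MLE),
# c = n+1 (best invariant under squared-error loss among multiples of S).
rng = np.random.default_rng(7)
n, sigma2, reps = 10, 4.0, 20000
x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
ss = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)

risks = {c: np.mean((ss / c - sigma2) ** 2) for c in (n - 1, n, n + 1)}
# Expect risks[n+1] < risks[n] < risks[n-1]: the invariant choice trades a
# little bias for a larger variance reduction.
```

The ordering matches the closed-form risk sigma^4 * (2(n-1) + (n-1-c)^2) / c^2, which is minimized at c = n+1.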

Posted Content
TL;DR: An asymptotic approximation is proved for integrals of probability densities over sets in finite-dimensional Euclidean space that are far away from the origin (asymptotic sets), and used to investigate tails of quadratic forms of random vectors and suprema of random linear forms, among others.
Abstract: In this monograph, we prove an asymptotic approximation for integrals of probability densities over sets in finite-dimensional Euclidean space which are far away from the origin (asymptotic sets). We use this approximation to investigate tails of quadratic forms of random vectors and suprema of random linear forms, among others. Applications include the study of finite-size random matrices, finite-sample statistics of autoregressive processes, and the suprema of some stochastic processes.

Journal ArticleDOI
TL;DR: Almost all the statistical inferences typically seen in the medical literature are based on probability models that connect summary statistics calculated using the observed data to estimates.
Abstract: Statistical inference allows one to draw conclusions about the characteristics of a population on the basis of data collected from a sample of subjects from that population. Almost all the statistical inferences typically seen in the medical literature are based on probability models that connect summary statistics calculated using the observed data to estimates

Journal ArticleDOI
TL;DR: In this paper, the authors extend some known results about stochastic comparisons of univariate order statistics to the case of random vectors of order statistics and give conditions on the parent distribution to classify the random vector in the MIFR or MPF2 classes.


01 Jan 2003
TL;DR: A general overview of the theory of copulas is presented and some of the main results of this theory, various examples, and some open problems will be described.
Abstract: The notion of copula was introduced by A. Sklar in 1959, when answering a question raised by M. Fréchet about the relationship between a multidimensional probability function and its lower dimensional margins. At the beginning, copulas were mainly used in the development of the theory of probabilistic metric spaces. Later, they were of interest to define nonparametric measures of dependence between random variables, and since then, they began to play an important role in probability and mathematical statistics. In this paper, a general overview of the theory of copulas will be presented. Some of the main results of this theory, various examples, and some open problems will be described.
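A small sketch of the copula idea described above, assuming NumPy and SciPy: sampling from a Gaussian copula and then attaching arbitrary margins while preserving the dependence structure, which is Sklar's decomposition in action.

```python
import numpy as np
from scipy.stats import norm

# Gaussian copula: push correlated standard normals through the normal CDF;
# the resulting margins are Uniform(0,1) and carry the dependence.
rng = np.random.default_rng(3)
rho, n = 0.7, 50000
cov = [[1.0, rho], [rho, 1.0]]
zz = rng.multivariate_normal([0.0, 0.0], cov, size=n)
uu = norm.cdf(zz)                            # uniform margins, coupled

# Attach arbitrary margins to the same copula (illustrative choices):
x = -np.log(1.0 - uu[:, 0])                  # Exponential(1) margin
y = norm.ppf(uu[:, 1], loc=5.0, scale=2.0)   # Normal(5, 2) margin
```

The rank-based dependence of (x, y) is the same as that of (uu[:, 0], uu[:, 1]), regardless of the margins chosen, which is exactly why copulas serve as nonparametric dependence measures.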

Book ChapterDOI
01 Jan 2003
TL;DR: How case studies are incorporated in the course are described, the challenges that are faced in adopting this approach are outlined, and the efforts to overcome these challenges are discussed.
Abstract: We have developed a model for teaching mathematical statistics through detailed case studies. We use these case studies to bridge the gap between statistical theory and practice, and to help students develop an understanding of the basic ideas in mathematical statistics. We also use them to motivate students to explore the concepts of statistics. Although we strongly advocate teaching mathematical statistics through case studies, there are many challenges that arise from this approach. In this paper, we describe how we find case studies and incorporate them into the course. We outline the challenges that we face in adopting this approach, and discuss our efforts to overcome these challenges.

Book
09 Jul 2003
TL;DR: In this article, the authors describe a confidence interval for correlation coefficients and test correlation coefficients with a regression line from a simple linear regression with two groups of variables, and show that the confidence interval is a function of the correlation coefficient.
Abstract: I. PROBABILITY: PROPERTIES OF SAMPLES. 1. Descriptive Statistics. Summary Measures. Graphic Representation. 2. Probability. Eight Rules of Probability. Composite Events. Bayes' Rule. Four Probability Problems. 3. Random Variables. Random Variables. Joint Probability Distribution. 4. Probability Distributions. Binominal Probability Distribution. Normal Profitability Distribution. Central Limit Theorem. II. STATISTICS: PROPERTIES OF SAMPLED POPULATIONS. 5. Statistical Inference I. Description of a Confidence Interval. Statistical Hypothesis Testing. 6. Statistical Inference II. Student's t-Distribution. Computation of Sample Size. 7. Chi-Square Analysis. Independence of Two Categorical Variables "r by c Table." 8. Linear Regression. Least Squares Estimation. Assessing an Estimated Regression Line. Assessing Regression Lines from Two Groups. 9. Correlation. Testing a Correlation Coefficient. Confidence Interval for a Correlation Coefficient. References. Appendix A: Table A.1: Normal Distributions. Table A.2: t-distribution. Table A.3: Chi-square Distribution. Table A.4: Values for Testing Correlations (conversion of t-values). Table A.5: Values for Testing Rank Correlations. Chart: Confidence Intervals for Correlation Coefficients. Appendix B: B.1: Summation Notation. B.2: Derivation of the Normal Equations for Simple Linear Regression. B.3: Poisson Probability Distribution. B.4: Problem Sets: 1 to 15 B.5: Partial Solutions to Most Problems (Sets 1 to 15).

Journal ArticleDOI
TL;DR: An algorithm that may be used to compute the probability of the occurrence of a given number of consecutive successes (or failures) in binary trials that are dependent is given.
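One standard way to compute such run probabilities is dynamic programming over the current run length, with success probabilities that depend on the previous outcome. The two-state Markov parameterization below (p01, p11) is an illustrative form of dependence, not necessarily the paper's exact model.

```python
import numpy as np

def prob_run(n, k, p11, p01, p1):
    """P(at least one run of k consecutive successes in n binary trials with
    two-state Markov dependence).  p1 = P(success at trial 1),
    p11 = P(success | previous success), p01 = P(success | previous failure).
    (Illustrative parameterization.)"""
    # probs[r] = P(current success run has length r and no k-run yet)
    probs = np.zeros(k)
    done = 0.0                      # probability a k-run has already occurred
    if k == 1:
        done, probs[0] = p1, 1.0 - p1
    else:
        probs[1], probs[0] = p1, 1.0 - p1
    for _ in range(1, n):
        new = np.zeros(k)
        # a failure resets the run to length 0
        new[0] = probs[0] * (1.0 - p01) + probs[1:].sum() * (1.0 - p11)
        # a success extends the run by one
        for r in range(1, k):
            new[r] = probs[r - 1] * (p01 if r == 1 else p11)
        # extending a (k-1)-run by one success completes a k-run
        done += probs[k - 1] * (p01 if k == 1 else p11)
        probs = new
    return done
```

For independent trials (p11 = p01 = p1 = 0.5), prob_run(6, 2, 0.5, 0.5, 0.5) recovers the classical 43/64 for at least one pair of consecutive heads in six fair coin tosses.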

Journal ArticleDOI
TL;DR: This three-part series will give readers an understanding of the concepts of inferential statistics as well as the specific tools for calculating confidence intervals for samples of data.
Abstract: Healthcare quality improvement professionals need to understand and use inferential statistics to interpret sample data from their organizations. In quality improvement and healthcare research studies all the data from a population often are not available, so investigators take samples and make inferences about the population by using inferential statistics. This three-part series will give readers an understanding of the concepts of inferential statistics as well as the specific tools for calculating confidence intervals for samples of data. This article, Part 2, describes probability, populations, and samples. The uses of descriptive and inferential statistics are outlined. The article also discusses the properties and probability of normal distributions, including the standard normal distribution.
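As a small illustration of the confidence-interval tools the series describes, here is a normal-approximation 95% interval for a population mean; the sample is simulated stand-in data, not from the article.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical quality-improvement sample (simulated measurements).
rng = np.random.default_rng(5)
sample = rng.normal(loc=120.0, scale=15.0, size=100)

# Normal-approximation 95% confidence interval for the population mean:
# point estimate +/- z * standard error.
mean = sample.mean()
se = sample.std(ddof=1) / np.sqrt(sample.size)
z = norm.ppf(0.975)              # ~1.96 for a 95% interval
ci = (mean - z * se, mean + z * se)
```

With n = 100 the normal approximation is reasonable; for small samples the article's later parts would substitute the t-distribution quantile for z.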

Journal ArticleDOI
TL;DR: This paper proposes detecting seasonal unit roots within the context of a structural time series model and shows, using Monte Carlo simulations, that the method works well.
Abstract: In this paper, we propose to detect seasonal unit roots within the context of a structural time series model. Such a model is often found to be useful in practice. Using Monte Carlo simulations, we show that our method works well. We illustrate our approach for several quarterly macroeconomic time series variables.

Book ChapterDOI
20 Mar 2003
TL;DR: This volume surveys current topics in mathematical statistics, including asymptotic approximations, change-point analysis, control charts, copulas, experimental design, functional data analysis, Markovian modeling, models for high-dimensional data, Lévy processes, semiparametric statistics, and survival analysis.
Abstract: * Asymptotic approximations * Change-point analysis * Control charts * Copulas * Experimental design * Functional data analysis * Markovian modeling * Missing/incomplete observations * Mixtures of distributions and regressions * Models for high dimensional data * Lévy processes * Piecewise deterministic Markov processes * Semiparametric statistics * Spatial and temporal sampling * Survival analysis

Posted Content
03 Mar 2003
TL;DR: In this article, a multiparametric statistical model providing stable reconstruction of parameters from observations is considered; the probability density is represented as the squared absolute value of a certain function, referred to as a psi function in analogy with quantum mechanics, and the psi function is represented by an expansion in terms of an orthonormal set of functions.
Abstract: A multiparametric statistical model providing stable reconstruction of parameters from observations is considered. The only general method of this kind is the root model, based on the representation of the probability density as the squared absolute value of a certain function, which is referred to as a psi function in analogy with quantum mechanics. The psi function is represented by an expansion in terms of an orthonormal set of functions. It is shown that the introduction of the psi function allows one to represent the Fisher information matrix, as well as the statistical properties of the estimator of the state vector (state estimator), in simple analytical forms. A new statistical characteristic, a confidence cone, is introduced instead of a standard confidence interval. The chi-square test is considered to test the hypotheses that the estimated vector converges to the state vector of a general population and that both samples are homogeneous. The expansion coefficients are estimated by the maximum likelihood method. An iteration algorithm for solving the likelihood equation is presented, and the stability and rate of convergence of the solution are studied. A special iteration parameter is introduced: its optimal value is chosen on the basis of the maximin strategy. Numerical simulation is performed using the set of the Chebyshev-Hermite functions as a basis.
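The core construction, writing a probability density as the squared psi function expanded in an orthonormal basis, can be sketched directly. The coefficients below are illustrative, and the basis used is the (physicists') Hermite functions for convenience of normalization, rather than any specific set from the paper.

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

def hermite_fn(j, x):
    """Orthonormal (physicists') Hermite function phi_j on the real line."""
    coef = np.zeros(j + 1)
    coef[j] = 1.0
    return (hermval(x, coef) * np.exp(-x**2 / 2)
            / sqrt(2.0**j * factorial(j) * sqrt(pi)))

# psi = sum_j c_j phi_j with unit-norm coefficients; then p = |psi|^2 is a
# nonnegative density integrating to 1 by orthonormality of the basis --
# which is what makes the root-model parameterization automatically valid.
c = np.array([0.8, 0.0, 0.6])            # illustrative, sum of c_j^2 = 1
x = np.linspace(-10.0, 10.0, 4001)
psi = sum(cj * hermite_fn(j, x) for j, cj in enumerate(c))
density = psi**2

total = np.sum(density) * (x[1] - x[0])  # numerical normalization check, ~1.0
```

Nonnegativity and normalization hold for any unit-norm coefficient vector, so maximum-likelihood estimation of the c_j is an unconstrained problem on the unit sphere, which is the stability advantage the abstract emphasizes.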


01 Jan 2003
TL;DR: In this paper, the asymptotic properties of the quasi-maximum likelihood estimator (quasi-MLE) for GARCH(1,2) model under stationary innovations were investigated.
Abstract: In this paper, we investigate the asymptotic properties of the quasi-maximum likelihood estimator (quasi-MLE) for GARCH(1,2) model under stationary innovations. Consistency of the global quasi-MLE and asymptotic normality of the local quasi-MLE are obtained, which extend the previous results for GARCH(1,1) under weaker conditions.
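A sketch of the Gaussian quasi-log-likelihood underlying the quasi-MLE for a GARCH(1,2) recursion. This is one common parameterization with a crude variance initialization; conventions for the (p, q) ordering and the startup of the recursion vary across texts.

```python
import numpy as np

def garch12_qll(params, eps):
    """Gaussian quasi-log-likelihood for a GARCH(1,2) recursion:
    sigma2[t] = omega + a1*eps[t-1]**2 + a2*eps[t-2]**2 + b1*sigma2[t-1].
    (One common parameterization; initialization conventions vary.)"""
    omega, a1, a2, b1 = params
    n = eps.size
    sigma2 = np.empty(n)
    sigma2[:2] = eps.var()               # crude startup at the sample variance
    for t in range(2, n):
        sigma2[t] = omega + a1 * eps[t-1]**2 + a2 * eps[t-2]**2 + b1 * sigma2[t-1]
    return -0.5 * np.sum(np.log(2 * np.pi * sigma2[2:]) + eps[2:]**2 / sigma2[2:])

# Simulate a stationary GARCH(1,2) path (a1 + a2 + b1 < 1) and compare the
# quasi-likelihood at the true parameters with a constant-variance benchmark.
rng = np.random.default_rng(11)
n = 20000
omega, a1, a2, b1 = 0.1, 0.15, 0.1, 0.6
eps = np.zeros(n)
sigma2 = np.full(n, omega / (1 - a1 - a2 - b1))   # unconditional variance
for t in range(2, n):
    sigma2[t] = omega + a1 * eps[t-1]**2 + a2 * eps[t-2]**2 + b1 * sigma2[t-1]
    eps[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

ll_true = garch12_qll((omega, a1, a2, b1), eps)
ll_flat = garch12_qll((eps.var(), 0.0, 0.0, 0.0), eps)   # i.i.d. Gaussian fit
```

Maximizing garch12_qll over the parameter space yields the quasi-MLE whose consistency and asymptotic normality the paper studies; the objective is a quasi-likelihood because the innovations need not actually be Gaussian.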