
Showing papers on "Mathematical statistics published in 1988"


Book
01 Jan 1988
TL;DR: A textbook treatment of probability and statistics, running from probability and random variables through estimation of parameters, fitting of probability distributions, and hypothesis testing, to the analysis of variance, the analysis of categorical data, and Bayesian inference.
Abstract: 1. Probability. 2. Random Variables. 3. Joint Distributions. 4. Expected Values. 5. Limit Theorems. 6. Distributions Derived from the Normal Distribution. 7. Survey Sampling. 8. Estimation of Parameters and Fitting of Probability Distributions. 9. Testing Hypotheses and Assessing Goodness of Fit. 10. Summarizing Data. 11. Comparing Two Samples. 12. The Analysis of Variance. 13. The Analysis of Categorical Data. 14. Linear Least Squares. 15. Decision Theory and Bayesian Inference.

3,521 citations


Book
01 Jan 1988

1,522 citations


Book
03 Mar 1988
TL;DR: A textbook on applied multivariate statistics, running from univariate plots and descriptive statistics through discriminant analysis, principal component analysis, and the comparison of covariance structures.
Abstract: The Data. Univariate Plots and Descriptive Statistics. Scatterplot, Correlation and Covariance. Face Plots. Multiple Linear Regression. Linear Combinations. Linear Discriminant Analysis for Two Groups. Identification Analysis. Specification Analysis. Principal Component Analysis. Comparing the Covariance Structures of Two Groups. Exercises. Mathematical Appendix.

465 citations


Book
01 Jan 1988
TL;DR: A textbook on probability and mathematical statistics covering random variables in the discrete, continuous, and mixed cases, point and interval estimation, tests of hypotheses, decision theory, and nonparametric inference.
Abstract: Naive Set Theory. Probability. Random Variables: Discrete Case. Random Variables: Continuous and Mixed Cases. Moments. Sums of Random Variables, Probability Inequalities, and Limit Laws. Point Estimation. Data Reduction and Best Estimation (Sufficiency, Completeness, and UMVUE's). Tests of Hypotheses. Interval Estimation. Ranking and Selection Procedures. Decision Theory. Nonparametric Statistical Inference. Regression and Linear Statistical Inference. Analysis of Variance. Robust Statistical Procedures. Statistical Tables. Index.

312 citations


Book
01 Jan 1988
TL;DR: The authors develop the theory of inference functions, treating ancillarity, sufficiency and projection for one-parameter models, the elimination of nuisance parameters, inference under restrictions, and applications to stochastic processes and spatial statistics.
Abstract: 1: Introduction.- 2: The Space of Inference Functions: Ancillarity, Sufficiency and Projection.- 2.1 Basic definitions.- 2.2 Projections and product sets.- 2.3 Ancillarity, sufficiency and projection for the one-parameter model.- 2.4 Local concepts of ancillarity and sufficiency.- 2.5 Second order ancillarity and sufficiency.- 2.6 Parametrization invariance of local constructions.- 2.7 Background development.- 3: Selecting an Inference Function for 1-Parameter Models.- 3.1 Linearization of inference functions.- 3.2 Adjustments to reduce curvature.- 3.3 Reducing the number of roots.- 3.4 Median adjustment.- 3.5 Approximate normal inference functions.- 3.6 Background development.- 4: Nuisance Parameters.- 4.1 Eliminating nuisance parameters by invariance.- 4.2 Eliminating nuisance parameters by conditioning.- 4.3 Inference for models involving obstructing nuisance parameters.- 4.4 Background details.- 5: Inference under Restrictions.- 5.1 Linear models.- 5.2 Censoring, grouping and truncation.- 5.3 Errors in observations.- 5.4 Background details.- 6: Inference for Stochastic Processes.- 6.1 Linear inference functions.- 6.2 Joint estimation in multiparameter models.- 6.3 Martingale inference functions.- 6.4 Applications in spatial statistics.- 6.5 Background details.- References.

90 citations


Journal ArticleDOI
TL;DR: A nonclassical form of empirical df Hn with U-statistic structure is studied, and the classical exponential probability inequalities and Glivenko-Cantelli convergence properties known for the usual empirical df are extended to Hn.
Abstract: We study a nonclassical form of empirical df Hn which is of U-statistic structure and extend to Hn the classical exponential probability inequalities and Glivenko-Cantelli convergence properties known for the usual empirical df. An important class of statistics is given by T(Hn), where T(·) is a generalized form of L-functional. For such statistics we prove almost sure convergence using an approach which separates the functional-analytic and stochastic components of the problem and handles the latter component by application of Glivenko-Cantelli type properties. Classical results for U-statistics and L-statistics are obtained as special cases without addition of unnecessary restrictions. Many important new types of statistics of current interest are covered as well by our result.

65 citations
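
As a rough numerical illustration of the construction studied above (the kernel h(x, y) = x + y and all names below are ours, not the paper's), the U-statistic empirical df puts mass 1/(n choose 2) on each pair evaluation h(Xi, Xj), i < j:

    # Sketch: U-statistic empirical df Hn for a kernel h, next to the usual empirical df.
    import itertools
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=200)

    def u_empirical_df(sample, h, t):
        # Hn(t): fraction of pairs (i < j) with h(Xi, Xj) <= t
        vals = [h(a, b) for a, b in itertools.combinations(sample, 2)]
        return np.mean(np.asarray(vals) <= t)

    def empirical_df(sample, t):
        # classical Fn(t)
        return np.mean(sample <= t)

    print(u_empirical_df(x, lambda a, b: a + b, 0.0))  # ~ P(X1 + X2 <= 0) = 1/2 here
    print(empirical_df(x, 0.0))                        # ~ P(X1 <= 0) = 1/2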



01 Jan 1988
TL;DR: Probabilistic structural analysis methods are developed for integration with a probabilistic finite element code, with the goal of establishing distribution functions for the responses of stochastic structures under uncertain loadings.
Abstract: This paper addresses current work to develop probabilistic structural analysis methods for integration with a specially developed probabilistic finite element code. The goal is to establish distribution functions for the structural responses of stochastic structures under uncertain loadings. Several probabilistic analysis methods are proposed, covering efficient structural probabilistic analysis methods, correlated random variables, and the response of linear systems under stationary random loading.

46 citations
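
The stated goal, a distribution function for a structural response under uncertain loading, can be sketched with plain Monte Carlo on a toy model (this is not the paper's probabilistic finite element method; the one-degree-of-freedom model and every parameter below are assumptions):

    # Monte Carlo sketch: distribution of the static response u = F/k of a
    # one-degree-of-freedom linear structure with random stiffness k and load F.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    k = rng.lognormal(mean=np.log(2.0e6), sigma=0.08, size=n)  # stiffness [N/m]
    f = rng.normal(loc=1.0e4, scale=2.0e3, size=n)             # load [N]
    u = f / k                                                  # response [m]

    # empirical distribution function of the response at a few thresholds
    for t in (0.004, 0.005, 0.006):
        print(f"P(u <= {t}) ~ {np.mean(u <= t):.3f}")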



Proceedings ArticleDOI
24 Apr 1988
TL;DR: Statistical decision theory (SDT) is applied to obtain a robust test of the hypothesis that data from different sensors are consistent, and a robust procedure for combining the data which pass this preliminary consistency test.
Abstract: A sensor fusion problem for location data using statistical decision theory (SDT) is studied. The contribution of this study is the application of SDT to obtain a robust test of the hypothesis that data from different sensors are consistent and a robust procedure for combining the data which pass this preliminary consistency test. Here, robustness refers to the statistical effectiveness of the decision rules when the probability distributions of the observation noise and the a priori position information associated with the individual sensors are uncertain. Location data refers to observations of the form Z = θ + V, where V represents additive sensor noise and θ denotes the sensed parameter of interest to the observer. The paper focuses on ε-contamination models, which allow one to account for heavy-tailed deviations from nominal sampling distributions.

29 citations
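
A hedged sketch of the ε-contamination setting for location data Z = θ + V: nominal Gaussian noise is replaced, with probability ε, by a heavy-tailed component, and a robust combiner (the median here, standing in for the paper's decision rules; ε and the contaminating law are assumptions) is compared with the sample mean:

    # Sketch: epsilon-contaminated sensor noise and robust combination of
    # location data Z_i = theta + V_i across sensors.
    import numpy as np

    rng = np.random.default_rng(2)
    theta, eps, n_sensors, n_trials = 5.0, 0.1, 9, 10_000

    def contaminated_noise(size):
        nominal = rng.normal(0.0, 1.0, size)   # nominal N(0, 1) noise
        heavy = rng.standard_cauchy(size)      # heavy-tailed contamination
        return np.where(rng.random(size) < eps, heavy, nominal)

    z = theta + contaminated_noise((n_trials, n_sensors))
    print("mean   MSE:", np.mean((z.mean(axis=1) - theta) ** 2))     # degraded by outliers
    print("median MSE:", np.mean((np.median(z, axis=1) - theta) ** 2))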


Book ChapterDOI
01 Jan 1988
TL;DR: An analogue of the method of characteristic functions for generalized convolutions is discussed, the set of weak characteristic functions is described in terms of stable distributions, and some applications to analysis are mentioned.
Abstract: An analogue of the method of characteristic functions for generalized convolutions is discussed. Moreover, the set of weak characteristic functions is described in terms of stable distributions. Finally, some applications to analysis are mentioned.
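
For the classical case that this chapter generalizes, the method rests on the identity φ_{X+Y}(t) = φ_X(t)·φ_Y(t) for independent X and Y under ordinary convolution; a quick empirical check (illustrative only, not the generalized setting):

    # Sketch: the classical characteristic-function identity under ordinary
    # convolution, checked with empirical characteristic functions.
    import numpy as np

    rng = np.random.default_rng(3)
    x = rng.exponential(size=200_000)
    y = rng.normal(size=200_000)

    def ecf(sample, t):
        # empirical characteristic function at t
        return np.mean(np.exp(1j * t * sample))

    for t in (0.5, 1.0, 2.0):
        print(t, abs(ecf(x + y, t) - ecf(x, t) * ecf(y, t)))  # small for independent X, Y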

DissertationDOI
01 Jan 1988

Journal ArticleDOI
TL;DR: This paper suggests a simplification of Windham's recent approach to characterizing optimization-based clustering methods, based on an analogy between certain quantities in Windham's formulation and corresponding quantities in mathematical statistics, particularly sufficient statistics and the exponential family of densities.
Abstract: This paper suggests a simplification of a recent approach suggested by Windham to characterizing optimization-based clustering methods. The simplification is based on noting an analogy between certain quantities in Windham's formulation and corresponding quantities in mathematical statistics, particularly sufficient statistics and the exponential family of densities.
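
One concrete instance of that analogy: for spherical Gaussian densities the exponential family's sufficient statistic is T(x) = x, and the optimization-based clustering criterion reduces to k-means, whose update step estimates each cluster's parameter from the average of that sufficient statistic. A minimal sketch under that assumption (not Windham's formulation itself):

    # Sketch: k-means as optimization-based clustering for spherical Gaussians;
    # each update sets a cluster parameter to the mean of its points, i.e. the
    # average of the exponential family's sufficient statistic T(x) = x.
    import numpy as np

    rng = np.random.default_rng(4)
    data = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])
    centers = data[rng.choice(len(data), 2, replace=False)]

    for _ in range(20):
        d = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)  # assignment step
        centers = np.array([data[labels == k].mean(0) if np.any(labels == k)
                            else centers[k] for k in range(2)])  # sufficient-statistic step
    print(centers)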

Book
01 Jan 1988
TL;DR: An introduction to statistics with MINITAB built around the Go-Fast Running Shoe Company case, covering descriptive statistics, probability, estimation, hypothesis testing, regression, and forecasting.
Abstract: Introduction. The Go-Fast Running Shoe Company case. Elements of MINITAB. Basic manipulations with MINITAB. Descriptive statistics. Probability and sampling. Probability distributions. Statistical estimation. Introduction to hypothesis testing. Statistical inference - one and two sample cases. Statistical inference - multiple sample cases. Correlations and regression analysis. Forecasting. Perspective.

Journal ArticleDOI
TL;DR: In this article, a discussion of a real problem and the resulting Bayesian analysis that provides a nice solution to the problem is presented, with emphasis on the selection of the prior distribution and the parameters associated with the prior.
Abstract: In the usual beginning mathematical statistics course, such as one with DeGroot's Probability and Statistics (1986) as a textbook, the basic mathematical concepts associated with prior and posterior distributions will be discussed. But these discussions contain very little information about the actual selection of a prior distribution for a specific problem. This article contains a discussion of a real problem and the resulting Bayesian analysis that provides a nice solution to the problem. Emphasis is placed on the selection of the prior distribution and the parameters associated with the prior so that students can see how one can actually apply these methods to a real problem.
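
In the spirit of the article's emphasis on actually selecting a prior and its parameters, here is a minimal conjugate example (the beta-binomial setting and all numbers are ours, not the article's problem): beliefs about a proportion are encoded as a Beta(a, b) prior, and the posterior after s successes in n trials is Beta(a + s, b + n - s).

    # Sketch: choosing and updating a conjugate prior for a proportion p.
    from scipy import stats

    a, b = 3.0, 7.0   # prior "pseudo-counts"; prior mean a/(a+b) = 0.3
    s, n = 12, 20     # observed successes / trials (illustrative data)

    posterior = stats.beta(a + s, b + n - s)
    print("posterior mean:", posterior.mean())
    print("95% credible interval:", posterior.interval(0.95))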

Journal ArticleDOI
TL;DR: Several interrelated methodological issues of this symposium are addressed, with a focus on formulating and quantifying kinetic models whose purpose is to answer particular quantitative questions about living systems.

Journal ArticleDOI
TL;DR: A review of Stochastic Geometry and its Applications by D. Stoyan, W. S. Kendall and J. Mecke (Wiley, 1987).
Abstract: 23. Stochastic Geometry and its Applications. By D. Stoyan, W. S. Kendall and J. Mecke. ISBN 0 471 90519 4. Wiley, 1987. 345p. £23.50. (Wiley Series in Probability and Mathematical Statistics. A co‐production with Akademie‐Verlag, GDR.)

Book ChapterDOI
01 Jan 1988
TL;DR: It is shown that the set of probability distributions (statistical states) of a nonequilibrium thermodynamic system can be equipped with the structure of a Finsler space, with the Lagrange function defined via the relative entropy between states.
Abstract: It is shown that a set of probability distributions (statistical states, states) of a nonequilibrium thermodynamic system can be equipped with the structure of a Finsler space. The Lagrange function L of the space is defined by means of the relative entropy between states. The time variable is specified as an additional position variable by requiring L to be homogeneous in the directional variables. Such a Finsler model is a generalization of the Riemannian model of Fisher, well-known in mathematical statistics and recently applied in equilibrium thermodynamics by the author and his collaborators (A. Kossakowski, R. Mrugala, H. Janyszek and others).
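
For reference, the Riemannian model of Fisher mentioned above arises from a second-order expansion of the relative entropy between nearby states; in standard notation (the classical case only, not the Finsler generalization):

    g_{ij}(\theta) = \mathbb{E}_\theta\left[\partial_i \log p_\theta \,\partial_j \log p_\theta\right],
    \qquad
    D\left(p_\theta \,\|\, p_{\theta + d\theta}\right) = \tfrac{1}{2}\, g_{ij}(\theta)\, d\theta^i\, d\theta^j + O(\|d\theta\|^3).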


Book ChapterDOI
01 Jan 1988
TL;DR: This first chapter reviews basic statistics concepts pertaining to binary hypothesis-testing problems and some further results of statistical hypothesis testing with which the reader may not be as familiar, but which will be of use to us in later chapters.
Abstract: The signal processing problem which is the object of our study in this book is that of detecting the presence of a signal in noisy observations. Signal detection is a function that has to be implemented in a variety of applications, the more obvious ones being in radar, sonar, and communications. By viewing signal detection problems as problems of binary hypothesis testing in statistical inference, we get a convenient mathematical framework within which we can treat in a unified way the analysis and synthesis of signal detectors for different specific situations. The theory and results in mathematical statistics pertaining to binary hypothesis-testing problems are therefore of central importance to us in this book. In this first chapter we review some of these basic statistics concepts. In addition, we will find in this chapter some further results of statistical hypothesis testing with which the reader may not be as familiar, but which will be of use to us in later chapters.
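
A minimal sketch of the detection problem framed above, for a known signal in white Gaussian noise (the signal shape, noise level, and threshold are illustrative): the likelihood-ratio test reduces to comparing a correlator statistic with a threshold.

    # Sketch: detecting a known signal s in white Gaussian noise as binary
    # hypothesis testing. H0: x = noise; H1: x = s + noise. The likelihood
    # ratio test is equivalent to thresholding the correlator T(x) = <x, s>.
    import numpy as np

    rng = np.random.default_rng(5)
    m, sigma = 64, 1.0
    s = np.sin(2 * np.pi * np.arange(m) / 8)   # known signal (illustrative)
    tau = 0.5 * (s @ s)                        # midpoint threshold (assumption)

    x0 = rng.normal(0, sigma, (10_000, m))     # H0 samples
    x1 = s + rng.normal(0, sigma, (10_000, m)) # H1 samples
    print("false-alarm rate:", np.mean(x0 @ s > tau))
    print("detection rate:  ", np.mean(x1 @ s > tau))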

Journal ArticleDOI
TL;DR: A review of Paradoxes in Probability Theory and in Mathematical Statistics by Gabor J. Szekely (Reidel, 1986).
Abstract: 25. Paradoxes in Probability Theory and in Mathematical Statistics. By Gabor J. Szekely. ISBN 90–277–1899–7. Reidel, 1986. xii, 250p. 135 Dfl, $59, £41.50.

Journal ArticleDOI
TL;DR: In this paper, the authors focus on the debate in the pages of the Soviet statistical journal Vestnik Statistiki as to the existence and relevance of applied statistics as a separate scientific discipline and analyze the contents of four letters to the editor and relevant editorial comments that appeared in this journal between October 1985 and July 1987.
Abstract: The article focuses on the debate in the pages of the Soviet statistical journal Vestnik Statistiki as to the existence and relevance of applied statistics as a separate scientific discipline. The contents of four letters to the editor and relevant editorial comments that appeared in this journal between October 1985 and July 1987 are analyzed.

Journal ArticleDOI
TL;DR: The homogeneity principle bridges the gap between probability theory and applied statistics, makes statistics as precise as the theory of probability, and opens new horizons for estimating parameters when the central limit theorem does not apply.
Abstract: The theory of probability employs a deductive method of thinking which traces effect from cause. Statistics uses an inductive method of thinking and tries to trace cause from effect. This noble goal can be successfully achieved if and only if a researcher deals with a homogeneous data set. A homogeneous data set is generated logically and then tested statistically. The homogeneity principle bridges the gap between probability theory and applied statistics and makes statistics as precise as the theory of probability. In this role statistics opens new horizons for estimating parameters when the central limit theorem does not apply.
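
The "tested statistically" step can be illustrated with a standard two-sample check before pooling two batches into one data set (the Kolmogorov-Smirnov test is one common choice here, not necessarily the author's):

    # Sketch: testing whether two batches may be pooled into one homogeneous
    # data set, using a two-sample Kolmogorov-Smirnov test.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    batch_a = rng.normal(0.0, 1.0, 300)
    batch_b = rng.normal(0.5, 1.0, 300)  # shifted: not homogeneous with batch_a

    stat, p = stats.ks_2samp(batch_a, batch_b)
    print(f"KS statistic = {stat:.3f}, p-value = {p:.4f}")
    if p < 0.05:
        print("reject homogeneity: do not pool the batches")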

Journal ArticleDOI
TL;DR: The structure of cones in the spaces Lp (1 ≤ p ≤ ∞) and their application to minimax problems of statistical assumptions are considered.
Abstract: The structure of cones in the spaces Lp (1 ≤ p ≤ ∞) and their application to minimax problems of statistical assumptions are considered.

Journal ArticleDOI
TL;DR: A priori planned contrasts and Bayesian inference with specific priors are examples of theory-based statistics, with the former having many of the virtues of the latter; a new simple computational method devised by Pruzek is illustrated for determining the weights of an a priori contrast using “guessed” means.
Abstract: A distinction is made between statistics based on scientific theory and theory-free statistics. Both are valuable and should be included in most research reports. Conventional simple hypothesis testing is often ambiguous; it could be in either class depending on the experimental hypothesis. A priori planned contrasts and Bayesian inference with specific priors are examples of theory-based statistics, with the former having many of the virtues of the latter. A new simple computational method devised by Pruzek is illustrated for determining the weights of an a priori contrast using “guessed” means. Such statistics are desirable to maximize power in tests of the experimenter’s predictions. Theory-free statistics are desirable to permit others to test alternative interpretations of the data.
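
A hedged sketch of the guessed-means idea: centering the guessed group means so they sum to zero yields contrast weights, which are then applied to the observed group means in a one-degree-of-freedom test (the data and guesses below are ours; this is our reading of the method, not Pruzek's own code):

    # Sketch: contrast weights from "guessed" means, then a contrast t-test.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    guessed = np.array([10.0, 12.0, 16.0])  # theory-based guessed means
    w = guessed - guessed.mean()            # contrast weights, sum to zero

    groups = [rng.normal(mu, 2.0, 30) for mu in (10.2, 12.1, 15.6)]
    means = np.array([g.mean() for g in groups])
    n = np.array([len(g) for g in groups])
    mse = np.mean([g.var(ddof=1) for g in groups])  # pooled error (equal n)

    t = (w @ means) / np.sqrt(mse * np.sum(w**2 / n))
    df = n.sum() - len(groups)
    print("t =", t, "two-sided p =", 2 * stats.t.sf(abs(t), df))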

Journal ArticleDOI
TL;DR: The DM distribution is a one-parameter family of logistic/log-logistic distributions that was found empirically in comparative epidemiological research in the then Laboratory of Hygiene of the University of Amsterdam (project DE MORTIBUS Agez 3).
Abstract: The DM distribution is a one parameter family of logistic/log-logistic distributions that was found empirically in comparative epidemiological research in the then Laboratory of Hygiene of the University of Amsterdam (project DE MORTIBUS Agez 3). It is of interest for mathematical statistics as a new family of distributions and for biology as a contribution to the understanding of variability and as a guide in goal-directed experiments.

Book ChapterDOI
01 Jan 1988
TL;DR: This chapter describes some of the methods implemented in SPAD.N, which implements exploratory analyses for large arrays of data using principal axes methods and clustering techniques.
Abstract: Multivariate descriptive statistical techniques are divided into two main areas: principal axes methods and clustering techniques. The software SPAD.N implements these exploratory analyses for large arrays of data. Descriptive principal component analysis performs the analysis of a set of continuous variables through correlation or covariance matrices. Two-way correspondence analysis is available for the description of contingency tables, and multiple correspondence analysis for a set of categorical variables. These principal axes analyses describe the dispersion of a set of n points in p-dimensional space as points in low-dimensional subspaces where dependences can be appreciated in terms of proximities. Clustering techniques are exploratory methods used with principal axes analyses. This chapter describes some of the methods implemented in SPAD.N. In exploratory analysis, the statistician's behavior is different from that in mathematical statistics, and an appropriate set of tools has to be designed. The problems were: to work with groups of variables, which must be selected and handled; to perform complex transformations on the variables; to link heavy processing of data giving rise to voluminous intermediary results; and to describe the results themselves, which are often complex and require specific interpretation devices.
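
As a small illustration of the principal axes step described above (generic principal component analysis through the correlation matrix, not SPAD.N itself):

    # Sketch: descriptive PCA via the correlation matrix, projecting n points
    # in p-dimensional space onto a low-dimensional subspace.
    import numpy as np

    rng = np.random.default_rng(8)
    X = rng.normal(size=(200, 5))
    X[:, 1] += 0.8 * X[:, 0]                  # induce some correlation

    Z = (X - X.mean(0)) / X.std(0, ddof=1)    # standardize -> correlation PCA
    evals, evecs = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
    order = np.argsort(evals)[::-1]
    scores = Z @ evecs[:, order[:2]]          # coordinates on first two axes
    print("share of dispersion explained:", evals[order[:2]] / evals.sum())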