Author

Jerzy Neyman

Bio: Jerzy Neyman is an academic researcher from University of California, Berkeley. The author has contributed to research in topics: Mathematical statistics & Seeding. The author has an h-index of 41 and has co-authored 122 publications receiving 13,740 citations. Previous affiliations of Jerzy Neyman include Central College & University College London.


Papers
Book Chapter (DOI)
TL;DR: The problem of testing statistical hypotheses is an old one; as the authors discuss, its origins are usually connected with the name of Thomas Bayes, who gave the well-known theorem on the probabilities a posteriori of the possible causes of a given event.
Abstract: The problem of testing statistical hypotheses is an old one. Its origins are usually connected with the name of Thomas Bayes, who gave the well-known theorem on the probabilities a posteriori of the possible “causes” of a given event.* Since then it has been discussed by many writers of whom we shall here mention two only, Bertrand† and Borel,‡ whose differing views serve well to illustrate the point from which we shall approach the subject.

2,908 citations

Journal Article (DOI)
TL;DR: In this paper, the authors emphasise the importance of placing in a logical sequence the stages of reasoning adopted in the solution of certain statistical problems, which may be termed problems of inference.
Abstract: In an earlier paper* we have endeavoured to emphasise the importance of placing in a logical sequence the stages of reasoning adopted in the solution of certain statistical problems, which may be termed problems of inference. In testing whether a given sample, Z, is likely to have been drawn from a population H, we have started from the simple principle that appears to be used in the judgments of ordinary life, that the degree of confidence placed in an hypothesis depends upon the relative probability or improbability of alternative hypotheses. From this point of view any criterion which is to assist in scaling the degree of confidence with which we accept or reject the hypothesis that Z has been randomly drawn from H should be one which decreases as the probability (defined in some definite manner) of alternative hypotheses becomes relatively greater. Now it is of course impossible in practice to scale the confidence with which we form a judgment with any single numerical criterion, partly because there will nearly always be present certain a priori conditions and limitations which cannot be expressed in exact terms. Yet though it may be impossible to bring the ideal situation into agreement with the real, some form of numerical measure is essential as a guide and control. In our previous paper we have made use of the criterion of likelihood. That there may be other forms of criteria or that this one can be interpreted in a different manner is very possible, but our object has been to find a single principle connecting the various sampling tests already in use, and one which could be extended to new problems.

1,060 citations
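
To make the likelihood criterion concrete, here is a minimal numerical sketch: two simple hypotheses about the mean of a normal population are compared through their likelihood ratio. The sample, the two hypothesized distributions, and the use of scipy are illustrative assumptions, not material from the paper.

```python
import numpy as np
from scipy import stats

# Hypothetical sample; not data from the paper.
rng = np.random.default_rng(0)
sample = rng.normal(loc=0.4, scale=1.0, size=30)

# Two simple hypotheses about the population the sample was drawn from:
# H0: N(0, 1)   vs.   H1: N(1, 1)   (illustrative choices).
loglik_h0 = stats.norm.logpdf(sample, loc=0.0, scale=1.0).sum()
loglik_h1 = stats.norm.logpdf(sample, loc=1.0, scale=1.0).sum()

# The likelihood ratio plays the role of the criterion discussed above:
# it decreases as the alternative hypothesis becomes relatively more probable.
log_lr = loglik_h0 - loglik_h1
print(f"log likelihood ratio (H0 vs H1): {log_lr:.3f}")
# A strongly negative value counts against H0 relative to H1.
```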

Journal Article (DOI)
TL;DR: In this paper, the authors distinguish two aspects of the problems of estimation, (i) the practical and (ii) the theoretical, in estimating the constants of a population which for some reason or other cannot be studied exhaustively.
Abstract: We shall distinguish two aspects of the problems of estimation: (i) the practical and (ii) the theoretical. The practical aspect may be described as follows: (ia) The statistician is concerned with a population, π, which for some reason or other cannot be studied exhaustively. It is only possible to draw a sample from this population which may be studied in detail and used to form an opinion on the values of certain constants describing the properties of the population π. For example, it may be desired to calculate approximately the mean of a certain character possessed by the individuals forming the population π, etc.

981 citations
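
The practical problem described in the abstract, forming an opinion about the mean of a population from a sample, is the setting in which Neyman developed interval estimation. A minimal sketch of an interval estimate of a population mean follows; the data, the 95% level, and the Student-t interval are illustrative assumptions rather than anything taken from the paper.

```python
import numpy as np
from scipy import stats

# Hypothetical measurements of some character of the sampled individuals.
sample = np.array([4.8, 5.1, 5.6, 4.9, 5.3, 5.0, 5.4, 4.7, 5.2, 5.5])

n = sample.size
mean = sample.mean()
se = sample.std(ddof=1) / np.sqrt(n)

# 95% interval estimate for the population mean, using Student's t;
# an interval estimate in the spirit of the paper's "practical aspect".
t_crit = stats.t.ppf(0.975, df=n - 1)
lower, upper = mean - t_crit * se, mean + t_crit * se
print(f"mean = {mean:.2f}, 95% CI = ({lower:.2f}, {upper:.2f})")
```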

Book Chapter (DOI)
TL;DR: The popularity of the representative method is also partly due to the general crisis, to the scarcity of money and to the necessity of carrying out statistical investigations connected with social life in a somewhat hasty way.
Abstract: Owing to the work of the International Statistical Institute,* and perhaps still more to personal achievements of Professor A.L. Bowley, the theory and the possibility of practical applications of the representative method have attracted the attention of many statisticians in different countries. Very probably this popularity of the representative method is also partly due to the general crisis, to the scarcity of money and to the necessity of carrying out statistical investigations connected with social life in a somewhat hasty way. The results are wanted in some few months, sometimes in a few weeks after the beginning of the work, and there is neither time nor money for an exhaustive research.

791 citations


Cited by
Journal Article (DOI)
Jacob Cohen
TL;DR: A convenient, although not comprehensive, presentation of required sample sizes is provided here; the sample sizes necessary for .80 power to detect effects at these levels are tabled for eight standard statistical tests.
Abstract: One possible reason for the continued neglect of statistical power analysis in research in the behavioral sciences is the inaccessibility of or difficulty with the standard material. A convenient, although not comprehensive, presentation of required sample sizes is provided here. Effect-size indexes and conventional values for these are given for operationally defined small, medium, and large effects. The sample sizes necessary for .80 power to detect effects at these levels are tabled for eight standard statistical tests: (a) the difference between independent means, (b) the significance of a product-moment correlation, (c) the difference between independent rs, (d) the sign test, (e) the difference between independent proportions, (f) chi-square tests for goodness of fit and contingency tables, (g) one-way analysis of variance, and (h) the significance of a multiple or multiple partial correlation.

38,291 citations
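
The sample sizes Cohen tabulates can be reproduced approximately with standard power-analysis software. The sketch below uses statsmodels to solve for the per-group n giving .80 power to detect a medium standardized difference between two independent means at alpha = .05; the effect-size and alpha values are the conventional choices, and the result should agree closely, though perhaps not exactly, with Cohen's table.

```python
# Sample size for .80 power, difference between two independent means
# (Cohen's test (a)).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
# Medium standardized effect (Cohen's d = 0.5), alpha = .05, two-sided test.
n_per_group = analysis.solve_power(effect_size=0.5, power=0.80,
                                   alpha=0.05, alternative='two-sided')
print(f"n per group for 80% power at d = 0.5: {n_per_group:.1f}")
# Roughly 64 per group, in line with conventional power tables.
```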

Journal Article (DOI)

22,988 citations

Journal Article (DOI)
TL;DR: In this article, a general null model based on modified independence among variables is proposed to provide an additional reference point for the statistical and scientific evaluation of covariance structure models, and the importance of supplementing statistical evaluation with incremental fit indices associated with the comparison of hierarchical models.
Abstract: Factor analysis, path analysis, structural equation modeling, and related multivariate statistical methods are based on maximum likelihood or generalized least squares estimation developed for covariance structure models. Large-sample theory provides a chi-square goodness-of-fit test for comparing a model against a general alternative model based on correlated variables. This model comparison is insufficient for model evaluation: In large samples virtually any model tends to be rejected as inadequate, and in small samples various competing models, if evaluated, might be equally acceptable. A general null model based on modified independence among variables is proposed to provide an additional reference point for the statistical and scientific evaluation of covariance structure models. Use of the null model in the context of a procedure that sequentially evaluates the statistical necessity of various sets of parameters places statistical methods in covariance structure analysis into a more complete framework. The concepts of ideal models and pseudo chi-square tests are introduced, and their roles in hypothesis testing are developed. The importance of supplementing statistical evaluation with incremental fit indices associated with the comparison of hierarchical models is also emphasized. Normed and nonnormed fit indices are developed and illustrated.

16,420 citations
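
As a rough illustration of the incremental fit indices the abstract refers to, the helper below computes the normed fit index (NFI) and the non-normed fit index (NNFI, also called the Tucker-Lewis index) from the chi-square statistics and degrees of freedom of a target model and the null (independence) model. The function name and the numerical values are hypothetical; only the two standard formulas are taken as given.

```python
def incremental_fit_indices(chi2_model, df_model, chi2_null, df_null):
    """Normed (NFI) and non-normed (NNFI/TLI) fit indices computed from the
    chi-square statistics of a target model and a null (independence) model.
    Hypothetical helper for illustration; values are not from the paper."""
    nfi = (chi2_null - chi2_model) / chi2_null
    nnfi = ((chi2_null / df_null) - (chi2_model / df_model)) / \
           ((chi2_null / df_null) - 1.0)
    return nfi, nnfi

# Example with made-up chi-square values.
nfi, nnfi = incremental_fit_indices(chi2_model=85.3, df_model=48,
                                    chi2_null=920.7, df_null=66)
print(f"NFI = {nfi:.3f}, NNFI = {nnfi:.3f}")
```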

Journal Article (DOI)
TL;DR: In this paper, the role and limitations of retrospective investigations of factors possibly associated with the occurrence of a disease are discussed and their relationship to forward-type studies emphasized, and examples of situations in which misleading associations could arise through the use of inappropriate control groups are presented.
Abstract: The role and limitations of retrospective investigations of factors possibly associated with the occurrence of a disease are discussed and their relationship to forward-type studies emphasized. Examples of situations in which misleading associations could arise through the use of inappropriate control groups are presented. The possibility of misleading associations may be minimized by controlling or matching on factors which could produce such associations; the statistical analysis will then be modified. Statistical methodology is presented for analyzing retrospective study data, including chi-square measures of statistical significance of the observed association between the disease and the factor under study, and measures for interpreting the association in terms of an increased relative risk of disease. An extension of the chi-square test to the situation where data are subclassified by factors controlled in the analysis is given. A summary relative risk formula, R, is presented and discussed in connection with the problem of weighting the individual subcategory relative risks according to their importance or their precision. Alternative relative-risk formulas, R1, R2, R3, and R4, which require the calculation of subcategory-adjusted proportions of the study factor among diseased persons and controls for the computation of relative risks, are discussed. While these latter formulas may be useful in many instances, they may be biased or inconsistent and are not, in fact, averages of the relative risks observed in the separate subcategories. Only the relative-risk formula, R, of those presented, can be viewed as such an average. The relationship of the matched-sample method to the subclassification approach is indicated. The statistical methodology presented is illustrated with examples from a study of women with epidermoid and undifferentiated pulmonary carcinoma. J. Nat. Cancer Inst. 22: 719-748, 1959.

14,433 citations
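
The summary relative-risk formula R discussed in the abstract is closely related to what is now routinely computed as the Mantel-Haenszel summary odds ratio across strata. The sketch below implements that standard stratified estimator on made-up 2x2 tables; it follows the textbook Mantel-Haenszel formula rather than reproducing the paper's notation R, R1, ..., R4.

```python
def mantel_haenszel_or(tables):
    """Mantel-Haenszel summary odds ratio across 2x2 strata.
    Each table is (a, b, c, d): exposed cases, exposed controls,
    unexposed cases, unexposed controls. Illustrative sketch only."""
    num = 0.0
    den = 0.0
    for a, b, c, d in tables:
        n = a + b + c + d
        num += a * d / n
        den += b * c / n
    return num / den

# Two hypothetical strata (e.g., defined by a factor such as age that is
# controlled in the analysis), with made-up counts.
strata = [(12, 30, 5, 53), (20, 41, 9, 60)]
print(f"Mantel-Haenszel summary OR = {mantel_haenszel_or(strata):.2f}")
```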

Journal Article (DOI)
TL;DR: A computationally feasible method for finding such maximum likelihood estimates is developed, and a computer program is available; the method also allows the testing of hypotheses about the constancy of evolutionary rates by likelihood ratio tests.
Abstract: The application of maximum likelihood techniques to the estimation of evolutionary trees from nucleic acid sequence data is discussed. A computationally feasible method for finding such maximum likelihood estimates is developed, and a computer program is available. This method has advantages over the traditional parsimony algorithms, which can give misleading results if rates of evolution differ in different lineages. It also allows the testing of hypotheses about the constancy of evolutionary rates by likelihood ratio tests, and gives rough indication of the error of the estimate of the tree.

13,111 citations
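
Felsenstein's method rests on computing the likelihood of a tree from sequence data; a full implementation is beyond a short sketch, but the simplest case, the maximum likelihood estimate of the evolutionary distance between two aligned sequences under the Jukes-Cantor model, can be shown directly. The sequences below are made up, and the sketch illustrates only this closed-form estimate, not the tree-search program the abstract mentions.

```python
import math

def jukes_cantor_ml_distance(seq1, seq2):
    """Maximum likelihood distance (expected substitutions per site)
    between two aligned sequences under the Jukes-Cantor model.
    Minimal illustration, not Felsenstein's tree-estimation program."""
    assert len(seq1) == len(seq2)
    diffs = sum(a != b for a, b in zip(seq1, seq2))
    p = diffs / len(seq1)          # observed proportion of differing sites
    if p >= 0.75:
        raise ValueError("sequences too divergent for the JC69 correction")
    return -0.75 * math.log(1.0 - 4.0 * p / 3.0)

# Hypothetical aligned nucleotide sequences.
s1 = "ACGTACGTACGTACGTACGT"
s2 = "ACGTACGAACGTTCGTACGT"
print(f"JC69 ML distance = {jukes_cantor_ml_distance(s1, s2):.3f}")
```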