
Showing papers in "Biometrika in 1961"



Book ChapterDOI
TL;DR: This IEEE Classic Reissue provides, at an advanced level, a uniquely fundamental exposition of the applications of Statistical Communication Theory to a vast spectrum of important physical problems.
Abstract: This IEEE Classic Reissue provides, at an advanced level, a uniquely fundamental exposition of the applications of Statistical Communication Theory to a vast spectrum of important physical problems. Included is a general analysis of signal detection, estimation, measurement, and related topics involving information transfer. Using the statistical Bayesian viewpoint, renowned author David Middleton employs statistical decision theory specifically tailored for the general tasks of signal processing. Dr. Middleton also provides a special focus on physical modeling of the canonical channel, with real-world examples relating to radar, sonar, and general telecommunications. This book offers a detailed treatment and an array of problems and results spanning an exceptionally broad range of technical subjects in the communications field. Complete with special functions, integrals, solutions of integral equations, and an extensive, updated bibliography by chapter, An Introduction to Statistical Communication Theory is a seminal reference, particularly for anyone working in the field of communications, as well as in other areas of statistical physics. (Originally published in 1960.)

1,257 citations


Journal ArticleDOI
TL;DR: In this paper, exact and approximate methods are given for computing the distribution of quadratic forms in normal variables; for a quadratic form Q and a given value x, interest centres on the probability P{Q > x}.
Abstract: In this paper exact and approximate methods are given for computing the distribution of quadratic forms in normal variables. In statistical applications the interest centres in general, for a quadratic form Q and a given value x, around the probability P{Q > x}. Methods of computation have previously been given, e.g. by Box (1954), Gurland (1955) and by Grad & Solomon (1955). None of these methods is very easily applicable except, when it can be used, the finite series of Box. Furthermore, all the methods are valid only for quadratic forms in central variables. Situations occur where quadratic forms in non-central variables must be considered as well. Let x = (x_1, ..., x_n)' be a column random vector which follows a multidimensional normal law with mean vector 0 and covariance matrix Σ. Let μ = (μ_1, ..., μ_n)' be a constant vector, and consider the quadratic form Q = (x + μ)' A(x + μ). If Σ is non-singular, one can by means of a non-singular linear transformation (Scheffé (1959), p. 418) express Q in the form

$$ Q = \sum_{r=1}^{m} \lambda_r\, \chi^2_{h_r;\,\delta_r^2}. \qquad (1) $$
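The tail probability P{Q > x} discussed above can always be sanity-checked by direct simulation of the quadratic form, even though the point of the paper is to compute it exactly or by accurate approximation. A minimal Monte Carlo sketch follows; the function name, sample size and example matrices are illustrative only, not taken from the paper.

```python
import numpy as np

def quad_form_tail_mc(A, Sigma, mu, x, n_sim=200_000, seed=0):
    """Crude Monte Carlo estimate of P{Q > x} for Q = (z + mu)' A (z + mu),
    where z ~ N(0, Sigma).  A check on exact methods, not a replacement."""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal(np.zeros(len(mu)), Sigma, size=n_sim)
    y = z + mu                                # shift by the non-centrality vector
    q = np.einsum('ij,jk,ik->i', y, A, y)     # y_i' A y_i for every draw
    return np.mean(q > x)

# Illustrative example: diagonal A, Sigma = I, small non-centrality.
A = np.diag([2.0, 1.0, 0.5])
Sigma = np.eye(3)
mu = np.array([0.3, 0.0, -0.2])
print(quad_form_tail_mc(A, Sigma, mu, x=4.0))
```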

1,207 citations


Journal ArticleDOI
G. S. Watson
TL;DR: In this paper, a test statistic is proposed for the null hypothesis that a random sample has been drawn from a population with continuous distribution function F(x); it is particularly useful for distributions on a circle, since its value does not depend on the arbitrary point chosen to begin cumulating the probability density and the sample points.
Abstract: A test statistic is proposed for the null hypothesis that a random sample has been drawn from a population with the continuous distribution function F(x). It is useful for distributions on a circle, since its value does not depend on the arbitrary point chosen to begin cumulating the probability density and the sample points. Sampling experiments relating to bird flight were carried out on an IBM 650 computer.
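A statistic with exactly this rotation-invariance property is Watson's U², computed from the probability-integral transforms u_i = F(x_i). The sketch below is a generic implementation under that reading of the abstract; the function name and the uniform test data are ours, not the paper's.

```python
import numpy as np

def watson_U2(u):
    """Watson's U^2 statistic for values u_i = F(x_i) in [0, 1].
    Shifting every u_i by a constant (mod 1) leaves the value unchanged,
    which is what makes it suitable for distributions on a circle."""
    u = np.sort(np.asarray(u, dtype=float))
    n = len(u)
    i = np.arange(1, n + 1)
    W2 = np.sum((u - (2 * i - 1) / (2 * n)) ** 2) + 1 / (12 * n)  # Cramer-von Mises
    return W2 - n * (u.mean() - 0.5) ** 2

# Illustrative check of the rotation invariance on uniform "angles" in [0, 1).
rng = np.random.default_rng(1)
u = rng.uniform(size=50)
print(watson_U2(u), watson_U2((u + 0.37) % 1.0))   # the two values agree
```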

411 citations


Journal ArticleDOI

351 citations


Journal ArticleDOI
TL;DR: In this paper, the author examines how a schedule of paired comparisons can be used to test whether the individual is aware of one dimension of preference, in accordance with which the objects can be arranged in an order from most preferred to least.
Abstract: 1. THE METHOD, AND THE HYPOTHESES CONCERNING IT The method of paired comparison has had a long and honourable history in psychological experiments, beginning with the researches of Witmer and Cohn, published in 1894. Titchener (1901) described it in detail in one of the earliest text-books on experimental psychology and Guilford (1954) devotes a chapter to it in the latest edition of his popular text-book. Theoretical investigations of the method, which has applications outside psychology, still continue to appear, e.g. by David (1959) in this Journal, in technical reports by Gulliksen & Tucker (1959) and in a thesis by the author (1960). The authoritative paper on the null hypothesis concerning it is the one by Kendall & Babington Smith which appeared here in 1939. With this I find myself in disagreement. The experimental procedure is to show a set of m objects to an individual in pairs and ask him each time to choose one. It is always understood that the objects differ from one another, but there may be doubt whether the difference is discernible by the individual. The difference may be confined to one respect, e.g. a set of boxes may be used identical in appearance but differing in weight, and the observer's attention may be directed to that respect, e.g. by the instruction, 'Choose the heavier each time'. Or they may differ in several respects and the criterion of choice may be left to the individual, e.g. in Titchener's standard procedure the objects are coloured cards differing in hue and saturation and the individual is instructed to choose whichever he prefers. It is normally understood, but not always, cf. Myers (1925), that each of the ½m(m − 1) possible pairs is presented once and once only. We shall assume that the objects may differ in several respects and that the individual's attention has not been directed to any respect for which there is an independent criterion; also that he has been shown every possible pair once and is never permitted to evade the obligation to choose, e.g. by responding, 'Both alike'. The objects will be denoted A, B, ..., M. Initially we may hope to show that the individual is aware of one dimension of preference, in accordance with which the objects can be arranged in an order from most preferred to least. The contrary, C1, which must be disproved before any such hypothesis, H1, need be conceded, is that the individual is unaware of any differences between the objects and that all his choices are made at random, independently of one another. It may be disproved if an unexpectedly large number of the choices are internally consistent, i.e. cohere with the same one out of all the m! possible orders for m objects; for in the absence of any criterion all possible orders are equally admissible. The minimum number of inconsistent responses will be denoted by i, and an order with which there are only i inconsistent responses will be called a nearest adjoining order. In some specimen schedules of responses the nearest adjoining order is not unique; there may be several orders, say j altogether, with only i inconsistencies. The numbers
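For a small number of objects, the minimum number of inconsistent responses i and the number j of nearest adjoining orders can be found by brute force over all m! candidate orders. The sketch below does exactly that for a made-up schedule of responses (the preference matrix, the labels and m = 4 are ours, purely for illustration).

```python
from itertools import permutations

# pref[(a, b)] = 1 means a was chosen over b; each of the m(m-1)/2 pairs judged once.
# Made-up schedule for m = 4 objects containing one circular triad A > B > C > A.
objects = "ABCD"
pref = {('A', 'B'): 1, ('B', 'C'): 1, ('C', 'A'): 1,
        ('A', 'D'): 1, ('B', 'D'): 1, ('C', 'D'): 1}

def inconsistencies(order):
    """Number of responses that contradict the given order (earlier = more preferred)."""
    rank = {obj: k for k, obj in enumerate(order)}
    bad = 0
    for (a, b), a_chosen in pref.items():
        winner, loser = (a, b) if a_chosen else (b, a)
        if rank[winner] > rank[loser]:
            bad += 1
    return bad

scores = {order: inconsistencies(order) for order in permutations(objects)}
i_min = min(scores.values())
nearest = [o for o, s in scores.items() if s == i_min]
print("minimum inconsistencies i =", i_min)            # 1 for this schedule
print("number of nearest adjoining orders j =", len(nearest))
```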

332 citations



Journal ArticleDOI

282 citations


Book ChapterDOI
TL;DR: In this paper, a table of the Freeman-Tukey variance-stabilizing arc-sine transformation for the binomial distribution is presented, together with properties of the transformation; the table is indexed by the sample size n and the number of successes x observed in a binomial experiment.
Abstract: We present a table of the Freeman-Tukey variance stabilizing arc-sine transformation for the binomial distribution together with properties of the transformation. Entries in the table are $$ \theta = \frac{1} {2}\left\{ {\arcsin \surd \left( {\frac{x} {{n + 1}}} \right) + \arcsin \surd \left( {\frac{{x + 1}} {{n + 1}}} \right)} \right\},$$ where n is the sample size and x is the number of successes observed in a binomial experiment. Values of θ are given in degrees, to two decimal places, for n = 1[1]50 and x = 0[1]n.
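The tabulated values can be reproduced directly from the displayed formula. A minimal sketch follows (the function name is ours), returning θ in degrees as in the table.

```python
import math

def freeman_tukey_arcsine(x, n):
    """Freeman-Tukey transformation of a binomial count x out of n:
    theta = 0.5 * (arcsin sqrt(x/(n+1)) + arcsin sqrt((x+1)/(n+1))), in degrees."""
    theta = 0.5 * (math.asin(math.sqrt(x / (n + 1)))
                   + math.asin(math.sqrt((x + 1) / (n + 1))))
    return math.degrees(theta)

# One row of the kind of table described: n = 10, x = 0, 1, ..., 10.
print([round(freeman_tukey_arcsine(x, 10), 2) for x in range(11)])
```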

273 citations



Journal ArticleDOI
TL;DR: Bartholomew, as mentioned in this paper, discusses the best way of testing the homogeneity of means in a one-way analysis of variance when the unknown population means are subject to order restrictions; earlier work assumed the population standard deviations known a priori, which is a natural assumption when investigating the theoretical properties of tests for means but is unrealistic in many practical applications.
Abstract: In the analysis of variance for a one-way classification it is customary to test the hypothesis that the samples have come from populations with the same mean. The object of the present paper is to discuss the best way of making this test when the unknown population means are subject to order restrictions. A general theory of tests of homogeneity for means under ordered alternatives has been developed in three earlier papers (Bartholomew, 1959a, b, 1961): these are referred to in this paper as I, II and III, respectively. However, apart from a brief discussion in I, this earlier work relates to the case where the population standard deviations are known a priori. This is a natural assumption to make when investigating the theoretical properties of tests for means but is unrealistic in many practical applications. In particular, in the analysis of variance the standard deviations are not usually known, although they can be estimated from the data. In § 2 the χ̄²-test, first introduced in I and extended in III, will be generalized to cover this case also. The tests based on scores discussed in III may also be adapted for use in the analysis of variance. It will be shown that a close link exists between the latter and the distribution-free test proposed by Jonckheere (1954). A number of factors must be taken into account when choosing a test, of which power is one of the most important. Asymptotic results have been obtained for all the tests mentioned above and they are used in § 3 to make power comparisons. It is instructive to examine more closely the kind of practical situation which gives rise to ordered alternatives in the analysis of variance. The possibility of ranking the class means implies that they correspond to different levels of one or more underlying variates. To take a simple example, suppose that there are k classes with means μ_1, μ_2, ..., μ_k with the structure
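Tests of this type lean on the maximum-likelihood estimates of the means under the order restriction, which are obtained by pooling adjacent violators. The sketch below is a generic weighted pool-adjacent-violators routine, not the paper's own notation or test statistic, offered only to make that step concrete.

```python
def pava(means, weights):
    """Pool-adjacent-violators: weighted least-squares fit of a non-decreasing
    sequence to the observed class means; returns the isotonic estimates."""
    blocks = []                      # each block: [pooled mean, total weight, size]
    for m, w in zip(means, weights):
        blocks.append([m, w, 1])
        # Merge backwards while the non-decreasing constraint is violated.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, c2 = blocks.pop()
            m1, w1, c1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(w1 * m1 + w2 * m2) / wt, wt, c1 + c2])
    fitted = []
    for m, _, c in blocks:
        fitted.extend([m] * c)
    return fitted

# Example: class means with one order violation, equal class sizes.
print(pava([1.0, 3.0, 2.0, 4.0], [5, 5, 5, 5]))   # -> [1.0, 2.5, 2.5, 4.0]
```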




Journal ArticleDOI
TL;DR: In this article, the authors present a mathematical model in which the parameter is related to a group of transformations on the sample space and show that, when combined with an a priori distribution, the fiducial distribution yields the a posteriori distribution of a Bayesian argument.
Abstract: Fisher introduced the fiducial method in 1930. In papers since that time Fisher has frequently discussed aspects of the method and may have, in the view of some readers, modified or altered his ideas concerning the underlying principles and the more general aspects of the method. The method has been frequently criticized adversely in the literature, particularly in recent years. Some of the grounds for this criticism are: conflict with the confidence method in certain problems; non-uniqueness in certain problems; disagreement with 'repeated sampling' frequency interpretations; and a lack of a seemingly-proper relationship with a priori distributions. In his recent book, Statistical Methods and Scientific Inference, Fisher (1956) has devoted considerable space to the fiducial method. He states that an essential ingredient for its use is the absence of prior information concerning the value of the parameter being estimated; in his words, 'it is essential to introduce the absence of knowledge a priori as a distinctive datum in order to demonstrate completely the applicability of the fiducial method of reasoning to the particular real and experimental cases for which it was developed.' An interpretation of one aspect of this requirement might be that all parameter values are equivalent in the way in which the frequency distribution of the observable variable is related to the parameter value determining that distribution. In § 5 this interpretation is formalized and shown to imply a mathematical model in which the parameter is related to a group of transformations on the sample space. In §§ 2 and 3 the mathematical model involving transformations is presented on its own merits and in §§ 4 and 5 the fiducial argument for it is developed. A consequence for this model is that the information about the parameter from an observed value of the variable is in the form of a frequency distribution, the fiducial distribution, having a frequency interpretation in terms of a well-defined kind of repeated sampling. This is in agreement with Fisher's statement: 'the fiducial argument uses the observations (only) to change the logical status of the parameter from one in which nothing is known of it, and no probability statement can be made, to the status of a random variable having a well-defined distribution.' Another consequence in the special framework is that there is no need to require the absence of an a priori distribution for the parameter. For, if the fiducial distribution is combined in a logical manner with the a priori distribution, the result is the a posteriori distribution of a Bayesian argument, a reassuring result. This is demonstrated in § 9. A further consequence concerns prior information that the parameter value is restricted to some specified range. This restriction can be used to condition the fiducial distribution, yielding a conditioned fiducial distribution. A probability combination of such restrictions can in effect generate an a priori distribution and an appropriate combination of conditioned fiducial distributions yields the Bayesian a posteriori distribution. This is discussed in § 10.
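As a simple illustration of the agreement described (our own example of the group-transformation setting, not taken from the paper), consider a location model, where the group is translation of the sample space:

$$ X = \theta + E, \quad E \sim f \;\;\Longrightarrow\;\; \text{fiducial density of } \theta \text{ given } x:\; f(x - \theta), \qquad \pi(\theta \mid x) \propto \pi(\theta)\, f(x - \theta). $$

Combining the fiducial density with a prior π(θ) in this way reproduces the Bayesian a posteriori distribution, which is the kind of consistency the abstract reports.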



Journal ArticleDOI
TL;DR: In this paper, it is shown that, provided the roots of the equation z^h + β_1 z^{h-1} + ... + β_h = 0 have moduli less than unity, the logarithm of the likelihood of the sample can be given in a simple asymptotic form.
Abstract: Let {x_t}, t = 0, ±1, ±2, ..., be a stationary normal moving-average process, defined by

$$ x_t = \mu + \epsilon_t + \beta_1 \epsilon_{t-1} + \cdots + \beta_h \epsilon_{t-h}, \qquad (1) $$

where {ε_t} is a set of independent random variables, each distributed normally with mean zero and variance σ². The problem of making inferences about the parameters β_1, ..., β_h, given a sample of consecutive observations (x_1, x_2, ..., x_n) from the process, is a well-known one in time-series analysis. Little progress with this seems to have been made except under the assumption that n is large, but in the large-sample case the work of Whittle (1951, Chapter 7, 1953, 1954, pp. 211-18) enables one to obtain, at least in principle, a solution which for most purposes can be regarded as complete. From this work it follows that, provided that the roots of the equation z^h + β_1 z^{h-1} + ... + β_h = 0 have moduli less than unity, the logarithm of the likelihood of the sample is given asymptotically by
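The root condition quoted above is easy to check numerically for given coefficients. A small sketch follows; the helper name and the MA(2) coefficients are made up for illustration.

```python
import numpy as np

def ma_roots_inside_unit_circle(betas):
    """True if all roots of z^h + beta_1 z^(h-1) + ... + beta_h = 0
    have moduli strictly less than unity (the condition in the abstract)."""
    roots = np.roots([1.0] + list(betas))
    return bool(np.all(np.abs(roots) < 1.0))

# Made-up MA(2) coefficients beta_1 = 0.5, beta_2 = 0.3: roots of z^2 + 0.5 z + 0.3.
print(ma_roots_inside_unit_circle([0.5, 0.3]))   # True (moduli about 0.55)
```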








Journal ArticleDOI
TL;DR: In this article, the authors examine the way in which Foster's result may be applied to particular processes and show that the results obtained also have consequences in multi-dimensional random walks, in which context ergodicity simply means that the particle does not escape to infinity.
Abstract: 1. Many stochastic processes occurring in practice may be formulated as Markov chains with an enumerable state space. It is then important to know whether or not the chain is ergodic, i.e. whether or not a stationary distribution exists. For particular problems this has often been determined by complex and ingenious methods, a good example being the analysis by Kiefer & Wolfowitz (1955) of the many-server queue. However, Foster (1953) has given a general criterion for a chain to be ergodic, and the purpose of this paper is to examine the way in which his result may be applied to particular processes. By way of example, the technique is applied to two important problems in queueing theory. The results obtained also have consequences in the theory of multi-dimensional random walks, in which context ergodicity simply means that the particle does not escape to infinity. The processes with which we will be concerned are those which may be formulated as Markov chains on the state space of vectors
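The flavour of a criterion of this kind can be shown on a toy queue-like chain: with a test function V(i) = i, a uniformly negative expected one-step drift outside a finite set of states is the sort of condition that guarantees ergodicity. The sketch below uses made-up transition probabilities purely for illustration and is not the paper's worked example.

```python
def drift(i, p=0.3, q=0.5):
    """Expected one-step change E[V(X_{n+1}) - V(X_n) | X_n = i] with V(i) = i,
    for a simple birth-death chain: up one with probability p, down one with
    probability q when i > 0, otherwise stay put.  Probabilities are made up."""
    if i == 0:
        return p                  # from the boundary the chain can only rise
    return p - q                  # interior drift

# Drift check: bounded by -eps outside the finite set {0}, so the chain is ergodic.
eps = 0.1
print(all(drift(i) <= -eps for i in range(1, 1000)))   # True, since p - q = -0.2
```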

