
Showing papers in "Biometrika in 1977"


Journal ArticleDOI
TL;DR: In this article, the authors use a test derived from the corresponding family of test statistics appropriate for the case when θ is given, and apply the results to the two-phase regression problem in the normal case.
Abstract: SUMMARY We wish to test a simple hypothesis against a family of alternatives indexed by a one-dimensional parameter, θ. We use a test derived from the corresponding family of test statistics appropriate for the case when θ is given. Davies (1977) introduced this problem when these test statistics had normal distributions. The present paper considers the case when their distribution is chi-squared. The results are applied to the detection of a discrete frequency component of unknown frequency in a time series. In addition, quick methods for finding approximate significance probabilities are given for both the normal and chi-squared cases and applied to the two-phase regression problem in the normal case.
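The frequency-detection application can be illustrated with the classical max-periodogram idea: each candidate frequency gives a test statistic, and the overall statistic is the maximum over the family. The following is an illustrative Python sketch of that idea only; the paper's exact chi-squared family and its significance approximations are not reproduced here.

```python
import cmath
import math

def periodogram(x):
    """Periodogram ordinates I(k/n) at the Fourier frequencies k/n, k = 1..n//2."""
    n = len(x)
    xbar = sum(x) / n
    ords = []
    for k in range(1, n // 2 + 1):
        s = sum((x[t] - xbar) * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        ords.append(abs(s) ** 2 / n)
    return ords

# A discrete frequency component at unknown location: scan all Fourier
# frequencies and take the largest ordinate as the test statistic.
n = 64
x = [math.sin(2 * math.pi * 8 * t / n) for t in range(n)]
I = periodogram(x)
k_hat = 1 + I.index(max(I))  # frequency index with the largest ordinate
```

For a pure sinusoid at Fourier frequency 8/64, the periodogram concentrates all its mass at k = 8, so the maximizing index recovers the hidden frequency.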

2,047 citations


Journal ArticleDOI
TL;DR: In this article, a group sequential design is proposed to divide patient entry into a number of equal-sized groups so that the decision to stop the trial or continue is based on repeated significance tests of the accumulated data after each group is evaluated.
Abstract: SUMMARY In clinical trials with sequential patient entry, fixed sample size designs are unjustified on ethical grounds and sequential designs are often impracticable. One solution is a group sequential design dividing patient entry into a number of equal-sized groups so that the decision to stop the trial or continue is based on repeated significance tests of the accumulated data after each group is evaluated. Exact results are obtained for a trial with two treatments and a normal response with known variance. The design problem of determining the required size and number of groups is also considered. Simulation shows that these normal results may be adapted to other types of response data. An example shows that group sequential designs can sometimes be statistically superior to standard sequential designs.
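The motivation for designed group sequential boundaries can be made concrete with a small simulation: naively re-testing at a fixed nominal level after every group inflates the overall type I error well above 5%. This is an illustrative Python sketch, not the paper's exact boundary calculations; the group sizes and number of looks are arbitrary choices.

```python
import math
import random

def simulate_overall_alpha(groups=5, group_size=20, nominal_z=1.96,
                           reps=2000, seed=1):
    """Estimate the overall type I error when a z-test at a fixed nominal
    level is repeated after each group, under a true null (zero mean)."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(reps):
        total, n = 0.0, 0
        for _ in range(groups):
            total += sum(rng.gauss(0.0, 1.0) for _ in range(group_size))
            n += group_size
            z = total / math.sqrt(n)  # cumulative standardized statistic
            if abs(z) > nominal_z:    # stop the trial at this look
                rejections += 1
                break
    return rejections / reps

alpha = simulate_overall_alpha()
```

With five looks at nominal level 0.05, the simulated overall rejection rate comes out well above 0.05, which is exactly the inflation a group sequential design corrects for.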

1,573 citations




Journal ArticleDOI
TL;DR: In this paper, a canonical transformation of a k-dimensional stationary autoregressive process is proposed, where the components of the transformed process are ordered from least predictable to most predictable.
Abstract: This paper proposes a canonical transformation of a k-dimensional stationary autoregressive process. The components of the transformed process are ordered from least predictable to most predictable. The least predictable components are often nearly white noise and the most predictable can be nearly nonstationary. Transformed variables which are white noise can reflect relationships which may be associated with or point to economic or physical laws. A 5-variate example is given.

361 citations


Journal ArticleDOI
TL;DR: In this article, the asymptotic consistency of cross-validatory assessment and the asymptotic efficiency of cross-validatory choice are investigated both in some generality and also in the context of particular applications.
Abstract: SUMMARY The asymptotic consistency of cross-validatory assessment and the asymptotic efficiency of cross-validatory choice are investigated both in some generality and also in the context of particular applications.

289 citations


Journal ArticleDOI
TL;DR: In this paper, a set of families of distributions which might be useful for fitting data was described by Burr, and special attention was focused on the family, Type XII, with generic distribution function 1 - (1 + x^c)^(-k) (x > 0), which yields a wide range of values of skewness, √β1, and kurtosis, β2.
Abstract: SUMMARY A set of families of distributions which might be useful for fitting data was described by Burr (1942). Special attention was focused on the family, Type XII, with generic distribution function 1 - (1 + x^c)^(-k) (x > 0), which yields a wide range of values of skewness, √β1, and kurtosis, β2. The area in the (√β1, β2) plane corresponding to the Type XII distributions is derived and presented in two figures.
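The generic Type XII distribution function is simple enough to invert in closed form, which makes sampling and fitting straightforward. A minimal sketch of the CDF and its inverse:

```python
def burr_cdf(x, c, k):
    """Burr Type XII distribution function F(x) = 1 - (1 + x**c)**(-k), x > 0."""
    return 1.0 - (1.0 + x ** c) ** (-k)

def burr_quantile(p, c, k):
    """Inverse of the Type XII distribution function, 0 < p < 1."""
    return ((1.0 - p) ** (-1.0 / k) - 1.0) ** (1.0 / c)
```

Inverse-transform sampling follows immediately: draw u uniform on (0, 1) and return burr_quantile(u, c, k). The closed-form inverse is one reason this family is convenient for fitting data.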

281 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present results of a large-scale simulation study of the power of various tests of normality and show the difficulty of applying tests based on ordered observations, such as the Shapiro-Wilk test, when in practice the sample data may contain "ties" resulting from grouping or rounding.
Abstract: SUMMARY In the present paper we present results of a large-scale simulation study of the power of various tests of normality. We follow the procedure adopted by Shapiro, Wilk & Chen of applying a considerable variety of tests to samples drawn from a wide range of nonnormal populations. We, however, consider some additional tests, emphasize the difference between 'omnibus' and 'directional' tests and show the difficulty of applying tests based on ordered observations, such as the Shapiro-Wilk test, when in practice the sample data may contain 'ties' resulting from grouping or rounding.

261 citations


Journal ArticleDOI
TL;DR: In this article, the authors address the problem, which frequently arises in forensic science, of deciding whether two sets of fragments have come from a common source, and provide a solution in the realistic case where the distribution is nonnormal.
Abstract: SUMMARY The problem of deciding whether two sets of fragments have come from a common source frequently arises in forensic science. This paper provides a solution in the realistic case where the distribution is nonnormal. The normal case is also discussed because it is there easier to understand the nature of the solution and, in particular, its relationship to significance tests. The solution requires the distribution function of the product of standardized normal quantities which is tabulated in the appendix.

253 citations


Journal ArticleDOI
TL;DR: In this paper, the asymptotic distribution of the order of an autoregression selected by a generalization of Akaike's FPE criterion is given, and some of its properties are investigated.
Abstract: SUMMARY The asymptotic distribution of the order of an autoregression selected by a generalization of Akaike's FPE criterion is given. Some of the properties of the distribution are investigated. The use of this criterion is illustrated by a simulation study.

236 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present goodness-of-fit tests for the extreme value distribution, based on the empirical distribution function statistics W², U² and A², for the three cases where one or both of the parameters of the distribution must be estimated from the data.
Abstract: SUMMARY In this paper we present goodness-of-fit tests for the extreme value distribution, based on the empirical distribution function statistics W², U² and A². Asymptotic percentage points are given for each of the three statistics, for the three cases where one or both of the parameters of the distribution must be estimated from the data. Slight modifications of the calculated statistics are given to enable the points to be used with small samples.
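The W² (Cramér-von Mises) statistic is computed from the probability integral transforms of the ordered sample. A sketch assuming the standard computing formula for the fully specified case; the paper's asymptotic percentage points and small-sample modifications are not reproduced here.

```python
import math

def gumbel_cdf(x, mu, beta):
    """CDF of the extreme value (Gumbel) distribution."""
    return math.exp(-math.exp(-(x - mu) / beta))

def cramer_von_mises(sample, cdf):
    """W² = sum_i (z_(i) - (2i-1)/(2n))² + 1/(12n), where z_(i) are the
    ordered probability integral transforms cdf(x_(i))."""
    z = sorted(cdf(x) for x in sample)
    n = len(z)
    return sum((z[i] - (2 * i + 1) / (2.0 * n)) ** 2 for i in range(n)) + 1.0 / (12 * n)

# Hypothetical sample tested against a fully specified standard Gumbel law:
sample = [-0.4, 0.2, 0.9, 1.7, 3.0]
w2 = cramer_von_mises(sample, lambda v: gumbel_cdf(v, 0.0, 1.0))
```

When one or both parameters are estimated, the same formula is applied with the fitted CDF, but the percentage points change — which is the case the paper tabulates.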

Journal ArticleDOI
David Oakes
TL;DR: In this paper, various procedures are considered for fitting a regression model to censored survival data in continuous time with time-dependent covariate functions, including maximum likelihood with the underlying hazard function known completely and known up to a multiplicative constant, and the maximization of Cox's partial likelihood.
Abstract: Various procedures are considered for fitting a regression model to censored survival data in continuous time with time-dependent covariate functions. These include maximum likelihood with the underlying hazard function known completely and known up to a multiplicative constant, and the maximization of Cox's partial likelihood. Explicit formulae for the asymptotic variances of the estimators are derived informally and compared. It is shown how sample second derivatives may be used to estimate the amount of information lost through lack of knowledge of the underlying hazard function. Corresponding results for a more general parameterization which includes the Weibull hazard function are indicated.
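Cox's partial likelihood, which the paper compares against full maximum likelihood, can be written down directly for a single time-fixed covariate with no tied event times. An illustrative sketch under those assumptions:

```python
import math

def partial_loglik(beta, times, events, x):
    """Cox partial log-likelihood for one covariate, no tied event times:
    sum over events i of beta*x_i - log(sum over the risk set of exp(beta*x_j))."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    ll = 0.0
    for pos, i in enumerate(order):
        if events[i]:               # censored subjects contribute only to risk sets
            risk = order[pos:]      # subjects still at risk just before time t_i
            ll += beta * x[i] - math.log(sum(math.exp(beta * x[j]) for j in risk))
    return ll
```

At beta = 0 each event contributes minus the log of its risk-set size, which gives a quick sanity check, and the sample second derivative of this function in beta is exactly the kind of quantity the paper uses to estimate information loss.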


Book ChapterDOI
TL;DR: In this article, a nonlinear time series system is considered, where the output series corresponding to a given input series is the sum of a noise series and the result of applying in turn the operations of linear filtering, instantaneous functional composition and linear filtering to the input series.
Abstract: A nonlinear time series system is considered. The system has the property that the output series corresponding to a given input series is the sum of a noise series and the result of applying in turn the operations of linear filtering, instantaneous functional composition and linear filtering to the input series. Given a stretch of Gaussian input series and corresponding output series, estimates are constructed of the transfer functions of the linear filters, up to constant multipliers. The investigation discloses that for such a system, the best linear predictor of the output given Gaussian input, has a broader interpretation than might be suspected. The result is derived from a simple expression for the covariance function of a normal variate with a function of a jointly normal variate.

Journal ArticleDOI
TL;DR: In this paper, it was shown that even for moderately large sample sizes, the true significance levels of the portmanteau statistic are likely to be much lower than predicted by asymptotic theory.
Abstract: SUMMARY In time series model building, using the methodology of Box & Jenkins (1970), it is usual to verify the adequacy of a fitted equation by computing residual autocorrelations. Following Box & Pierce (1970), an overall, or 'portmanteau', test of fit can be based on these quantities. Recent experience suggests that surprisingly low values of the portmanteau statistic are often found. This paper shows that, even for moderately large sample sizes, the true significance levels are likely to be much lower than predicted by asymptotic theory.
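The portmanteau statistic in question is the Box-Pierce form: n times the sum of the first m squared residual autocorrelations. A minimal sketch of the computation; the paper's point is that the finite-sample null distribution of this quantity sits below the nominal chi-squared approximation.

```python
def box_pierce(residuals, m):
    """Box-Pierce portmanteau statistic Q = n * sum_{k=1}^m r_k**2, where
    r_k is the lag-k autocorrelation of the fitted-model residuals."""
    n = len(residuals)
    mean = sum(residuals) / n
    d = [e - mean for e in residuals]
    denom = sum(v * v for v in d)
    q = 0.0
    for k in range(1, m + 1):
        r = sum(d[t] * d[t + k] for t in range(n - k)) / denom
        q += r * r
    return n * q
```

Under the asymptotic theory Q is referred to a chi-squared distribution with degrees of freedom m minus the number of fitted ARMA parameters; the surprisingly low observed values are the finite-sample effect the paper quantifies.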

Journal ArticleDOI
TL;DR: In this paper, the authors propose a model that combines a logistic relationship for the probability of incidence of a particular disease with an exponential distribution for the time of incidence, and compare its efficiency with that of a logistic model with no time censoring.
Abstract: SUMMARY A binary variable can specify the incidence of a particular disease, Y = 1 or a lifetime free of the disease, Y = 0. In a study, some subjects have Y = 1 recorded at specified ages. For other subjects, with Y unknown due to incomplete follow-up, the observed follow-up time compared with the usual incidence pattern for the disease gives some information on the possibility that Y = 0. A model for this situation is proposed which combines a logistic relationship for the probability of incidence and an exponential distribution for the time of incidence. The efficiency of the model is compared with that of a logistic model with no time censoring. The behaviour of a logistic model applied without considering the time censoring is also examined.

Journal ArticleDOI
John Hyde
TL;DR: In this paper, statistics are presented for testing the hypothesis that lifetimes have a given distribution, for both the discrete and continuous cases, and the effect of weight functions on power is discussed.
Abstract: SUMMARY In some areas of life testing, subjects may enter or leave the study after they have been put on test. Statistics are presented for testing the hypothesis that the lifetimes have a given distribution for both the discrete and continuous cases. They are shown to be asymptotically normal by associating a martingale with the life process and using basic results about stopping times. There is some discussion of the effect of weight functions on power. An example of the test is provided.

Journal ArticleDOI
TL;DR: In this paper, the authors developed in a more general setting the methods used by Paulson, Holcomb & Leitch (1975) to estimate the parameters of a stable law, and established consistency under differentiability of the characteristic function, with bounded second derivatives required for a central limit theorem.
Abstract: SUMMARY The paper develops in a more general setting the methods used by Paulson, Holcomb & Leitch (1975) to estimate the parameters of a stable law. The statistic considered minimizes a distance function determined by the empirical characteristic function. Consistency is established under the condition of differentiability of the characteristic function, and the existence of bounded second derivatives is required to obtain a central limit theorem for the estimators of one or more parameters. Questions concerning efficiency and robustness are discussed.
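The minimum-distance idea rests on the empirical characteristic function. A sketch of the two ingredients, with the grid of t values an arbitrary assumption here; the paper's actual distance function (and its weighting) is not reproduced.

```python
import cmath

def empirical_cf(sample, t):
    """Empirical characteristic function phi_n(t) = (1/n) * sum_j exp(i*t*x_j)."""
    return sum(cmath.exp(1j * t * x) for x in sample) / len(sample)

def cf_distance(sample, model_cf, grid):
    """Discretized squared distance between the empirical CF and a model CF,
    summed over an (assumed) grid of t values."""
    return sum(abs(empirical_cf(sample, t) - model_cf(t)) ** 2 for t in grid)
```

An estimator of this type minimizes cf_distance over the model parameters; since phi_n(0) = 1 always, points near t = 0 carry no information and the grid choice matters.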


Journal ArticleDOI
TL;DR: In this paper, the authors examine consequences of the dependence of survival on the causes of censorixg and the testability of a set of data for the importance in survival inferences of the underlying censoring mechanism as well as the consequences of falsely assuming that the mechanism does not affect inferences about survival.
Abstract: SUMMARY In the analysis of survival-type variables arising from medical investigations, one is often faced with incomplete or right-censored observations. In many cases the mechanisms leading to censoring are intrinsically related to the survival variable and standard methods of analysis are not appropriate. In this paper we examine consequences of the dependence of survival on the causes of censorixg. Also examined are the testability of a set of data for the importance in survival inferences of the underlying censoring mechanism as well as the consequences of falsely assuming that the censoring mechanism does not affect inferences about survival. The special case in which survival is exponential is discussed in detail.

Journal ArticleDOI
TL;DR: In this article, the authors derive conditions for which a sampling design of a two-dimensional finite population is optimal in the sense of minimum average variance, when the estimator of the population mean is the sample mean.
Abstract: SUMMARY Employing a superpopulation model, we derive conditions for which a sampling design of a two-dimensional finite population is optimal in the sense of minimum average variance, when the estimator of the population mean is the sample mean. We find that an overall optimal design does not exist, but that, if we consider three subclasses of two-dimensional sampling designs, then the optimal design within each subclass is a type of systematic sampling.

Journal ArticleDOI
TL;DR: In this article, a parametric test given known variance ratios and a nonparametric test, both for the equality of correlation coefficients of one variable with a set of other variables, are proposed and their power functions are compared.
Abstract: SUMMARY A parametric test given known variance ratios, and a nonparametric test, both for the equality of correlation coefficients of one variable with a set of other variables, are proposed and their power functions are compared.


Journal ArticleDOI
TL;DR: In this article, the authors developed statistics which measure the lack of marginal homogeneity when one margin differs only in location relative to the other, which can be used as a convenient summary of the whole table.
Abstract: Paired data frequently arise in 'before and after' experiments where a given number of individuals are measured before and after treatment to determine the treatment effect. The recorded data are often on an ordered categorical scale. We might, for example, measure degree of injury on a four-category scale: uninjured, slight, serious, fatal. Such paired data are conveniently summarized in a square k x k table of counts {m_ij}, where k is the number of categories and m_ij is the number of pairs for which the first recording is category i and the second category j. In this paper we develop statistics which, for some purposes, can be used as a convenient summary of the whole table. These statistics measure the lack of marginal homogeneity when one margin differs only in location relative to the other.
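A crude version of such a location summary is the difference of the two marginal mean categories of the square table. This is a hypothetical illustration of the idea only, not the statistics developed in the paper.

```python
def marginal_mean_shift(table):
    """For a square k x k table of counts m[i][j], return the difference of
    marginal mean categories (second recording minus first) — a simple
    location summary of marginal inhomogeneity. Categories are scored 0..k-1."""
    k = len(table)
    total = sum(sum(row) for row in table)
    row_margin = [sum(table[i][j] for j in range(k)) for i in range(k)]
    col_margin = [sum(table[i][j] for i in range(k)) for j in range(k)]
    mean_first = sum(i * row_margin[i] for i in range(k)) / total
    mean_second = sum(j * col_margin[j] for j in range(k)) / total
    return mean_second - mean_first
```

A positive value says the second recording tends to sit in higher categories than the first; under exact marginal homogeneity the statistic is zero, though zero does not by itself imply homogeneity.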

Journal ArticleDOI
TL;DR: In this paper, the estimation of the parameters of a normal population given a sample which has been censored at a known point c is considered, and simple estimators,t* and o-* are presented.
Abstract: SUMMARY In a type I left censored normal sample the information consists of the observations xl, ..., which fell above the 'observation limit' c and of the number, n - k, of those observations which fell below c. This paper considers the estimation of the parameters of a normal population given a sample which has been censored at the known point c. When the maximum likelihood method is used to produce estimates of It and oC one has to resort to numerical solution of the resulting equations. In this paper simple estimators ,t* and o-* are presented. They are shown to be almost as good as the maximum likelihood estimators both for small and large samples. In the 'fixed ki' case the censored sample is generated by a sequential procedure: independent observations are made, one by one, until a predetermined number k of observations above c is obtained. The distribution of n - k will then be negative binomial. The 'fixed n' situation occurs when in a random sample of fixed size n all observations, if any, below c are deleted. In this case the number of remaining observations, k, will have a binomial distribution. However, it may then happen that the censored sample is void, and then no reasonable estimates of the population parameters can be produced. When we investigate small-sample properties of estimators we prefer to avoid that complication by excluding all such samples from our considerations: if all n observations fall below c a new sample of size n is taken, and so on, until a sample with at least one observation above c is obtained; furthermore we assume that, whenever this happens, we completely ignore how many times a useless sample was obtained. This situation arises naturally in practice: the client will not consult the stati- stician unless k exceeds zero, and the statistician will never know how many times the client did not consult him. When n is large this change in the meaning of 'fixed n'is of little importance. 
In particular, it does not affect any asymptotic results. Throughout the paper we assume that the population from which the original observations are taken is normal with unknown mean ,u and standard deviation cr, while the 'observation
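The maximum likelihood route the paper tries to avoid can be sketched numerically: the log-likelihood combines a Φ-term for the n - k values known only to lie below c with normal density terms for the k observed values. An illustrative Python sketch with a coarse grid search and hypothetical data; the paper's simple closed-form estimators μ*, σ* are not reproduced here.

```python
import math

def _phi(z):
    """Standard normal density."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def _Phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def censored_normal_loglik(mu, sigma, obs, n_below, c):
    """Log-likelihood of a type I left-censored normal sample: values `obs`
    observed above the limit c, plus n_below values known only to lie below c."""
    ll = n_below * math.log(_Phi((c - mu) / sigma))
    for x in obs:
        ll += math.log(_phi((x - mu) / sigma) / sigma)
    return ll

# Hypothetical data: five values observed above c = 0, two censored below it.
obs = [0.3, 0.8, 1.1, 1.9, 2.4]
mu_hat, sigma_hat = max(
    ((m / 10.0, s / 10.0) for m in range(-20, 31) for s in range(5, 31)),
    key=lambda p: censored_normal_loglik(p[0], p[1], obs, 2, 0.0))
```

A grid search stands in for the numerical equation-solving the abstract mentions; in practice one would use Newton iterations on the likelihood equations, which is precisely the work the paper's simple estimators are designed to avoid.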

Journal ArticleDOI
TL;DR: In this article, a limiting distribution of the estimated maximum and range of a set of monotonically ordered normal means when all means are in fact equal is determined, and the upper percentiles of the studentized distributions are tabulated.
Abstract: SUMMARY This paper determines a limiting distribution of the estimated maximum and range of a set of monotonically ordered normal means when all means are in fact equal. The upper percentiles of the studentized distributions are tabulated. These percentiles can be used as critical values in tests of hypotheses concerning monotonically ordered normal means, and in forming simultaneous confidence limits for contrasts between normal means when the contrast coefficients are restricted to being monotonically ordered.

Journal ArticleDOI
D. Siegmund
TL;DR: In this article, repeated significance tests for a normal mean, with variance known, are studied asymptotically, and an analogous test for the case of unknown variance is suggested and an approximation to its significance level obtained.
Abstract: SUMMARY Repeated significance tests for a normal mean, with variance known, are studied asymptotically. Approximations are given for the significance level, power and expected sample size. These approximations are shown to be accurate enough for practical numerical purposes over a wide range of parameter values. An analogous test for the case of unknown variance is suggested and an approximation to its significance level obtained.

Journal ArticleDOI
TL;DR: In this paper, the authors combined the statistics of Rothman and Woodroofe (1972) and Srinivasan & Godio (1974) to define a statistic for testing the symmetry of a continuous distribution about a specified median.
Abstract: SUMMARY Statistics of Rothman & Woodroofe (1972) and Srinivasan & Godio (1974) are combined to define a statistic for testing the symmetry of a continuous distribution about a specified median. The proposed statistic is shown to possess a desirable invariance property. Exact and asymptotic null distribution of the statistic and tables of critical values for sample sizes 10 < n < 24 are also given.


Journal ArticleDOI
TL;DR: In this article, the authors consider inferences about the odds ratio when the likelihood function is derived from the marginal totals and find that standard procedures for making statements about an unknown parameter are inconclusive.
Abstract: SUMMARY Inferences about the odds ratio of a 2 x 2 contingency table are usually based on the distribution of any one frequency conditional on the observed values of the marginal totals. The reason given is that the marginal totals are ancillary statistics which contain no information about the odds ratio. We consider inferences about the odds ratio when the likelihood function is derived from the marginal totals. Standard procedures for making statements about an unknown parameter are found to be inconclusive.