
Showing papers in "Biometrika in 1987"


Journal ArticleDOI
TL;DR: In this article, an asymptotic theory for a first-order autoregression with a root near unity is developed, with deviations from the unit root measured through a noncentrality parameter. The theory applies to continuous time estimation and to the analysis of the asymptotic power of tests for a unit root under a sequence of local alternatives.
Abstract: SUMMARY This paper develops an asymptotic theory for a first-order autoregression with a root near unity. Deviations from the unit root theory are measured through a noncentrality parameter. When this parameter is negative we have a local alternative that is stationary; when it is positive the local alternative is explosive; and when it is zero we have the standard unit root theory. Our asymptotic theory accommodates these possibilities and helps to unify earlier theory in which the unit root case appears as a singularity of the asymptotics. The general theory is expressed in terms of functionals of a simple diffusion process. The theory has applications to continuous time estimation and to the analysis of the asymptotic power of tests for a unit root under a sequence of local alternatives.
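The local-to-unity device is easy to see in simulation. A minimal sketch, assuming the parameterization ρ_n = 1 + c/n (the paper's exact form may differ): the normalized estimator n(ρ̂ − ρ_n) has a non-degenerate limit for stationary (c < 0), unit root (c = 0) and explosive (c > 0) local alternatives alike.

```python
# Minimal simulation sketch of local-to-unity asymptotics (assumed
# parameterization rho_n = 1 + c/n; c < 0 stationary local alternative,
# c = 0 unit root, c > 0 explosive local alternative).
import numpy as np

rng = np.random.default_rng(0)

def ar1_path(n, c):
    """Simulate y_t = rho_n * y_{t-1} + e_t with rho_n = 1 + c/n."""
    rho = 1.0 + c / n
    e = rng.standard_normal(n)
    y = np.empty(n)
    y[0] = e[0]
    for t in range(1, n):
        y[t] = rho * y[t - 1] + e[t]
    return y

def normalized_estimate(n, c):
    """n * (rho_hat - rho_n), whose limit is a functional of a diffusion."""
    y = ar1_path(n, c)
    ylag, ynow = y[:-1], y[1:]
    rho_hat = (ylag @ ynow) / (ylag @ ylag)
    return n * (rho_hat - (1.0 + c / n))

for c in (-10.0, 0.0, 5.0):
    draws = [normalized_estimate(500, c) for _ in range(2000)]
    print(f"c = {c:6.1f}: median of n(rho_hat - rho) = {np.median(draws):7.3f}")
```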

772 citations


Journal ArticleDOI
Mike West
TL;DR: In this article, the exponential power family of distributions of Box and Tiao (1973) is shown to be a subset of the class of scale mixtures of normals; the corresponding mixing distributions are obtained explicitly, identifying a close relationship between the exponential power family and a further class of normal scale mixtures, the stable distributions.
Abstract: SUMMARY The exponential power family of distributions of Box & Tiao (1973) is shown to be a subset of the class of scale mixtures of normals. The corresponding mixing distributions are explicitly obtained, identifying a close relationship between the exponential power family and a further class of normal scale mixtures, namely the stable distributions.
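The scale-mixture representation can be checked by Monte Carlo in the one case that is classical and easy to state: the Laplace member of the exponential power family is a normal scale mixture with exponential mixing on the variance. This sketch checks that case only; the paper's general mixing distributions are not reproduced here.

```python
# Monte Carlo check, for the Laplace member of the exponential power
# family, of the scale-mixture-of-normals representation: if the
# variance W ~ Exp(mean 2) and X | W ~ N(0, W), then X is standard Laplace.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 200_000
w = rng.exponential(scale=2.0, size=n)       # mixing distribution on the variance
x = rng.standard_normal(n) * np.sqrt(w)      # normal scale mixture

# Compare sample quantiles with the standard Laplace quantiles.
probs = np.array([0.05, 0.25, 0.5, 0.75, 0.95])
print(np.quantile(x, probs))
print(stats.laplace.ppf(probs))
```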

741 citations


Journal ArticleDOI
TL;DR: In this article, the power of tests for distinguishing between fractional Gaussian noise and white noise or a first-order autoregressive process is investigated; the tests are based on the beta-optimal principle (Davies, 1969), local optimality and the rescaled range test.
Abstract: SUMMARY We consider the power of tests for distinguishing between fractional Gaussian noise and white noise or a first-order autoregressive process. Our tests are based on the beta-optimal principle (Davies, 1969), local optimality and the rescaled range test.
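Of the three statistics, the rescaled range is the simplest to state. A minimal sketch of the classical R/S statistic (the paper's exact standardization may differ):

```python
# Sketch of the rescaled range (R/S) statistic; large values point
# toward long-range dependence (fractional Gaussian noise) rather
# than white noise.
import numpy as np

def rescaled_range(x):
    """Classical R/S statistic of a series x."""
    x = np.asarray(x, dtype=float)
    d = np.cumsum(x - x.mean())          # partial sums of deviations
    r = d.max() - d.min()                # range of the partial sums
    s = x.std(ddof=1)                    # sample standard deviation
    return r / s

rng = np.random.default_rng(2)
white = rng.standard_normal(1000)
print(rescaled_range(white))             # O(sqrt(n)) for white noise
```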

548 citations


Journal ArticleDOI
TL;DR: In this paper, the authors examine the properties of a new class of bivariate distributions whose members are stochastically ordered and likelihood ratio dependent, and which can be used to construct families of distributions whose marginals are arbitrary and which include the Frechet bounds as well as the distribution corresponding to independent variables.
Abstract: SUMMARY This paper examines the properties of a new class of bivariate distributions whose members are stochastically ordered and likelihood ratio dependent. The proposed class can be used to construct bivariate families of distributions whose marginals are arbitrary and which include the Frechet bounds as well as the distribution corresponding to independent variables. Three nonparametric estimators of the association parameter are suggested, and Monte Carlo experiments are used to compare their small-sample behaviour to that of the maximum likelihood estimate.
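One family with exactly these properties is Frank's one-parameter copula, used here as an illustration (the summary above does not name the paper's family): sampling by conditional inversion, with the empirical Kendall's tau as a simple nonparametric estimator of association.

```python
# Illustrative sketch using Frank's copula: sample pairs by conditional
# inversion, then estimate association nonparametrically with Kendall's
# tau. The family and estimator here are assumptions for illustration.
import numpy as np
from scipy import stats

def sample_frank(n, alpha, rng):
    """Draw (u, v) from Frank's copula by conditional inversion."""
    u = rng.uniform(size=n)
    p = rng.uniform(size=n)
    d = np.expm1(-alpha)                          # e^{-alpha} - 1
    b = p * d / (np.exp(-alpha * u) * (1 - p) + p)
    v = -np.log1p(b) / alpha
    return u, v

rng = np.random.default_rng(3)
u, v = sample_frank(5000, alpha=5.0, rng=rng)
tau, _ = stats.kendalltau(u, v)
print(f"empirical Kendall's tau: {tau:.3f}")
```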

416 citations


Journal ArticleDOI
TL;DR: In this article, Wedderburn's original definition of quasi-likelihood for generalized linear models is extended to allow the comparison of variance functions as well as those of linear predictors and link functions.
Abstract: SUMMARY Wedderburn's original definition of quasi-likelihood for generalized linear models is extended to allow the comparison of variance functions as well as those of linear predictors and link functions. The relationship between generalized linear models and the use of transformations of the response variable is explored, and the ideas are illustrated by three examples.

Wedderburn (1974) extended generalized linear models by allowing the full distributional assumption about the random component in the model to be replaced by a much weaker assumption in which only the first and second moments were defined. In making this extension Wedderburn widened the scope of generalized linear models in a way very similar to that of Gauss when he replaced the assumption of normality in classical linear models by that of equal variance. For generalized linear models with distributions in the exponential family, likelihood ratio and score tests are used for testing hypotheses concerning nested subsets of covariates in the linear predictor and for assessing hypothesized link functions. These methods are also applicable with Wedderburn's form of quasi-likelihood. However, neither of these methods is suitable for the comparison of different variance functions. In this paper we introduce an extended quasi-likelihood function which allows for the comparison of various forms of all the components of a generalized linear model, i.e. the linear predictor, the link function, and the variance function. We then apply the ideas to the analysis of several sets of data.
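A sketch of how such a comparison might be computed, assuming the commonly quoted form of the extended quasi-likelihood, Q+ = -(1/2) Σ {d_i/φ + log(2πφV(y_i))}, with d_i the quasi-deviance contribution under variance function V and φ estimated by the mean deviance. Both the formula and the dispersion estimate are assumptions here, not the paper's exact recipe.

```python
# Sketch: evaluate an (assumed-form) extended quasi-likelihood under
# two candidate variance functions, so the variance functions become
# comparable on a common scale.
import numpy as np

def eql(y, mu, V, dev):
    """Extended quasi-likelihood (assumed form) for variance function V."""
    d = dev(y, mu)
    phi = d.mean()                               # crude moment estimate of dispersion
    return -0.5 * np.sum(d / phi + np.log(2 * np.pi * phi * V(y)))

# Quasi-deviance contributions for two candidate variance functions.
dev_mu  = lambda y, mu: 2 * (y * np.log(y / mu) - (y - mu))    # V(mu) = mu
dev_mu2 = lambda y, mu: 2 * ((y - mu) / mu - np.log(y / mu))   # V(mu) = mu^2

rng = np.random.default_rng(4)
y = rng.gamma(shape=4.0, scale=1.25, size=200)   # mean 5, variance prop. to mu^2
mu = np.full_like(y, y.mean())                   # intercept-only fit

print("EQL, V(mu)=mu  :", eql(y, mu, V=lambda t: t,    dev=dev_mu))
print("EQL, V(mu)=mu^2:", eql(y, mu, V=lambda t: t**2, dev=dev_mu2))
```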

392 citations



Journal ArticleDOI
TL;DR: An algorithm is described that uses explicit formulas for the inverse and the determinant of the covariance matrix given by La Motte and avoids the inversion of large matrices.
Abstract: An algorithm is described that uses explicit formulas for the inverse and the determinant of the covariance matrix given by La Motte (1972) and avoids the inversion of large matrices.
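The flavour of such explicit formulas can be illustrated (this is not La Motte's exact algorithm) with a covariance of variance-component form K = σ²I + ZDZ^T, where the Woodbury identity and the matrix determinant lemma reduce the n × n inverse and determinant to a small q × q computation.

```python
# Illustrative sketch (not La Motte's exact formulas): for a covariance
# of variance-component form  K = s2*I + Z D Z',  the Woodbury identity
# and the matrix determinant lemma give K^{-1} and det(K) from a small
# q x q solve, avoiding inversion of the large n x n matrix.
import numpy as np

rng = np.random.default_rng(5)
n, q = 500, 3
Z = rng.standard_normal((n, q))
D = np.diag([2.0, 1.0, 0.5])
s2 = 1.5

M = np.linalg.inv(D) + Z.T @ Z / s2            # small q x q matrix
Kinv = np.eye(n) / s2 - (Z @ np.linalg.solve(M, Z.T)) / s2**2
logdet = n * np.log(s2) + np.linalg.slogdet(D)[1] + np.linalg.slogdet(M)[1]

# Check against the direct n x n computation.
K = s2 * np.eye(n) + Z @ D @ Z.T
print(np.allclose(Kinv, np.linalg.inv(K)))
print(np.isclose(logdet, np.linalg.slogdet(K)[1]))
```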

374 citations


Journal ArticleDOI
TL;DR: In this article, the product-limit estimator of the survival curve S or an appropriate conditional version of S is considered and the asymptotic behavior of the estimator is briefly described together with an example.
Abstract: SUMMARY In many applications involving follow-up studies, individuals' lifetimes may be subject to left truncation in addition to the usual right censoring. Here we consider the product-limit estimator of the survival curve S, or an appropriate conditional version of S. The asymptotic behaviour of the estimator is briefly described, together with an example.
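A minimal sketch of the product-limit estimator under left truncation and right censoring, assuming data recorded as (truncation time L, observed time T, event indicator d): an individual is at risk at time t only when L ≤ t ≤ T.

```python
# Product-limit estimator with a left-truncation-adjusted risk set.
import numpy as np

def product_limit(L, T, d):
    """Return event times and the estimated S(t) just after each."""
    L, T, d = map(np.asarray, (L, T, d))
    times = np.unique(T[d == 1])
    S, s = [], 1.0
    for t in times:
        at_risk = np.sum((L <= t) & (T >= t))    # truncation-adjusted risk set
        events = np.sum((T == t) & (d == 1))
        s *= 1.0 - events / at_risk
        S.append(s)
    return times, np.array(S)

L = [0.0, 0.5, 1.0, 0.2, 0.0]                    # illustrative data
T = [2.0, 1.5, 3.0, 2.5, 1.0]
d = [1, 0, 1, 1, 1]
print(product_limit(L, T, d))
```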

338 citations


Journal ArticleDOI
TL;DR: In this article, prepivoting, the transformation of a confidence set root by its estimated bootstrap cumulative distribution function, is introduced; bootstrap confidence sets generated from a root prepivoted one or more times have smaller error in level than confidence sets based on the original root.
Abstract: SUMMARY Approximate confidence sets for a parameter θ may be obtained by referring a function of θ and of the sample to an estimated quantile of that function's sampling distribution. We call this function the root of the confidence set. Either asymptotic theory or bootstrap methods can be used to estimate the desired quantile. When the root is not a pivot, in the sense of classical statistics, the actual level of the approximate confidence set may differ substantially from the intended level. Prepivoting is the transformation of a confidence set root by its estimated bootstrap cumulative distribution function. Prepivoting can be iterated. Bootstrap confidence sets generated from a root prepivoted one or more times have smaller error in level than do confidence sets based on the original root. The first prepivoting is nearly equivalent to studentizing, when that operation is appropriate. Further iterations of prepivoting make higher order corrections automatically.
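A rough sketch of one round of prepivoting in its calibration guise (a double bootstrap for the unstudentized mean root; the root and all tuning constants are illustrative assumptions, not the paper's recommendations):

```python
# One round of prepivoting, viewed as double-bootstrap calibration:
# each outer bootstrap root is transformed by its inner-bootstrap CDF,
# and the transformed (prepivoted) values calibrate the root quantiles.
import numpy as np

rng = np.random.default_rng(6)

def prepivoted_interval(x, alpha=0.10, B=500, C=200):
    n, theta_hat = len(x), x.mean()
    roots, u = np.empty(B), np.empty(B)
    for b in range(B):
        xb = rng.choice(x, n)
        roots[b] = xb.mean() - theta_hat          # outer bootstrap root
        inner = np.array([rng.choice(xb, n).mean() - xb.mean()
                          for _ in range(C)])
        # Prepivoting: transform the root by its estimated bootstrap CDF.
        u[b] = np.mean(inner <= roots[b])
    # Calibrated quantiles of the root via the prepivoted values.
    lo = np.quantile(roots, np.quantile(u, alpha / 2))
    hi = np.quantile(roots, np.quantile(u, 1 - alpha / 2))
    return theta_hat - hi, theta_hat - lo

x = rng.exponential(size=40)
print(prepivoted_interval(x))
```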

309 citations


Journal ArticleDOI
TL;DR: In this paper, the sensitivity of permutation inferences to a range of assumptions about unobserved covariates in matched observational studies is analyzed for Wilcoxon's signed rank test, McNemar-Cox test for paired binary responses, and to some matching problems with a variable number of controls.
Abstract: SUMMARY In observational studies, treatments are not randomly assigned to experimental units, so that randomization tests and their associated interval estimates are not generally applicable. In an effort to compensate for the lack of randomization, treated and control units are often matched on the basis of observed covariates; however, the possibility remains of bias due to residual imbalances in unobserved covariates. A general though simple method is proposed for displaying the sensitivity of permutation inferences to a range of assumptions about unobserved covariates in matched observational studies. The sensitivity analysis is applicable to Wilcoxon's signed rank test, to the McNemar-Cox test for paired binary responses, and to some matching problems with a variable number of controls.
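A sketch of the kind of sensitivity bound involved, for Wilcoxon's signed rank test: assuming a bound Γ on the odds of treatment within a matched pair, the chance that a pair's difference is positive lies between 1/(1+Γ) and Γ/(1+Γ), and a normal approximation turns these into p-value bounds. The constants and the approximation are standard but assumed here, not copied from the paper.

```python
# Sensitivity bounds for the one-sided p-value of Wilcoxon's signed
# rank statistic under a bias bound Gamma on within-pair treatment odds.
import numpy as np
from scipy import stats

def signed_rank_pvalue_bounds(diffs, gamma=2.0):
    d = np.asarray(diffs, dtype=float)
    d = d[d != 0]
    ranks = stats.rankdata(np.abs(d))
    T = ranks[d > 0].sum()                       # observed signed rank statistic
    bounds = []
    for p in (1 / (1 + gamma), gamma / (1 + gamma)):
        mean = p * ranks.sum()
        var = p * (1 - p) * (ranks ** 2).sum()
        bounds.append(stats.norm.sf((T - mean) / np.sqrt(var)))
    return max(bounds), min(bounds)              # upper and lower p-value bounds

rng = np.random.default_rng(7)
diffs = rng.normal(0.5, 1.0, size=50)            # illustrative pair differences
print(signed_rank_pvalue_bounds(diffs, gamma=2.0))
```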

295 citations


Journal ArticleDOI
TL;DR: In this article, when the explanatory vectors are independent and identically distributed with unknown distribution, efficient score functions are obtained using the theory developed in Begun et al. (1983).
Abstract: This paper studies estimation of the parameters of generalized linear models in canonical form when the explanatory vector is measured with independent normal error. For the functional case, i.e., when the explanatory vectors are fixed constants, unbiased score functions are obtained by conditioning on certain sufficient statistics. This work generalizes results obtained for logistic regression. In the case that the explanatory vectors are independent and identically distributed with unknown distribution, efficient score functions are obtained using the theory developed in Begun et al. (1983). Keywords: Conditional score function; Efficient score function; Functional model; Generalized linear model; Measurement error; Structural model.

Journal ArticleDOI
TL;DR: In this article, it was proposed that for ranking objects or players in an incomplete paired-comparison experiment or tournament with at most one comparison per pair, the score of a player, C, be the total number of (a) wins of players defeated by C minus losses of players to whom C lost, plus (b) C's wins minus C's losses.
Abstract: SUMMARY It is proposed that for ranking objects or players in an incomplete paired-comparison experiment or tournament with at most one comparison per pair, the score of a player, C, be the total number of (a) wins of players defeated by C minus losses of players to whom C lost, plus (b) C's wins minus C's losses. A tied match counts as half a win plus half a loss. More general tournaments can be treated similarly.
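The scoring rule is concrete enough to implement directly. In this sketch, ties enter only the win-loss tallies of part (b); the paper's full treatment of ties within part (a) is not reproduced.

```python
# Direct implementation of the proposed score for player C:
# (a) total wins of players C defeated minus total losses of players
#     to whom C lost, plus
# (b) C's wins minus C's losses, ties counting half a win, half a loss.
from collections import defaultdict

# results[(a, b)] = 1.0 if a beat b, 0.0 if a lost, 0.5 for a tie;
# at most one comparison per pair.
results = {("A", "B"): 1.0, ("B", "C"): 1.0, ("C", "D"): 0.5, ("A", "D"): 0.0}

wins, losses = defaultdict(float), defaultdict(float)
beaten, lost_to = defaultdict(list), defaultdict(list)

for (a, b), r in results.items():
    wins[a] += r; losses[a] += 1 - r
    wins[b] += 1 - r; losses[b] += r
    if r == 1.0:
        beaten[a].append(b); lost_to[b].append(a)
    elif r == 0.0:
        beaten[b].append(a); lost_to[a].append(b)

players = {p for pair in results for p in pair}
for c in sorted(players):
    score = (sum(wins[o] for o in beaten[c])
             - sum(losses[o] for o in lost_to[c])
             + wins[c] - losses[c])
    print(c, score)
```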

Journal ArticleDOI
TL;DR: In this paper, the design aspect of this procedure is explored in terms of the maximum sample size needed to achieve a desired level of power and the expected stopping times under both null and alternative hypotheses.
Abstract: SUMMARY Lan & DeMets (1983) devised a method of constructing discrete group sequential boundaries by using the type I error spending rate function. It is extended so as to generate asymmetric as well as symmetric two-sided boundaries for clinical trials. The design aspect of this procedure is explored in terms of the maximum sample size needed to achieve a desired level of power and the expected stopping times under both null and alternative hypotheses. Finally, these properties are employed in search of appropriate type I error spending rate functions for differing situations.
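A rough Monte Carlo sketch of the error-spending construction for two looks, assuming a linear spending function α(t) = αt and the Brownian correlation √(t₁/t₂) between the interim statistics (both illustrative choices, not the paper's recommendations):

```python
# Symmetric two-sided error spending with two looks: spend alpha(t1)
# at the first look, then pick the second boundary so the cumulative
# type I error equals alpha(t2), using simulated joint statistics.
import numpy as np
from scipy import stats

alpha, t1, t2 = 0.05, 0.5, 1.0
spend = lambda t: alpha * t                      # assumed spending function

c1 = stats.norm.isf(spend(t1) / 2)               # first two-sided boundary

rho = np.sqrt(t1 / t2)                           # Brownian-motion correlation
rng = np.random.default_rng(8)
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]],
                            size=400_000)

no_cross = np.abs(z[:, 0]) < c1                  # trials not stopped at look 1
extra = spend(t2) - (1 - no_cross.mean())        # error left to spend at look 2
tail = extra / no_cross.mean()                   # conditional crossing probability
c2 = np.quantile(np.abs(z[no_cross, 1]), 1 - tail)
print(f"boundaries: c1 = {c1:.3f}, c2 = {c2:.3f}")
```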

Journal ArticleDOI
TL;DR: In this paper, the authors consider the use of the smoothed bootstrap and the standard bootstrap for estimating properties of unknown distributions such as the sampling error of parameter estimates, and they develop criteria for determining whether it is advantageous to use the smoothed bootstrap rather than the traditional bootstrap.
Abstract: SUMMARY The bootstrap and smoothed bootstrap are considered as alternative methods of estimating properties of unknown distributions such as the sampling error of parameter estimates. Criteria are developed for determining whether it is advantageous to use the smoothed bootstrap rather than the standard bootstrap. Key steps in the argument leading to these criteria include the study of the estimation of linear functionals of distributions and the approximation of general functionals by linear functionals. Consideration of an example, the estimation of the standard error in the variance-stabilized sample correlation coefficient, elucidates previously published simulation results and also illustrates the use of computer algebraic manipulation as a useful technique in asymptotic statistics. Finally, the various approximations used are vindicated by a simulation study.
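A minimal sketch of the two resampling schemes being compared, with a Gaussian kernel for the smoothed version (the bandwidth choice below is arbitrary):

```python
# Standard vs. smoothed bootstrap for the standard error of a statistic:
# the smoothed version resamples from a kernel density estimate, i.e.
# adds kernel noise to the resampled points.
import numpy as np

rng = np.random.default_rng(9)

def boot_se(x, stat, B=2000, h=0.0):
    """h = 0: standard bootstrap; h > 0: Gaussian-kernel smoothed bootstrap."""
    n = len(x)
    reps = np.empty(B)
    for b in range(B):
        xb = rng.choice(x, n) + h * rng.standard_normal(n)
        reps[b] = stat(xb)
    return reps.std(ddof=1)

x = rng.standard_normal(30)
print("standard bootstrap SE:", boot_se(x, np.median))
print("smoothed bootstrap SE:", boot_se(x, np.median, h=0.3 * x.std()))
```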

Journal ArticleDOI
TL;DR: In this paper, the problem of testing a sequence of independent normal random variables with constant, known or unknown, variance for no change in mean versus alternatives with a single change-point is considered.
Abstract: SUMMARY The problem considered is that of testing a sequence of independent normal random variables with constant, known or unknown, variance for no change in mean versus alternatives with a single change-point. Various tests, such as those based on the likelihood ratio and recursive residuals, are studied. Power approximations are developed by integrating approximations for conditional boundary crossing probabilities. A comparison of several tests is made.
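A sketch of the likelihood ratio scan for a single change in mean with known variance (taken as 1 here for simplicity); the paper's contribution, the boundary-crossing power approximations, is not reproduced.

```python
# Likelihood ratio scan statistic for one change in mean, variance 1:
# maximize over k the standardized difference between the means of the
# first k and the last n-k observations.
import numpy as np

def lr_changepoint_stat(x):
    x = np.asarray(x, dtype=float)
    n = len(x)
    best = 0.0
    for k in range(1, n):
        m1, m2 = x[:k].mean(), x[k:].mean()
        z = (m1 - m2) / np.sqrt(1.0 / k + 1.0 / (n - k))
        best = max(best, abs(z))
    return best            # compare with boundary-crossing approximations

rng = np.random.default_rng(10)
x = np.concatenate([rng.normal(0, 1, 60), rng.normal(1, 1, 40)])
print(lr_changepoint_stat(x))
```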

Journal ArticleDOI
TL;DR: In this paper, two natural classes of kernel density estimators for use with spherical data are studied, and explicit formulae are given for bias, variance and loss, and large-sample properties of these quantities are described.
Abstract: SUMMARY We study two natural classes of kernel density estimators for use with spherical data. Members of both classes have already been used in practice. The classes have an element in common, but for the most part they are disjoint. However, all members of the first class are asymptotically equivalent to one another, and to a single element of the second class. In this sense the second class 'contains' the first. It includes some estimators which out-perform all those in the first class, if loss is measured in either squared-error or Kullback-Leibler senses. Explicit formulae are given for bias, variance and loss, and large-sample properties of these quantities are described. Numerical illustrations are presented.
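A sketch of one estimator of the exponential type that such classes contain, assuming a von Mises-Fisher-shaped kernel on the sphere S² (the paper's classes are more general):

```python
# Kernel density estimate on the unit sphere S^2 with a von
# Mises-Fisher-shaped kernel proportional to exp(kappa * x . X_i).
import numpy as np

def vmf_kde(x, data, kappa):
    """Kernel estimate at unit vector x from unit vectors in data."""
    c = kappa / (4 * np.pi * np.sinh(kappa))     # vMF normalizing constant on S^2
    return c * np.mean(np.exp(kappa * data @ x))

rng = np.random.default_rng(11)
data = rng.standard_normal((300, 3))
data /= np.linalg.norm(data, axis=1, keepdims=True)   # roughly uniform directions
print(vmf_kde(np.array([0.0, 0.0, 1.0]), data, kappa=10.0))  # ~ 1/(4*pi) = 0.0796
```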

Journal ArticleDOI
TL;DR: This paper examines the consistency of the word usage in a previously unknown nine-stanza poem attributed to Shakespeare with that of the Shakespearean canon, using a nonparametric empirical Bayes model.
Abstract: SUMMARY The consistency of the word usage in a previously unknown nine-stanza poem attributed to Shakespeare with that of the Shakespearean canon is examined using a nonparametric empirical Bayes model. We consider also poems by Jonson, Marlowe and Donne, as well as four poems definitely attributed to Shakespeare. On balance, the poem is found to fit previous Shakespearean usage reasonably well.

Journal ArticleDOI
TL;DR: In this article, the statistical theory for the angular central Gaussian model is presented; topics treated include maximum likelihood estimation of the parameters, testing for uniformity and circularity, and principal components analysis.
Abstract: SUMMARY The angular central Gaussian distribution is an alternative to the Bingham distribution for modeling antipodally symmetric directional data. In this paper the statistical theory for the angular central Gaussian model is presented. Some topics treated are maximum likelihood estimation of the parameters, testing for uniformity and circularity, and principal components analysis. Comparisons to methods based upon the sample second moments are made via an example.
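Maximum likelihood for this model is usually computed by a fixed-point iteration on the shape matrix. The sketch below uses the standard iteration (assumed here, with the scale fixed by trace(Σ) = p since Σ is identified only up to scale):

```python
# Fixed-point ML for the angular central Gaussian on the sphere:
#   Sigma <- (p/n) * sum_i x_i x_i' / (x_i' Sigma^{-1} x_i),
# renormalized so trace(Sigma) = p.
import numpy as np

def acg_mle(X, iters=100):
    n, p = X.shape
    S = np.eye(p)
    for _ in range(iters):
        w = 1.0 / np.einsum("ij,jk,ik->i", X, np.linalg.inv(S), X)
        S = (p / n) * (X * w[:, None]).T @ X
        S *= p / np.trace(S)                     # fix the scale
    return S

rng = np.random.default_rng(12)
Z = rng.standard_normal((500, 3)) @ np.diag([2.0, 1.0, 0.5])
X = Z / np.linalg.norm(Z, axis=1, keepdims=True)  # directions of normal vectors
print(acg_mle(X).round(3))
```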

Journal ArticleDOI
TL;DR: In this article, a new extension of the least-squares estimator to censored data is proposed; it is equivalent to applying the ordinary least-squares estimator to synthetic times, times constructed by magnifying the gaps between successive order statistics.
Abstract: SUMMARY Estimators for the linear model in the presence of censoring are available. A new extension of the least-squares estimator to censored data is equivalent to applying the ordinary least-squares estimator to synthetic times, times constructed by magnifying the gaps between successive order statistics. Under suitable regularity conditions, the synthetic data estimator is Fisher consistent and asymptotically normal. Examples facilitate comparison of the synthetic data estimator with estimators proposed by Buckley & James (1979) and by Koul, Susarla & Van Ryzin (1981).

The entire lifetime of a person, a machine, a plant or an animal is not always observable. Some lifetimes may be censored, in that only a lower bound for the lifetime is recorded. Statisticians are often interested in modelling the distribution of the true lifetimes as a function of covariates. When censoring is present, a proportional hazards model is often used. When there is no censoring, linear models are often used for ad hoc modelling of dependent variables. Miller & Halpern (1982) survey the methods that have been proposed for the linear model in the presence of censoring. This paper describes a closed-form method which is consistent and asymptotically normal. This method incorporates censoring in a natural way. Section 2 formulates classical least-squares estimation in a manner that generalizes to accommodate random right censoring. Section 3 gives some theoretical results for the simplest nontrivial linear model, the two sample case. Section 4 contains examples and the final section discusses the role of this estimator. The covariate vector of the ith person will be denoted by x_i, and usually includes one as the first component. The true lifetime Y_i follows a linear model if its conditional distribution given x_i is that of x_i^T β + ε_i, where the errors ε_i are independent with a common distribution.
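For contrast with the gap-magnifying construction, here is a sketch of the Koul, Susarla & Van Ryzin synthetic-data transform mentioned above, which divides uncensored responses by a Kaplan-Meier estimate of the censoring survival function and then runs ordinary least squares. This illustrates the synthetic-data idea, not this paper's estimator, and the estimates are noisy.

```python
# Koul-Susarla-Van Ryzin synthetic data: y*_i = delta_i * t_i / G(t_i-),
# with G the Kaplan-Meier estimate of the censoring survival function.
import numpy as np

def censoring_km(t, delta):
    """Kaplan-Meier of G(s) = P(C > s), evaluated at each t_i (left limit)."""
    t = np.asarray(t, float); delta = np.asarray(delta)
    cens_times = np.unique(t[delta == 0])
    G = np.ones_like(t)
    for i, u in enumerate(t):
        surv = 1.0
        for tj in cens_times:
            if tj < u:
                at_risk = np.sum(t >= tj)
                d = np.sum((t == tj) & (delta == 0))
                surv *= 1 - d / at_risk
        G[i] = surv
    return G

rng = np.random.default_rng(13)
n = 200
x = rng.uniform(0, 2, n)
y = 1.0 + 2.0 * x + rng.normal(0, 0.5, n)        # true responses
c = rng.uniform(0, 8, n)                         # censoring times
t = np.minimum(y, c)
delta = (y <= c).astype(int)

G = censoring_km(t, delta)
ystar = np.where(delta == 1, t / np.maximum(G, 1e-12), 0.0)   # synthetic data
X = np.column_stack([np.ones(n), x])
print(np.linalg.lstsq(X, ystar, rcond=None)[0])  # roughly [1.0, 2.0]
```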

Journal ArticleDOI
TL;DR: A new test of the proportional hazards assumption for two-sample censored data is presented in this paper, which is based on a comparison of different generalized rank estimators of the relative risk.
Abstract: SUMMARY A new test of the proportional hazards assumption for two-sample censored data is presented. The test is based on a comparison of different generalized rank estimators of the relative risk. Asymptotic normality and consistency against alternatives with monotone hazard ratio are shown and relationships to other proposals are pointed out. A related graphical method is presented and recommendations for the choice of appropriate weight functions are given. The strengths and weaknesses of the test procedure proposed are illustrated by four examples featuring various situations which are of practical importance in clinical research.

Journal ArticleDOI
TL;DR: This article improves the fit of the logistic regression model for binary data by transforming the vector of explanatory variables; the transformations required are the functions of the explanatory variables which appear in the log density ratio of the conditional distributions.
Abstract: SUMMARY Some results are presented on improving the fit of the logistic regression model for binary data by transforming the vector of explanatory variables. The methods are based on consideration of the distributions of these variables conditional on outcome group. The transformations required are the functions of the explanatory variables which appear in the log density ratio of the conditional distributions.
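A worked instance of the recipe: when the explanatory variable is normal within each outcome group with unequal variances, the log density ratio is quadratic, so the logistic model should carry x and x². The coefficients below are derived analytically and checked numerically.

```python
# If x | group 1 ~ N(m1, s1^2) and x | group 0 ~ N(m0, s0^2) with
# s1 != s0, the log density ratio is quadratic in x, so the logistic
# model should include both x and x**2 as explanatory terms.
import numpy as np
from scipy import stats

m0, s0, m1, s1 = 0.0, 1.0, 1.0, 2.0
x = np.linspace(-4, 6, 5)

log_ratio = stats.norm.logpdf(x, m1, s1) - stats.norm.logpdf(x, m0, s0)

# Coefficients of the quadratic log density ratio, derived analytically.
a = np.log(s0 / s1) + m0**2 / (2 * s0**2) - m1**2 / (2 * s1**2)
b = m1 / s1**2 - m0 / s0**2
c = 1 / (2 * s0**2) - 1 / (2 * s1**2)
print(np.allclose(log_ratio, a + b * x + c * x**2))   # True
```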

Journal ArticleDOI
TL;DR: In this article, procedures based on quadratic form rank statistics to test for one or more changepoints in a series of independent observations are considered, and models incorporating both smooth and abrupt changes are introduced.
Abstract: SUMMARY We consider procedures based on quadratic form rank statistics to test for one or more changepoints in a series of independent observations. Models incorporating both smooth and abrupt changes are introduced. Various test statistics are suggested, their asymptotic null distributions are derived and tables of significance points are given. A Monte Carlo study shows that the asymptotic significance points are applicable to moderately sized samples.

Journal ArticleDOI
TL;DR: In this article, it is shown that the profile likelihood can be multimodal, and that the global maximum may not correspond to a sensible value of the parameters of a Gaussian process.
Abstract: SUMMARY Maximum likelihood has frequently been suggested as a way to estimate covariance parameters in spatial Gaussian processes. The results of some simple examples based on simulated data are discussed. It is shown by an example with real data that the profile likelihood can be multimodal, and that the global maximum may not correspond to a sensible value of the parameters.

Mardia & Marshall (1984) worked in the following framework. We have a Gaussian process {Z(x) : x ∈ X} with mean given by a regression equation, E{Z(x)} = f(x)^T β, for a p-dimensional vector β. The covariance function cov{Z(x), Z(y)} = c(x, y; θ) is also specified, by a q-dimensional vector θ. This process is observed at n points x_1, ..., x_n, and from these observations we wish to estimate β and θ. Let Z_n = {Z(x_1), ..., Z(x_n)}^T be the vector of observations, and K = [c(x_i, x_j; θ)] be its covariance matrix. Finally, let F be the matrix with rows f(x_i)^T. Then Z_n has a multivariate normal distribution with mean Fβ and dispersion matrix K, and the log likelihood is, up to an additive constant,

l(β, θ) = -(1/2) log det K - (1/2) (Z_n - Fβ)^T K^{-1} (Z_n - Fβ).
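A minimal sketch of profiling this log likelihood over a one-parameter exponential covariance on a line transect, with β profiled out by generalized least squares (the covariance family and the design are illustrative choices, not the paper's data):

```python
# Profile log likelihood over theta for c(x, y; theta) = exp(-|x-y|/theta),
# with beta replaced by its generalized least-squares estimate.
import numpy as np

def profile_loglik(z, xs, F, theta):
    K = np.exp(-np.abs(xs[:, None] - xs[None, :]) / theta)
    Ki = np.linalg.inv(K)
    beta = np.linalg.solve(F.T @ Ki @ F, F.T @ Ki @ z)   # GLS estimate of beta
    r = z - F @ beta
    return -0.5 * np.linalg.slogdet(K)[1] - 0.5 * r @ Ki @ r

rng = np.random.default_rng(15)
xs = np.sort(rng.uniform(0, 10, 60))
K = np.exp(-np.abs(xs[:, None] - xs[None, :]) / 2.0)     # true theta = 2
z = 1.0 + np.linalg.cholesky(K + 1e-10 * np.eye(60)) @ rng.standard_normal(60)
F = np.ones((60, 1))

for theta in (0.5, 1.0, 2.0, 4.0, 8.0):
    print(theta, round(profile_loglik(z, xs, F, theta), 2))
```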

Journal ArticleDOI
TL;DR: In this paper, a more general class of estimating functions is investigated which avoids such failure and also is not restricted to the particular forms of mean and variance function of the GLIM.
Abstract: SUMMARY In some cases maximum quasi-likelihood estimation, which is at the core of GLIM, can fail to give reasonable results. A more general class of estimating functions is investigated which avoids such failure and also is not restricted to the particular forms of mean and variance function of GLIM.

Journal ArticleDOI
TL;DR: In this article, the authors adjusted the standard chi-squared, X2, or likelihood ratio, G2 test statistics for logistic regression analysis, involving a binary response variable, to take account of the survey design.
Abstract: SUMMARY Standard chi-squared, X2, or likelihood ratio, G2, test statistics for logistic regression analysis, involving a binary response variable, are adjusted to take account of the survey design. These adjustments are based on certain generalized design effects. Logistic regression diagnostics to detect any outlying cell proportions in the table and influential points in the factor space are also developed, taking account of the survey design. Finally, the results are used to analyse some data from the October 1980 Canadian Labour Force Survey.

Journal ArticleDOI
TL;DR: In this article, the authors considered a class of latent variable models which includes the unrestricted factor analysis model and showed that minimum discrepancy test statistics and estimators derived under normality assumptions retain their asymptotic properties when the common factors are not normally distributed but the unique factors do have a multivariate normal distribution.
Abstract: SUMMARY A class of latent variable models which includes the unrestricted factor analysis model is considered. It is shown that minimum discrepancy test statistics and estimators derived under normality assumptions retain their asymptotic properties when the common factors are not normally distributed but the unique factors do have a multivariate normal distribution. The minimum discrepancy test statistics and estimators considered include the usual likelihood ratio test statistic and maximum likelihood estimators.

Journal ArticleDOI
TL;DR: In this article, the estimation of the parameters of a stationary random field on a d-dimensional lattice by minimizing the classical Whittle approximation to the Gaussian log likelihood is considered.
Abstract: SUMMARY We consider the estimation of the parameters of a stationary random field on a d-dimensional lattice by minimizing the classical Whittle approximation to the Gaussian log likelihood. If the usual biased sample covariances are used, the estimate is efficient only in one dimension. To remove this edge effect, we introduce data tapers and show that the resulting modified estimate is efficient also in two and three dimensions. This avoids the use of the unbiased sample covariances, which are in general not positive-definite.
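A one-dimensional sketch of the tapered Whittle criterion (the paper's point is that tapering matters in two and three dimensions; one dimension is used here only to keep the code short, and the cosine taper is an assumed choice):

```python
# Tapered Whittle criterion: form the periodogram from tapered data
# h_t * x_t, then sum log f + I/f over nonzero Fourier frequencies.
import numpy as np

def whittle_neglik(x, specfun, taper=True):
    n = len(x)
    h = np.sin(np.pi * (np.arange(n) + 0.5) / n) if taper else np.ones(n)
    xt = h * (x - x.mean())
    I = np.abs(np.fft.rfft(xt)) ** 2 / (2 * np.pi * (h ** 2).sum())
    w = 2 * np.pi * np.arange(len(I)) / n        # Fourier frequencies
    f = specfun(w)
    return np.sum(np.log(f[1:]) + I[1:] / f[1:])  # drop frequency zero

# AR(1) spectral density f(w) = s2 / (2*pi*|1 - phi e^{-iw}|^2).
def ar1_spec(phi, s2=1.0):
    return lambda w: s2 / (2 * np.pi * np.abs(1 - phi * np.exp(-1j * w)) ** 2)

rng = np.random.default_rng(16)
x = np.empty(2000); x[0] = 0.0
for t in range(1, 2000):
    x[t] = 0.6 * x[t - 1] + rng.standard_normal()

for phi in (0.3, 0.5, 0.6, 0.7):                 # minimum should be near 0.6
    print(phi, round(whittle_neglik(x, ar1_spec(phi)), 1))
```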

Journal ArticleDOI
TL;DR: In this article, a procedure is proposed for estimating regression coefficients in generalized linear models with canonical link when one or more of the covariates is measured with error; it is compared numerically with the exact maximum likelihood solution, obtained by using Gaussian quadrature instead of the approximations in the E-step of the EM algorithm.
Abstract: SUMMARY The EM algorithm is used to obtain estimators of regression coefficients for generalized linear models with canonical link when normally distributed covariates are masked by normally distributed measurement errors. By casting the true covariates as 'missing data', the EM procedure suggests an iterative scheme in which each cycle consists of an E-step, requiring the computation of approximate first and second conditional moments of the true covariates given the observed data, followed by an M-step in which regression parameters are updated by iteratively reweighted least squares based on these approximations. The proposed procedure is compared numerically with the exact maximum likelihood solution, obtained by using Gaussian quadrature instead of the approximations in the E-step of the EM algorithm, and with alternative estimators for simple logistic regression with measurement error. The results for the proposed procedure are encouraging.

A procedure is proposed in this paper for estimating regression coefficients in generalized linear models when one or more of the covariates is measured with error. This work is related to the paper by Carroll et al. (1984) in which maximum likelihood estimates are considered for the structural logistic regression model. These authors provide estimates for the computationally simpler probit regression model and suggest that the structural logistic regression can be done in principle. This paper follows up on that suggestion but differs in detail from similar work by Armstrong (1985). Other procedures with related aims have been suggested by Stefanski & Carroll (1985), Wolter & Fuller (1982) and Prentice (1982). For a quick introduction to the proposed method it is convenient to display the naive estimator which ignores measurement error in its standard computational form. If y_i is an observed response variable and x_i is the observed covariate, then, for the model which equates the canonical parameter of the distribution of y_i to x_i^T β, the maximum likelihood estimator of β can be obtained by iteratively reweighted least squares. The estimate of β after (s+1) cycles is given by

β^(s+1) = (X^T W^(s) X)^{-1} X^T W^(s) z^(s),

where W^(s) is the diagonal matrix of iterative weights and z^(s) is the working response vector, both evaluated at β^(s).
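The naive iteratively reweighted least-squares estimator displayed above is easy to write down for logistic regression; the sketch below also shows the attenuation that ignoring measurement error produces (simulation settings are illustrative):

```python
# Naive IRLS for logistic regression, ignoring covariate measurement
# error: beta_{s+1} = (X' W X)^{-1} X' W z with w_i = p_i(1-p_i) and
# working response z_i = x_i'beta + (y_i - p_i)/w_i.
import numpy as np

def irls_logistic(X, y, iters=25):
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        w = p * (1 - p)
        z = X @ beta + (y - p) / w
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * z))
    return beta

rng = np.random.default_rng(17)
n = 1000
x_true = rng.standard_normal(n)
x_obs = x_true + rng.normal(0, 0.5, n)           # covariate masked by error
y = (rng.uniform(size=n) <
     1 / (1 + np.exp(-(0.5 + 1.0 * x_true)))).astype(float)

X = np.column_stack([np.ones(n), x_obs])
print(irls_logistic(X, y))                       # slope attenuated toward zero
```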

Journal ArticleDOI
TL;DR: A test is given to detect clustering in disease incidence or mortality data, using as test statistic the mean distance between all pairs of disease cases; its null mean, variance and asymptotic normality are derived under assumptions that accommodate differences in population distribution among demographic subgroups at different disease risk.
Abstract: SUMMARY A test is given to detect clustering in disease incidence or mortality data. The test statistic is the mean distance between all pairs of disease cases. Its null mean and variance and its asymptotic normality are derived under assumptions that accommodate differences in population distribution among demographic subgroups at different disease risk. The test is illustrated on 63 cases of anal and rectal squamous cell carcinoma in San Francisco during 1973-1981.

Patterns of disease incidence and mortality over time, space, or occupational categories can provide clues to the cause of the disease. An aetiologic agent can produce a spatial, temporal or occupational cluster of disease cases. It is seldom difficult to detect clusters of rare diseases like angiosarcoma of the liver among men occupationally exposed to polyvinyl chloride. However more common diseases present two problems. First, clusters may be obscured by the scattered occurrence of cases unrelated to the cause of the clusters. Secondly, clusters may be produced by factors unrelated to the disease process, such as variations in the overall population distribution, or variations in the distributions of demographic subgroups at high disease risk. It is therefore useful to have a method for detecting clusters that will adjust for such factors.

Previous tests for clustering, for example, Pinkel & Nefzger (1959), Knox (1964), Mantel (1967), are not satisfactory for the study of chronic disease such as cancer. Those tests are designed to determine whether cases are clustered both in space and in time simultaneously. However, cases of chronic disease caused by a spatially localized agent may be close in space, but they are unlikely to be close in time, because of long and variable time periods between exposure and diagnosis. Thus there is need for alternative methods to detect spatial clusters in regional disease incidence or mortality data.

Consider the set X = {x_1, ..., x_K} of points called census tracts, with N_k cases of disease, called cancer, in tract x_k. Under the hypothesis H_0 of no clustering, the N_k are independent Poisson variables, with the mean of N_k proportional to the population size e_k in tract x_k:

E(N_k) = λ e_k, for some constant λ > 0.
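A Monte Carlo sketch of the test's ingredients, with multinomial allocation of the observed case total standing in for the Poisson null (the paper derives the null mean, variance and asymptotic normality analytically rather than by simulation; all data below are synthetic):

```python
# Mean distance between all pairs of cases, with a simulated null in
# which cases are allocated to tracts proportionally to population.
import numpy as np

def mean_pairwise_distance(coords, counts):
    pts = np.repeat(coords, counts, axis=0)
    diff = pts[:, None, :] - pts[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    n = len(pts)
    return dist[np.triu_indices(n, 1)].mean()

rng = np.random.default_rng(18)
K = 50
coords = rng.uniform(0, 10, (K, 2))              # tract centroids
pop = rng.integers(100, 1000, K)                 # tract populations

cases = rng.multinomial(60, pop / pop.sum())     # stand-in "observed" cases
obs = mean_pairwise_distance(coords, cases)

null = [mean_pairwise_distance(coords, rng.multinomial(60, pop / pop.sum()))
        for _ in range(500)]
print("p-value (small distances = clustering):",
      np.mean(np.array(null) <= obs))
```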

Journal ArticleDOI
TL;DR: For estimating the single latencies, a maximum likelihood approach employing iterative Fisher scoring is developed, and a statistic for testing for the presence of latency variation is derived.
Abstract: SUMMARY The electric response of the brain related to some event, i.e. a stimulus, is usually estimated by repeated stimulation and subsequent averaging of the activity recorded time-locked to the stimulus. The present paper deals with the model in which the single responses may have varying latency, i.e. arrival time, after stimulus onset. For estimating the single latencies, a maximum likelihood approach employing iterative Fisher scoring is developed. Further, a statistic for testing for the presence of latency variation is derived. Simulations and real data applications are discussed.
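A simplified latency-estimation sketch using correlation template matching rather than the paper's Fisher-scoring maximum likelihood scheme (signal shape, noise level and lag range are all illustrative):

```python
# Align each single trial to the average response by the lag that
# maximizes the cross-correlation, recovering the single-trial latencies.
import numpy as np

rng = np.random.default_rng(19)
t = np.arange(200)
template = np.exp(-0.5 * ((t - 100) / 10.0) ** 2)    # assumed response shape

lags = rng.integers(-15, 16, size=40)                # true single-trial latencies
trials = np.array([np.roll(template, l) + 0.3 * rng.standard_normal(200)
                   for l in lags])

avg = trials.mean(0)
est = []
for trial in trials:
    xc = [np.dot(np.roll(trial, -l), avg) for l in range(-20, 21)]
    est.append(range(-20, 21)[int(np.argmax(xc))])
print("latency recovery correlation:",
      np.corrcoef(lags, est)[0, 1].round(2))
```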