
Showing papers in "Journal of the royal statistical society series b-methodological in 1984"



Journal ArticleDOI
TL;DR: Stochastic calculus for these stochastic processes is developed and a complete characterization of the extended generator is given; this is the main technical result of the paper.
Abstract: A general class of non-diffusion stochastic models is introduced with a view to providing a framework for studying optimization problems arising in queueing systems, inventory theory, resource allocation and other areas. The corresponding stochastic processes are Markov processes consisting of a mixture of deterministic motion and random jumps. Stochastic calculus for these processes is developed and a complete characterization of the extended generator is given; this is the main technical result of the paper. The relevance of the extended generator concept in applied problems is discussed and some recent results on optimal control of piecewise-deterministic processes are described.
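The "mixture of deterministic motion and random jumps" structure can be sketched in a few lines. The decaying storage process, its rates, and the name `simulate_pdp` below are hypothetical illustrations of the general idea, not the paper's construction:

```python
import random
import math

def simulate_pdp(t_end, rate=1.0, decay=0.5, jump_mean=1.0, seed=0):
    """Simulate a toy piecewise-deterministic process: the state decays
    exponentially between jumps, and at Poisson(rate) event times an
    exponential random increment is added.  All parameters are
    illustrative assumptions, not taken from the paper."""
    rng = random.Random(seed)
    t, x = 0.0, 0.0
    path = [(t, x)]
    while True:
        dt = rng.expovariate(rate)                 # waiting time to next jump
        if t + dt > t_end:
            x *= math.exp(-decay * (t_end - t))    # deterministic flow to t_end
            path.append((t_end, x))
            return path
        x *= math.exp(-decay * dt)                 # deterministic motion
        x += rng.expovariate(1.0 / jump_mean)      # random jump
        t += dt
        path.append((t, x))

path = simulate_pdp(10.0)
```

Between jumps the trajectory is fully determined by the flow; all randomness enters through the jump times and sizes, which is exactly the feature that makes the extended generator tractable.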

954 citations


Journal ArticleDOI
TL;DR: In this paper, a method for assessing the influence of minor perturbations of a linear regression model is presented. But the method is not restricted to linear regression models, and it seems to provide a relatively simple, unified approach for handling a variety of problems.
Abstract: SUMMARY Statistical models usually involve some degree of approximation and therefore are nearly always wrong. Because of this inexactness, an assessment of the influence of minor perturbations of the model is important. We discuss a method for carrying out such an assessment. The method is not restricted to linear regression models, and it seems to provide a relatively simple, unified approach for handling a variety of problems.
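One concrete way to probe "minor perturbations" is to downweight each case slightly and see how far the fit moves. The finite-difference sketch below (function name and epsilon are hypothetical) is a crude stand-in for this kind of perturbation analysis, not the paper's exact diagnostic:

```python
import numpy as np

def case_weight_influence(X, y, eps=0.01):
    """For each case, downweight it slightly in a weighted least-squares
    fit and record how far the coefficient vector moves.  A crude
    finite-difference influence measure; illustrative only."""
    n = X.shape[0]
    beta_full = np.linalg.lstsq(X, y, rcond=None)[0]
    shifts = np.empty(n)
    for i in range(n):
        w = np.ones(n)
        w[i] = 1.0 - eps                 # perturb one case weight
        sw = np.sqrt(w)
        b = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
        shifts[i] = np.linalg.norm(b - beta_full)
    return shifts

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(30), rng.normal(size=30)])
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.3, size=30)
y[0] += 15.0                             # plant one gross outlier
shifts = case_weight_influence(X, y)
```

Cases whose small downweighting moves the fit most are the influential ones; here the planted outlier dominates.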

904 citations


Journal ArticleDOI
TL;DR: The scope of application of iteratively reweighted least squares to statistical estimation problems is considerably wider than is generally appreciated as mentioned in this paper, and it extends beyond the exponential-family-type generalized linear models to other distributions, to non-linear parameterizations, and to dependent observations.
Abstract: The scope of application of iteratively reweighted least squares to statistical estimation problems is considerably wider than is generally appreciated. It extends beyond the exponential-family-type generalized linear models to other distributions, to non-linear parameterizations, and to dependent observations. Various criteria for estimation other than maximum likelihood, including resistant alternatives, may be used. The algorithms are generally numerically stable, easily programmed without the aid of packages, and highly suited to interactive computation.
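For the canonical GLM case, the IRLS iteration is short enough to write out. The sketch below fits a logistic regression; the data and the function name `irls_logistic` are illustrative assumptions:

```python
import numpy as np

def irls_logistic(X, y, n_iter=25):
    """Fit a logistic regression by iteratively reweighted least squares.
    Each iteration solves a weighted least-squares problem with working
    response z and weights w = mu * (1 - mu)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = 1.0 / (1.0 + np.exp(-eta))
        w = mu * (1.0 - mu)
        z = eta + (y - mu) / w               # working response
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, X.T @ (w * z))
    return beta

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(500), rng.normal(size=500)])
true_beta = np.array([-0.5, 1.5])
y = (rng.uniform(size=500) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)
beta_hat = irls_logistic(X, y)
```

Swapping in other weight and working-response formulas gives the wider family of estimators the paper describes, including resistant alternatives.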

586 citations


Journal ArticleDOI
TL;DR: In this paper, a general approach to regression modelling for ordered categorical response variables, y, is given, which is equally applicable to ordered and unordered y. The method is based on the logistic family which contains a hierarchy of regression models, ranging from ordered to unordered models.
Abstract: [Read before the Royal Statistical Society, by Professor R. L. Plackett on behalf of the late Professor Anderson, at a meeting organized by the Research Section on Wednesday, October 5th, 1983, Professor J. B. Copas in the Chair] SUMMARY A general approach to regression modelling for ordered categorical response variables, y, is given, which is equally applicable to ordered and unordered y. The regressor variables x^T = (x1, ..., xp) may be continuous or categorical. The method is based on the logistic family which contains a hierarchy of regression models, ranging from ordered to unordered models. Ordered properties of the former, the stereotype model, are established. The choice between models is made empirically on the basis of model fit. This is particularly important for assessed, ordered categorical response variables, where it is not obvious a priori whether or not the ordering is relevant to the regression relationship. Model simplification is investigated in terms of whether or not the response categories are distinguishable with respect to x. The models are fitted iteratively using the method of maximum likelihood. Examples are given.

493 citations


Journal ArticleDOI
TL;DR: Methods of inference which can be used for implicit statistical models whose distribution theory is intractable are developed, and the kernel method of probability density estimation is advocated for estimating a log-likelihood from simulations of such a model.
Abstract: A prescribed statistical model is a parametric specification of the distribution of a random vector, whilst an implicit statistical model is one defined at a more fundamental level in terms of a generating stochastic mechanism. This paper develops methods of inference which can be used for implicit statistical models whose distribution theory is intractable. The kernel method of probability density estimation is advocated for estimating a log-likelihood from simulations of such a model. The development and testing of an algorithm for maximizing this estimated log-likelihood function is described. An illustrative example involving a stochastic model for quantal response assays is given. Possible applications of the maximization algorithm to ad hoc methods of parameter estimation are noted briefly, and illustrated by an example involving a model for the spatial pattern of displaced amacrine cells in the retina of a rabbit.
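The core idea, estimating a log-likelihood by kernel density estimation on simulated draws, can be sketched directly. The one-dimensional setting, the toy simulator, and the bandwidth rule below are assumptions for illustration:

```python
import numpy as np

def kde_log_likelihood(theta, y_obs, simulate, n_sim=2000, seed=1):
    """Estimate log f(y_obs | theta) for a model known only through its
    generating mechanism: simulate draws at theta, build a Gaussian
    kernel density estimate, and evaluate it at the observations."""
    rng = np.random.default_rng(seed)
    sims = simulate(theta, n_sim, rng)
    h = 1.06 * sims.std() * n_sim ** (-1 / 5)       # rule-of-thumb bandwidth
    dens = np.exp(-0.5 * ((y_obs[:, None] - sims[None, :]) / h) ** 2)
    fhat = dens.sum(axis=1) / (n_sim * h * np.sqrt(2 * np.pi))
    return np.log(fhat).sum()

# Toy simulator: y = theta + standard normal noise
simulate = lambda theta, n, rng: theta + rng.normal(size=n)
y_obs = np.random.default_rng(7).normal(loc=2.0, size=50)
ll = {t: kde_log_likelihood(t, y_obs, simulate) for t in (0.0, 2.0, 4.0)}
```

Fixing the simulation seed across values of theta keeps the estimated log-likelihood surface smooth enough for the kind of maximization algorithm the paper develops.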

441 citations


Journal ArticleDOI
Mike West1
TL;DR: In this article, the authors consider a special class of heavy-tailed, unimodal and symmetric error distributions for which the analyses, though apparently intractable, can be examined in some depth by exploiting certain properties of the assumed error form.
Abstract: SUMMARY Bayesian inference in regression models is considered using heavy-tailed error distributions to accommodate outliers. The particular class of distributions that can be constructed as scale mixtures of normal distributions is examined, and use is made of them as both error models and prior distributions in Bayesian linear modelling, including simple regression and more complex hierarchical models with structured priors depending on unknown hyperprior parameters. The modelling of outliers in nominally normal linear regression models using alternative error distributions which are heavy-tailed relative to the normal provides an automatic means of both detecting and accommodating possibly aberrant observations. Such realistic models do, however, often lead to analytically intractable analyses with complex posterior distributions in several dimensions that are difficult to summarize and understand. In this paper we consider a special yet rather wide class of heavy-tailed, unimodal and symmetric error distributions for which the analyses, though apparently intractable, can be examined in some depth by exploiting certain properties of the assumed error form. The distributions concerned are those that can be constructed as scale mixtures of normal distributions. In his paper concerning location parameters, de Finetti (1961) discusses such distributions and suggests the hypothetical interpretation that "each observation is taken using an instrument with normal error, but each time chosen at random from a collection of instruments of different precisions, the distribution of the ..."
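The scale-mixture construction is easy to verify by simulation: the Student-t is the classical example, arising when the precision is drawn from a gamma distribution. This is a standard fact about the family the paper uses, sketched here with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)
nu, n = 4.0, 200_000

# Scale mixture of normals: X | lam ~ N(0, 1/lam) with
# lam ~ Gamma(shape nu/2, rate nu/2) gives X ~ t with nu degrees of freedom.
lam = rng.gamma(shape=nu / 2, scale=2 / nu, size=n)
x = rng.normal(size=n) / np.sqrt(lam)

# Direct t draws for comparison
direct_t = rng.standard_t(df=nu, size=n)
```

The two samples should agree in distribution, e.g. in their tail frequencies; it is this conditional-normal representation that keeps the posterior analysis tractable.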

246 citations


Journal ArticleDOI
TL;DR: In this paper, a connection between the Bartlett adjustment factor of the log-likelihood ratio statistic and the normalizing constant c of the formula c|ĵ|^{1/2}L̄ for the conditional distribution of a maximum likelihood estimator as applied to the full model and the model of the hypothesis tested is established.
Abstract: For rather general parametric models, a simple connection is established between the Bartlett adjustment factor of the log-likelihood ratio statistic and the normalizing constant c of the formula c|ĵ|^{1/2}L̄ for the conditional distribution of a maximum likelihood estimator as applied to the full model and the model of the hypothesis tested. This leads to a relatively simple demonstration that division of the likelihood ratio statistic by a suitable constant or estimated factor improves the chi-squared approximation to its distribution. Various expressions for these quantities are discussed. In particular, for the case of a one-dimensional parameter an approximation to the constants involved is derived, which does not require integration over the sample space.
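The effect of dividing the likelihood ratio statistic by a suitable factor can be seen in a small simulation. The example below tests the rate of an exponential sample and estimates the adjustment factor empirically rather than analytically; sample sizes and parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps, lam0 = 5, 100_000, 1.0

# Likelihood ratio statistic for H0: rate = lam0 in an exponential sample:
# W = 2n(lam0*xbar - 1 - log(lam0*xbar)); asymptotically chi-squared(1).
xbar = rng.exponential(scale=1 / lam0, size=(reps, n)).mean(axis=1)
W = 2 * n * (lam0 * xbar - 1 - np.log(lam0 * xbar))

# Bartlett-style adjustment: rescale W so its mean matches the
# chi-squared(1) mean of 1 (factor estimated from the simulations).
W_adj = W / W.mean()

crit = 3.841                         # approx. chi-squared(1) 95% point
raw_size = (W > crit).mean()
adj_size = (W_adj > crit).mean()
```

Since E[W] exceeds 1 in small samples, the adjusted statistic rejects less often and its size is closer to the nominal level, which is the phenomenon the paper explains.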

150 citations


Journal ArticleDOI
TL;DR: In this paper, the likelihood procedure for estimating the pairwise interaction potential function is developed for statistical analysis of homogeneous spatial point patterns and the normalizing factor of Gibbs canonical distribution is discussed both to estimate a scale parameter and to measure the softness or hardness of repulsive interactions.
Abstract: SUMMARY The likelihood procedure for estimating the pairwise interaction potential function is developed for statistical analysis of homogeneous spatial point patterns. Approximation methods for the normalizing factor of the Gibbs canonical distribution are discussed, both to estimate a scale parameter and to measure the softness (or hardness) of repulsive interactions. The approximations are useful up to a considerably high density. The validity of our procedure is demonstrated by some computer experiments. Some real data are analysed.

138 citations


Journal ArticleDOI
TL;DR: In this paper, a general method for constructing quasi-complete Latin squares based on groups is given, together with an explicit construction for valid randomization sets of quasi-complete Latin squares whose side is an odd prime power.
Abstract: SUMMARY A general method for constructing quasi-complete Latin squares based on groups is given. This method leads to a relatively straightforward way of counting the number of inequivalent quasi-complete Latin squares of side at most 9. Randomization of such designs is discussed, and an explicit construction for valid randomization sets of quasi-complete Latin squares whose side is an odd prime power is given. It is shown that, contrary to common belief, randomization using a subset of all possible quasi-complete Latin squares may be valid while that using the whole set is not.

Journal ArticleDOI
TL;DR: In this paper, two new tests are described in connexion with analysis of variance, one for the presence of qualitative interaction between a treatment and an intrinsic factor, both with qualitatively defined levels.
Abstract: SUMMARY Two new tests are described in connexion with analysis of variance. The first is for the presence of qualitative interaction between a treatment and an intrinsic factor, both with qualitatively defined levels. Such an interaction is said to be present if there are two levels of the treatment factor and two levels of the intrinsic factor such that the treatment difference has opposite signs at the two levels of the intrinsic factor. The second test is for the presence of a treatment effect which can be tested via any of a hierarchical sequence U1, U1 + U2,..., U1 + . . . + Ur of sums of squares. An allowance for selection is calculated.
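The defining condition of a qualitative interaction (opposite-sign treatment differences at two levels of the intrinsic factor) is easy to check mechanically. The function below verifies only that condition on a table of cell means; it is not the paper's significance test:

```python
import itertools

def has_qualitative_interaction(means):
    """means[i][j] = mean response at treatment level i, intrinsic-factor
    level j.  Returns True when some pair of treatment levels and some
    pair of factor levels give treatment differences of opposite sign."""
    n_t, n_f = len(means), len(means[0])
    for t1, t2 in itertools.combinations(range(n_t), 2):
        for f1, f2 in itertools.combinations(range(n_f), 2):
            d1 = means[t1][f1] - means[t2][f1]    # difference at factor level f1
            d2 = means[t1][f2] - means[t2][f2]    # difference at factor level f2
            if d1 * d2 < 0:
                return True
    return False
```

For example, `[[1, 5], [3, 2]]` shows a reversal (treatment 1 is worse at the first factor level but better at the second), whereas `[[1, 2], [3, 5]]` does not.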

Journal ArticleDOI
TL;DR: In this article, a technique for obtaining the Edgeworth type asymptotic expansion associated with the maximum likelihood estimator (MLE) in autoregressive moving-average (ARMA) models is presented.
Abstract: SUMMARY A technique is given for the Edgeworth type asymptotic expansion for the joint as well as marginal and conditional distributions of the maximum likelihood estimators in autoregressive moving-average (ARMA) models. Our methodology is illustrated and results on the expansions for some simple ARMA models are presented. The present paper suggests a technique for obtaining the Edgeworth type asymptotic expansion associated with the maximum likelihood estimator (MLE) in ARMA models. The expansion relates to joint as well as marginal and conditional distributions. The approach developed here is simple and divided into two steps. The first is to obtain the Taylor expansion for the MLE itself from the implicit function determining the MLE. On the basis of the explicit expression for the MLE an asymptotic expansion for the distribution of the MLE is derived at the second step. These procedures are described in Section 2. In Section 3 our methodology is illustrated using a simple model and results on the expansions for some simple ARMA models are shown. This paper ends with some discussion in Section 4.

Journal ArticleDOI
TL;DR: In this article, a Poisson limit theorem is derived for the number of "large" values observed among comparisons of independent, but not necessarily identically distributed random variables, and an application to the assessment of large numbers of correlation coefficients is given.
Abstract: SUMMARY A Poisson limit theorem is derived for the number of "large" values observed among comparisons of independent, but not necessarily identically distributed random variables. The comparisons made need not be the same and may depend on the two variables being compared. An application to the assessment of large numbers of correlation coefficients is given.
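The correlation application can be illustrated by simulation: among many pairwise correlations of independent variables, the count of "large" values behaves approximately like a Poisson variable. Dimensions and the threshold below are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(3)
n_vars, n_obs, reps, thresh = 40, 50, 400, 0.45

counts = []
for _ in range(reps):
    X = rng.normal(size=(n_obs, n_vars))          # independent variables
    R = np.corrcoef(X, rowvar=False)
    iu = np.triu_indices(n_vars, k=1)             # distinct pairs only
    counts.append(int((np.abs(R[iu]) > thresh).sum()))
counts = np.array(counts)

dispersion = counts.var() / counts.mean()         # near 1 for Poisson
```

Each of the 780 pairwise correlations rarely exceeds the threshold, so the total count of exceedances is a sum of many rare, weakly dependent events, the setting in which the Poisson limit applies.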


Journal ArticleDOI
TL;DR: In this paper, a sequence of multinomial approximations and related maximum likelihood estimators is constructed and a Cramer-Rao lower bound for nonparametric estimators of the mixture proportions is derived.
Abstract: SUMMARY By constructing a sequence of multinomial approximations and related maximum likelihood estimators, we derive a Cramer-Rao lower bound for nonparametric estimators of the mixture proportions and thereby characterize asymptotically optimal estimators. For the case of the sampling model M2 of Hosmer (1973) it is shown that the sequence of maximum likelihood estimators, which can be obtained explicitly, is asymptotically optimal in this sense. The results hold true even when the multinomial approximations involve cells chosen adaptively, from the data, in a well-specified way.
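The estimation target, a mixing proportion, can be made concrete with a simpler estimator than the multinomial construction in the paper: when the component densities are known, the MLE of the proportion is a fixed point of the EM update. Names and the two-normal example are illustrative assumptions:

```python
import numpy as np

def mixture_proportion_mle(x, dens0, dens1, n_iter=200):
    """EM iteration for the mixing proportion p in p*f0 + (1-p)*f1
    when the component densities f0, f1 are known."""
    p = 0.5
    f0, f1 = dens0(x), dens1(x)
    for _ in range(n_iter):
        w = p * f0 / (p * f0 + (1 - p) * f1)   # responsibility of component 0
        p = w.mean()
    return p

rng = np.random.default_rng(0)
n, true_p = 5000, 0.3
z = rng.uniform(size=n) < true_p
x = np.where(z, rng.normal(0.0, 1.0, n), rng.normal(3.0, 1.0, n))

phi = lambda v, m: np.exp(-0.5 * (v - m) ** 2) / np.sqrt(2 * np.pi)
p_hat = mixture_proportion_mle(x, lambda v: phi(v, 0.0), lambda v: phi(v, 3.0))
```

The Cramer-Rao bound derived in the paper tells us how small the variance of estimators like `p_hat` can be made asymptotically.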

Journal ArticleDOI
TL;DR: Asymptotic results on the Greenwood statistic and some of its generalizations are presented.
Abstract: Asymptotic Results on the Greenwood Statistic and Some of its Generalizations. Author(s): J. S. Rao and Morgan Kuo. Source: Journal of the Royal Statistical Society, Series B (Methodological), Vol. 46, No. 2 (1984), pp. 228-237. Published by Blackwell Publishing for the Royal Statistical Society. Stable URL: http://www.jstor.org/stable/2345505


Journal ArticleDOI
TL;DR: In this article, Bernoulli-type models are adopted for the sequence of contested points between two players (or teams) in top men's tennis, where there are two types of points "A serving" and "B serving" necessary for a realistic model of top (class) tennis.
Abstract: SUMMARY Bernoulli-type models are adopted for the sequence of contested points between two players (or teams). In bipoints, there are two types of point- "A serving" and "B serving"-necessary for a realistic model of top (class) men's tennis. Unipoints is the special case of bipoints in which the probabilities that A (and hence B) wins each type of point are the same-a reasonable model for top men's squash rackets. Sports scoring systems based upon such play ("uniformats" and "biformats") may be regarded as sequential statistical tests concerning the identity of the "better player". Provided they satisfy certain fairness criteria, these tests are symmetric, with equal probabilities of type I and II error, and a unique efficiency p; then formally 1 - p represents the average fraction of play being virtually wasted through the use of a suboptimal scoring system. Largely because in top men's tennis the proportions of service points won by both players are so high, the efficiency of traditional tennis scoring in such play is unduly low. Whilst the introduction of tiebreaker games has further reduced efficiency, it is shown how a simple entropy-increasing modification of the traditional scoring system serves to increase efficiency.
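The building block of such analyses is the probability of winning a scoring unit given a fixed point-win probability. The recursion below computes this for a single tennis game under the unipoints assumption (one probability p for every point); it is a simplified component, not the paper's efficiency calculation:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def p_game(a, b, p):
    """Probability that the server wins a tennis game from point score
    (a, b), winning each point independently with probability p."""
    if a >= 4 and a - b >= 2:
        return 1.0
    if b >= 4 and b - a >= 2:
        return 0.0
    if a == b and a >= 3:                # deuce: closed-form geometric sum
        return p * p / (p * p + (1 - p) * (1 - p))
    return p * p_game(a + 1, b, p) + (1 - p) * p_game(a, b + 1, p)

win_prob = p_game(0, 0, 0.62)            # e.g. a moderately strong server
```

The game amplifies a small per-point edge into a larger per-game edge, which is why scoring-system design can be analysed as a sequential test of who is the better player.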

Journal ArticleDOI
TL;DR: In this paper, the authors point out a serious incompatibility between two such conditions which are often individually imposed on pooling operators, when the two are used simultaneously, the resulting procedure is seen to depend on the opinion of a single expert.
Abstract: With the development of multi-agent statistical decision theory (Weerahandi and Zidek, 1981, 1983), considerable attention has recently been paid to the problem of combining the diverse beliefs of a group of individuals. More specifically, if n > 2 Bayesians formulate their opinions about some quantity θ in terms of subjective probability densities f1, ..., fn on Θ, the parameter space, a consensual density T(f1, ..., fn) is desired which could then be used in a conventional uni-Bayesian analysis. Owing in part to the complexity of existing solution schemes based on the Bayesian inferential framework (Morris, 1974, 1977; French, 1980), various axioms have been proposed in the literature which purport to embrace the minimal requirements that a pooling formula, T, should satisfy. This note points out a serious incompatibility between two such conditions which are often individually imposed on pooling operators. When the two are used simultaneously, the resulting procedure is seen to depend on the opinion of a single expert.
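Two standard pooling operators make the object T concrete: the linear (arithmetic) and logarithmic (geometric) pools. The sketch below works on a discrete grid of parameter values; the example densities and weights are illustrative, and neither pool is the paper's recommendation:

```python
import numpy as np

def linear_pool(densities, weights):
    """Weighted arithmetic mixture of the experts' densities."""
    return np.tensordot(weights, densities, axes=1)

def log_pool(densities, weights):
    """Weighted geometric pool, renormalized (discrete grid assumed)."""
    g = np.prod(densities ** weights[:, None], axis=0)
    return g / g.sum()

# Two experts' opinions about a parameter taking one of three values
f = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.3, 0.6]])
w = np.array([0.5, 0.5])
lin = linear_pool(f, w)
log_ = log_pool(f, w)
```

The incompatibility result says, roughly, that demanding too many of the natural closure properties at once forces the pool to collapse onto one expert's density, i.e. a dictatorship.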

Journal ArticleDOI
TL;DR: In this paper, a simple method for scaling a set of binary responses is proposed using the logit factor model of Bartholomew (1980), where individuals may be ranked on the basis of a linear combination, Σ αi xi, of the responses (xi = 0 or 1), where the αi's are the appropriate factor loadings.
Abstract: SUMMARY A simple method for scaling a set of binary responses is proposed using the logit factor model of Bartholomew (1980). It is shown that individuals may be ranked on the basis of a linear combination, Σ αi xi, of the responses (xi = 0 or 1), where the αi's are the appropriate factor loadings. This completely avoids the need to calculate the y-scores suggested in Bartholomew (1980) and hence the heavy numerical integration which that method involves.


Journal ArticleDOI
TL;DR: In this article, both one-sided and two-sided cumulative sum procedures of Page (1954) are shown to be closely related to a sequential probability ratio test (SPRT).
Abstract: SUMMARY Both one-sided and two-sided cumulative sum procedures of Page (1954) are shown to be closely related to a sequential probability ratio test (SPRT). The generating functions (Laplace transforms) of cumulative sum procedures are found in terms of the generating functions of a certain SPRT. An example is discussed and some applications are given. Motivated by the sequential probability ratio test (SPRT), Page (1954) developed cumulative sum (cusum) procedures for detecting changes in the distribution of certain processes. The desire for quick detection of deteriorating quality of a process led him to define cusum procedures, and he observed a relationship between the average run length (ARL) of the one-sided cusum procedure and the average sample number (ASN) and a hitting probability of the SPRT. Recently it has been observed by Khan (1981) that the properties of Page's two-sided cusum procedure can be studied in terms of certain one-sided cusum procedures. This note considers the relationships between the one-sided as well as two-sided cusum procedures and the SPRT much beyond the average run length. In fact, the relationships given here make both one-sided and two-sided cusum theory a chapter of sequential analysis. To fix ideas, let X1, X2, ... be independent and identically distributed (iid) random variables, ...
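Page's one-sided cusum recursion itself is only a few lines. The sketch below implements the standard scheme (reference value k, decision interval h, both chosen arbitrarily here for illustration):

```python
def cusum_one_sided(xs, k, h):
    """Page's one-sided cusum: S_n = max(0, S_{n-1} + x_n - k);
    signal at the first n with S_n >= h.  Returns the 1-based alarm
    time, or None if no alarm occurs within the sequence."""
    s = 0.0
    for i, x in enumerate(xs, start=1):
        s = max(0.0, s + x - k)
        if s >= h:
            return i
    return None
```

For example, with k = 1 and h = 6, the sequence [0, 0, 0, 5, 5] stays at zero through the in-control stretch and alarms on the fifth observation. The SPRT connection studied in the paper comes from viewing each excursion of S_n away from zero as a run of a sequential probability ratio test.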