
Showing papers in "Journal of the Royal Statistical Society, Series B (Methodological)" in 1986


Journal ArticleDOI
Julian Besag1
TL;DR: In this paper, the authors propose a simple iterative method for scene reconstruction in which the local characteristics of the true scene are represented by a nondegenerate Markov random field (MRF), the records are combined with this prior by Bayes' theorem, and the reconstruction is estimated according to standard criteria without depending on the large-scale properties of the field.
Abstract: [Read before the Royal Statistical Society, May 7th, 1986, Professor A. F. M. Smith in the Chair] SUMMARY A continuous two-dimensional region is partitioned into a fine rectangular array of sites or "pixels", each pixel having a particular "colour" belonging to a prescribed finite set. The true colouring of the region is unknown but, associated with each pixel, there is a possibly multivariate record which conveys imperfect information about its colour according to a known statistical model. The aim is to reconstruct the true scene, with the additional knowledge that pixels close together tend to have the same or similar colours. In this paper, it is assumed that the local characteristics of the true scene can be represented by a nondegenerate Markov random field. Such information can be combined with the records by Bayes' theorem and the true scene can be estimated according to standard criteria. However, the computational burden is enormous and the reconstruction may reflect undesirable large-scale properties of the random field. Thus, a simple, iterative method of reconstruction is proposed, which does not depend on these large-scale characteristics. The method is illustrated by computer simulations in which the original scene is not directly related to the assumed random field. Some complications, including parameter estimation, are discussed. Potential applications are mentioned briefly.

4,490 citations
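
The iterative method described above works pixel by pixel: each site is reassigned the colour that maximises its local conditional probability, combining the record's log-likelihood with a bonus for agreeing neighbours. A minimal sketch of such an iterated scheme for a binary scene in Gaussian noise follows; the noise level sigma, interaction strength beta and the 4-neighbour system are illustrative assumptions, not values from the paper.

```python
# Sketch of ICM-style reconstruction for a binary scene observed in
# Gaussian noise. sigma, beta and the 4-neighbour system are assumptions
# made for illustration, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic true scene: a white square on a black background.
true = np.zeros((32, 32), dtype=int)
true[8:24, 8:24] = 1

sigma = 0.8                      # noise standard deviation (assumed)
beta = 1.5                       # strength of neighbour agreement (assumed)
y = true + sigma * rng.standard_normal(true.shape)   # records

x = (y > 0.5).astype(int)        # initialise at the pixelwise MLE

def neighbours(a, i, j):
    """Colours of the 4-neighbours of pixel (i, j), with edge truncation."""
    out = []
    if i > 0: out.append(a[i - 1, j])
    if i < a.shape[0] - 1: out.append(a[i + 1, j])
    if j > 0: out.append(a[i, j - 1])
    if j < a.shape[1] - 1: out.append(a[i, j + 1])
    return out

for sweep in range(5):           # a few raster sweeps usually suffice
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            nb = neighbours(x, i, j)
            # Local score of colour c: record log-likelihood plus
            # beta times the number of agreeing neighbours.
            x[i, j] = max((0, 1), key=lambda c:
                          -(y[i, j] - c) ** 2 / (2 * sigma ** 2)
                          + beta * sum(n == c for n in nb))

print("misclassified pixels:", int((x != true).sum()))
```

Increasing beta strengthens the smoothing; the reconstruction typically stabilises within a handful of sweeps.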


Journal ArticleDOI
TL;DR: Estimators are given which minimize the sum of the residual sums of squares and a roughness penalty.
Abstract: Estimators are given which minimize the sum of the residual sums of squares and a roughness penalty.

365 citations


Journal ArticleDOI
TL;DR: The Mallows-type model is examined for a general distance, and the distances for which the model can be decomposed into factors representing independent stages of the ranking process are then studied.
Abstract: The Mallows-type model is examined for a general distance, and the distances for which the model can be decomposed into factors representing independent stages of the ranking process are then studied.

340 citations


Journal ArticleDOI
TL;DR: Two approaches are compared in the important case of fitting logistic regression models.
Abstract: Two approaches are compared in the important case of fitting logistic regression models.

219 citations


Journal ArticleDOI
TL;DR: In this article, the problem of computing the Bayes factor for a log-linear model M0 against the saturated model M1 with vague prior information is considered, and it is shown that with the standard Jeffreys prior density, proportional to (∏θi)^(-1/2), the indeterminacy caused by zero cell frequencies no longer arises.
Abstract: Spiegelhalter and Smith (1982), hereafter SS, proposed an approximate method for calculating the Bayes factor for a log-linear model M0 for a contingency table against the saturated model M1, with vague prior information. We adopt their notation and denote by SS(n) equation (n) of SS. Suppose x1, ..., xk have a multinomial distribution with parameters θ1, ..., θk, where θi ≥ 0 and Σθi = 1. We write yT = (log x1, ..., log xk), θT = (log θ1, ..., log θk), and Y = diag{x1, ..., xk}. Then if M1 is the saturated model, and M0 is the nested log-linear model defined by setting the contrasts Cθ = 0, where C is an s x k matrix with rank s and rows summing to zero, the approximate Bayes factor B01 for M0 against M1 is given by SS (32). This, however, is indeterminate if any of the cell frequencies in the table is zero. This is because of the use by SS of a prior density proportional to (∏θi)^(-1). If, however, we use instead the standard Jeffreys prior density, proportional to (∏θi)^(-1/2), the problem no longer arises. Then, by the arguments of SS and Lindley (1964), the resulting Bayes factor is still given by SS (32), with xi replaced by xi + 1/2 in the definitions of y and Y (i = 1, ..., k), and SS (33) replaced by c^(-1) = (2π)^(s/2) |CCT|^(1/2). If this solution is adopted, the prior is proper, and so, in principle, the problem of assigning an arbitrary multiplicative constant, for which the SS approach was primarily devised, need not arise. One could, in theory, simply apply Bayes' theorem directly and so obtain the Bayes factor exactly. However, in practice this is difficult to do, and I know of no general solution to the problem. Even for simpler, more specific contingency table and related models, such as those of independence or equiprobability, finding the Bayes factor exactly is not easy; see, for example, Crook and Good (1982), Altham (1971), Gunel and Dickey (1974), Gunel (1982), Broniatowski (1981), and references therein. The second purpose of this note is to point out that, conditionally on MO,

171 citations
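
The note remarks that exact Bayes factors are available only in simple cases such as equiprobability. A sketch of that simplest case, assuming a single multinomial with a proper Jeffreys Dirichlet(1/2, ..., 1/2) prior under the saturated model M1 (the table entries are invented), shows how the Jeffreys prior keeps the factor well defined even with empty cells:

```python
# Exact Bayes factor for equiprobability (M0) against the saturated
# multinomial model (M1) with a Jeffreys Dirichlet(1/2,...,1/2) prior.
# The proper Jeffreys prior keeps the factor well defined even when some
# cell counts are zero. The data are invented for illustration.
import numpy as np
from scipy.special import gammaln

x = np.array([7, 0, 3, 0, 5])            # cell counts, some of them zero
n, k = x.sum(), len(x)

# log marginal likelihood under M1 (the multinomial coefficient is
# omitted, since it cancels in the ratio):
#   m1 = Gamma(k/2)/Gamma(1/2)^k * prod Gamma(x_i + 1/2) / Gamma(n + k/2)
log_m1 = (gammaln(k / 2) - k * gammaln(0.5)
          + gammaln(x + 0.5).sum() - gammaln(n + k / 2))

# log marginal likelihood under M0 (all theta_i = 1/k, no free parameters):
log_m0 = -n * np.log(k)

B01 = np.exp(log_m0 - log_m1)
print(f"Bayes factor for equiprobability against saturated: {B01:.4f}")
```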


Journal ArticleDOI
TL;DR: In this article, a class of models is introduced which generalizes the work of Smith (1979) and Bathes (1975) for censored data.
Abstract: A class of models is developed which generalizes the work of Smith (1979) and Bathes (1975) for censored data.

143 citations


Journal ArticleDOI
TL;DR: In this paper, an alternative approach is developed by transforming the data into unsigned four-dimensional directions and using known results on the sampling properties of the spectral decomposition of the resulting sample moment of inertia matrix.
Abstract: SUMMARY Maximum likelihood estimation using the matrix von Mises-Fisher distribution in orientation statistics leads to unacceptably complicated likelihood equations, essentially because of the inconvenient form of the normalizing constant in the probability distribution. For the case of 3 x 2 or 3 x 3 orientations, the main cases of practical importance, an alternative approach is developed here by transforming the data into unsigned four-dimensional directions and using known results on the sampling properties of the spectral decomposition of the resulting sample moment of inertia matrix. It is demonstrated that the necessary computations are relatively simple by applying some of the techniques to a set of vectorcardiogram data.

94 citations
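
For 3 x 3 orientations, the transformation to unsigned four-dimensional directions can be sketched via unit quaternions (q and -q represent the same rotation, so only the axis in R^4 matters); the leading eigenvector of the sample moment of inertia matrix then estimates the modal orientation. The simulated data and concentration below are invented, and scipy's Rotation class is used for the conversions:

```python
# Sketch of the unsigned-4D-direction approach for 3x3 orientations:
# map each rotation to a unit quaternion, form the sample matrix
# T = mean of q q^T, and read the modal orientation off its leading
# eigenvector. The simulated data are invented for illustration.
import numpy as np
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(1)

# Simulate rotations scattered around a fixed rotation R0.
R0 = Rotation.from_euler("xyz", [0.3, -0.2, 0.5])
sample = [R0 * Rotation.from_rotvec(0.2 * rng.standard_normal(3))
          for _ in range(200)]

Q = np.array([r.as_quat() for r in sample])   # n x 4, rows are unit quaternions

# The sign of each quaternion is arbitrary, but q q^T is sign-invariant,
# so no alignment step is needed.
T = Q.T @ Q / len(Q)                          # sample moment of inertia matrix

eigvals, eigvecs = np.linalg.eigh(T)
q_mode = eigvecs[:, -1]                       # eigenvector of largest eigenvalue
R_hat = Rotation.from_quat(q_mode)

print("largest eigenvalue (concentration):", round(eigvals[-1], 3))
print("estimated rotation (Euler angles):", R_hat.as_euler("xyz").round(3))
```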


Journal ArticleDOI
TL;DR: In this article, a conditional and marginal predictive likelihood approach is proposed to predict unobserved or missing data in the context of outlier theory and mixed model ANO VA, which is closely related to the Lauritzen (1974)-Hinkley (1979) likelihood method but with a wider range of applicability.
Abstract: SUMMARY Conditional and marginal predictive likelihood are defined and used in predicting unobserved or missing data. A distinction is made between predictive and estimative goals of a data analysis. Such methodology and distinctions are illustrated in the context of outlier theory and mixed model ANOVA. The prediction of unobserved or missing data is considered from a likelihood perspective. A general predictive likelihood is proposed for such purposes and contrasted with parametric likelihood. The resulting maximum likelihood predictor is also compared with the limiting imputation from the EM algorithm. The new methodology is illustrated and contrasted with estimative likelihood analysis in the context of outlier theory and mixed model ANOVA. Predictions are proposed that are based on conditional and marginal predictive likelihoods for the unobserved data, which are predictive adaptations of the methods developed by Kalbfleisch and Sprott (1970, 1973). These authors were concerned with the elimination of nuisance parameters in order to make inference about another set of structural parameters. Prediction in the framework of parametric distributions involves the elimination of all parameters in such a way that inference about the unobserved data is possible. Therefore, in the predictive setting, all parameters are nuisance parameters and the unobserved data now play a role similar to that of the structural parameters. Various predictive approaches to statistical inference have been advocated, principally by Geisser (1971, 1975), Stone (1974), Aitchison and Dunsmore (1975), and Guttman (1970). A survey is provided by Geisser (1980) and a brief overview in a parametric setting is presented in the next Section. Following this, a conditional likelihood approach is developed and shown to be closely related to the Lauritzen (1974)-Hinkley (1979) likelihood method but specified in a manner allowing for a wider range of applicability. A marginal predictive likelihood procedure is also specified which, when applicable, essentially agrees with the conditional likelihood procedure for the examples considered. For a general account of likelihood-based inference, see Fraser

89 citations
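
A toy version of the predictive-likelihood idea, assuming a normal sample with both parameters unknown: for each candidate value z of the unobserved datum, the nuisance parameters (mu, sigma) are profiled out of the joint likelihood of (y, z), and the maximum likelihood predictor is the z maximising the result. The data are invented:

```python
# Toy illustration of (profile) predictive likelihood: treat an unobserved
# datum z as the quantity of interest and maximise the joint normal
# likelihood of (y, z) over the nuisance parameters (mu, sigma) for each
# candidate z. The sample is invented for illustration.
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(10.0, 2.0, size=20)        # observed data
n = len(y)

def profile_predictive_loglik(z):
    """log max_{mu,sigma} f(y, z | mu, sigma), up to an additive constant."""
    aug = np.append(y, z)
    s2 = aug.var()                        # MLE of sigma^2 for the augmented sample
    return -(n + 1) / 2 * np.log(s2)

zs = np.linspace(y.min() - 5, y.max() + 5, 1001)
ll = np.array([profile_predictive_loglik(z) for z in zs])

z_hat = zs[ll.argmax()]                   # maximum likelihood predictor
print(f"sample mean {y.mean():.3f}, ML predictor of z {z_hat:.3f}")
```

As expected, the profile predictive likelihood is maximised at the sample mean, since that choice of z minimises the augmented-sample variance.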


Journal ArticleDOI
Mike West1
TL;DR: In this article, a Bayesian approach is presented based on comparing the predictions from the standard model with those from a simple alternative model.
Abstract: A Bayesian approach is presented, based on comparing the predictions from the standard model with those from a simple alternative model.

89 citations


Journal ArticleDOI
TL;DR: In this article, the asymptotic distribution of the Pearson X2 and likelihood ratio G2 goodness-of-fit statistics is derived for k independent, non-identically distributed multinomials, as k approaches infinity, with expected cell frequencies bounded.
Abstract: SUMMARY The asymptotic distribution of the Pearson X2 and likelihood ratio G2 goodness-of-fit statistics is derived for k independent, non-identically distributed multinomials, as k approaches infinity, with expected cell frequencies bounded. The application is to large datasets of counts with many zero cells. Conditions are given under which the limiting distributions are normal for originating probabilities (i) known and (ii) dependent on a common, finite-dimensional parameter. A logistic regression model is given which satisfies (ii). Finally, skewness is discussed as a measure of closeness of moderate upper percentage points of X2 to normal percentiles.

65 citations
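
The flavour of the limiting result can be checked by simulation: with many independent small multinomials and known cell probabilities, the total Pearson X^2, standardised across replications, is close to normal. The table sizes and probabilities below are invented:

```python
# Monte Carlo check of the flavour of the result above: with many
# independent small multinomials (expected cell frequencies bounded),
# the total Pearson X^2 is approximately normal. The number of tables,
# trials and probabilities are invented for illustration.
import numpy as np

rng = np.random.default_rng(3)
k, m = 2000, 2                       # 2000 tables, 2 trials each (very sparse)
p = np.array([0.5, 0.3, 0.2])        # known cell probabilities
expected = m * p

def total_X2():
    counts = rng.multinomial(m, p, size=k)           # k independent tables
    return ((counts - expected) ** 2 / expected).sum()

reps = np.array([total_X2() for _ in range(2000)])
z = (reps - reps.mean()) / reps.std()

skew = (z ** 3).mean()
print(f"standardised skewness of total X^2: {skew:.3f} (0 for exact normality)")
print(f"empirical 95th percentile: {np.quantile(z, 0.95):.3f} vs normal 1.645")
```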


Journal ArticleDOI
TL;DR: In this article, the statistical behavior of some least squares estimators of the center and radius of a circle is examined and the asymptotic consistency of the estimators is investigated.
Abstract: This paper examines the statistical behaviour of some least squares estimators of the centre and radius of a circle. Two error models are used. The asymptotic consistency of the estimators is investigated. Where asymptotic consistency is established, asymptotic covariance matrices are obtained. A small sample simulation study gives results showing the same pattern as those of the asymptotic theory.
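
One convenient least squares estimator of the centre and radius, the so-called algebraic fit, reduces to a linear system; whether it coincides with either of the paper's estimators is not claimed here, and the noisy data are invented:

```python
# The "algebraic" least squares circle fit: choose (a, b, c) to minimise
# sum (x^2 + y^2 + a x + b y + c)^2, a linear problem, then recover the
# centre (-a/2, -b/2) and radius sqrt(a^2/4 + b^2/4 - c). The noisy data
# are invented for illustration.
import numpy as np

rng = np.random.default_rng(4)
theta = rng.uniform(0, 2 * np.pi, 100)
cx, cy, r = 3.0, -1.0, 2.5                   # true centre and radius
x = cx + r * np.cos(theta) + 0.05 * rng.standard_normal(100)
y = cy + r * np.sin(theta) + 0.05 * rng.standard_normal(100)

A = np.column_stack([x, y, np.ones_like(x)])
b = -(x ** 2 + y ** 2)
(a_hat, b_hat, c_hat), *_ = np.linalg.lstsq(A, b, rcond=None)

centre = (-a_hat / 2, -b_hat / 2)
radius = np.sqrt(a_hat ** 2 / 4 + b_hat ** 2 / 4 - c_hat)
print(f"centre estimate ({centre[0]:.3f}, {centre[1]:.3f}); radius {radius:.3f}")
```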

Journal ArticleDOI
TL;DR: The important case of 2×2×K tables is studied in particular, from which the extensions to larger tables follow.
Abstract: The important case of 2×2×K tables is studied in particular, from which the extensions to larger tables follow.

Journal ArticleDOI
TL;DR: In this paper, a method of testing for the presence of an outlier of unknown type is proposed, and the properties of a rule based on the likelihood ratio which attempts to distinguish the two types of outlier are examined and compared with those of the corresponding Bayes rules.
Abstract: Distinguishing an outlier in a time series arising through measurement error from one arising through a perturbation of the underlying system can be of use in data validation. In this paper a method of testing for the presence of an outlier of unknown type is proposed. Then the properties of a rule based on the likelihood ratio which attempts to distinguish the two types of outlier are examined and compared with those of the corresponding Bayes rules. An example involving data from an industrial production process is studied.
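
The two outlier types can be contrasted with the standard additive-outlier (AO) and innovational-outlier (IO) statistics for a known AR(1) model; this generic likelihood-based comparison is in the spirit of, but not identical to, the paper's rule, and phi, sigma and the contaminated series are invented:

```python
# Illustration of distinguishing an additive (measurement-error) outlier
# from an innovational (system-perturbation) outlier in a known AR(1)
# model, using standard likelihood-based AO/IO statistics. This is a
# generic sketch, not the specific rule of the paper; phi, sigma and the
# contaminated series are invented.
import numpy as np

rng = np.random.default_rng(5)
phi, sigma, n, T = 0.7, 1.0, 200, 100

e = sigma * rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + e[t]
y[T] += 5.0                                  # additive outlier at time T

res = y[1:] - phi * y[:-1]                   # innovations under the AR(1) model
eT, eT1 = res[T - 1], res[T]                 # residuals at times T and T+1

# An IO at T perturbs only e_T; an AO at T perturbs e_T by +w and
# e_{T+1} by -phi*w, giving a different least squares effect estimate.
lam_IO = eT / sigma                          # innovational-outlier statistic
omega_AO = (eT - phi * eT1) / (1 + phi ** 2) # LS estimate of an AO at T
lam_AO = omega_AO * np.sqrt(1 + phi ** 2) / sigma

print(f"IO statistic {lam_IO:.2f}, AO statistic {lam_AO:.2f}")
print("classified as:", "additive" if abs(lam_AO) > abs(lam_IO) else "innovational")
```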

Journal ArticleDOI
TL;DR: A test of exponentiality is proposed which is consistent against alternatives with monotone failure rate.
Abstract: A test of exponentiality is proposed which is consistent against alternatives with monotone failure rate.

Journal ArticleDOI
TL;DR: In this paper, an explicit procedure is given to obtain the exact maximum likelihood estimates of the parameters in a regression model with ARMA time series errors with possibly nonconsecutive data.
Abstract: SUMMARY An explicit procedure is given to obtain the exact maximum likelihood estimates of the parameters in a regression model with ARMA time series errors with possibly nonconsecutive data. The method is based on an innovation transformation approach from which an explicit recursive procedure is derived for the efficient calculation of the exact likelihood function and associated derivatives. The innovations and associated derivatives are used to develop a modified Newton-Raphson procedure for computation of the estimates. A weighted nonlinear least squares interpretation of the estimator is also given. A numerical example is provided to illustrate the method.
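
The innovation-transformation idea can be sketched in its simplest case, complete data with AR(1) errors, where scaling the first observation by sqrt(1 - rho^2) and quasi-differencing the rest yields independent innovations and hence the exact likelihood, maximised here by a crude grid search over rho (the data are invented; the paper's procedure also covers general ARMA errors and nonconsecutive data):

```python
# Sketch of exact ML for regression with AR(1) errors via an innovation
# transformation: the Prais-Winsten transform turns the model into
# independent innovations, so the exact likelihood can be concentrated
# over (beta, sigma^2) and maximised over rho by a grid search.
import numpy as np

rng = np.random.default_rng(6)
n, rho_true, beta_true = 150, 0.6, np.array([2.0, -1.0])
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
u = np.zeros(n)
u[0] = rng.standard_normal() / np.sqrt(1 - rho_true ** 2)
for t in range(1, n):
    u[t] = rho_true * u[t - 1] + rng.standard_normal()
y = X @ beta_true + u

def concentrated_loglik(rho):
    # Innovation (Prais-Winsten) transform of y and X.
    w = np.sqrt(1 - rho ** 2)
    ys = np.concatenate([[w * y[0]], y[1:] - rho * y[:-1]])
    Xs = np.vstack([w * X[:1], X[1:] - rho * X[:-1]])
    beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    rss = ((ys - Xs @ beta) ** 2).sum()
    # Exact log likelihood with sigma^2 concentrated out.
    return 0.5 * np.log(1 - rho ** 2) - n / 2 * np.log(rss / n), beta

grid = np.linspace(-0.95, 0.95, 381)
lls = [concentrated_loglik(r)[0] for r in grid]
rho_hat = grid[int(np.argmax(lls))]
beta_hat = concentrated_loglik(rho_hat)[1]
print(f"rho_hat {rho_hat:.3f}, beta_hat {np.round(beta_hat, 3)}")
```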

Journal ArticleDOI
TL;DR: For a class of linear regression models for grouped and ungrouped data, a necessary and sufficient condition with an intuitively simple interpretation is obtained for the existence of the maximum likelihood estimator; concavity of the log likelihood alone does not guarantee that the MLE always exists.
Abstract: SUMMARY In general, concavity of the log likelihood alone does not imply that the MLE always exists. For a class of linear regression models for grouped and ungrouped data, a necessary and sufficient condition is obtained for the existence of the maximum likelihood estimator. This condition has an intuitively simple interpretation. Further, it turns out that there are similar necessary and sufficient conditions for the existence of maximum likelihood estimates for a number of other non-linear models, such as Cox's regression model. For a given set of data, these conditions may be verified by linear programming methods.

Journal ArticleDOI
TL;DR: In this article, a Bayesian approach through the use of predictive probabilities is presented and applied to a parametric model in which (Y, X) has a bivariate normal distribution.
Abstract: A Bayesian approach through the use of predictive probabilities is presented and applied to a parametric model in which (Y, X) has a bivariate normal distribution.

Journal ArticleDOI
TL;DR: In this paper, it is shown that if optimality is assessed in terms of the behaviour of the Fourier coefficients, then several very simple orthogonal series estimators are asymptotically optimal.
Abstract: It is shown that if optimality is assessed in terms of the behaviour of the Fourier coefficients, then several very simple orthogonal series estimators are asymptotically optimal.

Journal ArticleDOI
TL;DR: The method relies on the Cartesian tensorial nature of those cumulants of the log likelihood derivatives and should be particularly convenient in connection with statistical packages with structure similar to GLIM.
Abstract: SUMMARY We describe, for the numerical calculation of Bartlett adjustments, a method which may be of use when the cumulants of the log likelihood derivatives are easy to determine in one parametrization while the hypotheses to be tested are all linear in some other parametrization. The method relies on the Cartesian tensorial nature of those cumulants and should be particularly convenient in connection with statistical packages with structure similar to GLIM.

Journal ArticleDOI
TL;DR: In this article, a Bayesian approach is developed to the problem of comparing models for non-homogeneous Poisson processes.
Abstract: A Bayesian approach is developed to the problem of comparing models for non-homogeneous Poisson processes.

Journal ArticleDOI
TL;DR: In this paper, the Anderson-Darling statistic is added to tests of fit based on normalized spacings for the Weibull (or equivalently the extreme-value) distribution, and a Monte Carlo study of the power of the normal tests is given.
Abstract: SUMMARY Normalized spacings provide useful tests of fit for many suitably regular continuous distributions; attractive features of the tests are that they can be used with unknown parameters and also with samples which are censored (Type 2) on the left and/or right. A transformation of the spacings leads, under the null hypothesis, to a set of z-values in (0, 1); however, these are not uniformly distributed except for spacings from the exponential or uniform distributions. Statistics based on the mean or the median of the z-values have already been suggested for tests for the Weibull (or equivalently the extreme-value) distribution; we now add the Anderson-Darling statistic. Asymptotic theory of the test statistics is given in general, and specialized to the normal, logistic and extreme-value distributions. Monte Carlo results show the asymptotic points can be used for relatively small samples. Also, a Monte Carlo study on power of the normal tests is given, which shows the Anderson-Darling statistic to be powerful against a wide range of alternatives; the mean and median can be non-consistent or even biased.
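
For the exponential case, where the transformed spacings are exactly uniform, the construction can be sketched end to end: normalised spacings of an exponential sample are i.i.d. exponential, their scaled cumulative sums behave as uniform order statistics, and the Anderson-Darling statistic is computed from these z-values. No critical values are computed, and the sample is invented:

```python
# Sketch of a spacings-based test of fit for exponentiality (the case in
# which the transformed spacings are exactly uniform): the normalised
# spacings of an exponential sample are i.i.d. exponential, so their
# scaled cumulative sums behave as uniform order statistics, to which
# the Anderson-Darling statistic is applied.
import numpy as np

rng = np.random.default_rng(8)
x = np.sort(rng.exponential(scale=3.0, size=50))   # try rng.weibull(2.0, 50) too
n = len(x)

# Normalised spacings: D_i = (n - i + 1) (x_(i) - x_(i-1)), with x_(0) = 0.
gaps = np.diff(np.concatenate([[0.0], x]))
D = (n - np.arange(n)) * gaps

# z-values: scaled cumulative sums, uniform order statistics under the null.
z = np.cumsum(D)[:-1] / D.sum()

# Anderson-Darling statistic against the uniform distribution.
m = len(z)
i = np.arange(1, m + 1)
A2 = -m - ((2 * i - 1) * (np.log(z) + np.log(1 - z[::-1]))).mean()
print(f"Anderson-Darling statistic A^2 = {A2:.3f}")
```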

Journal ArticleDOI
TL;DR: In this article, the design effect of a statistic is defined as the ratio of its true variance under the given sample design to its variance had the sample been obtained by simple random sampling.
Abstract: In sample surveys, the design effect of a statistic is usually defined as the ratio of its true variance under the given sample design to its variance had the sample been obtained by simple random sampling. Empirical work suggests certain patterns for design effects of different types of statistics under different designs, but theoretical work explaining these patterns is limited. This paper obtains general theoretical results on the structure of design effects for a broad class of statistics under a two-stage sampling design. In particular, it discusses the relation between design effects of multivariate and of univariate statistics. This relation is of practical interest because it is of relevance to the imputation of standard errors for multivariate statistics, such as correlation coefficients or regression coefficients, using design effects of univariate statistics. The latter quantities are often routinely derived on completion of the survey. The former may be difficult to compute by standard procedures, either because of the absence of the necessary design information or because of software or degrees-of-freedom limitations.
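
The definition is easy to make concrete for the sample mean under two-stage sampling, where the classical approximation deff ≈ 1 + (b - 1)ρ holds for clusters of size b with intra-cluster correlation ρ. The design parameters below are invented:

```python
# The design effect in its simplest concrete case: the sample mean under
# two-stage (cluster) sampling, where deff is approximately
# 1 + (b - 1) * rho for clusters of size b and intra-cluster correlation
# rho. All design parameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(9)
n_clusters, b, rho = 400, 10, 0.1
sd_between = np.sqrt(rho)            # unit total variance, split so that
sd_within = np.sqrt(1 - rho)         # the intra-cluster correlation is rho

def sample_mean():
    cluster_effects = sd_between * rng.standard_normal(n_clusters)
    y = cluster_effects[:, None] + sd_within * rng.standard_normal((n_clusters, b))
    return y.mean()

means = np.array([sample_mean() for _ in range(4000)])
var_design = means.var()
var_srs = 1.0 / (n_clusters * b)      # variance of the mean under SRS
print(f"empirical deff {var_design / var_srs:.2f}, "
      f"approximation 1 + (b-1)*rho = {1 + (b - 1) * rho:.2f}")
```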

Journal ArticleDOI
TL;DR: In this paper, a simple linear regularization procedure is introduced, along with the special cases of constrained deconvolution, maximum entropy restoration and least-squares filtering, and optimal choices for the degree of smoothing are obtained for the case of low noise-to-signal ratios.
Abstract: SUMMARY The problem is considered of restoring a blurred and/or noisy image using various regularization prescriptions. Preliminary work concerns the invertibility of point spread functions and the construction of a stochastic model for images. A simple linear regularization procedure is introduced, as well as the special cases of constrained deconvolution, maximum entropy restoration and least-squares filtering. Optimal choices for the degree of smoothing are obtained for the case of low noise-to-signal ratios. Certain techniques, prevalent in the image-restoration literature for choosing the degree of smoothing, are shown to oversmooth, in a well-defined sense.
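
A minimal version of such a linear regularisation for a 1-D signal, assuming the point spread function is known exactly: restoration in the Fourier domain with F_hat = conj(H) G / (|H|^2 + alpha), where alpha sets the degree of smoothing. Signal, blur width, noise level and alpha are all invented:

```python
# A minimal linear regularisation of a blurred, noisy 1-D signal in the
# Fourier domain: restore with F_hat = conj(H) G / (|H|^2 + alpha), the
# Tikhonov/least-squares filter, where alpha controls the degree of
# smoothing. Signal, blur width, noise level and alpha are invented.
import numpy as np

rng = np.random.default_rng(10)
n = 256
t = np.arange(n)
signal = ((t > 80) & (t < 120)).astype(float) + 0.5 * ((t > 160) & (t < 200))

# Gaussian point spread function (circularly wrapped), known exactly.
psf = np.exp(-0.5 * (np.minimum(t, n - t) / 4.0) ** 2)
psf /= psf.sum()

H = np.fft.fft(psf)
blurred = np.fft.ifft(H * np.fft.fft(signal)).real
observed = blurred + 0.01 * rng.standard_normal(n)

alpha = 1e-3                                   # degree of smoothing (assumed)
G = np.fft.fft(observed)
restored = np.fft.ifft(np.conj(H) * G / (np.abs(H) ** 2 + alpha)).real

print(f"RMS error blurred  : {np.sqrt(((blurred - signal) ** 2).mean()):.4f}")
print(f"RMS error restored : {np.sqrt(((restored - signal) ** 2).mean()):.4f}")
```

Shrinking alpha sharpens the restoration but amplifies the noise; the paper's point is that common data-driven choices of this smoothing parameter tend to err on the side of oversmoothing.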

Journal ArticleDOI
TL;DR: The possibilities of maximum likelihood estimation are explored, with particular reference to the difficulties created by discontinuities of the likelihood function.
Abstract: The possibilities of maximum likelihood estimation are explored, with particular reference to the difficulties created by discontinuities of the likelihood function.

Journal ArticleDOI
TL;DR: In this article, a Bayesian decision-theoretic framework is used to develop influence measures, justified by the principles of decision theory, for regression fitting, with action space, loss function and a prior distribution for the unknown parameters.
Abstract: The identification of influential observations is a crucial phase of fitting a model. Because such observations have a strong impact on inferences or decisions based on the fit, an aberrant influential observation should be excluded from the data used to fit the model. On the other hand, if an influential observation arises from the same model underlying the rest of the data, it ought to be included. We address the characterization of influence measures, justified by the principles of decision theory. Texts by Belsley, Kuh, and Welsch (1980) and by Cook and Weisberg (1982) develop many measures of influence and survey others which have appeared in the literature. Most measures are motivated in a frequentist perspective, but several Bayesian measures have also been developed. See, for example, Pettit and Smith (1985) and Johnson and Geisser (1982). Armed with the ability to compute these measures one can generate reams of computer output when fitting a regression model to even a small data set. Apart from the discussion in Section 4.4 of Cook and Weisberg (1982) comparing four types of influence diagnostics, little has been said about the relative merits of different measures. Consequently, data analysts lack practical guidance in parsimoniously choosing appropriate influence measures when fitting a regression model to data. We use a Bayesian decision-theoretic framework to develop measures of influence. In Section 2, the regression-fitting problem is set up as a formal decision problem, complete with action space, loss function and a prior distribution for the unknown parameters. We then define influence measures in Section 3 based on the change in Bayes risk resulting from an observation's inclusion in or exclusion from the analysis. To express regression fitting as a decision problem, it must be simplified, perhaps to an unrealistic extent. Still, our study provides insight for defining influence measures in ideal settings. An observation's influence will depend on the purpose and context of the regression fitting, as characterized by the loss function and the prior distribution. Two examples are presented. For some cases of the loss and prior, these measures are equivalent to ones already proposed in the literature, and in' other cases new measures result.
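
A toy case-deletion computation under a conjugate normal regression with squared-error loss: the Bayes action is the posterior mean, and one natural influence summary is the precision-weighted shift in that action when an observation is removed. This is an illustrative measure in the spirit of the paper, not its exact proposal; prior, sigma and the data are invented:

```python
# Toy case-deletion influence in a conjugate normal regression: with
# beta ~ N(0, tau^2 I), known sigma^2 and squared-error loss, the Bayes
# action is the posterior mean, and one natural influence summary is the
# precision-weighted shift in that action when observation i is deleted.
# Illustrative only; prior, sigma and data are invented.
import numpy as np

rng = np.random.default_rng(11)
n, p, sigma, tau = 50, 2, 1.0, 10.0
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + sigma * rng.standard_normal(n)
y[0] += 8.0                         # make the first observation aberrant

lam = sigma ** 2 / tau ** 2         # ridge weight implied by the prior

def posterior_mean(X, y):
    A = X.T @ X + lam * np.eye(p)
    return np.linalg.solve(A, X.T @ y)

m_full = posterior_mean(X, y)
A_full = X.T @ X + lam * np.eye(p)

influence = np.empty(n)
for i in range(n):
    keep = np.arange(n) != i
    d = m_full - posterior_mean(X[keep], y[keep])
    influence[i] = d @ A_full @ d / sigma ** 2   # Bayesian analogue of Cook's D

print("most influential observations:", np.argsort(influence)[::-1][:3])
```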

Journal ArticleDOI
TL;DR: The robustness of the predictors of Copas (1983) to departures from the assumed distribution is studied.
Abstract: The robustness of the predictors of Copas (1983) to departures from the assumed distribution is studied.

Journal ArticleDOI
TL;DR: In this article, results are obtained relating the information matrix, its inverse, the estimator of the treatment effects and the adjusted treatment sum of squares to those of the component designs.
Abstract: Results are obtained relating the information matrix, its inverse, the estimator of the treatment effects and the adjusted treatment sum of squares to those of the component designs.

Journal ArticleDOI
TL;DR: In this paper, the authors consider incomplete block designs with equal block size and equal number of replicates such that balance is achieved for each main effect and interaction in a factorial experiment; in a complete block design, by contrast, each block is split into as many plots as there are treatments and all the treatments are assigned to the plots of each block.
Abstract: In agricultural field experiments it is desirable to divide the field into blocks of the same fertility to control soil heterogeneity. In a complete block design each block is split into as many plots as the number of treatments and all the treatments are assigned to the plots of each block. When a large number of treatment combinations are to be tested in a factorial experiment, it is undesirable to use complete blocks because in that case each block may not be homogeneous in its fertility. Therefore incomplete block designs are commonly used in factorial experiments. Balanced factorial designs are incomplete block designs with equal block size and equal number of replicates such that balance is achieved for each main effect and interaction. Bose (1947) used finite Euclidean geometries to construct balanced designs with all the main effects of the treatments unconfounded with the block effects. Nair and Rao (1948) showed that partially balanced incomplete block designs with an association scheme, called the extended group divisible scheme by Hinkelmann (1964), are balanced factorial designs. They also gave the statistical analysis and some constructions of two-factor balanced designs. Other methods of constructing balanced factorial designs are given by Rao (1956), Kishen and Srivastava (1959), and Kishen and Tyagi (1964). More recently, Puri and Nigam (1976, 1978) developed the theory of balanced factorial designs with varying replicates and varying block sizes. Gupta (1983) discussed some methods for constructing block designs having orthogonal factorial structure, i.e., such that the treatment sum of squares adjusted for blocks can be partitioned orthogonally into sums of squares due to main effects and due to interactions. Most of the designs listed in his table do not achieve balance in main effects or interactions, but the advantage is that they require only a small number of replications. Kshirsagar (1966) showed that to achieve balance in all the main effects and interactions an incomplete block factorial design is necessarily an extended group divisible partially balanced design. We shall consider only two-factor balanced designs with equal replication and equal block size. Two-factor balanced designs are called partially balanced incomplete block designs with rectangular association scheme by Vartak (1959). He used the Kronecker product of two

Journal ArticleDOI
TL;DR: A general definition is given, and a unified approach to the analysis of a restricted class of such designs is presented.
Abstract: A general definition is given, and a unified approach to the analysis of a restricted class of such designs is presented.

Journal ArticleDOI
TL;DR: In this article, a simple stochastic model is used to investigate how the performance of a multi-server system is affected by the proportion of demands that are prepared to wait for service, a question motivated by repeat-last-number, auto-repeat and ring-back-when-free facilities.
Abstract: SUMMARY New developments in telecommunications technology are likely to lead to substantial increases in repeat-attempt rates and this may well adversely affect the performance of the telephone network. One aspect of this complex problem is considered in this paper, where we explore how the characteristics of a multi-server system are affected by the proportion of demands that are prepared to wait for service. Consider a complex multi-server system which under normal conditions immediately accepts any demand placed on it, but which is occasionally so busy that additional demands must either wait or go away and try again at a later time. How sensitive is the operation of the system to the proportion of demands which are prepared to wait? This paper explores the question with an analytical study of a very simple stochastic model. The question is motivated by new developments in telecommunications technology. The increasing use of repeat-last-number, auto-repeat and ring-back-when-free facilities is likely to influence telephone network performance since it may affect: Songhurst (1984) has undertaken a simulation study of circuit group blocking based on a complex and realistic network model, and has used this to examine the implications of changes in repeat-attempt behaviour and to propose restrictions that should apply to auto-repeat facilities in telephone instruments. The simulation study has so far been restricted to circuit group blocking as a cause of call failure: it thus leaves aside factor (i) and studies the joint consequences of factors (ii) and (iii). An aim of this paper is to investigate factor (iii) alone by considering the effect on a loss system of allowing calls to wait, with the accepted traffic held constant. We show that, even with accepted traffic held constant, allowing calls to wait has a deleterious effect on performance. The effect is small until the proportion of subscribers able to wait exceeds 50%, but increases rapidly as the proportion approaches 100%. The variability of line usage is increased and a greater number of first attempts fail, a phenomenon which is more marked the larger the circuit group. This paper does not suggest that new facilities will leave the level of accepted traffic unchanged. Indeed it seems likely that new facilities will increase the level of carried traffic, particularly within the peak periods of the daily cycle. This is a difficult factor to investigate: it
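
The question can be explored numerically with a crude event-driven simulation of a c-server system in which a blocked demand waits with probability p_wait and is otherwise lost; all rates are invented, and unlike the paper's analysis the accepted traffic is not held constant as p_wait varies:

```python
# A minimal continuous-time simulation of the question posed above: a
# c-server system with Poisson arrivals where a blocked demand waits with
# probability p_wait and is otherwise lost. Rates and p_wait are invented;
# the paper's analytical model holds accepted traffic constant, which
# this crude sketch does not attempt.
import numpy as np

rng = np.random.default_rng(12)

def first_attempt_failure_rate(c, lam, mu, p_wait, n_events=200_000):
    busy, queue, failures, arrivals = 0, 0, 0, 0
    for _ in range(n_events):
        # Race between the next arrival and the next service completion.
        total_rate = lam + busy * mu
        if rng.random() < lam / total_rate:          # arrival
            arrivals += 1
            if busy < c:
                busy += 1
            else:
                failures += 1                        # first attempt fails
                if rng.random() < p_wait:
                    queue += 1                       # prepared to wait
        else:                                        # service completion
            busy -= 1
            if queue > 0:
                queue -= 1
                busy += 1
    return failures / arrivals

for p_wait in (0.0, 0.5, 0.9, 1.0):
    rate = first_attempt_failure_rate(c=10, lam=8.0, mu=1.0, p_wait=p_wait)
    print(f"p_wait = {p_wait:.1f}: fraction of first attempts failing = {rate:.3f}")
```

Consistent with the abstract's summary, first-attempt failures grow slowly at moderate p_wait and rise sharply as it approaches 1, since waiting demands keep the servers saturated.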