
Showing papers on "Likelihood principle" published in 1998


Journal ArticleDOI
TL;DR: In this paper, it is shown that confidence regions enjoying the same convergence rates as those found for empirical likelihood can be obtained for the entire range of values of the Cressie-Read parameter, including -1 (maximum entropy), 0 (empirical likelihood), and 1 (Pearson's X2).
Abstract: The method of empirical likelihood can be viewed as one of allocating probabilities to an n-cell contingency table so as to minimise a goodness-of-fit criterion. It is shown that, when the Cressie-Read power-divergence statistic is used as the criterion, confidence regions enjoying the same convergence rates as those found for empirical likelihood can be obtained for the entire range of values of the Cressie-Read parameter λ, including -1 (maximum entropy), 0 (empirical likelihood), and 1 (Pearson's X2). It is noted that, in the power-divergence family, empirical likelihood is the only member which is Bartlett-correctable. However, simulation results suggest that, for the mean, using a scaled F distribution yields more accurate coverage levels for moderate sample sizes.
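The power-divergence family described in the abstract is easy to sketch numerically. The following is a minimal illustration (the function name and the example distributions are my own, not from the paper); the three named members arise as the λ = 1 case and the λ → 0 and λ → -1 limits:

```python
import numpy as np

def cressie_read(p, q, lam):
    """Cressie-Read power divergence between discrete distributions p and q.

    lam = 1   -> Pearson's X^2 form
    lam -> 0  -> the likelihood-ratio (empirical-likelihood) form
    lam -> -1 -> the reverse-KL (maximum-entropy) form
    """
    p, q = np.asarray(p, float), np.asarray(q, float)
    if np.isclose(lam, 0.0):          # limiting case lam -> 0
        return 2.0 * np.sum(p * np.log(p / q))
    if np.isclose(lam, -1.0):         # limiting case lam -> -1
        return 2.0 * np.sum(q * np.log(q / p))
    return 2.0 / (lam * (lam + 1.0)) * np.sum(p * ((p / q) ** lam - 1.0))

# the lam = 1 member reproduces Pearson's X^2 exactly
p = np.array([0.2, 0.3, 0.5])
q = np.full(3, 1.0 / 3.0)
pearson = np.sum((p - q) ** 2 / q)
```

The family is continuous in λ, so values near 0 approach the likelihood-ratio form smoothly.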

180 citations


Journal ArticleDOI
TL;DR: This paper reports the results of an extensive Monte Carlo study of the distribution of the likelihood ratio test statistic using the value of the restricted likelihood for testing random components in the linear mixed-effects model when the number of fixed components remains constant.
Abstract: This paper reports the results of an extensive Monte Carlo study of the distribution of the likelihood ratio test statistic using the value of the restricted likelihood for testing random components in the linear mixed-effects model when the number of fixed components remains constant. The distribution of this test statistic is considered when one additional random component is added. The distribution of the likelihood ratio test statistic computed using restricted maximum likelihood is compared to the likelihood ratio test statistic computed from the usual maximum likelihood. The rejection proportion is computed under the null hypothesis using a mixture of chi-square distributions. The restricted likelihood ratio statistic agrees reasonably well with the maximum likelihood test statistic. For the parameter combinations considered, the rejection proportions are, in most cases, less than the nominal 5% level for both test statistics, though, on average, the rejection proportions for REML are closer to the nominal level than for ML.
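When testing one additional variance component, the null distribution referred to above is commonly taken as an equal mixture of χ²₀ and χ²₁ (the boundary problem). A small sketch of the resulting critical value, assuming SciPy is available (the helper function is my own, not the authors' code):

```python
from scipy.stats import chi2

def mixture_critical_value(alpha, df=1):
    """Critical value c for a 0.5*chi2_0 + 0.5*chi2_df null mixture.

    chi2_0 is a point mass at zero, so P(T > c) = 0.5 * P(chi2_df > c);
    solving P(T > c) = alpha gives c = chi2.ppf(1 - 2*alpha, df).
    """
    if not 0.0 < alpha < 0.5:
        raise ValueError("alpha must be in (0, 0.5)")
    return chi2.ppf(1.0 - 2.0 * alpha, df)

# at the 5% level the mixture threshold (about 2.71) is well below the
# plain chi2_1 threshold (about 3.84): ignoring the mixture is conservative,
# consistent with the rejection proportions falling below the nominal level
c_mix = mixture_critical_value(0.05)
c_naive = chi2.ppf(0.95, 1)
```

This illustrates why using a plain χ²₁ reference gives rejection rates below the nominal 5%.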

150 citations


Journal ArticleDOI
TL;DR: In this article, the authors provide a set of conditions by which they can relatively easily prove the asymptotic posterior normality under quite general situations of possible nonstationarity.
Abstract: Asymptotic normality of the posterior is a well understood result for dynamic as well as nondynamic models, based on sets of abstract conditions whose actual applicability is hardly known, especially in the case of nonstationarity. In this paper we provide a set of conditions by which we can relatively easily prove asymptotic posterior normality under quite general situations of possible nonstationarity. This result reinforces and generalizes the point of Sims and Uhlig (1991) that inference based on the likelihood principle, explained by Berger and Wolpert (1988), will be unchanged regardless of whether the data are generated by a stationary process or by a unit root process. On the other hand, our conditions allow us to generalize the Bayesian information criterion, known as the Schwarz criterion, to the case of possible nonstationarity. In addition, we show that consistency of the maximum likelihood estimator, together with some minor additional assumptions, is sufficient for asymptotic posterior normality; asymptotic normality of the estimator itself is not needed.

77 citations


Journal ArticleDOI
TL;DR: In this paper, the authors suggest that the reason p values are so heavily used is because they provide information concerning the strength of the evidence provided by the experiment and demonstrate that, under some circumstances, the p value can be interpreted in the same manner as the likelihood ratio.
Abstract: According to almost any approach to statistical inference, attained significance levels, or p values, have little value. Despite this consensus among statistical experts, p values are usually reported extensively in research articles in a manner that invites misinterpretation. In the present article, I suggest that the reason p values are so heavily used is because they provide information concerning the strength of the evidence provided by the experiment. In some typical hypothesis testing situations, researchers may be interested in the relative adequacy of two different theoretical accounts: one that predicts no difference across conditions, and another that predicts some difference. The appropriate statistic for this kind of comparison is the likelihood ratio, P(D|M0)/P(D|M1), where M0 and M1 are the two theoretical accounts. Large values of the likelihood ratio provide evidence that M0 is a better account, whereas small values indicate that M1 is better. I demonstrate that, under some circumstances, the p value can be interpreted in the same manner as the likelihood ratio. In particular, for Z, t, and sign tests, the likelihood ratio is an approximately linear function of the p value, with a slope between 2 and 3. Thus, researchers may report p values in scientific communications because they are a proxy for the likelihood ratio and provide the readers with information about the strength of the evidence that is not otherwise available.
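The near-proportionality claimed for the Z test can be checked directly. In the sketch below (my own construction, not the article's code) M1 places its mean at the observed z, so the likelihood ratio reduces to the density ratio exp(-z²/2); this is one simple choice of M1, and it gives LR/p ratios in roughly the 2-to-3 range the article reports:

```python
from scipy.stats import norm

def likelihood_ratio(z):
    """P(D|M0)/P(D|M1) for a Z statistic, taking M1's mean at the observed z."""
    return norm.pdf(z) / norm.pdf(0.0)   # equals exp(-z**2 / 2)

def p_value(z):
    """Two-sided p value for an observed Z statistic."""
    return 2.0 * norm.sf(abs(z))

# across values of z commonly seen in practice, LR/p stays roughly constant
ratios = [likelihood_ratio(z) / p_value(z) for z in (1.5, 2.0, 2.5)]
```

The roughly constant ratio is what lets the p value serve as a proxy for the likelihood ratio.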

43 citations


Journal ArticleDOI
TL;DR: In this article, a semiparametric hazard model with parametrized time but general covariate dependency is formulated and analyzed inside the framework of counting process theory, and a profile likelihood principle is introduced for estimation of the parameters: the resulting estimator is $n^{1/2}$-consistent, asymptotically normal and achieves the semi-parametric efficiency bound.
Abstract: A semiparametric hazard model with parametrized time but general covariate dependency is formulated and analyzed inside the framework of counting process theory. A profile likelihood principle is introduced for estimation of the parameters: the resulting estimator is $n^{1/2}$-consistent, asymptotically normal and achieves the semiparametric efficiency bound. An estimation procedure for the nonparametric part is also given and its asymptotic properties are derived. We provide an application to mortality data.

41 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that there is no loss of efficiency in using a dual or empirical likelihood model, to second order, compared to either the artificial likelihood or the true likelihood test.
Abstract: Empirical and dual likelihood are two of a growing array of artificial or approximate likelihoods currently in use in statistics. A question of major interest focuses on the performance of these new constructs relative to ordinary parametric likelihoods. We consider here two criteria, local power and conditional properties. Looking at tests of a scalar parameter, we show that there is no loss of efficiency in using a dual or empirical likelihood model, to second order. To third order, either the artificial likelihood or the true likelihood test could be more efficient; this is determined by the Fisher information, the distance between the null and the alternative hypotheses, and the statistical curvature of the models. Conditionality properties are assessed by comparing empirical likelihood with quasilikelihood. If there is overdispersion present, or more generally an unknown amount of dispersion, then there is little difference in the ancillaries for conditional inference based on empirical likelihood or on other possible likelihoods, such as a quasilikelihood. If the amount of dispersion is known, however, it is correct to base inference on the quasilikelihood.

36 citations


Journal ArticleDOI
A. J. Watkins
TL;DR: In this article, a framework is presented for deriving the expectations associated with maximum likelihood estimation in the two-parameter Weibull distribution; these expectations involve terms appearing in the derivatives of the Weibull probability density function.
Abstract: This short paper outlines a framework for deriving the expectations associated with maximum likelihood estimation in the two-parameter Weibull distribution; these expectations involve terms appearing in the derivatives of the Weibull probability density function. This framework allows known results to be presented in a unified manner. For standard likelihood considerations, interest centres on the first- and second-order derivatives; however, we also give the expectations of third derivatives, which appear in more advanced likelihood-based results.
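The two-parameter Weibull MLE to which these expectations apply can be computed numerically; a sketch using SciPy's generic fitter on simulated data (an illustration of the estimation problem, not the paper's analytic framework):

```python
import numpy as np
from scipy.stats import weibull_min

# simulate from a two-parameter Weibull: shape 2, scale 3, location fixed at 0
rng = np.random.default_rng(0)
x = weibull_min.rvs(2.0, scale=3.0, size=2000, random_state=rng)

# maximum likelihood fit with the location pinned at zero (two-parameter model)
shape_hat, _, scale_hat = weibull_min.fit(x, floc=0)
```

The expectations of the first and second log-likelihood derivatives derived in the paper are what give the asymptotic standard errors of `shape_hat` and `scale_hat`.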

32 citations


Journal ArticleDOI
TL;DR: In this article, a least squares version of the empirical likelihood is proposed to overcome the computational difficulty of conventional empirical likelihood, where additional constraints are imposed to reflect additional and sought-after features of statistical analysis.
Abstract: In conventional empirical likelihood, there is exactly one structural constraint for every parameter. In some circumstances, additional constraints are imposed to reflect additional and sought-after features of statistical analysis. Such an augmented scheme uses the implicit power of empirical likelihood to produce very natural adaptive statistical methods, free of arbitrary tuning parameter choices, and does have good asymptotic properties. The price to be paid for such good properties is in extra computational difficulty. To overcome the computational difficulty, we propose a ‘least-squares’ version of the empirical likelihood. The method is illustrated by application to the case of combined empirical likelihood for the mean and the median in one sample location inference.

29 citations


Journal ArticleDOI
TL;DR: In this article, the problem of testing normal mean vector when the observations are missing from subsets of components is considered and three simple exact tests are proposed as alternatives to the traditional likelihood ratio test.
Abstract: The problem of testing normal mean vector when the observations are missing from subsets of components is considered. For a data matrix with a monotone pattern, three simple exact tests are proposed as alternatives to the traditional likelihood ratio test. Numerical power comparisons between the proposed tests and the likelihood ratio test suggest that one of the proposed tests is indeed comparable to the likelihood ratio test and the other two tests perform better than the likelihood ratio test over a part of the parameter space. The results are extended to a nonmonotone pattern and illustrated using an example.

26 citations


Book ChapterDOI
01 Jan 1998
TL;DR: In this paper, a nonparametric model is described and the baseline survival function is estimated by maximum likelihood using the full likelihood; the connection with Cox's partial likelihood estimator is investigated, and the difference between full and partial likelihood estimation of the parameter is explained.
Abstract: In a general nonparametric model, an explicit solution is given for the baseline survival function. The connection of the presented estimator with Cox's partial likelihood estimator is investigated. The asymptotic variance of the parameter is calculated using a model of random permutations. The results are applied to the survival analysis of cancer patients. This paper describes the nonparametric model and gives the maximum likelihood estimation of the baseline survival function. The model is parameterized and the parameters are estimated with the help of the maximum likelihood method using the full likelihood. It also describes the difference between the full and partial likelihood estimation of the parameter. The expression 'likelihood estimation' needs explanation in the nonparametric case, because the likelihood method is applicable when a dominated class of measures depends on some parameter.

23 citations


Journal ArticleDOI
TL;DR: In this paper, the authors derive the conditional profile restricted likelihood (CPRL) function, together with likelihood ratio (LR), Lagrange multiplier (LM), and Wald tests of the parameters in the covariance matrix of linear regression disturbances, based on different modified likelihood functions.

Journal ArticleDOI
TL;DR: The authors discuss one such situation--that where the number of contributors to the mixture is in dispute, and a way of dealing with the problem is presented.

Journal ArticleDOI
Tore Schweder
TL;DR: An alternative methodology based on the likelihood principle is presented and compared to the Bayesian, and it is argued that the likelihood method often is advantageous in the scientific context.

Proceedings ArticleDOI
16 Aug 1998
TL;DR: A test carried out on the training set is proposed to validate the model choice: the selected model is required to give a calibrated prediction, i.e. if it predicts the frequencies of the training sample reasonably well, the penalty term adopted is accepted; otherwise it is relaxed.
Abstract: Semiparametric density estimation using Gaussian mixtures is a powerful means that can give as good performance as a nonparametric estimator, without its heavy computational burden. A maximum penalised likelihood principle was previously proposed by the authors (1996) for selecting the best approximating mixture for an unknown density function. We propose here a test carried out on the training set to validate the model choice. The selected model is required to give a calibrated prediction, i.e. if it predicts the frequencies of the training sample reasonably well, the penalty term adopted is accepted; otherwise it is relaxed.

Journal ArticleDOI
TL;DR: In this paper, it was shown that the likelihood ratio test can be performed with a χ2 distribution with f degrees of freedom as asymptotic law, where f equals the number of Euclidean parameters fixed under the hypothesis.
Abstract: The correlated gamma-frailty model is a generalization of Cox's proportional hazards model, which allows for correlation between individuals within the same group. The nonparametric maximum likelihood estimator in this model has previously been studied by Murphy (1994, 1995) and Parner (1998). Here we show that the likelihood ratio test can be performed with a χ2 distribution with f degrees of freedom as asymptotic law, where f equals the number of Euclidean parameters fixed under the hypothesis. As a side effect we also obtain a new proof of the asymptotic normality and efficiency of the Euclidean component of the maximum likelihood parameter. Finally, we show how standard errors can be computed.

Journal ArticleDOI
Chuanhai Liu
TL;DR: In this paper, a method is given for computing the asymptotic covariance matrix from a likelihood function with a known maximum likelihood estimate, using one-dimensional conditional distributions whose sample spaces span the sample space of the joint distribution; the method is applied to a linear mixed-effects model.
Abstract: This paper provides a method for computing the asymptotic covariance matrix from a likelihood function with known maximum likelihood estimate of the parameters. Philosophically, the basic idea is to assume that the likelihood function should be well approximated by a normal density when asymptotic results about the maximum likelihood estimate are applied for statistical inference. Technically, the method makes use of two facts: the information for a one-dimensional parameter can be well computed when the loglikelihood is approximately quadratic over the range corresponding to a small positive confidence interval; and the covariance matrix of a normal distribution can be obtained from its one-dimensional conditional distributions whose sample spaces span the sample space of the joint distribution. We illustrate the method with its application to a linear mixed-effects model.
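A generic stand-in for this idea is to recover the observed information from the curvature of the log-likelihood near the MLE and invert it. The finite-difference construction below is my own sketch of that target quantity, not the paper's conditional-distribution method:

```python
import numpy as np

def observed_information(loglik, theta_hat, h=1e-4):
    """Negative Hessian of loglik at theta_hat via central differences.

    The inverse of this matrix is the usual asymptotic covariance of the
    MLE, which is the quantity the paper's one-dimensional slices target.
    """
    theta_hat = np.asarray(theta_hat, float)
    k = len(theta_hat)
    H = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            ei = np.zeros(k); ei[i] = h
            ej = np.zeros(k); ej[j] = h
            H[i, j] = (loglik(theta_hat + ei + ej) - loglik(theta_hat + ei - ej)
                       - loglik(theta_hat - ei + ej) + loglik(theta_hat - ei - ej)) / (4 * h * h)
    return -H

# sanity check on a normal mean with known unit variance: information = n
x = np.arange(10.0)

def loglik(th):
    return -0.5 * np.sum((x - th[0]) ** 2)

info = observed_information(loglik, [x.mean()])
cov = np.linalg.inv(info)   # asymptotic variance 1/n
```

Because the log-likelihood here is exactly quadratic, the finite-difference curvature matches the analytic information almost to machine precision.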

Journal ArticleDOI
TL;DR: In this article, a limit theory was developed for the maximum likelihood estimator, based on a Gaussian likelihood, of the moving average parameter θ in an MA(1) model when θ is equal to or close to 1.

Journal ArticleDOI
TL;DR: In this paper, the authors show that, under certain regularity conditions, constructing likelihood ratio confidence regions using a bootstrap estimate of the distribution of the likelihood ratio statistic leads to regions which have a coverage error of O(n^-2), which is the same as that achieved using a Bartlett-corrected likelihood ratio statistic.
Abstract: In this paper, we show that, under certain regularity conditions, constructing likelihood ratio confidence regions using a bootstrap estimate of the distribution of the likelihood ratio statistic, instead of the usual chi-squared approximation, leads to regions which have a coverage error of O(n^-2), which is the same as that achieved using a Bartlett-corrected likelihood ratio statistic. We use the bootstrap method to assess the uncertainty associated with dose-response parameters that arise in models for the Japanese atomic bomb survivors data.
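The calibration idea can be illustrated on a toy normal-mean problem. This parametric-bootstrap sketch is my own illustration (the paper's application is to dose-response models); the bootstrap quantile of the LR statistic replaces the χ²₁ cutoff of 3.84:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(1.0, 2.0, size=30)

def lr_stat(sample, mu0):
    """-2 log likelihood ratio for H0: mean = mu0 (normal, unknown variance)."""
    n = len(sample)
    s2_alt = np.mean((sample - sample.mean()) ** 2)  # MLE variance, unrestricted
    s2_null = np.mean((sample - mu0) ** 2)           # MLE variance under H0
    return n * np.log(s2_null / s2_alt)

# parametric bootstrap of the null distribution of the LR statistic:
# resample from the fitted model and recompute the statistic at its own MLE
mu_hat, sd_hat = x.mean(), x.std()
boot = np.array([lr_stat(rng.normal(mu_hat, sd_hat, size=len(x)), mu_hat)
                 for _ in range(4000)])
crit_boot = np.quantile(boot, 0.95)   # bootstrap 95% cutoff vs chi2_1's 3.84
```

For small n the bootstrap cutoff sits a little above 3.84, mimicking the effect of a Bartlett correction.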

Journal ArticleDOI
TL;DR: In this article, a factor analysis model with two normally distributed observations and one factor is studied; when the errors have equal variance, the maximum likelihood estimate of the factor loading is given in closed form, and exact and approximate distributions of the estimate are considered.
Abstract: We study a factor analysis model with two normally distributed observations and one factor. In the case when the errors have equal variance, the maximum likelihood estimate of the factor loading is given in closed form. Exact and approximate distributions of the maximum likelihood estimate are considered. The exact distribution function is given in a complex form that involves the incomplete Beta function. Approximations to the distribution function are given for the cases of large sample sizes and small error variances. The accuracy of the approximations is discussed.

01 Jan 1998
TL;DR: In this paper, a decision-theoretic foundation of the maximum likelihood estimation method is provided, and it is shown that the estimator is finite-sample efficient for the parameter, with respect to the mean squared error of the scores and within a large class C of estimates; for some of these models, C includes the unbiased as well as the equivariant estimators of the parameter.
Abstract: A decision-theoretic foundation of the maximum likelihood estimation method is provided. In regular parametric models, the maximum likelihood estimator is shown to be finite-sample efficient for the parameter, with respect to the mean squared error of the scores and within a large class C of estimates; for some of these models, C includes the unbiased as well as the equivariant estimators of the parameter. This result may be used as a tool to prove that the maximum likelihood estimator is optimal within C for the squared error loss without recourse to completeness, but also to provide good estimates of the log-likelihood. Among other applications, a finite-sample property of Rao's test is also revealed that is not shared by either Wald's test or the likelihood ratio test.

Journal ArticleDOI
TL;DR: In this paper, the modified profile likelihood is used in regression analysis of the log odds ratio in a set of 2 × 2 tables, and it is shown that the asymptotic bias of the maximum modified profile likelihood estimator of a common odds ratio is negligible for odds ratios less than 5.
Abstract: We evaluate the use of the modified profile likelihood in regression analysis of the log odds ratio in a set of 2 × 2 tables. It has been shown that unconditional maximum likelihood inference is biased when there are many nuisance parameters, while exact conditional maximum likelihood inference, though accurate, can be infeasible in large problems. The modified profile likelihood provides an estimator of the odds ratio that closely approximates the exact conditional maximum likelihood estimator. It is demonstrated that in the situation where the number of tables increases to infinity the corrected maximum modified profile likelihood estimator is consistent to first order, while the unconditional maximum likelihood estimator is known to be badly inconsistent. Numerical work demonstrates that the asymptotic bias of the maximum modified profile likelihood estimator of a common odds ratio is negligible for odds ratios less than 5.
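For context, a classical non-likelihood estimator of a common odds ratio across 2 × 2 tables is the Mantel-Haenszel estimator, which is also robust to many nuisance parameters. The sketch below is my own code and implements Mantel-Haenszel, not the paper's modified profile likelihood:

```python
def mantel_haenszel_or(tables):
    """Mantel-Haenszel common odds ratio over 2x2 tables ((a, b), (c, d)),
    where a, b are exposed/unexposed cases and c, d exposed/unexposed controls."""
    num = sum(a * d / (a + b + c + d) for (a, b), (c, d) in tables)
    den = sum(b * c / (a + b + c + d) for (a, b), (c, d) in tables)
    return num / den

# a single table reduces to the ordinary cross-product ratio a*d / (b*c)
or_single = mantel_haenszel_or([((10, 5), (4, 8))])          # 10*8 / (5*4) = 4
or_pooled = mantel_haenszel_or([((10, 5), (4, 8)),
                                ((6, 3), (2, 4))])           # both tables have OR 4
```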

Journal ArticleDOI
01 May 1998
TL;DR: In this paper, the influence of observations on the goodness-of-fit test in maximum likelihood factor analysis is investigated by using the local influence method; under an appropriate perturbation the test statistic forms a surface, and the main diagnostics are the maximum slope of the perturbed surface and the direction vector corresponding to the curvature.
Abstract: The influence of observations on the goodness-of-fit test in maximum likelihood factor analysis is investigated by using the local influence method. Under an appropriate perturbation the test statistic forms a surface. One of the main diagnostics is the maximum slope of the perturbed surface; the other is the direction vector corresponding to the curvature. These influence measures provide information about jointly influential observations as well as individually influential observations.

Journal ArticleDOI
TL;DR: In this paper, it was demonstrated that an approximation developed in Davies (1977) for computing significance levels for likelihood ratio statistics may be used effectively in the context of linkage analysis, where additional parameters appear in the likelihood.
Abstract: In likelihood-based tests of genetic linkage, computing significance levels for the lod score is straightforward when there are no parameters other than the recombination fraction in the likelihood: the tail of the distribution of the lod score is asymptotically one half the tail of the distribution of a χ2 variable with one degree of freedom. However, when additional parameters appear in the likelihood, the χ2 approximation can be quite inaccurate. The source of the difficulty is that the likelihood is mathematically independent of the additional parameters under the null hypothesis of no linkage. Here, it is demonstrated that an approximation developed in Davies (1977) for computing significance levels for likelihood ratio statistics may be used effectively in the context of linkage analysis.


Proceedings ArticleDOI
16 Dec 1998
TL;DR: In this paper, it was shown that the analytic center is a maximum likelihood estimator for a class of probability distributions and not a bounded error estimator, as was originally proposed for identification in the bounded error setting.
Abstract: The purpose of this paper is to show that the analytic center, originally proposed for identification in a bounded error setting, is a maximum likelihood estimator for a class of probability distributions.

Journal ArticleDOI
TL;DR: It is concluded that the model-based inferences provide a practical alternative to more common affected-sibling-pair tests when investigators have some knowledge about the mode of inheritance of a disease and that the methods may sometimes be useful for comparing the genetic relative risk with environmental relative risks.
Abstract: Using genetic marker data from affected sibling pairs, we study likelihood-based linkage analysis under quasi-recessive, quasi-dominant, and general single-locus models. We use an epidemiologic parameterization under a model where the marker locus is closely linked to the putative disease susceptibility gene. This model and parameterization allow inferences about the relative risk associated with the susceptible genotype. We base inferences on approximate likelihoods that focus on the affected siblings in the sibship and, using these likelihoods, we derive closed-form maximum likelihood estimators for model parameters and closed-form likelihood ratio statistics for tests that the relative risk associated with the susceptible genotype is one. Under the general single-locus model, our likelihood ratio test is the same as the iteratively computed triangle test proposed by Holmans (1993, American Journal of Human Genetics 52, 362-374) for the case where marker identity-by-descent is known; our derivation gives a closed form for the test statistic. We present quartiles of the distribution of parameter estimates and critical values for the exact null distribution of our likelihood ratio test statistics; we also give large-sample approximations to their null distributions. We show that the powers of our likelihood ratio tests exceed the powers of more commonly used nonparametric affected-sibling-pair tests when the data meet the inheritance model assumptions used to derive the test; we also show that our tests' powers are robust to violation of model assumptions. We conclude that our model-based inferences provide a practical alternative to more common affected-sibling-pair tests when investigators have some knowledge about the mode of inheritance of a disease and that our methods may sometimes be useful for comparing the genetic relative risk with environmental relative risks.