Journal ArticleDOI

Bayesian semiparametric inference for the accelerated failure‐time model

TL;DR: In this paper, a Markov-chain Monte Carlo (MCMC) method is developed to compute the features of the posterior distribution of a log-linear model, and a model selection method for obtaining a more parsimonious set of predictors is studied.
Abstract: Bayesian semiparametric inference is considered for a loglinear model. This model consists of a parametric component for the regression coefficients and a nonparametric component for the unknown error distribution. Bayesian analysis is studied for the case of a parametric prior on the regression coefficients and a mixture-of-Dirichlet-processes prior on the unknown error distribution. A Markov-chain Monte Carlo (MCMC) method is developed to compute the features of the posterior distribution. A model selection method for obtaining a more parsimonious set of predictors is studied. The method adds indicator variables to the regression equation. The set of indicator variables represents all the possible subsets to be considered. An MCMC method is developed to search stochastically for the best subset. These procedures are applied to two examples, one with censored data.
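The model-selection device described above, indicator variables that switch predictors in and out while an MCMC chain explores the space of subsets, can be illustrated with a small sketch. The sketch below is not the authors' algorithm: it uses a plain normal error model rather than the mixture-of-Dirichlet-processes error, assumes a flat prior over subsets, and the conjugate prior and helper names (`log_marginal`, `stochastic_subset_search`) are illustrative choices.

```python
# Minimal sketch of stochastic subset search with 0/1 indicator variables.
# Assumptions: normal errors, N(0, tau2 I) prior on coefficients, flat prior
# over subsets -- a simplification of the paper's semiparametric setup.
import numpy as np

def log_marginal(y, X, gamma, tau2=10.0, sigma2=1.0):
    """Log marginal likelihood of y under the subset gamma, with the
    regression coefficients integrated out under a conjugate normal prior."""
    Xg = X[:, gamma.astype(bool)]
    n = len(y)
    cov = sigma2 * np.eye(n) + tau2 * Xg @ Xg.T
    _, logdet = np.linalg.slogdet(cov)
    quad = y @ np.linalg.solve(cov, y)
    return -0.5 * (n * np.log(2 * np.pi) + logdet + quad)

def stochastic_subset_search(y, X, n_iter=2000, rng=None):
    rng = np.random.default_rng(rng)
    p = X.shape[1]
    gamma = np.ones(p)                      # start from the full model
    current = log_marginal(y, X, gamma)
    visits = {}
    for _ in range(n_iter):
        j = rng.integers(p)                 # propose flipping one indicator
        prop = gamma.copy()
        prop[j] = 1 - prop[j]
        cand = log_marginal(y, X, prop)
        if np.log(rng.uniform()) < cand - current:   # Metropolis accept step
            gamma, current = prop, cand
        key = tuple(gamma.astype(int))
        visits[key] = visits.get(key, 0) + 1
    return max(visits, key=visits.get)      # most frequently visited subset
```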
Citations
Book
07 Dec 2004
TL;DR: This chapter reviews Bayesian advances in survival analysis and discusses the various semiparametric modeling techniques that are now commonly used, with a focus on proportional hazards models.
Abstract: Great strides in the analysis of survival data using Bayesian methods have been made in the past ten years due to advances in Bayesian computation and the feasibility of such methods. In this chapter, we review Bayesian advances in survival analysis and discuss the various semiparametric modeling techniques that are now commonly used. We review parametric and semiparametric approaches to Bayesian survival analysis, with a focus on proportional hazards models. Reference to other types of models are also given. Keywords: beta process; Cox model; Dirichlet process; gamma process; Gibbs sampling; piecewise exponential model; Weibull model

858 citations

Reference EntryDOI
15 Jul 2005
TL;DR: This paper reviews parametric and semiparametric approaches to Bayesian survival analysis, with a focus on proportional hazards models; references to other types of models, such as the Weibull model, are also given, along with computational tools such as Gibbs sampling.
Abstract: Great strides in the analysis of survival data using Bayesian methods have been made in the past ten years due to advances in Bayesian computation and the feasibility of such methods. In this chapter, we review Bayesian advances in survival analysis and discuss the various semiparametric modeling techniques that are now commonly used. We review parametric and semiparametric approaches to Bayesian survival analysis, with a focus on proportional hazards models. Reference to other types of models are also given. Keywords: beta process; Cox model; Dirichlet process; gamma process; Gibbs sampling; piecewise exponential model; Weibull model

702 citations

Journal ArticleDOI
TL;DR: For each inference problem, relevant nonparametric Bayesian models and approaches are reviewed, including Dirichlet process (DP) models and variations, Polya trees, wavelet-based models, neural network models, spline regression, CART, dependent DP models, and model validation with DP and Polya tree extensions of parametric models.
Abstract: We review the current state of nonparametric Bayesian inference. The discussion follows a list of important statistical inference problems, including density estimation, regression, survival analysis, hierarchical models and model validation. For each inference problem we review relevant nonparametric Bayesian models and approaches including Dirichlet process (DP) models and variations, Polya trees, wavelet based models, neural network models, spline regression, CART, dependent DP models and model validation with DP and Polya tree extensions of parametric models.

416 citations


Cites methods from "Bayesian semiparametric inference f..."

  • ...Models based on DP priors appear in Johnson and Christensen (1989) and Kuo and Mallick (1997)....


Journal Article
TL;DR: In this paper, the current state of nonparametric Bayesian inference is reviewed, and a list of important statistical inference problems, including density estimation, regression, survival analysis, hierarchical models, and model validation, is discussed.
Abstract: We review the current state of nonparametric Bayesian inference. The discussion follows a list of important statistical inference problems, including density estimation, regression, survival analysis, hierarchical models and model validation. For each inference problem we review relevant nonparametric Bayesian models and approaches including Dirichlet process (DP) models and variations, Polya trees, wavelet based models, neural network models, spline regression, CART, dependent DP models and model validation with DP and Polya tree extensions of parametric models.

413 citations

Journal ArticleDOI
TL;DR: A model that describes dependence across random distributions in an analysis of variance (ANOVA)-type fashion is proposed; it can be rewritten as a DP mixture of ANOVA models and so inherits all the computational advantages of standard DP mixture models.
Abstract: We consider dependent nonparametric models for related random probability distributions. For example, the random distributions might be indexed by a categorical covariate indicating the treatment levels in a clinical trial and might represent random effects distributions under the respective treatment combinations. We propose a model that describes dependence across random distributions in an analysis of variance (ANOVA)-type fashion. We define a probability model in such a way that marginally each random measure follows a Dirichlet process (DP) and use the dependent Dirichlet process to define the desired dependence across the related random measures. The resulting probability model can alternatively be described as a mixture of ANOVA models with a DP prior on the unknown mixing measure. The main features of the proposed approach are ease of interpretation and computational simplicity. Because the model follows the standard ANOVA structure, interpretation and inference parallel conventions for ANOVA models...
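A rough sense of the construction, one common set of stick-breaking weights with atoms that vary across treatment levels through additive effects, can be conveyed with a short sketch. This is a hedged illustration under assumed choices (truncation level `H`, standard normal effect priors, the function name `anova_ddp_sample`), not the paper's exact specification.

```python
# Hedged sketch: related random measures G_1,...,G_J built from one set of
# stick-breaking weights, with atoms shifted by ANOVA-like treatment effects.
# Truncation level, priors and normal effects are illustrative assumptions.
import numpy as np

def anova_ddp_sample(n_levels=3, H=50, concentration=1.0, rng=None):
    """Return common weights plus one atom per (component, treatment level)."""
    rng = np.random.default_rng(rng)
    v = rng.beta(1.0, concentration, size=H)                    # stick fractions
    w = v * np.cumprod(np.concatenate(([1.0], 1 - v[:-1])))     # common weights
    overall = rng.normal(0.0, 1.0, size=H)                      # overall effect
    level_eff = rng.normal(0.0, 1.0, size=(H, n_levels))        # treatment offsets
    atoms = overall[:, None] + level_eff                        # ANOVA-structured atoms
    return w, atoms   # column j of atoms, with weights w, defines a truncated G_j
```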

367 citations


Cites methods from "Bayesian semiparametric inference f..."

  • ...Kuo and Mallick (1997) included a regression in an accelerated failure time model....


  • ...Such models are used, for example, in Gelfand and Kuo (1991), Doss (1994), and Kuo and Mallick (1997)....


References
Journal ArticleDOI
TL;DR: In this article, a modified Monte Carlo integration over configuration space is used to investigate the properties of a two-dimensional rigid-sphere system of interacting individual molecules, and the results are compared to the free volume equation of state and to a four-term virial coefficient expansion.
Abstract: A general method, suitable for fast computing machines, for investigating such properties as equations of state for substances consisting of interacting individual molecules is described. The method consists of a modified Monte Carlo integration over configuration space. Results for the two‐dimensional rigid‐sphere system have been obtained on the Los Alamos MANIAC and are presented here. These results are compared to the free volume equation of state and to a four‐term virial coefficient expansion.
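The move described in the abstract, displace one rigid sphere at random and accept the new configuration only if it introduces no overlap, can be sketched as follows. Box size, disk diameter, step size, and the periodic boundary treatment are illustrative assumptions, not values from the paper; the starting configuration is assumed to be overlap-free.

```python
# Sketch of the Metropolis-style move for a 2-D rigid-sphere (hard-disk) system:
# for a hard-core potential the acceptance probability is 1 if the displaced
# disk overlaps no other disk and 0 otherwise. Parameters are illustrative.
import numpy as np

def metropolis_hard_disks(pos, box=10.0, diam=1.0, step=0.3, n_sweeps=1000, rng=None):
    """pos: (n, 2) array of disk centres in a periodic square box of side `box`."""
    rng = np.random.default_rng(rng)
    n = len(pos)
    for _ in range(n_sweeps * n):
        i = rng.integers(n)
        trial = (pos[i] + rng.uniform(-step, step, size=2)) % box  # periodic wrap
        d = pos - trial
        d -= box * np.round(d / box)            # minimum-image separation
        dist2 = np.sum(d**2, axis=1)
        dist2[i] = np.inf                       # ignore the disk's own position
        if np.all(dist2 >= diam**2):            # accept only if no overlap
            pos[i] = trial
    return pos
```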

35,161 citations

Journal ArticleDOI
TL;DR: The focus is on applied inference for Bayesian posterior distributions in real problems, which often tend toward normality after transformations and marginalization, and the results are derived as normal-theory approximations to exact Bayesian inference, conditional on the observed simulations.
Abstract: The Gibbs sampler, the algorithm of Metropolis and similar iterative simulation methods are potentially very helpful for summarizing multivariate distributions. Used naively, however, iterative simulation can give misleading answers. Our methods are simple and generally applicable to the output of any iterative simulation; they are designed for researchers primarily interested in the science underlying the data and models they are analyzing, rather than for researchers interested in the probability theory underlying the iterative simulations themselves. Our recommended strategy is to use several independent sequences, with starting points sampled from an overdispersed distribution. At each step of the iterative simulation, we obtain, for each univariate estimand of interest, a distributional estimate and an estimate of how much sharper the distributional estimate might become if the simulations were continued indefinitely. Because our focus is on applied inference for Bayesian posterior distributions in real problems, which often tend toward normality after transformations and marginalization, we derive our results as normal-theory approximations to exact Bayesian inference, conditional on the observed simulations. The methods are illustrated on a random-effects mixture model applied to experimental measurements of reaction times of normal and schizophrenic patients.
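The multiple-sequence idea summarized above, comparing within-chain and between-chain variability to judge how much sharper the estimate could become with further simulation, is commonly summarized by the potential scale reduction factor. A minimal sketch for one scalar estimand follows; the function name and the plain (unsplit, unnormalized) variant are simplifying assumptions.

```python
# Minimal sketch of the potential scale reduction factor (R-hat) for one
# scalar estimand, computed from m chains of length n started from
# overdispersed initial points. Values near 1 suggest the chains have mixed.
import numpy as np

def gelman_rubin_rhat(chains):
    """chains: array-like of shape (m, n) = m independent sequences of length n."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)            # between-sequence variance
    W = chains.var(axis=1, ddof=1).mean()      # within-sequence variance
    var_plus = (n - 1) / n * W + B / n         # pooled variance estimate
    return np.sqrt(var_plus / W)               # shrinks toward 1 as chains mix
```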

13,884 citations


Journal ArticleDOI
TL;DR: In this article, a class of prior distributions, called Dirichlet process priors, is proposed for nonparametric problems; with these priors, the treatment of many nonparametric statistical problems may be carried out, yielding results that are comparable to the classical theory.
Abstract: The Bayesian approach to statistical problems, though fruitful in many ways, has been rather unsuccessful in treating nonparametric problems. This is due primarily to the difficulty in finding workable prior distributions on the parameter space, which in nonparametric problems is taken to be a set of probability distributions on a given sample space. There are two desirable properties of a prior distribution for nonparametric problems. (I) The support of the prior distribution should be large--with respect to some suitable topology on the space of probability distributions on the sample space. (II) Posterior distributions given a sample of observations from the true probability distribution should be manageable analytically. These properties are antagonistic in the sense that one may be obtained at the expense of the other. This paper presents a class of prior distributions, called Dirichlet process priors, broad in the sense of (I), for which (II) is realized, and for which treatment of many nonparametric statistical problems may be carried out, yielding results that are comparable to the classical theory. In Section 2, we review the properties of the Dirichlet distribution needed for the description of the Dirichlet process given in Section 3. Briefly, this process may be described as follows. Let $\mathscr{X}$ be a space and $\mathscr{A}$ a $\sigma$-field of subsets, and let $\alpha$ be a finite non-null measure on $(\mathscr{X}, \mathscr{A})$. Then a stochastic process $P$, indexed by elements $A$ of $\mathscr{A}$, is said to be a Dirichlet process on $(\mathscr{X}, \mathscr{A})$ with parameter $\alpha$ if for any measurable partition $(A_1, \cdots, A_k)$ of $\mathscr{X}$, the random vector $(P(A_1), \cdots, P(A_k))$ has a Dirichlet distribution with parameter $(\alpha(A_1), \cdots, \alpha(A_k))$. $P$ may be considered a random probability measure on $(\mathscr{X}, \mathscr{A})$. The main theorem states that if $P$ is a Dirichlet process on $(\mathscr{X}, \mathscr{A})$ with parameter $\alpha$, and if $X_1, \cdots, X_n$ is a sample from $P$, then the posterior distribution of $P$ given $X_1, \cdots, X_n$ is also a Dirichlet process on $(\mathscr{X}, \mathscr{A})$ with parameter $\alpha + \sum^n_1 \delta_{x_i}$, where $\delta_x$ denotes the measure giving mass one to the point $x$. In Section 4, an alternative definition of the Dirichlet process is given. This definition exhibits a version of the Dirichlet process that gives probability one to the set of discrete probability measures on $(\mathscr{X}, \mathscr{A})$. This is in contrast to Dubins and Freedman [2], whose methods for choosing a distribution function on the interval [0, 1] lead with probability one to singular continuous distributions. Methods of choosing a distribution function on [0, 1] that with probability one is absolutely continuous have been described by Kraft [7]. The general method of choosing a distribution function on [0, 1], described in Section 2 of Kraft and van Eeden [10], can of course be used to define the Dirichlet process on [0, 1]. Special mention must be made of the papers of Freedman and Fabius. Freedman [5] defines a notion of tailfree for a distribution on the set of all probability measures on a countable space $\mathscr{X}$. For a tailfree prior, the posterior distribution given a sample from the true probability measure may be fairly easily computed.
Fabius [3] extends the notion of tailfree to the case where $\mathscr{X}$ is the unit interval [0, 1], but it is clear his extension may be made to cover quite general $\mathscr{X}$. With such an extension, the Dirichlet process would be a special case of a tailfree distribution for which the posterior distribution has a particularly simple form. There are disadvantages to the fact that $P$ chosen by a Dirichlet process is discrete with probability one. These appear mainly because in sampling from a $P$ chosen by a Dirichlet process, we expect eventually to see one observation exactly equal to another. For example, consider the goodness-of-fit problem of testing the hypothesis $H_0$ that a distribution on the interval [0, 1] is uniform. If on the alternative hypothesis we place a Dirichlet process prior with parameter $\alpha$ itself a uniform measure on [0, 1], and if we are given a sample of size $n \geqq 2$, the only nontrivial nonrandomized Bayes rule is to reject $H_0$ if and only if two or more of the observations are exactly equal. This is really a test of the hypothesis that a distribution is continuous against the hypothesis that it is discrete. Thus, there is still a need for a prior that chooses a continuous distribution with probability one and yet satisfies properties (I) and (II). Some applications in which the possible doubling up of the values of the observations plays no essential role are presented in Section 5. These include the estimation of a distribution function, of a mean, of quantiles, of a variance and of a covariance. A two-sample problem is considered in which the Mann-Whitney statistic, equivalent to the rank-sum statistic, appears naturally. A decision theoretic upper tolerance limit for a quantile is also treated. Finally, a hypothesis testing problem concerning a quantile is shown to yield the sign test. In each of these problems, useful ways of combining prior information with the statistical observations appear. Other applications exist. In his Ph. D. dissertation [1], Charles Antoniak finds a need to consider mixtures of Dirichlet processes. He treats several problems, including the estimation of a mixing distribution, bio-assay, empirical Bayes problems, and discrimination problems.
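The posterior result quoted above, that the updated parameter is $\alpha + \sum^n_1 \delta_{x_i}$, implies the familiar Pólya-urn predictive scheme: a new draw either comes fresh from the base measure or repeats an earlier observation, which is exactly why a $P$ chosen by a Dirichlet process is discrete with probability one. A minimal sketch, assuming a total mass `M` and a standard normal base measure purely for illustration:

```python
# Sketch of the Polya-urn predictive scheme implied by the posterior theorem:
# the (n+1)-st draw is new from G0 with probability M/(M+n), and otherwise
# repeats one of the first n draws uniformly at random (producing ties).
import numpy as np

def dp_polya_urn(n_draws, M=1.0, base_sampler=np.random.standard_normal, rng=None):
    rng = np.random.default_rng(rng)
    draws = []
    for n in range(n_draws):
        if rng.uniform() < M / (M + n):       # fresh value from the base measure G0
            draws.append(base_sampler())
        else:                                 # repeat a previously seen value
            draws.append(draws[rng.integers(n)])
    return np.array(draws)
```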

5,033 citations